Hacker News | pornel's comments

GC is perfectly fine for ~99% of applications, which is why nearly every language created in the last few decades adopted it, and thereby eliminated itself from being a direct C/C++ competitor.

Lack of GC isn't a requirement for most programs. However, it is a requirement for a language meant to actually replace C and C++, because these two are mostly used in the remaining fraction of programs and libraries where any GC or any fat runtime is undesirable. Even where some GC could be made to work, it makes it a tough sell for C/C++ users.


The article says surprisingly little about the state-of-the-art in realistic cloud rendering.

It's now feasible to pretty accurately simulate light propagation through the atmosphere and the clouds. The blue skies and red sunsets can be calculated based on physical properties.

Raymarching can be used to accurately render any shape of cloud from any viewpoint. It's amazing how a couple of relatively simple formulas (the Beer-Lambert law, the Henyey-Greenstein phase function) can be used to realistically render almost everything about clouds: the silver linings, darkened bodies, crepuscular rays, various appearances from thin fog to white powder, and colorful gradients at dusk and dawn.
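A minimal sketch of those two formulas in Rust (the anisotropy value is illustrative; a real renderer evaluates these inside a raymarching loop with density samples):

    /// Beer-Lambert law: fraction of light surviving a path of length
    /// `distance` through a medium with extinction coefficient `sigma_t`.
    fn transmittance(sigma_t: f64, distance: f64) -> f64 {
        (-sigma_t * distance).exp()
    }

    /// Henyey-Greenstein phase function: how strongly light scatters at
    /// a given angle to the light direction. `g` in (-1, 1) controls
    /// anisotropy; clouds scatter strongly forward, roughly g ~ 0.8.
    fn henyey_greenstein(g: f64, cos_theta: f64) -> f64 {
        let denom = 1.0 + g * g - 2.0 * g * cos_theta;
        (1.0 - g * g) / (4.0 * std::f64::consts::PI * denom.powf(1.5))
    }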

https://youtu.be/GOee6lcEbWg?t=1422 https://youtu.be/mGHCOOnI5aE?t=1640 https://www.youtube.com/watch?v=Qj_tK_mdRcA


C can add a whole alphabet of str?cpy functions, and they will all have issues, because the language lacks the expressive power to build a safe abstraction. It's all ad-hoc juggling of buffers without a reliably tracked size.
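For contrast, a minimal sketch (in Rust, purely illustrative) of what a reliably tracked size looks like: the length travels with the pointer, so the bounds check can't be forgotten or handed the wrong size, and overflow becomes a forced, visible error:

    /// Copy `src` into `dst`, refusing (rather than overflowing) when it
    /// doesn't fit. Slices carry their own lengths.
    fn checked_copy(dst: &mut [u8], src: &[u8]) -> Result<(), ()> {
        if src.len() > dst.len() {
            return Err(()); // the caller must handle this
        }
        dst[..src.len()].copy_from_slice(src);
        Ok(())
    }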


The language is expressive enough to have a good string library. It has string.h instead for historical reasons. When it was introduced, the requirements for a string library were very different from today's.


Null-terminated strings are a relic that should have been recycled the day after they were created.

Any attempt to add more letters to the strx functions is just polishing the turd.


The types that C knows about are the types that the assembly knows about. Strings, especially Unicode strings, aren't something that the raw processor knows about (as far as I know). At the machine level, it is all ad-hoc juggling of buffers without a reliably tracked size, until you impose constraints like only using length-prefixed protocols and structures. Where "only" is difficult for humans to achieve. One slip-up and wham.


C with its notion of an object, TBAA, and pointer provenance is already disconnected from what the machine is doing.

The portable assembly myth is a recipe for getting Undefined Behavior. C is built on an abstract machine described in the C spec, not any particular machine code or assembly.

Buffers (objects) in C already have an identity and a semantically important length. C just lacks features to keep track of this explicitly and enforce error handling.

Languages exist to provide a more useful abstraction on top of the machine, not to naively mirror everything even where it is unhelpful and dangerous. For example, BCPL did not have pointer types, only integers, because they were the same thing for the CPU. That was a mess that C (mostly) fixed by creating "fictional" types that didn't exist at the assembly level.


The abstract machine of C is defined with careful understanding of what real CPUs do.


In need of a copy of the ISO C printout?


The people who define C's abstract machine are well aware of what real hardware is like. The standard of course doesn't mention real hardware, but what is in there is guided by real hardware behaviour. They add to the spec when a change would aid real implementations.


How does AVX512 guide ISO C?


The committee members have been aware of SIMD for a long time and have been asked about it. So far they have either not agreed, or seen no need because autovectorization has shown much promise without it. (Both of the above are true, though not always to the same people.)

Multi-core is where languages have had to change, because the language model of 1970 wasn't good enough.


It was an example among many others.

How does (FPGA, HiLow, Harvard, CUDA, MPI, ...) guide ISO C?


How should they? In some cases they have decided that isn't where they want C to go; in others the model of 1970 is still good enough; and in others they are being slow (possibly intentionally, to avoid making a mistake).


So C isn't about being designed close to the hardware after all.


It is designed with a careful understanding of real hardware. However, that is not close to any particular hardware.


You must be thinking of C++? There are no objects in C, just structs, which are just a little bit of organization of contiguous memory. C exists to make writing portable CPU-level software easier than assembler. It was astonishingly successful at this niche; many more people could write printer drivers. While pointer types may not formally exist in assembly, the variety of addressing modes using registers or locations that are also integers has a natural resonance with pointer and array types.

I would say C mirrors precisely everything, even where it is unhelpful and dangerous. The spirit is captured in the Hole Hawg article: http://www.team.net/mjb/hawg.html

It is the same sort of fun one has with self modifying code (JIT compilers) or setting 1 to have a value of 2 in Python.

ed: https://en.cppreference.com/w/c/language/object is what is being referred to. I'm still pretty sure in the 80s and 90s people thought of and used C as a portable low-level language, which is why things like Python and Linux were written in C.


C has objects in the sense in which the abstract C machine is defined by the ISO C standard; nothing to do with OOP.


Assembly only knows about raw bytes, nothing else.


Depends on the assembly, but even most (all?) RISC instruction sets know about words (and probably half-words too) in addition to bytes.


Pairs, quads, and octets of bytes.


There's also vector instructions.


Operating on streams of bytes, defined by registers.


The context was this comment:

> Depends on the assembly, but even most (all?) RISC instruction sets know about words (and probably half-words too) in addition to bytes.

Of course, you could define words and half-words in terms of bytes, too. Just as you can do with vectors.

And many vector instructions operate more on eg streams of words than streams of bytes.


Nope, the context was:

"The types that C knows about are the types that the assembly knows about."


I don't get how each file system can have a custom lifecycle for inodes, but still use the same functions for inode lifecycle management, apparently with different semantics? That sounds like the opposite of an abstraction layer, if the same function must be used in different ways depending on implementation details.

If the lifecycle of inodes is filesystem-specific, it should be managed via filesystem-specific functions.


>> I don't get how each file system can have a custom lifecycle for inodes, but still use the same functions for inode lifecycle management, apparently with different semantics?

I had the same question. They're trying to understand (or even document) all the C APIs in order to do the Rust work. It sounds like collecting all that information might lead to some [WTFs and] refactoring so questions like this don't come up in the first place, and that would be a good thing.


If you haven't seen it before, you might find this useful: https://www.kernel.org/doc/html/latest/filesystems/vfs.html

It's an overview of the VFS layer, which is how they do all the filesystem-specific stuff while maintaining a consistent interface from the kernel.


I understood it as: they're working to abstract as much as is generally and widely possible in the VFS layer, but there will still be (many?) edge cases that don't fit and will need to be handled in FS-specific layers. Perhaps the inode lifecycle was just a starting point for discussion?


I assume it's supposed to work by having the compiler track the lifetime of the inodes. The compiler is expected to help with ephemeral references (the file system still has to store the link count to disk).


> but still use the same functions for inode lifecycle management

I'm not an expert by any means, but I'm somewhat knowledgeable: there are different functions that can be used to create inodes and then insert them into the cache. `iget_locked()`, which is focused on here, is a particular pattern for doing it, but not every FS uses it, for one reason or another (or doesn't use it in every situation). Ex: FAT doesn't use it because the inode numbers get made up and the FS maintains its own mapping of FAT position to inodes. There are also file systems like `proc` which never cache their inode objects (I'm pretty sure that's the case, I don't claim to understand proc :P )

The inode objects themselves still have the same state flow regardless of where they come from, AFAIK, so from a consumer perspective the usage of the `inode` doesn't change. It's only the creation and internal handling of the inode objects by the FS layer that depends on what the FS needs.
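As a rough sketch of that lookup-or-create shape (illustrative Rust, not the actual kernel or Rust-for-Linux API):

    use std::collections::HashMap;
    use std::sync::Arc;

    // An inode starts out "new" and must be filled in by the FS before
    // consumers use it - roughly the `iget_locked()` pattern. FSes that
    // don't cache (like proc) would simply skip the map.
    struct Inode { ino: u64, initialized: bool }

    struct InodeCache { map: HashMap<u64, Arc<Inode>> }

    impl InodeCache {
        fn get_or_create(&mut self, ino: u64) -> Arc<Inode> {
            self.map
                .entry(ino)
                .or_insert_with(|| Arc::new(Inode { ino, initialized: false }))
                .clone()
        }
    }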


The suggestions are purely about file paths, and don't affect the code. It's like advising "symlink __init__.py as __myproject_init__.py". The author opens files via search, and doesn't want to have multiple files with the same name.

Declaring dependencies in the workspace ensures that the same version is used by every library you're writing. Versions are generally deduplicated anyway, but a workspace makes it easier to bump versions and configure defaults in one place.
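For example (crate and member names illustrative), the version is written once in the workspace root and inherited by members:

    # Workspace root Cargo.toml
    [workspace]
    members = ["lib-a", "lib-b"]

    [workspace.dependencies]
    serde = "1.0"

    # Each member's Cargo.toml then inherits it:
    # [dependencies]
    # serde = { workspace = true }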


In the UK, roads entering a roundabout are often curved sharply towards the direction of traffic on the roundabout, so that it's more like merging lanes than entering a junction, and collisions are more likely to be sides hitting sides than a t-bone.


For Rust users there's a significant difference:

* It's a Cargo package, which is trivial to add to a project. Pure-Rust projects are easier to build cross-platform.

* It exports a safe Rust interface. It has configurable levels of thread safety, which are protected from misuse at compile time.

The point isn't that C++ can match performance, but that you don't have to use C++, and still get the performance, plus other niceties.

This is "is there anything specific to C++ that assembly can't match in performance?" one step removed.


I had expected that to be true. You just never know if perhaps Rust compilers have some more advanced/modern tricks that can only be accessed easily by writing in Rust, without writing assembly directly.


There is a trick in truly exclusive references (marked noalias in LLVM). C++ doesn't even have the lesser form of C's restrict pointers. However, a truly performance-focused C or C++ library would tweak the code to get the desired optimizations one way or another.

A more nebulous Rust perf thing is the ability to rely on the compiler to check lifetimes and immutability/exclusivity of pointers. This allows using fine-grained multithreading, even with 3rd-party code, without the worry that it's going to cause heisenbugs. It allows library APIs to work with temporary complex references that would be footguns otherwise (e.g. prefer string_view instead of string; don't copy inputs defensively, because it's known they can't be mutated or freed even by a broken caller).
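A tiny illustration of the exclusivity guarantee (a sketch; the actual codegen depends on the optimizer):

    // `a` is shared and `b` is exclusive, so the compiler knows they
    // can't alias: `*a` may be kept in a register across the writes to
    // `*b` instead of being reloaded. Getting the same guarantee in C
    // requires `restrict`; standard C++ has no equivalent.
    fn add_twice(a: &i32, b: &mut i32) {
        *b += *a;
        *b += *a; // no reload of `*a` needed
    }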


> C++ doesn't even have the lesser form of C restrict pointers.

Standard C++ doesn't but `noalias` is available in basically every major compiler (including the more niche embedded toolchains).


And they're all extremely buggy, to the point where Rust has disabled and re-enabled it many times over, as bugs are constantly discovered in LLVM, because Rust is its only major user.


This sounds like they're unlocking battery reserve capacity to be freely usable.

NMC batteries wear out quicker if they're fully discharged or kept fully charged, so automakers typically keep an extra 3-5% of the battery hidden from the user to prolong the battery's life (sort of like the extra cells in SSDs).

If the unlock doesn't shorten the battery warranty, then it may be justified to charge for it.


Are they transparent that this is what they will be doing? And that the money is for them to accept liability for the risk of a battery which is degrading faster?

>automakers typically keep an extra 3-5% of the battery hidden from the user

Even 5% isn't 30 miles.


The optimal usage of the battery is actually 80%. I think it's very possible that they only charge to 90% originally.


It sounds like this:

> When it cancelled the SR model, Tesla also stated that the batteries in those models were actually bigger than advertised, and that it planned to offer software unlocks that would add 40-60 miles of range, depending on your battery cells, for $1,500-$2,000, as soon as it had regulatory approval to do so.

> And now it looks like those upgrades are ready to go, as certain Model Y owners have started seeing an “Energy Boost” upgrade available in their Tesla app.

The only thing that sucks is that the folks without the unlock are carrying around the extra mass of the larger battery for nothing.


Not necessarily; economies of scale mean "use as few SKUs as possible" if operational expenses approach raw-material costs.

Giving buyers extra battery volume at low marginal cost for future sales potential is definitely possible.


Range is such an important differentiator for EVs that I doubt they'd opt for a lower one if they could have a higher one for marginal cost, at no risk to reliability/longevity of the car.


The answer depends on a still-unresolved question: whether RFL can build usable, safe, zero-overhead abstractions around kernel APIs. This work isn't complete, and it's hard to say whether the current issues stem from immature integration, or are a sign of fundamental limitations.

Rust was able to give safety guarantees where other languages couldn't by forbidding certain code patterns that complicate or break static analysis (ambiguous ownership, unrestricted mutations). Rust also has its own arbitrary design choices, such as reserving the right to memcpy structs to a different address.
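For instance (an illustrative sketch), that memcpy right is why self-referential structures, which are common in kernel code, need special handling such as `Pin` in Rust:

    // Rust may move (memcpy) any value to a new address. A struct
    // holding a pointer into itself would be left dangling by such a
    // move, so this layout needs `Pin` (or indirection) to be sound.
    struct SelfRef {
        data: [u8; 16],
        // After `let moved = original;` this would still point at the
        // old, now-invalid location of `data`.
        cursor: *const u8,
    }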

For Rust code written from scratch these limitations are not a big deal. Rust programs pick architectures and design patterns that work with and not against the language.

But Linux never had to worry about having statically known ownership of pointers. It could do whatever it wanted with uninitialized memory. It could play with pointers however it wanted.

So RFL needs to bend over backwards to support design patterns that are not idiomatic Rust. It remains to be seen where the compromise will end up (less safe Rust APIs? More complex ones? Not zero-overhead? Maybe Linux will change its APIs to fit Rust, or maybe Rust will change to help RFL).


The GPL doesn't apply/doesn't have to be agreed to when the usage is allowed by copyright law in another way. The GPL can't override copyright exceptions like fair use (details vary by jurisdiction, but the principle is the same everywhere).

Even the license itself states it's optional, and you don't have to agree to it (if you don't, you get the copyright law's defaults).

The author of the article is a former member of the Pirate Party and the EU Parliament, so they have expertise in copyright law.


I would say that the Pirate Party has expertise in nothing apart from perhaps protecting Internet freedoms.

So the same persons that supported Napster and the Pirate Bay now want to circumvent copyright for open source software.

An unholy alliance, but the recent comments from some Microsoft brass about everything on the Web being freeware seem to indicate that these are the talking points that Microsoft and its new allies will put out.


In this article, Reda explains the current copyright laws in the EU, not a hypothetical policy of the Pirate Party. They're not a member of the PP any more AFAIK.

I expect that people professionally dedicated to copyright reform are very familiar with it, regardless of which way they want to reform it.

The copyright laws were written before generative AI existed, so they may not be adequate or fair in the new reality, but that's the current state anyway. As Reda notes, the law is not specific enough to draw a distinction between collecting and processing data for search engines (which may be using ML for retrieval) and using the same data for LLMs.


If the Web is freeware... I wonder what options remain for licensed online information.


Content gating behind login screens. Scraping content behind a login screen could constitute a contract violation and would give rise to a lawsuit independent of copyright.

