
I don't have any experience with ARM, but from what I've seen people write, isn't 32-bit ARM discontinued after v7?

Their motivation is explained in the first post of the series [1].

[1] https://www.grisp.org/blog/posts/2025-06-23-jit-arm32.1#why-...


There's still a huge embedded market!

Plenty of microcontrollers have a single-digit number of Cortex-M cores and memory/flash counted in the megabytes. It'll be decades until that market reaches the multi-gigabyte point, so why bother wasting a whole bunch of memory on 64-bit pointers?

I'm not quite sure why you'd want to run Erlang on it, but the hardware exists.


> I'm not quite sure why you'd want to run Erlang on it, but the hardware exists.

Erlang was invented before IoT was a thing, to facilitate highly reliable distributed computing for telecommunications. It makes perfect sense to adapt it to driving fleets of cheap IoT devices.


No, it's a supported ISA on most v8-a and I believe all v8-m implementations.

It's the only ISA on Cortex-A32, but I'm not sure if any mainstream chips were ever produced with that core.

(Depending, of course, on whether you want to get specific about Arm/Thumb/Thumb2; I lumped them all together above.)


That does not mean ARM32 implementations and uses are stopping any time soon. AFAIK Arm hasn't even obsoleted ARMv6, although Linux distributions are starting to drop it.

Doesn't mean that machines won't be built with other chips for a considerable time.

That said, if you're putting something like Erlang on a chip, aren't you likely to want the extra memory (and performance) of a slightly newer SoC?


Take a look at their products. Seems like they run bare-metal Erlang on embedded devices.

See https://www.enforcementtracker.com/ and sort by amount: these are not small companies, and the amounts aren't exactly trivial either, with a mechanism to get bigger if ignored.

Meta appears 4 times in the top 10, with a total of about €2.25bn in fines. That sounds like a lot, but it's only 1.6% of their revenue. As a cost of doing business that's probably acceptable to the Meta board. It'd cost them more to do things properly, so there's little incentive to do so.

The fines will increase if they continue breaking the rules, so there is incentive.

The fines are calculated to be enough to pad the coffers of the EU bureaucracy while being low enough that FB doesn't really care, to keep this racket going.

Besides the fines being able to grow, that 1.6% is of global revenue; as a share of their EU revenue alone it would be a much bigger fraction. And their margins aren't 100%.

You can have a separate connection pool for 'cache' requests. You shouldn't have too many PG connections open anyway, on the order of the number of CPUs.
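
A minimal sketch of that split in C with libpq (pool size, names, and the conninfo handling are my illustrative assumptions, not anything from the thread):

    /* Sketch: a small fixed-size pool dedicated to "cache" queries,
       kept separate from the main application pool. */
    #include <libpq-fe.h>
    #include <stddef.h>

    #define CACHE_POOL_SIZE 8  /* on the order of the server's CPU count */

    static PGconn *cache_pool[CACHE_POOL_SIZE];

    /* Open one connection per slot; 0 on success, -1 on failure. */
    int cache_pool_init(const char *conninfo)
    {
        for (size_t i = 0; i < CACHE_POOL_SIZE; i++) {
            cache_pool[i] = PQconnectdb(conninfo);
            if (PQstatus(cache_pool[i]) != CONNECTION_OK)
                return -1; /* real code would PQfinish() what was opened */
        }
        return 0;
    }

    /* Hand out connections round-robin; real code would track checkouts. */
    PGconn *cache_pool_get(void)
    {
        static size_t next = 0;
        return cache_pool[next++ % CACHE_POOL_SIZE];
    }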

In C, can't you just offset the pointer and then index from an arbitrary starting value?

Yes. I do this a lot when writing linear algebra stuff. All the math texts use 1-based notation for matrices, and the closer the code matches the paper I'm implementing, the easier life gets. Of course there's a big comment at the beginning of the function, where I modify the pointer, explaining why I'm doing it.

Technically no: a pointer that points outside its array (or one past the end) at any point is undefined behaviour, even before you dereference it. More importantly for this discussion, without support from the language it's not very ergonomic to work with. What happens when you need to call strlen, memcpy, or free?

It works in this case where you want to move the zero index forward a few cells to a valid offset. It is only UB for the general case where the offset may land outside valid memory. C has always supported negative indices, so moving index zero forward into the middle of the array is fine.
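
To illustrate the distinction, a small sketch (names are mine; the last idiom is the UB-prone one):

    #include <stdlib.h>

    /* Strictly conforming 1-based indexing: allocate one extra element
       and ignore slot 0, so v[1]..v[n] all land inside the allocation
       and free(v) still receives the pointer malloc returned. */
    double *vec_alloc_1based(size_t n)
    {
        return malloc((n + 1) * sizeof(double));
    }

    /* Moving index zero forward is fine: negative indices stay in bounds. */
    void forward_offset(void)
    {
        int a[10] = {0};
        int *p = a + 2;
        p[-2] = 1; /* same element as a[0] */
        p[7]  = 1; /* same element as a[9] */
    }

    /* By contrast, `double *v = a - 1;` computes a pointer before the
       start of the array, which is UB in ISO C even if v[0] is never
       dereferenced. */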

You can compute a number that is equal to BB(n), but you can't prove that it's the right number you're looking for. For any fixed set of axioms, you'll eventually run into an n big enough that the value of BB(n) is independent.


>You can compute a number that is equal to BB(n), but you can't prove that it is the right number you are looking for.

You can't categorically declare that something is unprovable. You can simply state that within some formal theory a proposition is independent, but you can't state that a proposition is independent of all possible formal theories.


They didn't claim that. They claimed that any (sound and consistent) finitely axiomatizable theory (basically, any recursively enumerable set of theorems) can only prove finitely many theorems of the form BB(n) = N.


I quoted the specific statement that I refuted.


Only if your goalposts for what counts as "mathematics" keep endlessly shifting. To prove values of BB(50000) you're probably going to need some pretty wacky axioms in your system. With BB(any large number), it's just going to be infeasible to justify that the system isn't tailored to prove that fact, short of adding an axiom "BB(x) = y" outright.


It's not that there "exists an n such that for all theories", but that "for all theories there exists an n", such that BB(n) eventually becomes independent.
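
In quantifier form (my rough formalization, with T ranging over sound, recursively axiomatized theories and k_n the true value of BB(n)): the claim is

    \forall T \; \exists n \; \forall m \ge n : \quad T \nvdash \mathrm{BB}(m) = k_m

and not the stronger, false reading

    \exists n \; \forall T : \quad T \nvdash \mathrm{BB}(n) = k_n

since for any particular n you can always cook up a theory that has "BB(n) = k_n" as an axiom.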


> particular values can be calculated

You need proofs of non-termination for the machines that don't halt. That isn't possible to brute-force.


If such proofs exist and are checkable by a proof assistant, you can brute-force them by enumerating programs in the proof assistant (with a comically large runtime). Therefore there is some small N for which no proof of BB(N) is checkable with existing proof assistants. Essentially, this paper proves that BB(5) was brute-forceable!

The most naive algorithm is to use the assistant to check whether each length-1 Coq program can prove halting with computation limited to 1 second, then check each length-2 Coq program running for 2 seconds, etc., until the proofs in the arXiv paper are run for more than their runtime.
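
A rough sketch of that dovetailing loop in C; check_proof is a hypothetical stand-in for invoking the proof assistant with a time budget:

    #include <stdbool.h>
    #include <stddef.h>

    static const char ALPHABET[] = "abcdefghijklmnopqrstuvwxyz(). ";

    /* Hypothetical: does `script` check out as a proof settling the
       target machine's halting, within `seconds` of checking time? */
    static bool check_proof(const char *script, size_t seconds)
    {
        (void)script; (void)seconds;
        return false; /* stub */
    }

    /* Try every script of length `len`, giving each `len` seconds. */
    static bool try_length(char *buf, size_t pos, size_t len)
    {
        if (pos == len) {
            buf[len] = '\0';
            return check_proof(buf, len);
        }
        for (size_t i = 0; i < sizeof(ALPHABET) - 1; i++) {
            buf[pos] = ALPHABET[i];
            if (try_length(buf, pos + 1, len))
                return true;
        }
        return false;
    }

    int main(void)
    {
        char buf[256];
        /* Dovetail: all length-1 scripts at 1 s each, length-2 at 2 s,
           and so on. Every checkable proof is eventually reached, at a
           comically large cost. */
        for (size_t len = 1; len + 1 < sizeof(buf); len++)
            if (try_length(buf, 0, len))
                break;
        return 0;
    }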


In this perspective, couldn't you equally say that all formalized mathematics has been brute forced, because you found working programs that prove your desired results and that are short enough that a human being could actually discover and use them?

... even though your actual method of discovering the programs in question was usually not purely exhaustive search (though it may have included some significant computer search components).

More precisely, we could say that if mathematicians are working in a formal system, they can't find any results that a computer with "sufficiently large" memory and runtime couldn't also find. Yet currently, human mathematicians are often more computationally efficient in practice than computer mathematicians, and the human mathematicians often find results that bounded computer mathematicians can't. This could very well change in the future!

Like it was somewhat clear in principle that a naive tree search algorithm in chess should be able to beat any human player, given "sufficiently large" memory and runtime (e.g. to exhaustively check 30 or 40 moves ahead or something). However, real humans were at least occasionally able to beat top computer programs at chess until about 2005. (This analogy isn't perfect because proof correctness or incorrectness within a formal system is clear, while relative strength in chess is hard to be absolutely sure of.)


Not quite. There is some N for which you can't prove BB(N) is correct with any existing proof assistant, but you can prove that value of BB(N) by writing a new proof assistant. However, the problem "check whether the new, sufficiently powerful proof assistant is correct" is not decidable.


The length of the proof / of the machine checking it can be much bigger than BB(n) itself. Or there can even be specific machines that don't halt but for which no proof of that exists at all: you can encode problems independent of ZFC in a few hundred states.


You can try them with simple short-loop detectors, or perhaps with the "tortoise and hare" method. If I did that and a friend asked me how I solved it, I'd call that "brute force".

They solved a lot of the machines with something like that, some with more advanced methods, and the "13 Sporadic Machines" that don't halt were handled with hand-coded proofs.
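
For the curious, Floyd's "tortoise and hare" in its generic form; the step function below is a toy stand-in for advancing a simulated machine one step (in the real setting the state would be the whole tape + head position + TM state):

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy stand-in for "advance the simulation one step". */
    static uint64_t step(uint64_t x)
    {
        return x * 6364136223846793005ULL + 1442695040888963407ULL;
    }

    /* If the state sequence ever repeats, the hare (2 steps/iteration)
       and the tortoise (1 step/iteration) must eventually meet, which
       proves the machine loops forever without storing any history. */
    static bool loops_within(uint64_t start, uint64_t budget)
    {
        uint64_t tortoise = step(start);
        uint64_t hare = step(step(start));
        for (uint64_t i = 0; i < budget; i++) {
            if (tortoise == hare)
                return true;
            tortoise = step(tortoise);
            hare = step(step(hare));
        }
        return false; /* inconclusive within the budget */
    }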


I think these are chiefly about working around limitations of 2000s Java, which was (no idea about newer releases) a very verbose and inexpressive language.


Everyone in a big org has their own incentives and they don't match with what makes a better product. People are looking out for themselves and there's no accountability for the end result. Checkboxes get checked and that's all that matters.


Finally somebody thinking about keeping the pesky little guy in the dark for longer.


Running it on a single laptop might be an exaggeration, but I can't imagine there's any essential complexity that requires more than a few dozen servers.

