I have a Rust web project written ~5 years ago. Zero updates until a month ago, also zero problems. A month ago I upgrade the Dockerfile to the latest Rust; zero problems compiling. Could have left it to run like this for 5 more years probably, but I decided to experiment...
I issue a `cargo update` to upgrade the leaf dependencies and pick up automatic minor version updates. Then I issue `cargo check`. Some new warnings from `clippy`, but it still compiles and runs without problems. Could have deployed for 5 more years, but I decided to experiment more...
I upgrade some of the libraries to major new versions - I am experienced and I know which ones will upgrade without problem. They do upgrade without problem. Could have deployed for 5 more years but decided to walk the extra mile...
I upgrade the more problematic ones, especially `actix-web`, the web framework, which had a massive rewrite and a huge new release with an almost completely different API surface... It's a bit difficult to understand the changes, especially in some parts of the old code written for the old version (which I no longer remember), but in an hour I'm done. Afterwards `cargo outdated` does not report any outdated libraries. I deploy for the next 5 years. Zero problems since then.
Well, it's not decades yet, but I can imagine similar effort to maintain it over the next decade.
I migrated from OCaml to Rust around 2020 and haven't looked back. Rust is quite a lot less elegant and has some unpleasant deficiencies (lambdas, closures, currying)... and I end up having to close one eye sometimes and clone some large data structure to make my life easier... But regardless, its huge ecosystem and great tooling let me build things so much more easily that OCaml has no chance. As a bonus, the end result is seriously faster: I know because I rewrote one of my projects, and for some time I had feature parity between the OCaml and Rust versions.
Nevertheless, I have fond memories of OCaml and a great amount of respect for the language design. Haven't checked on it since; I probably should. I hope some of the problems have been solved.
Your comment makes me think the kind of people who favor OCaml over Rust wouldn't necessarily value a huge ecosystem or the most advanced tooling. They're the kind who value the elegance aspect above almost all else, and prefer to build things from the ground up, using no more than a handful of libraries and a very basic build procedure.
Yeah, I was that kind of person, then I wrote a real tool that does real work in OCaml... and then I discovered that I was no longer such a person and moved to Rust.
Just the straight/naive rewrite was ~3 times faster for my benchmark (which was running the program on the real dataset) and then I went down the rabbit hole and optimized it further and ended up ~5 times faster. Then slapped Rayon on top and got another ~2-3x depending on the number of cores and disk speed (the problem wasn't embarrassingly parallel, but still got a nice speedup).
Of course, all of this was mostly unneeded, but I just wanted to find out what I was getting myself into, and I was very happy with the result. My move to Rust was mostly not because of speed, but I still needed a fast language (and OCaml qualifies). This was also before the days of multicore OCaml, so nowadays it would matter even less.
How much of that do you think comes from reduced allocations/indirections? Now I really want to try out OxCaml and see if I can approximate this speedup by picking the low-hanging fruit.
Were you using the ocamlopt compiler? By default, OCaml compiles to bytecode that runs in a VM, but few people figure that out because it is not screaming its name all the time like a Pokémon (looking at you, JVM/CLR). But OCaml can be compiled to native machine code with significant performance improvements.
Having the power to prevent others from mining blocks does not mean that you can take the tokens from their wallets: miners can't sign transactions on users' behalf. You can rewrite all of history, but then no exchange will accept your version of it to let you exchange the tokens for fiat. This would also almost certainly crash the price of XMR substantially, and later people would be able to fork/restore the original version. The technological side of the blockchain is only part of the consensus/trust/market/popularity; people are the other part, and people will not pay the attacker for a successful attack.
The attacker doesn't need to steal tokens. They just need to short the token while they sufficiently disrupt the network to drive down the price. They get the money and your tokens become worthless.
I was completely wrong about the cost. XMR mining rewards amount to only $150k/day.
At the height of the attack, Qubic (the company) paid people up to $3 in QUBIC for every $1 of XMR they mined through Qubic, and they reached around 33% of XMR's hashrate, which was sufficient to mine the majority of blocks for a few hours.
If they were forced to buy back all the QUBIC they paid out, this might have cost them ~$100k/day. But thanks to the media attention, it's likely that they didn't need to buy anything back and were actually able to emit more than they otherwise could have.
XMR needs to adapt: switch to PoS, or ASIC-based PoW, or a hybrid of both.
Controlling 51% of XMR costs ~$30M per day; you'd have to short a huge amount of XMR to make that worthwhile. Who would be the counterparty, and how would you do that anonymously?
The attack itself is unprofitable, the "profit" for Qubic is the publicity they get. (or at least that's what they're betting on)
Monero has a theoretical market cap of $4.7B USD and daily volumes >$100M USD. I wouldn't recommend taking that short position in one go, but spread over a few days and a few exchanges I don't see a problem acquiring a very large short position in the token.
You can only do that on centralized exchanges, which would mean that you effectively doxx yourself by shorting. Also the exchange will most probably seize your funds before you are able to withdraw them.
You'd have to spend $30M per day in order to control 51% of XMR, and then you'd YOLO your life savings (which would have to be another couple hundred million dollars) on centralized exchanges without anyone noticing?
We can argue all day about what "think" means and whether an LLM thinks (probably not, IMO), but at least in my head the threshold for "decide" is much lower, so I can perfectly accept that an LLM (or even a class) "decides". I don't have a conflict about that. Yeah, it might not be a decision in the human sense, but it's a decision in the mathematical sense, so I have always meant "decide" literally when talking about a piece of code.
It's much more interesting when we are talking about... say... an ant... Does it "decide"? That, I have no idea about, as it's probably somewhere in between: neither a sentient decision nor a purely mathematical one.
Well, it outputs a chain of thought that is later used to produce a better prediction. It produces a chain of thought similar to how one would think about a problem out loud. It's more verbose than what you would do, but you always have some ambient context that the LLM lacks.
At least under German law, if you offer services or products for purchase, you need to provide an address where you can be physically reached. For self-employed entrepreneurs, the only address that will fulfill that criterion is your private domicile.
> a message bus is kinda integrated in the BEAM runtime or Erlang
You have that on a single node.
If you need to run more than one node, you will end up inventing your own on top of mnesia, and usually the results are not spectacular, and/or you will end up paying happihacking to do it for you, or one of the other Erlang old-timers, whom you can count on the fingers of your hands.
This is really suboptimal compared to what you can achieve with any normal language + any message bus. You are actually much better off using a proper message bus even if you use Erlang.