The point GP made was that unless the fire starts in the battery, the battery is unlikely to catch fire. Thus when an EV catches fire, as opposed to starting one, the EV burns about the same as an ICE vehicle.
They even note this in the Sola fire report[1]: the EVs did not contribute anything in particular, compared to regular ICEs, to how the fire evolved.
They even say there is so much plastic and composites in modern cars, regardless of power source, that they output twice the heat compared to old cars made mostly of metal.
[1]: https://www.rogbr.no/Rapporter%20og%20utredninger/Evaluering... (page 24)
I have no idea what this says. Quite literally every single English language reference returned by a simple search refutes the assertion that EV fires are not more difficult or toxic than a regular car.
The fact that EV fires are more difficult to extinguish and produce substantially more toxic byproducts than fires in regular gasoline-powered cars is well established.
Last month, a battery-storage plant went up in flames and burned for days, prompting the evacuation of more than 1,000 residents and shutting down local schools. The plant, located in Moss Landing, an unincorporated community in Monterey County, is the largest facility in the world that uses lithium-ion batteries to store energy. Residents have reported feeling ill, and many of them worry that the fire polluted the air, soil and water with toxins.
“Now you don’t see anybody walking outside because it’s terrifying, everything that’s going on,” said Esmeralda Ortiz, who had to evacuate from her home in Moss Landing after the plant began burning on Jan. 16.
> a simple search refutes the assertion that EV fires are not more difficult or toxic than a regular car
The assertion is that EV fires are not particularly more difficult than ICE vehicle fires if the battery has not entered thermal runaway.
Most EV fires do not start in the battery (at least for EVs that are not involved in a collision).
And while the battery certainly can enter thermal runaway if an external fire heats it up sufficiently, it's not a given, as real-world examples like the Sola fire show, as well as various research. Here are some quotes from a paper about full-scale EV fire tests[1]:
In both cases the fire ignition took place in the rear seats. However, it has to be mentioned that in the case of the BEV, the battery was not involved in the fire for the first 800 s (full voltage in all cells of the battery).
However, the test also showed that although the vehicle had already burned for more than 10 min, the battery was still not involved in the fire and the temperature inside the battery was well below 50 °C
In the tests they forced thermal runaway after a while, by shorting the batteries.
Here's[2] another, smaller study where they tried to initiate a thermal runaway by placing a propane burner under the battery, but failed as they removed it too soon.
The burner was in place for 12 minutes, at which point the rest of the car had caught fire which also contributed to heating the battery. Yet no thermal runaway occurred.
Modern cars, EVs and ICEs alike, have more flammable material in the form of plastics than in their batteries or gas tanks[3]. And those plastics also release a lot of toxic smoke when burning. Sure, if the battery catches fire it will release nasty HF gas, but it's not like the fumes from an ICE fire are healthy stuff.
For any fire, once the fire gets hot enough, it can be difficult or impossible to extinguish.
The fundamental problem is that battery fires reach very high temperatures (around 1200 °C) and cannot be extinguished at that point. I think the distinction you’re making about the presence or absence of thermal runaway is rather irrelevant, because yes, you can put that fire out. That’s not the problem. The problem is that the devices do run away, and when they do it’s very difficult to put them out.
The ship in the original article was abandoned because the fire could not be extinguished. The battery fire at Moss Landing could not be extinguished for two weeks.
Here’s a great video of the Mountain View Fire Department talking about the difficulties of putting out EV fires. They explain that they’ve had cars catch on fire again 6 days later. They purchased new specialized equipment, but at the time their department was one of the only fire companies in California that had it.
Norwegian Institute of Health has a calorie calculator[1] based on scientific studies.
For a 40 yo male, 180 cm (about 5' 11") tall and weighing 80 kg (176 pounds), it spits out 1800 kcal at complete rest.
That then gets multiplied by a factor of 1.4-1.5 if you're a sedentary office worker, resulting in around 2400-2500 kcal.
This number is for maintaining weight. I used the calculator to find my 500 kcal deficit to lose weight, and based on my actual losses it seems quite accurate.
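I don't know exactly which equations FHI's calculator uses internally, but a quick sketch with the common Mifflin-St Jeor formula (my stand-in choice, not necessarily theirs) lands in the same ballpark:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age, male=True):
    """Resting energy expenditure in kcal/day (Mifflin-St Jeor)."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)

def daily_budget(weight_kg, height_cm, age, activity=1.45, deficit=0):
    """Maintenance calories scaled by an activity factor, minus an optional deficit."""
    return bmr_mifflin_st_jeor(weight_kg, height_cm, age) * activity - deficit

# 40-year-old male, 180 cm, 80 kg, sedentary office worker
print(round(bmr_mifflin_st_jeor(80, 180, 40)))        # ~1730 kcal at rest
print(round(daily_budget(80, 180, 40)))               # ~2510 kcal to maintain
print(round(daily_budget(80, 180, 40, deficit=500)))  # ~2010 kcal target when cutting
```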
I ran similar numbers and experiments over and over. In the end I found composition and "starting weight" matters quite a lot. https://jodavaho.io/tags/diet.html
You can easily go +/- 5-10 lbs at my height, just by choosing carbs over protein or fat, and it's not "Fat loss" or body composition, it's water weight associated with converting carbs to something your body can use.
Then a fast day drops you another 5-10 lbs for opposite reasons + burning glycogen stores. And that >10% swing in body "weight" is just noise on the calorie estimates if you plug it directly into such a calculator.
(as an example, if I choose 2000 calories of carb-heavy foods, I will weigh more, increasing my calorie "budget" to maintain weight b/c I eat more carbs, producing a higher weight, further increasing my calorie "budget", etc etc).
I can find 2200 for my 2 meter height, or 2500, or 1800, depending on input weight, and since input weight varies by day, and by caloric intake (due to food weight, water weight, and weight gain), it's not reliable to say the "average" person is healthy with 2400 calories. Because perhaps that person should be eating different foods or have 10 lbs less bodyfat to begin with!
I find you have to reverse-engineer it given what you know to be a healthy weight including muscle and fat composition.
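To make the feedback loop in the parenthetical above concrete, here's a toy iteration (all numbers invented, using a Mifflin-style formula purely as a stand-in) where the scale weight the calculator sees includes water that tracks recent carb intake:

```python
def maintenance(weight_kg, height_cm=200, age=40, activity=1.2):
    """Stand-in maintenance estimate: Mifflin-St Jeor times an activity factor."""
    return (10 * weight_kg + 6.25 * height_cm - 5 * age + 5) * activity

lean_kg = 90.0   # hypothetical weight with glycogen stores run down
water_kg = 0.0   # water bound to stored glycogen

for day in range(5):
    scale_kg = lean_kg + water_kg
    budget = maintenance(scale_kg)
    print(f"day {day}: scale {scale_kg:.1f} kg -> budget {budget:.0f} kcal")
    # eating carb-heavy up to that budget refills glycogen, which binds more water
    water_kg = min(water_kg + 0.8, 2.5)
```

The per-day creep is small in this toy version, but it shows the mechanism: the "input weight" the calculator sees is partly water that follows what you ate, not tissue, so its output drifts with your diet.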
Been an SRE and DBA for almost 20 years and the only truth in tech I firmly believe in: use whatever you want in a new project. Build fast and get it out. Once you have paying users, hire old dads like me to move you to Cassandra or Vitess or TiDB or something, or don't and just pay the bills for MongoDB and laugh all the way to Series C.
I wouldn't start a new project with MongoDB; I'd probably use ScyllaDB, and I'd spend months getting the data model just right while you launch and get paying customers.
We're seeing a convergence of document DBs adding relational features, and relational DBs adding document features. At this point I find the best of both worlds to simply be PG with JSONB.
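For what it's worth, a minimal sketch of that hybrid from Python (psycopg2; the table and field names here are made up for illustration):

```python
import json
import psycopg2  # assumes a reachable local Postgres; connection details are illustrative

conn = psycopg2.connect("dbname=app user=app")
cur = conn.cursor()

# Relational columns where the shape is fixed, JSONB where it varies.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id       serial PRIMARY KEY,
        source   text NOT NULL,
        payload  jsonb NOT NULL
    )
""")
cur.execute(
    "INSERT INTO events (source, payload) VALUES (%s, %s::jsonb)",
    ("billing", json.dumps({"type": "invoice.paid", "amount": 4200, "currency": "NOK"})),
)

# Document-style query with the @> containment operator
# (a GIN index on payload keeps this fast on larger tables).
cur.execute(
    "SELECT source, payload->>'amount' FROM events WHERE payload @> %s::jsonb",
    (json.dumps({"type": "invoice.paid"}),),
)
print(cur.fetchall())
conn.commit()
```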
Until very recently, at the company I work for, we were running one of the largest (if not the largest) replica set clusters in terms of number of documents stored (~20B) and data size (~11 TB). The database held up nicely over the past 10 years since the very first inception of the product. We had to do some optimizations over the years, but those are expected with any database.
Since Mongo Atlas has a hard limit of 14 TB on disk size, we explored multiple migration options: sharding Mongo, or moving to Postgres/TimescaleDB or another time series database. During the evaluation of alternative databases we couldn't find one that supported our use case, was highly available, could scale horizontally and was easily maintainable (e.g. upgrades on Mongo Atlas require no manual intervention and there's no downtime even when going to the next major version). We had to work around numerous bugs that we encountered during sharding that were specific to our workloads, but the migration was seamless and required ~1h of downtime (mainly to fine-tune database parameters).
We've had very few issues with it over the years. I think Mongo is a mature technology and it makes sense depending on the type of data you're storing. I know at least a few other healthcare companies that are using it for storing life-critical data even at a larger scale.
RavenDB is trash. It can’t handle even 1/10 of this type of load; it was trashed in Jepsen testing too. I had to work with it for 4 years and I disliked it.
> are there still good reasons for using it in a new project in 2025?
I've written this before: if your data looks like trees, with some loose coupling between them, it's a good choice. And most data does look like trees.
It does place some extra duties on the backend. E.g., MongoDB doesn't propagate ("cascade") deletes. (This also happens to be a feature I dislike: not so long ago, a delete of an insignificant record triggered an avalanche of deletes in a PostgreSQL database, and restoring that took quite a bit of time.)
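Concretely, the "extra duty" ends up looking something like this with pymongo (collection names are hypothetical; on a 4.0+ replica set you'd probably want the three calls inside a transaction):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connection string is illustrative
db = client.app

def delete_post(post_id):
    """MongoDB won't cascade this, so the backend removes children explicitly."""
    db.comments.delete_many({"post_id": post_id})
    db.attachments.delete_many({"post_id": post_id})
    db.posts.delete_one({"_id": post_id})
```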
> I've written this before: if your data looks like trees, with some loose coupling between them, it's a good choice. And most data does look like trees.
I had an education startup a little while ago.
Courses had many cohorts, cohorts had many sessions.
It really was much nicer having a single tree structure for each course, appending new cohorts etc, rather than trying to represent this in a flat Postgres database.
That said, I acknowledge and agree with other commenters' experiences about MongoDB and data loss in the past.
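For concreteness, the shape I mean (pymongo, field names illustrative): one document per course, with its cohorts and their sessions nested inside, so adding a cohort is a single $push rather than inserts across three tables.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").school  # names are illustrative

course = {
    "title": "Linear Algebra",
    "cohorts": [
        {
            "starts": "2025-09-01",
            "sessions": [
                {"topic": "Vectors", "minutes": 90},
                {"topic": "Matrices", "minutes": 90},
            ],
        }
    ],
}
course_id = db.courses.insert_one(course).inserted_id

# Appending a new cohort is one update on the one document
db.courses.update_one(
    {"_id": course_id},
    {"$push": {"cohorts": {"starts": "2026-01-15", "sessions": []}}},
)
```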
My main concern would be with querying. What is querying like in modern mongodb, or scylladb? I've seen couchbase ads showing a superset of sql, etc. Last I recall Mongo had a weird query system.
Cascading deletes are an optional feature in every DB that supports them, aren’t they? I only have DBA-ish experience with SQL Server, but there they are fully opt in.
MongoDB has been in use here for ~8 years straight with a replica set that replicates across datacenter boundaries. It runs smooth, stable, and fast, and allows for a good backup strategy. It implements all the database concepts you can imagine, offers great compression algos, and is easy to maintain. In short: it does the job and it does it well.
MongoDB as a company is growing 20% y/y with $2B in revenue.
So very far from being a legacy product.
I still use it for new projects because (a) Atlas is genuinely a solid offering with a great price point for startups and (b) schema-less datastores have become more of a necessity as our control of data has decreased, e.g. SaaS companies dictate their schema and we need to accommodate.
They're really good at sales and marketing, especially aimed at beginners.
I'm still puzzled why people use it given that it's a database and there's nothing technical it ever did better than any of its competitors. The best that can be said for it is that it works about as well in some circumstances as competing databases do.
I swore away from it for 10 years, but came back recently. And I'm pleasantly surprised with the developer experience of MongoDB Atlas (the cloud version).
You just have to keep in mind the common sense best practices about developing with kv stores, and you'll be mostly alright.
The original storage engine was terrible, but they wised up and later acquired and adopted WiredTiger as the default. It was sort of their InnoDB moment and went a long way to improving performance, reliability, replication, and other "enterprisey" features.
If you rely on Postgres and/or MySQL replication then you'll need a DBA on call. Lots of manual work involved in babysitting them when stuff goes south.
MongoDB is entirely automatic: you just delete bad servers and reprovision them, and everything "just works" with no interruption.
I disagree with that. The Server Side Public License is more open than the AGPL.
In the same sense that the GPL is more open than the MIT license; more viral requirements for openness are generally a good thing. I don't want Amazon and their ilk deploying hosted MongoDB clusters.
I doubt they use mongodb for the actual real time game. Probably just for the user accounts, items, skins and that sort of stuff. It could literally be done by anything else.
> or are there still good reasons for using it in a new project in 2025
It's not clear there ever was. Most of the big users I'm aware of, like Stripe, don't seem to have needed it and regretted the decision. Big data didn't become a thing in the way people expected[0]. If you really did need the scalability of Mongo you'd choose a NewSQL database like TiDB[1].
a) Big data is more than just the size of the data. It's about how you treat that data, i.e. instead of doing expensive and brittle up-front RDBMS modelling, you dump it all into a data lake and figure out how to handle it at run-time. And it is still the standard pattern in almost all companies today.
b) Nobody was choosing MongoDB solely for performance. If it was you would choose some in-memory K/V store. It was about it being the only well supported document store that was also fast and scalable.
That's not how software works. In my experience using mongodb just means that now every single bug in the code creates messed up data for a few months/years and then when that data pops up the software crashes :D
it's probably stable enough... but our teams got like 90% reduction in their bills by moving off mongodb atlas to postgres.
of course, architect your data schema right and you'll have much more flexibility in choosing a database engine that can be fast, cheap, and easy to operate.
I find search engines like Google and Bing are so overly keen on displaying any results that they'll ignore your search parameters and return something else instead.
Thus, I find LLMs quite useful when trying to find info on niches that are close to a very popular topic, but different in some key way that's hard to express in search terms that won't get ignored.
Noise. Either directly as a source, which is then filtered to produce, say, a cymbal sound, or indirectly by creating timing variance, i.e. notes not playing exactly on the beat each time, which makes things sound less artificial and more pleasing.
It can also be used to dither[1] the result when doing a final rendering, to improve the noise floor.
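A small numpy/scipy sketch of both uses (the cutoff, decay time, and ±3 ms jitter are arbitrary choices of mine, not any standard recipe): high-passed, decaying white noise as a cymbal-ish hit, plus random timing offsets so a 16th-note pattern doesn't sit robotically on the grid.

```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 48_000
rng = np.random.default_rng(0)

# Noise as a sound source: high-passed white noise with a fast decay ~ hi-hat
n = int(0.4 * sr)
noise = rng.uniform(-1.0, 1.0, n)
b, a = butter(2, 6000 / (sr / 2), "highpass")        # keep only the "sizzle"
hat = lfilter(b, a, noise) * np.exp(-np.arange(n) / (0.08 * sr))

# Noise as timing variance: nudge each 16th note by up to +/- 3 ms
bpm, steps = 120, 16
step_s = 60 / bpm / 4
out = np.zeros(int(steps * step_s * sr) + n)
for i in range(steps):
    t = i * step_s + rng.uniform(-0.003, 0.003)
    start = max(0, int(t * sr))
    out[start:start + n] += hat
```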
I guess I should have pointed out the noise shaping could be useful if you're doing 16bit playback of a 24bit source, so possibly useful during playback as well.
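Concretely, a minimal sketch of plain (flat) TPDF dither when taking float/24-bit material down to 16-bit; noise shaping would go one step further and push the residual quantization noise toward less audible frequencies:

```python
import numpy as np

def to_int16_tpdf(x, rng=None):
    """Quantize float samples in [-1, 1) to int16 with triangular (TPDF) dither."""
    rng = rng or np.random.default_rng()
    scale = 2 ** 15
    # TPDF dither: sum of two uniform noises, spanning +/- 1 LSB
    d = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    y = np.round(x * scale + d)
    return np.clip(y, -scale, scale - 1).astype(np.int16)

# e.g. a very quiet 1 kHz tone, only ~3 LSBs at 16 bits, where plain
# truncation would turn into audible harmonic distortion
sr = 48_000
t = np.arange(sr) / sr
tone = 1e-4 * np.sin(2 * np.pi * 1000 * t)
print(to_int16_tpdf(tone)[:8])
```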
For a while I was struggling to pinpoint my discontent, but I ended up realizing it was because I felt I was undervalued compared to the contributions I had made over the years.
As it happened, my superiors had come to realize the same, so when I asked for a talk, they preempted my plan by announcing this.
As mentioned by the article, I've since been included in much more strategic talks and discussions to help shape the future of the company, as we're moving our products from the desktop to the web.
It's still something I'll keep an eye on, but just realizing the source of my frustration was very helpful. It also made me more aware of how I shouldn't sacrifice too much unless it's being valued, as opposed to just being more useful.
Was in a hurry, so I realize I forgot my main point.
I wish I had realized the source of my frustration and thus acted on it earlier.
It led to quite a downbeat feeling, which still lingers a bit every now and then.
Of course this was just before the current mass layoffs, so getting a new job was definitely an option then, which I almost certainly would have taken had they not seen my value.
While it's less than 3% of the fund, it's about 25% of the national budget. So it's most certainly used to fund these exemptions and other political goals.
This is RFC 4086, which was published in 2005. It's still listed as the current best practice, however much has happened since 2005, especially in the field of security.
So I wonder if there are some areas in which this document is lacking or which aren't holding up as well?
One thing I have picked up is randomness inside virtual machines, and issues surrounding that. Sure, if you've got hypervisor support you're golden, but what if you don't?
>So I wonder if there are some areas in which this document is lacking or which aren't holding up as well?
Ring oscillators have been embedded into Intel/AMD CPUs, and they're accessible via RDRAND/RDSEED. Blum-Blum-Shub has been phased out; these days you see AES-based CSPRNGs, and Linux uses ChaCha20. The RNG in Linux has been overhauled at least once, so the /dev/random section is outdated.
Interestingly, the key size recommendations were already in the ~90-bit range 20 years ago, and they haven't changed that much. That's still quite close to the password minimum recommendation. Makes you wonder whether it should be closer to 103 bits now.
Triple DES has been deprecated.
All in all, the guidance has changed. These days you should not be concerning yourself with any userland CSPRNG, just use the OS syscalls like GETRANDOM. Nothing you do above a kernel module RNG will make it more secure.
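In practice that means a few lines like these (Python as an example; os.getrandom is Linux-only, while os.urandom and secrets sit on top of the same OS source everywhere):

```python
import os
import secrets

key = os.urandom(32)               # 256 bits straight from the OS CSPRNG
token = secrets.token_urlsafe(32)  # convenience wrapper over the same source

# On Linux, os.getrandom() exposes the getrandom(2) syscall directly;
# GRND_NONBLOCK errors out instead of blocking if the pool isn't ready yet.
if hasattr(os, "getrandom"):
    seed = os.getrandom(32, os.GRND_NONBLOCK)
```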