There is this dumb belief, stemming from a lack of proper CS education, that any code you write can just randomly have memory safety issues.
This is effectively true in C and C++ though. Show me a nontrivial project in either of those languages that has never had a memory safety issue and I'll show you a project that isn't looking for them. Even SQLite doesn't meet this bar, despite incredibly skilled programmers and an obsessive commitment to quality.
I mind DST because I have to consider clock changes, not because I care how many hours of sunlight fall between 6:00 and 18:00. Even the extreme of Arctic winter is annoying only because 4:00 looking just like 16:00 messes with your awareness of time until you adopt a structured schedule.
And that was largely a fair point back before we had everything managed by computers. Nowadays, I'd wager the vast majority of folks didn't realize their phone's clock changed overnight. My kids certainly didn't realize it.
Not that I think we couldn't have a better system. But nobody likes my idea of "base it on the month, with 6 months going up 10 minutes and 6 going down." Well... I think some folks like the idea, but nobody (including me) thinks that is where we are headed.
> "base it on the month, with 6 months going up 10 minutes and 6 going down."
Back before Japan adopted our current time system, they adjusted their clocks every two weeks. They adjusted the "hour" markers so that there were always six temporal hours of daylight and six of darkness, with the length of the hours changing over the year.
That's really cool; I'll have to dig into why they did that and why they stopped. My gut says that the way we do DST -- by moving what timezone everyone is in -- is just not compatible with that sort of system.
I pushed for the idea because it is far more closely aligned with solar time than what we have. That is, sundials did this somewhat automatically for many years, no? (They, of course, also stretch how long an hour is... I am not that sadistic.)
My country (Italy) could simply leave DST on all year long and it would be better suited to our way of living. Spain has basically done this for the last 90 years, ever since it moved to GMT+1 from the GMT+0 zone where it geographically belongs. As a bonus, they got DST in the summer. For that matter, I could take a double DST in the summer too. All that light at 5 AM is wasted.
This is part of what I find "quaint" about the conversation. I think there is basically no argument that many states and nations could just stick to a single timezone. Essentially, the closer to the equator you are, the less you are impacted by daylight changing.
So maybe calling it "quaint" is wrong; it's more that there is a lot of talking past each other. The shift in daylight is much more dramatic for people further from the equator.
Right, and that I think is far easier to deal with than you realize. Easier, at least, if you don't try to do it in one-hour jumps twice a year. But basically every animal already has to deal with the shift due to the nature of actual sunlight, and we do just fine by that.
(I should add that I have also never used an alarm clock. The shift from DST has never really been difficult to manage. It was only hard when I would be awake but not leave for work/school on time because I forgot to update the clock.)
What time will X occur on a particular future date? E.g. meetings.
What time will X occur in another time zone?
What time will X occur across multiple time zones that shift at different times?
What time will X occur across multiple time zones where one doesn't shift at all? E.g. meetings with people in Arizona (see the sketch after this list).
How long ago did X occur, across a transition? E.g. correlating Unix time to civil time.
How long is X, where X is an event spanning multiple time zones that shift independently? E.g. plane flights.
Does my cron job need to run intermittently every X hours, or at 24/X particular times every day, or every X hours but managed so that processing doesn't intrude into normal business hours?
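For concreteness, here's a minimal sketch of the Arizona case using C++20 <chrono> (assuming a standard library with time-zone database support, e.g. recent MSVC or GCC 14; the dates are chosen just to straddle the 2024 US spring-forward transition):

    #include <chrono>
    #include <iostream>

    int main() {
        using namespace std::chrono;
        // The same recurring "9:00 AM in New York" meeting, before and
        // after New York springs forward on 2024-03-10.
        zoned_time before{"America/New_York", local_days{2024y / March / 8} + 9h};
        zoned_time after{"America/New_York", local_days{2024y / March / 11} + 9h};
        // Phoenix does not observe DST, so the meeting moves an hour
        // earlier on the Arizona clock even though "nothing changed" there.
        std::cout << zoned_time{"America/Phoenix", before} << '\n'; // 07:00 MST
        std::cout << zoned_time{"America/Phoenix", after} << '\n';  // 06:00 MST
    }

Durations are similar: subtract the get_sys_time() values and the DST shifts drop out.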
Personally, when I'm in the wilderness my circadian rhythm follows the sun. Up at sunrise and asleep not long after sunset. That's not how I work when I'm living in society, so I don't see why animal adaptations are particularly relevant here.
On a social level, the shift causes tremendous levels of stress to people. It essentially creates a day of national jet lag, for no real benefit that I can see.
None of those are difficult in the modern world; computers have pretty much solved all of them. There are cases that are confusing to talk about, but none that are truly problematic.
Consider one of the most sensitive cases for people: taking medication twice a day. What do people do when they fly across the country? Most don't keep the medication on the schedule they had back home. They just start taking it on the new local time.
Folks that do have "take this exactly every 12 hours" rely on a timer, not a clock.
Even talking about how long ago something was is just not that important for folks in most scenarios. Consider: when people move, they don't change their birth date even if it would have been a different day in the new location. (That is, nobody does their birthday by GMT.)
Now, about the only idea I fully reject is that we could just tell all businesses to change their opening hours so that we don't have to change the clocks. This is mind-numbingly crazy to me. The entire point of changing the clocks is to get everyone to update together.
When a meeting is set up and time shifts happen, in practice people don't know what time the meeting is. Same for the rest. I'm not concerned about whether you can calculate it in theory (you actually can't in general, just the common cases) since it doesn't happen reliably in my experience. There were also many cases of things incorrectly applying DST when I lived in AZ.
Also, I assure you people care very much about how long ago they worked when it involves their paychecks and hours worked. Datetime/DST bugs in payment systems and other critical software are scarily common.
Plus the billions in social costs. Children have to deal with sleep deprivation, hospitals have to treat more strokes, pollution worsens, and criminals get longer sentences.
I'm not sure why changing opening hours is difficult. It's incredibly common to have off-season and high season hours in tourist towns. You don't have to coordinate anything. If the idea of multiple hours is too difficult for anyone, they can simply pick one or the other. Nothing bad will happen.
The benefits we get for the trouble of changing the clocks are minor.
If you are saying that mistakes will be made, I agree. People make mistakes all the time, about things easier than schedules.
If you want to single out the idea of "shifting timezones" as how we accomplish DST, I agree that is problematic at best. I can assume there were reasons to do it by completely changing what timezone an area is in, but I struggle to understand them myself. Especially when we don't do the same for other time shifts (leap years and leap seconds, in particular).
I further agree that doing it in a one-hour jump is bad. That is literally why I suggest that shortening the jump to 10 minutes would be fine. The argument being that that is far more natural to how time was felt by people for most of history. (Indeed, originally, hours were not fixed in terms of how many minutes they took.)
But no, nobody cares what time you said they clocked in yesterday. They care that you accurately pay them for how long they were clocked in. Obviously, you have to make any system that deals with that work correctly. But, again, people make mistakes on those already, irrespective of timezone changes.
For evidence, see how little energy people expend on how incomprehensible airline tickets are. Look at a ticket and see if you can quickly say how long a flight is. It isn't like they don't know. It just doesn't matter on your ticket. (Even if I like to consider my options based on how long I'll be in the air...)
Is that a good argument for not changing things? Yeah, which is why I think my suggestion of 10-minute changes is largely silly. A lot of inertia in the system we have is not necessarily a bad thing.
Getting everyone to change their operating hours just feels daft to me. And, ultimately, how is that any different?
For example, say you want schools to start and end an hour earlier beginning next week. That means we still have to deal with the idea that you have to shift your sleep; it is probably wise to also go to sleep an hour earlier. And, yeah, I think people could adjust to knowing that they have to change their bedtime from 9 to 8, for example. But there is a reason we try to keep sunrise and sunset as close to consistent times as we can, no matter where we go.
I'm somewhat sympathetic to the data about how much worse the week of the hour loss is. I'd be curious to know if that is better or worse in recent years. And I genuinely don't know how to square the fact that the data dang near cancels out with how much better it is in the week we gain an hour. That, honestly, feels a bit too convenient. (And again, this just gets me back to the idea that the problem is losing a full hour.)
I'm reminded of the M.A.D.D. campaigns to reduce drunk driving with faked crash scenes in front of schools. They would set up a crashed car with dummy "bodies" strewn (and even scattered blood/glass) across walkways where everyone could see them.
I don't think it was a particularly effective tactic.
Human reaction time is very difficult to average meaningfully. It ranges anywhere from a few hundred milliseconds on the low end to multiple seconds. The low end of that range consists of snap reactions by alert drivers, and the high end is common with distracted driving.
400-500ms is a fairly normal baseline for AV systems in my experience.
> MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers.
But it'll be highly variable, not just between individuals but with state of mind, attentiveness, and a whole lot of other things.
The Chinese laborers working in BYD and Foxconn factories have higher wages than their equivalents in Mexico and Vietnam building products sold for 3-5x as much in the US. The cheapest labor in the world is found in Africa, and yet Western industrial manufacturing has largely ignored the continent. The price of labor isn't the most important factor here.
Western countries wouldn't have moved manufacturing to China in the past if wages weren't cheaper.
I think the cost of labour now is kind of irrelevant. It was the cost of labour (and China being a stable country with favourable rule of law) that drove offshoring in the 90s and 2000s. The Chinese manufacturers chose to invest in process improvement and automation rather than just chasing the cheapest labour - and so now they've got a massive technical advantage.
> The Chinese laborers working in BYD and Foxconn factories have higher wages than their equivalents in Mexico and Vietnam building products sold for 3-5x as much in the US.
I'm having a hard time parsing this. Also, source?
> The cheapest labor in the world is found in Africa and yet Western industrial manufacturing has largely ignored the continent. The price of labor isn't the most important factor here.
... Yeah this seems fair. I think a lot of Africa has an infrastructure problem - it doesn't matter how cheaply you can manufacture if you can't move large volumes of raw materials/parts to the factory and finished goods from the factory. Plus many areas in Africa have security issues which make them less attractive places to do business. Geographically, a lot of the continent is cursed with hard to navigate rivers as well (the upper Nile being an exception), so only coastal shipping is really viable.
Re: wages, we have info from reporting. BYD had protests last year when they cut worker overtime at one of their factories, dropping salaries that were previously 8.5k-11.5k USD to 5-6k. Foxconn offers a base rate of around $2.50/hr, so roughly 5k USD a year (about $2.50 x 2,000 hours) without overtime (which you'll inevitably work). This used to be higher as well.
Mexican autoworker wages came up during the GM UAW negotiations. Those range from about $9/day (~3k USD a year) up. Higher-paying positions tend to go to Americans crossing the border.
Vinfast pays about 100M dong (4k USD with bonus) to their factory workers in Vietnam, which is quite a decent wage locally from what I understand.
You could do that, and some very expensive aerospace tooling exists in that direction (for verifying compiler outputs), but an easier way is to use sanitizers on a best-effort basis. You can either use something like AFL++ with QASan, or use retrowrite to insert whatever instrumentation you want into the binary as if it were a normal compiled program.
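For cases where you do have source, the cheapest version of that best-effort approach is compiler-inserted instrumentation rather than binary-level tools. A minimal sketch, assuming g++ or clang with AddressSanitizer available (the filename is just for illustration):

    // Build and run with ASan enabled, e.g.:
    //   g++ -g -fsanitize=address oob.cc -o oob && ./oob
    #include <vector>

    int main() {
        std::vector<int> v(8);
        // Reads one element past the 8-int heap allocation: undefined
        // behaviour that may go unnoticed in a normal build, but aborts
        // with a heap-buffer-overflow report under ASan.
        return v.data()[8];
    }

QASan and retrowrite exist precisely to get this class of checking when all you have is the binary.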
I wish memory safety skepticism were nothing more than a rhetorical strawman. It's not hard to find prominent people who think differently though. Take Herb Sutter for example, who argues that "memory safety" as defined in this article is an extreme goal and that we should instead focus on a more achievable 95% safety, spending the remaining effort on other types of safety.
I can also point to more extreme skeptics like Dan O'Dowd, who argues that memory safety is just about getting gud and you don't actually need language affordances.
Discussions about this topic would be a lot less heated if everyone was on the same page to start. They're not. It's taken advocates years of effort to get to the point where we can start talking about memory safety without immediate negative reactions and that process is still ongoing.
> Take Herb Sutter for example, who argues that "memory safety" as defined in this article is an extreme goal and that we should instead focus on a more achievable 95% safety, spending the remaining effort on other types of safety.
One thing I've noticed when people make these arguments is that they tend to ignore the fact that most (all?) of these other safeties they're talking about depend on being able to reason about the behaviour of the program. But when you violate memory safety a common outcome is undefined behaviour, which has unpredictable effects on program behaviour.
These other safeties have a hard dependency on memory safety. If you don't have memory safety, you cannot guarantee these other safeties because you can no longer reason about the behaviour of the program.
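A contrived sketch of that dependency (the names are made up, and the exact outcome is layout- and compiler-dependent, which is rather the point):

    #include <cstdio>

    // Hypothetical component: `interlock_engaged` is the "other safety"
    // property we want to reason about.
    struct Controller {
        char label[8];
        bool interlock_engaged;
    };

    int main() {
        Controller c{"pump", true};
        // Off-by-one: when i == 8 this writes past the end of `label`.
        // That is undefined behaviour; on common struct layouts it
        // silently zeroes `interlock_engaged`, which this loop never
        // even mentions.
        for (int i = 0; i <= 8; ++i)
            c.label[i] = '\0';
        // A logically correct check whose premise has been destroyed
        // by unrelated code.
        std::printf("interlock %s\n", c.interlock_engaged ? "on" : "OFF");
    }

Every line of the interlock logic can be individually correct and the system still fails, because the reasoning happens in a memory model the bug has invalidated.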
For C/C++, memory safety is a retrofit to a language never designed for it.
Many people, including me, have tried to improve the safety of C/C++ without breaking existing code. It's a painful thing to attempt. It doesn't seem to be possible to do it perfectly. Sutter is taking yet another crack at that problem, hoping to save C/C++ from becoming obsolete, or at least disfavored. Read his own words to see where he's coming from and where he is trying to go.
Any new language should be memory safe. Most of them since Java have been.
The trouble with thinking about this in terms of "95% safe" is that attackers are not random. They can aim at the 5%.
The most popular ones have not necessarily been. Notably, Go, Zig, and Swift are not fully memory safe (I've heard this may have changed recently for Swift).
Go's memory safety blows up under concurrency. Non-trivial data races are undefined behaviour in Go, violating all safety considerations, including memory safety.
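The same rule holds in C++, for what it's worth. A minimal sketch of the category (in C++ since that's the language most of this thread is about; -fsanitize=thread flags it):

    #include <cstdio>
    #include <thread>

    int counter = 0;  // shared, non-atomic, unsynchronized

    int main() {
        auto work = [] { for (int i = 0; i < 1'000'000; ++i) ++counter; };
        // Two unsynchronized writers: a data race, hence undefined
        // behaviour, in C++ just as in Go's memory model. In practice
        // the count usually comes up short; in principle anything at
        // all is permitted.
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        std::printf("%d\n", counter);  // rarely 2000000
    }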
I would not expect that it makes sense to provide that safety as the default for Go's hash table type. My understanding is that modern Go has at least a best-effort "fail immediately" detector for this particular case, so when you've screwed this up, your code will exit in production, reporting the bug. I guess you can curse "stupid" Go for not allowing you to write nonsense if you like, or you could use the right tool for the job.
It's hard to imagine that, if a memory problem were reported to Sutter about one of his own programs, he would not prioritize fixing it over most other work.
However, I imagine he would probably take into consideration the context. Who and what is the program for? And does the issue only reproduce if the program is misused? Does the program handle untrusted inputs? Or are there conceivable situations in which a user of the program could be duped by a bad actor into feeding the program a malicious input?
Imagine Sutter wrote a C compiler, and someone found a way to crash it, but the only way to reproduce that crash is via code that invokes undefined behavior. Why would Herb prioritize fixing that over other work?
Suppose the user insists that he's running the compiler as a CGI script, allowing unauthenticated visitors to their site to compile programs, making it a security issue. How should Herb reasonably reply to that?
The problem in this conversation is that you are equivocating between "fixing memory safety bugs" and "preventing memory safety bugs statically." When this blog post refers to "memory safety skeptics," it refers to people who think the second is not a good way to expend engineering resources, not your imagined flagrantly irresponsible engineer who is satisfied to deliver a known nonfunctional product.
It's worth differentiating the case of a specific program from the more general case of memory safety as a language feature. A specific program might take additional measures appropriate to the problem domain like static analysis or using a restricted subset of the language. Memory safety at the language level has to work for most or all code written using that language.
Herb is usually talking about the latter because of the nature of his role, like he does here [0]. I'm willing to give him the benefit of the doubt on his opinions about specific programs, because I disagree with his language opinions.
Yeah, what kind of Crazy Person would make a web site where unauthenticated visitors can write programs and it just compiles them?
What would you even call such a thing? "Compiler Explorer"?
I guess maybe if Herb had helped the guy who owns that web site, say, Matt Godbolt, to enable his "Syntax 2 for C++" compiler, cppfront, on that site, it would feel like Herb ought to take some responsibility, right?
> Imagine Sutter wrote a C compiler, and someone found a way to crash it, but the only way to reproduce that crash is via code that invokes undefined behavior. Why would Herb prioritize fixing that over other work?
Because submitting code that invokes undefined behavior to one's compiler is a very normal thing that most working C developers do dozens of times per day, and something that a decent C compiler should behave reasonably in response to. (One could argue that crashing is acceptable, but erasing the developer's hard drive is not; by definition, that means undefined behaviour in this situation is not acceptable.)
> Suppose the user insists that he's running the compiler as a CGI script, allowing unauthenticated visitors to their site to compile programs, making it a security issue. How should Herb reasonably reply to that?
I would not describe Herb as a memory safety skeptic. He's a skeptic of what is practically achievable w/r/t memory safety within the C++ language and community. Any 100% memory safe evolution of the language is guaranteed to break so much code that it would receive near-zero adoption. In that context, I think it makes sense to talk about what version of the language we can create that catches the most errors while actually getting people to use it.
> Take Herb Sutter for example, who argues that "memory safety" as defined in this article is an extreme goal and we should instead focus on a more achievable 95% safety
I wonder how you figure out when your codebase has reached 95% safety? Or is it OK to stop looking for memory unsafety when you hit, say, 92% safe?
Anything above 90% safety is acceptable because attackers look at that and say “look they’ve tried hard. We shouldn’t attack them, it’ll only discourage further efforts from them.” When it comes to software security, it’s the thought that counts.
> Take Herb Sutter for example, who argues that "memory safety" as defined in this article is an extreme goal and that we should instead focus on a more achievable 95% safety, spending the remaining effort on other types of safety.
I don't really see how that's a) a scepticism of memory safety or b) how it's not seen as a reasonable position. Just because someone doesn't think X is the most important thing ever doesn't mean they are skeptical of it, but rather that the person holding the 100% viewpoint is probably the one with the extreme position.
> [A] program execution is memory safe so long as a particular list of bad things, called memory-access errors, never occur
"95% memory safety" is not a meaningful concept under this definition! That's very much skepticism of memory safety as defined in this article, to highlight the key phrase in the comment you're quoting.
It's also not a meaningful concept within the C++ language standard written by the committee Herb Sutter chairs. Memory unsafety is undefined behavior (UB). C++ code containing UB has no defined semantics and is inherently incorrect, whether that's 1 violation or 1000.
Now, we can certainly discuss the practical ramifications of 95% vs 100%, but even here Herb's arguments have fallen notoriously flat. I'll link Sean Baxter's piece on why Herb's actual proposals fail to achieve even these more modest goals as an entry point [0]. No need to rehash the volumes of digital ink already spilled on this subject in this particular comment thread.
Skepticism of an absolutist binary take on memory safety is not the same as skepticism of memory safety in general and it's important to distinguish the two.
It's like saying that people skeptical of formal verification are actually skeptical of eliminating bugs. Most people are not skeptical of eliminating bugs, but they might be skeptical of extreme approaches to do so.
As I explained in a sibling comment, memory safety violations aren't comparable to logic bugs. Avoiding them isn't an absolutist take, it's just a basic requirement in common programming languages like C and C++. That's not debatable, it's written right into the language standards, core guidelines, and increasingly government standards too.
If you think that's impossibly difficult, you're starting to understand the basic problem. We already know from other languages that memory safety is possible. I've already linked one proposal to retrofit similar safety onto C++. The author of Fil-C is elsewhere in these comments arguing for another way.
Everything you say about memory safety issues applies to logic bugs too. And likewise in reverse - you can have a memory safety issue that doesn't result in a vulnerability or crash. So I don't buy it that memory safety is so different from other types of bugs that it should be considered a binary issue and not on a spectrum like everything else!
> Everything you say about memory safety issues applies to logic bugs too.
It doesn't, because logic bugs generally have, or can be made to have limited scope.
> And likewise in reverse - you can have a memory safety issue that doesn't result in a vulnerability or crash.
No you can't, not in standard C. Any case of memory unsafety is undefined behaviour, therefore a conforming implementation may implement it as a vulnerability and/or crash. (You can have a memory safety issue that happens to not result in a vulnerability or crash in the current version of gcc/clang, but that's a lot less reassuring.)
This whole memory-bugs-are-magical thinking just comes from the Rust community and is not an axiomatic truth.
It’s also trivial to discount, since the classical evaluation of bugs is based on actual impact, not some nebulous notions of scope or what-may-happen.
In practice, the program will crash most of the time. Maybe it will corrupt or erase some files. Maybe it will crash the Windows kernel and cause 10 billion in damages; just like a Rust panic would, by the way.
We simply don't treat "gcc segfaults on my example.c" file the same way as "libssl has an exploitable buffer overflow". That's a synopsis of the nuance.
Materials to be consumed by engineers are often unsafe when misused. Not just programs like toolchains with undefined behaviors, but in general: steel beams buckle if overloaded; transistors overheat and explode outside of their SOA (safe operating area).
When engineers make something for the public, their job is to combine the unsafe bits, but make something which is safe, even against casual misuse.
When engineers make something for other engineers, that is less so; engineers are expected to read the data sheet.
Even if you know what the data sheet says, it's easier said than done, especially when the tool gives you basically no help. You are just praying people will magically git gud.
I prefer to treat testing like insurance. You purchase enough insurance to get the coverage you need, and not a penny more. Anything beyond that could be invested better.
Same thing with tests: get the coverage you need to build confidence in your codebase, but don't tie yourself in knots trying to get that last 10%. It's not worth it. Create some manual and integration tests and move on.
I feel like type safety, memory safety, thread safety, etc. are all similar. Building a physics core to simulate the stability of your nuclear stockpile? The typing should be second to none. Building yet another CSV exporter? Who gives a damn.
This is a perfectly reasonable argument if memory safety issues are essentially similar to logic bugs, but memory unsafety isn't like a logic bug.
A logic bug in a library doesn't break unrelated code. It's meaningful to talk about the continued execution of a program in the presence of logic bugs. Logic bugs don't time travel. There are ways to exhaustively prove the absence of logic bugs, e.g. MC/DC or state space exploration, even if they're expensive.
None of these properties are necessarily true of memory safety. A single memory safety violation in a library can smash your stack, or allow your code to be exploited. You can't exhaustively defend against this with error handling either. In C and C++, it's not meaningful to even talk about continued execution in the presence of memory safety violations. In C++, memory safety violations can time travel. You typically can't prove the absence of memory safety violations, except in languages designed to allow that.
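The time-travel point deserves a concrete sketch, since it surprises people (exact behaviour varies by compiler and optimization level):

    #include <cstdio>

    // Dereferencing a null pointer is undefined behaviour, so an
    // optimizer may assume `p` is non-null *before* the check below
    // and delete the check entirely: the violation reaches backwards.
    int read_value(int* p) {
        int value = *p;    // UB if p is null
        if (p == nullptr)  // may be optimized out at -O2: "p was
            return -1;     // already dereferenced, so it can't be null"
        return value;
    }

    int main() {
        std::printf("%d\n", read_value(nullptr));  // no defined behaviour
    }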
With appropriate caveats noted (Fil-C, etc), we don't have good ways to retrofit memory safety onto languages and programs built without it or good ways to exhaustively diagnose violations. All we can do is structurally eliminate the possibility of memory unsafety in any code that might ever be used in a context where it's an important property. That's most code.
All of that stuff doesn’t matter though. If you look close enough everything is different to everything, but in real life we only take significant differences into consideration otherwise we’d go nuts.
Memory bugs have a high risk of exploitability. That’s it; the threat model will tell the team what they need to focus on.
Nothing in software or engineering is absolute. Some projects have decided they need compile-time guarantees about memory safety, others are experimenting with it, many still use C or C++ and the Earth keeps spinning.
If your attacker controls the data you're exporting to a CSV file, they can take advantage of a memory safety issue in your CSV exporter to execute arbitrary code on your machine.
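A hypothetical sketch of how mundane that can look; the helper and its buffer size are invented for illustration:

    #include <cstdio>
    #include <cstring>

    // Field writer in "yet another CSV exporter": the fixed-size
    // buffer trusts the length of attacker-influenced input.
    void write_field(std::FILE* out, const char* field) {
        char quoted[64];
        std::strcpy(quoted, field);  // overflows if field >= 64 bytes
        std::fprintf(out, "\"%s\",", quoted);
    }

    int main() {
        // Fine until someone exports a row containing a long, crafted
        // field, at which point the stack is smashed with bytes the
        // attacker chose.
        write_field(stdout, "ordinary value");
    }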
> Building yet another CSV exporter? Who gives a damn.
The problem with memory unsafe code is that it can have unexpected and unpredictable side effects, such as subtly altering the critical data you're exporting, or letting an attacker take control of your CSV exporter.
In other words, you need quite a lot of context to figure out that a memory bug in your CSV exporter won't be used for escalation. Figuring out that context, documenting it, and making sure that the context never changes for the lifetime of your code? That sounds like a much more complex proposition than using memory-safe tools in the first place.
Estates do most of that without any notion of personhood. Suing an estate is in rem. When the estate sues someone else, the executor sues on its behalf. The executor can also enter the estate into new contracts and administer the bank accounts it owns, and so on. The estate can even own a corporation.
The estate "inherits" (pun intended) its abilities from the personhood of the deceased. It is in effect a legal "Weekend at Bernie's", keeping the deceased party legally alive in order to continue their interests until those interests can be appropriately disposed of.
The estate doesn't have independent interests distinct from those of the deceased. (In particular, the estate is not owned by, and does not serve, the beneficiaries.)
A corporation has independent goals and interests from any of its owners or officers.
It's certainly a choice to call Napoleon, poster boy for great man history, underrated, but I think it's emblematic of the kind of thinking that makes up this list.
It's also pretty notable how few of these choices are fiction of any sort. They're mainly non-fiction books describing conventionally successful people and organizations.
They've also seen improvements in developer confidence and onboarding time, but not to the same degree.