They focused heavily on the quality of things you can see, e.g. slick visuals, high build quality, even fancy cardboard boxes.
Their software quality itself is about average for the tech industry. It's not bad, but not amazing either. It's sufficient for the task and better than their primary competitor (Windows). But their UI quality is much higher, and that's what people can check quickly with their own eyes and fingers in a shop.
Yea, it seems like the wrong lesson was learned here: It should have been "Don't abuse your users' computers," but instead it was, "When you abuse your users' computers, make sure it doesn't cost the company anything."
The main issue I see is that papers have become so focused on form that they are now unreadable. People prefer reading my blog posts about my papers to reading the papers themselves. In fact, I hear people telling me they understood the blog _better_. The whole academic writing shtick has become so obtuse that not only is writing cumbersome, but so is reading.
The other side of all these academic brownie points via papers (and via reviews, which have become "brownie points for gatekeeping") is that most academic software is not only unmaintained, but actually unusable. It rarely even compiles, and if it does, there is no --help, no good defaults, no README, and no way to maintain it. It is single-use software, and its singular use is to write the paper. Any other use case is almost frowned upon.
One of the worst parts of academic software is that if you rewrite it in a way that's actually usable and extensible, you can't publish that -- it's not new ("research") work. And not only will you have to cite the person who wrote the first useless version forever, but they will claim credit for it if your tool actually takes off.
BTW, there are academics who don't follow this trend. I am glad that in my field (SAT), some of the best, e.g. Armin Biere and Randal Bryant, are not like this at all. Their software is insanely nice, and they fix bugs many, many years after release. Notice that they are also incredibly good engineers.
This take has a few problems:
First, poor people in the US are capable of using fluoride toothpaste and flossing. At least at the homeless shelters and outreach events I’ve been to, toothpaste and toothbrushes are freely available. Your argument hinges on them being incapable on the whole and needing a Benevolent But Superior Intelligence to provide an alternative for them.
Second, it completely ignores any debate over effectiveness or side effects. It could well be that fluoride in water is great for teeth but bad for brains. The objections to fluoride in water I’ve seen are more along those lines. I’m not clear on the validity of those claims, but, for example, anti-fluoride advocates don’t typically object to chlorine in water to kill germs. That seems the core issue: without bias from stakeholders, is the benefit of fluoride proven and are the risks disproven? It’s hard to answer because a study needs to span many years and exclude many variables.
And in general, I think that is what needs to happen with these types of debates. Take them _out_ of the sphere of charged political opinion and focus on getting to the objective truth of risks and benefits, then be transparent. People can handle “here are the known pros and cons and what we think that means” better than “there are only pros and no cons and if you disagree you hate poor people”.
The author takes great care to rebut a common theme among objections to the proposal - “this isn’t necessary if you just write code better”. I am reminded of this fantastic essay:
> If we flew planes like we write code, we’d have daily crashes, of course, but beyond that, the response to every plane crash would be: “only a bad pilot blames their plane!”
> This doesn’t happen in aviation, because in aviation we have decided, correctly, that human error is an intrinsic and inseparable part of human activity. And so we have built concentric layers of mechanical checks and balances around pilots, to take on part of the load of flying. Because humans are tired, they are burned out, they have limited focus, limited working memory, they are traumatized by writing executable YAML, etc.
> Mechanical processes are independent of the skill of the programmer. Mechanical processes scale, unlike berating people to simply write fewer bugs.
I always tell this story about working with sales at a job where I worked in tech support. Sales would call me up and ask why I hadn't talked to their client about their very important ticket.
I would tell them:
"I have 5 P1 tickets, 8 P2 tickets, and dozens of P3 tickets. Your ticket is a P3 ticket."
They would ask that I change it to a P1. I would. Then they would call me an hour later asking about the ticket, and I would tell them:
"I now have 6 P1 tickets, 8 P2 tickets, and dozens of P3 tickets. Your ticket is one of the P1 tickets."
To give a "yes and" side-track to your comment: saying "logarithms have this relation between multiplication and addition" is even underselling what logarithms are, because reducing multiplication to an additive operation was the whole motivation for John Napier[0] to discover/invent logarithms:
> “…nothing is more tedious, fellow mathematicians, in the practice of the mathematical arts, than the great delays suffered in the tedium of lengthy multiplications and divisions, the finding of ratios, and in the extraction of square and cube roots… [with] the many slippery errors that can arise…I have found an amazing way of shortening the proceedings [in which]… all the numbers associated with the multiplications, and divisions of numbers, and with the long arduous tasks of extracting square and cube roots are themselves rejected from the work, and in their place other numbers are substituted, which perform the tasks of these rejected by means of addition, subtraction, and division by two or three only.”[1]
Logarithms were honestly an enormous breakthrough in optimization, computers wouldn't be remotely as useful without them, even if most of us don't "see" the logarithms being used.
In fact, I'd argue that they are the second-biggest computational optimization in use today, with only positional notation being a bigger deal. Which, funny enough, works kind of similarly: imagine you could only count by tallying (so, unary). Adding two numbers M and N would take M+N operations, e.g. 1234 + 5678 would require counting all 6912 individual tally marks. Unary math scales O(n) in both data and computation. Systems like Roman numerals almost work, but as soon as we reach values larger than the largest symbol (M for 1000) it's O(n) again, just with a better constant factor.
With positional notation, numbers require only log(n) symbols to write down, and addition takes log(n) operations, e.g. 1234 + 5678 requires one or two additions for each digit pair in a given position: one addition if there's no carry from the previous position, two if there is. So addition takes at most 2 × ceil( max( log(M), log(N) ) ) operations, which is O(log(n)).
Logarithms take that idea and "recursively" apply it to the notation, making the same optimization work for multiplication. Without it, the naive algorithm for multiplying two numbers requires iterating over each digit, e.g. 1234 × 5678 requires multiplying each of the four digits of the first number with each of the digits of the second number, and then adding all the resulting numbers. It scales O(di×dj), where di and dj are the number of digits of each number. If they're the same, we can simplify that to O(d²). When the numbers are represented as two logarithms, the operation is reduced to adding two numbers again, so O(log(d) + [whatever the log/inverse-log conversion costs]). Of course, d is a different value here, and the number of digits used affects the precision.
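To make the "multiplication becomes addition" point concrete, here is a minimal TypeScript sketch that plays the role of a slide rule or log table; Math.log/Math.exp stand in for Napier's precomputed tables, and the numbers are just examples:

    // Multiply (and take roots) using only addition and halving of logarithms.
    // Math.log / Math.exp stand in for the log tables Napier had in mind.
    function multiplyViaLogs(a: number, b: number): number {
      // log(a * b) = log(a) + log(b): one multiplication becomes one addition plus lookups.
      return Math.exp(Math.log(a) + Math.log(b));
    }

    function squareRootViaLogs(x: number): number {
      // log(sqrt(x)) = log(x) / 2: a root extraction becomes a single halving.
      return Math.exp(Math.log(x) / 2);
    }

    console.log(multiplyViaLogs(1234, 5678)); // ~7006652, i.e. 1234 × 5678 up to floating-point rounding
    console.log(squareRootViaLogs(2));        // ~1.4142135623730951

This is exactly the "division by two or three only" from Napier's quote: square and cube roots turn into dividing a logarithm by 2 or 3.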
I think the craziest thing of all this is that we're so used to positional notation that nobody ever seems to consider it a data compression technique. Even though almost no other data compression method would work without it as a building block (run-length encoding, Lempel-Ziv, Arithmetic coding? Useless without positional notation's O(log(n)) scaling factor). The only exceptions are data compression methods that are based on inventing their own numerical notation[2].
We do this every day, ever since we first learned addition and subtraction as kids. Or as David Bessis[3] puts it in his book "Mathematica": ask almost any adult what one billion minus one is and they know the answer instantaneously, so most adults would appear to have mental superpowers in the eyes of pretty much all mathematicians from before positional notation was invented (well, everyone except Archimedes maybe[4]). Positional notation is magical, we're all math wizards, and it's so normalized that we don't even realize it.
But to get back to your original point: yes, you are entirely correct. IEEE floats are a form of lossy compression of fractions, and the basis of that lossy compression is logarithmic notation (but with a fixed number of binary digits and some curious rules for encoding other values like NaN and infinity).
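To see that structure directly, here is a small TypeScript sketch (not tied to any particular library) that pulls apart the standard binary64 fields; 0.1 is just a convenient example of a fraction that only survives in approximation:

    // Pull apart the sign / exponent / mantissa fields of an IEEE 754 double (binary64).
    // Ignores the special encodings (zero, subnormals, NaN, infinity) mentioned above.
    function decomposeDouble(x: number): { sign: number; exponent: number; mantissa: bigint } {
      const view = new DataView(new ArrayBuffer(8));
      view.setFloat64(0, x);                     // write the double...
      const bits = view.getBigUint64(0);         // ...and reread the same 8 bytes as raw bits
      return {
        sign: Number(bits >> 63n),                        // 1 bit
        exponent: Number((bits >> 52n) & 0x7ffn) - 1023,  // 11 bits, stored with a +1023 bias
        mantissa: bits & ((1n << 52n) - 1n),              // 52 bits: fractional part of 1.xxxx
      };
    }

    // 0.1 has no exact binary representation; it is stored as (1 + mantissa / 2^52) * 2^exponent.
    console.log(decomposeDouble(0.1)); // { sign: 0, exponent: -4, mantissa: 2702159776422298n }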
Simple and brief rules are more successful in practice than long and complicated rules.
I feel a briefer and more-to-the-point "When To Refactor" guide is to ask the following questions in the following order and only proceed when you can answer YES to every single question.
1. Do we have test coverage of the use-cases that are affected?
2. Are any non-trivial logic and business changes on the horizon for the code in question?
3. Has the code in question been undergoing multiple modifications in the last two/three/four weeks/months/years?
Honestly, if you answer NO to any of the questions above, you're in for a world of hurt and expense if you then proceed to refactor.
That last one might seem a bit of a reach, but the reality is that if there is some code in production that has been working unchanged for the last two years, you're wasting your time refactoring it.
More importantly, no changes over the last few years means that absolutely no one in the company has in-depth and current knowledge of how that code works, so a refactor is pointless because no one knows what the specific problems actually are.
This is a question more people need to ask. Be it a blog, YouTube channel, podcast, or whatever else, they all need to be rooted in having something worth sharing with some small piece of the world.
Many people get caught up in thinking the blog, YouTube, or podcast is the thing, when it’s just the thing that lets you share the real thing. It took me longer than I’d like to admit to really integrate this lesson.
Essentially, it’s a refinement of the Bacon-Rajan (sequential) cycle collector that requires no auxiliary heap memory and avoids failures when tracing complex object graphs, because it uses a breadth-first technique that fundamentally prevents stack-overflow scenarios.
What's particularly compelling is the Rust implementation, which weaves the type system and borrow checker into the algorithm's design. When dealing with garbage cycles, the algorithm doesn't just match current Rust GC alternatives, it outperforms them.
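For readers wondering why breadth-first tracing matters here: the sketch below is not the paper's algorithm, just a generic TypeScript illustration of the underlying trick, i.e. replacing recursion with an explicit work queue so that graph depth never touches the call stack.

    // Generic illustration only (not the collector described above): marking reachable
    // objects with an explicit FIFO queue instead of recursion, so a pathologically deep
    // or cyclic object graph cannot overflow the call stack.
    interface HeapObject {
      marked: boolean;
      children: HeapObject[];
    }

    function markBreadthFirst(roots: HeapObject[]): void {
      const queue: HeapObject[] = [...roots];
      while (queue.length > 0) {
        const obj = queue.shift()!;   // FIFO order = breadth-first traversal
        if (obj.marked) continue;     // already visited; this is what terminates cycles
        obj.marked = true;
        queue.push(...obj.children);  // enqueue instead of recursing
      }
    }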
Apple[1], for example, has performance-based RSUs for its executives. The RSUs vest over 3-4 years, and the number granted depends on Apple’s performance relative to the S&P 500.
Stock prices theoretically have future cash flows built in but markets are inefficient. In this case, if Broadcom starts losing customers in 2027 and revenue declines 5%, it would get hammered in the market at that point in time, punished much more than the 5% drop.
[1] From Apple’s SEC filing:
The value received by our named executive officers in 2022 from long-term equity incentives reflects exceptional stock price performance over the applicable vesting periods. As a result, for the applicable performance periods, performance-based RSUs vested at 200% of target for our named executive officers based on Apple’s TSR relative to other companies in the S&P 500 (“Relative TSR”), which was at:
• the 96th percentile for performance-based RSUs held by Ms. Adams, Mr. Maestri, and Mr. Williams, and
• the 99th percentile for performance-based RSUs held by Ms. O’Brien.
Mr. Cook was granted an equity award with 75% performance-based vesting and 25% time-based vesting
If you like this, you might enjoy: "Column-Stores vs. Row-Stores: How Different Are They Really?"
> The elevator pitch behind this performance difference is straightforward: column-stores are more I/O efficient for read-only queries since they only have to read from disk (or from memory) those attributes accessed by a query.
> This simplistic view leads to the assumption that one can obtain the performance benefits of a column-store using a row-store: either by vertically partitioning the schema, or by indexing every column so that columns can be accessed independently. In this paper, we demonstrate that this assumption is false.
Two economists are walking in a forest when they come across a pile of shit.
The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.
They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats the pile of shit.
Walking a little more, the first economist looks at the second and says, "You know, I gave you $100 to eat shit, then you gave me back the same $100 to eat shit. I can't help but feel like we both just ate shit for nothing."
"That's not true", responded the second economist. "We increased the GDP by $200!"
Your newsletter signup form is absolutely terrific. I never thought that I'd say that.
It's inline in the page. Not a popup, not a modal, not a popunder or poparound or popfart. It's almost unique in its field for not being annoying. Almost got me to sign up, but I'm not your target demographic.
I'm sympathetic to your situation, but it's possible that the senior was still right to remove it at the time, even if you were eventually right that the product would need it in the end.
If I recall correctly, they have a measure at SpaceX that captures this idea: the ratio of features added back a second time to total features removed. If every removed feature was added back, a 100% 'feature recidivism' rate (if you grant some wordsmithing liberty), then obviously you're cutting features too often. 70% is too much, even 30%. But critically, 0% feature recidivism is bad too, because it means you're not trying hard enough to remove unneeded features and you will accumulate bloat as a result. I guess you'd want this ratio to run higher early in a product's lifecycle and eventually asymptote down to a low, non-zero percentage as the product matures.
From this perspective, the exact set of features required to make the best product is unknown in the present, so it's fine to take a stochastic approach to removing features to make sure you cut unneeded ones. And if you need to add some back, that's fine, but it shouldn't cast doubt on the decision to remove them in the first place unless it's happening too often.
Alternatively you could spend 6 months in meetings agonizing over hypotheticals and endlessly quibbling over proxies for unarticulated priors instead of just trying both in the real world and seeing what happens...
I like to say that immutability was a really good idea in the 1990s, especially considering how counterculture it would have been at the time. I don't mean that to be dismissive or patronizing; I'm serious. It was a good, cutting-edge idea.
However, nobody had any experience with it. Now we do. And I think what that experience generally says is that it's a bit overkill. We can do better. Like Rust. Or possibly linear types, though that is I think much more speculative right now. Or other choices. I like mutable islands in a generally immutable/unshared global space as a design point myself, as mutability's main problem is that it gets exponentially more complicated to deal with as the domain of mutability grows, but if you confine mutability into lots of little domains that don't cross (ideally enforced by compiler, but not necessarily) it really isn't that scary.
It was a necessary step in the evolution of programming ideas, but it's an awful lot to ask that it be The One True Idea for all time, in all places, and that nobody in the intervening decades could come up with anything that was in any way an improvement in any problem space.
On my Pixel using GrapheneOS, I can go into Accessibility settings and enable gesture support for toggling Grayscale support, without the need for Tasker. I'm unsure how other Android ROMs stack up but I imagine stock Pixel has this setting.
It's a great feature: I just swipe up from the bottom with two fingers whenever I need color.
The real danger is criminal profiling. Read a book on criminal profiling as done by the FBI. You hear things like "the suspect appeared nervous when he saw the murder weapon" or "serial killers match two of three: cruelty to animals, obsession with fire-setting, and persistent bedwetting past the age of five" (aka the Macdonald triad). Impulsive killers are in their teens or early 20s, while more careful killers will be at least in their 30s.
I'm sure the motives were good - sometimes it's like finding a needle in a haystack, and it saved lives back then.
But when you have mass surveillance, you can go through every piece of hay in the haystack. Yet likely they won't. They'll settle on a middle ground with these outdated methodologies and combine them with AI/data to create some form of data-driven astrology. Someone inspired by CSI will ask AI to blow up a blurry photo, and the AI might just hallucinate the details. There will be experts who oppose this, and they could be shut down by their bosses, the politicians who don't understand how any of it works.
The Macdonald triad detects the worst criminals, sure, but it mainly detects victims of abuse. Privacy isn't important to privileged groups, but it's a layer of protection for the innocents who could be wrongly profiled.
I also recommend using negative animation delay values if you want to control the progress via JS.
e.g. animation-delay: -1500ms will begin the animation immediately but jump 1.5s into it. Controlling this value with JS lets you effectively scrub through CSS animations, making every animation compatible with game-engine-style compute-update-render tick loops.
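A rough TypeScript sketch of what that looks like in practice (the .spinner selector and the 3000ms duration are made up for the example; the animation is assumed to be paused so JS owns the timeline):

    // Drive a paused CSS animation from a requestAnimationFrame loop by setting a
    // negative animation-delay: a delay of -t ms displays the frame t ms into the animation.
    const el = document.querySelector<HTMLElement>('.spinner')!; // hypothetical element
    const DURATION_MS = 3000;                                    // must match the CSS animation-duration
    el.style.animationPlayState = 'paused';                      // freeze it; JS picks the frame

    function renderProgress(progress: number): void {
      // progress in [0, 1] maps to a point on the animation's timeline
      el.style.animationDelay = `${-progress * DURATION_MS}ms`;
    }

    function tick(now: number): void {
      const progress = (now % DURATION_MS) / DURATION_MS; // "update" step of the loop
      renderProgress(progress);                           // "render" step of the loop
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);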
How does it compare to https://invoice-generator.com/ ? It’s free and it stores invoices in the browser. It’s so simple that I wonder how this solution is simpler.
I feel like a lot of the time people's mathematically incorrect intuitions are exactly right if you were asking a slightly different question. What's abnormal is the person who's evaluating their intuition by asking tricky questions that aren't very compatible with how human thinking works. Notably nobody would ask questions in real life the way these trick questions are asked. I like the notion that the problem is in the asker!
Like the question in the article of whether a person is more likely to be a banker or (banker and feminist). Probability says the former, as a mathematical certainty. Human intuition gets it wrong, but it's solving a different question: which category better describes that type of person, and (banker and feminist) is the better fit. The mathematically wrong answer that intuition gives is expressing which category makes more sense for the person, and it's right about that; it's just not an answer that should be interpreted as a probability.
The human brain doesn't work on probabilities, but it will try to answer questions about them anyway, misapplying its otherwise effective circuitry instead.
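The "mathematical certainty" part is easy to see with a toy count; the TypeScript below uses an entirely made-up population, but the inequality holds for any data, because everyone matching "banker and feminist" also matches "banker":

    // Toy illustration of the conjunction rule: P(banker AND feminist) <= P(banker).
    type Person = { banker: boolean; feminist: boolean };

    const population: Person[] = [
      { banker: true,  feminist: true  },
      { banker: true,  feminist: false },
      { banker: false, feminist: true  },
      { banker: false, feminist: false },
    ];

    const bankers         = population.filter(p => p.banker).length;
    const feministBankers = population.filter(p => p.banker && p.feminist).length;

    console.log(feministBankers <= bankers); // always true: the conjunction is a subset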
I think I'd put the difference between "invariant" and "assumption" as purely a difference in framing. Both describe a problem in communication between two components of a system. Suppose there are two components, where the output of component A is sent to component B. Both components seem to be working correctly, but when run together they are not producing the expected output.
* Option 1: Component A is violating an invariant of its output type. Whether that is invalid data (e.g. nullptr), internal inconsistency (e.g. an index that will be out of bounds for its array), or some broken semantic constraint (e.g. a matrix that is supposed to have determinant one but doesn't), the output generated by Component A is wrong in some way.
* Option 2: Component B is making an unjustified assumption about its input type. The implementation was only valid when the assumption held, but Component B's interface implied that it could accept a wider range of inputs.
The only difference between these two cases is the human-level description of what the type is "supposed" to hold. Describing it as an "invariant" places the blame on Component A; describing it as an "assumption" places the blame on Component B.
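A hypothetical TypeScript example of that symmetry (the names findUser and formatGreeting are invented for illustration): the same crash can be narrated as A breaking an invariant or as B assuming too much, and the declared types are the only arbiter.

    // Hypothetical two-component seam; names are made up for illustration.

    // Component A: its declared output admits null, but callers may believe it never is.
    function findUser(id: number): { name: string } | null {
      return id > 0 ? { name: `user${id}` } : null;
    }

    // Component B: only valid for non-null input, i.e. it accepts a narrower range
    // than "whatever findUser returns".
    function formatGreeting(user: { name: string }): string {
      return `Hello, ${user.name}!`;
    }

    const user = findUser(-1);
    // Blame Component A ("it violated the 'never null' invariant of its output") or
    // Component B ("it made an unjustified non-null assumption about its input"):
    // either way, the `!` below papers over the mismatch and this line throws at runtime.
    console.log(formatGreeting(user!));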
My two cents having formerly worked in perovskites trying to upscale the process:
Perovskites are exciting (or were exciting) because they have a high theoretical efficiency, are relatively simple to prepare, and the "worst" component in them is lead (an incredibly abundant material). The big problem with them is that they are famously horrifically unstable in ambient conditions.
Roll-to-roll processing means that you can fabricate them at mass scale. "Ambient" means they claim to have solved issues like needing to work under glovebox conditions.
Even if the price of solar panels has come down below labor, the fact that they are produced from rare earth minerals goes (in my opinion) underreported.
Consider the relationship between perovskites and multi-junction solar cells to be similar to that between sodium-ion and lithium-ion batteries. Lithium will always have a higher capacity, but sodium is so abundant that for many applications it just doesn't matter anymore.
There are lots of designs for nuclear rockets, and while I have read about them, I did not spend time poking at them to see if there are any obvious issues. The problem with nuclear designs is that things that work on paper don't necessarily work in practice, as Admiral Rickover is known to have ranted once.
Given that, the two most promising nuclear designs to me are the nuclear thermal rocket and the Orion design.
Here's why.
The Rover/NERVA program [1] is underappreciated. In terms of scientific and engineering achievement it rivals the Manhattan project, while having a fraction of its budget. Just to get a sense of the distance between theory and practice: the idea of a nuclear thermal rocket is to push hydrogen through a nuclear core. It gets in cold, it comes out hot, and voila, you have a nice rocket engine. What could be simpler? There are a few problems. The first is the scale. Here's a quote about the nuclear engine Phoebus 2A [2]:
> This was followed by a test of the larger Phoebus 2A. A preliminary low power (2,000 MW) run was conducted on 8 June 1968, then a full power run on 26 June. The engine was operated for 32 minutes, 12.5 minutes of which was above 4,000 MW, and a peak power of 4,082 MW was reached. At this point the chamber temperature was 2,256 K (1,983 °C), and total flow rate was 118.8 kilograms per second (262 lb/s).
For comparison a full-size nuclear AP1000 reactor like the one that was just started at Vogtle produces about 3.6 GW-thermal, so less than the 4 GWt mentioned here. Such a reactor circulates about 20 tons of water per second through the core. Somehow this rocket engine is able to extract more power using 50 times less coolant by mass, and from a core that literally fits on the bed of a small truck.
The vibrations and temperatures inside this core were tremendous. In various tests, parts of the fuel rods ruptured, sometimes the hydrogen would catch fire, sometimes valves would break. All these annoying engineering challenges had to be overcome. But eventually they were. That's the important thing we know about the nuclear thermal engine: we know it can be done, because it was done.
Some people may complain that the Isp of these engines topped out at 900 seconds. Considering the technological readiness of this technology, I think that is nothing to sneeze at. There are good reasons to believe that with this technology we can reach 1000 seconds, and maybe a bit higher.
The second technology on my list is the Orion project. It was never implemented, but my heuristics are as follows: 1. Freeman Dyson was, in the common understanding of the word, a genius. It is true that he did not have a PhD, but aside from that, as a scientific mind, he was probably the equal of Feynman. 2. The things that make the spaceship move, the nuclear bombs, are a very mature technology. Pairing them with a pusher plate remains to be validated, but it's highly likely to work. The pusher-plate idea was tested with conventional bombs, and we have no particular reason to think it wouldn't work if you increase the yield of the bomb.
Of course, the thing that goes against project Orion is the fact that we live in the real world, and in this world nuclear bombs are a problem. You don't want to start ferrying thousands of nukes to space without thinking twice.
But if we can figure out the non-proliferation aspect of the project Orion, I think it's the most likely configuration to enable us to do deep space travel.
> Again, this is intended to be portable software.
A scathing criticism of the OpenSSL library by the OpenBSD team was that it was too portable, in the (very real) sense that it wasn't even written in "C" any more, or targeting "libc" as the standard library. It would be more accurate to say that it was written in "Autotools/C" instead. By rewriting OpenSSL to target an actual full-featured libc, they found dozens of other bugs, including a bunch of memory issues beyond the famous Heartbleed bug.
Platform standards like the C++ standard library, libc, etc. are supposed to be the interface against which we write software. Giving that up and programming against megabytes of macros and Autotools scripts is basically saying that C isn't a standard at all, but Autotools is.
Then just admit it, and say that you're programming in the Autotools framework. Be honest about it, because you'll then see the world in a different way. For example, you'll suddenly understand why it's so hard to get away from Autotools. It's not because "stuff is broken", but because it's the programming framework you and everyone else are using. It's like a C++ guy lamenting that he needs gcc everywhere and can't go back to a pure C compiler.
Job ads should be saying: "Autotools programmer with 5 years experience" instead of "C programmer". It would be more accurate.
PS: I judge languages by the weight of their build overhead relative to useful code. I've seen C libraries with two functions that needed on the order of 50 kB of macros just to build and interface with other things.
People digging into Boeing's specific history are likely doing themselves a huge disservice. It's too easy to say, "Oh well, I won't acquire a company like McDonnell Douglas!" But that's not the issue. The issue is a culture (nationally/globally) of financialization. You're vulnerable to it even if you don't acquire a McDonnell Douglas, or if you don't acquire anyone at all.
The fundamental issue is simple: mistaking side-effects of the system for the goals of the system. The goal of a company shouldn't be to make number-on-spreadsheet go up. The goal is to produce excess value, capture some amount of it, and pass the rest on to your customers. The amount that you capture will affect the spreadsheet, but the real goal of the organization must be to produce that excess value and pass on a meaningful portion of it to your customers!
The whole meme that companies exist to maximize shareholder value is just that: a meme, and a recent one at that. Boeing is the natural outcome of believing it like a zealot.
The author needs to learn about techniques for rendering deep zooms into the Mandelbrot set. Since 2013 it has been possible to render images that are 2^-hundreds across using mostly double precision arithmetic, apart from a few anchor points calculated with multiprecision (hundreds of bits) arithmetic.
The deep zoom mathematics includes techniques for introspecting the iterations to detect glitches, which need extra multiprecision anchor points to be calculated.
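For the curious, here is a stripped-down TypeScript sketch of the core perturbation idea, assuming a high-precision reference orbit has already been computed elsewhere (with hundreds of bits) at one anchor point and rounded to doubles; real renderers add the glitch detection mentioned above, series approximation, extra anchor points, and so on.

    // Perturbation sketch: iterate only the *difference* from a precomputed reference orbit
    // in ordinary doubles. refOrbit[n] is assumed to be Z_n from Z_{n+1} = Z_n^2 + C,
    // computed at the anchor point with multiprecision arithmetic and rounded to doubles.
    type Complex = { re: number; im: number };

    const add = (a: Complex, b: Complex): Complex => ({ re: a.re + b.re, im: a.im + b.im });
    const mul = (a: Complex, b: Complex): Complex => ({
      re: a.re * b.re - a.im * b.im,
      im: a.re * b.im + a.im * b.re,
    });

    // dc = (this pixel's c) - (the anchor's C); returns the escape iteration, or maxIter.
    function iterateDelta(refOrbit: Complex[], dc: Complex, maxIter: number): number {
      let dz: Complex = { re: 0, im: 0 }; // dz_n = z_n - Z_n, starts at 0
      const limit = Math.min(maxIter, refOrbit.length - 1);
      for (let n = 0; n < limit; n++) {
        // From z_{n+1} = z_n^2 + c and Z_{n+1} = Z_n^2 + C it follows that
        //   dz_{n+1} = 2*Z_n*dz_n + dz_n^2 + dc   (small enough to stay in doubles)
        const Zn = refOrbit[n];
        dz = add(add(mul({ re: 2 * Zn.re, im: 2 * Zn.im }, dz), mul(dz, dz)), dc);
        // The full value is only reconstructed for the escape test.
        const z = add(refOrbit[n + 1], dz);
        if (z.re * z.re + z.im * z.im > 4) return n + 1;
      }
      return maxIter;
    }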