Seems like the real goal was to kill it so the market could consolidate under the iPhone. Internal sabotage. Now Apple is killing themselves pushing bad UI decisions and getting paid off to insert back doors into Messages so someone can control the public narrative as we enter another war.
This feels like the 80s/90s, when people were hacking assembly or finding undocumented instructions to squeeze performance out of the CPU. Until one day either compilers become optimized enough or GPUs become so powerful that such tricks won’t make much difference anymore, like with CPUs nowadays :D
I don’t understand why this is not obvious to many people: tech and stock trading are two totally different things, so why on earth is a tech expert expected to know trading at all? Imagine how ridiculous it would be if a computer science graduate also automatically got a finance degree from college without ever taking a finance class.
The people developing statistical models that exercise the financial market at scale are the quants, and they don't come from a finance-degree background.
I’ve noticed this phenomenon among the IT & tech VC crowd. They will launch podcasts, offer expert opinions and whatnot on just about every topic under the sun, from cold fusion to COVID vaccines to the Ukraine war.
You wouldn’t see this in other fields; for example, a successful surgeon won’t offer many assertions about NVIDIA.
And the general tendency among the audience is to assume that expertise carries across domains.
> You wouldn’t see this in other fields; for example, a successful surgeon won’t offer many assertions about NVIDIA.
Doctors are actually known for this phenomenon. Flight schools particularly watch out for them because their overconfidence gets them in trouble.
And, though humans everywhere do this, Americans are particularly known for it. There are many compilation videos where Americans are asked their opinion on whether Elbonia needs to be bombed or not, followed by enthusiastic agreement. That's highly abnormal in most other countries, where "I don't know" is seen as an acceptable response.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'
― Isaac Asimov
> a successful surgeon won’t offer many assertions about NVIDIA.
You haven't met many surgeons, have you? When I was working in medical imaging, the technicians all said we (the programmers) were almost as bad as the surgeons.
This is exacerbated by the tendency in popular media to depict a Scientist character, who can do all kinds of Science (which includes technology, all kinds of computing, and math).
Systems, it’s all about systems thinking. It is absolutely true that people in tech are often optimistic and/or delusional about the other expertise at their command. But it’s not like the basic assumption here is completely crazy.
Being a surgeon might require thinking about a few interacting systems, but the number and nature of the systems involved mostly stay the same. Talented programmers, even without formal training in CS, will eat and digest a dozen brand-new systems before breakfast, and model their interactions mentally with some degree of fidelity before lunch. And then any formal training in CS kind of makes general systems just another type of object. This is not the same as how a surgeon is going to look at a heart, or even the body as a whole.
Not that this is the only way to acquire skills in systems thinking. But the other paths might require, IDK, a PhD in history/geopolitics, or special studies or extensive work experience in physics or math. And not to rule out other kinds of science or engineering experts as systems thinkers, but a surprisingly large subset of them will specialize and so avoid it.
By the numbers... there are probably just more people in software/IT, therefore more of us to look stupid if/when we get stuff wrong.
Obviously general systems expertise can’t automatically make you an expert on particle physics. But honestly it’s a good piece of background for lots of the wicked problems[1], and the wicked problems are what everyone always wants to talk about.
But even if we just look at the examples given by the parent, most of them are not about systems or models at all. Epidemiology and politics concern practical matters of life. In such matters, life experience will always trump abstract knowledge.
Epidemiology and politics do involve systems, I’m afraid. We can call it “practical” or “human” or “subjective” all we like, but human behaviors exhibit the same patterns when understood from a statistical instead of an individual standpoint.
Epidemiology and politics are pretty much the poster children of systems[0], next to their eldest sibling, economics. Life and experience may trump abstract knowledge dumbly applied, but alone they won't let you reason at larger scales (not that you could collect any actual experience with e.g. pandemics to fuel your intuition here anyway).
A part of learning how to model things as systems is understanding that your model doesn't include all the components that affect the system - but it also means learning how to quantify those effects, or at least to estimate upper bounds on their sizes. It's knowing which effects average out at scale (like e.g. free will mostly does, and quite quickly), which effects can't possibly be strong enough to influence the outcome and thus can be excluded, and then keeping track of those that could occasionally spike.
Mathematics and systems-related fields downstream of it provide us with plenty of tools to correctly handle and reason about uncertainty, errors, and even "unknown unknowns". Yes, you can (and should) model your own ignorance as part of the system model.
--
[0] - In the most blatant example of this, around February 2020, i.e. in the early days of the COVID-19 pandemic going global, you could quite accurately predict the daily infection stats a week or two ahead just by drawing up an exponential function in Excel and lining it up with the already-reported numbers. This relationship held pretty well until governments started messing with the numbers and lockdowns began. It was a simple case because, at that stage, the exponential component was overwhelmingly stronger than any more nuanced factor - but identifying which parts of a phenomenon dominate and describing their dynamics is precisely what learning about systems lets you do.
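To make that concrete, here is a minimal sketch of the same exercise in Python instead of Excel. The `reported` numbers are invented for illustration, not real case data:

```python
import numpy as np

# Hypothetical daily case counts (made up for illustration).
reported = np.array([580, 700, 850, 1020, 1240, 1500, 1810])
days = np.arange(len(reported))

# Fit log(cases) = log(a) + b*day, i.e. assume cases(t) = a * exp(b*t).
b, log_a = np.polyfit(days, np.log(reported), 1)

# Extrapolate the fitted exponential one week ahead.
future = np.arange(len(reported), len(reported) + 7)
forecast = np.exp(log_a + b * future)
print(np.round(forecast).astype(int))
```

The linear fit on the log of the counts is the "lining it up" step; as long as the exponential term dominates, the extrapolation tracks the reported figures.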
It's because software devs are smart and make a lot of money - a natural next step is to try and use their smarts to do something with that money. Hence stocks.
>It's because software devs are smart and make a lot of money
They just think they're smart BECAUSE they make a lot of money. Just because you can center divs for six figures a year at an F500 doesn't make you smart at everything.
I've never met a fellow software engineer who "centers divs" for 6 figures.
But then I work with engineers using FPGAs to trade in the markets, with tick-to-trade times in the double-digit nanoseconds, processing streams of market data at ~10 million messages per second (80 Gbps).
The truth is, a lot of P&L in trading these days is a technical feat of mathematics and engineering, not just one of fundamental analysis and punting on business plans.
If you were really smart surely you would be able to see that there are more long-term valuable things for you to do with your time than just make yourself more money...
Tech people are allowed to quickly learn a domain well enough to build the software that powers it, bringing in insights from other domains they've worked across.
Just don't allow them to then comment on that domain with any degree of insight.
I've decided to only receive calls/msgs from my contacts. On iPhone, you can do this with "Silence Unknown Callers". Instead of pretending I can ignore the spam calls, I'd rather take the risk that something super important comes in from an unknown number...
That only works if you have no family. Because you risk an emergency provider being unable to reach you if something happens, or even something as simple as your kid/spouse losing their phone and trying to call you from a borrowed phone.
Not really an option for me - my child's care providers may call me from an unexpected phone number, plus the occasional doctor phone call from a number that's not the same one I call to schedule appointments, etc.
Good to see such bare-metal work on HN; I feel there has been less of this kind of work lately.
Back to the OS architecture: AFAIK, the exokernel tried to get rid of OS overhead (context switching, abstractions, etc.) to improve performance, back in 1995. But I assume that with modern computers/workloads, OS overhead is only a tiny slice. Is that the case? Are people still optimizing OSes heavily for today's workloads like AI/ML/servers?
Postgres has it, but it didn't always, and it still has caveats.
Beyond Postgres, indexing random values is fundamentally harder than indexing sorted ones, whether you've got a hashmap, a B-tree, or something else. The often-cited O(1) lookup complexity of a hashmap assumes everything fits comfortably in uniform-access memory.
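As a rough illustration, here's a hypothetical Python micro-benchmark, not a rigorous measurement (interpreter overhead mutes the effect, and numbers vary by machine): the same "O(1)" dict lookup gets slower once the table outgrows the CPU caches and random probes start going out to DRAM.

```python
import random
import time

def avg_lookup_ns(n, probes=200_000):
    # Build a dict with n entries, then time random-key lookups.
    d = {i: i for i in range(n)}
    keys = [random.randrange(n) for _ in range(probes)]
    start = time.perf_counter()
    total = 0
    for k in keys:
        total += d[k]
    elapsed = time.perf_counter() - start
    return elapsed / probes * 1e9

# A tiny table sits in cache; a large one forces cache misses on random keys.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} entries: ~{avg_lookup_ns(n):.0f} ns/lookup")
```

Every lookup is still "constant time" asymptotically; what changes is the constant, because memory access is not actually uniform.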
I feel that eventually people will come to hate every commercial product and the company behind it. The interests of users and the company are in fundamental conflict, because one is trying to get as much money as possible from the other at the least possible cost. The only honeymoon period is when the company is still trying to win customers by sacrificing profit.
Not all mature companies are trying to milk their customers for every possible dime. Ethical ones do exist, but they tend not to be megacorps for obvious reasons.
Just curious, any startups working towards this goal? Replacing leadership, decision making, people management, project management, negotiation and such with AI?
There are two established companies that I know of: the Chinese company NetDragon Websoft (5k employees) has an AI CEO named Tang Yu, and there is a Polish rum company called Dictador that did something similar. We'll see where it goes, I guess.
I work on a large distributed infrastructure. I always joked that my team's projects and people's careers are outage-driven: the only time we become important, and people get opportunities for promotion, is when there were big-enough outages that executives had to invest heavily in reliability or scalability. The rest of the time, we are just minions who must listen to and serve the feature or product teams. Nobody listens to us when we ask a product team to implement a reliability contract in their shiny product.
Pre-outage improvements, reliability defense in depth, and scalability bottlenecks eliminated before they are hit are all ignored by leadership and the company: it is just human nature that even though people understand you have to prepare for possible issues, if it hasn't happened yet, they won't take it seriously. I've seen this in many internal performance reviews and promotion committees. People who have never been bitten badly by an outage may call these premature optimizations.