In Britain we have quite strong employment laws and a rather less ruthless corporate culture, so in many sectors it's fairly uncommon for people to be terminated for poor performance. I suspect "misconduct" therefore accounts for a higher percentage of overall firings here.
Yeah, for sure. Unfortunately, in the US it's not uncommon for companies to find "a pattern of behavior" or "less than stellar work" that is enough to justify withholding your severance, but not so severe that the company can be blamed for never raising it or putting you on a PIP (performance improvement plan; not sure if that's a term in other countries).
In the US, if your employment is terminated for cause, i.e. due to your underperformance at work, then you are not eligible to receive unemployment benefits from the government.
There could be other conditions too, depending on whatever employment agreement exists, but the point is to determine whether the termination was caused by the employee's lack of performance or by something external to the employee.
> In the US, if your employment is terminated for cause, i.e. due to your underperformance at work, then you are not eligible to receive unemployment benefits from the government.
You meant i.e. or e.g.?
Underperformance may be considered cause for termination in your state, but it generally is not.
If the supply of doctors wasn’t artificially suppressed as mentioned by comments above, it’s likely that wages would go down. Whether that would make things overall more or less costly isn’t easy to answer.
I’m just a layman, but why can’t they increase the orbital radius to solve this problem? Like, if the current “layer” is too full, have the new satellites orbit further out?
The reason Starlink satellites are so low in the first place is that it's cheaper to launch to that altitude, devices need far less signal strength to connect to them, and the round-trip latency is vastly improved. They're intended to be essentially disposable: Starlink is going for shorter lifetimes and iterating on hardware improvements faster.
The further out you go, the less atmospheric drag there is and the longer each satellite stays in view of the ground stations, but the cost of launch is higher and latency becomes a big issue. People expect 50ms latency for internet access, not 500ms.
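To put rough numbers on that, here's a back-of-envelope sketch (my altitude figures are approximate, and I'm ignoring processing and queuing delays entirely):

    # Speed-of-light round trip: user -> sat -> ground station -> sat -> user.
    # Altitudes are approximate; real paths are slant ranges, not straight up/down.
    C_KM_S = 299_792  # speed of light in km/s

    def round_trip_ms(altitude_km):
        # Four legs: up, down, up, down
        return 4 * altitude_km / C_KM_S * 1000

    print(round_trip_ms(550))     # Starlink-style LEO shell: ~7 ms
    print(round_trip_ms(35_786))  # geostationary: ~477 ms

So the ~500ms you see on traditional satellite internet is mostly just physics, before any equipment delay is added.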
WP says Low Earth Orbit is popular because it's cheap to get stuff there, the latency is low (the speed of light starts to matter when you're a couple of Earth diameters up), and bandwidth to the ground is high (I assume it's harder to send a signal a longer distance, even through a vacuum).
Radio bandwidth: higher frequencies provide more bandwidth but have a shorter useful range. So you get frequency contention, and you also need your sats to be physically closer (rough numbers in the sketch below).

Latency: the further away a sat is, the higher the latency. Not an issue for text messages, but a huge issue for phone calls and general internet tasks. The further you "push" your sat "back", the worse the user experience gets.

There are other issues too, like geostationary vs. geosynchronous orbits, coverage, and exposure.
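On the range point, free-space path loss grows with the square of distance, so distance shows up directly in the link budget. A rough sketch using the standard FSPL formula (the 12 GHz Ku-band frequency and the altitudes are my own assumptions, and I'm ignoring slant range and atmospheric loss):

    import math

    def fspl_db(distance_km, freq_ghz):
        # Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    print(fspl_db(550, 12))     # LEO at ~550 km:    ~168.8 dB
    print(fspl_db(35_786, 12))  # GEO at ~35,786 km: ~205.1 dB

That ~36 dB difference means the signal from GEO arrives roughly 4,000x weaker, which is why the same user terminal gets so much more out of a low satellite.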
Low orbit is how Starlink is able to achieve its connection quality, isn't it? I think if they moved to a normal telecom orbit, the performance would be like normal satellite internet too.
Not with a geostationary orbit. That must have a fixed radius. The problem is that a satellite's speed and orbital radius are linked: it has to move fast enough that gravity bends its path into a closed orbit instead of pulling it down. Orbit higher or lower and the orbital period changes, so the satellite drifts with respect to the earth and the orbit is no longer geostationary.
(Caveat: Not an expert by any means, just someone who had a similar question and did some reading, so my answer may well be incomplete or not fully correct.)
This has already been addressed (LEO is not geostationary), but to expand on why: the earth's equator rotates once per sidereal day, and there is exactly one orbital radius at which a satellite's period matches that rotation, so it falls around the equator at the same rate the equator is moving and stays fixed over one spot. That is a geostationary orbit.
LEO maxes out at roughly 1,200 miles in altitude; geostationary sits at a little over 22,000 miles up.
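If anyone wants to see where that one special radius comes from, here's a quick sketch from Kepler's third law (using Earth's standard gravitational parameter and the sidereal day rather than the 24-hour solar day):

    import math

    MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
    T = 86_164           # sidereal day in seconds (~23h 56m 4s)

    # Kepler's third law solved for radius: r = (MU * T^2 / (4 * pi^2))^(1/3)
    r = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)

    print(r / 1000)              # orbital radius: ~42,164 km from Earth's center
    print((r - 6.378e6) / 1000)  # altitude: ~35,786 km, i.e. ~22,236 miles up

Any other radius gives a different period, which is why you can't just nudge a geostationary satellite further out.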
Given the direction OpenAI is heading, hyper-personalized ads inserted directly into chat and their app experiences could be a path. Not saying it will work, but they're definitely exploring it.
Not all AI investment is private. You could argue that public companies like Google, Meta, and Microsoft have had their stock appreciate at least partly because of the AI frenzy.
Until recently I had only ever been to one baseball game: I saw the Jays when I was 10. I remember falling asleep at the game because it was so slow and boring, and I never really watched baseball after that.
But in the last couple years I’ve seen the Mets and Phillies multiple times, and it’s now one of my favourite sports to watch thanks to the pitch clock increasing the pace of the game. I’d be really curious to see data on how many new fans the league got after the change.
COVID did such a number on attendance that it's hard to separate anything else out. It has been increasing since it bottomed out but is still below the peak.
The best I can say: it was falling before the pandemic and it's now above where it was even before everything shut down. So... maybe?
> I remember falling asleep at the game because it was so slow and boring, and never really watched baseball after that.
If you listen to people talking about how they loved going to the game with their family, it's usually about what they did to pass the time during the boredom. It was America's pastime, because you needed to figure out how to pass time during the boredom.
The pitch clock is nice though; it gives a rhythm to the action.
That said, minor league baseball is a lot more fun to watch because there's a lot more variance, and they have a lot of stuff going on between the innings to keep you awake ;)
This is why the Savannah Bananas (and the Banana Ball league) are so popular. Banana Ball draws sellout crowds wherever they play. The main focus is "don't be boring."
I have to disagree with the author's argument for why hallucinations won't get solved:
> If there were a way to eliminate the hallucinations, somebody already would have. An army of smart, experienced people, backed by effectively infinite funds, have been hunting this white whale for years now without much success.
Research has been going on for what, like 10 years in earnest, and the author thinks they might as well throw in the towel? I feel like the interest in solving this problem will only grow! And there's a strong incentive to solve it for the important use cases where a non-zero hallucination rate isn't good enough.
Plus, scholars have worked on problems for _far far_ longer and eventually solved them, e.g. Fermat's Last Theorem took hundreds of years to solve.
The problem with hallucinations is that they really are an expected part of what LLMs are used for today.
"Write me a story about the first kangaroo on the moon" - that's a direct request for a hallucination, something that's never actually happened.
"Write me a story about the first man on the moon" - that could be interpreted as "a made-up children's story about Neil Armstrong".
"Tell me about the first man on the moon" - that's a request for factual information.
All of the above are reasonable requests of an LLM. Are we asking for a variant of an LLM that can flat refuse the first prompt because it's asking for non-real information?
Even summarizing an article could be considered a hallucination: there's a truth in the world, which is the exact text of that article. Then there's the made-up shortened version which omits certain details to act as a summary. What would a "hallucination free" LLM do with that?
I would argue that what we actually want here is for LLMs to get better over time at not presenting made-up information as fact in answer to clear requests for factual information. And that's what we've been getting - GPT-5 is far less likely to invent things in response to a factual question than GPT-4 was.
> What would a "hallucination free" LLM do with that?
To me, there’s a qualitative question of what details to include. Ideally the most important ones. And there’s the binary question of whether it included details not in the original.
A related issue is that preference tuning loves wordy responses, even if they’re factually equivalent.
The author gave two arguments, a weak one and a stronger one, and you quoted the weaker one. The OpenAI paper contains the stronger one: models will guess at the next token rather than saying "idk" because a guess could turn out to be correct.
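A toy version of that incentive, with made-up numbers, just to illustrate the paper's point about binary grading:

    # Under binary grading (1 for correct, 0 for anything else), guessing always
    # has non-negative expected value, so a model never benefits from "idk".
    def expected_score(p_correct, wrong_penalty=0.0):
        return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

    p = 0.2  # model is only 20% confident
    print(expected_score(p))                     # 0.2 > 0: guessing beats abstaining
    print(expected_score(p, wrong_penalty=1.0))  # -0.6 < 0: now abstaining (0) wins

Penalize wrong answers more heavily than abstentions and the incentive flips, which is roughly the grading change the paper argues for.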
The strongest argument in my mind for why statistical models cannot avoid hallucinations is the fact that reality is inherently long-tail. There simply isn’t enough data or FLOPs to consume that data. If we focus on the limited domain of chess, LLMs cannot avoid hallucinating moves that do not exist, let alone give you the best move. And scaling up training data to all positions is simply computationally impossible.
And even if it were possible (but still expensive), it wouldn't be practical at all. Your phone can run a better chess algorithm than the best LLM.
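Worth noting that the reverse direction, checking whether a suggested move even exists, is trivial. A sketch using the python-chess library (the position and the "LLM-suggested" move are made up for illustration):

    import chess  # pip install python-chess

    board = chess.Board()  # standard starting position
    board.push_san("e4")
    board.push_san("e5")

    suggested = "Bc4"  # pretend this came back from an LLM
    try:
        move = board.parse_san(suggested)
        print(f"{suggested} is legal here: {move.uci()}")
    except ValueError:
        # parse_san raises on illegal or unparseable moves
        print(f"{suggested} is a hallucination in this position")

A few lines of ordinary code can verify what the statistical model can't guarantee about its own output.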
All of this is to say, going back to your Fermat’s last theorem point, that we may eventually figure out a faster and cheaper way, and decide we don’t care about tall stacks of transformers anymore.
It really depends on how strictly you define "solved".
If for "solved", you want AI to be as accurate and reliable as simply retrieving the relevant data from an SQL database? Then hallucinations might never truly "get solved".
If for "solved", you want AI to be as accurate and reliable as a human? Doable at least in theory. The bar isn't high enough to remain out of reach forever.
To me, this looks like an issue of self-awareness - and I mean "self-awareness" in a very mechanical, no-nonsense way: "having usable information about itself and its own capabilities".
Humans don't have perfect awareness of their own knowledge, capabilities or competences. But LLMs have even less of each. They can recognize their own inability or uncertainty or lack of knowledge sometimes, but not always. Which seems like it would be very hard but not entirely impossible to rectify.
Exactly. I mean, if you had asked people 20 years ago how likely today's LLMs (warts and all) were, I think there would have been similar cynicism.