Nobody at my work knew anything about it. And we do have software engineers. I suspect only the very large orgs with expensive accountants were complying. And the pay-now-vs-later thing didn't really matter that much to them anyway.
> the public discourse either lacks motivation, understanding or incentive to take a proper look.
Indeed. Almost always in these discussions people have already made up their minds about the state of the economy and will just cherry-pick whatever metric best justifies their case (typically that the economy sucks).
There's never been a time in my life where people weren't complaining about how the economy is terrible and how that's clearly obvious if you just look at the real numbers.
I am not a fan of this banal trend of superficially comparing aspects of machine learning to humans. It doesn't provide any insight and is hardly ever accurate.
I've seen a lot of cases where, if you look at the context you're giving the model and imagine giving it to a human (not yourself or a coworker, but someone who doesn't already know what you're trying to achieve; think Mechanical Turk), the human would be unlikely to give the output you want.
Context is often incomplete, unclear, contradictory, or just contains too much distracting information. Those are all things that will cause an LLM to fail that can be fixed by thinking about how an unrelated human would do the job.
Alternatively, I've gotten exactly what I wanted from an LLM by giving it information that would not be enough for a human to work with, knowing that the LLM is just going to fill in the gaps anyway.
It's easy to forget that the conversation itself is what the LLM is helping to create. Humans will ignore or deprioritize extra information, but they also need it to get a loose sense of what you're looking for.
The LLM is much more easily influenced by any extra wording you include, and loose guidance is likely to become strict guidance.
Yeah, it's definitely not a human! But it is often the case in my experience that problems in your context are quite obvious once looked at through a human lens.
Maybe not very often in a chat context; my experience is mostly in trying to build agents.
Totally agree. We've found that a lot of "agent failures" trace back to assumptions, bad agent decisions, or bloat buried in the context: stuff that makes perfect sense to the dev who built it when following the happy path, but that can easily fall apart in real-world scenarios.
We've been working on a way to test this more systematically by simulating full conversations with agents and surfacing the exact point where things go off the rails. Kind of like unit tests, but for context, behavior, and other AI jank.
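Not the commenter's actual tooling, but a minimal sketch of the idea: script a multi-turn conversation against an agent (stubbed here as `fake_agent`), check each reply against an expectation, and report the first turn that fails. All names and the agent stub are hypothetical; in practice the stub would be replaced by a real agent call.

```python
# Hypothetical sketch: "unit tests" for agent conversations, not a real framework.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Turn:
    user: str                     # simulated user message
    check: Callable[[str], bool]  # predicate the agent reply must satisfy
    description: str = ""         # human-readable expectation

@dataclass
class ConversationTest:
    name: str
    turns: List[Turn]

def fake_agent(history: List[dict]) -> str:
    """Stand-in agent: echoes a canned reply. Swap in a real agent/LLM call here."""
    last = history[-1]["content"]
    return f"I'll help with: {last}"

def run_test(test: ConversationTest, agent=fake_agent) -> Optional[int]:
    """Run a scripted conversation; return the index of the first failing turn, or None."""
    history: List[dict] = []
    for i, turn in enumerate(test.turns):
        history.append({"role": "user", "content": turn.user})
        reply = agent(history)
        history.append({"role": "assistant", "content": reply})
        if not turn.check(reply):
            print(f"[{test.name}] went off the rails at turn {i}: {turn.description}")
            print(f"  agent said: {reply!r}")
            return i
    print(f"[{test.name}] passed all {len(test.turns)} turns")
    return None

if __name__ == "__main__":
    test = ConversationTest(
        name="refund-flow",
        turns=[
            Turn("I want a refund for order 123",
                 check=lambda r: "refund" in r.lower(),
                 description="should acknowledge the refund request"),
            Turn("Actually, cancel that, just change the shipping address",
                 check=lambda r: "refund" not in r.lower(),
                 description="should drop the stale refund goal"),
        ],
    )
    run_test(test)
```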
I don't see the usefulness of drawing a comparison to a human. "Context" in this sense is a technical term with a clear meaning. The anthropomorphization doesn't enlighten our understanding of the LLM in any way.
Of course, that comment was just one trivial example; this trope is present in every thread about LLMs. Inevitably, someone trots out a line like "well humans do the same thing" or "humans work the same way" or "humans can't do that either". It's a reflexive platitude most often deployed as a thought-terminating cliche.
I agree with you completely about the trend which has been going on for years. And it's usually used to trivialize the vast expanse between humans and LLMs.
In this case, though, it's a pretty weird and hard job to create a context dynamically for a task, cobbling together prompts, tool outputs, and other LLM outputs. It's hard enough and weird enough that you can easily end up producing text that not even a human could make sense of well enough to produce the desired output. And there is practical value in taking a context the LLM failed at and checking whether you'd expect a human to succeed with it.
There's all these philosophers popping up everywhere. This is also another one of those topics that featured in people's favorite scifi hyperfixation, so all discussions inevitably get ruined with scifi fanfic (see also: room temperature superconductivity).
I agree, however I do appreciate comparisons to other human-made systems. For example, "providing the right information and tools, in the right format, at the right time" sounds a lot like a bureaucracy, particularly because "right" is decided for you, it's left undefined, and may change at any time with no warning or recourse.
The problem is that Dems are just culturally irrelevant. Most people don't care about issues, policy, or the economy; they just want to cheer for a team and will justify everything their team does regardless of efficacy or outcome. Trump is the fun underdog team that everyone is talking about; the Dems are the boring party-pooper team we all love to hate. During COVID, that boringness became a source of needed stability, but after it stewarded us through the crisis, nobody wanted to be associated with them again.
> The state of affairs prior to this ruling is that any of 700 district judges could unilaterally block the president from exercising his authority under the constitution pending a review
This ruling does not "restore" a functioning balance, it damages it. This has never been a problem in the past because previous administrations (regardless of politics) didn't take illegal actions daily. Framing it as "politics" is disingenuous as many of the judges ruling against Trump were appointed by him.
The system was working as intended to check an executive acting outside of the law, but once again, the Supreme Court continues to empower the executive.
When the Supreme Court has multiple members who have openly broken the law themselves and continue to do so, they have a vested interest in keeping in power a party that is also openly corrupt.
> If you believe he's out to make as much money as he can
Well, Twitter shows that he'll burn money if it suits his ego. However, I think his bigger problem is his promise to all existing Tesla owners that FSD would work with cameras alone. If Tesla switches its approach to lidar, it'll probably face a class action suit from all those camera-only Tesla buyers.
If I owned a Tesla for which I'd paid for the FSD package, I wouldn't care whether it had to use lidar or not, as long as I didn't have to pay for any extra hardware.
I was a little annoyed at the VW diesel cheating scandal, but they had the good sense to make the modifications free and pay you almost $7k in "we're sorry" money, which helps make up for the loss of fuel efficiency.
I live in a major metro in the Southeast. I have HW4 FSD in a Model 3, and it is dangerous. Certainly it's a lot better than a few years ago, but still nowhere near something that could safely carry me home from the bar.