This is also my take on the market, although I also thought it looked like they were going to win 2 years ago too.
> How are we feeling about Google putting everyone out of work and owning the future? It's starting to feel that way to me.
Not great, but if one company or nation is going to come out on top in AI then every other realistic alternative at the moment is worse than Google.
OpenAI, Microsoft, Facebook/Meta, and X all have worse track records on ethics. Similarly for Russia, China, or the OPEC nations. Several of the European democracies would be reasonable stewards, but realistically they didn't have the capital to become dominant in AI by 2025 even if they had started immediately.
Taken literally it's accusing someone of a specific depraved act, but it's also clearly a term of abuse. My guess (not a lawyer!) is that the more a term becomes associated with abuse, the more protected you are.
Hustler basically called Jerry Falwell a motherf!cker but attributed a specific act to him, while highlighting that it was satire and not to be taken seriously. Hustler lost in a jury trial and again on appeal to the 4th Circuit. The Supreme Court eventually ruled in Hustler's favor [0]. This is dramatized in the movie The People vs. Larry Flynt.
> Taken literally it's accusing someone of a specific depraved act, but it's also clearly a term of abuse. My guess (not a lawyer!) is that the more a term becomes associated with abuse, the more protected you are.
Computer people have this weird notion that courts are like a computer program. If x == "foo" then punishment.
That's not how it works. The use of any specific word does not determine in and of itself if something is an assertion of fact or an assertion of opinion. It depends on how you're using the word.
> The use of any specific word does not determine in and of itself if something is an assertion of fact or an assertion of opinion. It depends on how you're using the word.
Yes that's the point I'm making. The entire thread is about which words you can get sued over libel for, which isn't how it works.
> Computer people have this weird notion that courts are like a computer program. If x == "foo" then punishment.
This seems unnecessarily insulting, especially since your comment is just a repeat of mine with the relevant details removed.
Haha, I know, thanks :). I don't mind saying it... it's just such a raw word and I wanted people to focus on the substance without aggressively escalating the potty mouth in the thread.
It's interesting how differently people perceive it. Motherfucker is something I'd have called a parent in a card game if they bested me, or an exclamation said aloud after dropping a wallet while walking. Very little significance to it.
If Tesla's robotaxis develop a reputation for accidents, they'll create an unpredictable traffic bubble around them.
Some people will slow down to reduce the severity of any impact and to increase their reaction time (similar to people slowing down around a marked cop car). Others will speed up to make sure they don't get stuck behind or around one.
That happens with other unsafe vehicles (e.g. a truck that doesn't have its load well secured). But it makes me wonder what will happen if Tesla trains on the data of erratic driving created by its presence.
I'm already doing this with Teslas on the road. When I see one I'm extra cautious.
This company is so shady around all the driver-assistance and FSD issues that I have zero trust and won't until it is thoroughly investigated.
They are already quite behind other manufacturers on simple stuff like lane assistance and automatic braking, and they go out of their way to make every reported incident sound like someone else is to blame. It just looks bad from end to end.
Rushing these robotaxis is just an attempt to hide the fact that they are well behind the competition on all those fronts.
On my last road trip I saw a Tesla go from 75 to 60 to 110 in the span of about 20 seconds, then the driver pulled over, stopped, and got out. No idea what the fuck happened there, but I'm certainly giving them all a wide berth from now on. This was a wide-open road with almost no other traffic in broad daylight.
It's fascinating that FSD is driving real cars on public roads with FPS that most gamers would disdain. It's only a couple tons of moving metal, what could go wrong?
There was a time when saying this on HN would have gotten you downvoted into oblivion. People felt extremely strongly that everything should be possible with cameras.
Even if one believes everything should be possible with cameras, the goalposts are moving. Other automated vehicles use radar or lidar. Even if Tesla achieves fatality levels comparable with human drivers, other vehicles will outperform them. There's not going to be a great market for the most fatal automated vehicle. It makes more sense to chase the state of the art rather than the state of the median human.
It's always a config push. People roll out code slowly but don't have the same mechanisms for configs. But configs are code, and this blind spot causes an outsized percentage of these big outages.
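One way to give configs the same staged-rollout treatment as code is deterministic percentage bucketing, so a change can ramp from 1% of hosts to 100% without flapping. A minimal sketch (the function and feature names are hypothetical, not from any specific outage postmortem):

```python
import hashlib

def config_enabled(host_id: str, change_id: str, percent: int) -> bool:
    """Decide whether a host receives the new config value.

    A stable hash of (change_id, host_id) places each host in a bucket
    0-99; the change applies to hosts whose bucket is below `percent`.
    Because the hash is deterministic, widening the rollout from 1% to
    10% to 100% keeps already-enabled hosts enabled (no flapping).
    """
    digest = hashlib.sha256(f"{change_id}:{host_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Ramp schedule: watch error rates at each stage before widening.
hosts = [f"host-{i}" for i in range(1000)]
for pct in (1, 10, 50, 100):
    enabled = sum(config_enabled(h, "timeout-change-42", pct) for h in hosts)
    print(f"{pct}% rollout -> {enabled} hosts enabled")
```

The key property is monotonicity: a host that picked up the config at 10% still has it at 50%, so each stage only adds hosts rather than churning them.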
I logged in for the first time in months a few days ago and it was mostly angry memes, a surprising number of which were celebrating violence and murder. This is despite me aggressively muting people who post that sort of thing.
I hope they find a niche, but the cultural damage may already be done.
No it's not. I mute people who post that sort of content and it's the math instance.
If you are forced to see it despite spending most of your time silencing it, it's not the people you follow it's the culture.
Judging from the CEO's letter and actions, it sounds like it's possible a bad culture example was set by the top of the project. Although that doesn't always happen. For example, Linux doesn't have a culture of over-the-top personal insults despite that being Linus's personal style.
> working with complex systems and constraints there often isn't an aha moment
You only get the aha moment when there's essentially one discrete piece of information needed to decide between alternatives. That doesn't apply to most problems.
Your brain simultaneously assigns probabilities to possible solutions, and in certain cases an information update sets one solution's probability to 1 and the others' to 0. If your brain is actively expending energy keeping these possibilities warm simultaneously, that collapse naturally leads to a rapid change in energy, which will feel like something because it's a change in the flow of neurochemicals.
It's not obvious that it would feel pleasant. But since the nucleus accumbens is active during problem solving, it's not entirely surprising that the NAc gets extra stimulation from the rush of energy as the probabilities collapse and the weights get updated to the real solution.
But relatively few problems require you to simultaneously juggle multiple possible solutions and pieces of evidence that are brought together in a single instant. So chasing that feeling is generally a poor strategy.
This is what came to mind for me reading the article as well: the difference between juggling, rotating, and feeling out a thousand puzzle pieces that either fit or don't fit the well-defined hole you have, versus having the hole and a puzzle-piece blank that you slowly and deliberately chip away at and sand down until it fits (as you know from the very start it will).
Like many scholarly linguistic constructions, this is one many of us saw in Latin class with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's a version there too.
Keep in mind that pangram flags many hand-written things as AI.
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running Pangram on some of their polished handwritten stuff.
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.