Point still stands. It’s not going anywhere. And the literal hate and pure vitriol I’ve seen towards people on social media, even when they say “oh yeah; this is AI”, is unbelievable.
So many online groups have just become toxic shitholes because once or twice a week someone posts something AI-generated.
US GDP growth for the last few quarters has been propped up by GPU vendors and a single chatbot company, all betting that they can make a trillion dollars on $20-per-month "it's not just X, it's Y" Markov chain generators. We have six to twelve more months of this before the first investor says "wait a minute, we're not making enough money", and the house of cards comes tumbling down.
Also, maybe consider why people are upset about being consistently and sneakily lied to about whether or not an actual human wrote something. What's more likely: that everyone who's angry is wrong, or that you're misunderstanding why they're upset?
I feel like this is the kind of dodgy take that'll be dispelled by half an hour's concerted use of the thing you're talking about
short of massive technological regression, there's never going to be a situation where something that amounts to a second brain with access to all the world's public information isn't incredibly marketable
I dare you to try building a project with Cursor or a better cousin and then come back and repeat this comment
>What's more likely: that everyone who's angry is wrong, or that you're misunderstanding why they're upset?
your patronising tone aside, GP didn't say everyone was wrong, did he? if he didn't, which he didn't, then it's a completely useless and fallacious rhetorical question. what he actually said was that it's very common, and, factually, it is. I can't count the number of these kinds of Instagram comments I've seen on obviously real videos. most people have next to no understanding of AI and its limitations and typical features, and "surprising visual occurrence in a video" or "article with correct grammar and punctuation" is enough for them to think they've figured something out
> I dare you to try building a project with Cursor or a better cousin and then come back and repeat this comment
I always try every new technology, to understand how it works, and expand my perspective. I've written a few simple websites with Cursor (one mistake and it wiped everything, and I could never get it to produce any acceptable result again), tried writing the script for a YouTube video with ChatGPT and Claude (full of hallucinations, which – after a few rewrites – led to us writing a video about hallucinations), generated subtitles with Whisper (with every single sentence having at least some mistake) and finally used Suno and ChatGPT to generate some songs and images (both of which were massively improved once I just made them myself).
Whether Android apps or websites, scripts, songs, or memes, so far AI has been significantly worse at internet research and creation than a human. And cleaning up the work the AI did always ended up taking longer than just doing it myself from scratch. AI certainly makes you feel more productive, and it seems like you're getting things done faster, even though you're not.
Let's assume that's true — I'm just bad at using AI.
If that were the case, everyone else's AI creations would have a significantly higher quality than my own.
But that's not what we observe in the real world. They're just as bad as what I managed to create with AI.
The only ones I see who are happy with the AI output are people who don't care about the quality of the end result or its details, just the semblance of a result.
If that were the case, that'd be great. I don't necessarily care how something was achieved, as long as the software engineering and architecture were properly done, requirements were properly considered, edge cases documented, tests written, and bugs reported upstream.
But it's not the case. Of course, I could be wrong – maybe it's not AI, maybe it's just actual incompetence instead.
That said, humans usually don't approach tasks the way LLMs do. Humans generally build a mental model that they refine over time, which means that each change, each bit of code written, closely resembles other code written at the same time, but often bears little resemblance to code nearby. This is also why humans need refactoring – our mental model has changed, and we need to adjust the old code to match the new model.
Whereas LLMs are influenced most by the most recent tokens, which means that any change is affected by the code surrounding it much more than by other code written at the same time. That's also why, when something is wrong, LLMs struggle with fixing it (as even just reading the broken code distorts the probabilities, making it more likely to make the same mistake again), which is why it's typically best to recreate a piece of code from scratch instead.
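To make the contrast concrete, here's a minimal sketch of the two repair strategies; complete() is a hypothetical stand-in for whichever model API you happen to use, not a real library call:

    def complete(prompt: str) -> str:
        # hypothetical stand-in for your actual LLM call
        raise NotImplementedError("plug in your model here")

    def fix_by_patching(spec: str, broken_code: str) -> str:
        # the broken code sits in the context window, so its patterns
        # pull the next-token probabilities back toward the same mistake
        return complete(f"Here is my code:\n{broken_code}\nFix it so that: {spec}")

    def fix_by_regenerating(spec: str) -> str:
        # the broken code never enters the context; the model samples
        # from the spec alone instead of echoing the earlier error
        return complete(f"Write code that does the following: {spec}")

Same task both times, but the second prompt keeps the context window free of the very tokens that caused the problem.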
this doesn't really negate or address the fact that the sample you're basing your position on, by definition, can't account for the content you couldn't tell was made using AI
I only gave AI coding assistants as a secondary example of why AI obviously isn't something people are suddenly going to realise they don't need. you're over-focusing on it because you clearly have an existing, well-thought-out position on the topic, but it's completely beside the point
this thread is about AI-generated text content online
Fascinatingly, as we found out from this HN post, Markov chains don't work when scaled up: the transition table has to cover every possible context, which grows exponentially with context length, so that whole transformers thing is actually necessary for this current generation of AI.
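Back-of-the-envelope, with illustrative numbers (a vocabulary of around 50k tokens, which is the right ballpark for modern tokenizers):

    V = 50_000                 # vocabulary size (illustrative)
    for n in (2, 4, 8):
        # an n-gram Markov chain indexes its transition table by the
        # last n tokens, so the state space is V**n
        print(n, format(V**n, ".1e"))
    # 2 2.5e+09
    # 4 6.2e+18
    # 8 3.9e+37  -- no corpus or amount of memory comes close

A transformer sidesteps this by learning a compressed representation of the context instead of enumerating every possible one, which is why it scales where a literal lookup-table Markov chain can't.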
What isn't going anywhere? You're kidding yourself if you think every single place AI is used will withstand the test of time. You're also kidding yourself if you think consumer sentiment will play no part in determining which uses of AI will eventually die off.
I don't think anyone seriously believes the technology will categorically stop being used anytime soon. But then again, we still keep using tech that's 50+ years old as it is.