
People will develop an eye for how AI-generated content looks, and that will make human creativity stand out even more. I'm expecting more creativity and less cookie-cutter content; I think AI-generated content is actually the end of the cookie-cutter stuff.


>People will develop an eye for how AI-generated looks

People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything, it would benefit the whole industry to keep some output looking "AI" so people build a false model of what "AI" looks like.

This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny, plasticky look). Then everyone had an "eye for AI" after seeing all of those. But in the meantime, purpose-made image generators without the injected prompts were creating indistinguishable images.

It is almost certain that every single person here has already laid eyes on such an image, probably in an ad, that didn't set off any triggers.


Given that the goal of generative AI is to generate content that is virtually indistinguishable from what expert creative people produce, I think it's one of these scenarios:

1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.

2. If the goal is not achieved and we stay in this uncanny-valley territory (not at the bottom of it, but not able to climb out either), then in a few years' time we should see a return to many fragmented, almost indie-like platforms offering bespoke human-made content. The only way to achieve acceptable quality will be to favor it over scale, since the content will have to be verified somehow by actual human beings.


> If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.

Question on two fronts:

1. Why do you think, considering the current rate of progress, that it is very unlikely LLM output will become indistinguishable from that of expert creatives? Especially considering that a lot of the tells people claim to see are easily alleviated by prompting.

2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?

Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI will still be far from being reached once that has happened.


Interesting thoughts, with which I partially agree.

1. The progress is there, but it has been slowing down, and the downsides have largely remained.

1.1. With LLMs, the larger context windows (mostly achieved via hardware, not software) let the models keep track of longer conversations better, but the hallucinations are as bad as ever; I use them eagerly, yet I haven't felt any significant improvement in the outputs for a long time. Anecdotally, a couple of days ago I decided to try my luck and vibe-code a primitive messaging library, and it led me down the wrong path even though I was challenging it along the way; it was so convincing that I wouldn't have noticed had my colleague not told me there was a better way. Granted, the colleague is extremely smart, but the LLM should have told me what the right approach was, because I was specifically questioning it.

1.2. Image generation has also barely improved. The biggest improvement during the past year has been 4o, which can be largely attributed to the move from diffusion to autoregression (see the sketch after this list), but it's far from perfect and still suffers from hallucinations even more than LLMs do.

1.3. I don't think video models are even worth discussing because you just can't get a decent video if you can't get a decent still in the first place.

2. That's speculation, of course. Let me explain my thought process. A truly expert-level AI should be able to avoid mistakes and create novel writing or research just from a human asking it to. To validate the research, it could also devise the experiments that need to be done by humans. But if it can do all this, then it could and should find a way to build a better AI, which after an iteration or two should lead to AGI. So, it's basically a genius that, upon human request, can break itself out of its confines.
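
To make the diffusion-vs-autoregression point in 1.2 concrete, here is a rough sketch of the two sampling loops in Python. This is purely illustrative and assumes nothing about how 4o actually works; the model calls are stand-in stubs, not any real system's API. Diffusion starts from noise and refines the whole image at every step, while autoregression emits image tokens one at a time, each conditioned on everything generated so far.

  import random

  def denoise_step(image, step):
      # Stub: a real diffusion model would predict and remove a bit of noise here.
      return [px * 0.99 for px in image]

  def diffusion_sample(num_pixels=16, steps=50):
      # Diffusion: start from pure noise, refine the WHOLE image on every step.
      image = [random.gauss(0, 1) for _ in range(num_pixels)]
      for step in range(steps):
          image = denoise_step(image, step)
      return image

  def next_token(prefix):
      # Stub: a real autoregressive model would predict the next image token
      # from everything generated so far (prompt plus previous tokens).
      return random.randrange(1024)

  def autoregressive_sample(num_tokens=16):
      # Autoregression: build the image one token (patch) at a time,
      # each new token conditioned on all previous ones.
      tokens = []
      for _ in range(num_tokens):
          tokens.append(next_token(tokens))
      return tokens

  print(diffusion_sample()[:4])
  print(autoregressive_sample()[:4])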


People already know which parts are ads and which are content, yet advertisers keep paying for ads on videos, so they must be working.

It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.


This eye will be a driving force for improving AI until it reaches parity with real, non-generated pictures.



