
Over the past few months I've been discussing with a colleague my view on how and why all these AI tools are being shoved down our throats (just look at Google's Gemini push into all the enterprise tools; it's like Google+ for B2B) before there are clear-cut use cases you can point to and say "yes, this would have been much harder to do without an LLM". My conclusion: training data is the most valuable asset, and all these tools are just data-collection machines with some bonus features that make them look somewhat useful.

I'm not saying LLMs are useless, far from it; I use them when I think they're a good fit for the research I'm doing, the code I need to generate, and so on. But the way they're being pushed from a marketing perspective tells me that the companies making these tools need people to use them in order to build a data moat.

It's extremely annoying to get these pop-ups to "use our incredible Intelligence™" at every turn. It grates on me so much that I've actively started using them less, and I try to disable every new "Intelligence™" feature that shows up in a tool I use.



It seems like very simple cause and effect from an economic standpoint. Hype about AI is very high, so investors ask boards what they're doing about AI, because they think AI will disrupt companies that don't adopt it.

The boards in turn instruct the CEOs to "adopt AI", so all the normal processes for deciding what/if/when to build get short-circuited, and you end up with AI features no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.

The hype really distorts both sides of the conversation. You get the boosters, for whom any use of AI is a win no matter how inconsequential the results, and then you get things like the original article, which treats the fact that AI hasn't caused job losses yet as a sign that it hasn't changed anything. And while that might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" variety), it doesn't show that it won't replace anything.

When has a technology making the customer support experience worse for users or employees ever stopped its rollout if there were cost savings to be had?

I think this is why AI is so complicated for me. I've used it, and I can see some gains, but they're on the order of when IDE autocomplete went from substring matches of single methods to completing chains of method calls based on types. The agent stuff fails on anything but the most bite-sized work when I've tried it.

Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly of no value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far from the vision others have that it's unclear how realistic that vision is.


LLMs have mesmerized us because they are able to communicate meaning to us.

Fundamentally, we (the recipients of LLM output) are the ones generating meaning from the words given; i.e., LLMs are great when the recipient of their output is a human.

But when the recipient is a machine, the model breaks down, because machine-to-machine communication requires deterministic interactions. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
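
To make that concrete, here's a toy sketch of temperature sampling in Python (the tokens and logits are invented for illustration; this is not any real model's API). Any temperature above zero makes the output a random draw rather than a function of the input alone:

    import numpy as np

    # Toy next-token distribution: logits a model might assign to candidates.
    # (Token strings and values are made up for illustration.)
    tokens = ["yes", "no", "maybe"]
    logits = np.array([2.0, 1.5, 0.5])

    def sample_next(logits, temperature=0.8):
        rng = np.random.default_rng()
        probs = np.exp(logits / temperature)
        probs /= probs.sum()          # softmax over the candidate tokens
        return tokens[rng.choice(len(tokens), p=probs)]

    # Two "identical" calls can return different tokens.
    print(sample_next(logits))  # e.g. "yes"
    print(sample_next(logits))  # e.g. "no"

(And even at temperature 0, real deployments can drift between runs due to batching and floating-point nondeterminism, so you can't rely on byte-identical output.)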

LLMs lack a fundamental human capability: deterministic symbolization, i.e., creating NEW symbols with associated rules that can deterministically model the worlds we interact with. They have a long way to go on this.
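
For contrast, a minimal sketch of what deterministic symbolization looks like in code (the symbol and rule here are invented for illustration): once a symbol and its rules are fixed, every evaluation agrees, on every run.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Stack:            # a freshly coined symbol with fixed semantics
        height: int

    def combine(a: Stack, b: Stack) -> Stack:
        # Rule: combining stacks adds their heights. No sampling, no drift.
        return Stack(a.height + b.height)

    assert combine(Stack(2), Stack(3)) == Stack(5)  # holds on every run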


Ever since it went viral at release, the word "mesmerize" always makes me think of this:

https://m.youtube.com/watch?v=Kqe3PKGcGkY

Arguably not even tangentially related, but maybe someone might enjoy it anyway.


Bingo. Especially with the 'coding assistants', these companies are getting great insight into how software features are described and built, and how software is architected across the board.

It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". 'Training' is at best ill-defined.


Most likely they can identify very good software developers, or will at least acquire that ability in the short term. That information has immediate value.



