That’s a perception, and the problem isn’t the AI, it’s human nature: 1. every time AI is able to do a thing, we move the goalposts and say, yeah, but it can’t do that other thing over there; 2. we are impatient, so our ability to get bored tends to outpace the rate of change.


The other side of this coin is everyone overhyping what AI can do, and when the inevitable criticism comes, they respond by claiming the goalposts are being moved. Perhaps, but you also told me it could do XYZ, when it can only do X and some Y, but not much Z, and it’s still not general intelligence in the broad sense.


ML scientists will tell you it can do X and some Y but not much Z. But the public doesn’t listen to ML scientists. Most of what the public hears about AI comes from businessmen trying to market a vision to investors — a vision, specifically, of what their business will be capable of five years from now given predicted advancements in AI capabilities in the meantime; which has roughly nothing to do with what current models can do.


I appreciate this comment because I think it really demonstrates the core problem with what I'll call the "get off my lawn >:|" argument: it's avowedly about personal emotions.

It's not "general intelligence", so it's overhyped, and They get so whiny about the inevitable criticism, and They are ignoring that it's so mind-numbingly boring to have people making the excuse that "designed a circuit board from scratch" wasn't something anyone thinks or claims an LLM should do.

Who told you LLMs can design circuit boards?

Who told you LLMs are [artificial] general intelligence?

I get sick of it constantly being everywhere, but I don't feel the need to intellectualize it in a way that blames the nefarious ???


> Who told you LLMs are [artificial] general intelligence?

*waves*

Everyone means a different thing by each letter of AGI, and sometimes also by the combination.

I know my opinion is an unpopular one, but given how much more general-purpose they are than most other AI, I count LLMs as "general" AI; and I'm old enough to remember when AI didn't automatically mean "expert level or better", when it was a surprise that Kasparov was beaten (let alone Lee Sedol).

LLMs are (currently) the ultimate form of "Jack of all trades, master of none".

I'm not surprised that it failed with these tests, even though it clearly knows more about electronics than me. (I once tried to buy a 220 kΩ resistor, didn't have the skill to notice the shop had given me a 220 Ω resistor, and the resistor caught fire.)
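
(For anyone wondering why the wrong resistor burns: power dissipated across a fixed voltage is P = V²/R, so dropping from 220 kΩ to 220 Ω multiplies the dissipation by a thousand. A back-of-the-envelope sketch, assuming for illustration a 9 V supply across the bare resistor:)

    # Power dissipated by a resistor across a fixed voltage: P = V^2 / R
    V = 9.0  # assumed supply voltage, in volts (illustrative)
    for R in (220_000, 220):  # intended 220 kΩ vs. the 220 Ω actually supplied
        P = V ** 2 / R
        print(f"{R:>7} Ω -> {P * 1000:7.1f} mW")
    # 220 kΩ dissipates ~0.4 mW; 220 Ω dissipates ~368 mW, well past the
    # 250 mW rating of a typical 1/4 W resistor, hence the fire.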

I'd still like to call these things "AGI"… except for the fact that people don't agree on what the word means and keep objecting to my usage of the initials as is, so it wouldn't really communicate anything for me to do so.


What goals were achieved that I missed? Even for creative writing and image creation it still requires significant human guidance and correction.


This is a great example of goalposts shifting. Even having a model that can engage in coherent conversation and synthesize new information on the fly is revolutionary compared to just a few years ago. Now the bar has moved up to creativity without human intervention.


But isn't this goalpost shifting actually reasonable?

We discovered this nearly-magical technology. But now the novelty is wearing off, and the question is no longer "how awesome is this?". It's "what can I do with it today?".

And frustratingly, the apparent list of uses is shrinking, mostly because many serious applications come with a footnote of "yeah, it can do that, but unreliably and with failure modes that are hard for most users to spot and correct".

So yes, adding "...but without making up dangerous nonsense" is moving the goalposts, but is it wrong?


There are a lot of things where being reliable isn’t as important (or it’s easier to be reliable).

For example, we are using it to do meeting summaries, and it is remarkably good at it. In fact, in A/B testing we did against human-written summaries, it usually came out better.

Another thing is new employee ramp. It is able to answer questions and guide new employees much faster than we’ve ever seen before.

Another thing I’ve only started toying with, but have gotten incredible results from so far, is email prioritization: basically letting me know which emails I should read most urgently.
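
(A minimal sketch of what that kind of triage could look like, purely illustrative: the labels, prompt, and model name are assumptions, and it uses the openai Python client with an OPENAI_API_KEY in the environment:)

    # Hypothetical email-triage sketch: ask an LLM to label urgency.
    # Assumes the `openai` package (v1 client) and OPENAI_API_KEY are set.
    from openai import OpenAI

    client = OpenAI()

    def prioritize(subject: str, body: str) -> str:
        """Return one of: urgent, normal, low (labels are illustrative)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[
                {"role": "system", "content":
                    "Classify the email's urgency as exactly one word: "
                    "urgent, normal, or low."},
                {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    print(prioritize("Prod database down", "Customers cannot log in."))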

Again, these were all things where the state of the art was basically useless 3 years ago.


IMO it’s not wrong to want the next improvement (“…but without making up dangerous nonsense”), but it is disingenuous to pretend as if there hasn’t already been a huge leap in capabilities. It’s like being unimpressed with the Wright brothers’ flight because nobody has figured out commercial air travel yet.


The leap has indeed been huge, but it's still not useful for anything. The Wright brothers did not start a passenger airline after the first try.


No, it's not. You cannot shift goalposts that do not exist in the first place.


> "1. every time AI is able to do a thing we move the goalposts and say, yeah, but it can’t do that other thing over there"

So are you happy that a 1940s tic-tac-toe computer "is AI"? And that's going to be your bar for AI forever?

"Moving the goalposts is a metaphor, derived from goal-based sports such as football and hockey, that means to change the rule or criterion of a process or competition while it is still in progress, in such a way that the new goal offers one side an advantage or disadvantage." - and the important part about AI is that it be easy for developers to claim they have created AI, and if we move the goalposts then that's bad because ... it puts them at an unfair disadvantage? What is even wrong with "moving the goalposts" in this situation, claiming something is/isn't AI is not a goal-based sport. The metaphor is nonsensical whining.


No, I'd say it's that people are very bad at knowing what they want, and worse at knowing how to get it.

While it might be "moving the goalposts", the issue is that the goalposts were arbitrary to start with. In the context of the metaphor, we put them on the field so there could be a game, despite the outcome literally not mattering anywhere else.

This isn't limited to AI: anyone dealing with customers knows that the worst thing you can do is take what the customer says their problem is at face value, replete with the proposed solution. What the customer knows is they have a problem, but it's very unlikely they want the solution they think they do.


I don’t think the problem is moving the goalposts, but rather that there are no actual goalposts. Advocates for this technology imply it can do anything, either because they believe it will be true in the near future or because they just want others to believe it for a wide range of reasons, including to get rich off it. Therefore the general public has no real idea what the ideal use cases are for this technology in its current state, so they keep asking it to do stuff it can’t do well. It is really no different than the blockchain in that regard.


One of the main issues I see amongst advocates of AI is that they cannot quantify its benefits and they ignore its provable failings.



