> unrealistic to say that AI will never take over our jobs
That's not my position. I'm agnostic. I have no idea where it'll end up but there's no reason to have a strong belief either way
The comment you originally replied to is, I think, the sanest thing in here. You can't just project out endlessly unless you have a technological basis for it. The current methodologies are getting into diminishing returns and we'll need another breakthrough to push it much further.
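To make "diminishing returns" concrete: the usual scaling-law picture is that loss falls as a small power of compute, so each doubling buys less than the last. A toy sketch (the exponent is invented for illustration, not fitted to any real model family):

    # Toy power-law loss curve: loss ~ compute^(-alpha).
    # alpha = 0.05 is an invented illustration value, not a fitted
    # scaling-law exponent.
    def loss(compute, alpha=0.05):
        return compute ** -alpha

    c = 1.0
    for doubling in range(1, 6):
        prev = loss(c)
        c *= 2
        print(f"doubling {doubling}: loss {loss(c):.4f}, "
              f"improvement {prev - loss(c):.4f}")

Each doubling costs twice as much compute while the printed improvement keeps shrinking. That's all "diminishing returns" means here, and why another breakthrough rather than more scaling looks like the bottleneck.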
Then we're in agreement. It's clearly not a religious debate; you're just mischaracterizing it that way.
The original comment I replied to is categorically wrong. It's not sane at all when it's factually untrue. We are not projecting endlessly. We are at the one-year mark of a bumpy upward trendline that's been going for over 15 years. That one-year mark is a bump of slightly diminishing returns in LLM technology, and it's being exaggerated into an absolute limit on AI.
Clearly we've had all kinds of models developed in the last 15 years, so one blip is not evidence of anything.
Again, we already have a datapoint here. You are a human brain; we know that an intelligence up to human intelligence can be physically realized because the human brain is ALREADY a physical realization. It is not insane to draw a projection in that direction, and it is certainly not an endless growth trendline. That's false.
Given the information we have, you gave it an "agnostic" outlook, which is 50/50. If you had asked me 10 years ago whether we would hit AGI, I would've given it a 5 percent chance; now both of us are at 50/50. So your stance actually contradicts the "sane" statement you said you agree with.
We are not projecting infinite growth, and your own statement shows you don't actually think so either: you put a 50 percent possibility on us hitting AGI.
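To put a number on that shift: in odds form, 5 percent is 1:19 against and 50 percent is 1:1, so moving between them means the last decade's evidence carried a combined Bayes factor of about 19. A quick arithmetic sketch, using only the two probabilities from this thread:

    # Implied Bayes factor for a 5% -> 50% update.
    # The two probabilities are the ones mentioned above; nothing else
    # is assumed.
    def odds(p):
        return p / (1 - p)

    prior, posterior = 0.05, 0.50
    print(odds(posterior) / odds(prior))  # ~19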
Agnostic, at least as I was using it, was intended to mean 'who knows'. That's very different from a 50% possibility.
"You are a human brain, we know that an intelligence up to human intelligence can be physically realized" - not evidence that LLMs will lead to AGI
"trendline that's been going for over 15 years" - not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it
AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
The only evidence that would justify a specific probability is a technical explanation of how LLMs are going to scale to AGI. No one has that.
1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
2. ???
3. AGI
What's the 2?
It matters because everyone's hyped up and saying we're all going to be replaced, but they can't fill in the 2. It's a religious debate because it's blind faith without evidence.
>Agnostic, at least as I was using it, was intended to mean 'who knows'. That's very different from a 50% possibility.
I take "don't know" to mean the outcome is 50/50 either way because that's the default probability of "don't know"
> not evidence that LLMs will continue on to AGI, especially now that we're running into the limits of scaling it
Never said it was. The human brain is evidence of what can be physically realized, and that is compelling evidence that it can be built by us. It's not definitive evidence, but it's compelling evidence. Fusion is less compelling because, unlike the brain for intelligence, nothing on earth hands us a pre-existing working example of it.
>AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
AI winter refers to a singular event in the entire history of AI. It is not a term for a common occurrence, as you seem to imply. We had one winter, and one event is not enough to establish a pattern that will keep repeating.
>1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
What's the thing that got them there? Training data?
>It matters because everyone's hyped up and saying we're all going to be replaced, but they can't fill in the 2. It's a religious debate because it's blind faith without evidence.
The hype is in the other direction. On HN the sentiment is overwhelmingly against AI, with claims that it will never happen. Also, artists are already being replaced: I worked at a company where artists did in fact get replaced by AI.
This is turning into a religious debate.