You can’t project trends out endlessly. If you could, FB would have 20B users right now based on early growth (just a random guess, you get the point). The planet would have 15B people on it based on growth rate up until the 90s. Google would be bigger than the world GDP. Etc.
One of the more bullish AI people (Sam Altman) has said that model performance scales with the log of compute. Do you know how hard it will be to move that number? We are already well into diminishing returns with current methodologies, and no one is pointing the way to a breakthrough that will get us to expert-level performance. RLHF is underinvested in currently but will likely be the path to get us from junior contributor to mid-level in specific domains, but that still leaves a lot of room for humanity.
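To make the log-of-compute point concrete, here is a rough sketch (the constants are made up; only the logarithmic shape matters). If score goes like the log of compute, every fixed bump in score costs a constant multiple of compute, so each increment gets exponentially more expensive:

    import math

    # Hypothetical fit: score = a * log10(compute) + b. The constants are
    # illustrative only; the point is the logarithmic shape.
    def score(compute_flops, a=1.0, b=0.0):
        return a * math.log10(compute_flops) + b

    base = 1e24  # assumed baseline training budget, arbitrary units
    for multiplier in (1, 10, 100, 1000):
        print(f"{multiplier:>5}x compute -> score {score(base * multiplier):.2f}")

    # Prints roughly:
    #     1x compute -> score 24.00
    #    10x compute -> score 25.00
    #   100x compute -> score 26.00
    #  1000x compute -> score 27.00
    # i.e. each extra point of "score" costs 10x the compute of the last one.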
The most likely reason for my PoV to be wrong is that AI labs are investing a lot of training time into programming, hoping the model can self-improve. I’m willing to believe that will have some payoffs in terms of cheaper, faster models and perhaps some improvements in scaling for RLHF (a huge priority for research, IMO). Unsupervised RL would also be interesting, albeit with alignment concerns.
What I find unlikely with current models is that they will show truly innovative thinking, as opposed to the remixed ideas presented as “intelligence” today.
Finally, I am absolutely convinced today’s AI is already powerful enough to affect every business on the planet (yes even the plumbers). I just don’t believe they will replace us wholesale.
But this is not just an endless projection. In one sense we can't have economic growth and energy consumption grow endlessly, as that would eat up all the available resources on Earth; there is a physical hard limit.
However, for AI this is not the case. There is literally an example of human-level intelligence existing in the real world. You're it. We know we haven't even scratched that limit.
It can be done, because an example of the finished product is humanity itself. The question is: do we have the capability to do it? That, we don't know. Given the trend and the fact that a finished product already exists, it is totally realistic to say AI will replace our jobs.
There's no evidence we're even on the right track to human-level intelligence, so no, I don't think it's realistic to say that.
Counterpoint: our brains use about 20 watts of power. How much does AI use again? Does this not suggest that it's absolutely nothing like what our brains do?
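To put very rough numbers on that (the GPU figures below are assumptions, not measurements):

    # Back-of-envelope only; every figure here is an approximation or an assumption.
    brain_watts = 20               # commonly cited estimate for a human brain
    gpu_watts = 700                # ballpark power draw of one modern datacenter GPU
    gpus_in_training_run = 10_000  # assumed cluster size for a large training run

    ratio = (gpu_watts * gpus_in_training_run) / brain_watts
    print(f"~{ratio:,.0f} brains' worth of power for one training cluster")
    # ~350,000 brains' worth of power for one training cluster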
There is evidence we're on the right track. Are you blind? The evidence is not definitive, but it's evidence that makes it a possibility.
Evidence: ChatGPT and all LLMs.
You cannot realistically say that this isn't evidence. Neither of these things guarantees that AI will take over our jobs, but they are data points that lend credence to the possibility that it will.
On the other side of the coin, it is utterly unrealistic to say that AI will never take over our jobs when there is also no definitive evidence on this front.
> unrealistic to say that AI will never take over our jobs
That's not my position. I'm agnostic. I have no idea where it'll end up but there's no reason to have a strong belief either way
The comment you originally replied to is, I think, the sanest thing in here. You can't just project out endlessly unless you have a technological basis for it. The current methodologies are getting into diminishing returns, and we'll need another breakthrough to push it much further.
Then we're in agreement. It's clearly not a religious debate; you're just mischaracterizing it that way.
The original comment I replied to is categorically wrong. It's not sane at all when it's rationally and factually untrue. We are not projecting endlessly. We are hitting a one-year bump on an upward trendline that's been going for over 15 years, a bump of slightly diminishing returns in LLM technology that's being over-exaggerated as an absolute limit of AI.
Clearly we've had all kinds of models developed in the last 15 years, so one blip is not evidence of anything.
Again, we already have a data point here. You are a human brain; we know that an intelligence up to human level can be physically realized, because the human brain is ALREADY a physical realization of it. It is not insane to draw a projection in that direction, and it is certainly not an endless-growth trendline. That claim is false.
Given the information we have, you gave it an "agnostic" outlook, which is 50/50. If you had asked me 10 years ago whether we would hit AGI, I would've given it a 5 percent chance, and now both of us are at 50/50. So your stance actually contradicts the "sane" statement you said you agree with.
We are not projecting infinite growth, and your own statement shows you don't think so either, since you believe there is a 50 percent possibility we will hit AGI.
Agnostic, at least as I was using it, was intended to mean 'who knows'. That's very different from a 50% probability.
"You are a human brain, we know that an intelligence up to human intelligence can be physically realized" - not evidence that LLMs will lead to AGI
"trendline that's been going for over 15 years" - not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it
AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
The only evidence that justifies a specific probability is going to be technical explanations of how LLMs are going to scale to AGI. No one has that
1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
2. ???
3. AGI
What's the 2?
It matters... because everyone's hyped up and saying we're all going to be replaced, but they can't fill in the 2. It's a religious debate because it's blind faith without evidence.
>Agnostic, at least as I was using it, was intending to mean 'who knows'. That's very different from a 50% possibility
I take "don't know" to mean the outcome is 50/50 either way because that's the default probability of "don't know"
> not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it
Never said it was. The human brain is evidence of what can be physically realized, and that is compelling evidence that it can be built by us. It's not definitive evidence, but it's compelling evidence. Fusion is less compelling because we don't have a naturally occurring example of it on Earth.
>AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
AI winter refers to a singular event in the entire history of AI. It is not a term for a common occurrence, as you seem to imply. We had one winter, and that is not enough to establish a pattern that it is going to repeat.
>1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
What's the thing that got them there? Training data?
>It matters.. because everyone's hyped up and saying we're all going to be replaced but they can't fill in the 2. It's a religious debate because it's blind faith without evidence
The hype is in the other direction. On HN, everyone is overwhelmingly against AI and claiming it will never happen. Also, artists are already being replaced; I worked at a company where artists did in fact get replaced by AI.