> The frenzy around AI is to do with growth fueled cocaine capitalism seeking 'more' where rational minds can see that we don't have that much more runway left with our current mode of operation.
When talking with non-tech people around me, it’s really not about “rational minds”; it’s that people genuinely don’t understand how any of this works, and so they don’t see its limitations.
Combine that with the FOMO that investors are so prone to, and you get a whole pile of money being poured in.
From what I hear, companies like Google and Meta have a lot of money to burn, and their official position toward investors is: “the chances of reaching AGI/ASI are very low, but if it happens and we miss out, the opportunity cost will be enormous, so the investment is worth it right now.”
> When talking with non-tech people around me, it’s really not about “rational minds”; it’s that people genuinely don’t understand how any of this works, and so they don’t see its limitations.
What are the limits, though? We know the limits of naked LLMs. Less so for LLMs + current tools, even less for LLMs + future tools, and we can only guess about LLMs + other models + future tools. Moving forward likely requires complexity, research, and engineering, and we don't know the limits of that approach even without any major breakthrough. We can't predict a breakthrough, but if one happens, everything changes again, and likely for the better than anything we can foresee today.