I think the biggest issue with LLMs is that we're finally coming to the end of the long tail of human intellectual capability.
With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher-level pursuits. The invention of computers, for example, let mathematicians focus on much higher-level skills (although even there an argument can be made that something has been lost with the general decline in arithmetic ability among modern mathematicians).
Large language models don't move humans further up the value chain, though. They kick us off of it.
I hear lots of people proselytizing wonderful futures where humans get to focus on "the problems that really matter", like social structures or business objectives; but there's no fundamental reason that large language models can't replace those functions as well. Unlike, say, a Casio, which could never replace a social worker no matter how hard you tried.