Yeah, while I mostly agree with the sentiment, I don't actually recognise any of the behaviours described in this article. It does sound like the behavioural traits of a certain subsection of certain generations, whose expectations and norms have been warped by overuse of social media. It all sounds incredibly exhausting and I genuinely feel sorry for those growing up in this climate.
You're getting to the heart of the problem here. At what point in evolutionary history does "thinking" exist in biological machines? Is a jumping spider "thinking"? What about consciousness?
When we say "think" in this context, do we just mean generalize? LLMs clearly generalize (you can give one a problem that is not exactly in its training data and it can solve it), but perhaps not to the extent a human can. But then we're talking about degrees. If it were able to generalize at a higher level of abstraction, maybe more people would regard it as "thinking".
I meant it in the same way the previous commenter did:
> Having seen LLMs so many times produce incoherent, nonsensical and invalid chains of reasoning... LLMs are little more than RNGs. They are the tea leaves and you read whatever you want into them.
Of course LLMs are capable of generating solutions that aren't in their training data sets, but they don't arrive at those solutions through any sort of rigorous reasoning. This means that while their solutions can be impressive at times, they're not reliable: they go down wrong paths they can never get out of, and they become less reliable the more autonomy they're given.
It's rather seldom that humans arrive at solutions through rigorous reasoning. The word "think" doesn't mean "rigorous reasoning" in everyday language. I'm sure 99% of human decisions are pattern matching on past experience.
Even when mathematicians do in fact reason rigorously, they spend years "training" first, building up experiences to pattern match against.
I have been on a crusade for about a year now to get people to share chats where SOTA LLMs have failed spectacularly to produce coherent, good information. Anything with heavy hallucinations and outright bad information.
So far, all I have gotten is data that is outside the knowledge cutoff (this is by far the most common) and technically-wrong-information (Hawsmer House instead of Hosmer House) kinds of fails.
I thought maybe I hit on something with the recent BBC study about not trusting LLM output, but they used 2nd shelf/old mid-tier models to do their tests. Top LLMs correctly answered their test prompts.
I'm still holding out for one of those totally off the rails Google AI overviews hallucinations showing up in a top shelf model.
Sure, and I’ve seen the same. But I’ve also seen the extent to which they do that decrease rapidly over time, so if that trend continues, would your opinion change?
I don’t think there’s any point in comparing to human intelligence when assessing machine intelligence; there’s zero reason to think it would have similar qualities. It’s quite clear that for the foreseeable future it will be far below human intelligence in many areas, while already exceeding humans in some areas that we regard as signs of intelligence.
I don’t think this is a specific cultural thing. In my experience some people host more curated gatherings, others more relaxed and informal ones, no matter where you’re from. People just tend to think the way their social group does it is the ‘norm’.
I think there's a cultural element to how much of it people will know to do without being explicitly told what to do.
In the US successful gatherings tend to require a fair bit of wrangling - I've been to more than one potluck where everyone showed up with roughly the same dish...
I think that holds everywhere though; I’ve been to at least one of every type of party described in this discussion thread in the last year, and I’m not American.
Yeah personally I don't give a toss about the person who created the music and their backstory or stage performance.
However I do care that the person who created the music made hundreds of micro decisions during the creation of the piece such that it is coherent, has personality, and has structure, towards the goal of satisfying that individual's sense of aesthetics. Unsurprisingly this is not something you get from current AI-generated music.
What I've tended to find is that although almost everyone listens to some form of music, the average person tends to like things which are squarely in the middle of the Gaussian curve, and that are inherently very predictable, as though the creator had chosen the most statistically likely outcome for every creative decision they made while creating it. Similar trends hold for almost anything creative: cinema, literature, food, etc.
This is basically what all the Suno creations sound like to me, which is to say they definitely have a market, but that market isn't for people who have a more than average interest in music.
It may be reductive but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brain's desire for approval from our tribal peer group.
Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs for example.
I think the ‘out of the box’ Linux desktop experience has improved a lot. To me the difference is in the long tail of software. On Linux, the variety of toolkits historically available means that, depending on what software you’re using, you may encounter a lot of inconsistency; I certainly do. On the Mac, far less so.