I want an AI modeled after short-tempered stereotypical Germans or Eastern Europeans, not one copying the attitude of non-confrontational Californians who say “dude, that’s awesome!” a dozen times a day.
As a German not working in Germany, I often get the feedback that initial contact with me is rather off-putting, but over time people come to appreciate my directness.
The problem is that performing social interaction theatre is way more important than actually using logic to solve issues. Look at how many corporate jobs are 10% engineering and 90% kissing people's asses in order to maintain social cohesion and hierarchy. Sure, you say you want "short-tempered stereotypical Germans or Eastern Europeans", but guess what: most people say some variation of that, yet when they actually see such behavior, they get upset. So we continue with the theatre.
For reference, see how Linus Torvalds was criticized for trying to protect the world's most important open source project from weaponized stupidity at the cost of someone experiencing minor emotional damage.
That is a fair assessment, but on the other hand, yes-men are not required to get things done, however much people like them. You can achieve great things even if your team is made of Germans.
My tongue-in-cheek comment wonders whether having actors with a modicum of personality would be better than just being surrounded by over-enthusiastic bootlickers. In my experience, many projects would benefit from someone saying “no, that is silly.”
For some reason, the capitalistic market time and time again favors bootlickers, so there must be some feature of an army of bootlickers that we're missing.
Ok, then I can write an LLM too - because the guys you mention, if you asked them to write your code for you, would just tell you to get lost (or a more strongly phrased variation thereof).
There is evidence from Reddit that women in particular used GPT-4o as their AI "boyfriend". I think that's unhealthy behavior, and it is probably a net positive that GPT-5 doesn't do that anymore.
People want the feelings associated with love. They don't care how they get it.
The advantage of "real" love, health-wise, is that the other person acts as a moderator: when things start to get out of hand, they will back away. Alternatives, like drugs, tend to spiral out of control when an individual's self-control is the only limiting factor. On the surface, GPT seems more like the drug end of the spectrum, ready to love-bomb you until you can't take it anymore, but the above suggests that it will also back away, so perhaps its love is more like another person's than it may originally seem.
> People want the feelings associated with love. They don't care how they get it.
Most people want to be loved, not just to believe they are. They don't want to be unknowingly deceived. For the same reason they don't want to be unknowingly cheated on. If someone tells them their partner is a cheater, or an unconscious android, they wouldn't be mad at the person who gave them this information, but at their partner.
> For the same reason they don't want to be unknowingly cheated on.
That's the thing, though: there is nothing about being a cheater that equates to a loss of love (or to never having loved). In fact, it is telling that you shifted gears to the topic of deceit rather than love.
It is true that feelings of love are often lost when one has been cheated on. So, yes, it is a fair point that for many, those feelings of love aren't available if one does not also have trust. There is an association there, so your gear change is understood. I expect you are absolutely right that if those aforementioned women dating GPT-4o found out that it wasn't an AI bot but actually some guy typing away at a keyboard, they would lose their feelings even if the guy on the other side did actually love them!
Look at how many people get creeped out when they find out that a person they have no interest in loves them. Clearly, being loved isn't what most people seek: they want to feel the feelings associated with love. All your comment tells us, surprising nobody, is that the feelings of love are not like a tap you can simply turn on (well, maybe in the case of drugs). The feelings require a special environment where everything has to be just right, and trust is often a necessary part of that environment. Introduce deceit, and the feelings go with it.
If you get a massage from a massage machine, is it also a mirage? If you use a vibrator, is it also a mirage? Why does it suddenly become an unhealthy mirage when you need words to tickle yourself?
A vibrator still works as intended if you believe it doesn't love you. GPT-4o stops working as intended if you believe it doesn't love you. The latter relies on an illusion, the former doesn't.
(More precisely, a vibrator still relies on an illusion in the evolutionary sense: it doesn't create offspring, so over time phenotypes who like vibrators get replaced by those who don't.)
That's simply not true. Vibrators don't really work that well if you somehow suppress the fantasies during use, in the same way that GPT-4o works better if you briefly fantasize that it might love you when it says what it does. Almost all people who use it in this manner are fully aware of its limitations. While they phrase it as "I lost my love", their complaints are really of the "my toy broke" kind. And they find similar mitigation strategies for the problem: finding another toy, giving each other tips on how to use what's available.
As for the evolutionary perspective, evolution is not that simple. Gay people typically have far fewer offspring than vibrator users, and somehow they are still around and plentiful.
Brains are a messy hodgepodge of various subsystems. Clever primates have found a multitude of ways to mess with them to make life more bearable. So far, the species continues regardless.
It still feels like trash, as the presentation is of a friendly person rather than the unthinking machine it actually is. The false presentation of humanness is a huge problem.
I feel strongly about this. LLMs should not try to write like humans. Computer voices should sound robotic. And when we have actual androids walking around, they should stay on the far side of the uncanny valley. People are already anthropomorphizing them too much.
It can't, though. It's language. We don't have a body of work consisting of robots talking to each other in words. It's hardly fair to ask LLMs not to write like humans when humans constructed everything they're built on.
I did this as a test: Grok has "workspaces" and you can add a pre-prompt. I made a Kamina (from Gurren Lagann) "workspace" so I could ask it silly questions and get back hyped-up answers from "Kamina". It worked decently. My point is that some tools out there let you "pre-prompt" based on your context. I believe Perplexity has this as well; they don't make it easy to find, though.
They call it Spaces. When you create a "Space" (on desktop, at least), look in the right-hand corner for the three dots next to the Share button and click "Settings":
> Give instructions to Perplexity to customize how answers are focused and structured.
You type into that box; now every Space you make can have a fully custom "pre-prompt", as I call it.
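For what it's worth, these UI features are just conveniences for storing a system message per workspace/Space: a "pre-prompt" boils down to a system message prepended to every request. Here's a minimal sketch of the Kamina setup done directly against an OpenAI-compatible chat API; the base URL, model name, and API key are placeholder assumptions, not anything these products document:

    # Minimal sketch: a "pre-prompt" is just a system message sent with every request.
    # Assumes an OpenAI-compatible endpoint; base_url and model are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example.com/v1",  # assumption: any OpenAI-compatible provider
        api_key="YOUR_API_KEY",
    )

    KAMINA_PRE_PROMPT = (
        "You are Kamina from Gurren Lagann. Answer every question, no matter "
        "how silly, with maximum hype and bravado."
    )

    response = client.chat.completions.create(
        model="some-model",  # placeholder model name
        messages=[
            {"role": "system", "content": KAMINA_PRE_PROMPT},  # the "pre-prompt"
            {"role": "user", "content": "Should I rewrite my app in Rust?"},
        ],
    )
    print(response.choices[0].message.content)

The workspace/Space settings just save you from retyping that system message for every conversation.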
I only just started using Perplexity; they really need to rework their UI a little bit.
I also don't get wanting to talk to an AI. Unless you are alone, that's going to be irritating for everyone else around.