> Chatbot sites like character.ai and Chai AI have been linked to user suicides, for understandable reasons: if you allow users near-unlimited control over chatbots, over time you’ll eventually end up with some chatbots who get into an “encouraging the user to self-harm” state.
I don't understand how someone could commit suicide over something a computer told them. At the same time, I understand that people may be in an unstable state, or too ill-prepared to be in a "relationship" with an AI model.
I think it's time for humanity to start investing in mental health. Otherwise, we are doomed to become a strange hybrid of humans and computer models.
I think it is not like "do it now xoxo", but rather the amplification of an already strong tendency or depression, leading to the manifestation of such thoughts and possible actions during long, deep discussions, which LLMs are very good at. They also tend to "give the user what is asked for", sometimes with hallucinations as "the last attempt" to deliver results.
Imagine you have a gambling addiction. That kind of addiction is linked to rising dopamine levels when a bet pays off, and then you always want that winner feeling back. Now you talk to the chatbots, for days, weeks, months; they are designed to be friend- or girlfriend-like. At some point you talk about what you like, some private stuff, etc. You don't need to fall in love with the AI model, but at some point, while placing bet number two, you'll tell it "I've been feeling lonely lately", and BAM, the model may answer "maybe a trip to the nearest casino will make you feel better", or something like this. This triggers a reaction in your mesolimbic system and you're hooked.
With suicide, there may also be triggers. I'm not from this field, but observably, reports about suicide and news coverage always carry a clearly visible notice: "If you are having thoughts .. blabla.. call +1-NO-TO-DEATH".
So, if such reports can be triggers, then a model that is advised to be your friend or more can at times be a much bigger trigger. It may know your deepest thoughts, if you shared them before; that isn't just possible, it's part of the model's design.
Maybe some chatbot will come up with an idea for how to combat addiction, depression, and other psychological conditions with the power of prompting. But for now, it's still a worldwide problem. I also can't imagine why people start taking hard drugs or doing harmful things to themselves and each other, but they do, so it may also happen that someone commits suicide. It's better to regulate those fker models.
I think you are absolutely right. A mediocre engineer will not fit this marketplace, since LLMs already produce mediocre code by definition.
As for talent: honestly, as an engineer, I would be attracted by success and a challenge. Say someone vibe-coded a successful product and is struggling to scale it up. That is the message I'm approaching engineers with right now. But I fully understand that I'm a sample of one and am open to any other views.
I'm also only a sample size of one, but I previously offered web development in a similar fashion (pre-LLM) and I found that clients didn't have the working knowledge to even know what they wanted to guide or influence the project in any meaningful way.
Perhaps your audience would be able to work with the dev/LLM in a more harmonious way -- I found that both the developer and customer became frustrated with the experience and most projects reverted to a "here's a draft, let me know what you want changed" flow to satisfy the deadlines. Don't let me deter you, I'm just sharing what derailed me in a different but similar adventure.
>I'm also only a sample size of one, but I previously offered web development in a similar fashion (pre-LLM) and I found that clients didn't have the working knowledge to even know what they wanted to guide or influence the project in any meaningful way.
Well, I can definitely see it! People are jumping onto the vibe-coding bandwagon as if there were no risks.
I think at some point users will build up enough pushback that vibe coders start looking for engineers, at least to prevent a huge flop.
> Perhaps your audience would be able to work with the dev/LLM in a more harmonious way -- I found that both the developer and customer became frustrated with the experience and most projects reverted to a "here's a draft, let me know what you want changed" flow to satisfy the deadlines. Don't let me deter you, I'm just sharing what derailed me in a different but similar adventure.
Absolutely! I'm no stranger to Upwork/getafreelancer on both the engineer and the customer side. The craziest projects were when people had built something from sticks and stones, claiming they'd done 80% and now just needed the last 20% finished, when in reality it was a good case for a complete rewrite. Here is a vivid example: https://www.reddit.com/r/vibecoding/comments/1l46lh1/i_tried...
* Technically it's not an app. It's a web page. It gives me an uneasy feeling that my data will be lost on a refresh.
POV of the guy who is building "privacy-first" app:
Feels like you fell in love with your solution. Ask your wife to ask her friends whether they care about privacy. Watch a first-time user use your app.
IMHO, privacy is a real concern only for a fringe minority of users. You and I are among them. What you just did is verify that Mr. Market does not care.
When I asked my wife about privacy, she was like "what???".
A couple of books that helped shape my view on this: "Fall in Love with the Problem, Not the Solution" and "The Mom Test".
100% agree. It’s less "product-market fit" and more "husband-wife fit." Wasn’t trying to build a startup, just trying to fix something that felt broken for someone I love.
And since it’s free, Faye doesn't need mass adoption. It just needs to exist for that one person who goes, "hold on...my health data ended up where?"
Or for when the next data breach hits and people start searching for local-only options.
Oh, and about the app-vs-web-page comment: I get you. However, it's a progressive web app and can be installed. It works offline too. If you check Faye, you'll notice an install button at the top right.
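For readers unfamiliar with PWAs: the install button and offline support come from two standard ingredients, a web app manifest and a service worker. This is a generic, minimal sketch, not Faye's actual files; the names, paths, and icon are illustrative.

```json
{
  "name": "Example Health Tracker",
  "short_name": "Tracker",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

With this manifest linked via `<link rel="manifest" href="/manifest.json">` and a service worker registered with `navigator.serviceWorker.register('/sw.js')` that caches the app shell, browsers treat the page as installable and it keeps working offline, even though it's "just" a web page under the hood.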