
This is awesome, a clear and concise explanation. I was talking to my wife, debating whether I should build a model of each of us and store it somewhere in case something tragic happens to either of us. We decided not to because something about it felt wrong, but maybe I’ll make it just in case I regret not doing it later on.


I really don't understand this sort of thing at all. Even a hypothetical distant-future AI with human-level sentience and intelligence that could perfectly emulate you in every way still would not be you (even if it were indistinguishable from you in communication), and it's especially odd now, given the crude simulacrum current tech would provide.

If you're gone, you're gone. Some model of your brain or body isn't going to bring you back, and I don't see how it would help anyone recover from grief. It would just make me feel way worse, and very creeped out.

Why not skip the AI part and just create an archive of writings and communications and such so they can recall memories of you if they wish?


There was an episode of the old '80s TV show "Max Headroom" that explored the idea of a new age church taking advantage of people by providing kiosks, for a price, that contained crude digital simulations of their lost loved ones. The show came down pretty much on the side of "it's just creepy, false hope".


Incidentally, this is part of why I've been in no rush to watch Star Trek: Picard past season 1. At the end of that season, Jean-Luc Picard dies, period, end of story. But I'm guessing the following seasons carry on as if the android that now thinks it's Picard actually is Picard.


Hypothetical distant future? Lmao

It's already here, mate. GPT-4 is human-level intelligence however you'd wish to evaluate it.

I agree a model of your brain is not, strictly speaking, you. But it might be close enough for what someone might want.


Human level is putting it mildly. Which human? I've tried showing it to a few people who barely understood the answers and missed insights given in previous answers two minutes prior.

My dad and I played around with asking it questions about a rare proprietary piece of software he uses at work. My dad is decent at the software and even does consulting after hours because so few people know it. ChatGPT was teaching him things he never even knew about, or thought about doing. And my dad is pretty good at his work.


Did your dad ever search online trying to learn new things about the software? I've seen the experience you describe happen with people spontaneously discovering Ctrl+L and Ctrl+R in bash, e.g. in a comment here on HN. They could have found it years ago if they'd searched for "list of bash features" or whatever.

My point is that your example is not necessarily more than "a better search engine" (with all caveats about it not being "search"). People are asking ChatGPT things that they could have searched for previously. Some of it is because it can provide a better search-like experience, for sure. Some of it is, "let me play with my new toy", and getting answers the old toy could have given you.

I'm addressing your specific example, not making a blanket comment on LLM intelligence. And to clarify, I've also used these models in ways I could have before, for both of those reasons. They do offer a productivity boost.


I have tried to help him with that software before. It's in a niche CNC manufacturing type of industry. You can sometimes find people discussing problems in forums, but even that was always sparse. Honestly, I don't think you could even find proper docs for that software. I even tried asking ChatGPT where to get the documentation for what it was telling us, and it basically just pointed to the help file in the program, but we looked and it was nowhere near as helpful as ChatGPT's answers. I don't know where it pulled all that info from, to be honest. But even if it was out there, the ease with which it gave great answers that fit exactly what we threw at it made it 10x (or more) more valuable than spam-filled and loosely related Google results.


For sure. The average human is just the lower bound of its intelligence on a few tasks. On most, it's well above average. On others, it's in the top percentile.


That’s the conclusion we came to as well, kind of like the movie Transcendence. It is a pretty interesting thing to talk and think about, though, from a moral, human, and social standpoint.


> maybe I’ll make it just in case I regret not doing it later on

Maybe. But please consider the question of consent. If I agree with my partner that we are not doing this and then he/she does it behind my back I would feel violated.

What is probably a much better idea is to store the conversations securely for the future.

It's a lot less icky to get consent for. It protects against all the mundane forms of data loss: dropped phones, fire, flood, a global entity blocking your accounts, etc. And you still have the option to build a model at any point, presumably an even better one, since the tech is likely to improve in the future.


Would you not want her to move on, and be allowed the same privilege? This is unnatural and probably unhealthy.


It might actually be helpful in that process. Many people report talking to a loved one in a dream and getting to say something to them, and that helped them move on, for example.


Idk, an LLM of a deceased loved one would be the cruelest thing possible, in my mind. If I imagine my wife passing away and then having an LLM trained to resemble her, I could see myself coming back to it all the time. Knowing it's not her, but just close enough to keep me coming back wishing it was, sounds like misery forever.


Do you not think this is a bit different from keeping a catalogue of their personality that you can access at any time? Would this be fair to a new partner? "Sorry honey, I didn't mean to cheat on you with my dead wife."


You certainly won’t regret it. If you need to get information out of your wife and she won’t cooperate, you can coerce the LLM of your wife into divulging private information that your wife won’t reveal. This can be very useful for something like a divorce case.

Best to find a way to keep the LLM regularly training on new data as well.

Edit: stop downvoting ideas you don’t like, that’s not what it’s for.


Not sure if this is a joke, but in case it isn’t: no, that’s not how it works. It will just make up random plausible-ish guesses.


This reminds me of the time I got in trouble for shit I did in a dream of my wife's.

Not really in trouble, but you know.


Knowledge is just a series of words in a highly probable order; it will work. The jury will be easily convinced.


Isn't that what humans do?


I think the closest analogy to what the above poster wants to do would be to talk to a fortune teller when your wife won't tell you something, give the fortune teller information about your wife, and then "present" the fortune teller's reading of what your wife was up to in a divorce case.

It's true that humans sometimes do things like this! It doesn't go well for them.


> Edit: stop downvoting ideas you don’t like, that’s not what it’s for.

I will continue to downvote creepy ideas as I please.


You are increasing the echo of the chamber.


That is the sound of inevitability.


That's a risk I'm willing to take.


The “idea” doesn’t even make any sense


How would the LLM be trained with information she wouldn't divulge?


That's beyond creepy.


Sorry, my LLM trained exclusively on bad ideas is posting again


Poe’s Law strikes again



