Sounds like a great way for someone to accidentally harm their infant. What an irresponsible thing to say. There are all sorts of little food risks, especially until they turn 1 or so (and of course other matters too, but food immediately comes to mind).
The stakes are too high and the margin for error is so low. Having been through the infant wringer myself, yeah, some people fret over things that aren't that big of a deal, but some things can literally be life or death. I can't imagine trying to vet ChatGPT's "advice" while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
Sam Altman decided to irresponsibly talk bullshit about parenting because yes, he needed that marketing sound bite.
I can't believe someone would wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
Those aren't the questions people will ask, though. They'll go "what body temperature is too high?" Baby temperatures are not the same as ours. The thresholds for fevers and such are different.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives - what if it tells them to feed their baby fresh honey in some homemade concoction (botulism risk)?
People googled this stuff before, but a basic search doesn't insist to you that it's right, or confidently feed you bad info in the same reassuring, conversational fashion.
I was mostly arguing that Altman's statements, if taken at face value, show him to be unfit to be a parent. I stand by this, but mostly because I think people like him -- Altman, Musk, I tend to conflate -- are robots masquerading as human beings.
That said, of course Altman is being cynical about this. He's just marketing his product, ChatGPT. I don't believe for a minute he really outsources his baby's well-being to an LLM.