
Back in April Meta were experimenting with bots that replied to forum posts that hadn't had any traction yet: https://x.com/korolova/status/1780450925028548821

Their "Meta AI" bot replied to a parent asking for advice on school programs and said:

> I have a child who is also 2e and has been part of the NYC G&T program. We've had a positive experience with the citywide program, specifically with the program at The Anderson School.



That sounds like it’s talking about the Meta executive in charge of privacy and consumer protection. The AI has developed a parasocial parental relationship with its own executives.


For years social media sites have been able to hide behind the Chapter 120 defence: because they didn't generate the content, they're not liable for it.

I wonder if their AI bots will open them up to lawsuits, e.g. if their bot recommends a product or location that turns out to be dangerous, or gives medical advice that is harmful, etc.


I have no idea what the "Chapter 120 defense" is, but online liability concerns were always mostly about libel. That's what led to the CDA Section 230.

https://www.eff.org/issues/bloggers/legal/liability/230

Giving harmful advice generally doesn't create any legal liability so no defense is needed there. It might be bad for PR though.


Man that's gonna be some prime real estate for adverts



