But yes, you're right. We've seen how dangerous open systems can be. Hackers and scammers have shown us that we need guardrails on operating systems and the web, as you correctly point out. It's time for some legislation that locks them down so the unwashed masses don't have unfettered access to them, don't you agree?
I would much prefer you state directly what you believe rather than using this mocking, facetious tone. It’s not really possible to engage with anything you’re saying, because I’d have to guess at how to invert your insinuations.
Anyway, I think it’s fine for these systems to be available as open source; I’m not suggesting they be withheld from the public. But when you offer one as a cloud service, people associate its output with your brand, and I think this could end up harming Twitter’s brand.
Tay got that way because it was effectively fine-tuned by an overwhelming number of tweets from 4chan edgelords. That's a little more extreme than "no guardrails": it was de facto conditioned into being a neo-Nazi.
A generic instruction-tuned LLM won't act like that.
Instruction-tuning isn't typically considered a guardrail. Raw pretrained LLMs are close to useless, since they just predict text. Guardrails are when you train the model not to obey certain instructions.
Elon is ironically going to be part of the reason that access to foundation models will be banned in the US in the wake of Biden's recent executive order.
I hope that does not happen, but I do suspect this will backfire in some way. Hopefully it will ultimately prove beneficial by demonstrating why handling these models with care is worthwhile.
I hate to be the one to break it to you, but the GOP doesn't give a rat's ass about you or your rights, either. It's lip service from both parties; both stand to gain from regulatory capture.
I'm not an American and don't like either of your parties, but I just wanted to say I respect the willingness to immediately confront someone violating social norms. It's people like you who keep communities alive.