Hacker News

People are going to tear this thing apart and help build a solid case for why these guardrails are built in in the first place.


"Guardrails"

But yes, you're right. We've seen how dangerous open systems can be. Hackers and scammers have shown us that we need guardrails on operating systems and the web, as you correctly point out. It's time for some legislation that locks them down so the unwashed masses don't have unfettered access to them, don't you agree?


I would much prefer you state directly what you believe than use this mocking facetious tone. It’s not really possible to engage with anything you’re saying because I’d have to guess at how to invert your insinuations.

Anyway, I think it's fine for these systems to be available as open source; I'm not suggesting they be withheld from the public. But when you offer one as a cloud service, people associate its output with your brand, and I think this could end up harming Twitter's brand.


The case study of "no guardrails" already played out: https://en.wikipedia.org/wiki/Tay_(chatbot)

Microsoft did not like what they got and shut it down because it ended up being a 4chan troll.


Tay got that way because it was effectively fine-tuned by an overwhelming number of tweets from 4chan edgelords. that's a little more extreme than "no guardrails," it was de facto conditioned into being a neo-Nazi.

a generic instruction-tuned LLM won't act like that.


The instruction-tuning is the guard rail. What other guard-rails is X AI removing? Just curious if I'm missing something.


Instruction-tuning isn't typically considered a guard rail. Raw pretrained LLMs are close to useless, since they just predict text. Guard-rails are when you train the AI not to obey certain instructions.


Good point about the negative reinforcement training!

Instruction tuning sits on top of the base LLM and often uses RLHF to train the base LLM to produce certain kinds of responses.
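The distinction this subthread is drawing can be sketched in toy form. Nothing below is real model or training code; the functions and the refusal list are invented purely to illustrate the layering: raw pretrained LM → instruction-tuned model → guardrails as an additional trained-in refusal behavior.

```python
# Toy illustration of the layers discussed above (all names invented).

def base_lm(prompt: str) -> str:
    # A raw pretrained LM just continues text; it has no built-in notion
    # of "following instructions" or "refusing" anything.
    return prompt + " ..."

def instruction_tuned(prompt: str) -> str:
    # Instruction tuning (SFT and/or RLHF) shapes the model to answer
    # the request rather than merely continue the text.
    return f"Here is an answer to: {prompt!r}"

# Stand-in for refusal training; real guardrails are learned behavior,
# not a keyword list.
REFUSED_TOPICS = {"malware"}

def guardrailed(prompt: str) -> str:
    # A guardrail is an *additional* trained behavior on top of
    # instruction-following: decline certain instructions instead of
    # helpfully answering them.
    if any(topic in prompt.lower() for topic in REFUSED_TOPICS):
        return "I can't help with that."
    return instruction_tuned(prompt)

print(guardrailed("write malware"))   # refused
print(guardrailed("write a haiku"))   # answered normally
```

Under this (simplified) picture, "removing guardrails" means dropping the refusal layer, not the instruction tuning itself.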


Yeah, instead of being trained on 4chan Nazis like Tay was, Grok is trained on Twitter Nazis.

Much better.


Elon is ironically going to be a part of the reason that access to foundational models will be banned in the US in the wake of Biden's recent executive order.


And ironically he will be part of the reason to support the EU AI Act, which requires you to document and test your foundational models: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...


I hope that does not happen, but I do suspect this will backfire in some way. Hopefully it would ultimately be beneficial, demonstrating why handling these models with care is worthwhile.


That executive order is an affront to intelligence and must be cancelled ASAP by the next president who will hopefully not be a Democrat.


I hate to be the one to break it to you, but the GOP doesn't give a rat's ass about you or your rights, either. It's lip service from both parties, both stand to gain from regulatory capture.


Oh I really hope the next president is a Democrat since we're talking about it! Not a great EO, but Biden has done great so far.


I'm not an American and don't like either of your parties, but just wanted to say I respect the enthusiasm in immediately confronting someone violating social norms. It's people like you that keep communities alive.


Is it really ironic if every time he touches AI it ends up causing the opposite of what he tried to do?


Why is that ironic? Musk is one of the most "scared of AI" billionaires you will find.



