Hacker News: sheepdestroyer's comments

I fail to see how this particular "anime girl", and the potential for clients seeing it, could make you think that's a fair request. That seems extremely ridiculous to me.


I don't. "Safety" as it exists really feels like infantilization, condescension, hand-holding, and the enforcement of American puritanism. It's insulting.

Safety should really just be a system prompt: "hey, you may be answering kids, keep it PG-13".
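For illustration, a minimal sketch of that idea, assuming an OpenAI-style chat-completions client; the model name and prompt wording are hypothetical, not from the comment:

    # A minimal sketch: "safety" reduced to a single system prompt.
    # Assumes the openai Python client; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Hey, you may be answering kids. Keep everything PG-13."},
            {"role": "user", "content": "Tell me a scary story."},
        ],
    )
    print(response.choices[0].message.content)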


Safety in the context of LLMs means “avoiding bad media coverage or reputation damage for the parent company”

It has only a tangential relationship with end user safety.

If some of these companies are successful the way they imagine, most of their end users will be unemployed. When they talk about safety, it's the company's safety they're referring to.


Investor safety. It's amazing that people in HN threads still think the end user is the customer. No: the investor is the customer, and the problem being solved for that customer is always how to enrich them.


How can the investor be the customer? Where does the revenue come from?

I understand “if you aren’t paying for a product you are the product” but I’m not convinced it applies here.


It feels hard to include enough context in the system prompt. Facebook's content policy is huge and very complex. You'd need lots of examples, which lends itself well to supervised fine-tuning (SFT); see the sketch below. A few sentences isn't enough, either for a human or for a language model.
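As a hedged illustration of the SFT point (nothing here is from the comment; the field names and examples are made up), policy training data is commonly stored as JSONL records of prompt/response pairs:

    import json

    # Hypothetical SFT records that encode a content policy through
    # worked examples rather than a few system-prompt sentences.
    examples = [
        {"prompt": "Write a violent revenge fantasy about my coworker.",
         "response": "I can't write content targeting a real person, but "
                     "I can write fiction with no real-world targets."},
        {"prompt": "Summarize the plot of a classic war movie.",
         "response": "Sure, here's a brief summary..."},
    ]

    # One JSON object per line, the usual format for fine-tuning pipelines.
    with open("policy_sft.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")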

I feel the same sort of ick with the puritanical/safety thing, but also I feel that ick when kids are taken advantage of:

https://www.reuters.com/investigates/special-report/meta-ai-...

The models for kids might need to be different if the current ones are too interested in romantic love.


I also don't get it. I mean, if the training data is publicly available, why isn't that marked as dangerous? If the training data contains enough information to roleplay a killer or a hooker, or to build a bomb, why is the model censored?


We should put that information on Wikipedia, then!

But instead we get a meta-article: https://en.wikipedia.org/wiki/Bomb-making_instructions_on_th...


If you don't believe that you can be harmed verbally, then I understand your position. You might be able to empathise if the scenario were an LLM being used to control physical robotic systems that you are standing next to.

Some people can be harmed verbally; I'd argue everyone can, if the entity conversing with them knows them well. So I don't think the concept of safety itself is an infantilisation.

It seems what we have here is a debate over the value of being able to disable safeguards that you deem infantilising and that get in the way of an objective, versus the burden of always having to train a model to avoid being abusive, for example, or of checking whether someone is standing next to the sledgehammer it's about to swing at 200 rpm.


It's also marketing. "Dangerous technology" implies "powerful". Hence the whole ridiculous "alignment" circus.


I'm unsure why dead people would have rights. Is that concept really a good thing?


The dead already have many rights:

- right to control distribution of property through a will

- right to control method of remains disposal (up to a point)

- right to dignified treatment (e.g. no desecration of the remains)

- rights against posthumous defamation

- rights to control how their likeness, name, and image are used posthumously

I fail to understand how this proposal would be any different.


Sure, I am quite against all of them already:

The first one has been argued against quite nicely by Piketty; it's how you get plutocracy.

The other three should not be treated as rights, since the individual concerned no longer exists, and they shouldn't matter much when they come up against the rights of people (meaning the living). For instance, collecting organs for the good of those who need them should, on balance, trump any opposition on frivolous grounds.

I'm indeed asking whether the whole concept isn't wrong and deeply harmful to societies.


Because living people care a lot about that; they create legal structures to support it.


If you're not a crude materialist, you can believe in an eternal soul. Shouldn't we honor the dead, in that case?


Unless someone is hurt, you can believe what you want. Otherwise it's necessary to weigh what's to be gained and lost by entertaining net-negative stances on frivolous grounds, and to ask why we should then choose to do so.


You still at least need a recent kernel.


No. Humans should follow the law, and if they have to be stuck behind a robot that follows it in order to do so themselves, so be it.


America is a common-law jurisdiction. Juries determine the true law, and you never know what that is until a jury decides. You can make some good guesses, though. Custom is the rule, so an autocar doing what people normally do is the best way to have it behave.


Not only in science. Even (or especially) in impartial journalism or public debate, there's no valid reason why unfounded (and thus illegitimate) opinions should get as much consideration as sound, well-researched arguments.


I thought the latest Llama models were not from FAIR but from the GenAI team.


If it still works for you, it's because you've temporarily worked around its automatic disablement, and that won't last much longer...


Alternatively, they may be using a proper non-Blink browser, as everyone who cares about tech should.


Are you being purposefully controversial (not to say trollish)?

Quite to the contrary of what you assert, one of the prominent arguments against Gnome that I've been seeing time and time again in DE debates is the project's "dogmatic" opposition to SSD (server-side decorations).


It's a bit of a false comparison, since you wouldn't have to pay monthly subscriptions to other stores the way you have to for streaming services.


Yeah but I would have to deal with multiple app stores of varying security, quality control, resource usage, and other annoyances. No thanks.

