
I can’t tell if this is serious or not. Surely you realise you can just use the word “example” and then replace the word in the regex?!




I think they would want a more optimized regex: a long list of swears merged into one pattern, alternatives separated by pipe characters, with common prefixes and suffixes combined for each group. That takes more than just replacing one word. Something like the output of the list-to-tree Rust crate.
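For instance (illustrative words, not the crate's actual output), a naive alternation like

    \b(?:dam|damn|dang|darn)\b

can be collapsed, by factoring the shared prefix and making the optional tail explicit, into

    \bda(?:m(?:n)?|ng|rn)\b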

Wouldn't the best approach for that be to write a program that takes a list of words and outputs an optimized regex?

I'm sure an LLM can help write such a program. I wouldn't expect an LLM to be particularly good at creating the regex directly.
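As a minimal sketch of what such a generator could look like in Rust — a plain trie with hand-rolled escaping, my own assumptions and not the actual list-to-tree algorithm:

    use std::collections::BTreeMap;

    // A trie node: children keyed by character, plus a flag marking
    // positions where a word from the list ends.
    #[derive(Default)]
    struct Node {
        children: BTreeMap<char, Node>,
        terminal: bool,
    }

    impl Node {
        fn insert(&mut self, word: &str) {
            let mut node = self;
            for ch in word.chars() {
                node = node.children.entry(ch).or_default();
            }
            node.terminal = true;
        }

        // Emit an alternation for this subtree, sharing common prefixes.
        fn to_regex(&self) -> String {
            let mut branches: Vec<String> = self
                .children
                .iter()
                .map(|(ch, child)| format!("{}{}", escape(*ch), child.to_regex()))
                .collect();
            if branches.is_empty() {
                return String::new();
            }
            let mut out = if branches.len() == 1 {
                branches.pop().unwrap()
            } else {
                format!("(?:{})", branches.join("|"))
            };
            // A word ending here that also continues (e.g. "dam" vs
            // "damn") makes the continuation optional.
            if self.terminal {
                out = format!("(?:{})?", out);
            }
            out
        }
    }

    fn escape(ch: char) -> String {
        if r"\.^$|?*+()[]{}".contains(ch) {
            format!(r"\{}", ch)
        } else {
            ch.to_string()
        }
    }

    fn main() {
        let mut root = Node::default();
        for word in ["dam", "damn", "dang", "darn"] {
            root.insert(word);
        }
        // \b anchors so only whole words match.
        println!(r"\b{}\b", root.to_regex());
        // -> \bda(?:m(?:n)?|ng|rn)\b
    }

Note this only factors common prefixes; merging common suffixes as well would need a minimized automaton (a DAWG) rather than a plain trie.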


I would agree. That's exactly what the example I gave (list-to-tree) does. LLMs are actually pretty OK at writing regexes, but I don't think they're great at long word lists with prefix/suffix combinations. I was just pointing out that the "placeholder" word example above is a bit of a straw man argument against LLMs, since that wouldn't have been an effective way to solve the problem I had in mind anyway.

Still incredibly easy to do without feeding the actual words into the LLM.

But why are LLMs censored? This is not a feature I asked for.

Come on bro, you know the answer to this.

When you're trying to block nuanced filter evasions of the n-word, for example, you can't really translate that from "example" in any useful way. The worst part is that most mainstream models (I should say all) yell at you, even though the output will look nothing like the n-word. I figured an LLM would be a good way to get insanely nuanced with a regex.

What's weirdly funny is that if you just type a slur, it will give you a dictionary definition or scold you. So there's definitely a case where models are "smart" enough to know you just want information for a legitimate purpose.

You underestimate what happens when people who troll by posting the n-word run into an n-word filter and still need to get their "troll itch" or whatever out of their system: they start evading your filters. An LLM would have been a key tool in this scenario, because you can tell it to come up with the most absurd variations.
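To make that concrete, here's a rough sketch (Rust again) of the kind of pattern those variations boil down to, using the thread's "example" placeholder; the substitution table is a made-up illustration, not a real evasion list:

    // Expand each letter of a placeholder word into a class of common
    // look-alike substitutions, with optional junk allowed between
    // letters.
    fn evasion_pattern(word: &str) -> String {
        let class_for = |ch: char| -> String {
            match ch.to_ascii_lowercase() {
                'a' => "[a@4]".to_string(),
                'e' => "[e3]".to_string(),
                'i' => "[i1!]".to_string(),
                'l' => "[l1]".to_string(),
                // "rn" and "nn" are classic look-alikes for "m".
                'm' => "(?:m|rn|nn)".to_string(),
                'o' => "[o0]".to_string(),
                's' => "[s5$z]".to_string(),
                't' => "[t7+]".to_string(),
                c => c.to_string(), // assumes plain ASCII letters otherwise
            }
        };
        let parts: Vec<String> = word.chars().map(class_for).collect();
        // (?i) folds case; [\W_]* tolerates "e.x.a.m.p.l.e"-style spacing.
        format!(r"(?i){}", parts.join(r"[\W_]*"))
    }

    fn main() {
        println!("{}", evasion_pattern("example"));
        // -> (?i)[e3][\W_]*x[\W_]*[a@4][\W_]*(?:m|rn|nn)[\W_]*p[\W_]*[l1][\W_]*[e3]
    }

Patterns like this get hairy fast, which is exactly the kind of tedious enumeration an LLM is handy for.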



