Hacker News | potsandpans's comments

I'm curious where all you top commenters were 5 years ago, when Grammarly was a product used by most professional writers.

If you weren't as incensed then, it's almost like your outrage and compulsion to post this on every HN thread is completely baseless.


Perhaps because it didn't stick out like a sore thumb? Or because it became so prevalent they observe the exact same tics in every other article they read nowadays?

Are you speaking for yourself? Or someone else?

How could they create something that already exists?

Maybe there's a small teapot orbiting the earth, with ten thousand angels dancing on the tip of the spout.

I think you’re both saying the same thing

Incorrect.


Harmful to whom, exactly?


Is there anything in the article that explicitly indicates it was written by an LLM?


"These aren't just X, they're Y" is a pretty strong tell these days.

Wikipedia has an excellent article about identifying AI-generated text. It calls that particular pattern "Negative parallelisms". https://en.m.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writin...
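For illustration only, the surface pattern being described can be caught with a crude regex. This is a toy sketch, not anything from the Wikipedia page; real detection would need far more nuance than a single pattern.

```python
import re

# Crude, illustrative matcher for the "negative parallelism" tell:
# phrases like "isn't just X, it's Y" / "aren't just X, they're Y".
# A hand-rolled sketch; it will produce false positives and negatives.
NEG_PARALLEL = re.compile(
    r"\b(?:isn't|aren't|not)\s+just\b.*?\b(?:it's|they're|but)\b",
    re.IGNORECASE,
)

print(bool(NEG_PARALLEL.search(
    "These aren't just tools, they're collaborators.")))  # True
print(bool(NEG_PARALLEL.search(
    "A perfectly normal sentence.")))  # False
```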


It's funny, negative parallelisms used to be a favorite gimmick of mine, back in the before-times. Nowadays, every time I see "it isn't just..." I feel disappointment and revulsion.

The really funny thing is, I'll probably miss that tell when it's gone, as every AI company eventually scrubs away obvious blemishes like that from their flagship models.


Very similar here. I wouldn't use it myself, but I'd definitely have liked to read such "punchy" text. The interesting thing is that I'm now much more consciously noticing how many em dashes and similar phrases an author like Agatha Christie used to use... which makes the source of such LLM training data even more obvious hehe.

Interestingly, there are still many good literary devices that are not yet used by AI - for example, sentences of varying lengths. There is still scope for a good human editor to easily outdo LLMs... but how many will notice? Especially when one of the editorial columns in the NYT (or the Atlantic, I forget exactly) is merrily using LLMs in its heartfelt advice column. It's really ironic, isn't it? ;)


I use em dashes in my regular writing — mostly because I consider them good/correct typography — and now I have to contend with being accused of using LLMs or being a bot. I think this societal development is damaging.


Could this be a symptom of the free tier of ChatGPT, but not all LLMs? I’ve recently been a heavy user of Anthropic’s Claude and I don’t believe I’ve seen too many of these in my chats. Though this may be because I haven’t asked Claude to write Wikipedia articles.

LLMs are also great at following style, not via criteria but via examples. So this is something that’s easily overcome.

I discovered this when I made an error in a creative writing tool I was working on. I told it to follow the writing style of existing story text, but it ended up making the system messages follow the same style. It was quite amusing to see tool messages and updates written in an increasingly enthusiastic Shakespearean/etc prose (so I left it unfixed!)
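The approach described above (style via examples, not criteria) amounts to prepending sample text to the prompt so the model imitates its voice. A minimal sketch, assuming a chat-style message format; `build_messages` and the role strings are illustrative, not any particular vendor's API:

```python
# Hypothetical sketch of example-driven style steering: rather than
# describing the desired style, we show the model samples of it.
def build_messages(style_examples, user_request):
    """Build a chat message list that prepends style examples."""
    messages = [{"role": "system",
                 "content": "Match the voice of the examples below."}]
    for ex in style_examples:
        messages.append({"role": "system", "content": "Example: " + ex})
    messages.append({"role": "user", "content": user_request})
    return messages

msgs = build_messages(
    ["Hark! The build doth fail upon line forty-two."],
    "Write a status update for a passing test suite.",
)
print(len(msgs))  # system prompt + 1 example + user request
```

Note that, as the anecdote about the Shakespearean system messages suggests, the examples bleed into everything the model emits unless you scope them carefully.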


I am dismayed by the implication that humans are now no longer able to use certain tropes or rhetorical devices or stylistic choices without immediately being discounted as LLMs and having their entire writing work discredited.


You keep saying things that are completely unsubstantiated as though they were fact. "Nobody believes..." _all people_ this, _complete failure_ that...

You're either a shill, an ideologue, or arguing dishonestly.

All three are equally bad.


Don't worry; HN makes such statements all the time, you can't accuse me of not grasping the format. On that note, not once did I use the words "complete failure" or "all people" despite your quotation in this thread, so please don't argue dishonestly yourself.

I cited a reality: We went from SOPA/PIPA over copyright, to no question about age verification on morality grounds. It shows a trend towards zero interest in free and open internet activism. Such a trend indicates something is severely wrong, and the idea of an open internet has become disconnected from popular belief, internationally, as something to strive for. Prove me wrong.


The topic at hand is not whether it's a bold claim to make. The question is: should organizations that control a large portion of the world's communication channels have the ability to unilaterally define the tone and timbre of a dialogue surrounding current events?

To the people zealously downvoting all of these replies: defend yourselves. What about this is not worthy of conversation?

I'm not saying that I support the lab-leak theory. The observation is that anyone who discussed the lab-leak hypothesis on social media had content removed and was potentially banned. I am fundamentally against that.

If the observation, more generally, is that sentiments should be censored when they can risk people's lives by influencing the decisions they make, then let me ask you this:

Should Charlie Kirk have been censored? If he were, he wouldn't have been assassinated.


> "Should Charlie Kirk have been censored? If he were, he wouldn't have been assassinated."

On the other hand, if he were, then whoever censored him might have just as easily become the target of some other crazy, because that appears to be the world we live in now. Something's gotta change. This whole "us vs them" situation is just agitating the most extreme folks right over the edge of sanity into "Crazy Town". Wish we could get back to bein' that whole "One Nation Under God" "Great Melting Pot" "United States" they used to blather on about in grade-school back in the day, but that ship appears to have done sailed and then promptly sunk to the bottom... :(


I called this out in this thread and was immediately downvoted


Can you please provide evidence? I'm not saying I don't believe you. It's just... extraordinary claims etc



That sounds like the particular file in question was set to public and being widely shared around.

Which is rather different from scanning actual private files.

