evertedsphere's comments

at this point i think we ought to start having a tag in the submission title for when the submission is (primarily) llm-generated, like we do for video/pdf/nsfw

Too late. YouTube music channels have already begun explicitly tagging videos as "Human made music".

no javascript please, we're british

i like how this blog post complaining about data loss due to an llm was itself (mostly? entirely?) generated by an llm

> While some of the examples you see below may not be WTFs in the truest sense, but they'll reveal some of the interesting parts of Python that you might be unaware of. I find it a nice way to learn the internals of a programming language, and I believe that you'll find it interesting too!

the spirit in which that page is presented is different from the one in which you seem to have taken it


well written; it's got me thinking about writing one of my own. also i hadn't heard of resomation and promession as practices (under those or any other names)

(btw: the 死は生の一部である ("death is a part of life") at the end is missing the final る)


Looking into all the ways your body can be handled after death is quite interesting. The further down the rabbit hole you go, the more unusual and questionable options you find.

(Thanks for catching that. I was trying to decide whether '.' or '。' looked better next to English text and must have taken off more than I wanted.)


the generated ui that their "deep research" tool uses to present a report

why'd that get flagged

Fanboys

> But we don’t want medical device manufacturers or nuclear power plant operators to move fast and break things. AI will quickly get baked into critical infrastructure and could enable dangerous misuse.

nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers, aware both of their responsibilities and of the long jail term that awaits them if they neglect them

this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant

sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever

but why is it a foregone conclusion that people are going to put llms into things that materially affect my life on the level of it ending due to a stopped heart or a lethal dose of radiation (and implicitly that they'd be right to, since the framing lets it pass unquestioned!)


honey it's 4pm time for your daily frontpage chatgpt readme

no, that's a region variable if i understand correctly, so closer to a rust lifetime
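
(a minimal rust sketch of the analogy, using a made-up function rather than anything from the thread: the lifetime parameter 'a plays the role of a region variable, naming the scope a reference is valid in rather than describing a runtime value)

    // illustrative only: 'a is the region/lifetime variable.
    // the signature promises the returned reference lives in the
    // same region as the slice it was borrowed from.
    fn first<'a>(xs: &'a [i32]) -> &'a i32 {
        &xs[0] // (panics on an empty slice, but that's beside the point here)
    }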


It is, but your reply seems like a non sequitur. The point OP was making was that it doesn't return an Option or Maybe or anything like that, meaning that there's a failure case untracked by the type system.
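
(A minimal Rust sketch of that distinction, with a hypothetical lookup rather than the actual API under discussion: the first signature hides its failure case, the second tracks it in the type.)

    // Hypothetical illustration, not the function from the thread.
    fn head_unchecked(xs: &[i32]) -> i32 {
        xs[0] // panics on an empty slice; the type says nothing about it
    }

    fn head(xs: &[i32]) -> Option<i32> {
        xs.first().copied() // the failure case is visible in the type
    }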


oh, right, i misunderstood

