thuuuomas's comments | Hacker News

These results speak to the absence of an audience for poetry as much as they do to the aptitude of LLMs for creative writing.


Would you feel comfortable pushing generated code to production unaudited?


For my part, I have a company subscription for Copilot and I just use the line-based autocomplete. It’s mildly better than the built-in autocomplete. I never have it do more than that, though, and probably wouldn’t buy a license for myself.


Would you feel comfortable pushing human code to production unaudited?


depends on the human.

but i would never push llm-generated code. never.

-

edit to add some substance:

if it’s someone who

* does a lot of manual local testing

* adds good unit / integration tests

* writes clear and well documented PRs

* knows the code style, and when to break it

* tests themselves in a staging environment, independent of any QA team or reviews

* monitors the changes after they’ve gone out

* has repeatedly found things in their own PRs and asked to hold off release to fix them

* is reviewing other people’s PRs and spotting things before they go out

yea, sure, i’ll release the changes. they’re doing the auditing work for me.

they clearly care about the software. and i’ve seen enough to trust them.

and if they got it wrong, well, shit, they did everything well enough. i’m sure they’ll be on the ball when it comes to rolling it back and/or fixing it.

an llm does not do those things. an llm *does not care about your software* and never will.

i’ll take people who give a shit any day of the week.


I'd say it depends more on "the production" than the human. There are legal means to hold people accountable for their actions ("gross negligence" and all that), so you can basically always trust that people will fix what they messed up, given the chance. So if you can afford for production to be broken (e.g. the downtime will just annoy some people), you might as well allow your team to deploy straight to prod without audits. It's not that rare, actually.


Only on Fridays before a three-day weekend.


Nope. But AI's sales pitch is that it's an oracle to lean on, which is part of the problem.

As a start, let me know when an AI can fail a test case, iterate on its code until the test passes, and resubmit. But I suppose that starts to approach AGI territory.
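
To make that concrete, here is a minimal sketch of what such a loop would look like, assuming a Node project where "npm test" runs the suite; proposeFix is a hypothetical stand-in for whatever model call you would wire in, not a real API:

    // Run the suite, feed failures back to a model, apply its patch, retry.
    const { execSync } = require("node:child_process");

    async function proposeFix(testOutput) {
      // placeholder: send the failing output to a model, get a unified diff back
      throw new Error("wire in your model of choice");
    }

    async function iterateUntilGreen(maxAttempts = 3) {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          execSync("npm test", { stdio: "pipe" }); // run the suite
          return true;                             // green: done
        } catch (err) {
          const output = String(err.stdout || err);  // the failure report
          const patch = await proposeFix(output);    // model reacts to the failure
          execSync("git apply", { input: patch });   // apply its patch, try again
        }
      }
      return false; // still red after maxAttempts: a human takes over
    }

The mechanics are the easy part; the loop has no notion of why the test failed.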


Intent is immaterial.


How did you measure your ability to concentrate?


I have ADHD and simply engaging in a conversation while driving is enough for me to miss turns, exits, etc. Sure, in the event of an impending crash I might go into the zone and avoid danger, but my general awareness is noticeably impacted.

It seems to be less so for people without ADHD in my experience, but we can't just say it has zero effect. It's better to discuss the threat to safety in terms of potential, since that accounts for individual variance.


> fwiw

What do you think this sort of observation is worth?


Really depends on what sort of person you are I guess.

Some people appreciate being shown fascinating aspects of human nature. Some people don't, and I wonder why they're on a forum dedicated to curiosity and discussion. And then, some people get weirdly aggressive if they're shown something that doesn't quite fit in their worldview. This topic in particular seems to draw those out, and it's fascinating to me.

Myself, I thought it was great to learn about spontaneous trait association, because it explains so much weird human behavior. The fact that LLMs do something so similar is, at the very least, an interesting parallel.


This is the fantasy of brownfield redevelopment. The reality is that remediation is always expensive even when it doesn’t depend on novel innovations.


Can you link this idle observation directly to the operation of LLMs or is this more anthropomorphizing boosterism?


I'm not sure where you saw anthropomorphizing. It's not in my message.


We do represent much of our cognition in language. Sometimes I feel like LLMs might be “dancing skeletons”: pulleys & wires giving motion to the bones of cognition.


But why stop there? All matter and all life is just increasingly fancy machines.

https://en.wikipedia.org/wiki/Philosophical_zombie

Editor's note: I do not promote such a worldview -- my intention is precisely the opposite.


Scientology has a long-standing opposition to psychotropic drugs & psychiatry broadly.


I'm digging into that now. On one hand I can kind of understand the position, yet on the other the motive and reasoning don’t feel quite right. I guess that might be a broad theme with Scientology.


There’s also the JS port of TidalCycles, named Strudel, which is a lot of fun:

https://github.com/tidalcycles/strudel
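
For a taste, here is roughly the smallest pattern that makes a beat. This is a sketch from memory of the Tidal/Strudel conventions (the s function plays named drum samples written in mini-notation), so treat the details loosely:

    // kick, closed hat, snare, closed hat -- doubled in speed
    s("bd hh sd hh").fast(2)

Paste it into the web REPL (reachable from the repo) and it loops immediately.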

