Hacker News | new | past | comments | ask | show | jobs | submit | nicklaf's comments | login

Same story here. After installing dwm years ago, I've more or less stopped thinking about window managers as an open problem to be solved. I suppose you could run into trouble if you started patching it heavily, but in my experience you don't need to.


It's painfully apparent when you've hit the limits of an LLM on a problem it's ill-suited for (like a concurrency bug), because it will just keep spitting out nonsense, eventually going in circles or veering totally off the rails.


And then you jump in and solve the problem yourself, as you've done for your entire career. Or maybe some people haven't done that, and they're the ones we hear complaining so much? I'm not talking about you specifically, just in general.


> Van der Kolk lost his job due to pretty extensive sexual misconduct, so I think he's a pretty poor person to hold up as a hero.

I was curious about this accusation, so I read a bit about the scandal. [0]

It seems you are actually talking about Joseph Spinazzola, the executive director of Van der Kolk's trauma center, who was fired for sexual misconduct while Van der Kolk was on sabbatical. Van der Kolk was fired two months later, not for sexual misconduct, but for denigrating and bullying employees.

[0] https://www.boston.com/news/local-news/2018/03/07/allegation...



In my limited experiments with Gemini: it stops working when presented with a program containing fundamental concurrency flaws. Ask it to resolve a race condition or deadlock and it will flail, eventually getting caught in a loop, suggesting the same unhelpful remedies over and over.

I imagine this has to do with concurrency requiring conceptual and logical reasoning, which LLMs are known to struggle with about as badly as they do with math and arithmetic. Now, it's possible that the right language for working with an LLM in these domains is not program code but a spec language like TLA+. At that point, though, I'd probably just write the potentially tricky concurrent code myself with less effort.
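To make concrete the kind of bug I mean: the classic lock-ordering deadlock, where one thread acquires lock A then B while another acquires B then A. The canonical fix is to impose a single global acquisition order. A minimal sketch in Python (a generic illustration, not tied to any particular codebase):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def ordered_acquire(x, y):
    # Deadlock-prone version would be: with x, y: ...
    # Fix: always acquire locks in one fixed global order (here, by id),
    # so no two threads can each hold one lock while waiting on the other.
    first, second = sorted((x, y), key=id)
    with first, second:
        pass  # critical section

# Two threads that, without the ordering, could deadlock:
t1 = threading.Thread(target=ordered_acquire, args=(lock_a, lock_b))
t2 = threading.Thread(target=ordered_acquire, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print("no deadlock")
```

In my experiments, an LLM asked to find this class of bug tends to suggest sprinkling in timeouts or sleeps rather than identifying the ordering invariant.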


Personally, I don't generally trust LLMs to write code for me. That said, lately I've been very pleased with the whole "shoehorn your problem to fit code examples found online" thing these LLMs do, in the very special case of massaging Unix scripts, where the "code examples found online" part mostly amounts to fairly canonical references to features documented in man pages that are plastered all over the web and haven't changed much in decades.

For questions that I know should have a straightforward answer, I think it beats searching Stack Overflow. Sure, I'll typically end up rewriting most of the script from scratch; however, if I give it a crude starting point in the form of a half-functional script I've already got going, paired with very clear instructions on how I'd like it extended, that's usually enough to get a proof-of-concept demonstration with enough insightful suggestions to send me off reading about man-page features I hadn't yet thought to use.

Maybe the biggest problem is these models' propensity to cram in every last fancy feature under the sun. It's fun to read about a GNU extension to awk that makes my script a couple of lines shorter, but at best I'll take that as an educational aside rather than something I'd accept at the expense of portability.
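A toy illustration of the trade-off I mean (gensub() is one such gawk-only extension; the substitution itself is made up for the example):

```shell
# GNU awk extension: gensub() returns the modified string directly.
# Shorter, but fails on mawk, BusyBox awk, and BSD awk:
#   echo "a-b-c" | gawk '{ print gensub(/-/, ":", "g") }'

# Portable POSIX awk: gsub() edits $0 in place, then print it.
echo "a-b-c" | awk '{ gsub(/-/, ":"); print $0 }'
```

Both print `a:b:c`; only the second runs on any POSIX awk.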


Mathematics texts with titles that would mislead a beginner who naïvely takes words such as "basic" and "elementary" at face value are a bit of a running joke, particularly when you go past the undergraduate level.

Just look, for example, at the table of contents to André Weil's "Basic" Number Theory book: https://link.springer.com/book/10.1007/978-3-642-61945-8#toc


FWIW, Colbert quipping that "reality has a well-known liberal bias" was clearly satirical of conservatives' tendency to trot out the canard of "liberal" bias.

You could just as well say he still agrees with your point about reality comporting more with a progressive understanding of ethics, while at the same time parodying Fox News for incoherently making spurious charges of "liberalism" at every turn.


Invoking Colbert, a well known Democrat partisan, speaks to the problems with this discussion.

Additionally, passionate invocation of "facts", "reality" and "objectively true" should be red flags for any discussion.


I believe Colbert came up with it in the first place.


Perfect username. I guess it was too long ago for some people to remember the joke?


Rather amusingly, the author recently espoused the viewpoint that bloggers who wanted to stand out should avoid using tools like ChatGPT: https://youtu.be/avASDgtw9k0?t=678

It's not obvious to me that this blog post was synthesized out of whole cloth by an LLM. On the other hand, in that same interview the author encourages using LLMs for idea exploration, and it's entirely possible that this is what he did.

In fact, using an LLM this way may itself be an SEO trick, in the sense that he simply wanted to boost his search rank (inasmuch as longer articles are still ranked more highly) by beefing up what would otherwise have been a very short article.


The clickbaity title certainly did the trick!


Interesting. This is actually behavior I already prefer and enable in Firefox. It makes sense when you leave browser instances open for long periods (with multi-account containers holding a large number of tabs whose session you save and restore) and use a password manager to sign back in when you do restart.

