
I have observed my 16 year old incorporate LLMs into her workflow and most of it has been an improvement.

For a start, let's be realistic: before LLMs, unless some subject especially caught her fancy, most of her output was trite rewordings of Wikipedia plus some extra bits from the first couple of sites that came up in Google. "Your own personal essay" was always a myth, even before the digital age. Things came from the paper encyclopedia, then Encarta, then Wikipedia. At college level, that plus the reading list from the course.

Nowadays she prompts and reprompts deep research mode, checks the sources to avoid hallucinations and be able to defend her work if challenged, prunes and reworks the outline to her liking, brings things to her level and manually removes LLM blandness...

All in all I think she gets more than before from the whole exercise and the output is much better.

She also does things like fixing bad study material she gets handed, asking for clarifications and alternative explanations for stuff she is initially confused by, generating practice quizzes...

By her account, her peers handing in straight cut&paste slop from ChatGPT are the ones that previously didn't hand in anything at all. I've also seen her occasionally do that as retaliation against bad faith or pure make-work assignments meant as collective punishment, which I find... fair?


I concur that SICP 4.4 is very approachable. I once took a class that had a simple Prolog assignment, I recall we were given some building plans and had to program a path solver through the building. I thought it was a bit too easy and I wanted to dig deeper, because just doing the task left you with a taste of "this is magic, just use these spells".

I looked at how to implement Prolog and was stumped until I found that SICP section.

So I ported it to JavaScript, gave it a Prolog-like syntax, and made a web page where you could run the assignment but also see the inner workings. It was one of the neatest things I've ever handed in as coursework.
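
Roughly, the kernel of SICP 4.4 is pattern matching a query against a database of assertions. Here is a minimal TypeScript sketch in that spirit (not the actual coursework; the "edge" relation, the room names and the data are made up for illustration):

```typescript
// Toy fact base with SICP-4.4-style pattern matching.
// Variables are strings starting with "?", everything else is a constant.

type Term = string;
type Clause = Term[];                 // e.g. ["edge", "lobby", "hallway"]
type Bindings = Record<string, Term>; // variable -> constant

// Hypothetical facts for a building-plan path assignment.
const facts: Clause[] = [
  ["edge", "lobby", "hallway"],
  ["edge", "hallway", "lab"],
  ["edge", "hallway", "stairs"],
];

const isVar = (t: Term) => t.startsWith("?");

// Try to extend the bindings so that the pattern matches the fact.
function match(pattern: Clause, fact: Clause, bindings: Bindings): Bindings | null {
  if (pattern.length !== fact.length) return null;
  const result: Bindings = { ...bindings };
  for (let i = 0; i < pattern.length; i++) {
    const p = pattern[i];
    const f = fact[i];
    if (isVar(p)) {
      if (p in result && result[p] !== f) return null; // conflicting binding
      result[p] = f;
    } else if (p !== f) {
      return null; // constants must agree
    }
  }
  return result;
}

// A query returns every set of bindings under which some fact matches.
function query(pattern: Clause, bindings: Bindings = {}): Bindings[] {
  const out: Bindings[] = [];
  for (const fact of facts) {
    const b = match(pattern, fact, bindings);
    if (b) out.push(b);
  }
  return out;
}

console.log(query(["edge", "hallway", "?to"]));
// [ { "?to": "lab" }, { "?to": "stairs" } ]
```

The full query language in the book adds compound queries, rules and a stream of frames on top of this, but the matcher is the piece that dissolves the "this is magic" feeling.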


I need to have a closer look at this, mostly because I was surprised recently while experimenting with making a dieting advice agent. I built a prompt to guide the recommendations ("only healthy foods, low purines, low inflammation, blah blah") and then gave it simple tools to have a memory of previous meals, ingredient availability, grocery receipt input and so on.

The main interface was still chat.

The surprise was that when I tried to talk about anything else in that chat, the LLM (Gemini 2.5) flatly refused to engage, telling me something like "I will only assist with healthy meal recommendations". I was surprised because nothing in the prompt was that restrictive; I had in no way told it to do that, just given it mainly positive rules in the form of "when this happens, do that".


That's interesting. Maybe the Gemini 2.5 models have been trained such that, in the presence of system instructions, they assume that anything outside of those instructions isn't meant to be part of the conversation.

Adding "You can talk about anything else too" to the system prompt may be all it takes to fix that.


You could try giving it an instruction like: when asked about non-dietary questions, you may entertain some chit-chat and banter, but try to steer the conversation back to diet and a healthy lifestyle. At the end of the day, context is king: if something is not in the context, the LLM can infer from its absence that it's not "programmed" to do anything else.

These are funny systems to work with, indeed.


9/11 was a big turning point in my experience. American conservatives whom I considered online friends were simply impossible to reason with within days, and completely alien beings after a few weeks.

Interesting. Things did change on 9/11, but it seemed incremental to me. Before that there was the constant investigation of Clinton by Gingrich, the dog whistling of Reagan, Nixon's Southern Strategy, and before that McCarthy, and so on.

This is high level rather than your direct experience, so it's not a contradiction. Just a different perspective.


Yes. Almost everything about our current situation can be traced back to Newt Gingrich and Rush Limbaugh. Things were much more civil and reasonable before that point.

I don’t know. Nixon had goons breaking into the DNC headquarters (and his whole Southern Strategy led to racially polarized politics up to this day), there was that senator who got caned by a congressman just before the Civil War, and Eisenhower waited in the car rather than attend a meeting with Truman on his own inauguration day.

Nixon was forced to resign in disgrace to avoid impeachment when it came out. The dude in the White House now did much worse and he was rewarded with reelection.

But Rogue One literally ends by setting up the opening chase at the beginning of A New Hope, so there's no space for a straight sequel there. Maybe some Kleya "John Wick in space" side quest?

Roman elites spoke a lot of Greek among themselves, a bit like how the Tsarist nobility used French.


Sure, I mean obviously he knew Greek, but I'd never heard anyone assert that his last words were in Greek!


The stove must be touched, there's no other way


Agreed. And if it turns out there is no LLM riding in to rescue companies in need of new talent, the engineers who remain will be in very high demand indeed.


I'm trying to lose some weight, and while bored I pasted a few data points into Gemini to do some dumb extrapolation, just a list of dates and weights. No field names, no units.

I specifically avoided mentioning anything that would trigger any tut-tutting about the whole thing being a dumb exercise. Just anonymous linear regression.
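
For reference, the kind of "dumb extrapolation" I mean is just an ordinary least-squares line over (day, weight) pairs. A throwaway TypeScript sketch with made-up numbers (not my actual data):

```typescript
// Fit a straight line to (date, weight) points and ask when it crosses a target.
const points: [string, number][] = [
  ["2024-05-01", 92.4],
  ["2024-05-08", 91.6],
  ["2024-05-15", 91.1],
  ["2024-05-22", 90.3],
];

const DAY = 24 * 60 * 60 * 1000;
const t0 = Date.parse(points[0][0]);
const xs = points.map(([d]) => (Date.parse(d) - t0) / DAY); // days since first entry
const ys = points.map(([, w]) => w);

const n = xs.length;
const meanX = xs.reduce((a, b) => a + b, 0) / n;
const meanY = ys.reduce((a, b) => a + b, 0) / n;
const slope =
  xs.reduce((acc, x, i) => acc + (x - meanX) * (ys[i] - meanY), 0) /
  xs.reduce((acc, x) => acc + (x - meanX) ** 2, 0);
const intercept = meanY - slope * meanX;

// Extrapolate: on what day does the fitted line cross a target weight?
const target = 85;
const daysToTarget = (target - intercept) / slope;
console.log(
  `~${Math.round(daysToTarget)} days after the first entry ` +
  "(if the trend held, which it won't)."
);
```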

Then, when I finished, I asked it to guess what we had been talking about. It nailed it. The reasoning output was spot on, weighing every clue: the amounts, the precision, the rate of decrease, the dates I had been asking about, and human psychology. It briefly considered the chance that I was tracking some resource to plan for its replacement, but essentially said "nah, the human cares more about looking good this summer".

Then it gave me all the caveats and reprimands...


Why was it giving you caveats and reprimands about losing weight?


Oh the usual "linear weight loss predictions might not hold", "if you are on a restrictive diet make sure you are supervised by a doctor" and so on.


It'll likely start behaving differently if you respond by explaining why you found its response offensive and condescending. The models tend to be pretty flexible in adapting to user preference if you call them out.


It's not incorrect: you drop water and glycogen quickly when starting a diet, and that isn't a "repeatable" loss unless you put the weight back on. Still, I wish they were less prone to barfing out ten pages of disclaimers and "safety" with every response.


Oh, I didn't mind it; the response is in fact right. It's not very realistic to extrapolate early diet results, and people come up with all kinds of potentially harmful crazy diets, so better to add the warning. I just wanted to emphasise that I deliberately avoided dropping any early clues about the nature of the numbers, because I just wanted the (very probably wrong) results without any further comments, and it was interesting (maybe not really surprising) that the LLM would still easily guess what they were about when prompted.


You can also see it in the hanging yellow part of the headscarf: he just winged it, effective as it is.

I paint as a sort of weekly ritual, just 2 hours every Wednesday evening, and did an inept copy of this as my first serious try. Months of staring closely at every little detail of it leave you in a sort of communion with the work and the artist.

One thing you quickly learn is that the old masters were "impressionists" too. If you overwork stuff trying to perfect every shape with hundreds of precise brushstrokes, you end up with a naive, infantile looking painting that feels "unpainterly".

Trying and failing to mimic that single quick brushstroke that fools the eye leaves you in awe, fully appreciating the mastery.


I have a similar philosophy for the systems I manage. We have always been severely understaffed, so I treat any user support request that comes in twice as a bug.

If the decision to push a button is yours, I'll give you the button. If you need some data more than once, you get a button too. My ideal user never needs to know who manages the system or how to contact us.

This has even got me the "why do you guys have almost no tickets? You aren't doing anything!" talk a couple of times. Music to my ears.

