
Another thing: if you are recovering and have limited dexterity in your hands, after trying pretty much ALL the voice recognition I could find, the VS Code/Copilot assistant is the best by far!

I've now recovered enough that I can type/edit faster, but I still use it; I keep a Worksheet.md tab around with a whole running log of stuff, LLM prompts, etc.


Had a stroke 2 months ago at 55, after an entire life (professionally since I was 16) as a dev. I mostly followed these rules, apart from when I got dragged into a project that was sufficiently interesting that I started overworking: 12-14h days.

Just don't do that. I used to do that just fine, and that's why I thought I was OK. I mean, I USED to go on huge coding benders, didn't I? Well, apparently not at 55, when the pressure has been on for months instead of weeks.

Other things to watch -- diet! With the work came less free time; I put on weight, and all the good habits I had built over the years disappeared.

And the worst thing you can tell yourself is "Oh, but I'm so CLOSE to being done, I'll just fix it up later when I can relax". Just don't.

I lost all sensation on the right side. It is coming back slowly. I can still work and didn't lose speech, mobility or strength; I consider myself super-mega-lucky in that.


> when I got dragged into a project that was sufficiently interesting that I started overworking

This is what bites. I have some really narrow interest areas that I can end up being obsessive about, to my own detriment. We have to be careful.

Glad you didn't lose mobility and speech! I also feel lucky. I met others in neuro-rehab in far worse situations. For three months I couldn't walk and now thankfully do so with a stick and ankle brace. The hard stuff isn't the stuff you can see visually though. People see my floppy leg, and might presume that's the main thing, but nope. The big thing is the epilepsy, this constant monster present in the background. It's the invisible stuff that's often hard.


Pro tip: do not use it on your internet router if that router is ALSO your DHCP server, as the JetKVM uses DHCP by default (with no fallback to a link-local address), so if your server somehow needs serious attention...

Well yes, your JetKVM no longer has a lease.

Don't ask me how I discovered this one :-)


Static addressing is coming in the next release; shame there's no link-local setup though.


No they won't. I've seen them coming a looooong way. I even re-baptised them "arduidiots" [0] quite a while ago. Since the "branding" fiasco I've stayed well clear of them.

[0]: https://github.com/buserror/simavr/blob/master/examples/shar...


you can't say no on this website, downvoted until you change it to `yes, they won't`


I use NFS as a keystone of a pretty large multi-million data center application. I run it on a dedicated 100Gb network with 9k frames and it works fantastically. I'm pretty sure it is still in use in many, many places because... it works!

I don't need to "remember NFS", NFS is a big part of my day!


On a smaller scale, I run multiple PCs in the house diskless with NFS root; it's so easy to just create copies on the server and boot into them as needed that it's almost one image per bloated app these days (the server also boots PCs into Windows using iSCSI/SCST, and old DOS boxes from the 386 onwards with etherboot/samba). I'm probably a bit biased from doing a lot of hardware hacking, where virtualisation solutions take so much more effort, but I've got to agree: NFS (from V2 through V4) just works.
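For anyone curious what the NFS-root part looks like, here is a minimal sketch; the paths, addresses and export options are placeholders, and the exact setup depends on your distro and whether you use an initramfs:

```
# /etc/exports on the server -- one tree per diskless client image
/srv/nfsroot/pc1  192.168.1.0/24(rw,no_root_squash,async,no_subtree_check)

# kernel command line handed to the diskless client by PXE/etherboot
root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot/pc1 ip=dhcp rw
```

Creating another "image per bloated app" is then just a matter of copying the exported root tree on the server and adding a boot entry for it.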


I'm amazed to see kids and adults wearing shorts going out for forest walks. We are in an area of dense woods filled with deer, and even the fields are full of sheep that also carry ticks... There is zero awareness in the general population of the dangers of tick bites!


I’m guilty. My legs are torn up by mosquitos even as I type. It’s just so fucking hot and humid in the summers.


Same here, nnn feels so much lighter too. It also works out of the box; no need to carry around "your" .rc file on dozens of systems as you work.


I have a debian box I installed in 2002. Trust me, it works :-)


I personally treat the LLM as a very junior programmer. It's willing to work and will take instructions, but its knowledge of the codebase and the patterns we use is sorely lacking. So it needs a LOT of handholding: very clear instructions, descriptions of potential pitfalls, smaller, tightly scoped tasks, and careful review to catch any straying off pattern.

Also, I make it work the same way I do: I first come up with the data model until it "works" in my head, before writing any "code" to deal with it. Again, clear instructions.

Oh, another thing: one of my "golden rules" is that it needs to keep a block comment at the top of the file describing what's going on in that file. It acts as a second "prompt" when I restart a session.
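To illustrate, a minimal sketch of what such a header might look like (Python here; the file name and details are entirely made up):

```python
"""
inventory_sync.py -- keeps the local inventory cache in sync with the backend.

Notes for humans and LLMs alike:
  - data model: Item is immutable; changes arrive as ItemDelta and are
    applied in order, never merged
  - entry point: sync_once(), called by the scheduler every 5 minutes
  - conventions: no network calls outside backend_client.py; all
    timestamps are UTC

Keep this block up to date -- it doubles as the "second prompt" when a
fresh LLM session picks this file up.
"""
```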

It works pretty well. It doesn't appear as "magic" as the "make it so!" approach people think they can get away with, but it works for me.

But yes, I still spend maybe 30% of the time cleaning up, renaming stuff and doing more general rework of the code before it becomes "presentable", but it still allows me to work pretty quickly, a lot quicker than if I were to do it all by hand.


I think "junior programmer" (or "copilot") oversells the AI in some cases and undersells it in others. It does forget things that a regular person wouldn't, and it makes very basic coding mistakes sometimes. At the same time it's better than me at some things (I tend to get off-by-one errors when dealing with algorithms that work on arrays; it doesn't). It also has encyclopedic knowledge about basically anything out there on the internet. Red-black trees? Sure thing. ECS systems for game programming? No problemo, here are the most used libraries.

I have ended up thinking about it as a "hunting dog". It can do some things better than me. It can get into tiny crevices and bushes. It doesn't mind getting wet or dirty. It will smell the prey better than me.

But I should make the kill. And I should be leading the hunt, not the other way around.


That hunting dog analogy is epic and perfectly matches my experience.


The difference between an LLM and a very junior programmer: the junior programmer will learn and change, the LLM won't change! The more instructions you put in the prompt, the more will be forgotten and the more it will bounce back to the "general world-wide average". And on the next prompt you must start all over again... Not so with junior programmers...


This is the only thing that makes junior programmers worthwhile. Any task will take longer and probably be more work for me if I give it to a junior programmer vs just doing it myself. The reason I give tasks to junior programmers is so that they eventually become less junior, and can actually be useful.

Having a junior programmer assistant who never gets better sounds like hell.


The tech might get better eventually; it has gotten better rapidly to this point, and everyone working on the models is aware of these problems. Strong incentive to figure something out.

Or maybe this is it. Who knows.


Ahaha, you likely haven't seen as many Junior Programmers as I have then! </jk>

But I agree completely, some juniors are a pleasure to see bloom; it's nice when one day you see their eyes shine: "wow, this is so cool, I never realized you made that like THAT for THAT reason" :-)


The other big difference is that you can spin up an LLM instantly. You can scale up your use of LLMs far more quickly and conveniently than you can hire junior devs. What used to be an occasional annoyance risks becoming a widespread rot.


My guess is that you're letting the context get polluted with all the stuff it's reading in your repo. Try using subagents to keep the top level context clean. It only starts to forget rules (mostly) when the context is too full of other stuff and the amount taken up by the rules is small.


They're automations. You have to program them like every other script.


The learning is in the model versions.


Definitely. To be honest I don't think LLMs are any different from googling and copying code off the Internet. It's still up to the developer to take the code, go over it, make sure it's doing what it's supposed to be doing (and only that), etc.

As for the last part, I've recently been getting close to 50 and my eyes aren't what they used to be. In order to fight off eye strain I now have to tightly ration whatever I do into 20-minute blocks, before having to take appropriate breaks, etc.

As a result of that, time has become one of the biggest factors for me. An LLM can output code 1000x faster than a human, so if I can wrangle it somehow to do the basics for me then it's a huge bonus. At the moment I'm busy generating appropriate struct-of-arrays for SIMD from input AoS structs, and I'm using Unity C# with LINQ to output the text (I need it to be editable by anyone, so I didn't want to go down the Roslyn or T4 route).

The queries are relatively simple: take the list of data elements and select the correct entries, then take whatever fields are needed and construct strings with them. Even so, copying/editing them takes a lot longer than me telling GPT to select this, exclude that and make the string look like ABC.
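For a concrete picture of that kind of generator, here is a minimal sketch in Python (the original is Unity C# with LINQ; the field names and the generated output below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str   # field name in the source AoS struct
    type: str   # element type, e.g. "float3"
    simd: bool  # should this field end up in the generated SoA struct?

# Hypothetical AoS description -- in practice this would come from the
# real input structs.
aos_fields = [
    Field("position", "float3", True),
    Field("velocity", "float3", True),
    Field("debug_label", "string", False),  # excluded from the SoA output
]

def generate_soa(struct_name: str, fields: list) -> str:
    """Select the relevant entries and build the SoA struct text from them."""
    lines = [f"public struct {struct_name}SoA", "{"]
    lines += [f"    public {f.type}[] {f.name};" for f in fields if f.simd]
    lines.append("}")
    return "\n".join(lines)

print(generate_soa("Particle", aos_fields))
```

The LLM's role in a workflow like this is essentially writing and tweaking that kind of small, mechanical selection-and-formatting code, which is exactly where it saves the most time.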

I think there was a post yesterday about AIs as HUDs, and that makes a lot of sense to me. We don't need an all-powerful model that can write the whole program; what we need is a super-powered assistant that can write and refactor on a very small and local scale.


I personally see the LLM as a (considerably better) alternative to StackOverflow. I ask it questions, and it immediately has answers for my exact questions. Most often I then write my own code based on the answer. Sometimes I have the LLM generate functions that I can use in my code, but I always make sure to fully understand how it works before copy-pasting it into my codebase.

But sometimes I wonder if pushing a 400,000+ line PR to an open-source project in a programming language that I don't understand is more beneficial to my career than being honest and quality-driven. In the same way that YoE takes precedence over actual skill in hiring at most companies.


Unlike stack overflow, if it doesn’t know the answer it’ll just confidently spit out some nonsense and you might fall for it or waste a lot of time figuring out that it’s clueless.

You might get the same on Stack Overflow too, but more likely, I've found, you get either no response or someone pretty competent actually coming out of the woodwork.


I find success basically limiting it to the literal coding but not the thinking: chop tasks down to specific, constrained changes; write detailed specs including what files should be changed, how I want it to write the code, specific examples of other places to emulate, and so on. It doesn't have to be insanely granular, but the more breadcrumbs, the higher the chance it'll work; you find a balance. And whatever it produces, I git add -p one by one to make sure each chunk makes sense.

More work up front and some work after, but still saves time and brain power vs doing it all myself or letting it vibe out some garbage.


To a certain extent you are probably still not using it optimally if you are still doing that much work to clean it up. We, for example, asked the LLM to analyze the codebase for the common patterns we use and to write a document for AI agents to do better work on the codebase. I edited it and had it take a couple of passes. We then provide that doc as part of the requirements we feed to it. That made a big difference. We wrote specific instructions on how to structure tests, where to find common utilities, etc. We wrote pre-commit hooks to help double check its work. Every time we see something it’s doing that it shouldn’t, it goes in the instructions. Now it mostly does 85-90% quality work. Yes it requires human review and some small changes. Not sure how the thing works that it built? Before reviewing the code, have it draw a Mermaid sequence diagram.

We found it mostly starts to abandon instructions when the context gets too polluted. Subagents really help address that by not loading the top context with the content of all your files.

Another tip: give it feedback as PR comments and have it read them with the gh CLI. A lot of the time this is faster than hand-editing the code yourself. While it cleans up its own work you can be doing something else.


Interesting, I actually do have a coding-guidelines.md file for that purpose, but I hadn't thought of having the LLM either generate it or maintain it; good idea! :-)


I actually had Claude and Gemini both do it and revise each other's work to get to the final doc. Worked surprisingly well.


I agree. It brings me to a question though: how do you deal with team members who are less experienced and use LLMs? Code review then needs much more work to teach these principles, and most of the time people won't bother to do that and will just rubber-stamp the working solution.


In my experience, this is a problem even without LLMs; many times you cannot just tell coworkers (junior or not) to completely trash their patch and do it again (even using nicer words).

Very often it comes down to HR issues in the end, so you end up having to take that code anyway, and either sneakily revert it or secretly rework it...


> Also, I make it work the same way I do: I first come up with the data model until it "works" in my head, before writing any "code" to deal with it. Again, clear instructions.

But then it's not vibe coding anymore :)


Is that a bad thing? What do we call this?


I have never seen a junior engineer make up API calls or arguments.


Also, make sure it auto-pushes somewhere else. I use aider a lot, and I have a regular task that backs everything up at regular intervals, just to make sure the LLM doesn't decide to rm -rf .git :-)

Paranoid? me? nahhhhh :-)
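For reference, a minimal sketch of that kind of safety net; the remote name, paths and schedule are all hypothetical:

```
# one-time setup: a bare repo somewhere the LLM never touches
git remote add backup ssh://nas/home/me/backups/myproject.git

# cron entry: push all branches every 30 minutes; without --force the
# backup only ever fast-forwards, so a mangled local repo can't clobber it
*/30 * * * * cd /home/me/myproject && git push --quiet --all backup
```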

