gbuk2013's comments | Hacker News

One of the biggest mind-shifts for me moving from senior dev to lead was realising that technology is much less of an issue than people. Good communication, meaning people understanding and agreeing on what they are working on, has an overwhelmingly greater impact than the technology choices we devs typically spend our time arguing about.

Without contradicting the "in general", my anecdotal experience is that even well-oiled teams with good internal communication and team spirit can make bad technology choices that end a business.

You may well be correct about the general case: I've not witnessed cat-herding. The closest was management constantly chasing new shinies and, one time, forgetting to tell the devs about the latest change.


I was absolutely only speaking “in general”. Even with 20 years in the industry my experience can only be anecdotal, given that I have had time to work with fewer than 10 companies. :)

That said, I suspect a bad technical decision may have people and communication root causes, and failing to fix the problem once it is apparent is definitely rooted in these.

Technical debt and leadership vacuum are both interesting and intertwined hard problems.


We’re generally fine and well paid. :) Frontend tooling churn is tiresome but the upside is that there is a lot of great tooling that more than makes up for any language deficiencies.

> 2. Every code change must be reviewed

At a couple of places I worked, this was a hard compliance requirement: there had to be at least one review by a human to guard against an engineer slipping in malicious code (knowingly or otherwise).


Yeah, there are whole industries where you simply cannot operate without enforcing this. The author's view is pretty narrow, both on this front and on the other points.

The author mostly writes about average startup work, not about other industries or more constrained environments. A good example of this is the sprint thing: you can work at whatever pace you want on your own web product, but as soon as you work on something involving hardware or marketing, you can't just pick arbitrary deadlines.

Conversely, feature flags can create annoying issues due to compliance requirements.

I worked on an underwriting system where we had to be able to explain the reason for a decision. This meant that you needed to have on file both the state of the flag and the effective logic at the moment in time that a line of credit was offered to a customer.

They're useful, but not necessarily simple.
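
For illustration, the kind of audit record this implies might look something like the following (the field names and shape are hypothetical, not the actual system's schema):

    {
      "customer_id": "c-1042",
      "decision": "credit_line_offered",
      "decided_at": "2021-06-01T14:05:00Z",
      "flags": { "new_scoring_model": true },
      "logic_version": "underwriting-svc 2.3.1 (git 9f3ab2c)"
    }

Storing the flag state and the deployed logic version alongside the decision is what lets you reconstruct the reasoning later.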


Right, they add risk both in terms of being inadvertently turned on or off and in terms of the permutations of possible system configurations that need to be tested. Less of a problem for well-engineered systems with good deployment practices, but it’s rare to come across these mythical things. :)

It depends a lot on the domain. I've mostly worked in high compliance/regulation worlds. It can be kind of stifling, honestly, but "oops maybe we had the feature flag turned on" is not going to cut the mustard.

Most startups can ignore all that at least until they get to a scale where "run out of money, go bust" is not the biggest risk to their business :)


This is very true and is exactly why there is no magic right answer other than “it depends”.

There are different stages of company lifecycle, different industries, different regulatory environments etc.

The processes put in place always have a cost: if picked appropriately it is worth paying; otherwise it is a waste that can hurt or even kill a project. This balance is the “art” of the job that I personally am only starting to probe at my level, and so it is still quite interesting. :)


I was going to make the same observation - typically this will be defined in your secure development policy or similar, and be part of your ISMS controls for whatever frameworks you're aligning to.

It's possible this is more relevant in B2B contexts than in B2C.


Luckily, Gemini catches a good number of errors in PR reviews, so there is less need for manual review unless you need to double-check that the code structure and architecture are sane.

Until it doesn't and you f up, but at least it apologizes later.

Depends on how good the human reviewers are. It's hard to give a thorough code review: you need to understand the code and the requirements, pull the changes locally, manually test the PR, think about security and readability, catch line-level bugs like bad conditionals, and also think about the overall structure (classes, patterns, systems). That requires a lot of effort, especially with larger PRs, and it's easy for things to slip through. Nothing is perfect, but you can think of AI review as a supercharged linter: it might not suggest an alternative approach, but it will point out glaring omissions, unhandled exceptions and the like.

> Also dicking around with DMARC tools. Was unhappy with all the existing tools, want something simple I can run semi-locally for a bunch of low volume email domains.

That’s a rabbit hole on my list to go down - I recently set up DMARC for some domains I host email for, and the XML reports that now end up in my inbox were… refreshing to see in 2025 :)
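
For anyone wanting to peek down the same hole, you can inspect a domain's published DMARC policy with dig; the rua tag is what directs those XML aggregate reports to a mailbox (example.com and the report address below are placeholders):

    $ dig +short TXT _dmarc.example.com
    "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"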


> For themselves. To eat. So it’s easy to understand the argument that you’re harming them directly by stealing their honey, which is the result of their labour.

On the other hand the bee social structure (not sure what the right word to use here) is so brutal that taking their honey seems to be just keeping pace. :)


Obviously it has no battery, being a desktop. Regarding sleep, under Debian 13 it supports S0 (s2idle) only, which works without issue.

    $ cat /sys/power/state
    freeze mem
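
As a side note, the kernel also exposes which variant "mem" maps to; on a machine that only does s2idle you would expect the active (bracketed) mode to be:

    $ cat /sys/power/mem_sleep
    [s2idle]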


Just got my Framework Desktop a few weeks ago and it works flawlessly so far with a Debian 13 install, including (shock!) suspend working without issue. :)

It’s really great to see companies focus on improving Linux hardware support.


My favourite question to ask when seeing benchmark results such as this is “how much latency did you have / inject when running the benchmark”. This tends to lead to interesting conversations and learning. :)

One of my favourite instances of this was a benchmark-measuring contest with a Golang enthusiast, who was surprised to see that with a bit of latency his server was not 3x faster than Node.js, as he had thought it would be.
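
For reference, on Linux you can inject artificial latency with tc's netem qdisc before running a benchmark (a sketch; the 20ms figure and the loopback device are arbitrary choices for illustration):

    $ sudo tc qdisc add dev lo root netem delay 20ms
    $ # ... run the benchmark ...
    $ sudo tc qdisc del dev lo root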


The Caddy config in the parent article uses status code 418. This is cute, but wouldn’t it break search engine indexing? Why not use a 307 code?


I use this for a personal Redlib instance, so search indexing is not important. I don't know whether indexing would work even with a 307 status code - maybe you just need to add an exception for Googlebot.
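
If you want to see which status a given crawler would actually receive, curl can impersonate its user agent (example.com is a placeholder):

    $ curl -s -o /dev/null -w "%{http_code}\n" -A "Googlebot" https://example.com/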


Ask your AI tool of choice - it’s great at reading manpages. I also add my most favourite prompt instruction “be succinct”. :)

