
I thought about this several years ago and I think I hit the right balance with these two rules of thumb:

* The closer something is to your core business, the less you externalize.

* You always externalize security (unless security is your exclusive core business)

Say you are building a tax calculation web app. You use dependencies for things like the css generation or database access. You do not rely on an external library for tax calculation. You maintain your own code. You might use an external library for handling currencies properly, because it's a tricky math problem. But you may want to use your own fork instead, as it is close to your core business.
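The "currencies are a tricky math problem" point is easy to see in a few lines of Python (a minimal illustration, not part of the original comment): binary floats cannot represent most decimal fractions exactly, which is exactly the kind of detail a dedicated money-handling library gets right.

```python
from decimal import Decimal

# Binary floats accumulate representation error on decimal fractions:
print(0.1 + 0.2 == 0.3)  # False: the float sum is 0.30000000000000004

# Fixed-point decimal arithmetic, as a currency library would use,
# keeps the cents exact:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```

The same surprise shows up in any language that uses IEEE 754 doubles for money, which is why the comment's advice to lean on (or fork) a currency library is sound.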

On the security side, unless that's your speciality, there are people out there smarter than you and/or who have dedicated more time and resources than you to figuring that stuff out. If you are programming a tax calculation web app you shouldn't be implementing your own authentication algorithm, even if keeping your tax information secure is one of your core needs. The exception is when your core business is literally implementing authentication and nothing else.



I feel like "shouldn't be implementing your own authentication" is overblown. Don't write the crypto algorithms. But how hard is it to write your own auth? If you are pulling in a third-party dependency for that, you would still need to audit it, and if you can audit authentication software, why can't you implement it?

Just follow OWASP recommendations. A while back this was posted to HN and it also provides great recommendations: https://thecopenhagenbook.com/ .
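As a concrete sketch of what those recommendations look like in practice (my illustration, not from the thread; it assumes Python's stdlib `hashlib.scrypt` is available, and the cost parameters follow common guidance but should be tuned for your hardware): store a unique random salt per password and compare digests in constant time.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking prefix matches via timing.
    return hmac.compare_digest(candidate, digest)
```

Note that this is the "don't write the crypto algorithms" division of labor in miniature: the memory-hard KDF comes from a vetted library, while the application code only wires it together per the published guidance.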


The main challenge isn't necessarily implementing the algorithms; it's keeping up with the security space.

Do you expect your team to keep up with new exploits in hardware and networking that might compromise your auth? That takes a lot of expertise and time, which they could instead spend building features that add business value.

It sounds cynical, and it kind of is, but offloading this onto external experts makes way more business sense and is probably what allows you to deliver at all. Security is just too big a space for every software company to have experts on staff to handle it.


The thing is, your "roll your own" auth is likely way smaller and less targeted than the library everyone uses. So the new exploits may simply not apply to your case.

Many famous vulnerabilities happen in parts of software people don't actually use. For example, the "Heartbleed" vulnerability in OpenSSL targeted the "heartbeat" feature few people actually used. In the "Log4Shell" vulnerability the exploit targeted LDAP support in log4j, which I have never seen used and didn't even know existed.

In addition, the "experts" may not be experts at all. You may think that whoever is writing that popular library has a team of security experts; it is used by big, serious companies, after all. But in reality it may just be one overworked guy, and people only notice when the system has been publicly compromised. And that's assuming the developers themselves don't have malicious intent, or haven't accepted someone with malicious intent into the team (for the latter, see the xz story).


I think this is a good retort to what was argued.

What's missed in my saying "roll your own auth", even though I did say it, is that you aren't implementing the network stack or the crypto yourself. As long as you keep your dependencies up to date, you shouldn't have any increased risk over using a third-party library.

If a novel security flaw is discovered, like the first SQL injection or XSS attack, then you definitely should know about it. The idea that not rolling your own security-related functionality absolves you of the responsibility to know and understand major security considerations is incorrect. It is every programmer's responsibility to be knowledgeable about the security risks in their space and the patterns that protect against them.


There have been major F-ups in recent history with Okta, CrowdStrike, and so on. Keycloak had some major long-standing vulnerabilities. I've had PRs accepted in popular open-source IAM libraries a bit too easily.

Yeah, we shouldn't roll our own cryptography, but security isn't as clean cut as this comment implies. It also frequently bleeds into your business logic.

Don't confuse externalizing security with externalizing liability.


As far as I know, tacking on security after the fact usually leads to issues. It should be a primary concern from the beginning. Even if you don't get it 100% right, you'd be surprised how many issues you can avoid by thinking about this during (and not after) development.

Dropping your rights to open files as soon as possible, for example, or thinking about what information would be available to an attacker should they get RCE on the process. Shoehorning solutions to these things in after the fact tends to be so difficult that it's a rare sight.

I have been recommended to think of security as a process rather than an achievable state and have become quite fond of that perspective.


You are describing domain-driven design. Outsource generic subdomains, focus your expertise on the core subdomains.

https://blog.jonathanoliver.com/ddd-strategic-design-core-su...


I think this helps, but I also think the default for any dev (particularly library authors) should be to minimize dependencies as much as possible. Dependencies have both a maintenance and a security cost. Bad libraries have deep and sprawling trees.

I've seen devs pull in entire frameworks just to get access to a single, simple-to-write function.


Even if you make the obviously wrong assumption that every library is more secure than the one you would write (which would do less, the vast majority of the time), we still end up in an eggs-in-one-basket situation.

Either you haven't thought through any cybersecurity game theory, or you are funded to post this bad argument over and over again by state agencies with large 0-day stockpiles.


I would like to dig into point 2 a bit. Do you think this is a matter of degree, or of kind? Does security, in this, imply a network connection, or some other way that exposes your application to vulnerabilities, or is it something else? Are there any other categories that you would treat in a similar way as security, but to a lesser degree, or that almost meet that threshold for a special category, but don't?



