Hacker News

Amazing that even within the last decade a site as large as LinkedIn could be storing unsalted passwords. How does anyone fail at this in the modern era?


It's actually really easy to do unintentionally. For an intervening middleware, a password field in a JSON object is just like any other field in a JSON object.

You may have some kind of logging / tracking / analytics somewhere that logs request bodies. You don't even have to engage in marketing shenanigans for that to be a problem, an abuse prevention system (which is definitely a necessity at their scale) is enough.

Storing unsalted passwords in the "passwords database" is uncommon. Storing request logs from e.g. the Android app's API gateway, and forgetting to mark the `password` field in the forgot password flow as sensitive? Not so uncommon.
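The failure mode described above can be sketched in a few lines: a generic request logger redacts fields by name, and the obscure endpoint slips through because nobody added its field to the deny-list. (The field names and deny-list here are hypothetical, purely for illustration.)

```python
# Hypothetical hand-maintained deny-list; the bug is that nobody
# remembered the forgot-password flow names its field "new_password".
SENSITIVE_FIELDS = {"password", "ssn", "credit_card"}

def redact(body: dict) -> dict:
    """Replace known-sensitive fields before the body hits the log."""
    return {
        k: "[REDACTED]" if k in SENSITIVE_FIELDS else v
        for k, v in body.items()
    }

login = redact({"email": "a@example.com", "password": "hunter2"})
reset = redact({"token": "abc123", "new_password": "hunter2"})

print(login)  # password is redacted
print(reset)  # "new_password" slips through to the logs in plaintext
```

The login endpoint looks safe in the logs, which is exactly why the reset endpoint's leak goes unnoticed.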


A company as big as LinkedIn should have bots continually accessing their site with unique generated passwords etc., and then be searching for those secrets in logging pipelines, bytes on disk, etc. to see where they get leaked. I know much smaller companies that do this.
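The canary approach described above can be sketched like this, assuming a bot that logs in with a unique generated password and a sweep job over the logging pipeline (names and log formats are invented):

```python
import secrets

def make_canary() -> str:
    # High-entropy token that will never occur by chance, so any hit
    # in a log line is a real leak, not a false positive.
    return "canary-pw-" + secrets.token_hex(16)

def scan_logs(log_lines, canaries):
    """Return the lines that leaked any canary secret."""
    return [line for line in log_lines
            if any(c in line for c in canaries)]

canary = make_canary()
# The bot logs in with the canary password; later we sweep the
# logging pipeline (files, analytics dumps, bytes on disk, ...) for it.
logs = [
    "POST /login user=bot@example.com status=200",
    f'request_body={{"user": "bot", "password": "{canary}"}}',
]
leaks = scan_logs(logs, [canary])
print(f"{len(leaks)} leaked line(s)")  # 1 leaked line(s)
```

The hard part in practice is coverage of the sweep (every log sink, every endpoint the bot exercises), not the matching itself.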

Yes, it's easy to fuck up. But a responsible company implements mitigations. And LinkedIn can absolutely afford to do much more.


Such bots can certainly solve part of the problem, but they can't fix the issue entirely.

If your logging is on an obscure enough endpoint (password reset flow in the Android app's API gateway), you may forget to add that endpoint to the bot, just like you may forget to mark it as sensitive in your logging system.

At this scale, the developers working on these esoteric endpoints might not even be aware that such a bot exists.


I always picture a random middle manager in $large_organisation being told about something like this, and then they work out the angles and try to find the benefit.

If the method works, and it shows that the logging feature Fred got so much credit for is storing passwords, what are the political implications of that? Can our intrepid middle manager steal some of Fred's glory? Or is Fred an ally and it should be carefully handled? Or do they sit on it and wait until an opportune moment to destroy Fred?

This is the kind of reasoning process I think goes on, because I've seen very few large organisations make actually-good technical decisions.


Sadly, this is not as far-fetched as we here on HN would like it to be.


The fact that you think random middle managers are all that psychopathic really says more about you than it does some hypothetical middle manager.

Are there psychopaths and Machiavellian schemers in management? Certainly. Are they the majority? Almost certainly not, unless you're working for absolutely the wrong company.

As the Brits would say, "cock-up before conspiracy."


No. It may not be a conscious Machiavellian scheme, but it's a common attitude among middle managers. They are extremely sensitive to their reputation, which is why they punish people who make them look bad, even when what those people did is good for the company. Finding security vulnerabilities or wasted resources is met with ambiguous hostility.

And unfortunately, a lot of people aren't emotionally intelligent enough to recognize that many managers use emotional reactions to redirect the room away from them. Because if you're the angry one, people won't ask questions like "didn't someone mention the possibility of this to you 6 months ago?"


Everyone is extremely sensitive to their reputation. That is just human nature. Someone who can't factor that into their actions and communications is frankly lacking basic social skills.


> Everyone is extremely sensitive to their reputation. That is just human nature

I don't really agree with that, but let's say I do. Middle management is a unique position where their sensitivity is a bigger liability to everyone else. They have some power, but not a lot. They ironically have higher visibility in the company than upper management. And the job requires 0 technical understanding of what they manage.

So that puts them in an awkward position that is often abused. If they feel someone is going to get in trouble, they will make sure that's not them, which is a terribly common instinct. When a developer tells the company there is a problem to address that could threaten the product, that's a good thing that should be welcomed. Instead, many middle managers see that developer as the problem.

> Someone who can't factor that into their actions and communications is frankly lacking basic social skills.

No argument there.


There are so many things that companies as big as LinkedIn should be doing but aren't :(


that would require hiring security personnel, which they can't afford to do. /s


I think the new approach is to "hire" LLM agents to do the job, unless the hiring manager can prove they exhausted all ways an LLM could possibly have done the task.


Would this be solved by providing the client with a (frequently rotated) public key to encrypt the password field specifically before submitting to the server, so that the only place it can be decrypted and stored is the authentication service at the very end of its journey through the network?
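The idea above, client-side encryption of just the password field under the auth service's public key, can be sketched using the third-party `cryptography` package (assumed installed; key rotation and key distribution are elided):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The authentication service holds the private key; the (frequently
# rotated) public key is what the client fetches before submitting.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Client side: only the password field is encrypted, so every
# middlebox, gateway, and log line in between sees only ciphertext.
ciphertext = public_key.encrypt(b"hunter2", OAEP)

# Auth service, at the very end of the chain, is the only place
# the plaintext password can be recovered.
plaintext = private_key.decrypt(ciphertext, OAEP)
```

A real design would also need to bind the ciphertext to a session to prevent replay, which is where PAKE-style protocols discussed below come in.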


A new public key per password-mutating session is quite an interesting idea.

It does have some challenges in introducing a read-before-write to fetch the session key at the start of the session, but given the relatively low call volume of such flows that might be a small price to pay to simplify security audits and de-risk changes to any service in the call chain.


The existing solution for this is SRP (Secure Remote Passwords http://srp.stanford.edu/).

Unfortunately, my understanding is that it's trivial to implement unsoundly, and there isn't an abundance of good implementations across languages.

It’s been a while since I’ve looked, though, so maybe there is a newer, less radioactive approach. But yes, never actually sending the authenticator itself (and doing so in a way that the proof is valid only once) would stop this sort of thing cold.


SRP, even the latest version, is unfortunately pretty bad in comparison to modern PAKE protocols: https://blog.cryptographyengineering.com/should-you-use-srp/


They must have not asked enough Leetcode Hard questions in interviews.


I am stealing this. Made my day :)


LinkedIn at one point were continually pressuring people into handing over their email credentials in the name of making it easy to find your contacts.

So yeah, LinkedIn have never been exactly a bastion of IT Security.


They (and the users) have a very real use case for that, just like a contacts app needs all of that. The problem is that they didn't keep it safe.


No user ever had a real use case for seeing a button that says "invite X" that doesn't send an invite on the platform, but instead sends an email to X who doesn't have a Linkedin account.

And if you decline, it asks you again. Two times using different wording.


You'd be surprised how many features "tech" people think nobody uses (like a share button on a website) are actually very popular. That's likely the reason the feature still exists, as everything is most likely A/B tested to death.

I was not only talking about that though, but also that they can build shadow profiles and recommend people to you that way.


Same company that requires you upload a biometric scan of your face paired with your passport for ""verification"" (despite not needing it on signup) if you want to enable MFA, btw ;-)

On a related note, I no longer have an active linkedin account.


IIRC LinkedIn was one of the breaches where I got a spam email to my LinkedIn-specific address, told them, and they were like "can't be us - must be you who has been hacked". And then later, "ah yeah, was us, but no personal data was stolen". As if an email address weren't personal data - lucky me for having a catch-all domain and being able to just block the address I had used with LinkedIn.


I worked for a company with millions of users that had plaintext passwords in the DB. The login had been rolled from scratch in the days before you could get decent, tested off-the-shelf code for their particular stack. There were always so many fires to put out and projects to keep the wages being paid that it never got looked at. It got bought by Microsoft and eventually they just consumed the whole thing somehow, so it's gone now.

It did allow me to cheekily run a SQL GROUP BY once to see what the most common passwords were, though. Top password was actually "trustno1" IIRC, followed by all the usual suspects, e.g. abcdefg, 12345678 etc. (there were no meaningful password rules)
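That cheeky query is easy to reconstruct; here is a toy version against an in-memory SQLite table (schema and rows invented for illustration, the real DB was obviously much larger):

```python
import sqlite3

# Toy reconstruction of the GROUP BY against a plaintext-password table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (email TEXT, password TEXT)")
db.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("a@x.com", "trustno1"), ("b@x.com", "trustno1"),
     ("c@x.com", "12345678"), ("d@x.com", "hunter2")],
)
top = db.execute(
    "SELECT password, COUNT(*) AS n FROM users "
    "GROUP BY password ORDER BY n DESC"
).fetchall()
print(top)  # top[0] is ('trustno1', 2)
```

Of course, the fact that this query is even possible is the whole problem: against properly salted and hashed passwords, GROUP BY would find no duplicates at all.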


> How does anyone fail at this in the modern era?

Most probably some ancient legacy mainframe or other integration that nobody really has the time and budget to clean up and migrate to something more modern.

The larger the company, the larger the risk for ossification of anything deemed "business critical" because even a minuscule outage of one hour now is six if not seven figures worth of "lost" time.


LinkedIn isn't old enough to have anything ancient. It was launched in 2003, and even then you'd get laughed at for suggesting storing passwords in plaintext.


Plaintext, sure, but it was certainly still common to use unsalted SHA-256, which is very quickly cracked if your password is short or common.
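The weakness of an unsalted fast hash is easy to demonstrate: one pass over a wordlist cracks every user who picked a common password, since identical passwords always produce identical hashes. (Usernames and wordlist here are made up.)

```python
import hashlib

# Leaked dump of unsalted SHA-256 hashes, keyed back to users.
leaked_hashes = {
    hashlib.sha256(b"trustno1").hexdigest(): "user_a",
    hashlib.sha256(b"letmein").hexdigest(): "user_b",
}

# A tiny dictionary attack: hash each guess once and compare.
wordlist = ["123456", "password", "trustno1", "letmein"]
cracked = {
    user: guess
    for guess in wordlist
    for h, user in leaked_hashes.items()
    if hashlib.sha256(guess.encode()).hexdigest() == h
}
print(cracked)  # {'user_a': 'trustno1', 'user_b': 'letmein'}
```

With a per-user salt, each guess would have to be re-hashed per user, and with a deliberately slow hash (bcrypt, scrypt, Argon2) each attempt would cost far more than one SHA-256 call.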


Doesn't mean that the infra is still ancient. What I see a lot is tech debt from migrations. Lots of times both the old and new systems have to work together for a period of time, so you leave certain legacy protocols and flags in place for the transition period and then the new system is never fully "updated" to the new standards. Pre win2k AD, file path lengths, encryption protocols, etc etc. Sure, the new system is "up to date" but the old compatibility settings remain.


This is also how feature flag services become mission critical because everything gets launched behind feature flags that never get cleaned up


For all the talk of AI Slop, I don’t hear much about the fact that we have been suffering from Outsourced Slop for decades now. I suspect that is how this kind of thing also fails at LinkedIn. I say that based on my experience dealing with outsourcing companies and the products they produce through outsourced programmers.

It’s really the same problem as with AI code: without strong and competent management that can set intelligent expectations and requirements and test for them, you will surely get what appears to all the business and leadership types to be an equivalent product, without any sense that it’s slop underneath the surface.


I'm on board with the cheap-offshoring and bad-incentives motif, but I feel this has to be augmented with a mention of the senior cowboy coder (who just went into retirement). Most likely these stereotypes will be joined in the future by vibe coders and AI-powered juniors, but as someone who has worked in this industry for a couple of decades, give or take - we've learned how to deal with these by now.


> the senior cowboy coder (who just went into retirement)

They just went into retirement?


I've seen coworkers at Big Tech Co™ make huge security blunders despite attending prestigious universities (Berkeley, Stanford, etc) and having 5+ years of industry experience. No LLM slop required. Just rushing to meet deadlines while requirements shift rapidly enough that details get overlooked.



