It's actually really easy to do unintentionally. For an intervening middleware, a password field in a JSON object is just like any other field in a JSON object.
You may have some kind of logging / tracking / analytics somewhere that logs request bodies. You don't even have to engage in marketing shenanigans for that to be a problem; an abuse prevention system (which is definitely a necessity at their scale) is enough.
Storing unsalted passwords in the "passwords database" is uncommon. Storing request logs from e.g. the Android app's API gateway, and forgetting to mark the `password` field in the forgot password flow as sensitive? Not so uncommon.
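To make "marking a field as sensitive" concrete, here is a minimal sketch of what a request-body redactor in a logging middleware might look like. The field names in the denylist are assumptions for illustration, and the denylist itself demonstrates the failure mode: miss one alias on one obscure endpoint and plaintext secrets land in the logs.

```python
import json

# Assumed denylist of sensitive field names -- this is exactly the thing
# that's easy to get wrong: forget one alias ("passwd", "new_password", ...)
# on one obscure endpoint and the plaintext ends up in the request logs.
SENSITIVE_FIELDS = {"password", "new_password", "current_password", "passwd"}

def redact(value):
    """Recursively replace sensitive fields in a parsed JSON body."""
    if isinstance(value, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE_FIELDS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value

def log_request_body(raw_body: str) -> str:
    """What a logging middleware should do before persisting a body."""
    try:
        body = json.loads(raw_body)
    except ValueError:
        return raw_body  # non-JSON bodies pass through untouched
    return json.dumps(redact(body))
```

Note that the redaction only works if the middleware ever runs for that endpoint and the field name is actually on the list; a forgot-password flow that sends `new_password` when the list only contains `password` leaks silently.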
A company as big as LinkedIn should have bots continually accessing their site with unique generated passwords etc., and then be searching for those secrets in logging pipelines, bytes on disk, etc. to see where they get leaked. I know much smaller companies that do this.
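The canary approach above can be sketched in a few lines: generate unique, high-entropy passwords for synthetic test accounts, then grep every logging pipeline and storage system for them. The function names here are illustrative, not any particular company's tooling.

```python
import secrets

def new_canary(registry: set) -> str:
    """Generate a unique, high-entropy password for a synthetic test account.
    Uniqueness is what makes a later match in any log an unambiguous leak."""
    token = "canary-" + secrets.token_hex(16)
    registry.add(token)
    return token

def scan_for_leaks(lines, registry):
    """Scan log lines (or any byte stream split into lines) for canaries.
    Any hit means a plaintext secret survived somewhere it shouldn't."""
    hits = []
    for lineno, line in enumerate(lines, 1):
        for canary in registry:
            if canary in line:
                hits.append((lineno, canary))
    return hits
```

The bot logs in (or resets passwords) with these canaries through the real client apps, and the scanner runs continuously over log stores, analytics sinks, and raw bytes on disk.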
Yes, it's easy to fuck up. But a responsible company implements mitigations. And LinkedIn can absolutely afford to do much more.
Such bots can certainly solve part of the problem, but they can't fix the issue entirely.
If your logging is on an obscure enough endpoint (password reset flow in the Android app's API gateway), you may forget to add that endpoint to the bot, just like you may forget to mark it as sensitive in your logging system.
At this scale, the developers working on these esoteric endpoints might not even be aware that such a bot exists.
I always picture a random middle manager in $large_organisation being told about something like this, and then they work out the angles and try to find the benefit.
If the method works, and it shows that the logging feature Fred got so much credit for is storing passwords, what are the political implications of that? Can our intrepid middle manager steal some of Fred's glory? Or is Fred an ally and it should be carefully handled? Or do they sit on it and wait until an opportune moment to destroy Fred?
This is the kind of reasoning process I think goes on, because I've seen very few large organisations make actually-good technical decisions.
The fact that you think random middle managers are all that psychopathic really says more about you than it does some hypothetical middle manager.
Are there psychopaths and Machiavellian schemers in management? Certainly. Are they the majority? Almost certainly not, unless you're working for absolutely the wrong company.
As the Brits would say, "cock-up before conspiracy."
No. It may not be a conscious Machiavellian scheme, but it's a common attitude among middle managers. They are extremely sensitive to their reputation, which is why they punish people who make them look bad, even when it's something good for the company. Finding security vulnerabilities or wasted resources is met with ambiguous hostility.
And unfortunately, a lot of people aren't emotionally intelligent enough to recognize that many managers use emotional reactions to redirect the room away from them. Because if you're the angry one, people won't ask questions like "didn't someone mention the possibility of this to you 6 months ago?"
Everyone is extremely sensitive to their reputation. That is just human nature. Someone who can't factor that into their actions and communications is frankly lacking basic social skills.
> Everyone is extremely sensitive to their reputation. That is just human nature
I don't really agree with that, but let's say I do. Middle management is a unique position where their sensitivity is a bigger liability to everyone else. They have some power, but not a lot. They ironically have higher visibility in the company than upper management. And the job requires 0 technical understanding of what they manage.
So that puts them in an awkward position that is often abused. If they feel someone is going to get in trouble, they will make sure that's not them, which is a terribly common instinct. When a developer tells the company there is a problem to address that could threaten the product, that's a good thing that should be welcomed. Instead, many middle managers see that developer as the problem.
> Someone who can't factor that into their actions and communications is frankly lacking basic social skills.
I think the new approach is to "hire" LLM agents to do the job, unless the hiring manager can prove they exhausted all ways an LLM could possibly have done the task.
Would this be solved by providing the client with a (frequently rotated) public key to encrypt the password field specifically before submitting to the server, so that the only place it can be decrypted and stored is the authentication service at the very end of its journey through the network?
A new public key per password-mutating session is quite an interesting idea.
It does have some challenges in introducing a read-before-write to fetch the session key at the start of the session, but given the relatively low call volume of such flows that might be a small price to pay to simplify security audits and de-risk changes to any service in the call chain.
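The flow being proposed can be illustrated with a toy sketch. This uses textbook RSA with tiny primes purely to show the shape of the protocol; it is completely insecure as written, and a real deployment would use an audited library (e.g. RSA-OAEP or libsodium sealed boxes) with per-session key rotation. All function names are assumptions.

```python
# Toy illustration of the flow only. Textbook RSA with tiny primes is
# completely insecure -- a real system would use an audited construction
# such as RSA-OAEP or libsodium sealed boxes, with keys rotated per session.

def make_keypair():
    """Auth service generates a keypair; only the public half leaves it."""
    p, q = 61, 53                       # toy primes, far too small for real use
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)
    return (n, e), (n, d)

def client_encrypt(password: str, public_key):
    """Client encrypts the password field before the request enters the
    network, so gateways, middleware, and log pipelines only ever see
    ciphertext for that field."""
    n, e = public_key
    return [pow(b, e, n) for b in password.encode()]

def auth_service_decrypt(ciphertext, private_key):
    """Only the authentication service, at the very end of the call chain,
    holds the private key needed to recover the plaintext."""
    n, d = private_key
    return bytes(pow(c, d, n) for c in ciphertext).decode()
```

Every hop in between can log the full request body and still capture nothing usable, which is exactly the property that simplifies audits of the intermediate services.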
Unfortunately my understanding is that it’s trivial to implement unsoundly but it’s also not something for which there are an abundance of good implementations across languages.
It’s been a while since I’ve looked though, so maybe there is a newer, less radioactive approach. But yes, never actually sending the authenticator itself (and doing so in a way that the proof is valid only once) would stop this sort of thing cold.
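The "never send the authenticator, and make each proof single-use" idea can be shown with a heavily simplified challenge-response sketch. This is NOT a real PAKE (SRP or OPAQUE are the proper constructions, with stronger properties, including not sharing the salt this naively); it only demonstrates that the password never crosses the wire and that a logged proof cannot be replayed. The class and function names are illustrative.

```python
import hashlib
import hmac
import secrets

# Heavily simplified challenge-response, NOT a real PAKE (see SRP / OPAQUE).
# The point: the password itself never crosses the wire, and each proof
# binds to a single-use server nonce, so a logged request body can't be
# replayed or mined for the plaintext password.

SALT = b"per-user-salt"  # assumed shared out-of-band; real PAKEs do better

class Server:
    def __init__(self, password: str):
        # Server stores a derived verifier, never the raw password.
        self._key = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)
        self._outstanding = set()

    def challenge(self) -> bytes:
        """Issue a fresh single-use nonce for one login attempt."""
        nonce = secrets.token_bytes(16)
        self._outstanding.add(nonce)
        return nonce

    def verify(self, nonce: bytes, proof: bytes) -> bool:
        if nonce not in self._outstanding:
            return False                   # unknown or already-used nonce
        self._outstanding.discard(nonce)   # consume it: proofs are one-shot
        expected = hmac.new(self._key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

def client_proof(password: str, nonce: bytes) -> bytes:
    """Client proves knowledge of the password without transmitting it."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)
    return hmac.new(key, nonce, hashlib.sha256).digest()
```

Any middleware logging the request sees only a nonce and an HMAC that is worthless after one use, which is why this class of design stops the log-leak problem cold.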