Which is exactly why companies are aggressive about deprecating old products and services. "But why can't they just leave them running and not touch it?" Because every such service eventually becomes a security hole. The only secure code is no code.
While your argument seems to make sense on the surface, it fails under deeper inspection.
What security implications did Google Reader have? I do understand that keeping older APIs and endpoints for authentication and authorization around is indeed dangerous. However, if your architecture allows the mere clients of that authorization infrastructure to be exploited, I think the problem isn't keeping the products running. You designed something inherently insecure.
Google Reader used Google accounts for authentication, so an exploit in Reader could potentially compromise your entire Google account. This very article gives an example of that: Looker Studio was used to reveal the name on any account, even though most accounts have likely never used Looker Studio.
Google could mitigate this by not having universally shared accounts across all services, but they're not going to do that because most users would find that inconvenient.
The counter-argument would be that if you consider the name on an account to be something needing security protections, then Looker Studio code should not have had the ability to access the name on an account.
Since you're the person who didn't answer with "hah, you say: don't code bugs", I'll take some time to answer you.
> Google Reader used Google accounts for authentication, so an exploit in Reader could potentially compromise your entire Google account. This very article gives an example of that: Looker Studio was used to reveal the name on any account, even though most accounts have likely never used Looker Studio.
Guess what else uses Google Accounts? Everything Google needs authentication for. When designing software, so much effort is put into its design, possible user stories, and architecture. We put so much effort into unit tests, integration tests, regression tests, and whatnot. Security is no different. When designing services, considering the data flow is critical for security. An engineering organization should take stock of which data needs protection; that data should live behind separate, isolated services.
They can create security islands for critical parts. Why does Looker need to get the full name behind an e-mail address when that person has never initiated two-way contact? Or rather, why can it in the first place? There is a service that does this resolution (Contacts?). Google failed to limit this kind of query when creating that service. It has nothing to do with the functionality of Looker Studio. Now anything that touches this service has the same problem, old products and new alike. Deprecating Looker Studio won't resolve these issues, just as deprecating Reader didn't.
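To make the idea concrete, here is a minimal sketch (all names and data are hypothetical, not Google's actual API) of such a narrow-waist design: every product, whether Looker Studio or Reader, resolves names through one central service, and that service, not each caller, enforces the two-way-contact rule.

```python
# Hypothetical central name-resolution service enforcing the
# two-way-contact rule at a single choke point.

# Pairs (a, b) meaning "a has initiated contact with b".
CONTACT_EDGES = {
    ("alice@example.com", "bob@example.com"),
    ("bob@example.com", "alice@example.com"),
    ("alice@example.com", "carol@example.com"),  # one-way only
}

FULL_NAMES = {
    "alice@example.com": "Alice Ander",
    "bob@example.com": "Bob Baker",
    "carol@example.com": "Carol Clark",
}

def resolve_full_name(requester: str, target: str):
    """Return the target's full name only if contact is mutual.

    Every client product goes through this one function, so the
    policy cannot be bypassed by an old or forgotten service.
    """
    mutual = (requester, target) in CONTACT_EDGES \
         and (target, requester) in CONTACT_EDGES
    if mutual:
        return FULL_NAMES.get(target)
    return None  # deny by default; callers never see raw PII

# Mutual contacts: the name is released.
print(resolve_full_name("alice@example.com", "bob@example.com"))
# One-way contact: denied, no matter which product asks.
print(resolve_full_name("alice@example.com", "carol@example.com"))
```

The point of the sketch is that the deny-by-default check lives in exactly one place; deprecating or compromising any individual client product neither fixes nor bypasses the policy.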
> Google could mitigate this by not having universally shared accounts across all services, but they're not going to do that because most users would find that inconvenient.
The problem here is not the sharing of the authentication provider but the authorization of the kinds of queries allowed. Nor is it the age of the service Looker provides. Yes, you may be able to extract some data if the pod running Looker Studio gets compromised, maybe even PII. Dependencies can get old and can have critical bugs. However, layering and a considered security architecture should prevent them from being exploited to extract large swaths of data. That's why creating those narrow-waist security points is so important. They need the same care as the correctness of the software and other UX goals.
Even a smaller company needs to consider these architectural details when designing integrated services. Under the GDPR, you must be able to delete every piece of PII; it already gently forces you to do the right thing. It is totally unacceptable that bigger companies like Google skip this.
Google Reader was the last major user of the old social sharing stack at Google designed for Buzz, a product mostly remembered these days for United States v. Google and the 2011 FTC consent decree. When people redesigned Google's social stack for G+ (e.g. all the infrastructure like Zanzibar underlying Circles, which to this day is close to state of the art!), the choice was between migrating Reader to the new tech, which nobody could justify the cost of, or keeping the old tech around for Reader when that tech was known to have serious privacy issues leading to a major lawsuit.
That describes most/all software older than a decade with no updates applied. How many libraries was Google Reader using that now have known vulns? I'm guessing it's more than zero.
This is the same logic as "just don't write bugs", if it was that easy everybody would already be doing it.
If “what security implications does xyz have” were easy to answer, there would never be another hack or data breach. The simple answer is that we don’t know, and it is very expensive to find out.
I'm indicating that "it isn't easy to answer" is the root of the problem here.
It means that the engineering teams were incompetent at designing a system with a "narrow waist" security infrastructure. Then the solution isn't deprecating xyz but fixing the security infrastructure; otherwise the same issues will surface again and again in newer products.
There is still a standard password recovery flow with a mail/capability URL that is reasonably safe and hasn't changed much in a decade.
It is the bullshit some security advisories brought us that introduced new dangers: sharing telephone numbers, for example...
These threats are in many cases also worse than losing an account, because now the data can easily be correlated, and a lot of 2FA bullshit has spread exactly that.