Not calling this app Covfefe Meets Bagel seems like a bit of a missed opportunity to me. Not making the Firebase instance private seems like a bigger missed opportunity though.
I suppose this could be an interesting attack vector on folks. Pick a group you hate, create a website specifically targeted at that group, get their personal information, and then have a "data breach". Just being part of some groups could seriously impact people in certain circles.
It's not like there is much risk for the website owners, given past breaches. At worst you fold the company, and even that isn't very certain since the TOS seems to be king.
Sen. Marco Rubio is not a Trump fan (now that is an understatement), so I'm not sure that makes it less likely. I have doubts in this case and chalk it up to crappy developers unless something more comes of it.
It still seems like an attack vector with a really good risk/reward ratio. Thinking about it, it also seems like an interesting way to feed an election campaign's big-data operation.
Politics aside, is the person who found this exploit really a "security researcher" if they sent the data to a news publisher instead of responsibly disclosing the issue?
Yes. This way the users know to protect themselves as quickly as possible. It's not like they made the app easier to hack with the information in the article; anyone who looks at the app will see the data.
By definition (https://en.wikipedia.org/wiki/Responsible_disclosure) this is not "Responsible", and I don't believe it's little-r responsible either. They should have at least told the operators and TC simultaneously, and not (according to the article) relied on TC to tell the operators.
"Responsible disclosure" is an Orwellian term coined by vendors to coerce researchers. The accepted term among professional researchers is "coordinated disclosure". This wasn't coordinated disclosure, but not all disclosure has to be.
That's a false dichotomy. Coordinated disclosure can sometimes optimize outcomes for some subset of users. Sometimes it doesn't. Ethical judgements in vulnerability disclosure are complicated. Sometimes a vendor and the majority of its user community would prefer disclosure be suppressed, because it saves them work (see: "Patch Tuesday"). Sometimes that impulse lines up with what's best for the world, and sometimes it doesn't. See what's so Orwellian about "responsible disclosure" now? The whole point of rejecting the term is that there isn't a one-size-fits-all answer with which you must comply in order to be "responsible".
Anyways: hard no to the suggestion that, in order to be a "real researcher", you have to coordinate your disclosures. To be a serious researcher, you just have to be serious about finding vulnerabilities.
(Semantic reminder: our field uses the term "researcher" in a way closer to the journalism definition of the word than the academia definition.)
I'm sure 'tptacek is tired of grinding this particular axe so I'll trot out the usual arguments:
- Discovery didn't create the bug. You have no idea who has been exploiting the shit out of it already.
- Vendors will have all sorts of unreasonable responses, from ignoring you to threatening legal action to dragging their feet.
- Vulns are work product. You are entitled to zero of the researcher's time and effort unless you're paying them for it.
I'm not saying coordinated disclosure is bad either! I prefer to do it when I find stuff. We found a bug in NextJS last week and we did coordinated disclosure (I'm talking about it now because they released a fix). I'm saying the researcher owns the bug.
Downloading the data is always harder to defend than just discovering it. But unless and until they do something malicious, or sell the data to the highest bidder, I'm going to say it's fair game. Really, we don't know how many independent copies were made while the data was live, and however much blame I assign to the hacker has got to be tiny compared to the responsibility of the app maker.
Telling the operators first creates a situation where there is a reasonable chance that no one malicious will obtain the data. Publishing a public blog post greatly increases the chance the entire dataset will be leaked to the public.
The researcher can only change the likelihood that the data isn't obtained by a malicious actor if a malicious actor hasn't already obtained it, and the researcher usually has no way of knowing whether one has. If you optimize for the worst-case scenario, in which a black-hat hacker has already gotten there, it makes sense to prioritize notifying users by all available means so they can attempt to remediate the data loss.
Nearly all security vulnerabilities are published in blog posts at some point, because most companies deny a problem even exists or needs to be fixed. Sometimes they just don't respond at all, and the person who discovered the vulnerability publishes anyway as a sort of "punishment" for the company's lack of response.
The article mentions it was because their Firebase database was unsecured, meaning anyone who knew the URL could get access to all the data. That was the default for a long time, and Firebase will send you email reminders if you leave it unsecured. The developer ignored the best practices in the Firebase documentation and the email reminders that come out once a week (I think).
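For anyone who hasn't seen what "anyone who knew the URL" means in practice: the Firebase Realtime Database exposes a REST API where appending ".json" to a path returns that subtree, and if the rules allow public reads, an unauthenticated GET on the root returns the whole database. A minimal Python sketch (the project name here is made up, not the actual app's):

    import json
    import urllib.request

    # Hypothetical project name. This works against any Realtime Database
    # whose rules are left wide open, e.g.:
    #   { "rules": { ".read": true, ".write": true } }
    DB_URL = "https://some-dating-app.firebaseio.com/.json"

    # With public read rules, no auth token is needed: one GET on the
    # root ".json" endpoint dumps the entire database as one JSON tree.
    with urllib.request.urlopen(DB_URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2)[:1000])  # peek at the first KB

Locking it down is a small change to the rules (e.g. requiring auth != null, or per-user rules keyed on auth.uid), which is exactly what those reminder emails nag you about.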