Donald Daters, a dating app for Trump supporters, leaked its users’ data (techcrunch.com)
49 points by fooey on Oct 16, 2018 | hide | past | favorite | 29 comments


Not calling this app Covfefe Meets Bagel seems like a bit of a missed opportunity to me. Not making the Firebase instance private seems like a bigger missed opportunity though.


I suppose this could be an interesting attack vector on folks. Pick a group you hate, create a website specifically targeted at that group, get their personal information, and then have a "data breach". Just being part of some groups could seriously impact people in certain circles.

It's not like there is much risk for the website owners, given past breaches. At worst you fold the company, and even that isn't very certain, since the TOS seems to be king.


"Emily Moreno, the app’s founder and a former aide to Sen. Marco Rubio"

Seems unlikely in this case.


Sen. Marco Rubio is not a Trump fan (now that is an understatement), so I'm not sure that makes it less likely. I have doubts in this case and chalk it up to crappy developers unless something more comes of it.

It still seems like an attack vector with a really good risk/reward ratio. Thinking about it, it also seems like an interesting way to feed an election campaign's big-data operation.


Politics aside, is the person who found this exploit really a "security researcher" if they sent the data to a news publisher instead of responsibly disclosing the issue?


Yes. This way the users know to protect themselves as quickly as possible. It's not like the information in the article made the app easier to hack; anyone who looks at the app will see the data.


By definition (https://en.wikipedia.org/wiki/Responsible_disclosure) this is not "Responsible", but I don't believe it's little-r responsible either. They should have at least told the operators and TC simultaneously, and not (according to the article) relied on TC to tell the operators.


"Responsible disclosure" is an Orwellian term coined by vendors to coerce researchers. The accepted term among professional researchers is "coordinated disclosure". This wasn't coordinated disclosure, but not all disclosure has to be.

https://hn.algolia.com/?query=author:tptacek%20responsible%2...


Surely though the point of coordinating a disclosure is to responsibly ensure that nobody is needlessly put at risk whilst the vendor works on a fix?


That's a false dichotomy. Coordinated disclosure can sometimes optimize outcomes for some subset of users. Sometimes it doesn't. Ethical judgements in vulnerability disclosure are complicated. Sometimes a vendor and the majority of its user community would prefer disclosure be suppressed, because it saves them work (see: "Patch Tuesday"). Sometimes that impulse lines up with what's best for the world, and sometimes it doesn't. See what's so Orwellian about "responsible disclosure" now? The whole point of rejecting the term is that there isn't a one-size-fits-all answer with which you must comply in order to be "responsible".

Anyways: hard no to the suggestion that, in order to be a "real researcher", you have to coordinate your disclosures. To be a serious researcher, you just have to be serious about finding vulnerabilities.

(Semantic reminder: our field uses the term "researcher" in a way closer to the journalism definition of the word than the academia definition.)


Fair comment, that's a reasonable answer. Thanks :-)


I'm sure 'tptacek is tired of grinding this particular axe so I'll trot out the usual arguments:

- Discovery didn't create the bug. You have no idea who has been exploiting the shit out of it already.

- Vendors will have all sorts of unreasonable responses, from ignoring you to threatening legal action to dragging their feet.

- Vulns are work product. You are entitled to zero of the researcher's time and effort unless you're paying them for it.

I'm not saying coordinated disclosure is bad either! I prefer to do it when I find stuff. We found a bug in NextJS last week and we did coordinated disclosure (I'm talking about it now because they released a fix). I'm saying the researcher owns the bug.

EDIT: Oh no! Of course he beat me to it.


> ensure that nobody is needlessly put at risk

People are needlessly at risk if they’re still using a vulnerable service they could have been told not to use while you wait for the vendor.


I don't agree. Maybe if they just reported it instead of sending the data. That implies malicious intent.


Downloading the data is always harder to defend than just discovering it. But unless and until they do something malicious, or sell the data to the highest bidder, I'm going to say it's fair game. Really we don't know how many independent copies were made while the data was live, and however much blame I assign to the hacker has got to be tiny compared to the responsibility of the app maker.


How does sending data to a journalist imply malicious intent? That doesn't make any sense at all.


Not sure how this is much different than posting a blog post about it.


Telling the operators first creates a situation where there is a reasonable chance that no one malicious will obtain the data. Publishing a public blog post greatly increases the chance the entire dataset will be leaked to the public.


The researcher can only change the likelihood the data isn't obtained by a malicious actor if a malicious actor hasn't already obtained the data. The researcher usually has no way of telling if a malicious actor has the data. Optimizing for the worst-case scenario, which is yes, a black-hat hacker has already gotten there, it makes sense to prioritize notification of users by all available means so they can attempt to remediate the data loss.


Nearly all security vulnerabilities are published in blog posts at some point, because most companies deny a problem even exists or needs to be fixed. Sometimes they just don't respond at all, and the person who discovered the vulnerability publishes anyway as a sort of "punishment" for the company's lack of response.


I would say the difference is that they sent the data to a third party.


Minimum Viable Products strike again.


Anyone with half a brain could tell this was a bare-minimum effort that you shouldn't trust with your information.


The best people.


Both the people that use the app and the people that attack it need to reevaluate what's important in life.


Why does everyone seem to make the same mistake? DO NOT roll your own crypto.


The article mentions it was because their Firebase database was unsecured, meaning anyone who knew the URL could get access to all the data. That was the default for a long time, and Firebase will send you email reminders if you leave it unsecured. The developer ignored the best practices mentioned in the Firebase documentation and the email reminders that come out once a week (I think).
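For context, Firebase Realtime Database access is controlled by a JSON rules document. The wide-open configuration that used to be common looks roughly like this, versus a locked-down version scoping each user to their own record (a sketch of the general pattern, not this app's actual config):

```json
{
  "rules": {
    // Wide open: anyone on the internet can read and write everything.
    // ".read": true,
    // ".write": true

    // Locked down: each authenticated user can only touch their own node.
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

With the open rules, the database is also exposed over Firebase's REST interface, which is why a hardcoded URL in the app is all an attacker needs.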


While I suspect a couple of layers/points here, I can't zero in on the specific target of your sarcasm/humor.

"The data was accessible from a public and exposed Firebase data repository, which was hardcoded in the app."


Crypto wasn’t involved - this was just a publicly available database of all their users.

The crypto version of this I guess would be to store the plaintext copy of the encrypted content as metadata on the “encrypted” content?

No amount of correct encryption or (in this case) API access policies saves you if you just publish all the data publicly.
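To make the analogy concrete, here's a toy sketch (entirely hypothetical, not from the article) of "encrypting" data while shipping the plaintext alongside it as metadata. The cipher can be perfectly correct and still protect nothing:

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher -- stands in for any real encryption scheme."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
record = b"user@example.com"

stored = {
    "ciphertext": xor_encrypt(record, key),
    # The bug: the plaintext rides along as "metadata",
    # so the encryption protects nothing.
    "metadata": {"plaintext": record},
}

# An attacker never needs the key:
leaked = stored["metadata"]["plaintext"]
assert leaked == record
```

Same failure mode as the unsecured database: the access control (or crypto) is fine in isolation, and irrelevant because the data is published next to it.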



