
Part of systemic improvement to security comes from the market forces that reward producers putting out carefully designed and tested products and punish producers that don't. Your suggestion of requiring prior notice, coordination, approval etc. incentivises them to defer the cost of proper development until there is a crisis, so they can rush out any rubbish product, and force users and researchers to do their security testing for them. Let them fear the unknown, with their necks on the line, and design accordingly.


I proposed protected legal channels for researchers.

It does not remove any pressure from companies. Their neck is still on the line.

It adds pressure to companies because it creates a paper trail. It enables good faith companies to work with researchers as well. They can even have researchers contact each other if they are both looking into the same thing.

There's a lot of good that can come of it.

Companies can already rush out any product they want with no security. Lack of security is still a risk, regardless of how we address researching vulnerabilities.


You proposed requiring consent from the producer of a product/service to have their offering probed. And did so with an example of a house not owned by that producer.

If the production company declines, that DOES remove pressure from that company.

Companies that rush out rubbish products can presently be named and shamed by independent, uncooperative or even adversarial researchers. Your proposal considers that research illegitimate unless said dodgy company decides to open itself up to scrutiny, which it obviously would not be inclined to do.

If you want to suggest the market would respond by not selecting products from such an opaque company, look into how many WhatsApp users care about auditable, open source code vs. those using Signal.


I proposed a preference for systemic solutions over a soft dependence on white hat hackers who operate identically to black hat hackers right up until they have a vulnerability to exploit and decide what to do with it.

In this thread I expanded the detail to include the system to do this could (and imo should) be a legal framework that creates effective communication between companies and researchers.

I also try to adapt my language to parallel what the person I'm speaking with is saying, rather than telling them they didn't mean what they are telling me they meant. I apologize if I created a misunderstanding with my word choice.

Yes, I did mention requiring consent from the company as the ideal goal of the model. I am not suggesting the implementation of the model stops at just that sentence. In other areas of law, if you can prove a message was received by a company, that can sometimes be considered implied consent if they do not respond to it.

We can also require that companies cannot simply refuse for no reason, but leave legal room here for any legitimate reasons to deny should they exist.

And so on and so forth.

It makes the intent of the researcher very clear.

Declining is obviously less pressure for the company in this situation, I agree. But it is not less pressure compared to the current situation. Companies currently have no obligations at all to researchers, and they certainly do not build security out of concern that white hat hackers will out them. They fear black hat hackers. Those are not going away, and if a legal framework exists for companies to work with researchers and better arrange fair conditions for both sides, I would bet companies will be MORE willing to allow research than less.

Because right now the company gets the research for free and then gets to decide whether or not they want to throw the researcher a Starbucks gift card. Or just press charges because they are assholes.

I don't really care what the market decides to do. The point of this is to protect the researchers regardless of what the market does. Because to your point, the market has already chosen poorly, which is why we have issues on this subject to begin with.

Does this clarify my stance?


Kinda. I think we agree on the need to protect researchers. And if researchers are aligned with consumers rather than manufacturers then that's preferable because it's not the manufacturer's property once it leaves the building.

If protection is in place, that alignment will work because manufacturers' declining to be scrutinised won't prevent researchers from doing their job. But making protection conditional on manufacturer approval will suppress their work in those cases. And I don't know the practicality of establishing and enforcing this. So I oppose any conditionality generally.



