Hacker News

> I can audit it if I want to

But seriously, do you? Or others that you know working in the field?


Yes I have, several times.

Sometimes it's to find out what the heck is causing a particular behavior in a program. Sometimes it's to know for sure that the program isn't trying to do anything I recognize as malicious in a security-sensitive environment. Other times it's to see exactly how a game calculates whether or not my bullet has hit the enemy (a server-side calculation is more difficult to fake than a client-side one).
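The server-side point can be sketched roughly like this: the server recomputes the shot from its own authoritative state instead of trusting a "hit" flag sent by the client. This is a minimal illustration with made-up names, assuming simple 2D positions and a normalized aim direction; it is not taken from any real game.

```python
import math

def server_validates_hit(shooter_pos, target_pos, aim_dir,
                         max_range=100.0, hit_radius=1.0):
    """Server-side hit check: recompute the shot from server-held
    positions rather than trusting the client's claim of a hit."""
    # Vector from shooter to target
    dx = target_pos[0] - shooter_pos[0]
    dy = target_pos[1] - shooter_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return False
    # Reject shots aimed away from the target
    dot = dx * aim_dir[0] + dy * aim_dir[1]
    if dot <= 0:
        return False
    # Perpendicular distance from the aim ray to the target
    perp = abs(dx * aim_dir[1] - dy * aim_dir[0])
    return perp <= hit_radius
```

Because the check runs entirely on server state, a modified client can lie about what it rendered, but it cannot make a miss count as a hit.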

Would you honestly choose a black-box solution for a business-critical need, knowing that it could stop working at any time and that it won't let you verify for yourself that the code is secure by auditing it (or by paying a trusted security professional to do so for you)?

I get the impression that the anti many eyes sentiment comes largely from non-programmers, am I wrong about that?


> I get the impression that the anti many eyes sentiment comes largely from non-programmers, am I wrong about that?

I've only heard it from programmers, generally very good ones. Anyone who is at all following the security community knows that many eyes is possible but generally very optimistic. That's why so many people were glad to see Heartbleed lead to the Core Infrastructure Initiative, since that will keep the guaranteed number above zero for some key projects.


> Anyone who is at all following the security community knows that many eyes is possible but generally very optimistic.

I think this is true. But I also think a lot of people have seen statements to this effect from authoritative people and taken them further: not just as a rejection of the scale of the "many eyes" effect, but as a rejection of the fundamental idea, which leads to the conclusion that the source being available is either worthless or even detrimental.

The Core Infrastructure Initiative is not at odds with the basic notion of many eyes, but augments it. Arbitrary groups (particularly groups with non-commercial motives) committing monetary resources is also enabled by open source in a way that is impossible with closed source, after all.


I would characterize this as a reaction to earlier triumphalism: some of the more breathless OSS advocates treated many eyes as a given (open the source and bugs will be fixed), when in fact it depends heavily on project culture, existing code quality, and simply the nature of the project.


> I get the impression that the anti many eyes sentiment comes largely from non-programmers, am I wrong about that?

I can only speak for myself. I am a long-time programmer and security professional and I argue against the "many eyes" sentiment.

For a significant portion of the projects whose code I assess, I don't have the source. And yes, I find security-relevant bugs in that code.

There are claimed black boxes and "open" black boxes. On a Linux system, run `top` and tell me how many of those hundreds of open-source programs the eyeballs have actually looked at, and can testify to the absence of bugs in or the trustworthiness of.
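As a rough illustration of the point about `top`, you can count how many distinct programs are running on a typical Linux box. This is only a sketch: it counts command names, and says nothing about who (if anyone) has reviewed each one.

```shell
# Count distinct command names currently running; each is a separate
# codebase someone would have to review for "many eyes" to apply to it.
ps -eo comm= | sort -u | wc -l
```

Even on a minimal server this number is usually in the dozens, and on a desktop it is often in the hundreds.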


Realistically, there is so much code in a Linux system that it would take a lifetime to review it all yourself. So you end up putting your trust in the code reviews of random people on the internet. Is that better than putting your trust in BigCorp? I used to think so, but I'm not so sure anymore, because I don't see substantiation of the claim that open source is more secure. I see similar volumes of security issues in open source and closed source, and I don't see that ratio changing over time, which is what the many eyeballs theory would suggest.

Sure, the many eyeballs theory is appealing, but it seems more aspirational than actual.


A government institution does have the resources to review every single application they use, should they want to.

You're also missing that BigCorp often gets more involved in open source than random individuals do. Microsoft, for example, is said to be the fifth-largest contributor to Linux 3.0; Red Hat, IBM, and Google are regulars, and now Samsung is too.

Fun fact: did you know that SELinux, one of the most advanced modules for access control, was originally developed by the NSA? Yup, a little ironic, but we can use it because it is open source and because it has been reviewed.
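For readers unfamiliar with it, SELinux policy is itself open and reviewable: a policy module is a plain-text set of rules mapping process domains to the objects they may touch. A minimal illustrative fragment follows; the type names are the conventional ones from the widely used reference policy, and this is a sketch rather than a complete, loadable module.

```
module example_policy 1.0;

require {
    type httpd_t;
    type var_log_t;
    class file { read open };
}

# Allow the web-server domain to read files labeled var_log_t
allow httpd_t var_log_t:file { read open };
```

The point is that anyone can read exactly what access such a rule grants, which is what made an NSA-originated module acceptable to the wider community.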



