> BitTorrent Sync remains the most secure and private way to move data between two or more devices.
That very first sentence will always be false as long as it isn't open source. (Even the protocol isn't publicly documented, last I checked.) I'm not an open-source purist, but the way they always promote it as being the most secure, private option out there while completely ignoring that fact is frustrating.
I haven't tried using it, but Pulse[0] appears to be an open source replacement for BitTorrent Sync. I figure it's relevant to this thread.

[0]: https://ind.ie/pulse/

EDIT: ef4 mentioned Syncthing in his comment. Pulse was forked from Syncthing. I think they're still compatible with each other at the moment. Nice explanation at https://discourse.syncthing.net/t/syncthing-is-still-syncthi....
How would open sourcing it make it more secure? I mean, I understand that doing so would let you look at the code, and maybe even have others find and plug security holes, but your statement seems to imply that closed source is less secure by default, unless I'm missing something.
Doesn't necessarily make it more secure, but it will increase confidence that it is the most secure, through peer review. Any company will say their products are secure, the best, etc.; proof is what makes those claims legit.
> I can audit it if I want to. Closed Source strips me of that option.
Closed source does not strip you of the ability to audit.
In the case of BitTorrent Sync you can use Wireshark to inspect the network traffic yourself. BitTorrent even goes so far as to purposefully use plaintext for the usage statistics it reports back so that someone could cross-verify with Wireshark.
Besides Wireshark, there are all sorts of tools for instrumenting, debugging, or decompiling Sync that would also fall within the realm of auditing.
As other commenters have noted, it is not exactly trivial to verify that a binary for some open source software was produced by the same open source code you audited. This leaves you having to compile everything from source, which seems only an order of magnitude or so less annoying than Wireshark-ing or IDA-ing everything.
Perhaps a better way to handle the security concerns of open vs. closed source software would be to take a more active approach to locking down what we run. Let's operate with the working assumption that whatever we run is hostile, instead of having blind trust in open source.
Let's use firewalls, containerization, SELinux, etc. to essentially configure whitelists for what we allow the software we run to do.
For someone who doesn't trust BitTorrent Sync because it's not open source: consider locking it down. Use a firewall to only allow connections to known peers, isolate it from other processes, and restrict its filesystem access. Trade some usability for security.
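To make the firewall piece concrete, here's a minimal sketch, assuming the sync client runs under its own dedicated user account; the peer addresses and username below are placeholders, not part of any real setup:

    # Sketch: outbound whitelist for a process running under a dedicated
    # user, so it can only talk to peers you already know about.
    # Addresses (RFC 5737 examples) and username are placeholders.
    import subprocess

    KNOWN_PEERS = ["192.0.2.10", "198.51.100.7"]
    SYNC_USER = "btsync"  # assumed dedicated account for the sync client

    def iptables(*args):
        cmd = ["iptables", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Allow traffic from the sync user only to the known peers.
    for peer in KNOWN_PEERS:
        iptables("-A", "OUTPUT", "-m", "owner", "--uid-owner", SYNC_USER,
                 "-d", peer, "-j", "ACCEPT")

    # Anything else originating from that user gets dropped.
    iptables("-A", "OUTPUT", "-m", "owner", "--uid-owner", SYNC_USER,
             "-j", "DROP")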
The kinds of things y'all are concerned about with BitTorrent Sync are the same things we should be just as concerned about for what we just apt-get'd; closed source vs. open source doesn't make a lick of difference in that respect.
Sometimes it's to find out what the heck is causing a particular behavior in a program; sometimes it's to know for sure that the program isn't trying to do anything that I recognize as malicious in a security-sensitive environment; other times it's to see exactly how a game is calculating whether or not my bullet has hit the enemy (server-side calculation is more difficult to fake than client-side).
Would you honestly choose a black-box solution for a business-critical need, knowing that it could stop working at any time and won't let you verify that the code is secure by auditing it (or paying a trusted security professional to do so for you)?
I get the impression that the anti-"many eyes" sentiment comes largely from non-programmers. Am I wrong about that?
> I get the impression that the anti-"many eyes" sentiment comes largely from non-programmers. Am I wrong about that?
I've only heard it from programmers, generally very good ones. Anyone who is at all following the security community knows that "many eyes" is possible but generally a very optimistic assumption. That's why so many people were glad to see Heartbleed lead to the Core Infrastructure Initiative, since that will keep the guaranteed number of eyes above zero for some key projects.
> Anyone who is at all following the security community knows that "many eyes" is possible but generally a very optimistic assumption.
I think this is true, but I also think that a lot of people have seen statements to this effect from authoritative people and taken them further: not just as a rejection of the scale of the "many eyes" effect, but as a rejection of the fundamental idea, which leads to the conclusion that the source being available is either worthless or even detrimental.
The Core Infrastructure Initiative is not at odds with the basic notion of many eyes, but augments it. Arbitrary groups (particularly groups with non-commercial motives) committing monetary resources is also enabled by open source in a way that is impossible with closed source, after all.
I would characterize this as a reaction to earlier triumphalism: some of the more breathless OSS advocates treated many eyes as a given – open the source and bugs will be fixed – when it's heavily dependent on project culture, existing code quality and simply the nature of the project.
> I get the impression that the anti-"many eyes" sentiment comes largely from non-programmers. Am I wrong about that?
I can only speak for myself. I am a long-time programmer and security professional and I argue against the "many eyes" sentiment.
For a significant portion of the projects whose code I assess, I don't have the source. And yes, I find security-relevant bugs in that code.
There are claimed black boxes and "open" black boxes. On a Linux system, run "top": how many of those hundreds of open source programs have the eyeballs actually looked at, and how many can they vouch for, as free of bugs or trustworthy?
Realistically, there is so much code in a Linux system that it would take a lifetime to review it all yourself. So you end up putting your trust in the code reviews of random people on the internet. Is that better than putting your trust in BigCorp? I used to think so, but I'm not so sure anymore, because I don't see substantiation of the claim that open source is more secure. I see similar volumes of security issues in open source and closed source, and I don't see that ratio changing over time, which is what the many eyeballs theory would predict.
Sure, the many eyeballs theory is appealing, but it seems more aspirational than actual.
A government institution does have the resources to review every single application they use, should they want to.
You're also missing that often BigCorp gets more involved in open source than random individuals do. Microsoft, for example, is said to be the fifth largest contributor to Linux 3.0; Red Hat, IBM, and Google are regulars, and now Samsung is too.
Fun fact, did you know that SELinux, one of the most advanced modules for access control, was originally developed by the NSA? Yup, a little ironic, but we can use it because it is open-source and because it has been reviewed.
Or 2014 should be the year that it's confirmed that open sourcing something can lead to people finding bugs in it. It's kind of bizarre to me that people finding bugs in open source software and being able to both patch it and release patches of it themselves indicates a failure of open source. It's not as if the closed source security software of the world fared any better this year, and I'd still rather be able to get a patch from Debian right away than wait for Microsoft to get off their ass and release one.
On a practical level there are clearly limits to the many eyeballs hypothesis. Particularly if your assumptions about it were based on the idea that every user is an 'eyeball'. It's obvious there isn't a linear relationship between the two. None of that means it isn't still better than the alternative.
> It's kind of bizarre to me that people finding bugs in open source software and being able to both patch it and release patches of it themselves indicates a failure of open source.
The claim is not that it is a failure of open source, but that it is a failure of the "many eyeballs" claim.
Look at two fairly famous bug hunters, Juliano Rizzo and Thai Duong: finding POET, BEAST, and CRIME didn't require open source. The weakness behind BEAST was identified in 2002, but wasn't taken seriously.
Not having source is not much of a speed bump for a bug finder.
This "many eyeballs" is giving us all a false sense of security.
> This "many eyeballs" is giving us all a false sense of security.
Precisely!
This issue is further amplified by the fact that lots of people believe that "open source" means "trustworthy". If someone opens up their code, they must be good guys. This assumption is very easy to exploit. All a bad actor needs to do is distribute both the source and the binaries, but build the latter from different code. Just look how long it took for someone to actually try to verify that the TrueCrypt binaries were in fact built from the supplied source. And that's for a security product with a massive installation base.
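Checking a download against a published digest is the easy part; the hard part, as the TrueCrypt effort showed, is knowing the published binary was actually built from the audited source, which is what reproducible builds are for. A minimal sketch of the digest check (file name and digest are placeholders):

    # Sketch: verify a downloaded binary against a digest the project
    # publishes. This proves only that you got the bytes the project
    # shipped, not that those bytes came from the audited source.
    import hashlib

    EXPECTED = "0f0e..."  # placeholder for the published SHA-256 digest

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of("installer.bin") == EXPECTED)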
"Bugs can be found in closed source software" and "code will be seen by a wider variety of programmers if the source is open" are not contradictory statements. This is a false dilemma.
I agree there is an element of false sense of security, but I fail to see how the answer to that is to actively encourage our security software to be closed source.
> Not having source is not much of a speed bump for a bug finder.
As someone who has found bugs in software thanks to having access to the source code, I call bullshit on this. I likely would not have been able to identify the bug myself without access to the source.
It is true that you can find bugs in closed source software, but it sure as hell is a lot easier when we have access to the source.
(I'm aware that code doesn't have to be Open Source for me to have access to it, but I feel that's splitting hairs.)
Sure, they could contract with trusted third parties to review the code, but they would then need a mechanism to ensure that the binaries distributed to users matched the code those third parties were given.
So yes, it is correct to say that it is not required that the project be made open source to corroborate their claims, but the alternative of keeping it closed source and having third parties verify every release is much more logistically challenging.
Completely agree that open source is critical to making claims about security. Else you're asking people to trust you.
Not to be pedantic, but the gotcha is that you can't know they're using the open source software as-is. If they run a hosted service or distribute binaries, you won't know. Also, with cryptography, any change (diverging from the open source code) can introduce regressions.
The answer to this, for something like btsync, is that you shouldn't really be trusting the servers outside your control to begin with. That's the whole point of these systems as opposed to the usual cloud model where you're throwing plaintext up to a server and hoping their security model holds.
If the client software is all that's ever supposed to see plaintext, being able to see the source allows you to confirm that this is (probably) the case, and then compile it yourself rather than trust that they haven't thrown in an extra step that backdoors it.
I thought about this a lot when trying to come up with a very secure open source email service. Is there perhaps a way to publish hashes for the compiled binaries, etc. in use that could actually be trusted to be correct?
It just seems like such a chicken-and-egg problem. Where does the actual trust come from? (Like, holy crap, web certificates seem unbelievably broken.)
Ultimately it seems like it's just impossible to be 100% sure what is running on another person's server without having access to it. Which is unfortunate.
Security is a poor word, but if we define it as quality (reliability, fitness for a stated purpose, logical consistency, etc), then access to source code allows peer and public review. IMHO, free software is just the scientific method (loosely) applied to programming (with a moral context). Test-driven development is an attempt to tighten up the hypothesis-implementation-analysis loop.
While it's a pure implementation issue, it's odd that they haven't at all addressed the app crashing under packet fuzzing, because that's an excellent zero-day candidate.
The last time I installed BitTorrent on my Mac, it came bundled with Chrome extension malware. I used to really love BTSync, but now I'm certainly going to ditch it.
Once again proving the point that unless it's free software, it's not worth it.
> it is a 160 bit number, which means that it is cryptographically impossible to guess the hash of a specific folder.
Perhaps they know what they're talking about and are just trying to simplify for a non-technical audience... but that kind of language does not inspire confidence from a technical audience.
I think the use of "cryptographically" there is a bit silly, but otherwise they're spot on. You could have a billion computers, each trying a billion unique guesses per second, for a billion years, and you would've only guessed 0.000000000002158% of the possible values.
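The arithmetic checks out. A quick back-of-envelope script (plain integer math, nothing protocol-specific):

    # Verify the billion-computers claim above.
    computers = 10**9
    guesses_per_second = 10**9
    seconds = 10**9 * 365 * 24 * 3600          # a billion years
    total_guesses = computers * guesses_per_second * seconds
    print(f"{total_guesses / 2**160:.4e}")     # ~2.158e-14, i.e. 0.000000000002158%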
Well, it would depend on the entropy of the folder. Maybe we know the general structure of the folder, except for some crucial short piece of information. If input entropy didn't matter, everyone would also be fine using plain SHA-1 to store passwords.
N.B.: I don't know anything about the protocol, or whether the above applies. But appealing to the strength of a hash function only makes sense for hard-to-guess input material, which is not always the case.
Right. But as I read it, the 160-bit hash is over the content, in order to find peers with the same content. So if I wonder who's got a certain leaked document, or a certain media file, the hash can be used to check for that? Could be enough to get a warrant, for example. If I know a certain document, say payslip.pdf, and guess it is in a folder named payslips, maybe I could guess the other values and verify my guess with the hash (yeah, probably not a very interesting example ...)
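To make that scenario concrete, here's a toy dictionary attack, with the "folder" reduced to a single predictable string and SHA-1 standing in for whatever Sync actually hashes (both are assumptions, since the protocol is undocumented):

    # Toy illustration of the low-entropy concern: if the attacker can
    # enumerate plausible inputs, the hash confirms a guess rather than
    # hiding it.
    import hashlib

    target = hashlib.sha1(b"payslips/payslip-2014-03.pdf").hexdigest()

    for year in range(2010, 2015):
        for month in range(1, 13):
            guess = f"payslips/payslip-{year}-{month:02d}.pdf".encode()
            if hashlib.sha1(guess).hexdigest() == target:
                print("matched:", guess.decode())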
Because saying it like that sounds like they are merely using buzzwords instead of clearly stating why their system is secure. To me it seemed that the intended result was for the user to think:
Our algorithm to hash folders produces hashes 160 bits long, which means that 2^160 - 1 different folders can be securely hashed!
Of course they didn't say what they meant by "secure" either. Will two folders that are both very big and differ by only one bit produce the same hash, yes or no? Is it possible to reverse this hash easily? Can it be brute-forced? That kind of stuff...
Having said that, it does appear that this post was aimed at non-technical users. So perhaps it's not a bad way to rebut the claims about leaky security.
> The hashes cannot be used to obtain access to the folder; it is just a way to discover the IP addresses of devices with the same folder. Hashes also cannot be guessed; it is a 160 bit number, which means that it is cryptographically impossible to guess the hash of a specific folder.
That is, you can only know the hash of a specific folder if
a. You have that folder
b. Someone told you the hash
This follows from using any reasonable hashing algorithm, such as SHA-1.
Furthermore, because the range of the hash is sufficiently large, the chance of finding a valid folder by guessing randomly is quite small.
There are 2^160 ~= 1.46 x 10^48 possible hashes.
Given that the probability of guessing a random folder is (number of folders hashed) / (number of possible hashes), and assuming that fewer than 10^8 folders have been hashed, the chance of randomly finding any folder is less than 1 in 10^40.
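A quick sanity check of those figures:

    # Sanity-check the numbers above.
    possible = 2**160
    print(f"{possible:.3e}")                    # ~1.462e+48
    hashed_folders = 10**8                      # the generous upper bound assumed
    print(hashed_folders / possible < 1e-40)    # True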