[Cryptography] Would open source solve current security issues? (was "Re: EFF amicus brief in support of Apple")

Kevin W. Wall kevin.w.wall at gmail.com
Wed Mar 9 20:52:19 EST 2016


On Wed, Mar 9, 2016 at 2:52 PM, Perry E. Metzger <perry at piermont.com> wrote:
> On Sun, 6 Mar 2016 22:51:25 -0500 "Kevin W. Wall"
> <kevin.w.wall at gmail.com> wrote:
>> When Eric S. Raymond originally published his epic the "Cathedral
>> and the Bazaar" in 1997, his whole premise of:
>>
>>     Given enough eyeballs, all bugs are shallow.
>>
>> sounded quite plausible, and many thought it was all but a certainty.
>>
>> Since that time, almost everyone has come to believe that there was
>> at least one fallacious assumption underlying that premise, namely
>> that if source were open and available for examination, it would in
>> fact have "enough eyeballs" viewing it to make a difference in
>> software quality. In the almost 19 years since Raymond first
>> published CatB, we have come to realize that there just aren't that
>> many people actually LOOKING at the source code. So, no eyeballs
>> means that open source is generally actually worse than closed
>> source, because in most open source projects there is no
>> *separate* QA team to write and perform integration and system
>> level testing.
>
> I'm going to subtly disagree, in several ways.
>
> First, and less importantly, modern software development practice no
> longer has separate QA teams. These days, most reasonable places do
> things like TDD and tests are written by the dev team itself.

Unless things have drastically changed since I left IT at a large
telecom company less than 3 years ago, they still have a separate
test organization. Yes, dev teams were doing TDD then, but they were
not doing integration testing or UAT. In fact, there were separate
environments for those that most dev teams did not even have access
to.

However, in the context of this thread & list, think of a QA team more
along the lines of what I do, which is security code reviews, or in
terms of pen testing teams. Both require specialized knowledge about
security. For the latter, one doesn't technically even need to know how
to code. For the former, an understanding of how to write code AND of
application security is critical, and the combination of those two
skill sets is in really short supply. Now add to that anything beyond
even a rudimentary understanding of applied cryptography and
cryptographic attacks, and it should become obvious why we are in the
pickle that we are in.

> Second, and much more importantly, the issue is "what kind of bugs?"
> Although it does indeed seem true that bugs that the users notice and
> cause them day to day trouble are shallow in open source systems,
> because they are irritants to the user community, *security* holes are
> a different sort of beast. Those require systematic audits, because
> they are generally bugs that are tickled only in extraordinary
> circumstances.

I think the type of "bugs" relevant to this discussion is security
vulnerabilities. And I completely agree that they are a whole different
sort of beast. In fact, unless developers are aware of how systems and
applications are exploited, they will generally miss those sorts of
software faults and design flaws. In that regard, I think that academia
and all the "Learn <stuff> in 24 Hours" type books and training have
failed the development community miserably. But that's a discussion for
a whole different thread, and since it's not explicitly crypto related
I figured it was OT for this list (although it's your list, so if you
want to take it in that direction, you'll get no argument from me :).
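
To make that concrete, here's a deliberately simple, hypothetical Java
sketch of the kind of flaw I mean. It passes every functional test a
dev team would normally write, yet it leaks information through timing,
because Arrays.equals() returns as soon as it hits the first mismatched
byte. A reviewer who has never heard of a timing attack will never
flag it:

    import java.util.Arrays;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class MacCheck {
        // Returns true iff 'tag' is the HMAC-SHA256 of 'msg' under 'key'.
        static boolean verify(byte[] key, byte[] msg, byte[] tag)
                throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] expected = mac.doFinal(msg);
            // Functionally "correct", but not constant time -- an attacker
            // who can measure response times can potentially forge tags
            // byte by byte.
            return Arrays.equals(expected, tag);
            // A security-aware reviewer would insist on the constant-time
            // comparison instead:
            //   return java.security.MessageDigest.isEqual(expected, tag);
        }
    }

That's the sort of thing that TDD and a conventional test organization
will sail right past.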

> I think open source has produced really good results over time. I also
> have very little evidence that the security of closed source systems
> is better -- see, for example, the record of Microsoft Windows.

I truly wish that were the case, but if you look at open source as a
whole vs. closed source as a whole, it's not clear that the data bears
that out. However, it's definitely an apples to oranges comparison if
we are focusing on security vulnerabilities. It's a damn sight easier
to find vulnerabilities in the white-box scenario that open source
provides than in the mostly black-box (decompilation of certain
languages notwithstanding) scenario that is all you get with closed
source, so the results are certainly skewed in that way. Where the data
is mostly clear is on "time to fix": once a security vulnerability is
made public (say, as a 0day), open source projects generally get an
initial fix out much more quickly than closed source projects do.

The other factor that skews the vulnerability data is what the
attackers go after. Generally they go for the soft targets (desktops
instead of servers), and they go where they think they can get the
biggest bang for the buck. So, as far as OSes go: Windows, MacOS, and
then Linux. (E.g., why bother going after Linux desktops if there are
only 10k of those versus 100M Windows boxes?) But if you look at the
smart phone / tablet market, Android OS (Linux-based) has the most
issues. To a large degree the bad guys follow the numbers, so that
makes the open source vs. closed source vulnerability figures (IMO)
somewhat skewed toward whichever has the biggest market share. (Be it
known that I am NOT a Windows fan; anything but, for that matter. But a
fairer open source vs. closed source comparison might be Java vs.
Windows, or if you don't like that comparison, Java vs. .NET.)

> What we're talking about here is something different -- which is
> whether users are (in general) better off with systems that prevent
> them from running software that has not been vetted by a trusted third
> party. That is to say, many people seem to be getting a benefit from,
> in effect, paying Apple to vet all their applications for them.

Right. And I agree with that. But I also think that Raymond's premise
that many eyes make all bugs (presumably including security-related
ones) shallow has little if any merit JUST BECAUSE the code is open
source. *IF* you can get eyeballs on the code in question from people
who are in a position to understand it, then it intuitively seems like
Raymond's premise is correct. The problem is, that's a mighty big IF.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

