[Cryptography] Would open source solve current security issues? (was "Re: EFF amicus brief in support of Apple")

Perry E. Metzger perry at piermont.com
Thu Mar 10 07:44:06 EST 2016


On Wed, 9 Mar 2016 20:52:19 -0500 "Kevin W. Wall"
<kevin.w.wall at gmail.com> wrote:
> > First, and less importantly, modern software development practice
> > no longer has separate QA teams. These days, most reasonable
> > places do things like TDD and tests are written by the dev team
> > itself.  
> 
> Unless things have drastically changed since I left IT at a large
> telecomm company less than 3 years ago, they still had a separate
> test organization. Yes, dev teams were doing TDD then, but they were
> not doing integration testing or UAT. In fact, there were separate
> environments for those that most dev teams did not even have access
> to.

Things have been changing rapidly. Old-line firms are, of course, the
slowest to change, but that way of working -- even the notion of
having integration testing done on machines the devs can't touch and
thus can't debug on -- is going out the door. People realized a while
ago that this just isn't a viable model.
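
To make "tests written by the dev team itself" concrete, here is a
toy sketch (the function and all names are mine, purely
illustrative): the developer writes the test alongside the unit it
exercises, and both run in an environment the developer can
actually debug in.

    import unittest

    def parse_port(value: str) -> int:
        """Parse a TCP port number, rejecting anything out of range."""
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port

    class ParsePortTest(unittest.TestCase):
        def test_valid_port(self):
            self.assertEqual(parse_port("443"), 443)

        def test_rejects_out_of_range(self):
            with self.assertRaises(ValueError):
                parse_port("70000")

    if __name__ == "__main__":
        unittest.main()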

> However, in the context of this thread & list, think of a
> development QA team more in the context of what I do, which is
> security code reviews, or in terms of pen testing teams. Those are
> things that require specialized knowledge about security.  For the
> latter, one doesn't technically even need to know how to code. For
> the former, an understanding of how to write code AND application
> security is critical and the combination of those two skill sets is
> really in short supply. Now add to that anything beyond even a
> rudimentary understanding of applied cryptography and cryptographic
> attacks, and it should become obvious why we are in the pickle
> that we are in.

I won't disagree, but in this regard the problem is no different
between the closed and open source communities. Indeed, even if one
can afford a distinct QA team (which is, I think, a dubious
organizational choice in any case), I've seen no evidence that it
provides value in reducing security bugs.

One of the general problems here is, I think, that the systematic
design principles needed to avoid security bugs are not really
understood or followed by most software developers, regardless of
whether their work is open or closed source.
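
To give one example of such a principle (my illustration, not
anything from this thread): comparing secrets with ordinary
equality leaks timing information, and the systematic fix is a
constant-time comparison. In Python:

    import hmac

    def check_token_naive(supplied: bytes, expected: bytes) -> bool:
        # BAD: == short-circuits at the first differing byte, so an
        # attacker who can time many requests can recover the secret
        # byte by byte.
        return supplied == expected

    def check_token_safe(supplied: bytes, expected: bytes) -> bool:
        # GOOD: hmac.compare_digest takes time independent of where
        # the inputs differ.
        return hmac.compare_digest(supplied, expected)

A developer who has never been shown the principle will write the
first version every time, open source or not.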

> > What we're talking about here is something different -- which is
> > whether users are (in general) better off with systems that
> > prevent them from running software that has not been vetted by a
> > trusted third party. That is to say, many people seem to be
> > getting a benefit from, in effect, paying Apple to vet all their
> > applications for them.  
> 
> Right. And I agree with that. But I also think that Raymond's
> premise that many eyes make all bugs (presumably including
> security-related ones) shallow has little if any merit JUST
> BECAUSE the code is open source. *IF* you can get eyeballs on the
> code in question by people who are in a position to understand
> it, then it intuitively seems like Raymond's premise is correct.
> The problem is, that's a mighty big IF.

I think we both agree on this last point, but it is of course
orthogonal to the question of whether it is better for the average
user to, in effect, pay someone to vet all the software that they use
and arrange things so unvetted software cannot be executed on their
equipment.
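
As a minimal sketch of that arrangement (entirely hypothetical --
Apple's actual code signing machinery is far more involved): the
vendor signs each vetted binary, the device ships with only the
vendor's public key, and nothing runs unless the signature
verifies. Using the Python "cryptography" package:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Vendor side: sign an application binary after vetting it.
    vendor_key = Ed25519PrivateKey.generate()
    app_binary = b"...application bytes..."
    signature = vendor_key.sign(app_binary)

    # Device side: only the vendor's public key is present.
    vendor_public = vendor_key.public_key()

    def may_execute(binary: bytes, sig: bytes,
                    pub: Ed25519PublicKey) -> bool:
        """Allow execution only if the vendor's signature verifies."""
        try:
            pub.verify(sig, binary)
            return True
        except InvalidSignature:
            return False

    assert may_execute(app_binary, signature, vendor_public)
    assert not may_execute(app_binary + b"x", signature, vendor_public)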

Perry
-- 
Perry E. Metzger		perry at piermont.com

