[Cryptography] Would open source solve current security issues? (was "Re: EFF amicus brief in support of Apple")

Kevin W. Wall kevin.w.wall at gmail.com
Fri Mar 18 19:30:53 EDT 2016


I'm glad this thread has become active again, because it reminded me
that I wanted to respond to a couple of Perry's thoughts.

On Thu, Mar 10, 2016 at 7:44 AM, Perry E. Metzger <perry at piermont.com> wrote:
> On Wed, 9 Mar 2016 20:52:19 -0500 "Kevin W. Wall"
> <kevin.w.wall at gmail.com> wrote:
[snip]
> Things have been changing rapidly. Old line firms are, of course, the
> slowest to change, but that way of working -- even the notion of
> having integration testing done on machines the devs can't touch and
> thus can't debug on -- is going out the door. People realized a while
> ago that this just isn't a viable model.
>
>> However, in the context of this thread & list, think of a
>> development QA team more in the context of what I do, which is
>> security code reviews, or in terms of pen testing teams. Those are
>> things that require specialized knowledge about security.  For the
>> latter, one doesn't technically even need to know how to code. For
>> the former an understanding of how to write code AND application
>> security is critical and the combination of those two skill sets is
>> really in short supply. Now add to that anything beyond even a
>> rudimentary understanding of applied cryptography and cryptographic
>> attacks, and the reason we are in the pickle that we are in should
>> become obvious.
>
> I won't disagree, but the problem is no different in the closed and
> open source communities in this regard. Indeed, even if one can
> afford a distinct QA team (which is, I think, a dubious
> organizational choice in any case), I've seen no evidence it provides
> value in reducing security bugs.

I guess "QA" wasn't the best choice of words, but what I was trying
to drive across is that by and large FOSS projects generally will not have
people to serve in the role of doing security code reviews assisted by
commercial SAST tools, pen testing assisted by COTS DAST tools, manual pen
testing, etc.
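Just to make concrete the sort of thing those reviews and tools catch,
here's a minimal Java sketch (class name, key handling, and plaintext are
purely illustrative, not taken from any real project) of one of the most
common findings: Cipher.getInstance("AES") silently defaults to ECB mode,
which leaks plaintext structure, whereas a reviewer would push for an
authenticated mode like GCM with a fresh random IV.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class CipherReviewExample {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            // Typical SAST / code-review finding: "AES" alone means
            // AES/ECB/PKCS5Padding, and ECB leaks plaintext patterns.
            Cipher weak = Cipher.getInstance("AES");
            weak.init(Cipher.ENCRYPT_MODE, key);

            // What the reviewer asks for instead: an authenticated mode
            // (GCM) with an explicit, randomly generated IV per message.
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher better = Cipher.getInstance("AES/GCM/NoPadding");
            better.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = better.doFinal("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println("ciphertext+tag length: " + ct.length);
        }
    }

The point isn't this particular API; it's that spotting such things
reliably takes someone with both coding and appsec background, which is
exactly the combination that's in short supply.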

Commercial sectors like telecom, and especially finance, are more concerned
about their companies' assets (and asses) and thus will spend significant
$$ to protect those assets and their reputation. Certainly this has to do
with both resources and personal accountability. (No one really gets "fired"
over a screw-up in a contribution to a FOSS project that leads to an
exploitable vulnerability, although they may lose credibility.)

> One of the general problems here is, I think, that systematic design
> principles needed to avoid security bugs are not really
> understood or followed by most software developers regardless of
> whether their work is open or closed source.

Design? LOL. Sigh. I'll bet it's been 20+ years since I've seen an actual
design document (not counting the ones I made my team write). PowerPoint
presentations with a single "architecture" slide consisting of a bunch
of colorful boxes connected with lines and a red box labelled "security"
need not apply. (And no, I'm not making that up.)

IMO, Agile approaches have mostly killed systematic design. Now
things are thrown together in some mishmash and the thought is usually
"we'll work out {performance|threading|security} design issues later
when we refactor". From where I sit, with a background mostly in systems
programming and application security, there is no "design" per se.

>> > What we're talking about here is something different -- which is
>> > whether users are (in general) better off with systems that
>> > prevent them from running software that has not been vetted by a
>> > trusted third party. That is to say, many people seem to be
>> > getting a benefit from, in effect, paying Apple to vet all their
>> > applications for them.

Oh, I agree, but you're talking USERS and I'm talking DEVELOPERS. I was only
referring to users in the sense that they are the ones who suffer the
consequences of whether development is based on FOSS or proprietary products.

>> Right. And I agree with that. But I also think that Raymond's
>> premise that many eyes make all bugs (presumably including
>> security-related ones) shallow has little if any
>> merit JUST BECAUSE it is open source. *IF* you can get eyeballs on
>> the code in question by people who are in position to understand
>> it, then it intuitively seems like Raymond's premise is correct.
>> The problem is, that's a mighty big IF.
>
> I think we both agree on this last point, but it is of course
> orthogonal to the question of whether it is better for the average
> user to, in effect, pay someone to vet all the software that they use
> and arrange things so unvetted software cannot be executed on their
> equipment.

If the question is posed as whether it is better for users to vet their own
software or not, then certainly for the average non-technical user who
doesn't have a security background, it's almost always better to pay someone
to do that, because otherwise it likely won't happen.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

