[Cryptography] Open Source Sandboxes to Enforce Security on Proprietary Code?

Kent Borg kentborg at borg.org
Fri Aug 15 09:42:43 EDT 2014


Designing in end-to-end encryption is a good idea, but just because 
there is a claim that some product employs end-to-end encryption, why 
should any customer believe it?

With open source programs there is a go-check-for-yourself response 
that, though it might not be practical for most users, does pose a risk 
of discovery to anyone who tries to quietly inject a backdoor.

But that does nothing to assure us that a proprietary product is in any 
way secure.

Is there any work going on to build an open/closed hybrid, where the 
closed source portion of the code runs in a restricted sandbox that 
can't talk to the outside world except through limited facilities 
provided by the open source portion, the part that is susceptible to 
go-check-for-yourself auditing?

One doesn't have to worry as much about what product Foo is doing if we 
encrypt all its communication with a key that Foo doesn't know. Sure, 
Foo might implement a covert channel, but if we don't let it talk to 
untrusted endpoints, so what? Don't think of it as a covert channel, 
think of it as a proprietary feature that makes Foo's voice quality 
better than the competition's.
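One way to picture this: the open source wrapper alone holds the session key and is the only path to the network, so whatever the closed component smuggles into its output still leaves the box encrypted, and only toward trusted endpoints. A minimal Python sketch of that architecture; the class and method names are hypothetical, and the SHA-256 counter-mode keystream is a toy stand-in for a real AEAD cipher such as AES-GCM, not something to ship:

```python
import hashlib
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with SHA-256(key | nonce | counter)
    blocks. A stand-in for a real AEAD; do not use as-is."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class OpenWrapper:
    """The auditable open source shell: it alone holds the key and
    decides which endpoints the sandboxed code may reach."""

    def __init__(self, key: bytes, trusted_endpoints: set):
        self._key = key
        self._trusted = trusted_endpoints

    def send(self, endpoint: str, plaintext: bytes) -> bytes:
        # The closed component never sees the key or a raw socket;
        # anything it hides in `plaintext` (a covert channel, say)
        # still goes out encrypted, and only to endpoints we trust.
        if endpoint not in self._trusted:
            raise PermissionError("untrusted endpoint: " + endpoint)
        nonce = os.urandom(16)
        return nonce + keystream_xor(self._key, nonce, plaintext)

    def recv(self, wire: bytes) -> bytes:
        nonce, ciphertext = wire[:16], wire[16:]
        return keystream_xor(self._key, nonce, ciphertext)
```

Under this arrangement Foo's hypothetical covert channel only ever reaches endpoints the open wrapper was told to trust, which is the "so what?" above.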

Individual products might release portions of their source code to try 
to demonstrate how wholesome they are, but some standardization would be 
better.

Recent Linux kernels have seccomp filters that can restrict and filter 
system calls; that sounds useful, but is rather incomplete. The open 
source portions of Android made an attempt at implementing permission 
lists on app sandboxes, but the result is full of holes.
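For a sense of how coarse the seccomp primitive is: its original "strict" mode leaves a process only read, write, _exit, and sigreturn, and kills it on anything else. A Linux-only sketch via ctypes (constants copied from the kernel headers; assumes seccomp is compiled into the kernel, and the function name here is just illustrative):

```python
import ctypes
import os
import signal

# From <linux/prctl.h> and <linux/seccomp.h>
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1  # only read/write/_exit/sigreturn allowed

def demo_strict_seccomp() -> int:
    """Fork a child, confine it with seccomp strict mode, and return
    the signal that terminated it (expected: SIGKILL)."""
    pid = os.fork()
    if pid == 0:
        libc = ctypes.CDLL(None, use_errno=True)
        if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
            os._exit(1)  # kernel refused; confinement failed
        os.getppid()     # getppid(2) is not whitelisted -> SIGKILL
        os._exit(0)      # never reached
    _, status = os.waitpid(pid, 0)
    return os.WTERMSIG(status) if os.WIFSIGNALED(status) else 0
```

Strict mode can't even open a file, which shows why the newer filter (BPF) mode exists, and also why seccomp alone is a long way from the open-wrapper scheme above.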

Are there other Usual Suspects in this space?

-kb, the Kent who has been musing over a product idea but who is 
wondering how it could possibly be considered trustworthy.


More information about the cryptography mailing list