[Cryptography] The Trust Problem

ianG iang at iang.org
Thu May 22 03:06:49 EDT 2014


What follows is a long, rambling response.  This might mean I don't
know, or it might mean I think it's a hard problem...



On 20/05/2014 15:50, Jerry Leichter wrote:
> So I ran across the Mustbin iOS app (http://mustbin.com).  Cool, simple idea:  Take pictures of important documents, the contents of your wallet, etc.; organize them in "bins"; upload and sync to all your devices.  The data is encrypted with "military grade security" (they actually specify RSA - no key length mentioned - and AES-256); they don't have access to your decryption keys.    "Our technology has been reviewed and verified by one of the best firms in the security analysis business."  (This is mainly from a blog entry:  http://mustbin.com/blog/read/mustbin-security-military-grade-certified.)
> 
> So ... should I believe their stuff is secure?  Let's suppose they really are good guys doing their best to provide a secure service:  What could they do to help me trust them with such sensitive information?
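
(As an aside:  taken at face value, "RSA + AES-256 and we never see
your keys" presumably means client-side hybrid encryption -- encrypt
each document under a fresh AES-256 key, wrap that key with the user's
RSA public key, keep the private key on the device.  The sketch below
is my guess at that shape, not Mustbin's actual code; the function
names are mine, and it assumes Python's "cryptography" package.)

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_document(plaintext: bytes, public_key):
        # Hypothetical sketch: fresh AES-256 key per document.
        aes_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
        # Wrap the AES key with RSA-OAEP; only the wrapped form is
        # uploaded alongside the ciphertext.
        wrapped_key = public_key.encrypt(
            aes_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

    # The RSA private key is generated on, and never leaves, the
    # client; the server stores only (wrapped_key, nonce, ciphertext).
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=3072)
    wrapped, nonce, ct = encrypt_document(b"passport scan",
                                          private_key.public_key())

If that is the shape of it, every interesting question is about where
that private key lives and who can get at it.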


We're in trouble already.  Firstly, "secure" and the security model are
...  assumed.  Likely borrowed from some crypto book and rehashed for
the ages, with all the top acronyms inserted, I'm guessing.

So we don't actually know as yet what we are protecting the "important
documents" from.  Loss?  Theft?  Spying?  Teenage daughter?  Spooks?
Extortionists?  Local council, big state?  Mice?  Heirs?

Then, trust.  Does one trust on the basis of a website?  No, not
really.  Trust is something you build up over multiple transactions,
measurements, recommendations, etc.

Multiple transactions in this sense build up trust that the data is not
lost.  But security?  If we think about it, security can only be seen
/in the attack/.  In the absence of attacks, a secure system works as
well as a non-secure system.  Only in the breach is security going to
make a difference.  And only knowledge of that breach is going to
inform us and therefore build trust in any measurable sense.

Reputation -- who are these people?  As Jeff posted recently, the
Lavabit debacle reveals behaviour under attack -- Ladar Levison shut
down his company rather than hand over his users' keys.  He then went
on to work on the Dark Mail secure email project.  That speaks volumes.


> With security, we're now at a level well beyond technical questions about algorithms and key lengths.  What should you demand to be convinced that you can use some software safely?  What should someone offering secure software put out there that would help you reach a decision?
> 
> The facile answer is "only use OSS" - like OpenSSL, home of Heartbleed.  :-(


Another myth.  Is there any scientific evidence that open source is
more secure?  Might make a suitable grad project ;)


> (Actually, Mustbin uses OpenSSL - they have another blog entry about what effect Heartbleed had on them.)


Using OpenSSL is a signal.  The nature of signals is that they are
taken at face value, but they can be confused.  Other signals might
have included NIST standards, FIPS testing, NSA review, verifiable
randomness in seeds, etc.
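
Some of those signals are mechanically checkable by outsiders.  One way
to make "verifiable randomness in seeds" mean something, for instance,
is commit-reveal:  publish a hash of the seed before it is used, reveal
the seed later, and anyone can audit.  A minimal sketch -- my function
names, my assumption of how the signal would be wired up:

    import hashlib, hmac, os

    def commit(seed: bytes) -> bytes:
        # Published up front, e.g. in the design documents.
        return hashlib.sha256(seed).digest()

    def verify(seed: bytes, commitment: bytes) -> bool:
        # Run by any auditor once the seed is revealed.
        return hmac.compare_digest(hashlib.sha256(seed).digest(),
                                   commitment)

    seed = os.urandom(32)
    c = commit(seed)        # goes out with the product
    assert verify(seed, c)  # checked independently, years later

A signal like that is cheap for an honest vendor and awkward to fake,
which is what makes it worth something.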

What people tend to do is look at all the signals on offer and see if
they are aligned in the same direction.  But as you point out, the
signals popular a few decades ago -- "uses RSA 1024 and DES 64" -- are
not really sufficient any more.

Information on actual attacks is what we really want, because it shows
the product doing the job.  The problem is that a failed attack is
taken as a negative signal by the MSM, whereas it should be seen as a
positive one:  both evidence of demand and evidence of defence.

Promote your hacked company wherever and whenever you can.


> Openness is certainly *part of* the answer.  I'd find Mustbin's comments much more convincing if they named that "best firm" *and published their report* so I could judge what was actually examined.  But it's not the whole story.


Right.  In the absence of any really useful information on attacks, an
audit or review could help.  The Berson review of Skype helped a lot.

(What raised eyebrows was that it was never repeated.  What killed
Skype's reputation dead was that they ditched the security model
without cause, without real notification, and left the Berson report up
there.)


> Apple's recent white paper on iOS security http://images.apple.com/ipad/business/docs/iOS_Security_Feb14.pdf may not be perfect - what is? - but it's certainly way beyond what you get with most products, which basically say "We're experts, trust us."


Basically, disclosure.  Maybe the new signals include "depth of
disclosure"?  There are two components to this:  how we do things (inc.
OSS) and what we claim.

Claims are also important because they set out a statement that can be
tested over time.  So what is the cost of a broken claim?  Can a
company put a credible claim on the table?

Skype's broken security claim has cost them what?  Embarrassment for
Microsoft and Skype, I guess.  But the former cares little, and the
latter is probably happy with the phone market.

What would happen if there were real fallout?  RSA suffered quite a lot
-- so some say -- when their Dual_EC_DRBG behaviour came out.  NIST
have been red-faced.  But still we don't see much in the way of
measurable damages.

Sometimes, false marketing claims are tested in court, typically in a
class-action suit or an action brought by a data protection regulator.
Sometimes these suits win through and damages are awarded.

So maybe a new signal is to prepare specific claims that can be tested
in court?  If we can keep the lawyers from watering them down (which is
the normal signal) and keep them aligned closely enough with customer
needs, would that work?




iang

