[Cryptography] Toxic Combination
benl at google.com
Tue Dec 9 14:45:44 EST 2014
On 7 December 2014 at 18:16, ianG <iang at iang.org> wrote:
> On 4/12/2014 11:28 am, Ben Laurie wrote:
>> On Thu Dec 04 2014 at 7:22:16 AM Peter Gutmann
>> <pgut001 at cs.auckland.ac.nz <mailto:pgut001 at cs.auckland.ac.nz>> wrote:
>>> Ben Laurie <benl at google.com <mailto:benl at google.com>> writes:
>>> >I think that's a completely unfair accusation - the difficulty has
>>> >been the lack of a _usable_ way to _securely_ implement such
>>> You forgot the rest of the list that gets trotted out:
>>> It won't scale, there's no user demand, there's insufficient
>>> industry support,
>>> I ran out of gas, I had a flat tire, I didn't have enough money for
>>> cab fare, my tux didn't come back from the cleaners, an old friend
>>> came in from out of town, someone stole my car, there was an
>>> earthquake, a terrible flood,
>>> There have been endless studies done and papers published on how to do
>>> perfectly usable shared secret-based authentication.
>> Oh really? Please provide references. Actually, I don't have time to be
>> drowned in a million crap papers, so please, for now at least, provide a
>> reference for the best solution you are aware of (or two or three if
>> choosing is hard).
> As you yourself show below, asking for references is a setup for a
Come on, seriously? As I said below, I'm prepared to retract that
requirement, but I still ask: what is the point of using a ZKP if it
isn't to conceal the password from the site operator?
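To make the point concrete, here is a minimal Python sketch of the verifier-based idea behind SRP-style PAKEs, which is my reading of what a ZKP buys you here (the scheme, names, and demo parameters are my own illustration, not anything proposed in this thread): the server stores a salted verifier, never the password itself.

```python
import hashlib
import secrets

# Small demo prime; a real deployment would use a large safe prime
# such as the RFC 5054 groups. Illustrative values only.
N = 2**127 - 1
g = 3

def make_verifier(username: str, password: str) -> tuple[bytes, int]:
    """Derive (salt, verifier) for server-side storage; the password
    itself is then discarded and never sent to the server."""
    salt = secrets.token_bytes(16)
    x = int.from_bytes(
        hashlib.sha256(salt + f"{username}:{password}".encode()).digest(),
        "big")
    return salt, pow(g, x, N)  # verifier v = g^x mod N

salt, v = make_verifier("alice", "correct horse")
# Even the site operator, holding (salt, v), cannot read the password
# back out: recovering x from v is a discrete-log problem.
```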
In any case, your position appears to be "you should implement this
even though I cannot point to a single example of how". Not tenable.
>>> Heck, I devote significant chunks of my book (draft) to them, I'd be
>>> surprised if there were less than a hundred references to published
>>> work on how to do it.
>> There are many papers on how to do it badly. I have yet to see one
>> (backed by actual testing, I am not interested in usability by
>> assertion) that's actually deployable.
> As I'm sure you know, things like certificate pinning were trialled and
> tested seriously in the mid 2000s as phishing turned up. Browsers
> successfully ignored those efforts. I don't recall whether that excuse was
> used, but I'd not be surprised, they were on a mission to block all outside
Pinning does not scale: you risk your site becoming unavailable for an
extended period if you screw up. Remediation is necessarily manual.
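For anyone unfamiliar with the mechanism: a key pin is just a hash commitment to a public key, as this sketch shows (the input bytes and header values are stand-ins, not a real key).

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of the SHA-256 of the DER-encoded
    SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Stand-in bytes; a real pin hashes the certificate's actual SPKI.
pin = spki_pin(b"stand-in for DER-encoded SPKI bytes")
# A site that once served
#   Public-Key-Pins: pin-sha256="<pin>"; max-age=5184000
# has told browsers to reject any other key for 60 days. Lose the
# pinned key, and pinned clients cannot reach the site until max-age
# expires; that is the manual-remediation failure mode.
print(pin)
```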
> This is all pre-chrome times so perhaps google can look to avoid the
> mistakes of Mozilla and Microsoft. But as I'm also sure you found out, it
> wasn't that easy, the power of standards and compliance is immense.
>>> >And it has to be secure - which includes "not allow credential
>>> >theft _even by the site operator_".
>>> Oh, that's a new one: Set a requirement that can't possibly be met
>>> (except perhaps through the use of magic) and then claim you can't
>>> meet that requirement, therefore it's not worth doing.
>> I did muse about that one for a while, and surely it's the point of
>> using zero knowledge protocols? If it is not, then what is?
>> But if you really think it's impossible, I'm certainly prepared to drop
>> it as a requirement.
>>> Looking past all the excuses, there is one, and only one, reason why
>>> no browser supports proper shared secret-based mutual auth: The
>>> browser vendors don't want to do it.
>> And you claim they don't want to because
> No, no, let me put words into Peter's mouth ;)
> The reason the vendors won't act for user security is twofold. Firstly, the
> vendors are doing what they are told by the standards bodies and the
> upstream vendors. The browsers don't really have security / architectural
> capability because they just follow the standards. The vendors have
> outsourced the security equation, so they are totally going to ignore any
> input from any alternate source, paper or proven, up to and including
> evidence that they are part of a perpetual and profitable criminal
> You might (should) ask why. There is at least one reason why Mozilla and
> Microsoft refused to enter into the strategic architectural security game:
> liability. If they recognised that there was a security weakness, and they
> sought to do something about it, they could become theoretically liable for
> phishing losses from their users. Given the state of American legal
> behaviour, they did the obvious thing, and denied their liability for all
> security losses, and therefore sat on their "we follow standards"
> principles aka "best practices" lie aka get out of jail card.
> (The exception to the above dynamic might be google which has been caught on
> both sides of the fence - as browser vendor and as online merchant. It has
> therefore been incentivised by being liable for more parts of the equation
> to somewhat rock the status quo. By thinking of alternates, and trying to
> push them through.)
> Then, secondly, when there is a new standard, the vendors wait until others
> have done it. As nobody wants to leap off without a guarantee of the others
> doing it too, the natural state is that nothing happens. As per Peter's
> suggestion. This is by way of a natural cartel, and yes, there is such a
> thing, and it can be deliberately constructed, and it can be manipulated.
I think it is absurd to claim nothing happens. Certificate
Transparency and Safe Browsing are two obvious examples that improve
user security and have none of the above going on.
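The core of Certificate Transparency is small enough to sketch: an append-only Merkle tree per RFC 6962, with distinct prefix bytes for leaves and interior nodes so a log entry can never masquerade as a node (the prefixes are from the RFC; the four-entry log and helper names are my own illustration, and the unbalanced-tree split is omitted).

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 leaf hash: 0x00 prefix distinguishes leaves from nodes.
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962 interior-node hash: 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    """Root over a power-of-two entry count, enough for illustration."""
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"cert-a", b"cert-b", b"cert-c", b"cert-d"])
# Changing any single entry changes the root, so a log cannot silently
# rewrite history once it has signed a tree head.
```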