[Cryptography] Follow up on my password replacement idea

Phillip Hallam-Baker phill at hallambaker.com
Wed Sep 23 10:40:52 EDT 2015

On Wed, Sep 23, 2015 at 9:29 AM, Bill Cox <waywardgeek at gmail.com> wrote:

> On Tue, Sep 22, 2015 at 6:08 PM, Phillip Hallam-Baker <
> phill at hallambaker.com> wrote:
>>> Protecting those private keys is difficult.  Malware might sniff them
>>> when the user unlocks them.  A co-worker and I would like to build an
>>> open-source, hardware-backed signing library with a common API on the
>>> major platforms.  For example, the new Intel SGX extensions can enable more
>>> secure rapid key signing.  Some operations have to be super-fast, like
>>> Token Binding signature operations, while others, such as unlocking a key
>>> when a user enters a password, can be slower, and may rely on signing in
>>> secure hardware, such as a TPM.
>> Protecting the keys is easier if every machine has different keys and
>> keys are never ever removed from devices, only deleted.
> Why is multiple keys on multiple devices more secure than a single key on
> those same devices?  An attacker needs only steal one of them to pwn user
> accounts.  The attack surface is about the same.

One of the things this project has forced me to do is abandon the notion
that we should only think about stopping breaches. If you put crypto in the
hands of ordinary people, they are going to mess up. Even professionals have
messed up without external attackers trying to provoke it.

Having separate device keys does not reduce your attack surface, that is
true. But it gives you a lot of leverage for mitigating the damage. Devices
are going to be stolen. Right now, if you lose a device, it (typically)
holds the access codes for all your apps and you are completely pwned.

Right now I am coding for phase 1, which is based on RSA with no exotic
crypto at all. In phase 2 I am going to look at adding proxy re-encryption
('recryption') techniques to further mitigate disclosure risks.

There are two objectives here:

1) Give the typical user 'pretty good' security. This means a security
scheme they can use without burning themselves. Static encryption keys have
to be escrowed, or users will sob when they lose the pictures of their kids
taken at age three.

2) Give a knowledgeable user the ability to completely control their
security without having to know math or use unfriendly applications.

> At a minimum, you want signing to be done in a separate process running as
> a user in its own group, without giving read permission to any of its
> files.  However, that leaves the entire OS as an attack surface, and as
> time has shown, we can't fully secure an entire OS, at least not while
> making that OS general purpose.  Reducing the attack surface even further
> makes sense, especially if it's as easy as linking to a shared library.
> We're hoping to make it that simple.

Yes, you are right that this is an area where real value can be added.

Right now I am building on Windows, since there are at least hooks that
could in theory link to a TCB. If you have something better, it could be
added in, or you could offer it as a value-add.
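For what it's worth, the separate-process approach Bill sketches can be
prototyped in a few lines. The sketch below is illustrative only (Python,
POSIX-only; HMAC-SHA256 stands in for a real signature, and every name in
it is invented, not from the Mesh code): a forked child alone services the
key, and the caller gets only a socket to ask for signatures.

```python
import hashlib
import hmac
import os
import socket

def spawn_signer(key: bytes) -> socket.socket:
    """Fork a child that services the key; return the caller's socket.

    Illustrative sketch: a real key manager would generate the key inside
    the child (or in a TPM/SGX enclave) rather than accept it from the
    caller, and would drop to a dedicated user and group.
    """
    parent_sock, child_sock = socket.socketpair()
    pid = os.fork()
    if pid == 0:                        # child: the signing process
        parent_sock.close()
        while True:
            msg = child_sock.recv(4096)
            if not msg:                 # caller hung up; exit cleanly
                os._exit(0)
            # HMAC-SHA256 as a stand-in "signature" over the message
            child_sock.sendall(hmac.new(key, msg, hashlib.sha256).digest())
    child_sock.close()                  # parent only keeps its end
    return parent_sock

sock = spawn_signer(b"device-private-key")
sock.sendall(b"token-binding-challenge")
sig = sock.recv(64)                     # 32-byte MAC from the child
```

Even this toy shows the shape of the API: fast per-request signing (the
Token Binding case) over a local channel, with the key material confined
to one process.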

> In any case, simply storing the private keys in the clear for malware to
> attack is nuts, IMO.  Don't be lazy.  Link to a key manager that gets key
> security right on each platform.

Yup. I am so pissed at the folk who spent so much time and effort railing
against 'trustworthy computing'. They were carrying water for the NSA even
if they didn't know it.

>> I am particularly interested in Matt Blaze's proxy re-encryption work that
>> was mentioned here. That can be added in as an extra layer of security.
> It's cool.  How can it be used to increase security of keys stored on
> devices?

It means that each device can have a separate decryption key for a single
public encryption key.

This gets us into the realm of trusted untrusted services. Like the key
generation scheme I described a while back, the Mesh has the property that:

1) The Mesh itself is always an untrusted service. It does not offer any
confidentiality guarantee and therefore cannot suffer confidentiality
breach. It offers only a qualified service guarantee that is time bounded
by blockchain techniques.

2) A Mesh portal is a trusted, untrusted service.

2a) A Mesh portal cannot compromise a user unless a user screws up.

2b) If a user screws up, a Mesh portal can help the user mitigate the
damage as a trusted service.

In this case, let's say Alice has three devices, A, B, and C, where A is
the admin machine holding the decryption key that corresponds to the
encryption key. A is used to generate a recryption/decryption keypair for
each of B and C. The recryption keys go to the portal.

In normal use, the portal can't decrypt the message because it only has a
recryption key.

If Alice loses device C, she can tell the portal to stop recrypting to C.
The portal still can't read any messages, but Alice is trusting it to help
mitigate the risk.
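For the curious, here is a toy of the additive key split behind that flow.
This is a sketch only: textbook ElGamal over a small Mersenne prime, with
the master secret X split as d + (X - d). It is not Blaze et al.'s actual
scheme and the parameters are nowhere near secure; it just shows why the
portal's recryption share alone cannot recover the plaintext.

```python
import secrets

p = 2**127 - 1          # Mersenne prime; toy modulus, NOT secure parameters
g = 3
q = p - 1               # exponents are taken mod p - 1

X = secrets.randbelow(q)        # Alice's master decryption key (device A)
e = pow(g, X, p)                # the single public encryption key

def encrypt(m: int, e: int):
    """Standard ElGamal: ciphertext (g^k, m * e^k)."""
    k = secrets.randbelow(q)
    return pow(g, k, p), m * pow(e, k, p) % p

# Admin device A provisions device B: B holds d, the portal holds X - d.
d = secrets.randbelow(q)
r = (X - d) % q

def recrypt(c1: int, c2: int, r: int):
    """Portal strips g^(k*(X-d)); it still cannot recover m on its own."""
    return c1, c2 * pow(pow(c1, r, p), -1, p) % p

def device_decrypt(c1: int, c2: int, d: int) -> int:
    """Device B removes the remaining g^(k*d) factor."""
    return c2 * pow(pow(c1, d, p), -1, p) % p

m = int.from_bytes(b"hello", "big")
c1, c2 = encrypt(m, e)
m_B = device_decrypt(*recrypt(c1, c2, r), d)
assert m_B == m
```

Revoking device C in this picture is just deleting the portal's share
X - d_C: without the portal's half-decryption, C's key d_C is useless.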

> Actually, I meant privacy issues similar to what we see today with
> third-party cookies that enable advertisers to track your web browsing
> behavior.  The initial "killer app" for the Mesh seems to be a password
> manager, which should do a reasonable job of privacy protection, but as you
> said above, eventually the goal would be stronger authentication using
> PKI.  If a user exposes the same public key to multiple sites, those sites
> can collude to track the user's behavior on the web.  The FIDO initiative
> uses semi-anonymous assertions, as does DAA, to help solve this issue.

This is where I want help from folk here. The Mesh is like the Web: it is a
platform for deploying crypto that is intended to let people plug lots of
things in.

Again, we can apply the recryption trick. So I create a username N paired
with a public key e^X. I then provision each connected device with a
device-specific authentication key d and recryption key e^(X-d). These are
encrypted to the device's master encryption key and provisioned via the Mesh.

This allows each device to have access to that site via the authentication
key without linking it to the profile in any direct way.

> However, I agree that you don't have to solve every problem in the first
> pass.  Simply giving the user the choice of opting into the Mesh, with its
> potential key-based tracking limitation, is probably good enough to start.
> Most people won't care, I think.  Longer term, I think you can take a page
> out of the FIDO or DAA playbook and upgrade both privacy and authentication
> strength.

The aim is to get to the point we reached with the Web. Until late 1994 we
were pushing a heavy rock uphill. After whitehouse.gov went online, we had
reached the top and the rock rolled down the other side of its own accord.

I am sure lots of people will be able to add FIDO or go beyond.

>> A major challenge in authentication is doing it in the presence of
>> malware.  In a magical world filled with unicorns and fairies, we could
>> imagine that users have zero malware-infected devices, and build our
>> security model based on that.  Alternatively, we could try to solve the
>> actual problem the world has given us, instead of wimping out and letting
>> users get pwned.
> I see way too many security people give up on delivering real security by
> starting with, "Assuming there's no malware..."  That's a good point to
> stop listening to those security experts, at least when it comes to
> authentication.

Well, one of the reasons Comodo is backing this is that it creates brand
awareness for our anti-virus.