[Cryptography] Follow up on my password replacement idea

Bill Cox waywardgeek at gmail.com
Wed Sep 23 09:29:39 EDT 2015


On Tue, Sep 22, 2015 at 6:08 PM, Phillip Hallam-Baker <phill at hallambaker.com
> wrote:

> Protecting those private keys is difficult.  Malware might sniff them when
>> the user unlocks them.  A co-worker and I would like to build an
>> open-source, hardware-backed signing library with a common API on the
>> major platforms.  For example, the new Intel SGX extensions can enable more
>> secure rapid key signing.  Some operations have to be super-fast, like
>> Token Binding signature operations, while others, such as unlocking a key
>> when a user enters a password, can be slower, and may rely on signing in
>> secure hardware, such as a TPM.
>>
>
> Protecting the keys is easier if every machine has different keys and keys
> are never ever removed from devices, only deleted.
>

Why are multiple keys on multiple devices more secure than a single key on
those same devices?  An attacker need only steal one of them to PWN user
accounts.  The attack surface is about the same.

At a minimum, you want signing to be done in a separate process running as a
user in its own group, without giving read permission to any of its
files.  However, that leaves the entire OS as an attack surface, and as
time has shown, we can't fully secure an entire OS, at least not while
keeping that OS general purpose.  Reducing the attack surface even further
makes sense, especially if it's as easy as linking to a shared library.
We're hoping to make it that simple.
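
To make that concrete, here is a toy sketch (not our actual library -
the paths, protocol, and key format are invented for illustration) of a
signing service that runs as its own user, keeps the key file readable
only by that user, and hands back nothing but signatures over a local
socket:

    import os
    import socket

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SOCK_PATH = "/run/signerd/signerd.sock"    # hypothetical socket path
    KEY_PATH = "/var/lib/signerd/signing.key"  # hypothetical key file: 32 raw bytes,
                                               # mode 0600, owned by the signer user

    def load_key():
        # Only the dedicated signing user can read this file; ordinary
        # processes (and malware running as the desktop user) cannot.
        with open(KEY_PATH, "rb") as f:
            return Ed25519PrivateKey.from_private_bytes(f.read())

    def serve():
        key = load_key()
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK_PATH)
        os.chmod(SOCK_PATH, 0o660)  # members of the signer group may connect
        srv.listen(8)
        while True:
            conn, _ = srv.accept()
            with conn:
                msg = conn.recv(4096)        # the message to sign comes in...
                conn.sendall(key.sign(msg))  # ...and only a signature goes out

    if __name__ == "__main__":
        serve()

Malware running as the desktop user can still ask for signatures, which
is bad, but it can no longer copy the raw key and replay it later from
some other machine.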

In any case, simply storing the private keys in the clear for malware to
attack is nuts, IMO.  Don't be lazy.  Link to a key manager that gets key
security right on each platform.

I am particularly interested in Matt Blaze's proxy re-encryption work that
> was mentioned here. That can be added in as an extra layer of security.
>

It's cool.  How can it be used to increase the security of keys stored on
devices?


> One criticism I'm sure you hear is that the Mesh publishes public keys to
>> the world that can be used to "track" users.
>>
>
>  ...

> But using the Mesh is going to expose people to a degree of traffic
> analysis, no question of that. Just like the PGP key servers do. I am ok
> with that if the amount of leakage isn't significant compared to what we
> leak by using IP in the first place.
>

Actually, I meant privacy issues similar to what we see today with
third-party cookies that enable advertisers to track your web browsing
behavior.  The initial "killer app" for the Mesh seems to be a password
manager, which should do a reasonable job of privacy protection, but as you
said above, eventually the goal would be stronger authentication using
PKI.  If a user exposes the same public key to multiple sites, those sites
can collude to track the user's behavior on the web.  The FIDO initiative
uses semi-anonymous assertions, as does DAA, to help solve this issue.
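
For what it's worth, here is a rough sketch of the per-site key idea -
my illustration, not the Mesh's or FIDO's actual scheme, and it assumes
Ed25519 keys plus the Python "cryptography" package.  The device derives
an independent key pair for each relying party from one device secret,
so two sites comparing the public keys they see can't link them to the
same person:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def site_key(device_secret: bytes, origin: str) -> Ed25519PrivateKey:
        # One independent signing key per relying party, derived from a
        # single 32-byte device secret.
        seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=origin.encode()).derive(device_secret)
        return Ed25519PrivateKey.from_private_bytes(seed)

    device_secret = b"\x00" * 32  # in practice: random, kept in the hardware-backed store

    k1 = site_key(device_secret, "https://example.com")
    k2 = site_key(device_secret, "https://example.org")

    def raw_public(k: Ed25519PrivateKey) -> bytes:
        return k.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    assert raw_public(k1) != raw_public(k2)  # distinct, unlinkable keys per site

In a real deployment the device secret would of course sit in the
hardware-backed store rather than in application memory as in this toy.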

However, I agree that you don't have to solve every problem in the first
pass.  Simply giving the user the choice of opting into the Mesh, with its
potential key-based tracking limitation, is probably good enough to start.
Most people won't care, I think.  Longer term, I think you can take a page
out of the FIDO or DAA playbook and upgrade both privacy and authentication
strength.


>
> Yep, these are tough problems.  To take this to the next level, the Mesh
>> may want to consider more than just signatures.  For example, if suddenly a
>> device is correctly authenticating to the Mesh from Russia when all the
>> user's other devices remain in Idaho, that's a signal that maybe more
>> authentication factors are needed.  This could be more devices, answering
>> security questions, or sending an email to the user's default account
>> asking for confirmation.
>>
>
> An authentication profile could specify geolocation data and that could be
> used to limit access. But then you leak more information.
>
>
A major challenge in authentication is doing it in the presence of
malware.  In a magical world filled with unicorns and fairies, we could
imagine that users have zero malware-infected devices, and build our
security model on that assumption.  Alternatively, we could try to solve the
actual problem the world has given us, instead of wimping out and letting
users get PWNed.

I see way too many security people give up on delivering real security by
starting with, "Assuming there's no malware..."  That's a good point to
stop listening to those security experts, at least when it comes to
authentication.

So, how can we authenticate users in the presence of malware?  It's
complicated, ugly, dirty, and painful.  You can't simply "solve" it with
PKI.  We have to take every advantage we can get, and even then expect to
lose way too often.  Some steps we can take in no particular order are:

* Secure secret keys and passwords in a far less complex and hopefully
malware-free location, such as a TPM or SGX "enclave".
* Track as much data as users are willing to let you track to help
distinguish them from malware-bots.
* Don't allow any critical state changes (such as financial transactions)
without verifying that the user is physically present at the device and
intends to allow a significant state change.
* Use multiple devices for authentication, at least when important state
changes need to be approved.
* Have multiple levels of authentication, each requiring a higher bar of
confidence that the user is genuine (a rough sketch of such a step-up
policy follows this list).
* Enable recovery of PWNed profiles using every bit of data at your
disposal.
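
As promised, here is a rough sketch of what a step-up policy might look
like.  The action names, factor names, and weights are all invented for
illustration; the only point is that riskier actions should demand more
independent evidence than a password typed on a possibly infected
machine:

    from enum import IntEnum

    class Level(IntEnum):
        LOW = 1     # read-only access
        MEDIUM = 2  # routine changes
        HIGH = 3    # money moves, account deletion

    # Hypothetical actions and the confidence level each one demands.
    REQUIRED_LEVEL = {
        "view_balance": Level.LOW,
        "change_email": Level.MEDIUM,
        "wire_transfer": Level.HIGH,
    }

    # Very rough weights for the factors the client actually presented.
    FACTOR_WEIGHT = {
        "password": 1,
        "hardware_key_signature": 2,   # TPM/SGX-backed assertion
        "second_device_approval": 2,
        "physical_presence_test": 1,   # button press, biometric, etc.
    }

    def allowed(action, presented_factors):
        confidence = sum(FACTOR_WEIGHT.get(f, 0) for f in presented_factors)
        return confidence >= REQUIRED_LEVEL[action]

    # A password alone should not move money from a possibly infected machine.
    assert not allowed("wire_transfer", {"password"})
    assert allowed("wire_transfer",
                   {"password", "hardware_key_signature", "second_device_approval"})
    assert allowed("view_balance", {"password"})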

Solid authentication is the one place where we really need to leak a ton of
information, preferably only to a semi-trusted third party.  In a world
with an exploding number of malware-infested devices per user, we are in a
constant arms race to do a better job of discriminating bots from real
people than the attackers do at making user-like bots.  You also need to
keep the battle in those terms and secure the secret keys against malware
with hardware backing; otherwise you enable low-wage-labor attacks in which
attackers authenticate remotely to PWN user accounts.  That is attackers'
preferred method today - live attacks against your accounts through your
own device are rare.

This is where a semi-trusted third party can be valuable.  The entire
history of a user's MAC addresses, IP addresses, geolocations, operating
systems, web browsers, and every other aspect of the user's behavior can
all be taken into account, along with passwords and device-based PKI
assertions, in making an authentication decision.  We could even consider
typing cadence as the user types a password, which is highly identifying,
if we could get the device to send that data.  If we knew you used your
credit card 3 miles away to buy lunch an hour ago, and that your cell phone
says it moved from the restaurant back to your office, where you are now
authenticating with your computer, that would be a solid authentication
signal.
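
Here is a toy sketch of how such a third party might fold those signals
into one decision.  The signal names and weights are invented; a real
system would learn them from data rather than hard-code them:

    def risk_score(signals):
        score = 0.0
        if signals.get("new_device"):                score += 0.4
        if signals.get("new_ip_country"):            score += 0.3
        if signals.get("impossible_travel"):         score += 0.6  # Idaho an hour ago, Russia now
        if not signals.get("hardware_key_ok"):       score += 0.5  # no TPM/SGX-backed assertion
        if not signals.get("typing_cadence_ok"):     score += 0.2
        if signals.get("phone_geolocation_matches"): score -= 0.3  # phone agrees with login location
        return max(score, 0.0)

    def decision(signals):
        s = risk_score(signals)
        if s < 0.3:
            return "allow"
        if s < 0.8:
            return "step-up"  # ask for another device, a security question, an email confirmation...
        return "deny"

    print(decision({"hardware_key_ok": True, "typing_cadence_ok": True,
                    "phone_geolocation_matches": True}))  # allow
    print(decision({"new_device": True, "new_ip_country": True,
                    "impossible_travel": True}))          # deny

The particular numbers don't matter; what matters is that the decision
can lean on all of that history instead of a password alone.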

Doing authentication and account recovery well is _very_ hard.  I'm still
just learning the basics.  This is one reason it might make sense to simply
let a monster corporation do most of the authentication for the world, in
the manner we see today when web sites offer "Log in with Facebook or
Google".  I've personally given up on authenticating users on my own web
sites.  For all future sites I build, I plan to throw the authentication
over the wall to the monster corps.

Unless you succeed with the Mesh :)

Bill