[Cryptography] Thoughts about keys

Ben Laurie ben@links.org
Sun Sep 1 08:55:08 EDT 2013


On 25 August 2013 21:29, Perry E. Metzger <perry@piermont.com> wrote:

> [Disclaimer: very little in this seems deeply new, I'm just
> mixing it up in a slightly different way. The fairly simple idea I'm
> about to discuss has germs in things like SPKI, Certificate
> Transparency, the Perspectives project, SSH, and indeed dozens of
> other things. I think I even suggested a version of this exact idea
> several times in the past, and others may have as well. I'm not going
> to pretend to make claims of real originality here, I'm more
> interested in thinking about how to get such things quite widely
> deployed, though it would be cool to hear about prior art just in case
> I decide to publish a tech report.]
>
> One element required to get essentially all messaging on the
> Internet end-to-end encrypted is a good way to find out what people's
> keys are.
>
> If I meet someone at a reception at a security conference, they might
> scrawl their email address ("alice@example.org") for me on a cocktail
> napkin.
>
> I'd like to be able to then write to them, say to discuss their
> exciting new work on evading censorship of mass releases of stolen
> government documents using genetically engineered fungal spores to
> disseminate the information in the atmosphere worldwide.
>
> However, in our new "everything is always encrypted" world, I'll be
> needing their encryption key, and no one can remember something as
> long as that.
>
> So, how do I translate "alice@example.org" into a key?
>
> Now, the PGP web-of-trust model, which I think is broken, would have
> said "check a key server, see if there's a reasonable trust path
> between you and Alice."
>
> I have an alternative suggestion.
>
> Say that we have a bunch of (only vaguely) trustworthy organizations
> out in the world. (They can use crypto-based log protocols of various
> kinds to make sure you don't _really_ need to trust them, but for the
> moment that part doesn't matter.)
>
> Say that Alice, at some point in the past, sent an email message,
> signed in her own key, to such an organization's key server, saying in
> effect "this is alice at example.org's key".
>
> At intervals, the trustworthy organization (and others like it) can
> send out email messages to Alice, encrypted in said key, saying "Hi
> there! Please reply with a message containing this magic cookie,
> encrypted in our key, signed in yours."
>
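
A sketch of that round trip, for concreteness: the server encrypts a
random magic cookie to Alice's registered key, and Alice proves control
by sending it back signed. This uses the PyNaCl library; the function
names, and the choice of separate encryption and signing keys, are my
own assumptions rather than anything specified above.

    # Minimal sketch of the periodic proof-of-control exchange (PyNaCl).
    import os
    from nacl.public import PrivateKey, SealedBox
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # Alice's long-term keys: one for receiving encrypted challenges, one for signing.
    alice_box_key = PrivateKey.generate()    # decryption key, stays with Alice
    alice_sign_key = SigningKey.generate()   # signing key, stays with Alice

    # What the key server stores after Alice's initial registration email.
    registered = {
        "address": "alice@example.org",
        "enc_key": alice_box_key.public_key,      # used to encrypt challenges to her
        "verify_key": alice_sign_key.verify_key,  # used to check her replies
    }

    def make_challenge(record):
        """Server: pick a random magic cookie and encrypt it to the registered key."""
        cookie = os.urandom(32)
        ciphertext = SealedBox(record["enc_key"]).encrypt(cookie)
        return cookie, ciphertext            # server remembers cookie, mails ciphertext

    def answer_challenge(ciphertext, box_key, sign_key):
        """Alice: decrypt the cookie and send it back signed with her key."""
        cookie = SealedBox(box_key).decrypt(ciphertext)
        return sign_key.sign(cookie)         # signed reply goes back by email

    def check_answer(record, expected_cookie, signed_reply):
        """Server: a valid reply shows control of both the mailbox and the private key."""
        try:
            return record["verify_key"].verify(signed_reply) == expected_cookie
        except BadSignatureError:
            return False

    cookie, challenge = make_challenge(registered)
    reply = answer_challenge(challenge, alice_box_key, alice_sign_key)
    assert check_answer(registered, cookie, reply)
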
> If a third party needing the key for alice@example.org queries the
> vaguely trusted server, it will then give them information like "For
> the past six years, we've been sending alice@example.org emails every
> couple of weeks asking her to reply to demonstrate she controls a
> particular public key, and she always has -- new keys have always been
> signed in the old one, too. Here's a log, including various sorts of
> widely witnessed events and hash chains so that if we were lying about
> this we had to be planning to lie about it for a very long time."
>
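
And a sketch of what each log entry could look like, standard library
only: an append-only hash chain in which each attestation commits to
everything before it, so a retroactive lie means rewriting the chain
from the point of the lie onwards. A real deployment would want signed
tree heads and Merkle audit paths (as in Certificate Transparency); the
record fields here are invented for illustration.

    import hashlib, json, time

    def entry_hash(prev_hash, record):
        payload = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(prev_hash + payload).hexdigest()

    log = []            # list of (record, hash) pairs
    prev = "0" * 64     # genesis value

    def append(record):
        global prev
        prev = entry_hash(prev.encode(), record)
        log.append((record, prev))

    # One entry per successful round trip with Alice.
    append({"address": "alice@example.org",
            "key_fingerprint": "abc123...",   # placeholder fingerprint
            "verified_at": int(time.time())})

    # Anyone can replay the chain; a retroactive edit changes every later
    # hash, so lying about history means lying consistently from the start.
    check = "0" * 64
    for record, h in log:
        check = entry_hash(check.encode(), record)
        assert check == h
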
> Now of course, in the real world, who wants to go through the effort
> of hand replying to query messages to establish a key over time?
> Instead, Alice has some actually trusted software running on her
> computer at home.
>
> She can either leave it to automatically do IMAP queries against her
> mailbox (which could even be GMail or what have you) and reply on her
> behalf, or it could ask her to unlock her key while she's at home in
> the morning having her coffee. However, I think the former is actually
> preferable. We'd rather have an imperfect system that is effortless to
> use but can be bypassed by physically breaking into someone's home.
> (After all, if you did that, you could probably bug Alice's hardware
> anyway.)
>
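
A sketch of what that unattended responder might look like over IMAP and
SMTP, standard library only. The host names, credentials and the
challenge subject line are invented for illustration, and answer_fn
stands for something like the answer_challenge() sketched earlier,
adapted to return an armoured text body.

    import email, imaplib, smtplib
    from email.message import EmailMessage

    IMAP_HOST, SMTP_HOST = "imap.example.org", "smtp.example.org"  # assumptions
    USER, PASSWORD = "alice@example.org", "app-password"           # assumptions

    def poll_and_reply(answer_fn):
        imap = imaplib.IMAP4_SSL(IMAP_HOST)
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # Assume the key server marks its challenges with a fixed subject line.
        _, data = imap.search(None, '(UNSEEN SUBJECT "key-proof-challenge")')
        for num in data[0].split():
            _, parts = imap.fetch(num, "(RFC822)")
            challenge = email.message_from_bytes(parts[0][1])
            reply = EmailMessage()
            reply["From"] = USER
            reply["To"] = challenge["From"]
            reply["Subject"] = "Re: " + (challenge["Subject"] or "")
            # answer_fn: decrypt the cookie, sign it, return an ASCII-armoured string
            # (assumes a plain-text, non-multipart challenge body).
            reply.set_content(answer_fn(challenge.get_payload()))
            with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
                smtp.login(USER, PASSWORD)
                smtp.send_message(reply)
        imap.logout()

    # Run unattended from cron or a simple loop, e.g. every few hours.
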
> Alice probably also needs to make sure someone isn't spoofing her
> replies by accessing her IMAP box and replying for her (using a key
> known to the attacker but presumably not to Alice) and then deleting
> the query, but the mere absence of periodic pings from the trusted
> party may be enough to create suspicion, as might doing one's own
> queries against the trusted parties and noticing that the key isn't
> your own.
>
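
The detection side could be as simple as the audit below: ask each
vaguely-trusted server what key it would hand out for your own address,
and raise the alarm if it is not yours or if the challenges have gone
quiet. The lookup URL format, the fingerprint encoding and the threshold
are all invented for illustration.

    import json, time, urllib.parse, urllib.request

    MY_FINGERPRINT = "abc123..."              # fingerprint of Alice's real key
    SERVERS = ["https://keylog-1.example.net",
               "https://keylog-2.example.net"]
    MAX_SILENCE = 30 * 24 * 3600              # a month without a ping is suspicious

    def audit(last_challenge_seen):
        alerts = []
        for server in SERVERS:
            # Hypothetical query endpoint; a real server would define its own.
            url = server + "/lookup?address=" + urllib.parse.quote("alice@example.org")
            with urllib.request.urlopen(url) as resp:
                record = json.load(resp)
            if record["key_fingerprint"] != MY_FINGERPRINT:
                alerts.append(server + " is advertising a key that is not mine")
        if time.time() - last_challenge_seen > MAX_SILENCE:
            alerts.append("no proof-of-control challenges seen lately; "
                          "someone may be answering and deleting them")
        return alerts
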
> Presumably, of course, there should be a bunch of such servers out
> there -- not so many that the traffic becomes overwhelming, but not so
> few that it is particularly feasible to take the system off the
> air. (One can speculate about distributed versions of such systems as
> well -- that's not today's topic.)
>
> So, this system has a bunch of advantages:
>
> 1) It doesn't require that anyone associated with the administrators
> of the domain name you're using for email cooperate with deploying
> your key distribution solution. Alice doesn't need her managers to agree
> to let her use the system -- her organization doesn't even need to
> know she's turned it on. Yet, it also doesn't allow just anyone to
> claim to be alice@example.org -- to put in a key, you have to show you
> can receive and reply to emails sent to the mailbox.
>
> 2) You know that, if anyone is impersonating Alice, they had to have
> been planning it for a while. In general, this is probably "good
> enough" for a very large number of purposes, and much better than a
> perfect system that no one ever uses.
>
> You can always trade a key hash with Alice personally, of course, and
> if you do, perhaps she sets her software to personally alert you to
> key refresh events and such (which we'll gloss over.) However, you
> don't *have* to do it that way -- the system makes it possible to get
> a reasonable amount of de facto trust without needing you to make many
> decisions that ordinary people have trouble making, too. None of this
> "I know this is Charlie's real key, but do I think Charlie is
> trustworthy to sign Alice's key, or would he have been sloppy"
> business that PGP imposes.
>
> (We can gloss over details like a protocol for Alice to update her key
> at intervals, or to make repeated claims that she was stupid and lost
> her key and needed to generate a new one, or what have you. I think
> they're solvable, and I don't want to clog up the gist of the idea
> with them.)
>
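
One of those glossed-over details, sketched anyway for concreteness: a
key update is just a record naming the new key, signed with the old one,
which the server verifies against the key it has on file and then
appends to its log. PyNaCl again; the record format is my own invention.

    import json, time
    from nacl.signing import SigningKey
    from nacl.encoding import HexEncoder

    old_key = SigningKey.generate()   # the key the server already knows
    new_key = SigningKey.generate()   # the replacement Alice wants to roll to

    rollover = {
        "address": "alice@example.org",
        "new_verify_key": new_key.verify_key.encode(HexEncoder).decode(),
        "timestamp": int(time.time()),
    }
    # Signed with the OLD key, so the continuity is checkable from the log.
    signed_rollover = old_key.sign(json.dumps(rollover, sort_keys=True).encode())

    # The server checks it against the key it has on file before switching
    # (raises BadSignatureError if the signature doesn't verify).
    old_key.verify_key.verify(signed_rollover)
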
> 3) The system can be extremely lightweight to implement.
>

Not sure about _extremely_, but I certainly agree it should be relatively
straightforward. And since I have an interest in the "Here's a log,
including various sorts of widely witnessed events and hash chains so that
if we were lying about this we had to be planning to lie about it for a
very long time" part (not sure I agree about that last bit, btw), I hereby
volunteer to implement that part if there are people willing to implement
the rest...


>
> Disadvantages are obvious. It isn't "perfect" in that, mostly, it just
> depends on temporal continuity and widely witnessed information to
> discourage forgery. On the other hand, that's a damn sight better
> than a universal key management system that doesn't exist.
>
> A few more notes:
>
> First, I said nothing about "certificates". I've contended for a very
> long time that I'm not sure what the function of the things really
> is. What I want in the real world is the sort of attestation of long
> term use that got discussed above. This system could use X.509 certs
> as a pure data format, of course, or PGP keys, or raw integers in the
> SSH style. Who cares.
>
> Second, I've said nothing really about what such keys could be used
> for. I'm thinking along the lines of next generation secure IM and
> Email systems, but really, such things could be used for anything. If
> people particularly insist on having separate keys for different
> purposes, they'll need to arrange to store some sort of label along
> with each of several keys in the key servers, of course, and they'll
> need to arrange to periodically sign round trips for all those
> keys. Personally, I think this is probably more trouble than it is
> worth, but I could be persuaded.
>
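
If separate keys per purpose were wanted anyway, the server-side record
might look something like the sketch below, with one labelled key per
use and its own proof-of-control timestamp; the labels and record shape
are invented for illustration.

    key_records = {
        "alice@example.org": {
            "email": {"fingerprint": "abc123...", "last_proved": 1693526400},
            "im":    {"fingerprint": "def456...", "last_proved": 1693440000},
        }
    }

    def lookup(address, label="email"):
        """Return the record registered for one purpose, if any."""
        return key_records.get(address, {}).get(label)
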
> Third, presumably one wants a means to query such databases that
> doesn't allow traffic analysis. Mix networks including Tor are
> probably the answer on that. Without such a mechanism, listening in on
> the query traffic becomes a very good way to trace out social
> networks.
>
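
For the traffic-analysis point, routing the lookup through a local Tor
client is about one line of configuration. This sketch assumes Tor's
default SOCKS port (9050), the "requests" library with SOCKS support
installed, and the same hypothetical lookup endpoint as above.

    import requests

    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}  # socks5h: DNS resolved via Tor too

    def private_lookup(address):
        url = "https://keylog-1.example.net/lookup"    # hypothetical endpoint
        resp = requests.get(url, params={"address": address},
                            proxies=TOR_PROXY, timeout=60)
        resp.raise_for_status()
        return resp.json()
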
> Perry
> --
> Perry E. Metzger                perry@piermont.com