[Cryptography] What is a secure conversation? (Was: online forums...)

ianG iang at iang.org
Mon Dec 30 00:52:43 EST 2013


On 27/12/13 22:12 PM, Theodore Ts'o wrote:
>>> I actually addressed this issue a couple of weeks back as a
>>> hypothetical.  So let's think about it:  Just what *would* a "more
>>> secure" version of this discussion (ignoring the actual technology)
>>> look like?  Keep in mind that, by design, anyone can join by sending
>>> a simple request to the moderator.  They'll promptly receive copies
>>> of all messages.  Given this, what's your threat model?
>>
>>
>> WYTM?  Then the next step is we list out *all the threats we can
>> think of* ... without prejudice.
>>
>> Later on we do some risk analysis and decide which are serious or not.
>
> I think we should do both steps at the same time.  But if you want to
> separate them out, that's fine --- but then we shouldn't start
> proposing using 4GB worth of memory whenever we need to execute a
> string-to-key algorithm, or pursuing other solutions, until *after*
> we've done this risk analysis.


Threat modelling is separated because the threat is the domain of the 
attacker, whereas the risk is the domain of the defender.

To the extent that you can think like an attacker at the same time as 
thinking like a defender, they can be combined.  We all do a little of that.

The danger is that we impose our wisdom of defence on the attacker's 
mind, and we design a system that defends against what we know how to 
defend, and justify that attack model post-facto.

C.f. SSL [0] hence Adi's "cryptography is typically bypassed." [1]  Or 
being inside your own OODA loop.

Initially at least, the listing of threats should be treated as a 
whiteboarding session;  there are no rules, get it all up there.  Then:


> Personally, part of talking about listing the threats also includes
> doing the risk analysis, because otherwise the list can easily become
> unbounded, and because there are people, overly inclined to paranoia,
> who will start pursuing solutions and demanding that we make changes
> to mailing lists, protocols, open source software, etc., prematurely.

Right, and that is why every listed threat has to be filtered.  In risk 
analysis, they do teach you to do a preliminary pass and drop stuff that 
is on the face of it unrealistic (at least in my class they did).
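That two-pass process, whiteboard everything first, then run a preliminary filter, can be sketched in a few lines. This is purely illustrative (not from the post); the threat names, the 1-to-5 scoring scale, and the threshold are all arbitrary assumptions:

```python
# A hypothetical threat register with a preliminary filter pass, as
# taught in basic risk analysis.  Scales and threshold are assumed.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (near-certain), assumed scale
    impact: int      # 1 (trivial) .. 5 (catastrophic), assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def preliminary_filter(threats, threshold=4):
    """Drop threats that are, on the face of it, unrealistic."""
    return [t for t in threats if t.score >= threshold]

# First pass: get it all up on the whiteboard, without prejudice...
board = [
    Threat("passive wiretap of list traffic", 4, 2),
    Threat("malicious insider steering the process", 2, 5),
    Threat("alien mind control of the moderator", 1, 1),
]
# ...then the preliminary pass drops the obviously unrealistic ones,
# once they've had their day in the sun.
kept = preliminary_filter(board)
```

The point of keeping the passes separate is exactly the one above: the whiteboard stage imposes no defender's judgment, and the filter stage is an explicit, reviewable decision.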

> (Having gotten all sorts of demands from really clueless
> people about changes that they think I should make to the Linux
> /dev/random driver, perhaps I'm a bit more sensitive about this than
> others.)

I've got a mighty fine LavaLamp driver for you :)  Seriously though, 
this is just historical.  We have every expert under the sun saying that 
the RNG in the OS is a prime risk; post-Snowden, we have a documented 
case study of a breach of an RNG; and we have various other illuminating 
events:  FreeBSD, Android, Debian.
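One standard hedge against a single compromised source, the lesson usually drawn from those events, is to mix several independent entropy sources through a hash, so the pool is no weaker than its strongest input. A minimal sketch, not the Linux driver; the source names are hypothetical stand-ins:

```python
# Illustrative entropy mixing: the output is unpredictable as long as
# ANY one input is, so a backdoored or weak source cannot hurt.
import hashlib
import os
import time

def gather_sources():
    # In a real driver these would be interrupt timings, device noise,
    # RDRAND, etc.; os.urandom stands in for a hardware source here.
    return [
        os.urandom(32),                             # "hardware" source
        time.perf_counter_ns().to_bytes(8, "big"),  # timing jitter
        os.getpid().to_bytes(4, "big"),             # weak, but harmless to add
    ]

def mix(sources):
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each input so source boundaries are unambiguous.
        h.update(len(s).to_bytes(4, "big"))
        h.update(s)
    return h.digest()

seed = mix(gather_sources())  # 32 bytes for seeding a DRBG
```

Adding a weak source never subtracts from the pool, which is why the "throw everything in" design survives an attacker who controls some, but not all, of the inputs.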

There is going to be a lot of attention on this point for RNGs, and for Linux.

Personally I'd take it as *an opportunity*.  Use the energy to assemble 
a better understanding, and build towards it.  You won't get this sort 
of attention in a year's time.


> I'm reminded, though, of the theory that one of the reasons why former
> Vice President Cheney got so enthusiastic about waterboarding and
> torture, and other forms of overkill in the "war on terror" (including
> warrantless wiretapping) was because he got unfiltered access to the
> list of all "potential threats", before the "is this really a credible
> threat" filter had been applied, and this caused his paranoia to race
> out of control.


Indeed.  We presume he demanded access to the raw material; what can you 
say or do to the vice commander in chief?

Now, later, we all know what was happening:  Cheney intervened so as to 
pervert the process towards an already laid-out game plan.  He wasn't 
paranoid or crazy, he was only pretending to be, so that he had 
plausible deniability.

Another threat that we should consider in our list:  what happens if 
there is an insider in *our process* that has interests that are 
incompatible with ours, and pushes us to weaken our process (and improve 
his)?

C.f., the IETF's embattled crypto group.  We should use X.509 and 
CA-signed keys only :)


> Which is why I'm not all that enthusiastic about people making lists
> of random threats, and then seeing people proposing algorithms and
> changes, with apparently *no* serious risk analysis taking place.


Indeed.  So we have a quandary.  Do it one way, fall in one trap.  Do it 
another way, fall in another trap.  Is there a way to avoid all traps?

We know what doesn't work:  committees, broad-based low-level crypto 
tool analyses, government standards, consultancies.

What that leaves is, I think:  the business must appoint one person to 
take responsibility.  That person must make the decision to drop the 
unrealistic threats, once they've had their day in the sun.

The job and the person take on the successes as well as the failures.



iang

[0] http://iang.org/ssl/wytm.html
[1] http://financialcryptography.com/mt/archives/000147.html


