[Cryptography] The Laws (was the principles) of secure information systems design

Peter Fairbrother peter at m-o-o-t.org
Tue Jul 19 14:17:23 EDT 2016


On 19/07/16 17:01, Jon Callas wrote:
>
>> On Jul 12, 2016, at 1:30 PM, Peter Fairbrother <peter at m-o-o-t.org>
>> wrote:
>>
>> I've been revising the principles, and came up with this. It's an
>> early version.
>>
>> As ever, corrections and suggestions are welcome.
>>
>> Calling them Laws is perhaps a bit overreaching - but on reflection
>> I thought that's mostly what they are, break them and the system
>> won't be secure.
>
> I agree that it's overreaching. They're not laws, not in the way that
> a physicist or mathematician would consider laws. Relatively few are
> actionable. They're far more like aphorisms.
>
>>
>> I will put the Laws up on the 'net shortly, hopefully with a link
>> for suggestions and comments.
>>
>>
>>
>> The Laws of secure information systems design:
>>
>>
>>
>> Law 0: It's all about who is in control
>
> Is it? I know what you mean, and yet I have no idea what you mean.

:)

Hi Jon.

But at heart it's simple enough.

Design of secure information systems is in the first place about who 
immediately controls access to the information; later, about who controls 
other people's access to the information; and perhaps about who allows, eg, 
users to share information between themselves, and what information -

all of which comes down to: who is in control of access to the information.

By access I include, as appropriate, addition, deletion and modification, 
as well as "seeing" of information.

I was thinking of changing this (back) to:

  Law 0: Ultimately, it's all about who is in control

but it's ok as it is - in fact, it's the law I am happiest with.

Plus, it is a law like a physical law - the ultimate goal of a secure 
information system is to provide someone(s) with the means of control 
over adding, accessing, modifying, deleting information in or used by 
the system.

There is no other ultimate goal.

There may be considerations of availability, usability, etc which can 
and do overcome best control practice - but the whole point of having a 
secure system is to provide control.
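As a toy sketch (entirely my own illustration - the names and the dict-based 
access list are invented, not part of any real system), "control" over those 
four operations might look like this:

```python
# Toy sketch: security as control over who may add, see, modify or
# delete information. All names here are hypothetical.
OPS = {"add", "see", "modify", "delete"}

# Hypothetical access list: principal -> operations they control
acl = {
    "alice": {"add", "see", "modify", "delete"},  # fully in control
    "bob": {"see"},                               # may only look
}

def allowed(principal, op, acl=acl):
    """Return True iff `principal` controls operation `op`."""
    return op in OPS and op in acl.get(principal, set())

print(allowed("alice", "delete"))  # True
print(allowed("bob", "modify"))    # False
```

Everything else a secure system does is, on this view, in service of making 
that check hold in practice.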


>>
>>  Someone else is after your data
>
> Well, Robert Morris Sr. had a fantastic principle that security boils
> down to a 2x2 matrix of how much someone else wants your data, and
> how much you care if it's known. If you don't care about something
> and they don't care, then we have something close to a no-op. If you
> don't care and they do, then they're going to get it and you don't
> care. If you care and they don't, you're wasting effort -- and this
> is itself interesting. It's only in the quadrant where they care and
> you don't want them to get it that things get *really* interesting.
> Part of good security is learning to divert effort from protecting
> things you care about from things they don't care about to things
> they do.
>
> In any event, this law seems to imply that only that interesting
> quadrant exists.

Oh dear, I can see this is going to be very long!

There is a badly-expressed not-quite-law: "big secrets, little secrets - 
treat them all the same, so an attacker can't tell the difference, and 
neither can you, or anybody else."

So we should treat the "I don't care" by "he cares" quadrant the same as 
the "interesting" quadrant, and act as if someone else is after the data 
in it.

According to the above principle the attacker isn't supposed to know 
what's in a quadrant, so the "I don't care" by "he doesn't care" 
quadrant should be empty - because he cares about everything unless he 
knows he shouldn't, as he is after something but can't tell where it is.
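One concrete way to "treat them all the same" - my own illustration, not 
from the post - is to pad every message to one fixed size, so neither the 
attacker nor anyone else can tell a big secret from a little one by looking 
at lengths. The padding below is real; the encryption step is omitted, and 
any cipher would sit on top of it:

```python
# Toy sketch: pad all messages to a fixed bucket size so ciphertext
# length reveals nothing about which "quadrant" a message falls into.
import os

BUCKET = 256  # every padded message is exactly this many bytes

def pad(msg: bytes) -> bytes:
    """Prefix a length byte, then fill with random bytes up to BUCKET."""
    if len(msg) > BUCKET - 1:
        raise ValueError("message too long for the fixed bucket")
    return bytes([len(msg)]) + msg + os.urandom(BUCKET - 1 - len(msg))

def unpad(padded: bytes) -> bytes:
    """Recover the original message from its length byte."""
    return padded[1:1 + padded[0]]

big, little = b"the missile launch codes", b"x"
assert len(pad(big)) == len(pad(little)) == BUCKET  # indistinguishable
assert unpad(pad(big)) == big
```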


There is another argument about this, but I'll skip it for now. I'll 
just say

Law 1: Someone else is after your data

that is to say, someone else, someone you may not have considered.


Robert Morris Sr.: Never underestimate the attention, risk, money and 
time that an opponent will put into reading traffic.

Me: And never underestimate who your opponents are.


>>
>> Law 2: If it isn't stored it can't be stolen
>
> That's clever, but the more I think about it, the less I understand
> it. What does "store" mean? Is a value in a register stored? Is data
> in transit stored? I think I can argue that they both are and that it
> isn't unreasonable to think that everything is stored. If everything
> is stored, then the law is meaningless because it becomes a
> false-implies-true tautology.
>
> It also seems to be in contradiction to your #13.
>
>>
>> Law 3: Only those you trust can betray you
>
> I know what you're trying to say. I'd think something better might
> be, "talking securely to a snitch is still insecure." Or something
> like that.
>
>>
>>
>>
>>
>> Law 4: Attack methods are many, varied, ever-changing and eternal
>
> Is this any different than the aphorism that the defender has to
> protect everything, but the attacker only has to win once?
>
> Personally, I despise that particular aphorism because it is to me
> defeatist. It seems to basically say, "why bother, the attacker has
> already won."
>
> But each of them, your #4 and the one I reference are to me trying to
> say:
>
> Attackers adapt, nature doesn't. Friction doesn't develop
> countermeasures to lubricants.
>
>>
>> Law 5: The entire system is subject to attack
>
> Well, no.

Well bloody yes.

> Your system has to have an attack model and part of the
> attack model is a declaration of what's out of scope.

The system has to have an attack model? Why? A well-defined attack model?

So, if it's out of scope of some attack model, it isn't subject to attack?

I have this nice bottom land ...

> We assume that
> the attacker can't read your mind.

You may - I don't. And Kerckhoffs's Principle says we shouldn't.


> We assume that a value kept in
> registers won't end up on the memory bus.

Nope

> We assume that pinned memory won't get swapped.

Nope. That way lies defeat.

>
> Sometimes those assumptions are thwarted. For example, pinned memory
> might not be pinned in a crappy VM, the CPU might be emulated, the
> compiler might have optimized something away (like clearing the key).
> In such a case, the problem is real but the lesson is different.
>
> The attacker will attack the weakest point. The attacker will do the
> least work to win.

NO.

The attacker will in practice try things until he finds an attack which 
works (or stop attacking). He may well abandon an attack just before it 
would have become successful.

The order in which he tries attacks may (may - not all attackers are 
sane) be: first the ones which he thinks are against the weakest points, 
where he thinks the least work is required, or which he thinks are most 
likely to succeed.

But our appreciation of what the weakest points are and the attacker's 
appreciation of them will almost certainly differ.



Now your statement above, "The attacker will do the least work to win." 
may be a good worst-case assumption - but that is all it is.



>>
>> Law 6: A more complex system has more places to attack
>
> But is also has more places to defend.

Yes - it has to defend more places than a simpler system does.

> Complexity gets a bad rap for some good reasons. But let's also
> remember the Einstein quote that something should be as simple as
> possible and no simpler.

Indeed. I have no argument with that. Thing is, in practice, most 
systems do not follow the corollary:

Something should be as complex as necessary and no more complex.

Or Occam's razor: Entia non sunt multiplicanda praeter necessitatem - 
entities should not be multiplied beyond necessity.


>
> In my experience as a designer, complexity arguments are the first
> refuge of amateurs.
>
> In Don Norman's "Living with Complexity" there is much wisdom. He
> discusses what he calls "Tesler's Law" for Larry Tesler, that the
> complexity of a system remains a constant. You can shuffle it around
> but you can't eliminate it. He notes that two systems lauded for
> their simplicity are the classic "Macintosh" GUI and the Unix command
> line. The GUI is a few knobs that do a lot of complex things simply,
> and the command line is a lot of knobs each of which is simple but in
> combination are both powerful and subtle. Each is both simple or
> complex depending on your point of view.
>
>>
>> Law 7: Holes for good guys are holes for bad guys too
>>
>
> I think this can be tighter. Words like "holes" are pejorative to
> start with. No one, no one puts in holes for the good guys.


*Everyone* puts holes in for the good guys.

I'm not just talking about backdoors, or default passwords, or the like: 
no one could get access to anything in the system unless there were holes 
for the good guys.

Even real passwords are holes.


Even the
> LEA people whom I mock as wanting a "magic rainbow unicorn key" don't
> want a *hole*.

But that is what they are, holes. Pretending they aren't holes because 
management doesn't like the idea is, well

stupid?

irrational?

insecure??



>
>>
>>
>>
>> Law 8: Kerckhoffs's Principle rulez! - usually...
>
> Usually? And what do you mean? Why not just state, "the only part of
> your system that should be secret is the key" (Kerckhoff's principle)
> and be done with it?

First, that's not Kerckhoffs's principle, which is roughly: a 
cryptosystem should be secure even if everything about the system, 
except the key, is public knowledge.

That's very sensible: an enemy need only work all the rest out once, and 
if the principle didn't hold, that one break might compromise the whole 
system - whereas keys he has to work out on a one-by-one basis.

But it doesn't always apply. If the system has two ends and no variable 
keys - think of the Caesar cipher - then an attacker's not knowing how 
the system works might be enough to keep messages secure.
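The flip side is why Kerckhoffs's Principle usually rulez: a minimal sketch 
(my own illustration) of the Caesar cipher shows that once the system is 
known, its tiny key space of 26 shifts falls to trivial brute force - its 
only real secrecy was the secrecy of the system itself:

```python
# Toy Caesar cipher over the uppercase alphabet.
import string

ALPHA = string.ascii_uppercase

def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    return "".join(ALPHA[(ALPHA.index(c) + shift) % 26] for c in text)

ct = caesar("ATTACKATDAWN", 3)  # "DWWDFNDWGDZQ"

# An attacker who knows the system simply tries all 26 shifts:
candidates = [caesar(ct, -s) for s in range(26)]
assert "ATTACKATDAWN" in candidates
```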


>>
>> Law 9: A system which is hard to use will be abused or unused
>
> Here's my equivalent: The most secure system is the one people will
> actually use.

That makes no sense to me. Explain?

> Compare and contrast with complexity. Simple systems are often hard
> to use.
>
>>
>> law 10: Design for future threats
>
> Great advice. Completely non-actionable.

hmm, planning for 2048-bit RSA? Planning for quantum-resistant crypto? 
In designs for new cryptosystems (which is what the Laws are all about)?


>
> You have to work with the tools you have. You have to play the cards
> that are in your hand.
>
>>
>> Law 11: Security is a Boolean
>>
>
> Not really.
>
> This is in contrast to the old, "Security is a journey, not a
> destination." Booleans are destinations.
>
> There are plenty of security systems that are good for X but not for
> Y. Threat models again are key.
>
> At the end of it, security in general and cryptography in specific is
> about adaptation and picking the right thing.
>
> Let's also discuss other things like usability, scope,
> appropriateness to the threat, and so on.
>
> "Security is not a boolean" is in my opinion a better law than
> "security is a boolean".
>
> Worst of all, they're both great laws.

I'll leave that one - see other postings.
>
>> Law 12: People offering the impossible are lying
>
> The old aphorism, "if something sounds too good to be true, it
> probably is" is better.
>
>>
>> Law 13: Nothing ever really goes away
>
> Well. Lots of things do. I could list lots of them. I can also list a
> lot of things I wish would go away.
>
>>
>> Law 15: "Schneier's law" [1] holds illimitable dominion over all...
>> including these laws
>>
>
> Wouldn't it be better to just quote "Schneier's Law" as you put it,
> or to just simplify that to "it's easy to make a system you can't
> break, but hard to make one that others can't"?
>
>
> Here's a few of my own principles:
>
> The first rule of security is "You can't protect everything."
>
> No compromises is a compromise.
>
> Layering is important.
>
> There's no sense in strengthening the strong links of the chain.
>
> You can land a man on the moon, but you can't land a man on the sun.

Thanks,


a slightly drunk (it's the hottest day of the year, cold beer mmmmmmmmm)

-- Peter Fairbrother
>
> Jon
>
>>
>> -- Peter Fairbrother
>>
>> [1] "Anyone, from the most clueless amateur to the best
>> cryptographer, can create an algorithm that he himself can't break.
>> It's not even hard. What is hard is creating an algorithm that no
>> one else can break, even after years of analysis. And the only way
>> to prove that is to subject the algorithm to years of analysis by
>> the best cryptographers around."
>> _______________________________________________ The cryptography
>> mailing list cryptography at metzdowd.com
>> http://www.metzdowd.com/mailman/listinfo/cryptography
>
>


