[Cryptography] The Laws (was the principles) of secure information systems design
Kent Borg
kentborg at borg.org
Wed Jul 13 12:36:17 EDT 2016
Uh, oh! You invite me to rant about one of my pet peeves by including a
trigger-word pair of mine in your very title: "systems" and "secure".
It seems to me that no one is ever honest about system security,
starting by not being honest with themselves.
The problem is that you might design a *perfectly* secure system, yet
attackers can still break in. How? By shifting system boundaries. We
might say this started with social engineering and has progressed to
spyware that sniffs passwords from users' computers.
So where is the pervasive dishonesty I casually assert? In not
specifying system boundaries, and further, in not offering any analysis
of what risks exist around the details of those boundaries.
Many of the laws in your list hint around this, but I think it needs to
be explicit. You have to understand what is being secured before you can
secure it. And as you define the "what" that you are defending, I think
you also need to defend and explain your choices: why do you consider some
aspects inside your system but not others? And this needs to be clear to
the users of the system. (And if the users are naive consumers, the
problem is particularly difficult and, I think, important. Consider
password managers and the false sense of security that is due to explode
in lots of users' faces...)
A specific example. I have a strange sense of fun, and a few years ago,
just for fun, I implemented an encrypted Python dictionary (intended to
be used on top of persistent storage, such as Python's shelve, or maybe
a cloud service). And it was secure!
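For flavor, here is a toy sketch of the basic shape--not my actual code,
just the idea, with names made up on the spot, using the third-party
cryptography package's Fernet on top of shelve. Values get encrypted
before they ever reach the backing store; keys come later.

import shelve
from cryptography.fernet import Fernet

class EncryptedShelf:
    """Dict-like wrapper: the backing store only ever sees ciphertext values."""

    def __init__(self, path, fernet_key):
        self._store = shelve.open(path)
        self._fernet = Fernet(fernet_key)

    def __setitem__(self, key, value):
        # Encrypt the value; the shelve file never sees the plaintext.
        self._store[key] = self._fernet.encrypt(value.encode())

    def __getitem__(self, key):
        return self._fernet.decrypt(self._store[key]).decode()

    def close(self):
        self._store.close()

# usage:
#   k = Fernet.generate_key()          # the secret; keep it off that disk
#   db = EncryptedShelf("stash.db", k)
#   db["greeting"] = "hello"
#   db["greeting"]                     # -> "hello"; the file holds only ciphertext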
Except that, being very aware of Kerckhoffs's Principle (even if I can
neither pronounce nor remember the name), I tried hard to examine what I
had done, and wrote up a technical analysis for myself. A significant
part of that was specifying vulnerabilities. Because even if I had made
no mistakes, there were still weaknesses! I don't remember all the
issues, but the obvious one off the top of my head is essentially
traffic analysis. I included padding parameters so this could be
mitigated, somewhat, and that was worth discussing. I forget what else I
found, but in a key/value dictionary I didn't just encrypt the values, I
also encrypted the keys, and doing that while supporting all the things
a Python dictionary can do is tricky; it involved tradeoffs, and those
tradeoffs are important to understand. (I do remember I didn't get as
far as a multiuser version--which layering on top of a cloud service
begs for--but that is where things get seriously difficult.
#missioncreep #workinprogress)
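Again, a toy sketch rather than my actual code, just to show the shape
of those tradeoffs. To keep lookups working, the stored key has to be
deterministic; here I cheat and use a keyed HMAC rather than encryption
proper, which means the store learns when two entries share a key (and
the original keys can't be read back out of it for iteration). Padding
values up to a bucket size blunts, but does not eliminate, the length
leakage. The names and the BUCKET size are invented for the example.

import hmac
import hashlib
from cryptography.fernet import Fernet

BUCKET = 256  # pad plaintext values up to a multiple of this many bytes

class EncryptedDict:
    def __init__(self, mac_key, fernet_key, store=None):
        self._mac_key = mac_key                 # bytes, separate from the Fernet key
        self._fernet = Fernet(fernet_key)
        self._store = {} if store is None else store  # a dict, a shelve, a cloud KV...

    def _blind(self, key):
        # Deterministic, so the same key always maps to the same token and
        # lookups keep working. Tradeoff: key equality is visible to the store.
        return hmac.new(self._mac_key, key.encode(), hashlib.sha256).hexdigest()

    def _pad(self, data):
        # Length-prefix, then pad to the bucket size, so the store learns only
        # a coarse size rather than the exact plaintext length.
        framed = len(data).to_bytes(4, "big") + data
        return framed + b"\x00" * ((-len(framed)) % BUCKET)

    def _unpad(self, framed):
        n = int.from_bytes(framed[:4], "big")
        return framed[4:4 + n]

    def __setitem__(self, key, value):
        self._store[self._blind(key)] = self._fernet.encrypt(self._pad(value.encode()))

    def __getitem__(self, key):
        return self._unpad(self._fernet.decrypt(self._store[self._blind(key)])).decode()

Supporting the rest of the dictionary interface (iteration, len,
deletion, and so on) on top of something like this is exactly where the
tricky parts and the tradeoffs pile up.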
Unfortunately, the idea of specifying vulnerabilities--in a system that
doesn't *have* any "known vulnerabilities"--isn't a common practice that
I could copy; rather, it just seemed like good engineering to me,
something I had to invent for myself.
I assume this is not really my invention, and that others have done it
far better than I have, but why is it so damn obscure? It shouldn't be!
Maybe include a law that says designers and users have to understand the
"secure system's" boundaries.
Okay, I'll shut up now. Thank you for your patience.
-kb, the Boston-area Kent who also happens to be out of work...if the
curious want to peek at his resume, they are cordially invited:
http://www.borg.org/~kentborg/kentborg-resume-long-2016-06-24.pdf