choosing building blocks, was Re: general defensive crypto coding principles

Travis H. solinym at gmail.com
Mon Feb 13 12:18:17 EST 2006


On 2/13/06, Peter Gutmann <pgut001 at cs.auckland.ac.nz> wrote:
> >I would expect that typically implementors would be following a published
> >standard, which would (well, one would hope) have had expert cryptographers
> >check it over sometime prior to publication

Published standards aren't immune to errors, and quite frequently they
don't even explicitly specify what or whom they're protecting against.
Using one incorrectly or inappropriately presents the same pitfalls as
designing one de novo.  There's also an absence of easily accessible
information describing the various protocols: their design parameters,
their constraints, their threat models, their dependencies, their
availability, their patent status, their licensing status, their
history of flaws, etc.  I know this has been asked for before,
whenever someone got the standard "reuse a published protocol" answer.

For example, there's a popular program called stunnel which uses
openssl to secure connections.  It is ostensibly a shim for protecting
cleartext protocols such as POP.  However, unless it does significant
length padding or in some other way (maybe compression?) decouples the
length of the messages from the length of the ciphertext
transmissions, you can still probably derive a lot of information from
a version identification, from the length of the human-readable
strings which follow the ^[0-9][0-9][0-9] machine-readable responses,
and from knowing the standard order of interaction.  From browsing the
online documentation, I find stunnel basically saying "it's really
just openssl", and from browsing the openssl documentation, I can't
figure out whether it does any length padding at all.  I think it's
unrealistic to have to read source code to find out this kind of
information.
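
To make the concern concrete, here's a toy Python sketch of the
traffic analysis I have in mind (the overhead constant and the
candidate response strings are assumptions for illustration, not
measurements of stunnel or openssl):

    # If the tunnel adds no length padding, ciphertext length is roughly
    # plaintext length plus a fixed per-record overhead, so an eavesdropper
    # can match observed record sizes against a dictionary of plausible
    # server responses.
    OVERHEAD = 21  # assumed header + MAC overhead for a stream ciphersuite

    KNOWN_RESPONSES = [
        b"+OK POP3 server ready\r\n",
        b"+OK 2 messages (320 octets)\r\n",
        b"-ERR authentication failed\r\n",
    ]

    def candidates(record_len):
        """Known plaintexts whose length matches an observed record."""
        return [r for r in KNOWN_RESPONSES if len(r) + OVERHEAD == record_len]

    if __name__ == "__main__":
        observed = len(b"-ERR authentication failed\r\n") + OVERHEAD
        print(candidates(observed))  # length alone narrows it to one string

The three candidates here happen to have distinct lengths, so the
match is unique; padding every record to a fixed size would collapse
them all into one bucket.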

I think there's a noteworthy level of skill between being able to
design a secure block cipher (what I call a cryptologist) and being a
newbie.  Someone with those intermediate skills can probably cobble
together existing building blocks into a decent protocol.  They
should, however, do the homework on the various protocols and attacks,
publish their protocol (to this list?) before implementing it, run a
finite-state analysis against it with the standard assumptions as a
sanity check, keep up to date on the weaknesses of any building blocks
they use, and maybe hire an expert cryptanalyst to try to break it (of
course, he would probably prefer to design it himself, but the premise
here is that that won't happen).

Doing this is not exactly easy -- I had a hard time finding any
descriptions of protocols for 2-party mutual authentication in my
limited literature several years ago when I did the crypto and
networking for a distributed HIDS.  I ended up factoring one of the
parties out of (i.e. merging two parties in) a 3-party authentication
algorithm published in Applied Cryptography.  Speaking of which,
there's an error in the 2nd edition, 5th printing, on p. 61, in the
Neumann-Stubblebine protocol, step (3) -- the text is correct but the
symbolic notation should read:
E_A(B, R_A, K, T_B), E_B(A, K, T_B), R_B
I have verified this against the original paper, and the error is
obvious if you think about what's going on.  I sent a correction to
the email black hole that is Schneier.

I only know of a few attacks strictly on protocols (replay, version
rollback, reflection, MITM/chess grandmaster), and I think all are
easily derived from some simple rules in a finite-state analysis
(attacker can replay, attacker can observe, attacker can modify,
attacker can impersonate, &c.).  If I am mistaken, please illuminate
me.
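
For the curious, here's a minimal Python sketch of what I mean by
deriving the attacks from simple rules: attacker capabilities
(observe, replay/inject anything already known) become the transition
rules of a state search, and an "attack" is any reachable state where
the responder accepts a message the honest party never sent in the
current run.  The toy protocol here (a bare authenticator, with or
without a fresh challenge) is an assumption for illustration, not any
published protocol:

    from collections import deque

    def find_replay(uses_fresh_challenge):
        """BFS over attacker states; True if a replay attack is reachable."""
        old_nonce, new_nonce = 0, 1
        # Rule "attacker can observe": messages recorded from a prior run.
        knowledge = frozenset({("auth", old_nonce)})
        # What the responder will accept in the new run.
        expected = ("auth", new_nonce if uses_fresh_challenge else old_nonce)

        start = (knowledge, False)          # (attacker knowledge, accepted?)
        queue, seen = deque([start]), {start}
        while queue:
            known, accepted = queue.popleft()
            if accepted:
                return True                 # responder accepted a replayed message
            # Rule "attacker can replay": inject any message it knows.
            for msg in known:
                nxt = (known, msg == expected)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    if __name__ == "__main__":
        print(find_replay(uses_fresh_challenge=False))  # True: replay found
        print(find_replay(uses_fresh_challenge=True))   # False: nonce blocks it

A real tool would also grow the knowledge set as the attacker observes
new traffic, and would model modification and impersonation the same
way, but the principle is the same.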

Speaking of which, this makes me want to write such a set of wiki
pages somewhere.  So if anyone would kindly send me a list of
protocols and algorithms they'd like to see covered, I'll compile it
and maybe fill in some stuff on a wiki with it.  I'm sure it would be
a useful learning exercise, as well as a public service.  I'm mostly
interested in illustrating modern protocol details (e.g. SSL v3,
SSHv2, ISAKMP, WEP, WPA, WPA2, IEEE 802.1x?, Photuris?), reviewing
libraries (e.g. openssl, cryptlib), and describing the use of APIs
(GSSAPI, SASL), but I'm also interested in the strength of primitives
(AES, SHA1, etc.) as defined by recent attacks.

> It also defends against the MD5 crack, and is one of the recommended
> IETF solutions to hash problems.

Cool :)  Another idea I had was to uniquify the hashes using some sort
of machine-specific key to prevent them from being broken on a
different machine, though it'd still need to be stored on disk across
reboots.  With PK you could create a keypair, one key for encrypting
(making a password hash) and one for decrypting (validating a password
hash), and then you'd only have to protect one or the other.  Perhaps
making password hashes could be done offline.  Another way to slow
down brute force is to require a search for a correct input to the
password-verifying algorithm (for example, don't prepend the whole IV,
i.e. the salt, to the hash).  A cracker would have to exhaustively
test the input space for every incorrect guess at the password,
whereas checking a valid password would require half that amount of
computation on average, ignoring collisions.
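
Here's a minimal sketch of that last idea (hedged: this illustrates
the concept only; the hidden-bit count and hash choice are
assumptions, and a real scheme would want a slow KDF underneath):

    import hashlib, os

    HIDDEN_BITS = 12  # assumed size of the withheld part of the salt

    def make_hash(password):
        public_salt = os.urandom(16)
        hidden = int.from_bytes(os.urandom(2), "big") % (1 << HIDDEN_BITS)
        digest = hashlib.sha256(
            public_salt + hidden.to_bytes(2, "big") + password).hexdigest()
        # Store only the public salt and digest; 'hidden' is discarded.
        return public_salt, digest

    def check_password(password, public_salt, digest):
        # A valid password matches after 2**(HIDDEN_BITS-1) tries on
        # average; every *wrong* guess costs the full 2**HIDDEN_BITS
        # hashes before it can be rejected.
        for hidden in range(1 << HIDDEN_BITS):
            h = hashlib.sha256(
                public_salt + hidden.to_bytes(2, "big") + password).hexdigest()
            if h == digest:
                return True
        return False

    if __name__ == "__main__":
        salt, d = make_hash(b"hunter2")
        print(check_password(b"hunter2", salt, d))  # True
        print(check_password(b"wrong", salt, d))    # False, after full search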
--
"Cryptography is nothing more than a mathematical framework for discussing
various paranoid delusions." -- Don Alvarez
http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


