[Cryptography] Gates are cheap. Should cipher design change?
ianG
iang at iang.org
Wed Mar 30 05:30:47 EDT 2016
On 29/03/2016 07:29 am, dj at deadhat.com wrote:
> Maybe back to the original point, we want ciphers with huge, and
> preferably arbitrary-size blocks. A guy from Cisco made essentially the
> same point in his talk at the NIST lightweight crypto workshop so it's not
> just me and the people I work with.
I'm not sure why others find this so, but the reason I find it so is
that every protocol I've looked at in recent times has been a datagram
protocol. By that I don't mean UDP at the transport layer, but that the
application at layer 7 has a requirement to send a particular packet of
known size across to the other party.
So much so that I've been scratching my head for about a decade now
trying to think of a pure stream oriented requirement and the only one I
can come up with is SSH - in terminal mode. But even that is more a
byte-datagram protocol, and it's "only" that because that's the
semantics of a Unix terminal as designed back in the 1970s. People who
are older than me will recall that before Unix established that
paradigm, terminals were typically datagram-oriented. And from a pure
security/tracking perspective, byte-by-byte SSH is a nuisance; we'd much
rather have a single datagram for the line that is typed.
E.g., Sound & Video - lossy datagrams. Logs - time-ordered append
stream of datagrams. Backups - set of datagrams.
Another aspect is that at least in my programming, although we use
streams in OO a lot, we always convert in and out of streams several
times, so the packet and its size are well impressed on the software.
Even up to a 100k photo from a phone, software is happy to work with a
single packet and not worry too much until post-mature optimisation time
comes, or it has to slice the packet down to UDP size for transmission.
(NB, I'm leaving out the reliability aspects of protocols in the above.)
> NIST/NSA public consumption ciphers
> seem to deliberately avoid large block sizes. The absence of 256 bit block
> sizes from AES and Simon/Speck is clearly deliberate. We want all the
> powers of 2 block sizes up to at least the size of the largest jumbo
> packets and the largest unit of disk block storage.
From a software complexity point of view, I'd actually say we want arbitrary sizes,
not even "powers of 2".
But we could work with powers of 2. There has been a recent shift
towards hiding length, as that can be used to figure out what part of
the protocol is being used.
One of the security protocols out there uses 1024-byte datagrams (I
forget its name). That's it. But this isn't totally perfect when it
comes to larger packets, because sending 10 x 1024 in 10ms reveals that
you are really sending a 10240-byte packet, which generates its own
signature. Now we're into conflicting desires: anti-trackability versus
preservation of bandwidth (== mobile airtime).
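As a rough sketch of the idea (names and the 2-byte length header here are my own invention, not from any particular protocol), padding everything up to a fixed 1024-byte datagram hides the exact payload length, but a large payload still shows up as a burst of datagrams:

```python
# Hypothetical sketch: pad application messages into fixed-size 1024-byte
# datagrams so length alone leaks less. The constants and function names
# are illustrative, not from any real protocol.

DATAGRAM_SIZE = 1024
HEADER = 2  # two bytes recording the true payload length of this datagram

def pad_message(payload: bytes) -> list[bytes]:
    """Split payload into fixed-size datagrams, zero-padding the last one."""
    body = DATAGRAM_SIZE - HEADER
    chunks = [payload[i:i + body] for i in range(0, len(payload), body)] or [b""]
    return [len(c).to_bytes(HEADER, "big") + c.ljust(body, b"\x00")
            for c in chunks]

# A 10240-byte payload still becomes a burst of 11 uniform datagrams --
# the very timing/count "signature" described above.
print(len(pad_message(b"x" * 10240)))  # prints 11
```

Note the bandwidth cost: a one-byte keystroke costs a full 1024 bytes on the wire, which is exactly the anti-trackability versus airtime trade-off.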
> Take a look at the lightweight ciphers which have various block sizes. You
> can plot the number of rounds against block size and see that the rounds
> per block-bit goes down as the block size increases. These algorithms get
> more efficient as the block size increases. There's no good reason not
> have larger block sizes.
>
> I found Rogaway's Sometimes Recursive Shuffle, to be an interesting
> direction in block size independent ciphers. I don't know where that's
> going to go. I have good uses for it though.
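The rounds-per-block-bit trend mentioned above can be eyeballed with a quick back-of-envelope using the round counts published for a few Simon (block size / key size) variants; treat the figures as illustrative:

```python
# Published round counts for selected Simon variants,
# keyed by (block size in bits, key size in bits).
simon_rounds = {(32, 64): 32, (64, 128): 44, (128, 128): 68, (128, 256): 72}

# Rounds per block-bit drops as the block size grows, i.e. the cipher
# gets cheaper per bit at larger block sizes.
for (block, key), rounds in sorted(simon_rounds.items()):
    print(f"Simon {block}/{key}: {rounds} rounds, "
          f"{rounds / block:.2f} rounds per block-bit")
```

Simon 32/64 costs 1.00 rounds per block-bit while Simon 128/256 costs about 0.56, which is the efficiency argument for larger blocks in a nutshell.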
As a future direction, SHA3's expanding block is heartening, including
its AE cipher mode. I can't recall its name but SHA3 is where I would
start today for a new generation crypto suite. One reason is the sponge
feature and dial-in size; another is that the suite covers the basic
symmetric family needs.
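The dial-in size is already visible in the standardised SHAKE extendable-output functions of the SHA3 family; e.g. with Python's stdlib you ask the same sponge for whatever digest length you want:

```python
import hashlib

# SHAKE256 is an extendable-output function from the SHA-3 family:
# one primitive, any output length you dial in.
x = hashlib.shake_256(b"new generation crypto suite")
print(x.hexdigest(16))  # 16 bytes of output
print(x.hexdigest(64))  # 64 bytes from the same sponge state
```

(The shorter output is a prefix of the longer one; both come from squeezing the same sponge.)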
> Friends don't let friends do crypto in software.
I have a different set of friends; I don't let them do crypto in
anything but their own code ;) In high level application space, packages are
a threat. Hardware is just another package.
iang