[Cryptography] [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards
dj at deadhat.com
Wed May 13 16:46:07 EDT 2015
> On 05/12/2015 05:00 PM, dj at deadhat.com wrote:
>> Alas, the world isn't just the internet and smart cards. We are throwing
>> crypto on silicon as fast as we can to address the many threats to
>> computer hardware. No one block size is correct.
> Well, maybe...
> How about "The block size is exactly the same as the message
> size no matter what the message size happens to be?"
I'm coming from a place where many data sizes are fixed, e.g. data fields
on chip. But they might be 256 or 512 bits instead of 64 or 128, so my
world view may be different.
> I know, it's a laughable idea when talking about "lightweight"
> cryptography. I can't think of any way to actually do it that
> wouldn't take at least LogN times longer than a standard block
> cipher with fixed-length blocks. And the idea of encrypting a
> "stream" is right out unless you have a higher level of the
> protocol breaking the stream up into packets of known size,
> in which case you have a standard block cipher again.
The published lightweight ciphers are interesting in that their real
design feature is that they disperse and non-linearize more per gate than,
say, AES. In hardware this lets you do more with fewer gates. It doesn't
dictate the key or block size of the cipher at all. Yes, under the
lightweight title they limit themselves to small block and key sizes.
> But in a brute kind of way, it's very interesting for just
> plain freezing out most of the attack methodologies I'm aware of.
> No block boundaries inside the message, and every bit of the
> ciphertext depending on every bit of the plaintext, means
> entire classes of attacks just don't have anything to work with.
I would like such a thing to exist. Do you have an algorithm handy? The
closest thing I can think of is format-preserving encryption, like
Rogaway's Sometimes-Recurse Shuffle. That can work on arbitrary string
lengths.
> It's a random idea. It may have occurred to me due to lack
> of sleep, or because I'd been looking at C++ template code
> for a block cipher that takes block size as a parameter.
That would be nice, but hardware implementation parameters are usually
tied to the size required for the application. So probably more of a
software thing, which isn't my gig.
Orthogonally, I have been thinking of ciphers that take the number of
rounds as a parameter, then use that in protocol negotiation. If the
algorithm gets weak, increase the rounds. It beats undeletable cipher
options.
I spoke with a block cipher designer about this, and his argument against
it was that if you can run the same data and key through with different
numbers of rounds, it's trivial to break. However, I see this as just
another constraint, like 'never use the same key and IV twice': never use
the same key and iteration count twice.