[Cryptography] Which big-name ciphers have been broken in living memory?

james hughes hughejp at mac.com
Sat Aug 16 16:21:03 EDT 2014


> On Aug 16, 2014, at 6:16 AM, ianG <iang at iang.org> wrote:
> 
> On 16/08/2014 02:52 am, james hughes wrote:
>> On 15 Aug 2014 11:37 +0100, from iang at iang.org (ianG):
>>> Thanks for the update!  I'm still waiting for someone to report on which big-name algorithm got broken in living memory.
>> 
>> 
>> My definition of "big-name algorithm got broken” is: algorithms that were broadly deployed and then deprecated because they no longer provide the expected security. On “the web,” in my living memory... 
>> 
>> 56 bit DES
>> 512 bit RSA
>> RC4
>> MD5
>> SHA1
> 
> Yes, these are all deprecated.  Once deprecated, they live in the halls
> of fame as algorithms that served their purpose but are now marked as
> not to be used.
> 
> This is engineering, right?  Once the end of life is reached, we
> shouldn't be using them.  Right?

This is partly semantics. RC4, MD5, and SHA1 were “end of life" because people discovered flaws in (broke) the algorithms, not because of design-life limits (as with DES and 512-bit RSA).

“Fortifying algorithms” has been discussed before. When DES was teetering, the reluctance of the govies to replace DES or relax the key-length limit led to DESX, which proved to be an improvement, but not a panacea. The real result was the AES competition.  
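For anyone who hasn't seen it, the whitening idea behind DESX is simple enough to sketch: DESX_{k,k1,k2}(M) = k2 XOR DES_k(M XOR k1). Below is a minimal Python sketch of that structure; the inner DES call is replaced by a toy stand-in permutation (so this illustrates the construction only, and is in no way a secure cipher):

```python
# Sketch of the DESX key-whitening construction (Rivest):
#   DESX_{k,k1,k2}(M) = k2 XOR DES_k(M XOR k1)
# The inner DES call is a toy stand-in here; a real implementation
# would call an actual DES library at that point.

BLOCK = 8  # DES block size in bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for DES encryption (NOT secure): XOR with the key.
    return xor_bytes(block, (key * BLOCK)[:BLOCK])

def desx_encrypt(k: bytes, k1: bytes, k2: bytes, m: bytes) -> bytes:
    # Pre-whitening with k1, inner cipher under k, post-whitening with k2.
    return xor_bytes(k2, toy_block_cipher(k, xor_bytes(m, k1)))

def desx_decrypt(k: bytes, k1: bytes, k2: bytes, c: bytes) -> bytes:
    # The toy inner cipher is its own inverse; with real DES you would
    # call DES-decrypt here instead.
    return xor_bytes(toy_block_cipher(k, xor_bytes(c, k2)), k1)
```

The point of the construction is that the two whitening keys raise the cost of brute-force attacks on the 56-bit inner key without touching the DES internals at all.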

>>> you're probably better off focussing on the known roadkill not the zombies in hollywood movies.
>> 
>> Nice!!! “Zombie algorithms”? I think you have coined a great new term for these “undead algorithms”! 
> 
> lol...  OK, point.  So what is it about the zombie algorithms?  Why do
> they keep popping up?
> 
> Do we need NIST or IETF to put the dragonglass blade into them?  An RFC
> that lists deprecated algorithms, updated on a yearly basis?

NIST does this for banks and govies (and anyone else with a clue), but the IETF seems more stubborn (IMHO). 

> (That's a serious question, btw.  As far as I know, they don't have an
> answer to the overall question…)

<Hyperbole>Your grandmother has a Windows 3.1 machine running Netscape. Should she be banned from accessing her bank over the internet?</Hyperbole> Hard question. What is the bank's liability? I doubt anything… 

Choosing security over communications (secure or nothing) is a tough choice for many businesses. Standards organizations like the IETF seem to favor insecure communications over no communications at all (secure with insecure fallback). Implementors seem to say "caveat crypto" and push information-security awareness onto your grandmother. 
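The two policies contrasted above can be sketched in a few lines. This is an illustrative negotiation loop only; the suite names are made up for the example, not real cipher-suite identifiers:

```python
# Sketch of the two negotiation policies: a client that falls back to
# an insecure suite vs. one that refuses to connect at all.
# Suite names below are illustrative, not real cipher-suite IDs.

SECURE = ["AES128-GCM"]    # suites we actually trust
INSECURE = ["RC4-MD5"]     # deprecated "zombie" suites

def negotiate(server_offers, allow_fallback):
    # Prefer a secure suite if the server offers one.
    for suite in SECURE:
        if suite in server_offers:
            return suite
    # "Secure with insecure fallback": take a zombie suite if allowed.
    if allow_fallback:
        for suite in INSECURE:
            if suite in server_offers:
                return suite
    # "Secure or nothing": refuse the connection.
    return None
```

The fallback branch is exactly where the zombie algorithms keep shambling back in: as long as `allow_fallback` is true somewhere in the installed base, deprecation on paper doesn't kill them on the wire.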

I seem to remember some hack where the secure encrypting radios of some government agency could be made to fall back to “in the clear” with some clever interference. Is that a vulnerability or a feature? 

>> Yes, designing (or modifying) cryptographic algorithms should be done by those skilled in that art (of which I am not one). Taking an algorithm and doing random stuff to it while claiming to "strengthen" it, in many cases, has the opposite effect: for instance, randomizing the S-boxes in DES or changing the constants in AES. Sometimes even simple things like lengthening the key or increasing the rounds can make an algorithm weaker. 
>> 
>> Yes, buffer overflows and RND snafu are the gift that keeps on giving for many reasons… I also agree that the majority of “you” (me included) should focus on roadkill.
> 
> 
> ;-)  The point we should be making is that the one thing we can trust is
> the strength of the big-name algorithms.  They've never failed us,
> within their design parameters (including EOL).

<tin foil hat> That we know of ;-) </tin foil hat>

> Everything else has failed us.  But not the basic algorithms.

…because there is a literal army of extremely well-trained cryptographers watching over these algorithms. 

> iang
