Possibly questionable security decisions in DNS root management

Steven Bellovin smb at cs.columbia.edu
Mon Oct 19 18:35:22 EDT 2009


On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:

>> Even plain DSA would be much more space efficient on the signature
>> side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
>> 1024 bit RSA key, and the signatures would be half the size. And NIST
>> allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
>> is really that important.
>
> DSA was (designed to be) full of covert channels.

The evidence that it was an intentional design feature is, to my  
knowledge, slim.  More relevant to this case is why it matters: what  
information is someone trying to smuggle out via the DNS?  Remember  
that DNS records are (in principle) signed offline; servers are  
signing *records*, not responses.  In other words, it's more like a  
certificate model than the TLS model.
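
For concreteness, here's a toy sketch (Python, with made-up miniature
parameters, nothing like real DSA domain parameters) of the kind of
"subliminal channel" being alluded to: the signer chooses the
per-signature nonce k, so it can keep regenerating k until the public
r value encodes a couple of secret bits, and the signature still
verifies normally.

    import secrets

    # Toy values only: p, q prime, q divides p-1, g has order q mod p.
    p, q, g = 607, 101, 64
    x = 57                  # signer's private key
    y = pow(g, x, p)        # signer's public key

    def sign_leaking(h, leak_bits):
        """Return a valid DSA-style signature (r, s) on hash value h
        whose public r component carries two covert bits in r mod 4."""
        while True:
            k = secrets.randbelow(q - 1) + 1
            r = pow(g, k, p) % q
            if r == 0 or r % 4 != leak_bits:
                continue             # regenerate k until r leaks the bits
            s = (pow(k, -1, q) * (h + x * r)) % q
            if s != 0:
                return r, s

    def verify(h, r, s):
        w = pow(s, -1, q)
        u1, u2 = (h * w) % q, (r * w) % q
        return ((pow(g, u1, p) * pow(y, u2, p)) % p) % q == r

    covert = 0b10                    # two bits smuggled per signature
    r, s = sign_leaking(h=42, leak_bits=covert)
    assert verify(42, r, s)          # looks like an ordinary signature
    print("covert bits read off r:", bin(r % 4))

Whether anyone could usefully exploit that in offline-signed DNS
records is, of course, exactly the question raised above.
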
>
>> Given that they are attempting to optimize for minimal packet size,
>> the choice of RSA for signatures actually seems quite bizarre.
>
> It's more bizarre than you think.  But packet size just isn't that big
> a deal.  The root only has to sign a small number of records -- just
> two or three for each top level domain -- and the average client is
> going to use .com, .org, their own country, and a few others.  Each
> of these records is cached on the client side, with a very long
> timeout (e.g. at least a day).  So the total extra data transfer for
> RSA (versus other) keys won't be either huge or frequent.  DNS traffic
> is still a tiny fraction of overall Internet traffic.  We now have
> many dozens of root servers, scattered all over the world, and if the
> traffic rises, we can easily make more by linear replication.  DNS
> *scales*, which is why we're still using it, relatively unchanged,
> after more than 30 years.

It's rather more complicated than that.  The issue isn't bandwidth per  
se, at least not as compared with total Internet bandwidth.  Bandwidth  
out of a root server site may be another matter.  Btw, the DNS as  
designed 25 years ago would not scale to today's load.  There was a  
crucial design mistake: DNS packets were limited to 512 bytes.  As a  
result, there are tens or hundreds of millions of machines that read *only*
512 bytes.  That in turn means that there can be at most 13 root  
servers.  More precisely, there can be at most 13 root names and IP  
addresses.  (We could possibly have one or two more if there were just
one name that pointed to many addresses, but that would complicate  
debugging the DNS.)  The DNS is working today because of anycasting;  
many -- most?  all? -- of the 13 IP addresses exist at many points in  
the Internet, and depend on routing system magic to avoid problems.   
At that, anycasting works much better for UDP than for TCP, because it  
will fail utterly if some packets in a conversation go to one  
instantiation and others go elsewhere.
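
As a rough illustration of where the limit of 13 comes from, here's a
back-of-envelope sketch in Python (approximate wire-format sizes,
assuming the a.root-servers.net through m.root-servers.net naming
scheme and full name compression) that adds up the size of a priming
response:

    HEADER = 12                   # fixed DNS header
    QUESTION = 1 + 2 + 2          # root name (one zero byte) + QTYPE + QCLASS
    RR_FIXED = 2 + 2 + 4 + 2      # TYPE + CLASS + TTL + RDLENGTH
    FIRST_NS = 1 + RR_FIXED + 20  # owner "." + "a.root-servers.net." in full
    LATER_NS = 1 + RR_FIXED + 4   # one-character label + compression pointer
    GLUE_A = 2 + RR_FIXED + 4     # compressed owner name + IPv4 address

    def priming_size(n_servers):
        answer = FIRST_NS + (n_servers - 1) * LATER_NS
        glue = n_servers * GLUE_A
        return HEADER + QUESTION + answer + glue

    for n in (13, 14):
        print(n, "root servers ->", priming_size(n), "bytes")
    # 13 servers come to roughly 436 bytes, leaving only modest headroom
    # under 512; one or two more could be squeezed in, as noted above,
    # but that's it -- and IPv6 glue only makes matters worse.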

It is possible to have larger packets, but only if there is prior  
negotiation via something called EDNS0.  At that, you still *really*  
want to stay below 1500 bytes, the Ethernet MTU.  If you exceed that,  
you get fragmentation, which hurts reliability.  But whatever the  
negotiated maximum DNS response size, if the data exceeds that value  
the server will say "response truncated; ask me via TCP".  That, in  
turn, will cause massive problems.  Many hosts won't do TCP properly  
and many firewalls are incorrectly configured to reject DNS over TCP.   
Those problems could, in principle, be fixed.  But TCP requires a
3-way handshake to set up the connection, then a 2-packet exchange for
the data and response (more if the response won't fit in a single  
packet), plus another 3 packets to tear down the connection.  It also  
requires a lot of state -- and hence kernel memory -- on the server.   
There are also reclamation issues if the TCP connection stops -- but  
isn't torn down -- in just the proper way (where the server is in
FIN-WAIT-2 state), which in turn might happen if the routing system
happens to direct some anycast packets elsewhere.
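
Here's a minimal sketch of the client-side dance just described,
assuming the third-party dnspython package (the resolver address is
only a placeholder):

    import dns.flags
    import dns.message
    import dns.query

    def lookup(qname, rdtype, server, payload=1232):
        # EDNS0 lets the client advertise a receive buffer larger than
        # the pre-EDNS0 default of 512 bytes.
        query = dns.message.make_query(qname, rdtype, use_edns=0,
                                       payload=payload, want_dnssec=True)
        response = dns.query.udp(query, server, timeout=3.0)
        if response.flags & dns.flags.TC:
            # "Response truncated; ask me via TCP."
            response = dns.query.tcp(query, server, timeout=3.0)
        return response

    # Example: the root DNSKEY RRset, one of the larger signed responses.
    resp = lookup(".", "DNSKEY", "8.8.8.8")
    print(len(resp.to_wire()), "bytes in the response")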

To sum up: there really are reasons why it's important to keep DNS  
responses small.  I suspect we'll have to move towards elliptic curve  
at some point, though there are patent issues (or perhaps patent FUD;  
I have no idea) there.
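
To make the size argument concrete, here's a sketch assuming the
pyca/cryptography package (the numbers, not the particular API, are
the point):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import (ec, ed25519,
                                                            padding, rsa)

    data = b"example. 86400 IN NS a.iana-servers.net."

    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    rsa_sig = rsa_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

    ec_key = ec.generate_private_key(ec.SECP256R1())
    ec_sig = ec_key.sign(data, ec.ECDSA(hashes.SHA256()))  # DER-encoded here;
                                                           # DNSSEC uses raw
                                                           # 64-byte r||s
    ed_key = ed25519.Ed25519PrivateKey.generate()
    ed_sig = ed_key.sign(data)

    print("RSA-2048:   ", len(rsa_sig), "bytes per signature")  # 256
    print("ECDSA P-256:", len(ec_sig), "bytes (DER)")           # ~70
    print("Ed25519:    ", len(ed_sig), "bytes")                 # 64

A 2048-bit RSA signature is 256 bytes on the wire; the elliptic-curve
signatures are roughly a quarter of that, which is what makes staying
under the fragmentation and truncation thresholds so much easier.
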
>
> The bizarre part is that the DNS Security standards had gotten pretty
> well defined a decade ago,

Actually, no; the design then was wrong.  It looked ok from the crypto  
side, but there were subtle points in the DNS design that weren't  
handled properly.  I'll skip the whole saga, but it wasn't until RFC  
4033-4035 came out, in March 2005, that the specs were correct.  There  
are still privacy concerns about parts of DNSSEC.

> when one or more high-up people in the IETF
> decided that "no standard that requires the use of Jim Bidzos's
> monopoly crypto algorithm is ever going to be approved on my watch".
> Jim had just pissed off one too many people, in his role as CEO of RSA
> Data Security and the second most hated guy in crypto.  (NSA export
> controls was the first reason you couldn't put decent crypto into your
> product; Bidzos's patent, and the way he licensed it, was the second.)
> This IESG prejudice against RSA went so deep that it didn't matter
> that we had a free license from RSA to use the algorithm for DNS, that
> the whole patent would expire in just three years, that we'd gotten
> export permission for it, and had working code that implemented it.
> So the standard got sent back to the beginning and redone to deal with
> the complications of deployed servers and records with varying
> algorithm availability (and to make DSA the "officially mandatory"
> algorithm).
> Which took another 5 or 10 years.

There was some of that animosity, but I don't think it was the whole  
story.
>
> RSA was the obvious choice because it was (and is) believed that if
> you can break it, you can factor large numbers (which mathematicians
> have been trying to do for hundreds of years).  No other algorithm
> available at the time came with such a high pedigree.  As far as I
> know, none still does.  And if we were going to go to the trouble of
> rewiring the whole world's DNS for security at all, we wanted real
> security, not pasted-on crap security.
>
> The DNSSEC RSA RFC says:
>
>    For interoperability, the RSA key size is limited to 4096 bits.  For
>    particularly critical applications, implementors are encouraged to
>    consider the range of available algorithms and key sizes.
>
> That's standard-speak for "don't use the shortest possible keys all
> the time, idiot".  Yes, using 1024-bit keys is lunacy -- but of course
> we're talking about Verisign/NSI here, the gold standard in crap
> security.  The root's DNSSEC operational procedures should be designed
> so that ICANN does all the signing, though the lord knows that ICANN
> is even less trustworthy than NSI.  But at least we know what kind of
> larceny ICANN is into, and it's a straightforward squeeze for their
> own lavish benefit, forcibly paid by every domain owner; it doesn't
> involve promising security and not delivering it.
>
> Even using keys that have a round number of bits is foolish, in my
> opinion.  If you were going to use about 2**11th bits, why not 2240
> bits, or 2320 bits, instead of 2048?  Your software already handles
> 2240 bits if it can handle 2048, and it's only a tiny bit slower and
> larger -- but a 2048-bit RSA cracker won't crack your 2240-bit key.
> If this crypto community was serious about resistance to RSA key
> factoring, the most popular key generation software would be picking
> key sizes *at random* within a wide range beyond the number of bits
> demanded for application security.  That way, there'd be no "sweet
> spots" at 1024 or 2048.  As it is today, if NSA (or any major country,
> organized crime group, or civil rights nonprofit) built an RSA key
> cracker, more than 50% of the RSA keys in use would fall prey to a
> cracker that ONLY handled 1024-bit keys.  It's probably more like
> 80-90%, actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is
> just plain stupid on our (the defenders') part; it's easy to automate
> the fix.


That's an interesting assumption, but is it true?  In particular, is  
it really that useful to tune a cracking engine to exactly one
modulus size?  A maximum size, sure, but do the cracking engines  
really have trouble with slightly smaller moduli?
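
Whatever the answer, automating the fix Gilmore proposes really is
easy.  A sketch, again assuming the pyca/cryptography package (whether
a given toolkit accepts arbitrary modulus lengths is worth checking;
the numbers here are purely illustrative):

    import secrets

    from cryptography.hazmat.primitives.asymmetric import rsa

    def generate_odd_sized_key(floor_bits=2048, slack_bits=512):
        # Pick the modulus length at random in [floor, floor + slack],
        # rounded to a multiple of 16, so there is no single "sweet
        # spot" size for a dedicated cracker to target.
        key_size = floor_bits + 16 * secrets.randbelow(slack_bits // 16 + 1)
        return key_size, rsa.generate_private_key(public_exponent=65537,
                                                  key_size=key_size)

    bits, key = generate_odd_sized_key()
    print("generated an RSA key with a", bits, "bit modulus")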

		--Steve Bellovin, http://www.cs.columbia.edu/~smb




