[Cryptography] Heartbleed and fundamental crypto programming practices

Sampo Syreeni decoy at iki.fi
Sun Apr 13 18:51:15 EDT 2014


On 2014-04-13, John Gilmore wrote:

>> Or is there something fundamental about the congestion issue that 
>> stops UDP being usable under any circumstances?
>
> No. BitTorrent was rehosted on UDP, using its own delay-sensitive end 
> to end transmission protocol, and it works fine (and better than when 
> it was using TCP). RTP (VoIP) also works pretty well on UDP.

What you probably meant to say is that UDP doesn't stop you from 
building congestion control into a higher protocol layer, but at the 
same time it has none of its own. As long as congestion control is 
handled as an end-to-end business, TCP -- when you can trust people to 
use it -- has it by design and UDP does not, so by definition you can't 
contribute to a wholesale congestion collapse *if* you use TCP. Using 
UDP you can, but you still don't have to. BitTorrent's µTP/LEDBAT is 
extra benign by design even compared to TCP, and the various mechanisms 
built into the various RTP applications are currently pretty 
nicely-behaved too, but with no real guarantees given by the base RTP 
framework per se.

So, it's basically bad logic to talk about UDP or even TCP wholesale. 
You have to look at which precise congestion mitigation measures are 
taken, over the whole protocol stack, and how the stacks interact with 
each other in the wild. The yardstick is of course the newest, most 
sophisticated incarnation of TCP, even if via LEDBAT-style 
delay-sensitive protocol work we already know TCP isn't too nice 
either.

> Not to mention the classic use of UDP, DNS.

That's a bit tricky already. By its nature DNS is an embarrassingly 
parallel, datagram-based request-response lookup protocol, and even now 
there are no real, standardized mechanisms to make that sort of thing 
immune to congestion collapse. The only thing which keeps even the 
age-old DNS-over-UDP from becoming a nuisance is that it's a relatively 
rarely used protocol with lots of caching of its own. It simply eats so 
little bandwidth compared to everything else that it pretty much never 
leads to marked congestion issues.

But in conditions of true congestion, and the network instability that 
comes with it, DNS's most common UDP incarnation would not react 
gracefully even now. The classier *implementations* of the protocol, 
such as BIND, do try to take care of the problem, e.g. by going into 
exponential back-off, and sometimes even by trying to restrict the 
total uplink protocol traffic from a single node. But such measures are 
not a binding part of the standard, and so if you happen to have an 
application in your hands (many P2P ones come to mind) which queries 
lots of disparate nameservers, many of them dynamic with short cache 
renewal periods, then under a bandwidth squeeze quite a number of DNS 
clients are quite willing to conspire towards a collapse. Even to shut 
themselves off into an online equivalent of a live-lock, mediated by 
their caches.
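The back-off the better implementations do amounts to something like 
the following sketch. To be clear, this is a generic illustration of 
exponential back-off with jitter, not BIND's actual retry logic; the 
base timeout, cap and try count are made-up values:

```python
import random

def retry_schedule(base_timeout=1.0, cap=30.0, max_tries=6):
    """Yield successive retry timeouts: exponential back-off with jitter.

    base_timeout, cap and max_tries are illustrative values, not taken
    from any real resolver's configuration.
    """
    timeout = base_timeout
    for _ in range(max_tries):
        # Full jitter keeps a crowd of clients from retrying in lockstep,
        # which is exactly the conspiracy-towards-collapse failure mode.
        yield random.uniform(0, timeout)
        timeout = min(timeout * 2, cap)

for t in retry_schedule():
    print(round(t, 2))
```

The point is that each failed query makes the client *less* eager, so a 
congested path sees geometrically decreasing query pressure instead of 
a synchronized retry storm.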

> The very short summary is that TCP only throttles back after a packet 
> is dropped, but in the last 20 years everyone has been adding RAM 
> buffers to routers so that they would never drop a packet.

But also things like RED, which, via packet loss, signals flows going 
through a router to throttle back beforehand, more or less fairly. 
Combined with that sort of thing, long buffers work pretty well in the 
core of the network, even if today's high bandwidths, and thus 
inversely low queuing delays, make such processing less necessary.
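The textbook RED mechanism fits in a dozen lines. This is the classic 
Floyd-Jacobson scheme in toy form; the thresholds and weight are 
illustrative defaults, not tuned for any real link:

```python
class RedQueue:
    """Random Early Detection drop decision, textbook form.

    min_th, max_th, max_p and weight are illustrative values only.
    """
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0  # EWMA of the instantaneous queue length

    def drop_probability(self, queue_len):
        # Smooth the queue length so short bursts don't trigger drops.
        self.avg += self.weight * (queue_len - self.avg)
        if self.avg < self.min_th:
            return 0.0  # queue short: never drop
        if self.avg >= self.max_th:
            return 1.0  # queue persistently long: always drop
        # In between, drop with probability rising linearly to max_p,
        # so bigger flows statistically get signaled more often.
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
```

Because the early drops hit flows roughly in proportion to their share 
of the queue, the loss signal lands on the heavy senders first, which 
is where the "more or less fairly" comes from.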

So what really fucks TCP up is an intermediate buffer which only serves 
a single endnode, and whose inputs as such cannot be assumed to follow 
any of the typical statistical distributions queuing theory relies upon. 
And which also has to serve a mixed load of bulk and real time traffic, 
with bimodal packet size and bandwidth distributions, with no more help 
from the endnode than your core router would have. Which also has to be 
exceedingly low cost.

That is, your cable modem. The core routing infrastructure can nowadays 
deal pretty well even with straight TCP, and even the kind where the 
application protocol wants to do real time. What really messes things 
up is the blind, massive, middlebox-like queue they build into your 
uplink, in order to run such weird multiple access protocols on your 
local cable as DOCSIS, without losing advertisable total steady-state 
bandwidth guarantees. When you do that, especially on the cheap, you'll 
inevitably end up with a huge, badly managed queue, and a higher-level 
fix which tries *very* hard to avoid that queue being there at all. 
That is what the delay-sensitive protocols -- in the absence of any 
signaling except perchance the emerging ECN notification, not really 
accessible to middleboxes either -- like the BitTorrent one (µTP) and 
its IETF-standardized variant (LEDBAT) in fact try to do. Both in 
practice and in the original design rationale.

> The right cure is to fix TCP so it throttles back when it notices 
> packet delay, but nobody's doing that because fixed-TCP would perform 
> "worse" on single connections than unfixed-TCP, so instead they're 
> moving off TCP to their own protocols that do the same.

So, that isn't really a problem with TCP at all. There is absolutely no 
reason why you couldn't do both TCP-style loss-driven and µTP/LEDBAT 
style delay-governed congestion avoidance. In fact, the BitTorrent 
derived work does just *that*: in order to guarantee TCP-friendliness, 
it falls back to additive increase and multiplicative decrease upon 
packet loss. Essentially it implements the core TCP congestion control 
algorithm at a higher protocol layer, over UDP, just to be sure (and it 
had better, too, because not one of the µTP derivatives I know of has 
been proven fair, even statistically speaking, amongst themselves, 
unlike TCP itself).
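The combination can be sketched in a few lines. This is a toy model of 
a LEDBAT-like control loop, assuming the nominal 100 ms target queuing 
delay from RFC 6817; the gain, initial window and window units are 
made up for illustration:

```python
TARGET = 0.100  # nominal target queuing delay, seconds (per RFC 6817)
GAIN = 1.0      # illustrative gain; real stacks scale this per RTT

class LedbatWindow:
    """Toy delay-governed congestion window with a TCP-like loss fallback."""
    def __init__(self):
        self.cwnd = 10.0        # congestion window, in packets
        self.base_delay = None  # lowest one-way delay seen so far

    def on_ack(self, one_way_delay):
        # Queuing delay = measured delay minus the smallest delay observed;
        # the minimum stands in for the empty-queue propagation delay.
        if self.base_delay is None or one_way_delay < self.base_delay:
            self.base_delay = one_way_delay
        queuing_delay = one_way_delay - self.base_delay
        # Grow while under the target, shrink while over it,
        # proportionally to how far off target we are.
        off_target = (TARGET - queuing_delay) / TARGET
        self.cwnd = max(1.0, self.cwnd + GAIN * off_target / self.cwnd)

    def on_loss(self):
        # TCP-friendliness fallback: multiplicative decrease, as TCP does.
        self.cwnd = max(1.0, self.cwnd / 2)
```

The delay path keeps the modem's queue near empty in steady state, 
while the loss path guarantees the flow yields at least as hard as TCP 
would when packets actually start dropping.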

And as it usually goes, those same ideas have already been adapted for 
the TCP stack itself. That's what the TCP Vegas incarnation of the 
congestion control loop is all about. Lots of research has followed 
from it, and the code is already in the Linux kernel. It isn't quite as 
aggressive as the LEDBAT variant, and it certainly relies too much on 
classical in-TCP RTT measurement rather than torrent-like explicit, 
high-accuracy timestamps. But it's rather certainly far less aggressive 
than any of the earlier incarnations of TCP, is already entrenched, 
works like a charm, and relies primarily on RTT instead of loss 
(timestamps work even better when the channel is asymmetric, as it is 
with that nasty upband buffer I already mentioned, so that is a 
qualitative difference too).
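Vegas's RTT-based loop reduces to comparing expected and achieved 
throughput once per RTT. A textbook sketch of the Brakmo-Peterson 
algorithm, with the conventional alpha/beta thresholds in packets:

```python
ALPHA, BETA = 2.0, 4.0  # conventional Vegas thresholds, in packets

def vegas_update(cwnd, base_rtt, rtt):
    """One Vegas window adjustment per RTT, in packets.

    diff estimates how many packets this flow itself keeps queued in
    the network: expected throughput (cwnd / base_rtt) minus achieved
    throughput (cwnd / rtt), scaled back to packets by base_rtt.
    """
    diff = (cwnd / base_rtt - cwnd / rtt) * base_rtt
    if diff < ALPHA:
        return cwnd + 1  # little self-queuing: probe for more bandwidth
    if diff > BETA:
        return cwnd - 1  # building a queue: back off before any loss
    return cwnd          # in the sweet spot: hold steady
```

Because the adjustment triggers on rising RTT rather than on drops, a 
Vegas-style sender starts yielding while the bottleneck buffer is still 
filling, which is the whole point against a bloated modem queue.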

So, you are right. There is a severe incentive compatibility problem 
here. However, against all odds, it has already been mostly fixed in 
practice. Even TCP has that problem, because it's E2E. Nothing in the 
routing fabric enforces the flow control or congestion avoidance 
features of even TCP, so if you want to go around them, you can. In 
early incarnations of TCP you could do that simply by subverting a 
couple of assumptions in the implied control loop. I dunno if anything 
like that could be done today, but in any case, multiple parallel 
connections over either TCP or UDP will always do the trick. That's 
where DDoS attacks come from, and why they are so difficult to defend 
against.

And yet, for the vast majority of situations, designing proper 
congestion avoidance into software your typical end user won't bother 
to change just works. Like TCP did, and so would many other protocols, 
for the most part. What mostly impacts such protocol development, then, 
isn't that the users have adverse incentives against deployment. That 
was already countermanded by the high cost of developing your own 
protocol stack de novo and of interworking with the wider community. 
The real problem is that those two economic problems work against *any* 
new protocol, be it good or bad. The same thing which stops you from 
being a bad guy also hinders you in Doing the Right Thing Better.
-- 
Sampo Syreeni, aka decoy - decoy at iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

