Hamiltonian path as protection against DOS.

Anne & Lynn Wheeler lynn at garlic.com
Wed Aug 16 15:45:46 EDT 2006

mikeiscool wrote:
> Could this sort of system be something that is implemented way before
> a HTTP connection even starts?
> Say, implemented by OS vendors or API vendors of sockets. That is to
> say, when you open a socket connection for the first time, for certain
> protocols, you need to pay this fee. The socket lib would be adjusted
> to do it, and then you are good to go.
> It would mean that other services get the benefit of protection. But
> is there legitimate need to connect to many, or one, host many thousands
> of times? I'd guess there is.
> Take the discussed handshakes. Could something be incorporated there?
> Maybe there could be a new low level protocol, kind of like SSL, but
> less cost involved ... then you could tell your server to operate in
> that mode only...

it can be considered from the standpoint that a lot of SSL use is 
effectively transaction oriented.

you start with reliable TCP session support ... which requires a minimum 
7-packet exchange. then you encapsulate all the HTTPS hand-shaking ... 
which could eventually reduce to a transaction packet exchange 
(as opposed to really requiring full session semantics).

in the late 80s, there was work on reliable XTP, a transaction oriented 
protocol that supported reliable internet operation with a minimum 
3-packet exchange (disclaimer: my wife and I were on the XTP technical 
advisory board).
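the two packet-exchange counts above can be sketched by simply enumerating one common accounting of each exchange (a sketch only; exact breakdowns vary, e.g. FINs can piggy-back on data packets):

```python
# Illustrative only: enumerate minimum packet exchanges described above.
# This is one common accounting that arrives at the 7-packet TCP figure
# (3-way open plus 4-packet close, data aside) vs. XTP's 3 packets.
tcp_min = [
    "SYN",        # client opens
    "SYN/ACK",    # server accepts
    "ACK",        # client completes the 3-way handshake
    "FIN",        # one side starts the close
    "ACK",        # peer acknowledges
    "FIN",        # peer closes its side
    "ACK",        # final acknowledgment
]

xtp_min = [
    "FIRST (request)",  # opens the context and carries the request
    "response",         # carries the reply and acknowledgment
    "close/ack",        # releases the context reliably
]

print(len(tcp_min), len(xtp_min))
```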

so a lot of the SSL stuff around ssl server certificates is validating 
that the server you think you are talking to is actually the server you 
are talking to .... by checking the domain name from the URL that you 
supposedly typed in against the domain name in the ssl server 
certificate. a big vulnerability was created when a lot of the merchant 
servers ... that were the original prime target for ssl server 
certificates ... backed away from using SSL for the whole web experience 
and reduced it to just the payment transaction. the problem then was 
that the supposedly typed in URL came from a button provided by the 
server ... and not actually typed in by the client. the ssl server 
process then became checking the domain name in the URL provided by the 
server against the domain name in the certificate provided by the server 
(totally subverting the original security assumptions). there are lots of 
past collected posts mentioning the ssl server certificate infrastructure

So the next part: somebody applies for an SSL server certificate 
.... which basically involves the certification authority checking the 
applicant-provided information against what is on file with the domain 
name infrastructure. there were some integrity issues with this 
information being hijacked/changed ... so the certification authority 
industry was backing a proposal that domain name owners register a 
public key (with the domain name infrastructure) along with the other 
information. Then all future communication would be digitally signed
(as countermeasure to various hijacking vulnerabilities).

the issue then is that certification authorities can also request that 
ssl server certificate applications be digitally signed. the 
certification authorities can then validate the digital signature against 
the public key on file with the domain name infrastructure (turning a 
time-consuming, error-prone, and expensive identification process into a 
much simpler, less-expensive, and more efficient authentication process).
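the validation step above might look like the following sketch. HMAC-SHA256 stands in for a real digital signature (stdlib Python has no public-key signatures), and the registry contents are purely illustrative:

```python
import hashlib
import hmac

# Stand-in sketch of the authentication step described above: the CA
# checks the signature on a certificate application against the key
# already on file with the domain name infrastructure.  HMAC is a
# symmetric stand-in for a real public-key signature -- not a real design.
onfile_keys = {"example.com": b"key-registered-with-dns-infrastructure"}

def sign(key: bytes, application: bytes) -> bytes:
    return hmac.new(key, application, hashlib.sha256).digest()

def ca_validate(domain: str, application: bytes, signature: bytes) -> bool:
    key = onfile_keys.get(domain)
    if key is None:
        return False
    return hmac.compare_digest(sign(key, application), signature)

app = b"ssl server certificate application for example.com"
sig = sign(onfile_keys["example.com"], app)
print(ca_validate("example.com", app, sig))           # signed application
print(ca_validate("example.com", app, b"\x00" * 32))  # forged signature
```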

note that the existing infrastructure has the trust root with the 
information on file with the domain name infrastructure (that has to be 
cross-checked for identification purposes). the change to registering a 
public key retains the domain name infrastructure as the trust root 
(while changing from an expensive identification operation to a much 
simpler authentication operation).

so, as a real SSL simplification: when the client contacts the domain 
name infrastructure to do the domain-name-to-ip-address translation, the 
domain name infrastructure can piggy-back the public key and any 
necessary ssl options on the ip-address reply.
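a hypothetical shape for such an augmented reply (all names and values here are mine, not part of any real resolver API):

```python
from dataclasses import dataclass, field

# Hypothetical augmented DNS reply, per the paragraph above: the
# name-to-address lookup also returns the server's registered public key
# and ssl options, so no separate certificate exchange is needed.
@dataclass
class ResolveReply:
    hostname: str
    ip_address: str
    public_key: bytes        # key registered with the domain name infrastructure
    ssl_options: dict = field(default_factory=dict)

def resolve(hostname: str) -> ResolveReply:
    # Stand-in for a real resolver; values below are illustrative.
    return ResolveReply(
        hostname=hostname,
        ip_address="192.0.2.10",
        public_key=b"server-public-key-bytes",
        ssl_options={"cipher": "example", "transaction_mode": True},
    )

reply = resolve("example.com")
print(reply.ip_address, reply.ssl_options["transaction_mode"])
```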

the client then composes an XTP transaction (with its minimum 3-packet 
exchange for reliable operation) that has an "SSL" packet structure. the 
client generates a random transaction key, encrypts the communication 
with that randomly generated key, encrypts the random key with the 
server's public key ... and sends off the encrypted random key together 
with the encrypted communication.

for a purely transaction operation, there is a minimum (XTP) 3-packet 
exchange between client and server. however, if more data is involved, 
then as many packets as necessary are transmitted. I've suggested this 
design numerous times in the past.
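the hybrid pattern in the transaction above (random session key, key wrapped under the server's public key) can be sketched as follows. stdlib Python has no public-key primitives, so a SHA-256 counter-mode XOR keystream stands in for a real cipher, and "wrapping" the key is the same toy operation standing in for RSA. this is a structure sketch only, not real cryptography:

```python
import hashlib
import secrets

# Toy stream cipher: XOR data against SHA-256(key || counter) blocks.
# Stand-in for a real symmetric cipher -- do NOT use for real crypto.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# client side: random transaction key, encrypt payload, wrap the key.
# server_secret stands in for the server's key pair (real design: RSA).
server_secret = b"stand-in-for-server-key-pair"
session_key = secrets.token_bytes(32)
payload = b"BUY 100 WIDGETS"
message = (keystream_xor(server_secret, session_key)   # "wrapped" key
           + keystream_xor(session_key, payload))      # encrypted data

# server side: unwrap the session key, then decrypt the payload
recovered_key = keystream_xor(server_secret, message[:32])
plaintext = keystream_xor(recovered_key, message[32:])
print(plaintext)
```

in the sketched design this whole message would ride in the first XTP packet, with the reply and reliable close filling out the 3-packet exchange.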

as an aside, I've pointed out before that in the mid-90s, as webserver 
activity was increasing, a lot of platforms experienced severe throughput 
degradation with the HTTP transaction-style use of TCP. Most platforms 
had a highly inefficient session-close implementation built around 
scanning the FINWAIT list ... the assumption was that most session 
activity involved relatively infrequent session open/close. The HTTP 
transaction activity violated those TCP activity assumptions ... and for 
a period of time you found platforms spending over 95 percent of their 
processor utilization dealing with the FINWAIT list.
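a rough cost model of why that linear scan collapses under HTTP-style churn (the counts and rates below are illustrative, not measurements):

```python
# Illustrative cost model: if every session close scans a linear FINWAIT
# list, total work grows with (closes x list length) -- quadratic overall
# -- while a constant-time lookup structure grows only linearly.
# Counts are comparisons, not measured time.

def linear_scan_cost(n_closes: int, list_len: int) -> int:
    # each close walks, on average, half the FINWAIT list
    return n_closes * (list_len // 2)

def hashed_cost(n_closes: int) -> int:
    # each close does a constant-time lookup
    return n_closes * 1

closes_per_sec = 10_000   # HTTP-style open/close churn (illustrative)
finwait_len = 10_000      # entries lingering in the FINWAIT state

print(linear_scan_cost(closes_per_sec, finwait_len))  # 50,000,000 comparisons
print(hashed_cost(closes_per_sec))                    # 10,000 comparisons
```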

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com
