[Cryptography] Why aren’t we using SSH for everything?

Nico Williams nico at cryptonector.com
Sun Jan 4 03:31:10 EST 2015


On Sat, Jan 03, 2015 at 07:02:20PM -0500, Jerry Leichter wrote:
> There are three widely-known protocols to provide authenticated/encrypted connections:
> 
> 1.  SSL/TLS.
> 2.  SSH.
> 3.  IPSEC.
> 
> It's interesting to compare the three.  In terms of programming
> interface, IPSEC is what you would design if you had a blank piece of
> paper and the goal of allowing any network-enabled program to work
> completely transparently.  This is how we generally add new
> functionality while retaining old code.

IPsec[*] effectively has no programming interface.  One might think
that's the correct design (security should be invisible), but since all
IPsec sees from the application is IP addresses (ah, there is an
interface), there's NO relation to the entities that applications
generally want to authenticate.  Which means that IPsec can't provide
meaningful authentication.

IF the sockets APIs dealt in higher-layer/level identities (and
credentials), THEN IPsec could be useful.  But there's no
AF_SERVICE_NAME and so on.

(I hear some implementors want to use DANE to automagically configure
IPsec on the fly, which is a neat idea, but it's a patch, not a design.
The local recursive resolver would do the work of detecting IPsec
applicability and modifying IPsec configuration.)

As it is, IPsec is even worse than I paint it above, because IPsec isn't
even aware of logical packet flows.  If you connect() (assume TCP here)
to a service using an IP address and port number, each packet for that
TCP connection might be exchanged with a different peer[**] each time,
even though they all [appear to] have the same IP address!

That didn't have to be so, but no one bothered to implement APIs between
TCP (and UDP) and IPsec.  It could have been done, though; see RFC 5660
(which came too late).

And it's even worse still: there's often no way for the application to
know whether IPsec protection is available or who the peer is (even
assuming it couldn't change), nor any way to request IPsec protection,
for that matter.  There are few exceptions to this (e.g., the
Solaris/Illumos IP_SEC_OPT socket option).

IPsec *could* have been the "what you would design if you had a blank
piece of paper", but such a design would have to include real APIs that
deal in higher-layer/level identities and credentials, and that deal in
packet flows.  Today, for IPsec, it's too late to fix this.

[*] That's the proper capitalization.  _I_ don't care about that, but
    others very much do, fyi and fwiw.

[**] Because IPsec deals in IP addresses at the ESP level, and because
     that's so obnoxious and difficult to deal with (since a lot of
     nodes renumber often but seldom change _names_), we often see IPsec
     authorization rules like "any peer with a certificate from that CA
     can claim any IP address from these CIDR blocks".

> SSL/TLS is at the far extreme:  It presents a network programming
> model entirely unlike the TCP stream-oriented model.  That doesn't

The very name indicates that the intention was to have an API that looks
like the native sockets.  Various things get in the way of that, and it
took a long time to get to where TLS APIs look like that, but that seems
to have been the intention, in which case: what's the problem?  Aside
from never quite having gotten there, that is.

Mind you, an API that looks like sockets but deals in *names* could use
IPsec, TCPINC, TLS, or something else -- that choice wouldn't be so
critical, because the key is dealing in *names*, because that's what
people deal with.

> make it bad, it just means you can't take, say, FTP and layer it over
> SSL/TLS without significant rework.  (Again, the issue isn't whether

It's been done.  For HTTPS, for example, it was trivial (protocol-wise).
For protocols where we used the StartTLS pattern it was a bit harder,
but not that much.

> that's a good idea.  If you don't like FTP, pick something else.)  No
> surprise here - SSL/TLS didn't start off with the goal of supporting
> "secure connections" in the abstract, it started off with the goal of
> providing secure HTTP connections.  On the other hand, there are

"Secure Socket Layer".  That sounds like it was meant to evoke a
SOCK_SECURE_STREAM (and DGRAM for that matter), or SOCK_* with a socket
option for "secure", or even no socket option (it should always be
secure).

The main difficulty is that we'd have needed new AFs: AF_SERVER_NAME,
AF_SERVICE_NAME, AF_USER_NAME, and maybe others, as well as socket
options for pointing the system at one's private/secret key credentials
(or passwords, or...).  None of that exists.

Also, fitting the native OS file descriptor/handle I/O system basically
meant putting TLS in "the kernel", really, in the native OS.  But that
didn't happen either (though there have been a few "kssl" kernel
modules), at least for the post-authentication phase.

> libraries for SSL/TLS that you can build against - though they've
> traditionally been very complex and difficult to use.

Yes, that they are.  Especially when it comes to naming, because the
WebPKI has had so many problems (the use of X.500 naming, for example;
the lack of naming constraints on CAs, for another).

The octet stream part is utterly trivial by comparison to the naming
issues.

> SSH also started off with a specific goal:  Providing remote terminal
> connections.  But the semantics of remote terminal connections is
> [...]

And multiplexing for X11 display forwarding and so on.  It turns out
that requires flow control at two layers of the stack, and that doesn't
work so well.

> IPSEC, of course, suffered from various design-by-standards-committee
> problems, leading to huge complexity underneath what could have been a

And the opposite too.  Since there were no suitable APIs and no one
cared to standardize them...

> very simple interface.  Early VPN's relied on IPSEC, but ran into all

Yes, IPsec is useful mostly for VPNs, not end-to-end security.

> [...]
> 
> Encryption and authentication have to "just be there" using standard

That's easy.  Authentication is hard because naming is hard.

> techniques, not something you need to put in using a special,
> complicated procedure.  This was definitely the idea in IPSEC - and,
> BTW, would have also been consistent with the design approach of the
> ISO network protocols.  Two good germs of ideas completely spoiled by
> the standards process.

Is that so?  APIs for CLNP, IIRC, dealt with network addresses, not
names, which means they (the ISO network protocols) were going to have
the same failures.  It's not just the protocol, it's the APIs.

The naming schemes that sucked least (DNS and name@domain RFC 822-style
addressing) didn't come in until very late in the picture of IP and ISO
development.  Everybody missed that boat because the sockets APIs were
designed in an era of rather small networks, and this screw-up (hiding
higher-layer naming from the OS) got baked in, persisted, and will
continue to be part of our lives for many, many more years.  Because
compatibility.  Because handling DNS and such in "the kernel" is "hard"
-- actually, it's not: just upcall to a user-land service, a la
microkernel, which is exactly what NFSv4 implementations have done for
15 years now; Windows 2000 (and before it, NT4) got this right for SMB
as well.

> (Me, I miss DECnet in VMS:  Network support was built into the file
> syntax and the file system.  The file syntax included a place for a
> username and password.  Authentication was done by the OS when
> receiving the connection attempt, not re-invented by every program.
> This was way too early to support encryption, so would have needed
> years of development for the modern era - but a much better base than
> what we ended up with.  Plan 9 ran with similar ideas in a Unix-like
> context; there were probably others.)

I think you're saying roughly what I'm saying: the OS and the network
security protocols need to be more closely integrated, particularly as
to naming ("file syntax" -> names that the OS sees, yes?).

If this is what we want, we need to push for standardized network APIs
that deal with names.  That means: we must implement for an open source
OS, push on vendors of proprietary OSes, and/or push on the Open Group.
And then we'll have to push on app devs.  It's going to take many years.
Meanwhile people are already budgeting for TLS 1.3 with the same
approach as before.  It seems like it's too late for the correct design,
no?

Perhaps the TCPINC folks will give us the right APIs.  I sure hope so.

My apologies for the length of my response.  There's a lot to say on
this topic.

Nico
-- 

