[Cryptography] Internet of things - can we secure it by going simple?

Jerry Leichter leichter at lrw.com
Fri Jan 6 06:35:06 EST 2017


> If we put no more software and complexity in these devices than they
> actually need, don't we suppose that they could be made secure?
> 
> Web interfaces on these devices only need to be able to understand eight
> or so different HTML tags.  Skip implementing the rest.  A device that
> never executes anything downloaded, ever, can't easily be recruited into
> a botnet. Command lines on these devices (if they expose command lines
> at all) should only understand three or four different commands.  They
> should not even have access to utilities like "ls" and "cat" and "mv"
> and "copy", let alone interpret pipes and things.  I mean, is there an
> explicit reason, for each and every one of those tools, why it's needed
> for device configuration?  Because device configuration is the only
> thing these command lines exist for.
> 
> If someone breaks into a thermostat and can install shell scripts on it
> - what the hell was a thermostat doing with a command shell capable of
> running scripts?  If someone can use it for a reflector to poke around
> your network - what the hell did a thermostat need with a repertoire of
> utilities like 'mount' and 'rlogin' and whatever else would get used to
> do that?...
You'd be running counter to a couple of decades of evolution and "best practice".  Don't reinvent - use what's out there.

Not so long ago, devices typically ran no OS at all - all the code running in the thing was a single program written for the specific hardware.  The code ranged from dreadful to excellent - but each device and each code base was pretty much unique.  No one would bother to take the time to put more in one of these things than was needed.  Since devices didn't talk to each other much, and certainly could not generally be reached remotely, security was not a big issue:  Undoubtedly there were many security bugs in there, but hardly anyone was in a position to exploit them.

Then small real-time kernels emerged and gained some traction - until Linux caught on (and also the BSD's, even if they don't get talked about much).  Hey, look, for free you can get a whole OS, written by skilled developers you don't have to pay, with toolchains and development environments and such you don't have to buy.  They run on all the hardware you care about.  Someone, somewhere, has probably already written a driver for that weird I/O chip the hardware guys spec'ed because it cost 3 cents less than the common competitor.  What's not to like?

There's an old joke from the era when workstations were first coming into wide use:  "The mainframe is shared with many people - I don't have to work off hours, my workstation is just as fast during the day!"  "Yes, but it's just as slow at night!"  Using a full OS makes your device "just as easy to program" as your workstation - and just as vulnerable.  Add lack of patching, and it's just as vulnerable as the workstation you had four (or many more) years ago.

Yes, implementing just a minimal subset of capabilities in OS's and protocols appropriate for use in embedded devices would go a long way toward making them more secure.  And there have been repeated attempts to do this over the years - at many layers of the software stack.  One example has been C subsets that leave out, or at least minimize, the more hazardous parts of the language.  BusyBox at least lets you fix any security issue once, rather than in every one of 20 or more basic shell commands.
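
BusyBox's trick is a "multi-call binary": many command names, one program, dispatched on the name it was invoked as, so a fix lands in one place instead of twenty.  A rough shell sketch of the dispatch (the `mini_applet` function and its applet names are illustrative, not BusyBox's actual code):

```shell
# Illustrative sketch of BusyBox-style multi-call dispatch.  In the
# real thing, every command is a symlink to one binary and the name
# arrives as $0; here the name is passed explicitly for clarity.
mini_applet() {
    name="$1"; shift
    case "$name" in
        echo) printf '%s\n' "$*" ;;        # minimal echo applet
        true) return 0 ;;                  # minimal true applet
        *) printf 'applet not found: %s\n' "$name" >&2; return 127 ;;
    esac
}

mini_applet echo hello world    # prints "hello world"
```

A real multi-call binary gets the applet name for free: `ln -s /bin/busybox /bin/ls` makes an invocation of `ls` run BusyBox's `ls` applet.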

But none of these has caught on.  You need to gather a sufficiently large group to buy into using and building and maintaining the thing, or it quickly dies.  Alternatively, you can try for a commercial effort - but making money in this area is just about impossible, since the free alternatives are considered "good enough":  Insecurity doesn't show up as a cost factor in most embedded systems.

In theory, you could "modularize" Linux so that someone wanting a cut-down implementation could pick and choose.  In practice ... this does get done to some degree, but the smallest practical Linux implementation is still full of features you'd rather not have.

Over in the datacenter world, containers are actually providing a way to build smaller minimal "installations" of a different sort - though they aim at solving an entirely different set of problems in a very different space.  I'm working on a system like this, and we run our containers with no root or other privileged account, only a handful of open ports, the minimum set of services we can get away with ... etc.
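
The "minimal installation" idea can be sketched as a container built from an empty base image - a hypothetical example, not the author's actual setup (the `thermostatd` binary name is made up):

```dockerfile
# Hypothetical minimal container: one static binary, no shell, no
# package manager, no root.  There is nothing extra to recruit.
FROM scratch
COPY thermostatd /thermostatd
USER 1000:1000
EXPOSE 8080
ENTRYPOINT ["/thermostatd"]
```

Everything an attacker would reach for - a shell, `mount`, `rlogin` - simply isn't in the image.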

This is the downside of free software:  Everyone uses the same thing, so they get the advantages of sharing.  But they also get the disadvantages of over-sharing and common failure modes - one failure mode being that features keep getting added, and once added, are almost impossible to remove.

The BSD's are probably closer to providing what you describe than Linux.  Since the OS inside of embedded devices isn't easily determined, it's difficult to compare the popularity of BSD- vs. Linux-based devices.  I have the feeling, based on no evidence, that BSD penetration is slowly declining.

And, of course, Linux vs. BSD vs. anything else at the OS level has no influence on the network protocols, which just keep getting bigger and hairier every year.   Since these are inherently there so that devices can talk to each other, no individual implementation can decide not to play the game.

We're not getting to the promised land.  We can't even see it from here.

                                                        -- Jerry


