[Cryptography] practical verifiable systems -- forensic and otherwise, cheap and otherwise
jsd at av8n.com
Mon Mar 2 15:09:57 EST 2015
On 02/27/2015 09:01 AM, Phillip Hallam-Baker wrote:
> This approach lets me:
> * Use a machine in a verifiable known state
> * Control exactly what is on the machine
> But I would like to go a little further.
The "cheap forensic recorder" thread is far more
important than the subject line would suggest.
Let's not sell ourselves short. About 98% of
what has been suggested so far applies just fine
to a wide range of systems, forensic and otherwise,
cheap and otherwise.
The general question is, what can we do to facilitate
verifying the integrity of the system -- hardware,
firmware, software, etc. -- even in situations that
are somewhat adversarial. Litigation and forensics
are familiar adversarial situations, but certainly
not the only ones.
Here's another example dear to my heart: Vote-counting
equipment. Consider a setup where each voter goes to
the polling place, marks a paper ballot, and feeds it
into a scanner right there at the polling place. At
the end of the day, the scanner prints a tape with
the tally for that polling place. Then the tape, and
a duplicate tape, and the original ballots are sent
downtown. I'm leaving out a lot of details, but if
done right, this setup is vastly more secure than
an all-paper scheme or an all-electronic scheme.
This is an unabashedly adversarial situation ... and
existing practices use that to their advantage.
They don't even try to find nonpartisan poll workers;
instead they rely on bipartisan teams of poll workers.
Ideally each critical step gets vetted and signed off
by somebody from each major party.
Validating the scanner system is an important part
of the story. Many of the tactics suggested in the
"forensics" thread apply here also.
Ken Thompson wasn't wrong; he was only a few years
ahead of his time.
The central message is that people can mess with
your supply chain. To some people, that sounded like
tin-foil-hattery in 1984, but nowadays there is tons
of evidence that people *are* messing with your
supply chain:
-- NSA TAO intercepting and "tailoring" your hardware,
as documented by Citizen Snowden.
-- Trojans in the disk firmware, observed in the wild.
-- Multiple examples of people tampering with voting
equipment.
-- Superfish tampering with the TLS root of trust.
The Superfish example suggests that the amount of
money you need to offer a hardware vendor to get them
to betray their customers is very small. More than
30 pieces of silver, but not much more.
It is better to talk about /practical/ verifiable
systems, not just "cheap". Cheapness is a relative
thing. I have no doubt that seat belts and airbags
raise the price of a car. They don't solve all the
world's problems, but they solve some problems and
mitigate others. So, I don't want the cheapest
possible car; I am willing to pay a modest premium
for safety and security.
Again, cheapness is a relative thing. The threats
against the chain of evidence are very different
for a $13.00 shoplifting case and for a $13 billion
election.
Election security is a big deal. In the 2012 election,
each party spent more than a billion dollars. It is
common for candidates to spend more than $100.00 per
vote cast ... even on down-ballot races. Given that
a number of races were close, the dollar value of
flipping a few votes is just enormous ... far exceeding
the cost of the vote-counting machinery. So anybody
in their right mind would happily pay a substantial
premium for verifiable hardware, firmware, and software.
IMHO the zeroth order of business is securing the
hardware. One of my favorite sayings is
If you don't have physical security,
you don't have security.
For something like a forensic machine or a vote-
scanning machine, this starts with tamper-resistant
and tamper-evident enclosures. This is more of a
challenge than you might think. Seals don't do it.
At Argonne National Lab, the group that worries
about security of nuclear materials ran some tests
that showed the mean time to plan an attack to
bypass a security seal was 45 minutes, and the
mean time to carry out the attack was 5 minutes.
And that's if the security seals are properly used.
I've seen election materials "sealed" in such a
way that it was easier to bypass the seal than
not. I returned some used materials to the
downtown office, using the same /sealed/ containers
that the original materials came in, with the
same seals. Nobody noticed. I called their
attention to it. Nobody cared.
This also calls for storing stuff in vaults with
guards and video surveillance.
There may be a role for cut-and-choose protocols
here. If you need N machines, buy N+M machines,
and tear down some of them. Let the adversarial
parties decide which ones they want to tear down.
This creates some nonzero probability that tampering
will be detected. If you couple that with strict
penalties for getting caught (a bit of wishful
thinking), then you might accomplish something.
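The detection odds from such a teardown lottery are easy to
compute. Here is a minimal sketch; the machine counts are made
up for illustration, and the math is just the hypergeometric
"miss everything" probability turned around:

```python
from math import comb

def detection_probability(total, torn_down, tampered):
    """Probability that a random teardown of `torn_down` machines
    out of `total` catches at least one of the `tampered` ones."""
    clean = total - tampered
    if torn_down > clean:
        return 1.0  # not enough clean machines to hide behind
    # P(miss) = C(clean, torn_down) / C(total, torn_down)
    return 1 - comb(clean, torn_down) / comb(total, torn_down)

# e.g. buy 110 machines, tear down 10 chosen by the adversarial
# parties; suppose the attacker tampered with 5 of them:
print(round(detection_probability(110, 10, 5), 3))   # ≈ 0.385
```

A 38% chance of being caught per election, coupled with real
penalties, changes the attacker's economics considerably.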
IMHO the next step is securing the BIOS. Again
the logic is simple: If you can't trust the BIOS,
you can't trust anything else. Conversely, a
trusted BIOS can vet the other components. For
starters, it can demand a valid cryptographic signature
for BIOS updates. Similarly it can demand a valid
crypto sig on the software it reads from disk at
boot time. These things can be signed multiple
times, once by each of the interested parties.
This would make life noticeably more difficult for
anybody who wants to bugger the firmware in your
machine.
Open-source auditable BIOS implementations exist.
Some HP models already demand a signature for BIOS
updates ... to the annoyance of modders.
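A sketch of the multi-party sign-off idea. A real system would
use public-key signatures (e.g. Ed25519) with keys distributed
out of band; HMAC with hypothetical per-party keys stands in
here so the sketch runs with only the standard library:

```python
import hashlib
import hmac

# Hypothetical keys for the interested parties; in a real design
# these would be public keys, not shared secrets.
PARTY_KEYS = {
    "vendor":  b"vendor-secret",
    "party-A": b"party-A-secret",
    "party-B": b"party-B-secret",
}

def sign(party, image):
    """Stand-in for a real signature over the firmware image."""
    return hmac.new(PARTY_KEYS[party], image, hashlib.sha256).hexdigest()

def update_allowed(image, sigs):
    """Accept the update only if *every* interested party signed it."""
    return all(
        hmac.compare_digest(sigs.get(p, ""), sign(p, image))
        for p in PARTY_KEYS
    )

fw = b"BIOS image v2.1"
good = {p: sign(p, fw) for p in PARTY_KEYS}
print(update_allowed(fw, good))                     # all parties signed
bad = {**good, "party-B": "forged"}
print(update_allowed(fw, bad))                      # one bad signature
```

The point of requiring all signatures, rather than any one, is
that no single party can unilaterally push buggered firmware.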
Software RAID may help here. It makes it much
harder for the disk firmware to know what the bits
mean.
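A toy illustration of the striping effect, assuming RAID-0-style
interleaving (the stripe size and data are made up, and real
stripe units are kilobytes, not bytes):

```python
STRIPE = 4  # bytes per stripe unit; tiny, for illustration only

def stripe(data, ndisks=2):
    """RAID-0-style striping: deal fixed-size chunks round-robin
    across the disks, so no single disk holds contiguous data."""
    disks = [bytearray() for _ in range(ndisks)]
    for i in range(0, len(data), STRIPE):
        disks[(i // STRIPE) % ndisks] += data[i:i + STRIPE]
    return disks

d0, d1 = stripe(b"attack at dawn, hold the bridge.")
print(d0)  # each disk sees only interleaved fragments,
print(d1)  # never the whole record in order
```

Malicious firmware on any one disk would have to reconstruct the
interleaving before it could usefully target the data.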
VM technology helps a lot here. Each new VM guest
starts with a fresh copy of the software, from a
read-only file on the host. It's not hard to break
the MS Windows software running on the guest, but
it's a lot harder to break the VM model in such a
way that the guest can have /any/ lasting effect
on the host. The guest can't get anywhere near
the host BIOS, disk drivers, or anything like that.
This doesn't solve all the world's problems, but
it does dramatically reduce the attack surface ...
and adds a layer of hardness to the remaining surface.
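For instance, a hypothetical qemu invocation along these lines
boots the guest from a read-only master image and throws away
all guest writes on exit (filenames are made up):

```shell
# Host side: make the gold image read-only so the guest's disk
# traffic can never reach it.
chmod a-w gold.qcow2

# snapshot=on sends all guest writes to a temporary overlay that
# is discarded when the VM exits; -nic none removes the network
# from the guest's attack surface entirely.
qemu-system-x86_64 \
    -m 2048 \
    -drive file=gold.qcow2,format=qcow2,snapshot=on \
    -nic none
```

Every boot then starts from the same vetted bits, no matter what
happened during the previous session.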
As for printers, that's actually a harder problem,
but there are things we can do. Suppose the mfgr
doesn't want to release the firmware code. OK,
it's proprietary. Even so, they should publish
the HMAC of the firmware. There should be a printer
command to return the HMAC of the installed firmware,
for comparison to the published value. And then if
I tear down the machine and hash the firmware, I
should get the same answer, without relying on the
firmware to fink on itself.
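A sketch of the independent check. The discussion above mentions
an HMAC; a plain published SHA-256 digest serves the same
comparison role in this sketch, and all the names and firmware
strings here are hypothetical:

```python
import hashlib

# Assumed: the vendor publishes a digest for each firmware
# release, alongside the (possibly proprietary) binary.
PUBLISHED_SHA256 = hashlib.sha256(b"printer firmware v7.3").hexdigest()

def firmware_matches(blob, published=PUBLISHED_SHA256):
    """Compare an independently extracted firmware image (e.g.
    dumped from the flash chip during a teardown) against the
    vendor's published digest -- without trusting the firmware
    to report on itself."""
    return hashlib.sha256(blob).hexdigest() == published

print(firmware_matches(b"printer firmware v7.3"))   # genuine image
print(firmware_matches(b"tampered firmware"))       # modified image
```

The printer's self-reported digest is only a convenience check;
the teardown dump is what keeps the firmware honest.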
Also a teardown "might" detect the secret wifi
interface that I didn't pay for.
Last but not least, there are plenty of ways of
detecting radio transmissions. The hard part is
distinguishing unauthorized from authorized
transmissions. Still, if a printer that's not
supposed to have any wireless at all starts
transmitting, that is detectable.
See previous discussion of encrypted point-to-point
connections, basically virtual circuits. See also
previous discussion about stolen MAC addresses.
Hugh Daniel always said we should never rely on
firewalls. You can have one if you want, but think
of it like velvet ropes for a waiting line: It is
a guideline for people who are trying to cooperate,
but it is no barrier at all against any determined
attacker. We need point-to-point security. For
a multiuser machine, host-to-host is not even good
enough; it needs to be process-to-process.
Bottom line: Stuff that passed as "best current
practices" a couple of years ago must be considered
security malpractice today. We seriously need to
raise our game, not just for specialized forensic
systems, but for basically everything.