[Cryptography] Cheap forensic recorder

Kevin W. Wall kevin.w.wall at gmail.com
Sun Mar 1 17:28:58 EST 2015

On Sun, Mar 1, 2015 at 9:43 AM, Phillip Hallam-Baker
<phill at hallambaker.com> wrote:
> On Sun, Mar 1, 2015 at 9:02 AM, Emin Gün Sirer <el33th4x0r at gmail.com> wrote:
>> On Sun, Mar 1, 2015 at 8:52 AM, Phillip Hallam-Baker
>> <phill at hallambaker.com> wrote:
>>> If I am using a Raspberry Pi with a clean O/S install from a source I
>>> have checked the thumbprint of, I have a lot more control than is possible
>>> using a TPM.
>>> The TPM is really designed to solve a different set of problems to do
>>> with running the system for a long period of time.
>> The TPM (and similar technologies) gives you attestation to binary images,
>> which in turn ensures that the images you think you're running are indeed
>> the images your CPU is executing.
>> For instance, if the OS on which you prepared the Raspberry image was
>> compromised, your tools could be storing an adulterated OS image and
>> reporting a different hash, and you'd be none the wiser. Reading Ken
>> Thompson's "Trusting Trust" paper might be a good idea for anyone involved
>> in systems that stand up to forensic scrutiny. Your current setup is not
>> very secure: the TCB is very large because you transitively trust countless
>> systems; the integrity of your findings depends on the provenance and
>> integrity of every single system that went into preparing the final boot
>> image. The TPM, and similar hardware measures, can enable you to perform an
>> end-to-end check, rooted in the hardware root of trust provided by the
>> hardware.
> This is rhetoric, not argument.
> Yes, I have read the Thompson paper and like much of the UNIX security work
> I find an obsession with issues of mostly academic importance.

Academic or not, we are starting to see some of Thompson's arguments
approaching reality, so I agree that we can no longer simply dismiss them.

However, in your (PHB's) case, I am not sure that the attestation chain of
trust that Emin is arguing for will help all that much. (Although it
almost certainly will not hurt things either.)
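For readers following the attestation argument, the core TPM mechanism being
discussed is simple to state: each boot stage hashes the next component and
"extends" a Platform Configuration Register, so the final PCR value commits to
the entire boot chain. A minimal sketch of that extend semantics (SHA-256 and
the example stage names are illustrative, not any particular platform's layout):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measured component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Simulated boot chain: each stage is measured before it executes.
boot_chain = [b"firmware image", b"bootloader image", b"kernel image"]

pcr = b"\x00" * 32  # PCRs start zeroed at power-on
for stage in boot_chain:
    pcr = pcr_extend(pcr, stage)

# A verifier holding known-good images recomputes the same chain;
# a single tampered stage changes the final PCR value.
expected = b"\x00" * 32
for stage in [b"firmware image", b"bootloader image", b"kernel image"]:
    expected = pcr_extend(expected, stage)

assert pcr == expected  # matches only if every stage is unmodified
```

The point of the chained construction is that a later stage cannot "un-extend"
an earlier measurement, which is what roots the check in hardware rather than
in software that could lie about its own hash.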

Let's consider malicious hard drive firmware. For the attestation chain to
cover it, the HD firmware needs to somehow be confirmed as the original
firmware as installed by the manufacturer. In most cases, I think it is
doubtful that this could be confirmed, because one can't actually 'get at'
the hard drive firmware during the boot sequence to check the attestation
chain. (Unlike other firmware for things like NICs, it is not loaded as
part of the device drivers from /lib/firmware or wherever.) So what do you
do then? Run random I/O tests until you are satisfied that the hard drive's
firmware is working as expected? That hardly provides attestation, and it
leaves many questions unanswered, such as how much attempted verification
is enough (knowing that it would always need to be balanced against boot
time).
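To make the contrast concrete: for host-loaded firmware (the /lib/firmware
case above), "verification" can be as simple as hashing the blob the OS is
about to hand to the device and comparing against a reference digest. A drive's
on-board firmware offers no trustworthy read-back path, so there is nothing
equivalent to hash. A hypothetical sketch of the easy case (the file path and
any reference digest would be assumptions, not a real vendor scheme):

```python
import hashlib

def firmware_digest(path: str) -> str:
    """SHA-256 of a firmware blob that the host OS can actually read."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large firmware images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Usable for host-loaded firmware, e.g.:
#   firmware_digest("/lib/firmware/example-nic.bin") == vendor_reference_sha256
# ("example-nic.bin" and vendor_reference_sha256 are hypothetical.)
# No analogous check exists for firmware resident on the drive itself, since
# the drive controls what any "read firmware" command returns.
```

That last comment is the crux of Kevin's objection: a malicious drive can
simply report the digest of the clean firmware while running something else.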

The other thing that Emin was perhaps hinting at, malware in the firmware
of the hard drive, I think is an issue, but not so much for the field
system gathering the forensic evidence. The important thing in forensics
is having a verifiable chain-of-evidence, and if past history is any
guide, courts are willing to accept weaker forms of this, such as
eyewitness testimony from the court, law enforcement, or opposing
counsel, or perhaps videotaping followed by turning the forensics
collection system over to court authorities. While such chain-of-evidence
/ chain-of-custody can be challenged by the defendant's attorney(s), the
team performing the forensics is not the one on trial, so judges are
likely to control at least some aspects of what constitutes a reasonable
challenge. And one would think this could be simplified by getting
opposing counsel to agree beforehand on what is and is not acceptable to
them in terms of evidence gathering.
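One common technical way to make such a chain-of-custody log tamper-evident
is to hash-chain its entries, so that altering any earlier record invalidates
every later one. A minimal sketch, with an entry format I made up for
illustration (this is not any court-mandated or standardized scheme):

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a custody record, chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)  # deterministic serialization
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "record": record, "hash": entry_hash})

def verify_log(log: list) -> bool:
    """Recompute the chain; any edited entry breaks all later hashes."""
    prev = "0" * 64
    for e in log:
        body = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, {"who": "examiner", "action": "imaged drive"})
append_entry(log, {"who": "courier", "action": "transferred to court"})
assert verify_log(log)

log[0]["record"]["action"] = "something else"  # tamper with an early entry
assert not verify_log(log)
```

Of course, as the paragraph above notes, what a court will actually accept is
a legal question; a construction like this only helps the technical half of
the argument.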

But I see a bigger forensics problem looming on the horizon. Probably not
now, but eventually. Should malicious hard drive firmware become
commonplace enough (which, given organized crime, now seems inevitable),
it is going to make the prosecution's job in some cases much more
difficult. For instance, consider a case where a hard drive is being
examined for evidence of child pornography. There are cases that have
already been overturned because the defense showed that the accused's PC
was infected with malware and that said malware was being used to load /
swap / sell child porn. If we are now unable to trust the disk drive
firmware because it *may* be malicious, it seems to me that the defense
could use that as a possible argument, and it could really raise the bar
for the prosecution. (In fact, one could claim that the hard drive had
malicious firmware on it that loaded some child porn and then restored
the manufacturer's original firmware.) And this is not just going to
present issues for child pornography cases, but also for cases where the
defendant raises such malware claims against non-repudiation, such as
with digitally signed contracts, etc.

> I remember back when folk were doing all the work on cryptographic elections
> in the 1990s and the concern was to protect confidentiality. People really
> didn't get my assertion that what I was interested in was the ability to
> audit the election.
> The concern here is to be able to provide the exact same environment to
> opposing counsel so they can examine it.

I take it you are trying to do that without simply allowing the
opposing counsel to just have your Raspberry Pi box, but rather allow
them to duplicate an identical environment by which they can reproduce
your forensic results? If that is the case, then having a TPM and
attestation seems like it would not be a major benefit unless the opposing
counsel came up with different results than your findings. How often
does that happen?

> A TPM really does not help very much because it is a sealed box that the
> whole security of the system depends on and I can't audit it. I certainly
> can't expect to explain it to counsel.

Certainly it should be possible to explain it to them in the abstract
though, would it not? I would think that they do not need to understand
all the cryptographic details.  There may be a time for that if their
expert witness disagrees with your testimony, but I wouldn't think that
would be a common occurrence.

> It is really easy to throw the 'transitive trust' problem, the TPM is just
> as vulnerable.

Well, for agencies like the NSA it probably is, but I'd expect that for
most situations that particular attack vector is outside of the considered
threat model.

Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.
