[Cryptography] Compiler optimization side channel

Henry Baker hbaker1 at pipeline.com
Sat Aug 24 13:10:00 EDT 2024


See below...
-----Original Message-----
From: Bertrand Mollinier Toublet <crypto-metzdowd at bmt-online.org>
Sent: Aug 23, 2024 5:30 PM
To: Phillip Hallam-Baker <phill at hallambaker.com>, Cryptography Mailing List <cryptography at metzdowd.com>
Subject: Re: [Cryptography] Compiler optimization side channel

> On Aug 23, 2024, at 11:06 AM, Phillip Hallam-Baker wrote:
>
> So, I was looking at some RSA key generation code, just because, and I suddenly had this thought: I have all these registers here with values that can be used to factor the modulus. So I had better zero them out before I leave.
>
[snip]
>
> Where Erase (this ref BigInteger x) is an extension method that reaches into the structure and erases the value _bits.
>
>
> Which looked all well and good. But then I thought to myself, what if the compiler gets hold of that data and decides to optimize the arithmetic creating temporary variables that it leaves hanging about?
>
> I don't think the C# compiler can do that quite yet because there isn't a way to declare the BigInteger classes meet the laws of the arithmetic operators. So even though I can write a = b+c-b, the compiler can't optimize that away to a = b.
>
> But not too far-fetched to think it might one day in the future.
>
If I’m not mistaken, countless pixels have been blackened (whitened, I assume, for those using dark mode) on this very topic since… time immemorial? I believe it is understood that on a general-purpose CPU, the C and C++ compilers will aggressively attempt to optimize away any “side-effect free” attempt at zeroing memory storage. (This is me also acknowledging I can’t speak for the C# compiler, though my experience with the “lower-level” C and C++ compilers makes me generally wary on the whole topic.)

As in: when you generate an RSA key with OpenSSL on your Windows/Mac/Linux box, all kinds of traces of that computation are likely left around after the fact.
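For what it's worth, the standard mitigation for dead-store elimination is to make each store an observable side effect that the optimizer is not allowed to delete. A minimal sketch in Rust, in the spirit of what the `zeroize` crate does (the `zeroize` function below is my own illustrative name; `write_volatile` and `compiler_fence` are the real std primitives):

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

// Overwrite a buffer with volatile stores. Unlike a plain loop (or
// memset), volatile writes count as observable side effects, so the
// compiler must emit them even though the buffer is never read again.
fn zeroize(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { ptr::write_volatile(b, 0) };
    }
    // Keep the compiler from reordering later code before the wipe.
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut key = *b"not a real RSA prime, honest";
    // ... use `key` in some cryptographic computation ...
    zeroize(&mut key);
    assert!(key.iter().all(|&b| b == 0));
    println!("key wiped");
}
```

Note that even this only cleans the buffer you point it at -- registers, spill slots, and compiler-created temporaries remain out of reach, which is exactly the worry raised above.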

It is only in better controlled environments (special purpose processors along with potentially purpose-modified compilers and/or direct assembly coding) that there is any hope to “clean up” after oneself after a cryptographic operation of any kind.
--
Bertrand

---
[hbaker]
The situation is far worse than most computer people realize.

1. Standard software *type systems* can't handle secret stuff.

Some type systems were proposed ~50 years ago to deal with
"levels" of secrecy -- analogous to "ring" levels -- but it was
quickly realized that those type systems required so many
"escapes" that they were virtually useless.

Somewhat later (~1990), people (including me) were advocating
type systems that included so-called "linear types" (essentially
types whose reference count is always exactly 1), making
it impossible to surreptitiously "copy" the object. Linear
types are analogous to physical objects that cannot (within
the context of a program) be created or destroyed. This is a
serious limitation, since an object of linear type cannot be
passed as an argument to a function unless that object is
subsequently returned as one of the return values from the
function. This restriction also places great demands upon any
*exception* mechanism, which must also respect the linearity
of linear objects.
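The pass-and-return discipline can be sketched with Rust's move semantics (my own hypothetical example; `Secret` and `step` are illustrative names): ownership moves into the callee, and the only way the caller keeps the secret alive is to receive it back as part of the return value.

```rust
// A secret value that is moved, never copied (no Clone/Copy derives).
struct Secret(Vec<u8>);

// Ownership of `s` moves into the function. To let the caller keep
// using the secret, the function must thread it back out alongside
// its real result -- the restriction described above.
fn step(s: Secret) -> (Secret, usize) {
    let n = s.0.len();
    (s, n) // the secret is handed back, never duplicated
}

fn main() {
    let s = Secret(vec![1, 2, 3]);
    let (s, n) = step(s);
    assert_eq!(n, 3);
    // `s` is usable here only because `step` returned it.
    assert_eq!(s.0, vec![1, 2, 3]);
    println!("secret threaded through");
}
```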

The Rust language, with all of its wonderfully tight type system,
sadly still doesn't have real *linear* types, only a weaker
approximation called "affine" types (a value may be used *at most*
once, and may be silently dropped). Thus, the Rust language must
still rely on the programmer to use additional constraints to try
to deal with "secrets".
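To make the affine/linear gap concrete (my own illustrative sketch; `Secret`, `make`, and `consume` are hypothetical names): Rust will accept a program that silently discards a secret, and the strongest built-in pushback is `#[must_use]`, which produces only a *warning* where a true linear type system would reject the program outright.

```rust
// Affine, not linear: Rust lets a value fall on the floor.
// `#[must_use]` is the programmer-supplied "additional constraint",
// and it is advisory -- a warning, never a hard error.
#[must_use = "a secret must be explicitly consumed"]
struct Secret(u64);

fn make() -> Secret {
    Secret(42)
}

// Explicitly consuming the secret uses up its one permitted use.
fn consume(s: Secret) -> u64 {
    s.0
}

fn main() {
    // A bare `make();` would still compile (with a warning) --
    // affine types permit silent drops; linear types would not.
    let s = make();
    assert_eq!(consume(s), 42);
    println!("secret consumed exactly once");
}
```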

2. Hardware, including *instruction set architectures* (ISAs), is
also not up to the task of dealing with secrets.

The vast majority of ISAs are *pre-emptible*, which allows the
operating system to "interrupt" a given instruction stream *between
any two instructions*. Once a stream is interrupted, the interrupt
handler can do anything it wants, including violating the integrity
of any software-enforced typing system -- even copying secrets out
of registers.

There used to be microprocessors which could *disable interrupts*,
and therefore provide some guarantees that -- assuming power
still continues to flow -- the program would continue uninterruptibly
and be able to *finish* some transaction it had begun. Unfortunately,
if the transaction never finished, the processor would be locked up
and only a "power cycle" would enable full operation to resume.

Over the years, there have been some small attempts to build
more type-preserving hardware, but their cost-effectiveness was
never proven.

Perhaps a few successful *multi-billion-dollar* lawsuits will
re-align the cost/price incentives to re-invigorate research into
better type-preserving hardware, including for *linear types*?

---
Thus, the only thing left to do: include completely separate "secret
processors", all of whose code is 100% under control of the
organization trying to keep the secrets.

But of course, with "secret processors", you immediately lose the
disinfecting power of sunlight, so these "secret processors" are
full of "secret-revealing bugs".

---
Now that AI/Copilot will solve all programming problems, I guess
we're done: "What, me worry?" :-)


