<html><head></head><body>On Mon, Aug 19, 2024 at 12:28 PM, Kent Borg <<a class="" href="mailto:kentborg@borg.org">kentborg@borg.org</a>> wrote:<blockquote type="cite" class="protonmail_quote">
<div class="moz-cite-prefix">On 8/18/24 12:06, Ron Garret wrote:<br>
</div>
<blockquote type="cite">
<pre>Worse news: hard (actually impossible) to know for sure what is going on inside <b class="moz-txt-star"><span class="moz-txt-tag">*</span>any<span class="moz-txt-tag">*</span></b> modern electronic device.</pre>
</blockquote>
<p>Effectively, supply chain attacks.<br>
</p>
<p>Very true.</p>
<p>That's why long ago and far away, so to speak, when I once worked
for a fab-less semiconductor company doing SoCs for laser printers
(that dates it pretty well), I asked my boss a question that
puzzled him. The question was roughly: How easily could the fab
change our design to add a backdoor? He understood the question.
What was puzzling was my asking it. I don't think anyone else in
the company was considering such things, and this certainly wasn't
my job. But I'm weird, that's why I'm on this list. </p>
<p>(Shortish answer, mixing in what I know now: Doable, but
involved, and even if this were something they were doing to every
chip, not cheap. We did the design, from IP we bought and our own
circuitry, *and* we did the layout. I'm pretty sure we only sent
the layout to the fab, for we wouldn't have had the rights to send
higher level "sources" and I don't think we were paying them for
layout services. So making a change would require reverse
engineering from our layout, changing the circuit, and redoing the
layout. Analogous to patching a binary, but harder. …</p></blockquote><div dir="auto">In the late 2000s to early 2010s, Dr. Tehranipoor did some work along these lines using power analysis, which started as side-channel defense work (e.g. <a href="https://ieeexplore.ieee.org/abstract/document/4358706">https://ieeexplore.ieee.org/abstract/document/4358706</a>). I saw tools in their lab that could put a tolerance on power-usage characteristics, measured in a grid across the layout using only the VLSI under different inputs, which would significantly restrict an unknown backdoor.</div><div dir="auto"><br></div><blockquote type="cite" class="protonmail_quote" dir="auto"><p dir="auto">Yes, your point holds, but one can still know a lot about one's
gear. Magical supply chain criminals aren't going to, say, squeeze
undocumented TBs of storage or high-bandwidth digital transceivers
into something that doesn't take up enough space, cost enough
money, get hot enough, draw enough power, or have the bandwidth
to get hold of one's data, etc. Also, at least for users of
things like Linux, one can know a lot about, and influence, what
data the OS is hurling where. And, in the case of SSDs, one can do
very powerful things, such as never storing anything on the SSD
that the SSD could differentiate from gibberish; i.e., only store
encrypted data. (Those links John Gilmore posted did not mention
using encrypted file systems, alas. Though they were MS
Windows-centric, and I suppose such things are harder over there.)</p></blockquote><br><div dir="auto">Great points. Compared to the alternatives, SSDs are also easier to destroy, which should be the norm, since that practice largely eliminates this concern from the threat model. For organizations larger than a couple of people, it's not feasible to be sure that disks are wiped consistently.</div></body></html>