[Cryptography] Dual cycle computation and heating.

Phillip Hallam-Baker phill at hallambaker.com
Mon Dec 6 11:28:27 EST 2021


I despise proof of work.

But...

The notion of distributed compute farms doing work is not completely bogus.
Not if we can actually make use of the heat. And this creates an
interesting challenge for something that looks like homomorphic encryption
but probably ends up being more of a DRM/data obfuscation challenge.

Consider the case in which CPUs become cheap, as in Raspberry Pi-level
prices, for an all-in-one compute node that has only three sets of
connections:

* Power (3 V @ 30 A)
* Network (2x 10G Ethernet)
* Water (integrated PEX connectors)

These CPUs would be sold in the plumbing aisle and come with PEX
connections. As far as the homeowner is concerned, they function just like
an electric water heater: Plug them into a standard socket and let them go.
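
As a rough sanity check on the heating side, here is a back-of-envelope
sketch in Python. It takes the 3 V @ 30 A figure above at face value
(90 W of heat per node) and asks what that does to a water loop; the
flow rate and the 1 kW radiant-zone comparison are illustrative
assumptions on my part, not part of the spec.

    C_P_WATER = 4186.0   # J/(kg*K), specific heat of water
    VOLTS = 3.0          # from the spec above
    AMPS = 30.0          # from the spec above

    def outlet_temp_rise(power_w: float, flow_l_per_min: float) -> float:
        """Temperature rise of the cooling water across one node."""
        mass_flow_kg_s = flow_l_per_min / 60.0   # 1 L of water is ~1 kg
        return power_w / (mass_flow_kg_s * C_P_WATER)

    power_w = VOLTS * AMPS                    # 90 W of heat per node
    print(f"{power_w:.0f} W per node")
    print(f"dT at 0.5 L/min: {outlet_temp_rise(power_w, 0.5):.2f} K")
    # At 90 W one node is a trickle heater; matching a ~1 kW radiant
    # floor zone takes on the order of a dozen such nodes.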

My experience in the parallel processing world was that a 'gigaframe'
represents a major inflection point in the price/performance curve:
1 GB of memory and a 1 GHz processor per node is enough to get useful
computation done. [Yes, I am writing this on a 32 GB machine, but it
has 12 cores; I am thinking that by the time a system like this becomes
practical we will be talking about a $50 compute node with 1,024 cores,
1 TB of RAM and 1 TB of disk, running at 1-4 GHz depending on load.]

The device has to be cheap enough that it is economic to turn it on
only when heat is required, or else it must be possible to vent the
heat to the outside at an economic rate.

It is clear that such a device is feasible; the recent NVIDIA GPUs are
in that class. And while they cost $1,000 today, it is clear that they
will be very, very much cheaper in the future. When Silicon Valley runs
smack into the end barriers of Moore's law, the only thing left will be
beggar-thy-neighbor price cuts.

It is also clear that on-chip/off-chip communications bandwidth is the
biggest barrier to improving performance. The integrated all-in-one
compute engine is the logical end point: stick absolutely everything
you need on one die, add spares so you can switch off the dud circuits,
and package it around the heat management solution.

If you have radiant heat flooring, that is the obvious place to dispose
of the surplus heat from your compute engines.


So the crypto question is: what can we do to make such a device
sufficiently secure to use as a compute engine? Do we partition the
problem so that a hacked node tasked with rendering Black Widow 2 can
only grab one part of one frame? Can we preprocess the data in some
way?
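
To make the 'one part of one frame' idea concrete, here is a minimal
sketch of the partitioning approach in Python. It is not a real
renderer or an existing API; the names (Tile, assign_tiles), the
64-tile frame and the per-frame cap are made up for illustration.

    import random
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tile:
        frame: int
        index: int            # tile position within the frame

    def assign_tiles(frames, tiles_per_frame, nodes, cap=1, seed=0):
        """Map tiles to nodes, capping how much of any frame a node sees."""
        rng = random.Random(seed)
        assignment = {n: [] for n in range(nodes)}
        seen = {}             # (node, frame) -> tiles already handed out
        for frame in range(frames):
            for index in range(tiles_per_frame):
                candidates = [n for n in range(nodes)
                              if seen.get((n, frame), 0) < cap]
                node = rng.choice(candidates)
                assignment[node].append(Tile(frame, index))
                seen[(node, frame)] = seen.get((node, frame), 0) + 1
        return assignment

    # With 64 tiles per frame, 64 nodes and cap=1, a compromised node
    # that leaks everything it receives still reconstructs at most 1/64
    # of any frame.
    plan = assign_tiles(frames=24, tiles_per_frame=64, nodes=64)

The data handed to each node would still need the usual transport and
at-rest encryption; the point of the sketch is only that the scheduler,
not the node, decides how much of the plaintext any one box ever sees.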

Are there enough SETI@home-type problems which are in essence searching
for a needle in a haystack?

Of course, it is quite possible that this sort of scheme can only be
applied on corporate premises, with J&J using their building's compute
nodes to perform drug computations that they would never put in the
cloud.


One thing is pretty certain, though: China will be well placed to build
such infrastructure and run 'national security' problems on it.
Cracking the keys of dissidents and the like does fit the compute
model, so we had better be sure our work factors are sufficient.
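
On that last point, a quick arithmetic sketch (again Python, with
made-up farm sizes and per-node key-test rates, not an estimate of any
real capability) shows why 128-bit symmetric work factors stay out of
reach of exhaustive search even for an implausibly large heating fleet,
while legacy 80-bit work factors do not:

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def years_to_search(key_bits, nodes, keys_per_sec_per_node):
        """Years to sweep the full keyspace by brute force."""
        keyspace = 2.0 ** key_bits
        rate = nodes * keys_per_sec_per_node
        return keyspace / rate / SECONDS_PER_YEAR

    # An implausibly large farm: 10^9 nodes, each testing 10^12 keys/s.
    print(f"128-bit: {years_to_search(128, 1e9, 1e12):.2e} years")
    print(f" 80-bit: "
          f"{years_to_search(80, 1e9, 1e12) * SECONDS_PER_YEAR / 60:.0f} minutes")

    # Output: roughly 1e10 years for 128 bits, about 20 minutes for 80.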