[Cryptography] Is it mathematically provably impossible to construct a mechanism to test for back doors in programs?

Phillip Hallam-Baker phill at hallambaker.com
Tue Jun 3 13:03:05 EDT 2014


On Tue, Jun 3, 2014 at 12:05 PM, Bear <bear at sonic.net> wrote:
> On Mon, 2014-06-02 at 07:50 -0400, Phillip Hallam-Baker wrote:
>
>> We can achieve a robust notary infrastructure that is proof against
>> defection for considerably less money. Let there be 32 independent
>> notary log maintainers who maintain a Haber-Stornetta style hash
>> chain log (i.e. what is used in Certificate Transparency). Each notary
>> has very limited defection opportunities and any defection would be
>> quickly noticed.
>
> This provides identifiable points of attack for adversaries of the
> system.  What the NSA & co. have been teaching us is that if agencies
> with legal authority are permitted to command breaches of trust and
> keep those breaches of trust secret, there can be no trusted
> authority.

A chained notary only has to be trusted for the outputs it has
produced since its head was last recorded by other notaries. So even
in the case of total collusion, the scope for defection is very
limited.

The organizations I am looking at to run notaries are:

1) National laboratories, e.g. NIST and its equivalents in Turkey,
Brazil, France, the UK, etc.
2) Certificate Authorities
3) The major engineering universities: MIT, Stanford, METU, Southampton

Now obviously there is a theoretical possibility that they might all
collude and defect, but it is pretty unlikely that they would, and any
defection would certainly be noticed. I think that is far better in
practice than the Bitcoin blockchain with its known vulnerability to
unwinding transactions.


> I don't like the bitcoin proof-of-work system; it guarantees that
> the expense of creating bitcoins will hover right around equal to
> their market value (i.e., the point at which the marginal profit of
> bitcoin farming reaches zero), and constitutes an outright theft
> of subsidized electrical utilities, etc.  But the problem it solves
> is how to proceed with zero-trust, and a solution that requires
> identifiable trusted authorities is no solution.

Coming back to the subject line, I think it illustrates perfectly the
difference between actual and mathematical proofs of security.

In theory, one-time pads are perfectly secure. But people who try to
use them in practice quickly discover that the many compromises
required to make them practical make them much weaker in practice than
mechanisms that are not provably secure. The same objection applies to
quantum cryptography, which is not unbreakable in practice because
real-world implementations have mechanical imperfections that make
them vulnerable.
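
To make the one-time pad point concrete, here is a toy Python sketch
(the otp helper below is hypothetical, purely for illustration): with a
truly random, message-length key used exactly once the scheme is
information-theoretically secure, but the obvious practical shortcut of
reusing the pad immediately leaks the XOR of the two plaintexts.

    import os

    def otp(key: bytes, message: bytes) -> bytes:
        # Sketch: XOR with an equally long, random, never-reused key.
        assert len(key) == len(message)
        return bytes(k ^ m for k, m in zip(key, message))

    key = os.urandom(16)
    ct1 = otp(key, b"ATTACK AT DAWN!!")   # perfectly secret, if the key is never reused
    # The practical compromise: reusing the pad. XORing two ciphertexts made
    # with the same key cancels the key and leaks the XOR of the plaintexts.
    ct2 = otp(key, b"RETREAT AT DUSK!")
    leak = bytes(x ^ y for x, y in zip(ct1, ct2))   # == plaintext1 XOR plaintext2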

I am not too interested in being able to prove an implementation
perfectly secure mathematically, because I expect that a system that
limits the rate of defection to an insignificant level is actually
going to be sufficient.

