[Cryptography] Is it mathematically provably impossible to construct a mechanism to test for back doors in programs?
ianG
iang at iang.org
Sun Jun 1 10:21:38 EDT 2014
Hi Stephan,
On 1/06/2014 09:34 am, Stephan Neuhaus wrote:
> On 2014-05-31, 17:22, ianG wrote:
>> The bit half-way through second para. It seems to be an interesting
>> property; can someone work through the logic of it for me?
>>
>> I'm specifically wondering whether we are limiting this case to all
>> backdoors, backdoors of the nature of a public-private key pair, or
>> something in between.
>>
>> E.g., if it is impossible to detect any backdoor, why do we favour open
>> source, which might provide a neat mechanism to insert undetectable back
>> doors? Just to be controversial...
>
> To tackle your questions in turn: the answer to the question of whether
> it is "mathematically provably impossible to construct a mechanism to
> test for back doors in programs?" is yes, and the proof is simple and
> requires only elementary logic. It is the same proof that was used by
> Fred Cohen some thirty years ago to prove that no perfect virus detector
> could exist. It is a classic proof by contradiction.
>
> Assume therefore that such a method existed. In other words, if P is
> any program, we assume the existence of a function f such that
>
> * f(P) is "yes" if P contains a backdoor
> * f(P) is "no" if P contains no backdoor
> * the computation of f always terminates
>
> Now I build an encryption program C that contains the following code:
>
> void encrypt (buffer b, key k) {
>     if (f(C) == "no")
>         send_key(k, NSA);
>     ...  /* encryption code here */
> }
>
> In other words, C exhibits a back door if and only if f(C) says it has
> no backdoor. This is a contradiction to our assumption and we must
> therefore conclude that no such function f can exist.
Thanks for that; it was a perfect walk-through of the logic.
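To make the contradiction concrete for myself, here's a quick Python
sketch (mine, purely illustrative): given *any* claimed detector f, we
can build a program C that consults f's verdict on itself and does the
opposite, so every candidate f is wrong about some program.

```python
def defeat(f):
    """Given a claimed backdoor detector f, return a program C that
    misbehaves exactly when f(C) claims it is clean."""
    def C():
        if f(C) == "no":                 # f says: no backdoor here
            return "key sent to NSA"     # ...so the backdoor fires
        return "ciphertext"              # otherwise behave honestly
    return C

# Whatever verdict a detector gives, it is wrong about its own C:
always_no  = lambda prog: "no"           # claims every program is clean
always_yes = lambda prog: "yes"          # claims every program is dirty

C1 = defeat(always_no)
assert C1() == "key sent to NSA"   # declared clean, yet it leaks the key

C2 = defeat(always_yes)
assert C2() == "ciphertext"        # declared backdoored, yet it is clean
```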
Is this also sufficient to prove the halting problem unsolvable?
Something like:
/** @returns TRUE if the program C will halt */
boolean f(Code C) {
    ...
}

void haltNowIf (Code C) {
    if (f(C))
        while (true) ;   /* f says C halts, so loop forever */
    /* f says C never halts, so halt immediately */
}
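In runnable form, the same diagonal move looks like this (again my own
sketch, with a hypothetical halts() oracle standing in for f):

```python
def paradox(halts):
    """Given a claimed halting oracle halts(p) -> bool, return a
    program g that does the opposite of what the oracle predicts."""
    def g():
        if halts(g):
            while True:   # oracle predicted we halt, so loop forever
                pass
        # oracle predicted we never halt, so fall through and halt
    return g

# We can only safely exercise the "predicted non-halting" branch:
g = paradox(lambda p: False)
assert g() is None   # oracle said g loops forever, yet g just halted
```

If instead the oracle answered True, g would loop forever, again
contradicting the prediction; so no total halts() can exist.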
> Now for the second question: given the impossibility of f, why do we
> favour open source? The first answer to this is that "we" might *not*
> favour open source at all, for a variety of reasons, but that is a
> fairly cheap and glib answer. The real answer is that while no
> algorithm with the properties given above can exist, there is
> nevertheless the possibility of detecting a great number of backdoors in
> a practical way.
I agree.
> Also, the contortions that program C above went
> through are rather easy to detect themselves and would lead one to
> reject program C.
Sure, but the recent rash of suspicions concerning open processes in
security & crypto does give one pause for thought.
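Indeed, the self-referential contortion above leaves a visible scar in
the source. As a toy illustration only (real review is of course far
harder than pattern-matching), even a crude static check catches a
program that feeds itself into a detector-like function:

```python
import re

def looks_self_referential(source: str) -> bool:
    """Crude heuristic: flag source that appears to invoke a checker
    f on the program C itself, the tell-tale of the construction."""
    return bool(re.search(r'\bf\s*\(\s*C\s*\)', source))

assert looks_self_referential('if (f(C) == "no") send_key(k, NSA);')
assert not looks_self_referential('sum(xs) / len(xs)')
```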
> The same mistake is occasionally made in cryptography. Ralph Merkle
> once invented a cryptosystem where an attacker had to solve a knapsack
> problem, but where the legitimate recipient had secret information that
> would allow him to turn the knapsack problem into a so-called
> "superincreasing" knapsack problem, which is easy to solve with a
> greedy scan. The reasoning was that since the knapsack problem is known to be
> NP-complete, then unless P = NP, the attacker has a very difficult job.
> The fault in the logic was that, yes, a *random* instance of the
> knapsack problem is hard to solve on average, but the knapsack problems
> that came out of his cryptosystem were *not* random instances. In fact,
> they were solved on standard desktop hardware just a few years later,
> and the cryptosystem thus broken.
:) Perhaps the interesting thing about that is that Ralph Merkle
probably learnt that lesson well. Yet when a group gets tripped by this
logic, do they learn it? Does anyone else?
> Fun,
>
> Stephan
>
thanks again,
iang