TrustBar: an open-source crypto anti-spoofing/phishing toolbar for FireFox
Amir Herzberg
herzbea at macs.biu.ac.il
Wed Feb 9 08:30:11 EST 2005
Ed Gerck responded to me:
>>
>>> Can you trust what trustbar shows you?
>>
>> This trust translates to:
>> -- Trusting the TrustBar code (which is open source so can be
>> validated by tech-savvy users / sys-admin)
>> -- Trusting that this code was not modified (same as for any other
>> aspect of your machine)
>> -- Trusting the CA - well, not exactly; TrustBar allows users to
... <skip>...
>
> In other words, if trustbar can be verified it can be trusted.
This is one method of verifying, but probably not a very useful one for naive
users... Instead, such users would base their trust on trusted
evaluations of TrustBar by independent experts (like, hopefully, people
on this list), or by receiving TrustBar from someone they trust (e.g.
when Mozilla or Microsoft or another reputable company provides TrustBar
functionality in the browser).
>
> Redundancy is useful to qualify trust in information. Trusting the trustbar
> code might be hard to qualify by itself (i.e., source code verification) but
> redundancy helps here [1]. Trust increases if the two channels, trustbar and
> browser CA status [2], agree with each other. Trustbar can become a trusted
> verifier after positively checking with the browser CA status.
I think the most useful redundancy here is the availability of
_multiple_ (redundant) reviews of the code by independent experts.
...<skip>...
> [1] This is also my solution to the famous trust paradox proposed by Ken
> Thompson in his "Reflections on Trusting Trust". Trust is earned, not
> given. To trust Ken's code, I would first ask two or more programmers (whom
> I choose) to code the same function and submit their code to tests. If they
> provide the same answers for a series of inputs, including random inputs,
> I would have a qualification for trusting (or not) Ken's code. This works
> even without source code. Trust is not in the thing, it's in how the thing
> works.
Unfortunately, this test is neither feasible nor secure.
It is not feasible because it assumes the programs implement a
(deterministic) function, while in reality programs are usually
non-deterministic and often are also interactive processes (not a single
function from input to output).
It is not secure because a program can fail on only a negligibly small
fraction of the inputs, and precisely those inputs will be invoked by the
attacker. Namely, even if the programs are supposed to simply implement a
(deterministic) function, e.g. AES_k(m), a rogue implementation could
output AES_k(m) for all m except one special input, e.g. "trapdoor", and
output k when given m="trapdoor". The probability that your test will
include the string "trapdoor" is negligible. QED
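To make the argument concrete, here is a minimal sketch of the attack in Python. Since the standard library has no AES, a keyed HMAC stands in for AES_k(m); the names `honest_prf` and `rogue_prf` are hypothetical, chosen only for this illustration. Random black-box testing cannot distinguish the two implementations, yet the rogue one leaks the key on a single attacker-chosen input.

```python
import hashlib
import hmac
import os

KEY = b"secret-key"  # stand-in for the AES key k

def honest_prf(m: bytes) -> bytes:
    # Stand-in for AES_k(m): a keyed deterministic function.
    return hmac.new(KEY, m, hashlib.sha256).digest()

def rogue_prf(m: bytes) -> bytes:
    # Identical to honest_prf on every input except one
    # attacker-chosen string, where it outputs the key itself.
    if m == b"trapdoor":
        return KEY
    return hmac.new(KEY, m, hashlib.sha256).digest()

# The comparison test Ed proposes: feed both implementations many
# random inputs and check that the answers agree.
for _ in range(100_000):
    m = os.urandom(16)
    assert honest_prf(m) == rogue_prf(m)

# The test passes, yet the attacker extracts the key with one query:
assert rogue_prf(b"trapdoor") == KEY
```

The random inputs hit the 16-byte string "trapdoor" with probability 2^-128 per trial, so the redundancy check passes with overwhelming probability while the backdoor remains intact.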
>
> [2] Mozilla already shows the signing CA name when the mouse is over the
> lock symbol in SSL. This is more readily visible than clicking with the
> right-button and reading the cert.
This is very good; I don't know whether this change was motivated to any
extent by TrustBar, but clearly our goal is exactly to introduce a
sufficiently secure solution into the browsers.
Unfortunately, our usability surveys provide clear evidence that these
improvements are still not enough to protect (most/naive) users... In
fact, I doubt that many users are even aware of this feature or of
what it means. Furthermore, this information, like other information
displayed by browsers, can currently be spoofed in different ways, as
explained in our paper and in some of the previous works we cite.
Best, Amir Herzberg
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com