[Cryptography] Trustworthiness

Kevin W. Wall kevin.w.wall at gmail.com
Thu Jun 22 19:52:44 EDT 2017


On Tue, Jun 20, 2017 at 8:29 AM, iang <iang at iang.org> wrote:
> On 18/06/2017 11:26, mok-kong shen wrote:
>
>> P. G. Neumann wrote in his article "Trustworthiness and Truthfulness Are
>> Essential," CACM, vol. 60, p. 28:
>>
>> "The concept of trustworthiness seems to becoming supplanted with people
>> falsely placing their trust in systems and people that are simply not
>> trustworthy -- without any  strong cases being made for safety, security, or
>> indeed assurance that might otherwise be found in regulated critical
>> industries such as aviation.  However, the risks of would-be "facts" may be
>> the ultimate danger."
>>
>> Are there any practical remedies in sight?
>
> There are better paradigms.  Whether you call them 'remedies' would depend
> on how broken you think the system is.
>
> In practice, anything labelled with trust was a bit of a bait&switch.  The
> notion that people could "trust" systems was a misuse of the word.  In
> practice people rely on systems, not trust them.  You don't trust your car
> to get you to work, except euphemistically instead you rely on it.

I know that this has been discussed many years ago on the RandomBit.net crypto
mailing list. I recall Marsh Ray suggesting that we use the term "relies on",
a usage he attributed to his former colleague Mark S. Miller. (I even have an
indirect reference to that in my blog post on "Misunderstanding Trust" at
https://off-the-wall-security.blogspot.com/2012/01/misunderstanding-trust.html.)

In general, I think that "relies on" makes better sense in many contexts
because, unlike the word "trusts", it doesn't make non-security people think
of a binary "trust / not trust" result. I think "relies on" carries the more
subtle shades of gray that are really more accurate when discussing such
relationships. I do not believe that trust is binary, but most people that I
discuss it with seem to characterize it as such.

In my blog post "Understanding Trust", at
<https://off-the-wall-security.blogspot.com/2011/07/understanding-trust.html>,
I claim that "trust" has the following properties (not necessarily a complete
list):
    Trust is not commutative
    Trust is transitive
    Trust is not binary
    Trust is context dependent
    Trust is not constant over time

Read it and see if you agree. (I know Peter Biddle did not agree with trust
being transitive, which is why I wrote the follow-up "Misunderstanding Trust"
blog post.)
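
As a toy illustration only (the class and method names below are invented for
this sketch, not taken from the blog post or any real library), those
properties might be modeled in Python something like this:

    from dataclasses import dataclass, field

    @dataclass
    class TrustGraph:
        # Directed edges keyed by (truster, trustee, context); the value
        # is a degree in [0.0, 1.0] rather than a yes/no flag.
        edges: dict = field(default_factory=dict)

        def set_trust(self, truster, trustee, context, degree):
            # Context dependent: the same pair can carry different degrees
            # in different contexts; re-calling this as circumstances
            # change captures "not constant over time".
            self.edges[(truster, trustee, context)] = degree

        def trust(self, truster, trustee, context):
            # Not commutative: (a, b) is looked up independently of (b, a).
            return self.edges.get((truster, trustee, context), 0.0)

        def derived_trust(self, a, b, c, context):
            # Transitive (the contested property): if it holds at all, it
            # plausibly attenuates along the chain, hence the multiply.
            return self.trust(a, b, context) * self.trust(b, c, context)

    g = TrustGraph()
    g.set_trust("alice", "bob", "code-signing", 0.9)
    g.set_trust("bob", "carol", "code-signing", 0.8)
    g.trust("bob", "alice", "code-signing")                   # 0.0
    g.derived_trust("alice", "bob", "carol", "code-signing")  # 0.72

Even a sketch this small shows why transitivity is the property people argue
over: the derived value is a modeling choice, not a law.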

On Wed, Jun 21, 2017 at 6:43 PM, iang <iang at iang.org> wrote:
> On 21/06/2017 08:29, Dave Horsfall wrote:
>> On Tue, 20 Jun 2017, Ray Dillinger wrote:
>>> The Trusted Platform Module, for example, is named correctly.
>>> "Trusted" means simply that it introduces an additional risk of failure.
>>
>> Remember, in this context "trusted" means you *have* to trust it, not
>> because you *can*.
>
> This is what I call compliance.  I resist calling this trust.  To me, trust
> involves me taking an analysis, making a decision, taking on a risk, and
> then living with the consequences - reward or loss.
>
> Wherever one talks about a Trusted XBlahSomething, we ultimately end up with
> no choice.

And that is spot-on the problem that I have with using the term "relies on"
rather than "trust". Just because I "rely on" Google or GM or the USG doesn't
really mean that I "trust" them, and even the extent to which I do trust them
is limited by context rather than all-encompassing.

I think "trust" conveys a personal choice (although that choice often may be
implicit). The "choice" being the analysis and decision making that Ian refers
to...do I want to accept the risk or not? What are the trade-offs. Many times,
those decisions are implicit and/or not terribly well-grounded in logic. For
example, I think that many of us (myself included) at one time or another
have made the mistake of "trusting" someone simply because they were in a
position of authority only to have it come back and bite us royaly in the
ass. The logic error there borders on the fallacy of appeal to authority, which
is understandable given that its drilled into most people's heads since they
were small children. At lot of those types of "trust" decisions get made
implicitly because we generally feel that authorities will behave
morally and are
altruistic.

However, the term "relies on" doesn't really imply this degree of analysis. I
"rely upon" my car to get me from point A to point B. Maybe that's because
almost all such decisions (to use Google, or to use my car) are implicit,
based on prior personal experience. (I remember first learning to drive; I had
no such trust in my car at that time.) I think of "relying on my car" or
"relying on Google" in the sense that I've already made a decision sometime in
the past, based on nothing obviously bad happening, and have decided to accept
the benefits over any inherent risks. And I do that even when other
equivalent, but safer, alternatives exist in their place, such as using
DuckDuckGo for Internet searches rather than Google search.

Getting back to M.K. Shen's original question about Neumann's most recent
CACM Inside Risks column, "Are there any practical remedies in sight?", I
would say that two remedies I've seen work are developer education in "secure
coding practices" and developing formal threat models. At my previous
employer, essentially no time was given to IT folks to take any kind of
secure coding classes. At my present company, it's almost expected, and for
some lines of business it may even be mandated. The resulting security of the
final products is telling: as different as night and day. At my former
company, when I pointed out an XSS vulnerability (and sometimes even a SQL
injection), I would often hear developers complain "but no one would ever do
that" when I told them they had to fix their code. At my present company, I
often get thanked for finding the vulnerability.
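
To make the SQL injection point concrete, here is a minimal sketch of the
vulnerable pattern versus the parameterized fix (the table, column, and
function names are hypothetical, invented just for illustration):

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: untrusted input is spliced into the SQL text, so a
        # username of "x' OR '1'='1" turns the WHERE clause into a
        # tautology and returns every row.
        query = "SELECT id, name FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the driver binds the value separately, so
        # it is always treated as data, never as executable SQL.
        query = "SELECT id, name FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

The whole point of the training is to make the second pattern the reflexive
one.
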
So, yeah, developer education makes a noticeable difference. Whether it is
always practical, well, that's a cost/benefit analysis each organization needs
to make. Having each developer take 2 weeks of security training might not
make a lot of sense if all you are developing is some new game for a start-up
company. (But an established company developing that same game may very well
come to a different conclusion, because there is much more reputational risk
at stake.) You've got to do your own homework to see the trade-offs for your
particular context. If you don't have the skills to do that, there are a lot
of companies that can provide that expertise to you.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

