[Cryptography] Keeping Malware from Using Security Hardware

Kent Borg kentborg at borg.org
Wed Mar 5 00:15:09 EST 2025


On 3/3/25 5:19 AM, Ben Laurie wrote:
> This is why I've always said that authentication devices need an 
> actual display, so you can see what you're authenticating.
>
> See https://www.nspw.org/papers/2008/nspw2008-laurie.pdf.

The "Choose the Red Pill and the Blue Pill" paper is interesting (partly 
in how old it is). I don't know that a lot has *changed* since 2008, but 
certainly a lot has *happened*. Does that make sense?

Main reactions:

1. I'm not as worried as Laurie and Singer seem to be about whether this 
imagined "Nebuchadnezzar" authentication/authorization device could be 
built. A perfectly implemented device (with "an absolutely bullet-proof 
kernel") doesn't seem necessary; I think a merely really well-built 
device could still be useful. (Use Rust, stop to think and then stop to 
design, worry, and don't require that it be built only in three-line PRs.)

2. Adoption in the real world seems hard. I don't see some industry 
group (or powerful company) having the motivation and power to simply 
*do* this (à la SIM cards), so I wonder what incremental approaches 
there are. Is there some well-motivated use case to get the ball 
rolling? Could there be some reasonable extension to OAuth (??) that 
takes useful advantage of a UI? Current Yubikeys have security value; 
could there be a Yubikey-plus-simple-UI device that would be useful? Or 
would the bad guy simply create the appropriate display? (Serious 
question; there is a rough sketch after this list of why signing exactly 
what the device itself displays ought to blunt that.)

3. When I picture the full-blown "Nebuchadnezzar" they imagine, with a 
UI saying how many of what widgets for how many dollars to be shipped 
where…I worry about UI design and the social-engineering possibilities 
in getting people to agree to bad things, particularly given how many 
times they might eventually do that in a day.

4. The management of a Yubikey is already very complicated, and the 
management of a "Nebuchadnezzar" seems a lot worse. Is there any way 
these beasts could be owned by normal individuals?

5. The UI. How the hell can that be made meaningful enough to offer any 
security yet flexible enough to be of general use?
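To make point 2 a little more concrete: the reason a bad guy's fake 
display shouldn't be enough (I think) is that the device only signs the 
exact transaction its *own* screen showed and the user physically 
approved, and the verifier checks that signature against the transaction 
it is actually about to execute. A rough Python sketch of the idea 
follows. Everything in it is made up for illustration: the shared 
DEVICE_SECRET, the canonical() encoding, and the function names are all 
hypothetical, and a real device would presumably use an asymmetric 
signature with attestation rather than an HMAC over a provisioned 
secret. The HMAC just keeps the sketch standard-library only.

import hmac, hashlib

# Hypothetical per-device secret, assumed to be provisioned and shared
# with the verifying server (not how a real Yubikey or FIDO device works;
# this only illustrates "sign what the device displayed").
DEVICE_SECRET = b"provisioned-per-device-secret"

def canonical(tx: dict) -> bytes:
    # Deterministic encoding of exactly what the device screen displays.
    return "pay {amount} {currency} to {payee}".format(**tx).encode("utf-8")

def device_confirm_and_sign(tx: dict, counter: int) -> bytes:
    # On real hardware: render canonical(tx) on the device's own display
    # and block until the user presses the physical "approve" button.
    msg = canonical(tx) + counter.to_bytes(4, "big")
    return hmac.new(DEVICE_SECRET, msg, hashlib.sha256).digest()

def server_verify(tx: dict, counter: int, tag: bytes) -> bool:
    # The server recomputes the MAC over the transaction *it* is about
    # to execute, not over anything the possibly-compromised host showed.
    expected = hmac.new(DEVICE_SECRET,
                        canonical(tx) + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# If malware swaps the payee after the user approved, verification fails:
tx_shown = {"amount": "250.00", "currency": "USD", "payee": "Acme Widgets"}
tx_swapped = {"amount": "250.00", "currency": "USD", "payee": "Mallory"}
tag = device_confirm_and_sign(tx_shown, counter=7)
assert server_verify(tx_shown, 7, tag)
assert not server_verify(tx_swapped, 7, tag)

So a spoofed display on the host buys the attacker nothing by itself; 
what it can still do is what points 3 and 5 worry about, namely talk the 
user into approving something bad on the genuine device screen.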


Pet peeve time. What we are talking about is fighting the techniques of 
good old-fashioned fraud and swindle and grift and trickery, but 
digital. The basic problem goes back a *very* long time: the bad guys 
will try to trick the good guys. In the physical world we try to teach 
people how not to be swindled. (Or at least we used to try; have we stopped?)

But in the virtual world:

A) We teach people *to* be swindled. I am opposed to clicking on an 
e-mailed link and then entering authentication credentials, but banks 
and HR departments seem particularly determined to force me to do this. 
I want to log in myself, using a URL I trust, and *then* click in the 
e-mail. But no, frequently that is impossible; I still have to give my 
credentials. And were I to call to complain, no one would have the 
faintest idea what I am talking about.

B) We are determined to *not* teach people how to avoid being swindled 
in the virtual world. Because that would be blaming the user! Having 
savvy users is an explicit non-goal of the cybersecurity world. But 
there is no technical solution to humans tricking one another; humans 
will still be making decisions. Those decisions can be better or worse.


-kb, the Kent who is wondering how to be paranoid enough about what he 
runs on his Linux machines.


P.S. I would still like a good offline password safe. Roughly, a "smart" 
phone with a keyboard. Last time I tried to resurrect my Gemini PDA I 
was disappointed that running Linux on it was always a future promise 
and never really productized.