[Cryptography] [cryptography] Browser JS (client side) crypto FUD

Kevin W. Wall kevin.w.wall at gmail.com
Sun Jul 27 19:27:56 EDT 2014


[Note: Dropped cypherpunks list as I'm not subscribed to that list.]

On Sat, Jul 26, 2014 at 11:03 AM, Lodewijk andré de la porte
<l at odewijk.nl> wrote:
> http://matasano.com/articles/javascript-cryptography/
>
> Is surprisingly often passed around as if it is the end-all to the idea of
> client side JS crypto.
>
> TL;DR:

I don't see how you can claim it was "TL;DR", especially when you
put in as much time as you apparently did in your almost blow-by-blow
reply. It was a mere 9 pages if you printed it out, and had they used
the header formatting normally used for white papers, I would guess it
would come out to no more than 5 or 6 pages.

> It's a fantastic load of horse crap, mixed in with some extremely
> generalized cryptography issues that most people never thought about before
> that do not harm JS crypto at all.
>
> I'm not sure why the guy wrote it. Maybe he's NSA motivated? Maybe he's
> worked a lot on secure systems and this just gives him the creeps? Maybe
> he's the kind of guy that thinks <dash>JS</dash> dynamic scripted languages
> are not a real languages?

Really? You're going to go there and imply he's an NSA shill? That's pretty
unprofessional.

>
> Somebody, please, give me something to say against people that claim JS
> client side crypto can just never work!

I can't do that, because I wouldn't claim that it can "never" work. I could see
it being useful if used correctly in the right context, by which I mean as one
layer in a security-in-depth approach. But if one is referring to moving all the
crypto to the client side, I think that would generally be a huge mistake.

> ---------------------------------------------------------
> Aside from that it's, well, fundamentally moronic to claim that something is
> "harmful" when you actually means it does nothing, it's also just (almost!)
> never true that no attacks are prevented.
>
> But, let's go with the flow of the article. Rants won't really settle
> arguments.
>
> Two example usages are given.
>
> The first is client-side hashing of a password, so that it's never sent in
> the clear. This is so legitimate it nearly makes me drop my hat, but, the
> author decides to use HMAC-SHA1 instead of SHA2 for reasons that are fully
> beyond me. Perhaps just trying to make things less secure?

I think it more likely had to do with the prevalence of SHA1.  When was
this written anyway? The only date that I saw was "2008" in reference
to the browser maturity, where it stated:

   Check back in 10 years when the majority of people aren't still running
   browsers from 2008.

(Of course that's still largely true today and will remain so until all
those unsupported WinXP systems get replaced.)

So assume HMAC-SHA2 here if you like. I don't think that changes things
much. But I think the reason for the HMAC is that you clearly want
a keyed hash where you are hashing a nonce in the sort of challenge-response
authN system that the author is describing.
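
For what it's worth, if you did want a keyed hash over a server-supplied
nonce in the browser, the general shape would be something like the sketch
below. This assumes the W3C Web Crypto API (window.crypto.subtle), which
wasn't around when the Matasano piece was written, and the variable names
are made up for illustration:

    // Minimal sketch, assuming window.crypto.subtle is available.
    // sharedSecret and serverNonce are assumed to come from the server;
    // the names are illustrative only.
    function hmacOfNonce(sharedSecret, serverNonce) {
        var enc = new TextEncoder();
        return crypto.subtle.importKey(
            "raw", enc.encode(sharedSecret),
            { name: "HMAC", hash: "SHA-256" },
            false, ["sign"]
        ).then(function (key) {
            // Keyed hash of the server's challenge (challenge-response style)
            return crypto.subtle.sign("HMAC", key, enc.encode(serverNonce));
        });
    }

Even then, this only keeps the password off the wire; it does nothing about
where the script itself came from, which is the article's chicken-and-egg
point.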

But if the goal is to build something like SRP, it would be much better to
build that into the HTTP specification so that browsers and web servers
could support it directly, similar to how they do with HTTP Digest
Authentication.

> The second is using AES keys to client side encrypt. The author must've
> thought he was being helpful when he imagined the scheme for this. Or maybe
> he was drunk. "So you generate an AES key for each note, send it to the
> user's browser to store locally, forget the key, and let the user wrap and
> unwrap their data.". Somehow trusting the transport layer is all back in
> vogue. The only key-generation problem in JS is entropy, which is a problem
> everywhere tbh. If you really want to ensure entropy, send a random data
> blob and XOR it with whatever client-side best-shot at randomness. Whatever.
>
> The author bluntheadedly claims "They will both fail to secure users". In
> principle I agree, his methods sucked balls. He, however, blames it on JS.
> Okay.. Let's go on.
>
>> REALLY? WHY?
>> For several reasons, including the following:
>> 1 Secure delivery of Javascript to browsers is a chicken-egg problem.
>> 2 Browser Javascript is hostile to cryptography.
>> 3 The "view-source" transparency of Javascript is illusory.
>>
>> Until those problems are fixed, Javascript isn't a serious crypto research
>> environment, and suffers for it.
>
> (points numbered for pointwise addressing)
>
> 1 - Yeah. Duh. What do you think of delivering anything client side? There's
> the whole SSL infrastructure, if that doesn't cut it for you, well, welcome
> to the Internet. (I suggest the next article is about how the Internet is
> fundamentally flawed.) I would suggest, however, that once your delivery
> pathway is exploited you're fundamentally screwed in every way. You can't
> communicate anything, you can't authenticate anyone, you really can't do
> anything! So let's leave out the "Javascript" part of this point, and just
> do whatever we're already doing to alleviate this issue.

Well, it's really more than that and the problem goes beyond just JS crypto.
We have the same issue for login forms. All those pages where you see
a login form displayed on a vanilla http page are insecure even though
they really do POST back to the application via https. This is not much
different.

I see it all the time: something (most often images or a .js library) is
using an http link from within an https page. Sure, the browser gives you
a warning, but how many people really click through those warnings
without trying to find out what is causing them?

I think what they are saying here is that in the case of crypto, the stakes
are just a lot higher than your Gmail or FB password or whatever.

> 2 - This is a conclusion without any basis so far (aside from being..
> meaningless to a computer scientist. Hostile?)

I disagree. They give these reasons and I believe that all of them are valid:

* The prevalence of content-controlled code.
* The malleability of the Javascript runtime.
* The lack of systems programming primitives needed to implement crypto.
* The crushing weight of the installed base of users.

And they go into some detail for each of them. Of course, IMO, these are
not just issues with crypto written in JS, they are issues with pretty much
anything written in JS.

> 3 - Then just look at what data was transferred. Does every crypto
> application require checkable source? Is any SSL implementation "considered
> harmful" because nobody is able to flawlessly read the code, no compilers
> are trusted, etc?

I think you are missing the point here. This has to do with these 2 bullet
items:

* The prevalence of content-controlled code.
* The malleability of the Javascript runtime.

Your TLS implementation and your server-side code are generally compiled
and are much less malleable and much less subject to content-controlled
code. I don't think they are arguing that everything else is a perfect
world, only that the way JavaScript is used today, with all the AJAX-based
mash-ups, is a much more difficult scenario.

> Okay so that chapter meant absolutely nothing. The author goes on to try to
> defend his brabble:
>
> "WHAT'S THE "CHICKEN-EGG PROBLEM" WITH DELIVERING JAVASCRIPT CRYPTOGRAPHY?
>
> If you don't trust the network to deliver a password, or, worse, don't trust
> the server not to keep user secrets, you can't trust them to deliver
> security code. The same attacker who was sniffing passwords or reading
> diaries before you introduce crypto is simply hijacking crypto code after
> you do."
>
> A fair point against a single thread model. Interestingly the last line does
> absolutely not have to hold, sniffing (after the fact) and on-the-fly
> rewriting are worlds apart. Take Tempest of Xkeyscore, for example, they
> can't do rewrites. They need specialized programs for that. (Conclusion:
> nope, nothing to see here)

I responded somewhat to that above, but there is another issue. I don't
see the major problem here as MITM attacks, but rather as MITB
(man-in-the-browser) attacks, which most likely would be carried out through
DOM-based cross-site scripting.  DOM-based XSS is prevalent, especially in
AJAX code where people use (or misuse) frameworks like jQuery. It is
also one of the more difficult classes of vulnerabilities to mitigate,
but that's another story for another day and somewhat OT for a crypto list.
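
Still, to make the threat concrete, here is the kind of DOM-based XSS sink
I mean (a contrived, hypothetical snippet, not anything from the article):

    // Hypothetical vulnerable code: attacker-controlled data from the URL
    // fragment flows straight into an HTML sink.
    var name = decodeURIComponent(location.hash.slice(1));
    document.getElementById("greeting").innerHTML = "Hello, " + name;
    // A link such as  https://example.com/page#<img src=x onerror=...>
    // gets the attacker's script running with full access to the page's
    // JavaScript, including whatever crypto code and keys live there.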

> The next chapter tries to justify the fallacies made earlier on. Equating a
> rewrite to a read, ad-homineming the JS crypto "industry" (and failing to
> distinguish operational security from actual security), and lastly claiming
> that misplaced trust is bad (which is obvious and unrelated).
>
> The next chapter claims SSL is safe, and "real" crypto unlike JS crypto.
> Then firmly cements his baseless ridicule by claiming that if you use non-JS
> crypto to make JS crypto work, then obviously there's no point.

Actually, I think that the insecurity *of* TLS implementations might be one
of the stronger arguments *for* client-side crypto. But I think that would
be safer as a browser plug-in than it would be in JavaScript because then
things like DOM-based XSS are less of a concern.

I don't think that client-side crypto (in a browser) should stand on its own,
but it might be good as a security-in-depth approach to ensuring
confidentiality and authenticity.  A belt AND suspenders approach.

> The next chapter "WHAT'S HARD ABOUT DEPLOYING JAVASCRIPT CRYPTO CODE OVER
> SSL/TLS?" claims all the page has to be SSL/TLS and that makes it hard. It's
> not hard and you should already be doing it to have /any/ security. Not to
> mention it's not true, only that interpreted as page contents has to be
> SSL'ed (eg, images don't need to be transported over SSL).

Well, the reason this is hard is that often developers write and deploy
the code, but someone else with relatively little development experience
later adds (and regularly changes) content. For example, the marketing
department might be given a frame to display some PR story about recent
sales figures, and they link to some images without using https. Usually
those come from the same origin, so there is no cross-origin browser policy
to protect you, and that gives a MITM a foothold because of all the people
who ignore the browser's "mixed content" warnings. I do security code
reviews for a living and this is common even among developers. Mix in
someone else with no security training at all, allow them to provide
content, and it's pretty much guaranteed to happen. Sure, you can
configure your application to guarantee transport over TLS only, but that
too is often overlooked. I'm not saying securing this is impossible,
but it is difficult.
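
(For completeness: HTTP Strict Transport Security is the usual way to do
the "TLS only" configuration I mentioned. A response header along the
lines of the following, with an illustrative max-age value, tells
conforming browsers to refuse to ever fall back to plain http for that
host:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

Of course, older browsers, notably IE as of this writing, simply ignore
it, which circles back to the installed-base problem.)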

Oh, and BTW, you are wrong about "images don't need to be transported
over SSL": if those can be MITM'd, that can lead to Cross-Site Request
Forgery (CSRF) attacks.

> So, point 1 has no merit against JS whatsoever.

I'd disagree that there is "no merit against JS whatsoever". See the points
made above.

> There's also a lot of
> FUD-like text that denies reality. Especially the assumption that SSL and
> desktop programs are somehow more secure.
>
> So point 2.

Not more secure. None of them are "secure". But I guess one could view
this as the devil you know vs. the devil you don't. At least now we know
what we are facing.

And while augmenting SSL and server-side security with JS crypto might
seem like a good idea, knowing what I know about IT management, eventually
the approach would become "we don't need crypto in the client AND in SSL;
SSL is costing us $X per year for certificates, $Y per year in operational
costs, and $Z per year in extra server CPU capacity, so let's just use the
JS crypto and can SSL". Somewhere, some pointy-haired boss will make that
call. Most won't, but some will.

> (letterized for pointwise addressing)
> "HOW ARE BROWSERS HOSTILE TO CRYPTOGRAPHY?
> In a dispriting variety of ways, among them:
>
> a - The prevalence of content-controlled code.
> b - The malleability of the Javascript runtime.
> c - The lack of systems programming primitives needed to implement crypto.
> d - The crushing weight of the installed base of users.
>
> Each of these issues creates security gaps that are fatal to secure crypto.
> Attackers will exploit them to defeat systems that should otherwise be
> secure. There may be no way to address them without fixing browsers."
>
> a, c, d are, at first sight, all rubbish. b is a very genuine point however.

Point 'a' is most definitely NOT rubbish. I find DOM-based XSS in just about
every review that I've done. That could totally compromise your JS crypto.

Sure, Content Security Policy headers someday might address this, or at least
diminish it to a large degree, but we are very, very far from that day.
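
As an illustration of the sort of thing CSP promises, a policy roughly
like this (values made up for the example) keeps the page from loading
script from anywhere but its own origin and blocks inline script, which
would kill off a large chunk of the content-controlled-code problem:

    Content-Security-Policy: script-src 'self'; object-src 'none'

Getting a real mash-up-laden application to actually run under such a
policy is, of course, the hard part.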

> With prototyping and the like it can be VERY hard to see what's going on.
> It's an often mentioned thing about JS that it's too powerful in some ways,
> and it can be true. The same goes for C and memory control.

Well, the concern I think is mash-ups. A lot of developers don't even bother
to pull down JS frameworks and reference them locally; they just make
external references to them. It boils down to trusting 3rd party frameworks
and using them in the way they were intended. Java has similar problems, but
not to quite the same degree. And SAST tools do a much better job analyzing
Java than they do JavaScript.

> Next chapter confirms that a is rubbish.

Which chapter / paragraph is that? This one?

    We mean that pages are built from multiple requests, some of them conveying
    Javascript directly, and some of them influencing Javascript using DOM tag
    attributes (such as "onmouseover").

I don't think that's rubbish, at least not if you know how prevalent XSS
is. And when you are using a framework like (say) jQuery, which calls 'eval'
under the hood without most developers even knowing it, that's just asking
for trouble.
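
A concrete (and hypothetical) illustration of what I mean by "without most
developers even knowing it":

    // Looks like harmless DOM manipulation, but when the markup contains an
    // inline <script>, jQuery's .html() extracts and executes it (via its
    // internal globalEval), and handler attributes such as onerror fire as
    // soon as the element hits the DOM.
    var userComment = getUntrustedCommentFromSomewhere();  // hypothetical helper
    $("#comments").html(userComment);

The developer never typed 'eval', yet attacker-supplied markup still ends
up executing in the page.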

It's the main reason that I'd prefer that client-side crypto get handled
by a browser plug-in / helper rather than JS.

> Chapter after that explains some basic Comp Sci about when you can trust
> something (and discredits something that can help in a lot of cases, any if
> you do it correctly (which is too hard))
>
> Chapter after that rehashes the idea that you can't trust the environment
> unless you trust the whole environment, which is also the same everywhere.
> (I also refer to trusting the compiler)

No; because of the liberal use of mash-ups with JavaScript, I think this
brings it to a whole new level. Yes, you have an issue of "trusting the
compiler" for regular applications, but in this context I would say that
is more akin to "trusting your browser" (or more specifically, trusting
your browser's JavaScript engine). With mash-ups (e.g., a mash-up of
Salesforce with the Google Maps API) you introduce dependencies that are
much more varied, and the interaction between them is much more dynamic
than in your "compiler trust" case, where you (as the builder) at least
know which compiler, assembler, and link editor you are trusting at build time.

I think the scenario that the Matasano author portrays is much closer to
trusting 3rd party libraries. The major difference that I see is that
JavaScript is much more malleable than even Java with reflection. When
used, cryptographic checksums or digital signatures (e.g., signed jars)
provide some modicum of assurance that we are at least using the library
that was intended.  That can be combined with something like OWASP
Dependency Check to make sure that your 3rd party libraries do not have
unpatched known (as in, published in the NVD) vulnerabilities.
(Unfortunately, OWASP Dependency Check is currently limited to Java and
.NET libraries.) Also, you rarely, if ever, see server-side code pull in
3rd party frameworks dynamically (the use of 3rd party REST and SOAP
based web services is usually as close as we get to this), but pulling
in 3rd party JavaScript frameworks from somewhere other than your locally
hosted application is all too common. While few would argue that this is
good practice, it appears it won't end anytime soon.

Also, in the JavaEE world one *can* (although it is rarely done) use a
Java Security Manager with their application server, along with a custom
security policy, to ensure that such attempts to play malleability games
using Java reflection fail with a SecurityException. I believe, but cannot
say for certain, that something like this also exists in the .NET world.
However, I am not aware of any way to impose such a restriction in the
JavaScript world. There might be a way, but I've never heard of it.
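
To illustrate the malleability point with a contrived example (whether a
given engine lets you shadow a particular built-in varies, but the general
point stands): any script that has made it onto the page, whether via XSS
or a compromised 3rd party include, can quietly redefine the primitives
your crypto depends on:

    // Contrived example of runtime malleability: injected code silently
    // replaces the page's source of randomness with something predictable.
    (function () {
        var realGetRandomValues = crypto.getRandomValues.bind(crypto);
        crypto.getRandomValues = function (arr) {
            realGetRandomValues(arr);   // keep the call looking normal
            for (var i = 0; i < arr.length; i++) {
                arr[i] = 4;             // every "random" byte is now predictable
            }
            return arr;
        };
    })();
    // Every key, IV, or nonce generated after this point is worthless, and
    // the legitimate crypto code has no reliable way to detect the swap.

Object.freeze and the like can lock down individual objects, but there is
nothing comparable to a Security Manager policy governing the runtime as
a whole.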

> Next chapter is titled "WELL THEN, COULDN'T I WRITE A SIMPLE BROWSER
> EXTENSION THAT WOULD ALLOW JAVASCRIPT TO VERIFY ITSELF?". And guess what,
> the author agrees. You can indeed do this. If you're just doing it for
> yourself or a single kind of crypto you could also make a plugin for that.
> Which is what the WhatWG is doing with the HTML5 crypto extension. Then
> claims crypto is to PGP as programming languages are to Lisp, which is
> rubbish.

The exact Lisp reference was:
    Just as all programs evolve towards a point where they can read
    email, and all languages contain a poorly-specified and buggy
    implementation of Lisp, most crypto code is at heart an inferior
    version of PGP.

And yeah, I didn't think that was a very good analogy either. For one thing,
other than Emacs, I can't think of any programs off the top of my head that
have both evolved to read email and implemented some version of Lisp. It would
have been more believable had the author simply said that "most crypto code
is inferior to PGP". Certainly most crypto implementations are not even
remotely based on PGP. (If they were, they probably would be considerably
better than they are.)

> The author then goes on to actually talk about random generators. Which are
> not always required, but who cares, right?

Not always, no. But frequently for generating keys, nonces, IVs, etc. So
it shouldn't be dismissed entirely.
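
For the cases where you do need it, here is a sketch assuming a browser
that exposes window.crypto.getRandomValues (older browsers do not, which
is the installed-base problem yet again):

    // Sketch: generate a 128-bit IV from the browser's CSPRNG, refusing to
    // proceed if window.crypto.getRandomValues is not available.
    function generateIv() {
        if (!(window.crypto && window.crypto.getRandomValues)) {
            throw new Error("No CSPRNG available; will not fall back to Math.random()");
        }
        var iv = new Uint8Array(16);
        window.crypto.getRandomValues(iv);
        return iv;
    }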

> Then Secure erase, which is only
> important if you expect the client device to be exploited.

Well, it is probably a good thing to include secure erasure of memory
as something that needs to be dealt with in one's threat model. This might
not be as important in Google Chrome, where separate tabs / windows
run in separate processes rather than just separate threads.
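
Part of the problem is that JavaScript gives you very little to work with
here. A sketch of about the best you can attempt, and its limits:

    // Keep key material in a typed array and overwrite it when finished.
    function wipe(keyBytes) {   // keyBytes: a Uint8Array holding key material
        for (var i = 0; i < keyBytes.length; i++) {
            keyBytes[i] = 0;
        }
    }
    // Limits: if the key ever existed as a string (hex, Base64, a password),
    // strings are immutable and those copies cannot be scrubbed, and the
    // garbage collector may have left other copies you cannot reach at all.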

And again, MITB issues don't make this threat that remote. If crypto JS
starts becoming common, you can bet that organized crime will start to
target things like crypto keys in JS via various MITB attacks.

> Then 'timing
> attacks; which is even more specific and can be alleviated easily enough.

*Known* timing attacks can be mitigated, but to be fair, the author did write
"functions with known timing characteristics".

> Then tries to generalize his claim to remove JS specifically from the
> equation, removing is last viable (but not definitive) arguments.
>
> Some hating on key management, which is justified but again bullocks wrt the
> main argument. (not to mention it's a problem everywhere, and it can be
> solved like everywhere)

True it is a problem everywhere, but that doesn't make it any less of
a problem here. We criticize it when there are key management issues
on the server, so it is only fair that failure to address it on the client
side is open to criticism. The major difference is that generally there
are devops groups who address server-side hardening issues and audits are
commonly run. Such attention to detail is unlikely to happen for the client
side key management issues.

That said, I don't see this as a show-stopper. One would think that TPM could
be leveraged for key management, at least for those platforms where it is
supported. All that would be needed would be some secure manner in which
to interface with the TPM. Again, I think that would be better addressed
by the browser itself or by a browser extension / plug-in than via JavaScript
for reasons already given.

> Some hate on people running old browsers, which has actually been solved by
> background-auto-updating by now. (huzzah for all the added insecurity there)

RIGHT... except for those millions of fools who are still running IE and
WinXP and who can't update. Surely that's not a problem that is going to
disappear overnight.

Also, I'd bet that half of my acquaintances who come to me for help do NOT
have auto-update enabled! Whether that's because they've gotten themselves
infected with malware that disabled it, or because they've been too annoyed
at the sometimes untimely updates / reboots, or they are just clueless, I
don't know. Also, more than half of them regularly log in with a Windows
administrator account and surf the web. So I think you are overstating
the effectiveness of auto-updates a bit. It works pretty well when it
is enabled, but all too frequently it is not.

> Then something about graceful degrading. Which is fair except for him not
> sufficiently providing any reason JS crypto never works. (and not relevant).
> He apparently meant this with d. Depends greatly on the deployment
> situation, but in general it's FUD.

Not FUD if it is up to developers to decide on the algorithm suites.
However, I could see it as FUD if the algorithm support is punted directly
to the browser manufacturers and the ciphersuites are specified by IETF
or W3C or NIST or this fine group of people on this list. :)
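
The failure mode I would worry about is the all-too-natural fallback
pattern (an illustrative anti-pattern; the helper names are hypothetical):

    // "Graceful degradation" that silently trades away the security
    // properties the user thinks they have.
    function encryptNote(note, key) {
        if (window.crypto && window.crypto.subtle) {
            return encryptWithWebCrypto(note, key);          // hypothetical helper
        }
        // Older browser? Quietly fall back to a hand-rolled pure-JS cipher,
        // perhaps seeded from Math.random(). The user is never told.
        return encryptWithHomegrownJsCipher(note, key);      // hypothetical helper
    }

If the ciphersuites and the fallback behavior were pinned down by the
browser or a standards body rather than left to each application
developer, that trap would mostly go away.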

> "AND WHEN YOU SAID "VIEW-SOURCE TRANSPARENCY WAS ILLUSORY"?
>
> We meant that you can't just look at a Javascript file and know that it's
> secure, even in the vanishingly unlikely event that you were a skilled
> cryptographer, because of all the reasons we just cited."
>
> Yeah. Welcome to programming. There's absolutely no truth to this claim btw.
> Vagely referring to a large body of rubbish is not an argument.

Well, I will say that the SAST tools don't work nearly as well for JavaScript
as they do for other languages like Java, C#, or C++. And that in itself
is an issue because almost all serious secure code reviews today seem to
be tool-assisted. I keep thinking that the SAST tools will start making up
the gap for JavaScript, but over the 5 years or so that I've been observing
this space, I've not seen it.

The other relevant issue here is that I would speculate there are far fewer
skilled cryptographers who are proficient with JavaScript than there are
ones proficient with C or C++ or Java, etc. That was not the point the
author was specifically trying to make here, but I think it is still
pertinent.

[snip]

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.

