[Cryptography] Browser JS (client side) crypto FUD

Lodewijk andré de la porte l at odewijk.nl
Tue Jul 29 09:53:44 EDT 2014


2014-07-28 1:27 GMT+02:00 Kevin W. Wall <kevin.w.wall at gmail.com>:

> [Note: Dropped cypherpunks list as I'm not subscribed to that list.]
>
> On Sat, Jul 26, 2014 at 11:03 AM, Lodewijk andré de la porte
> <l at odewijk.nl> wrote:
> > http://matasano.com/articles/javascript-cryptography/
> >
> > Is surprisingly often passed around as if it is the end-all to the idea
> of
> > client side JS crypto.
> >
> > TL;DR:
>
> I don't see how you claim that it was "TL;DR",


Only the following paragraph is the TL;DR of my response, as I noticed it got
quite long. It's a bit of a convention, but apparently not one to rely on.
Sorry!


> > I'm not sure why the guy wrote it. Maybe he's NSA motivated? Maybe he's
> > worked a lot on secure systems and this just gives him the creeps? Maybe
> > he's the kind of guy that thinks JS and other dynamic scripted
> > languages are not real languages?
>
> Really? You're going to go there and imply he's an NSA shill? That's pretty
> unprofessional.
>

I'm really having a hard time finding any other reason. If you Google
"JavaScript cryptography" you get that article as the first hit, not a crypto
library. If there's any truth to the NSA manipulating Google results (why
wouldn't they?) to "improve national security", say by preventing a
proliferation of JS crypto, then maybe he still didn't write it for the NSA,
but they sure did support it.

Does it really matter if he's part of them? The point is he is helping them.


> > Somebody, please, give me something to say against people that claim JS
> > client side crypto can just never work!
>
> I can't do that, because I wouldn't claim that it can "never" work. I
> could see
> it being useful if used correctly in the right context, which I mean as
> using it
> as an approach to security-in-depth. But if one is referring to moving all
> the
> crypto to the client side, I think that would generally be a huge mistake.
>

What does "all the crypto" even mean? Moving all of it to the client isn't
possible, because secure application delivery itself depends on crypto. But
that's a totally different claim, and one that goes for any sort of
application. Read that way it's a strawman at best.


> > The first is client-side hashing of a password, so that it's never sent
> in
> > the clear. This is so legitimate it nearly makes me drop my hat, but, the
> > author decides to use HMAC-SHA1 instead of SHA2 for reasons that are
> fully
> > beyond me. Perhaps just trying to make things less secure?
>
> I think it more likely had to do with the prevalence of SHA1.  When was
> this written anyway? The only date that I saw was "2008" in reference
> to the browser maturity, where it stated:
>

It's popular enough to be kept up-to-date. The 2008 was meant as a "way in
the past" kind of date. The Wayback Machine first dates it to September 2011
<https://web.archive.org/web/20110925120934/http://www.matasano.com/articles/javascript-cryptography/>,
ten years after SHA2 was standardized. (It may be older, too.)


>    Check back in 10 years when the majority of people aren't still running
>    browsers from 2008.
>
> (Of course that's still largely true today and will remain so until all
> those unsupported WinXP systems get replaced.)
>

Windows XP is an argument against JS crypto? Just deny those users access to
your website. Put up "I'm sorry, your safety cannot be guaranteed while using
this site because you use an outdated browser or operating system. Click
here to continue unsafely." If they still do it, it's not your frikkin'
problem. There's just no other way to solve it.


> So assume HMAC-SHA2 here if you like. I don't think that changes things
> much. But I think the reason for the HMAC was because you clearly want
> a keyed hash where you are hashing a nonce in the sort of
> challenge-response
> authN system that the author is describing.
>
> But if the goal is to build something like SRP, it would be much better to
> build that into the HTTP specification so that the browsers and web servers
> could support them directly, similar to how they do with HTTP Digest
> Authentication.
>

A bit of a distraction IMHO. "In some situations some things don't work";
I fully agree with that.


> I see all the time where something (most often images or a .js library) is
> using an http link from within an https page. Sure the browser gives you
> a warning, but how many times do people really click through those
> without trying to find out what is giving the warning?
>
> I think what they are saying here is in the case of crypto, the stakes
> are just a lot higher than your Gmail or FB password or whatever.
>

This is a choice made by the webdevs. HTTPS has value even in a mixed
environment. But obviously it wouldn't be a secure environment.


>
> > 2 - This is a conclusion without any basis so far (aside from being..
> > meaningless to a computer scientist. Hostile?)
>
> I disagree. They give these reasons and I believe that all of them are
> valid:
>
> * The prevalence of content-controlled code.
> * The malleability of the Javascript runtime.
> * The lack of systems programming primitives needed to implement crypto.
> * The crushing weight of the installed base of users.
>
> And they go into some detail for each of them. Of course, IMO, these are
> not just issues with crypto written in JS, they are issues with pretty much
> anything written in JS.
>

They're only occasionally valid, and therefore do not support the article's claim.


> > 3 - Then just look at what data was transferred. Does every crypto
> > application require checkable source? Is any SSL implementation
> "considered
> > harmful" because nobody is able to flawlessly read the code, no compilers
> > are trusted, etc?
>
> I think you are missing the point here. This has to do with these 2
> bullet
> items:
>
> * The prevalence of content-controlled code.
> * The malleability of the Javascript runtime.
>

> Your TLS implementations or your server-side code are generally compiled
> and much less malleable or subject to content controlled code.  I don't
> think
> they are at all arguing that everything else is a perfect world, but
> only the way
> JavaScript is used today with all the AJAX-based mash-ups, that is a much
> more difficult scenario.


You can recreate these issues in any other language; it's just harder to
code. The ease of coding something (in a way that may not be secure) is
not a strong argument for saying it's impossible to code something
securely. That would only hold if you got it wrong by accident, which is not
really the case IMHO.

> I responded somewhat to that above, but there is another issue here. I
> don't see the major issue here as MITM attacks, but rather as MITB
> (man-in-the-browser) attacks, and that most likely would be done through
> DOM-based cross-site scripting.  DOM-based XSS is prevalent, especially in
> AJAX code where people use (or misuse) frameworks like jQuery. It is
> also one of the more difficult classes of vulnerabilities to mitigate,
> but that's another story for another day and somewhat OT for a crypto-list.
>

Again, this is a coder's mistake that can be easily avoided. MITB attacks
are no different from man-in-the-OS attacks, and can therefore be discarded
from the discussion.


> Actually, I think that the insecurity *of* TLS implementations might be one
> of the stronger arguments *for* client-side crypto. But I think that would
> be safer as a browser plug-in than it would be in JavaScript because then
> things like DOM-based XSS are less of a concern.
>

Yes and no. Web application signing, like desktop application signing, would
be neat, and could be a plugin or a web standard. In general I'm against
plugins; it just means you're not really doing HTML anymore.


> I don't think that client-side crypto (in a browser) should stand on its
> own,
> but it might be good as a security-in-depth approach to ensuring
> confidentiality and authenticity.  A belt AND suspenders approach.
>

It depends on the threat model, but in general: yes!


> > The next chapter "WHAT'S HARD ABOUT DEPLOYING JAVASCRIPT CRYPTO CODE OVER
> > SSL/TLS?" claims all the page has to be SSL/TLS and that makes it hard.
> It's
> > not hard and you should already be doing it to have /any/ security. Not
> to
> > mention it's not true, only that interpreted as page contents has to be
> > SSL'ed (eg, images don't need to be transported over SSL).
>
> Well, the reason why this is hard is because often developers write
> and deploy the code but someone else with relatively little development
> experience later adds (and regularly changes) content.


If you can't deal with this, there's no stopping any software from becoming
insecure. Again, it's just not a JS problem.


> For example, the marketing department might be given a frame to display
> some PR story
> about recent sales figures and they link to some images without using
> https. But usually those are coming from the same origin so there's not
> any cross-origin browser policy to protect and then that does allow
> a MITM foothold because of all the people who ignore the browser's
> "mixed content" warnings. I regularly do security code reviews for
> a living and this is even common with developers. Mix in someone
> else with no security training at all and allow them to provide
> content and it's pretty much guaranteed to happen. Sure, you can
> configure your application to guarantee transport over only TLS,
> but that also is often overlooked. Not saying securing this is impossible,
> but it is difficult.
>
> Oh, and BTW, you are wrong about "images don't need to be transported
> over SSL". Because if those can be MITM'd, that can lead to Cross-Site
> Request Forgery (CSRF) attacks.
>

Perhaps SVG has some intense features, but I'm pretty sure that <img
src="www.hackerswebsite.com/uncertainimage.jpg"> cannot lead to anything
going wrong with the webpage. Aside from origin control (which helps), I
don't see data included in that way being treated as anything but data.

> There's also a lot of
> > FUD-like text that denies reality. Especially the assumption that SSL and
> > desktop programs are somehow more secure.
> >
> > So point 2.
>
> Not more secure. None of them are "secure". But I guess one could view
> this as trying to address the devil you know vs the devil you don't.
> At least now we know what we are facing.
>
> And while augmenting SSL and server-side security with JS crypto might
> seem like a good idea, knowing what I know about IT management, eventually
> the approach would become "we don't need crypto in the client AND in SSL;
> and SSL is costing us $X dollars per year for certificates and $Y per
> year in operational costs and $Z per year in requiring extra server CPU
> capacity, so let's just use the JS crypto and can SSL". Somewhere, some
> pointy haired boss will make that call. Most won't, but some will.
>

You do realize that this still has absolutely nothing to do with whether or
not JS crypto in the browser is, by definition, "considered harmful". Note
the sheer pretentiousness of the title: it says "it is considered harmful"
when it should be "I consider JS crypto harmful". They're pushing hard to put
it on a level with "goto considered harmful", but there's no meat in the
arguments.

Finding more specific cases in which, indeed, a system that includes JS
crypto fails, does not help at all.

Unless you can tell me that for some reason you agree with "JS Crypto
Considered Harmful" but not with "Crypto Considered Harmful", tell me. And
while you're at it, name a single piece of Silicon Valley software (Dropbox,
Evernote, Skype, (Android? it's not made there, but with the same attitude,)
etc.) that uses cryptography in a way you would not consider harmful.


>
> > (letterized for pointwise addressing)
> > "HOW ARE BROWSERS HOSTILE TO CRYPTOGRAPHY?
> > In a dispiriting variety of ways, among them:
> >
> > a - The prevalence of content-controlled code.
> > b - The malleability of the Javascript runtime.
> > c - The lack of systems programming primitives needed to implement
> crypto.
> > d - The crushing weight of the installed base of users.
> >
> > Each of these issues creates security gaps that are fatal to secure
> crypto.
> > Attackers will exploit them to defeat systems that should otherwise be
> > secure. There may be no way to address them without fixing browsers."
> >
> > a, c, d are, at first sight, all rubbish. b is a very genuine point
> however.
>
> Point 'a' is most definitely NOT rubbish. I find DOM-based XSS in just
> about
> every review that I've done. That could totally compromise your JS crypto.
>

Let's say you're writing a C program. You decide that it'd be a great idea
to dynamically link a library. That's not so unusual, people do it all the
time. But, you decide to link not a library on the system, but a library
that's hosted at "www.nsa.com/sharedlibraries/stdio.h" (URI dynamic linking
could be a new feature in GCC5). Is C cryptography now considered harmful?

Actually, dynamic linking on the system itself suffers the same problems
still. Did you ever link to OpenSSL? Very foolish, totally exploited.

Also "content-controlled" is a really poor way to express what is meant.
The content is not in control any more than it is in other languages. It's
just easier to include code. And that's a good thing.

> Sure, Content Security Policy headers someday might address this, or at
> least
> diminish it to a large degree, but we are very very far from that day.
>

No, they don't protect you from including bad code in your project.
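To make that distinction concrete: a CSP header can pin which origins are allowed to serve script at all, but it cannot vet the code an allowed origin actually serves. A minimal example (the CDN origin is illustrative):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted-cdn.example.com
```

This blocks inline scripts and scripts from any unlisted origin, which narrows the injection surface without vouching for the linked code itself.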


> > Chapter after that rehashes the idea that you can't trust the environment
> > unless you trust the whole environment, which is also the same
> everywhere.
> > (I also refer to trusting the compiler)
>
> No, because of the liberal use of mash-ups with JavaScript, I think this
> brings it to a whole new level. Yes, you have a issue of "trusting the
> compiler" for regular applications, but in this context, I would say that
> is more akin to "trusting your browser" (or more specifically, trusting
> your browser's JavaScript engine). With mash-ups (e.g., a mash up of
> Sales Force with Google Maps API) you introduce dependencies that are
> much more varied and the interaction between them is much more dynamic
> than your "compiler trust" case where you (as the builder) know at least
> which compiler, assembler, link editor, that you are trusting at build
> time.
>

It's a choice you don't have to make. Including foreign code, as is done
quite frequently, is immensely unsafe, and I doubt anyone would deny that.
But it would be unsafe in anything else, too. The dynamism makes an attack
much easier, but remove the dynamism and you're fine again.

It might be an idea to have "link hashes": when I place a URI, the data
pulled through it (the whole set of data) should have a certain hash. So
<script src="blabla.com/bla.js"></script> becomes
<script src="blabla.com/bla.js" hash="xxxxxx"></script>. But we don't have
these, so just control what you link to, OK?


> that JavaScript is much more malleable than even Java with reflection. When
> used, cryptographic checksums or digital signatures (e.g., signed jars)
> provide some modicum of assurance that at least we are using the library
> that we intended.  That can be combined with something like OWASP

> Dependency Check to make sure that your 3rd party libraries do not
> have unpatched known (as in, published in NVD) vulnerabilities.
>

Yep, this seems like a good idea. Signatures, too, could help. Thing is
that increasing security hasn't really been a priority for the W3C or the
WHATWG. Especially as the NSA would stop its indirect funding if they
did that (a joke, but wouldn't that be interesting?).


> (Unfortunately, OWASP Dependency check is currently limited to Java and
> .NET libraries). Also, you rarely, if ever, see server-side code pull in
> 3rd party frameworks dynamically (the use of 3rd party REST and SOAP
> based web services is usually as close as we get to this), but pulling
> in 3rd party JavaScript frameworks from somewhere other than your locally
> hosted application is actually all too common. While few would argue that
> this is good practice, it appears like it won't end anytime soon.
>

I agree. It horrifies me. For privacy, too, it's terrible: usually Referer
headers are sent, so the host knows what I'm browsing. Google especially
hosts many free items online (libraries, fonts and even DNS), and I don't
imagine Google wastes such good sources of profiling information.


> Also, in the JavaEE world one *can* (although it is rarely done) use a
> Java Security Manager with their application server, along with a custom
> security policy, to ensure that such attempts to play malleability games
> using Java reflection fail with a SecurityException. I believe, but cannot
> say for certain, that something like this also exists in the .NET world.
> However, I am not aware of any way to impose such a restriction in the
> JavaScript world. There might be a way, but I've never heard of it.
>

I just serve the libraries from my own server over HTTPS. That's the best
available practice atm AFAIK.


> > Next chapter is titled "WELL THEN, COULDN'T I WRITE A SIMPLE BROWSER
> > EXTENSION THAT WOULD ALLOW JAVASCRIPT TO VERIFY ITSELF?". And guess what,
> > the author agrees. You can indeed do this. If you're just doing it for
> > yourself or a single kind of crypto you could also make a plugin for
> that.
> > Which is what the WhatWG is doing with the HTML5 crypto extension. Then
> > claims crypto is to PGP as programming languages are to Lisp, which is
> > rubbish.
>
> The exact Lisp reference was:
>     Just as all programs evolve towards a point where they can read
>     email, and all languages contain a poorly-specified and buggy
>     implementation of Lisp, most crypto code is at heart an inferior
>     version of PGP.
>
> And yeah, I didn't think that was a very good analogy either. For one
> thing,
> other than Emacs, I can't think of any programs off the top of my head
> that both
> have evolved to read email and implement some version of Lisp. It would
>

It never fails to make me more interested in Lisp, which I guess is the
real point anyway.


> have been more believable had the author simply said that "most crypto code
> is inferior to PGP". Certainly most crypto implementations are not even
> remotely based on PGP. (If they were, they probably would be considerably
> better than they are.)
>

But they are based on things that PGP is based on. And diversity also
has value. http://openpgpjs.org/ is of great relevance here :)


> > The author then goes on to actually talk about random generators. Which
> are
> > not always required, but who cares, right?
>
> Not always, no. But frequently for generating keys, nonces, IVs, etc. So
> it shouldn't be dismissed entirely.
>

Yeah, that was just dishing out the hate..


> And again, MITB issues don't make this threat that remote. If crypto JS
> starts becoming common, you can bet that organized crime will start to
> target things like crypto keys in JS via various MITB attacks.
>

I believe they already do for things like blockchain.info. It's still a lot
more specific, which helps a wee bit. And it will drive up projects like
the Chrome Web Store.


> > Then tries to generalize his claim to remove JS specifically from the
> > equation, removing his last viable (but not definitive) arguments.
> >
> > Some hating on key management, which is justified but again bollocks wrt
> > the main argument. (not to mention it's a problem everywhere, and it can
> > be solved like everywhere)
>
> True it is a problem everywhere, but that doesn't make it any less of
> a problem here. We criticize it when there are key management issues
> on the server, so it is only fair that failure to address it on the client
> side is open to criticism. The major difference is that generally there
> are devops groups who address server-side hardening issues and audits are
> commonly run. Such attention to detail is unlikely to happen for the client
> side key management issues.
>
> That said, I don't see this as a show-stopper. One would think that TPM
> could
> be leveraged for key management, at least for those platforms where it is
> supported. All that would be needed would be some secure manner in which
> to interface with the TPM. Again, I think that would be better addressed
> by the browser itself or by a browser extension / plug-in than via
> JavaScript
> for reasons already given.
>

There's a difference in severity: a few users (who didn't have their
systems in order) versus all users if the server gets cracked. But I think
that passwords will fail before the key management does, or else the
passwords can complement the key management.

I'd use AES for encrypting the asymmetric keys, and store the ciphertext
server-side. I believe everyone does it that way. The bottleneck is the
password entered (and maybe remembered) on the user's system.


> > Some hate on people running old browsers, which has actually been solved
> by
> > background-auto-updating by now. (huzzah for all the added insecurity
> there)
>
> RIGHT... except for those millions of fools who are still running IE and
> WinXP and who can't update. Surely that's not a problem that is going to
> disappear overnight.
>

I investigated and was shocked at the number of people who still use XP.
It seems to be a good 15 to 20%.

Reminds me of the Linux distro that's made to look like XP. It didn't get
far, but it seems like a pretty good idea right now. Sadly, those <20%
wouldn't find out about this Linux XP, and they wouldn't do anything to get
an upgrade.

Maybe a sort of ILOVEYOU virus for XP would be more effective.


> Also, I'd bet that half of my acquaintances who come to me for help do NOT
> have auto-update enabled! Whether that's because they've gotten themselves
> infected with malware that disabled it or because they've been too annoyed
> at the sometimes untimely updates / reboots or they are just clueless, I
> don't know. Also more than half of them regularly login using a Windows
> administrator account and surf the web. So I think you are overstating
> the effectiveness of auto-updates a bit. It works pretty well when it
> is enabled, but all too frequently it is not enabled.
>

Everyone using Windows runs from the admin account. UAC was a big flop
because user-access-rights are not really a native concept for Windows
users. They should've gone the Android road, giving applications limited
rights instead of giving users limited rights.

OS auto-updates don't work well because they are, in fact, very annoying.
Especially the "reboot in 4 hours" prompt that can't be seen while someone
runs a full-screen application, and then suddenly interrupts a game or a
movie to reboot. Really insane.

Microsoft's failures aside, I was referring to browser auto-updates. Chrome
updates completely silently; you can't notice it updated without seeing
differences (UI changes) or checking the version number. Mozilla followed
suit. Now browsers are either from the pre-silent-auto-update era, or
they're up to date.

I checked whether that's actually true. netmarketshare.com tells us that in
the last month people have been using FF30 and FF29 about equally much.
Since FF30 was released 10 June, that makes sense. FF28 is not on the chart.
Chrome is tighter: just version 35. Then there's the miserable Internet
Explorer. Since they started trying to make a real browser they've been
losing the browser war less. They still really suck and ruin the industry,
though.

Deployment's messy. But you can just tell people "nope, upgrade first". The
old-tech slackers aren't the good customers anyway (are they?).

> The other relevant issue here is that I would speculate that there are
> far fewer skilled cryptographers who are proficient with JavaScript than
> there are ones proficient with C or C++ or Java, etc. That was not the
> point the author was specifically trying to make here, but I think that
> it is still pertinent.
>

Let's not talk about competence! A big problem nowadays is that so many
coders are barely competent. They make mistakes they shouldn't. It's not
even really their fault; coding is hard and most people are of normal
intelligence. With the extreme lack of capable coders, less capable coders
are employed. That causes a lot of grief: a lot more products, and most of
them bad products. Crypto quality is usually the first to suffer.


Either way, I think we agree on all points; we just use different framings.
I say "it's hard to do right, but it's not impossible" and then you say "but
it really is hard". I agree that JavaScript does not have much in the way
of security technology, that it is lacking. I even agree that JavaScript is
an easy language to cut your fingers on (dynamic environments and all). The
RNG problems have been mostly addressed; most others will be in time.

JavaScript cryptography is possible, there are use cases, and it is
*definitely not* "considered harmful" by default.

-Lodewijk