From crypto at senderek.ie Sat Aug 1 03:10:58 2015 From: crypto at senderek.ie (Ralf Senderek) Date: Sat, 1 Aug 2015 09:10:58 +0200 (CEST) Subject: [Cryptography] How to solve the hen-and-egg problem Message-ID: On Fri, 31 Jul 2015 23:53:27 Tom Mitchell writes: > On Fri, Jul 31, 2015 at 1:34 PM, Ben Laurie wrote: > On Thu, 30 Jul 2015 at 08:37 Ralf Senderek > wrote: > While static code analysers will work with C code, they might be less > valuable when it comes to reviewing the ksh scripts. These scripts > represent the logic of the message encryption scheme and a review > needs to focus on the security of the ideas they're based on. > > > Perhaps you should consider writing those scripts in a language that > lends itself to analysis? > > > How are the scripts being used? > Scripts that run with SUID/SGID permissions are difficult. > Many *nix disable the SUID/SGID permission bit for scripts because of > the security challenges. > http://stackoverflow.com/questions/18698976/suid-not-working-with-shell-script > [stackoverflow.com] > > If SUID/SGID is not an issue then never mind... No, there is no SUID/SGID set on any of the scripts. While the GUI is being installed, the name of the user is required and only this user will be put into /etc/sudoers to be able to run the main script "cbcontrol", which has (700 root root) permissions. This script calls all others. There are a number of advantages: 1) The masterkey can have root read-only permission when it is stored on the USB, so read access to the filesystem as the user would not reveal the masterkey to an attacker that gains access via the network. 2) Using the cbcontrol program by an attacker that has gained execute permission as the user would require his login password (asked for by sudo via openssh-askpass). 3) Anyone with the intention to subvert the installed Crypto Bone software would need execute permission as root, in which case the battle is already lost, if all-in-one mode is used. 4) Even then, in REAL mode, the Crypto Bone software is safe. The cbcontrol script will handle the commands that are generated by the GUI either by itself (if it is in all-in-one mode) or send them to the real, separate Crypto Bone if it is in REAL mode. To do the latter, the ssh private key needs to be used, which is also stored with (400, root root) permissions. The next step might be to use a mobile phone as a decryption oracle for the masterkey and the local key, so that both can be stored encrypted on the local computer. I hope that answers your question. --ralf From phill at hallambaker.com Sat Aug 1 13:38:07 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 1 Aug 2015 13:38:07 -0400 Subject: [Cryptography] Windows... Your choice but make it informed. In-Reply-To: References: <55B9105E.4060105@sonic.net> Message-ID: On Fri, Jul 31, 2015 at 10:11 PM, Tom Mitchell wrote: > On Wed, Jul 29, 2015 at 10:41 AM, Ray Dillinger wrote: > >> >> What Microsoft is up to these days... >> >> http://thenextweb.com/microsoft/2015/07/29/wind-nos/ >> > > > I am with you but it is more complex than just this. > That alone is troubling. > > More interesting... > The answer is unfolding and not 100% clear. > I think folk are not quite appreciating that what Microsoft is trying to do here is actually very hard to do, and that as far as the typical user is concerned, protecting their data for confidentiality is a lot less of a concern than the risk that they might lose their data.
None of the consumer products come with strong encryption turned on out of the box. So what Microsoft is offering here needs to be compared to the alternative of no encryption at all. It is a big improvement. When Vista was launched, the main upgrade was to security, which in turn meant a huge increase in workload for system admins. So to avoid the need for all that extra work, the system admins found it much easier to convince people that what they really wanted to do was run Windows XP. That said, I think Microsoft has to consider their position very carefully because they are now caught between a rock and a hard place. On the one hand they are going to have a huge blowback from the lazy system admins and users upset at losing all their data if they try to force people to use too much security. On the other they face a huge blowback from the privacy advocates if their solution is to back everything up to a trusted Microsoft cloud. And yes, there is a huge intersection between the two groups. I think I have an answer for them. The problem with Microsoft's cloud is that we are forced to trust it. But the Mesh I am working on provides the same set of capabilities without requiring the end users to trust it. Now my solution isn't for everyone; you would have to have enough skill to be able to print out the recovery codes on paper and store them somewhere safe. But offering it as an option would be a way to avoid the privacy onslaught facing them. From l at odewijk.nl Sat Aug 1 16:13:43 2015 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Sun, 2 Aug 2015 05:13:43 +0900 Subject: [Cryptography] Why Nasdaq Is Betting On Bitcoin's Blockchain In-Reply-To: <55BC22D2.1030901@iang.org> References: <37F20327-8F64-449A-BA3B-D860C633381B@lrw.com> <55BC22D2.1030901@iang.org> Message-ID: We have a distributed global ledger; please; ledge! Technically it's absolutely pie to track stock on the Bitcoin blockchain. If you can't roll it yourself, you can even use Prism and Colored Coins. When issuing stock you'd instead issue Colored Coins. For security, make people /also/ set up a "stock check" according to the following protocol: 1. The parties agree upon and commit to a transaction 2. The parties sign the "intention to transfer ownership" 3. The parties perform the Colored Coin transaction 4. The parties amend the "intention to transfer ownership" with the raw (hexadecimal) transaction as it is included in the blockchain. The advantage of this dual administration is that if something goes wrong with the colored coin system, you still have a paper trail by which to decide "true" ownership of the colored coins. If you do not use the protocol but instead only perform step 3 - that's still good enough. Bonus points if you create two documents, one saying "these coins will henceforth change ownership according to the Bitcoin system" and one saying "these coins will henceforth change ownership using the Dual Administration system", so you can state when you interrupt and when you restart the paper administration. These documents are signed first by the *coin owner, and then by the company/registration affiliate company after a time period if the *coin owner is then still the coin owner. This last step is important - over time a Bitcoin transaction grows in certainty. To emit currency to shareholders, or perform votes: 1.
Shareholder creates document requesting currency to be paid to a Bitcoin/Altcoin address (hell, make it an IBAN number for all I care) 2. Shareholder signs document with the same public/private key that accesses the Colored Coins (or other proof of ownership) 3. Shareholder submits signed document to company/registration affiliate Technically it's possible to emit continuously (daily, hourly, per-profit-unit, whatever) by sending to the "last known address" of the colored coin's owner. Privacy is preserved as a colored coin can be held by pseudonyms, and so can the payment addresses. So, technically and libertarian-free-society-with-contracts-wise it's all cookies and dough. The question is, who wants this? And, more limiting, what is the legal interpretation? From allenpmd at gmail.com Sat Aug 1 19:18:52 2015 From: allenpmd at gmail.com (Allen) Date: Sat, 1 Aug 2015 19:18:52 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? Message-ID: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> According to the Ed25519 paper, the (potentially long) input message is hashed twice (see http://ed25519.cr.yp.to/ed25519-20110926.pdf Section 4 page 12 steps 1 and 3). The webpage https://blog.mozilla.org/warner/2011/11/29/ed25519-keys/ has a nice diagram toward the bottom that illustrates this, and I confirmed it in the reference code on Supercop (see supercop-20141124/crypto_sign/ed25519/ref/sign.c, function calls crypto_hash_sha512(nonce, sm+32, mlen+32) and crypto_hash_sha512(hram, sm, mlen + 64)). My question is, for long messages, wouldn't it be more efficient and just as secure to hash the entire message just once, and then use the 64 byte hash as the input to the signing algorithm? In other words, the code would look like: crypto_hash_sha512(mhash, m, mlen); crypto_sign(output, mhash, 64, key); That would seem to me to be faster for mlen > approx 128 bytes without any loss of security. Am I missing something here? Is there a potential loss of security to using mhash as the signing input instead of the original message m? From iang at iang.org Sun Aug 2 00:27:12 2015 From: iang at iang.org (ianG) Date: Sun, 02 Aug 2015 05:27:12 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack Message-ID: <55BD9C20.5090205@iang.org> There's a group working on a new crypto protocol. I don't need to name them because it's a general issue, but we're talking about one of those "rough consensus and working code" rooms where dedicated engineers do what they most want to do - create new Internet systems. This new crypto protocol will take a hitherto totally open treasure trove of data and hide it. Not particularly well, but well enough to make the attacker work at it. The attacker will have to actually do something, instead of just hoovering. Doing something will be dangerous - because those packets could be spotted - so it will be reserved for those moments and targets where it's worthwhile. It's not as if the attacker cares that much about being spotted, but embarrassment is best avoided. So this could be kind of a big deal - we go from 100% open on this huge data set, down to 99% closed, over some time and some deployment curve. Now, let's assume the attacker is pissed at this. And takes its attitudinal inspiration from Hollywood, or other enlightened sources like the NYT on how to retaliate in cyberwar (OPM, anyone?) [0].
Which is to say, it decides to fight back. Game on. How to fight back seems easy to say: Stop the group from launching its protocol. How? It turns out that there is a really nice attack. If the group has a protocol in mind, then all the attacker has to do is: a) suggest a new alternate protocol. b) balance the group so that there is disagreement, roughly evenly balanced between the original and the challenger. Suggesting an alternate is really easy - as we know there are dozens of prototypes out there, just gotta pick one that's sufficiently different. In this case I can think of 3 others without trying, and 6 people on this group could design 1 in a month. Balancing the group is just a matter of phone calls and resources. Call in favours. So many people out there who would love to pop in and utter an opinion. So many friends of friends, willing to strut their stuff. Because of the rules of rough consensus, if a rough balance is preserved, then it stops all forward movement. This is a beautiful attack. If the original side gets disgusted and walks, the attacker can simply come up with a new challenger. If the original team quietens down, the challenger can quieten down too - it doesn't want to win, it wants to preserve the conflict. The attack can't even be called, because all the contributors are doing is uttering an opinion as they would if asked. The attack simply uses the time-tested rules which the project is convinced are the only way to do these things. The only defence I can see is to drop rough consensus. By offering rough consensus, it's almost a gilt-edged invitation to the attacker. The attacker isn't so stupid as to not use it. Can anyone suggest a way to get around this? I think this really puts a marker on the map - you simply can't do a security/crypto protocol under rough consensus in open committee, when there is an attacker out there willing to put in the resources to stop it. Thoughts? iang [0] you just can't make this stuff up... http://mobile.nytimes.com/2015/08/01/world/asia/us-decides-to-retaliate-against-chinas-hacking.html From peter at cryptojedi.org Sun Aug 2 03:42:42 2015 From: peter at cryptojedi.org (Peter Schwabe) Date: Sun, 2 Aug 2015 09:42:42 +0200 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> Message-ID: <20150802074242.GE17984@tyrion> Allen wrote: Dear Allen, > My question is, for long messages, wouldn't it be more efficient and just as > secure to hash the entire message just once, and then use the 64 byte hash > as the input to the signing algorithm? In other words, the code would look > like: > > crypto_hash_sha512(mhash, m, mlen); > crypto_sign(output, mhash, 64, key); > > That would seem to me to be faster for mlen > approx 128 bytes without any > loss of security. What you're losing is collision resilience. For a more detailed discussion please see our recent paper "EdDSA for more curves", page 5, paragraph "Security notes on prehashing": https://cryptojedi.org/peter/index.shtml#eddsa Best regards, Peter
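To make the trade-off in this exchange concrete, here is a minimal sketch of the two variants under discussion. It is written against libsodium's NaCl-style API rather than the SUPERCOP reference code quoted above - the library choice, the sample message, and the build setup are assumptions for illustration, not something taken from the thread. Variant A signs the message directly, so SHA-512 runs over the full message inside crypto_sign(); variant B is the proposal from Allen's mail: hash once with SHA-512 and sign only the 64-byte digest, which saves a pass over a long message but ties the scheme's collision resilience to the outer hash, as Peter's paper explains.

    /* build: cc prehash_sketch.c -lsodium   (assumes libsodium is installed;
     * the file name is just a placeholder) */
    #include <stdio.h>
    #include <string.h>
    #include <sodium.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;                      /* library could not initialise */

        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);       /* fresh Ed25519 keypair */

        const unsigned char m[] = "a long message would go here";
        unsigned long long mlen = sizeof m - 1;

        /* Variant A: sign the message itself (Ed25519 as specified; the
         * signing operation hashes the full message internally). */
        unsigned char sm_a[crypto_sign_BYTES + sizeof m];
        unsigned long long smlen_a;
        crypto_sign(sm_a, &smlen_a, m, mlen, sk);

        /* Variant B: prehash once, then sign only the 64-byte digest. */
        unsigned char mhash[crypto_hash_sha512_BYTES];
        crypto_hash_sha512(mhash, m, mlen);

        unsigned char sm_b[crypto_sign_BYTES + crypto_hash_sha512_BYTES];
        unsigned long long smlen_b;
        crypto_sign(sm_b, &smlen_b, mhash, sizeof mhash, sk);

        /* A verifier of variant B must recompute the same prehash and
         * compare it against the digest recovered from the signature. */
        unsigned char opened[sizeof sm_b];
        unsigned long long openedlen;
        if (crypto_sign_open(opened, &openedlen, sm_b, smlen_b, pk) != 0 ||
            openedlen != sizeof mhash ||
            memcmp(opened, mhash, sizeof mhash) != 0)
            return 1;

        printf("variant A signed message: %llu bytes\n", smlen_a);
        printf("variant B signed digest:  %llu bytes\n", smlen_b);
        return 0;
    }

Note that variant B is only the idea sketched in Allen's mail, not a standardised prehash mode, and per the "EdDSA for more curves" note it gives up the collision resilience that signing the message directly provides.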
From leichter at lrw.com Sun Aug 2 06:35:02 2015 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 2 Aug 2015 06:35:02 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: > On Aug 2, 2015, at 12:27 AM, ianG wrote: > [Block "rough consensus and working code" convergence on a crypto protocol by maintaining an alternative position indefinitely.] This is an issue that's broader than crypto protocols and broader than "rough consensus". It's a fundamental issue with group decision-making when group members believe that logical argument - which is infinitely sub-divisible - is the only basis for resolving arguments. In fact, it's a fundamental problem of rational decision-making - see "Buridan's ass" (https://en.wikipedia.org/wiki/Buridan%27s_ass). I saw similar processes occurring internally at DEC decades ago. You can doubtless find them throughout academia. And they can form without any external deliberate agency - though as you point out, they can be *encouraged* to form. My own solution: If two different approaches have each been successfully argued by two roughly equal teams for "a while", *neither is "better" than the other*. They are simply *different*. At that point, logical argument is beside the point - pick one at random. Making a choice has become more important than which choice you make. Often, there are multiple objective functions to satisfy, and you find that each side is arguing that, over all, they satisfy "more" of them. But then you can end up with a cyclic majority, in which there *is* no overall "better" choice - each can be dominated by another. Just choose at random. But ... it could be that it's not so much that there's a cyclic majority as that different factions simply weight the different objectives differently. It may not be obvious that this is happening; it may even be deliberately hidden, especially when the objectives favor one external group over another. Surfacing these differences may eliminate the (false) equivalence of the competing approaches; or it may simply move the argument to a new plane. But at least the argument on that plane is about real differences. If the situation truly is a Buridan's ass one, you may find that techies *still* aren't willing to cede a choice to a random choice. An argument I've made about elections may swing them. Imagine we held an election, and the results were extremely close. We do a recount - and the results are still close, but go the other way. We do *another* recount, and get yet another set of results. It's impossible to re-run the election, but we can't get convergence on the result. This leads to all kinds of fights, but the underlying basic claim is that the election *had* a true result - we just need to determine what it was. I claim that in such a situation we're "in the quantum domain": The election *didn't have a result*. It's in a mixed state between two equally probable results, and which one we see is indeterminate. Just flip a coin. -- Jerry From allenpmd at gmail.com Sun Aug 2 07:24:47 2015 From: allenpmd at gmail.com (Allen) Date: Sun, 2 Aug 2015 07:24:47 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519?
In-Reply-To: <20150802074242.GE17984@tyrion> References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> <20150802074242.GE17984@tyrion> Message-ID: <023d01d0cd15$d7553fc0$85ffbf40$@gmail.com> > What you're losing is collision resilience. For a more detailed discussion please see our recent paper "EdDSA for more curves", page 5, paragraph "Security notes on prehashing" Hi Peter, Thank you much for the reference (and for being part of the team that designed Ed25519). That paper is also listed on the Ed25519 website at http://ed25519.cr.yp.to/papers.html and I should have read it! :-) Thanks much, Allen From waywardgeek at gmail.com Sun Aug 2 09:17:44 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Sun, 2 Aug 2015 06:17:44 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: I think it is possible to defend against this attack, but it is difficult. An attacker will likely assume multiple fake identities, join the group multiple times, and amplify his attack. To defend against this, you want to use real identities, preferably backed up by getting to know people by voice in group voice meetings. The better you get to know the people you deal with, the harder it becomes for a shill to do real damage. Another defense is to call a guy out as a potential shill when you suspect it. If the attacker is keen on not being discovered, they'll stop being disruptive. On the other hand, this can backfire - calling a natural born a-hole a shill does not discourage his bad behavior in my experience :) Maybe I'm too paranoid, but I have felt in multiple situations that a security-related discussion might be under a rough-consensus attack by a shill. For example, when discussing the possibility of switching from SHA1 to SHA256 for BitTorrent, some guy got so obnoxious and irrational that it killed the discussion. An attacker who can break SHA1 at will can do nasty things to torrents. The sorry state of a lot of our FOSS security might be due to this attack. We probably should make effort to defend against it. In short, don't let anonymous a-holes disrupt security discussions. Security requires real people working together. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Sun Aug 2 09:41:54 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Sun, 2 Aug 2015 06:41:54 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: Re-reading your post, I see you're talking about a dedicated attacker who is a real person everyone already knows. This isn't some small change, like switching from SHA1 in BitTorrent to SHA256 that you might discuss anonymously. You're talking about securing data important enough for a government to plant a real person on a committee to disrupt it. In this case, I think you are right to drop rough consensus. Just go build it, and don't let anyone get in your way. Strong leadership is a good defense in this case. Bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephen.farrell at cs.tcd.ie Sun Aug 2 07:33:23 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sun, 02 Aug 2015 12:33:23 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: <55BE0003.60508@cs.tcd.ie> On 02/08/15 05:27, ianG wrote: > It turns out that there is a really nice attack. Also trying to keep away from specifics of any one protocol. In general you assume that the attacker (who I agree exists) is active as part of the process. There's no way to know the probability of that. I do know that people have the ability and propensity to disagree with one another for all sorts of reasons that are nothing to do with the posited attacker. Perhaps especially the kind of people who currently dominate discussions about new Internet protocols. And even more especially in fully open environments where anyone can try to participate. And since the new work represents change, and for some folks, significant change, it's entirely likely that genuine differences of opinion will exist even without any action from the attacker. There is also the fact that any rough consensus process has to be run by fallible humans. Not everyone is good at herding cats so that the cats agree they have arrived at rough consensus. So in addition to genuine technical disagreement one also has to take into account the chances of accidental mis-management. IMO, that probability is also quite high - not every engineer ends up being good at cat herding sadly;-) Lastly, given there are a whole bunch of proposals and bits of work being done in parallel, it's entirely to be expected that at least one of those gets stuck because of some process-stupidity. All of the above argues that we need to be very realistic and quite well informed to arrive at a realistic evaluation of whether or not there might be an active attack being attempted against a specific proposal. To move to slightly more specifics, you mention rough consensus so I assume you're talking mainly about the IETF, since the IETF is afaik the only set of folks that use "rough consensus" as a term of art. In the case of the many bits of good work that are being done to improve security and privacy in the IETF, I do think it's quite likely that some but not all people working for signals intelligence agencies, and/or companies who work with them, do disagree with some of that work. Some of that disagreement is openly expressed I'm sure and that's just fine - we can handle openly expressed technical disagreement fairly easily, if not perfectly. Since there are only a tiny number of direct employees of signals intelligence agencies who participate in the IETF, and those folks are generally not trying to game the system in obvious ways (I think I would notice if they were, 'cause yes I look out for it:-), I think we can ignore them here. There are however a lot of esp. large companies who work with/for signals intelligence agencies and who do participate in the IETF, so I'll focus on those since any sensible attack would be done via a player like that. In any such case, my experience is that perceived commercial advantage (which may be long term) is what causes such participants to try to game the system. And indeed working with signals intelligence agencies is presumably profitable, so there is the potential for this attack.
(One can argue that individuals within such enterprises may be used in an attack by leveraging their inflated egos etc, and that's true, but is indistinguishable from other personality related reasons to disagree so is covered above I think.) The remaining question then is whether or not people from commercial enterprises are, in addition to openly participating as expected, attempting to manipulate the open process to their own commercial advantage. And the answer is yes, of course they are, as always. But is that only because of the signals intelligence agencies? No it is not. For any of the relevant players, which includes basically all large companies, they have many more interests in play and it's not possible to disentangle those from the outside. (Or even from inside sometimes I bet:-) So it's impossible to tell what has motivated any particular bit of process gaming, and it's mostly silly to bother asking. That's just a part of operating in the big bad world, once you get beyond the playing- with-friends stage of any project you can't worry about the motivations of all participants. (You can decide if specific folks are worth worrying more about and pay more attention to technically examining their inputs, that's IMO fine, and I do that, but not based on current employer, rather based on a pattern of contributions.) Basically, we can describe what we consider good behaviour but we need to recognise that clever people will figure out ways to try to game any system for reasons we can't know, so worrying about all motivations is counter-productive, we need to examine visible actions and not worry about the unknowable. > The only defence I can see is to drop rough consensus. IMO that would not be a defensive move. That represents surrender. And not only of the rough consensus approach. To have any effect you would also have to surrender openness and decide who to allow into your secret cabal. That kind of cabal doesn't scale IMO. (The outputs of such cabals can be and often are useful inputs to the open rough consensus process that the IETF uses, so secret cabals are not all bad:-) With your surrender proposal, the putative attacker wins as there would no longer be an organised way to produce new protocols that are likely to fairly quickly see widespread deployment. In that case instead of heading towards 99% deployment in some reasonable timeframe, it's much more likely that 2% is a potential high-water mark. There may be exceptions but those will be exceptions. (And yes, we're both pulling numbers from the air.) I do agree that over time things like the IETF will evolve or perish and the concepts of rough consensus and openness ought be part of that evolution. (For example, the IETF IMO needs to figure out how to consider the views of users who are not engineers, but I don't know how to do that today.) So one can sensibly work towards a world where things like the IETF evolve more to one's liking, or one can sensibly work to try create something else to fill the niche currently filled by e.g. the IETF. But simply suggesting dropping rough consensus is nonsense, any credible attempt to avoid the posited attack would need to also say how you'd effectively replace the whole of the IETF if you're not proposing some feasible evolution within the IETF. > By offering > rough consensus, it's almost a gilt-edged invitation to the attacker. > The attacker isn't so stupid as to not use it. > > Can anyone suggest a way to get around this? Yes. 
Where you think it sufficiently important you should participate in the rough consensus process with sound technical argument about the technical proposals made. That is IMO far more effective as a way to counter the attacker, compared to surrender. And if you get really exercised about all this and are a masochist you can participate in trying to evolve things like IETF processes. That's not for everyone by any means, and is a recipe for learning to deal with frustration, but is the kind of worthy-but-tedious stuff that does actually need to be done to improve the rough consensus open approach to improving the Internet. And for things where you can't participate (time available being finite), to the extent you can, refuse to use the outputs of the process that you don't like or doubt, and tell other folks (the technically sounds reasons) why and recommend they do likewise. (I.e. sure, go ahead and make noise about the actual bad features of outputs from the process.) That can produce change in those who do participate in the parts of the work that you can't get to helping with or that you don't care that much about. > I think this really puts a > marker on the map - you simply can't do a security/crypto protocol under > rough consensus in open committee, when there is an attacker out there > willing to put in the resources to stop it. > > Thoughts? Your argument is ill-informed and incomplete and your conclusion is erroneous. (That's my thought anyway:-) Cheers, S. From tom at ritter.vg Sun Aug 2 11:20:26 2015 From: tom at ritter.vg (Tom Ritter) Date: Sun, 2 Aug 2015 08:20:26 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: On 1 August 2015 at 21:27, ianG wrote: > Can anyone suggest a way to get around this? I think this really puts a > marker on the map - you simply can't do a security/crypto protocol under > rough consensus in open committee, when there is an attacker out there > willing to put in the resources to stop it. > > Thoughts? My opinion is that the rough consensus can be counter-balanced by "running code". If the original group moves forward, deploys, gets early adopters, shows it's working, and perhaps wonder-of-wonders gets it picked up by one of the big behemoths that could jump-start deployment (maybe Google, or Akamai, or CloudFlare) - well they can document as an informational document at least. And you can interoperate with the folks who have deployed. -tom From ron at flownet.com Sun Aug 2 12:28:06 2015 From: ron at flownet.com (Ron Garret) Date: Sun, 2 Aug 2015 09:28:06 -0700 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <20150802074242.GE17984@tyrion> References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> <20150802074242.GE17984@tyrion> Message-ID: On Aug 2, 2015, at 12:42 AM, Peter Schwabe wrote: > Allen wrote: > > Dear Allen, > >> My question is, for long messages, wouldn't it be more efficient and just as >> secure to hash the entire message just once, and then use the 64 byte hash >> as the input to the signing algorithm? In other words, the code would look >> like: >> >> crypto_hash_sha512(mhash, m, mlen); >> crypto_sign(output, mhash, 64, key); >> >> The would seem to me to be faster for mlen > approx 128 bytes without any >> loss of security. > > What you're losing is collision resilience. 
I think it's important to note here that the collision resilience you are losing is resilience against collisions in the underlying hash H. Ed25519 *is* a hash of M and the secret key, and it obviously cannot be resilient against collisions in *that* hash (i.e. collisions in ed25519 itself). So if you hash first, you now have two collision risks whereas before you only had one. But the output of Ed25519 is 256 bits, so if H is, say, SHA512 the incremental risks of collisions in H over the inherent risk of collisions in Ed25519 are (almost certainly) pretty darn low. Almost certainly the least of your worries in any real-world application. If you're really worried about collisions, you can probably produce an overall more collision-resistant signature scheme by concatenating the signatures of two different hashes of M. (But I am not an expert so don't do this until someone who actually knows what they're doing has analyzed it.) rg From danmcd at kebe.com Sun Aug 2 11:56:05 2015 From: danmcd at kebe.com (Dan McDonald) Date: Sun, 2 Aug 2015 11:56:05 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> On 1 August 2015 at 21:27, ianG wrote: > Can anyone suggest a way to get around this? I think this really puts a > marker on the map - you simply can't do a security/crypto protocol under > rough consensus in open committee, when there is an attacker out there > willing to put in the resources to stop it. > > Thoughts? It's a problem, like terrorism is a real problem. ALSO like terrorism, the mere threat of such a problem can be used by people with strong NIH infections to push their own terrible alternatives simply by waving the threat of the "rough consensus attacker" around. This has happened in Real Life before, and it will happen again. It doesn't diminish the actual problem of a rough-consensus attack, but the concept is ripe for hiding other abuses. (Were I a real tinfoil-hat-wearer, I might argue a rough consensus attacker would use NIH fanatics as a second prong.) Dan From l at odewijk.nl Sun Aug 2 12:35:27 2015 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Mon, 3 Aug 2015 01:35:27 +0900 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: Beneficial dictator. It's not uncommon for a single person to be able to very justly parse arguments and make a choice. A good "beneficial dictator" will pull rank only with regard to the "roadmap" and as a Deus Ex Machina decision-maker. Another advantage is how a person can be the central repository for the team's spirit and goals. And it is convenient for public communication, too. To ensure adoption - get commitment to include it in software early on. Get commitment of engineers to continue working on it, unless the dictator calls it quits or a time period expires (after which new commitment should be arranged). The original post specifically stated no democratic solution could be made. A board could be better, but then one would have to find a board-full of beneficial-dictator quality people.
Also, a board's internal communication is strained far more than a single person's communication. So long as a single person has sufficient expertise, there is little pain in having only a single person. We can take Bitcoin as an example; Gavin Andresen has a lot of coins himself (incentive to make it work) and he worked on the software for a very long time (extensive expertise). He's not the best (or maybe even good) at everything, but he's humble enough to admit that. If he had more power it would probably be good for Bitcoin. He would probably make some unpopular choices, and some of them would even be bad choices. But at least the choices would be made, and progress would be much faster. Linus is another example of this being helpful. Without him calling the canonical versions, things would probably fracture into a thousand incoherent variants. If there's any truth to Steve Jobs' biography, it seems he was quite the beneficial dictator too. So, that'd be my fix for this threat. From ccontavalli at gmail.com Sun Aug 2 12:54:00 2015 From: ccontavalli at gmail.com (Carlo Contavalli) Date: Sun, 2 Aug 2015 09:54:00 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? Message-ID: Hello, I haven't seen many conversations or much noise about SRP (http://srp.stanford.edu/) on this mailing list. By a quick reading, and by peeking at the implementation, it provides strong mutual authentication of both client and server through a "shared secret", which is stored as a one-way hash on the server, and never exchanged on the wire. Eg, if used with ssh, checking the fingerprint when connecting would be significantly less relevant: the fact that the server can establish an encrypted session at all proves that the server knows a hash of the shared secret. It has drawbacks - but certainly sounds like an improvement compared to existing protocols? Are there similar technologies used for the web, and if not, why not? I see two separate needs that x509 certificates and TLS typically try to address: 1) establishing the identity of a site you connect to. 2) maintaining privacy and preventing mangling of the data exchanged. If I think about my typical workflow, ... x509 and certificates would still play a role the first time I end up on a site. Eg, the first time I go to uber.com, or the first time I register to use my health plan benefits online, I would check that the certificate matches who the site claims to be. But from then on... once registered, and once I have a password, SRP would allow me to establish that the remote end is who they claim to be based on their ability to prove that they know a hash of my password; the certificate would just be an additional protection? Seems like a significant improvement over what we have today? Reducing exposure, and the need to trust certification authorities? For example: a rogue certificate authority creates a false uber / false health plan management site. Or a rogue certificate is installed on my laptop. I try to log in after this fake has been created, ... I would not be able to log in? Or notice immediately? Or if they proxy my connection acting as a MITM, they would not be able to decrypt my data? Opinions? Carlo From allenpmd at gmail.com Sun Aug 2 13:07:27 2015 From: allenpmd at gmail.com (Allen) Date: Sun, 2 Aug 2015 13:07:27 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519?
Message-ID: <032301d0cd45$b61659e0$22430da0$@gmail.com> > So if you hash first, you now have two collision risks whereas before you only had one. ... Almost certainly the least of your worries in any real-world application. I see it basically the same way. Performing two full hashes of the message seems to buy only a very small marginal security benefit (maybe something on the order of 1 additional bit of security in the overall scheme?). Even if I thought the additional computational/probabilistic security were needed, I could probably find a way to use those CPU cycles that would yield a better payoff (using a stronger curve or a more complicated hash function perhaps?). I'm comfortable signing the hash(message) rather than the message itself. From bascule at gmail.com Sun Aug 2 13:50:11 2015 From: bascule at gmail.com (Tony Arcieri) Date: Sun, 2 Aug 2015 10:50:11 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: On Sat, Aug 1, 2015 at 9:27 PM, ianG wrote: > There's a group working on a new crypto protocol. I don't need to name > them because it's a general issue, but we're talking about one of those > "rough consensus and working code" rooms where dedicated engineers do what > they most want to do - create new Internet systems. > > This new crypto protocol will take a hitherto totally open treasure trove > of data and hide it. Not particularly well but well enough to make the > attacker work at it. The attacker will have to actually do something, > instead of just hoovering. > Ok, so I see through your thinly veiled wording to the WG in question ;) > It turns out that there is a really nice attack. If the group has a > protocol in mind, then all the attacker has to do is: > > a) suggest a new alternate protocol. > b) balance the group so that there is disagreement, roughly evenly > balanced between the original and the challenger. For what it's worth, I got frustrated with this particular group and stopped participating entirely... -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Sun Aug 2 14:16:46 2015 From: iang at iang.org (ianG) Date: Sun, 02 Aug 2015 19:16:46 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE0003.60508@cs.tcd.ie> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> Message-ID: <55BE5E8E.4010907@iang.org> On 2/08/2015 12:33 pm, Stephen Farrell wrote: > > On 02/08/15 05:27, ianG wrote: >> It turns out that there is a really nice attack. > > Also trying to keep away from specifics of any one protocol. > > In general you assume that the attacker (who I agree exists) is active > as part of the process. There's no way to know the probability of > that. I do know that people have the ability and propensity to disagree > with one another for all sorts of reasons that are nothing to do with > the posited attacker. Perhaps especially the kind of people who > currently dominate discussions about new Internet protocols. And even > more especially in fully open environments where anyone can try to > participate. And since the new work represents change, and for some > folks, significant change, it's entirely likely that genuine > differences of opinion will exist even without any action from the > attacker. 
> > There is also the fact that any rough consensus process has to be > run by fallible humans. Not everyone is good at herding cats so that > the cats agree they have arrived at rough consensus. So in addition > to genuine technical disagreement one also has to take into account > the chances of accidental mis-management. IMO, that probability is > also quite high - not every engineer ends up being good at cat > herding sadly;-) So, to just add something to the above point about committees being difficult without any help, it is of course possible for a committee to act the same way even in the absence of an attacker. This is what makes the attack so neat - as long as the attacker just acts as disorganised and catty as a normal engineer, there is no observable difference. The attack is invisible, and the hand that guides is also invisible, but not the invisible hand of economic progress. Learning that these two things exist - that we alone can stall the process by being bad at committee, and that others can use this badness against us - is a really tough lesson. However, I have discovered a rather elegant way that at least leads the horse (ass?) to water. Way back in WWII, the USA's OSS was engaged in the process of sabotaging the German production machine. To assist its agents it created a manual [0] which was distributed out to the field. This manual has since been declassified as it was presumably only of historical interest. As it was a comprehensive look at how to interfere with the enemy, it also exhorted the common factory worker to do his or her part. And it created a set of tactics to slow everything down. This is chapter 11 of the manual, which has such gems as "engage in long correspondence" :) It turns out that Chapters 11 and 12 [1] are a rather poignant reflection of what can go wrong in committee. So when I found myself as part of such a committee back in the late 2000s, I copied the manual in and I euphemistically named it "the manual for our committee" [2]. Then, every time there was a new committee elected, I would pop up and say "and don't forget to read the manual on how you do board meetings" or some such. New members would then diligently read it, and quietly chuckle and figure out I was having a joke or something. But the seed is planted. Not only can we stuff up with histrionics ("Cry and sob hysterically at every occasion") and bad behaviour, this can be used against us by an enemy. iang [0] http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/OSS_Simple_Sabotage_Manual.pdf [1] http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html [2] The board of CAcert, a community certification authority that changes its board around every year. From huitema at huitema.net Sun Aug 2 14:28:43 2015 From: huitema at huitema.net (Christian Huitema) Date: Sun, 2 Aug 2015 11:28:43 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: <00e701d0cd51$10edf660$32c9e320$@huitema.net> On Sunday, August 2, 2015 8:20 AM, Tom Ritter > To: ianG > ... > On 1 August 2015 at 21:27, ianG wrote: > > Can anyone suggest a way to get around this? I think this really puts a > > marker on the map - you simply can't do a security/crypto protocol under > > rough consensus in open committee, when there is an attacker out there > > willing to put in the resources to stop it. > > > > Thoughts? > > My opinion is that the rough consensus can be counter-balanced by > "running code".
If the original group moves forward, deploys, gets > early adopters, shows it's working, and perhaps wonder-of-wonders gets > it picked up by one of the big behemoths that could jump-start > deployment (maybe Google, or Akamai, or CloudFlare) - well they can > document as an informational document at least. And you can > interoperate with the folks who have deployed. That's what worked in previous similar scenarios, e.g. SNMP vs. CMIP, or OSPF vs. ISIS. Note that I cannot help associating IanG's message with the current state of the TCP crypto working group. And in that particular case, I don't believe there is much malice involved. The group started with a goal to "develop opportunistic encryption for all TCP connections," but quickly ran into two kinds of troubles. The first one was a need to traverse the hodge-podge of firewalls, inspectors and accelerators that we call "middle boxes." Turns out that if you want to do that, you have to leave TCP pretty much alone, and that negates most of the compelling advantages of the original "TCP Crypto" design. If you cannot secure the TCP protocol itself, you are bound to just insert a security filter on top of TCP, which brings the second issue. We already know how to run a security filter on top of TCP. That's what SSL/TLS do. At that point, the obvious question is whether the original goal of "opportunistic encryption" is best achieved by just inserting TLS as a filter, or by developing a light weight filter that would be easier to insert -- where light weight means light weight negotiation, as the actual cost of encryption is pretty much constant in any proposal. There are bunches of arguments one way and the other, but they are all about implementation issues, not about anything drastic. For example, we don't want to encrypt twice, so any light weight filter would have to be disabled if the application actually runs TLS. We also want applications to be able to evolve from "opportunistic" to "strong" encryption, which means adding authentication. And that means either evolving a parallel authentication framework in top of the light-weight filter, or just switching to TLS. And then there is a question whether having two parallel technologies means twice more resiliency, or two times as many bugs. So we are seeing two camps, not out of malice but because people weight different arguments differently. And yes, the only way out is to start deployments and see what happens. By the way, there are actually three camps in the debate. The third camp is QUIC, the protocol designed at Google to subsume both TCP, TLS and the bottom layer of HTTP 2.0. -- Christian Huitema From stephen.farrell at cs.tcd.ie Sun Aug 2 15:09:20 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sun, 02 Aug 2015 20:09:20 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE5E8E.4010907@iang.org> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> Message-ID: <55BE6AE0.6050601@cs.tcd.ie> On 02/08/15 19:16, ianG wrote: > On 2/08/2015 12:33 pm, Stephen Farrell wrote: >> >> On 02/08/15 05:27, ianG wrote: >>> It turns out that there is a really nice attack. >> >> Also trying to keep away from specifics of any one protocol. >> >> In general you assume that the attacker (who I agree exists) is active >> as part of the process. There's no way to know the probability of >> that. 
I do know that people have the ability and propensity to disagree >> with one another for all sorts of reasons that are nothing to do with >> the posited attacker. Perhaps especially the kind of people who >> currently dominate discussions about new Internet protocols. And even >> more especially in fully open environments where anyone can try to >> participate. And since the new work represents change, and for some >> folks, significant change, it's entirely likely that genuine >> differences of opinion will exist even without any action from the >> attacker. >> >> There is also the fact that any rough consensus process has to be >> run by fallible humans. Not everyone is good at herding cats so that >> the cats agree they have arrived at rough consensus. So in addition >> to genuine technical disagreement one also has to take into account >> the chances of accidental mis-management. IMO, that probability is >> also quite high - not every engineer ends up being good at cat >> herding sadly;-) > > > So, to just add something to the above point about committees Sigh. You are wrong to think of IETF working groups as "committees." There are similarities but there are huge differences. I realise that using that term serves your rhetoric as it conjures up images of closed rooms full of staid 19th century gentlemen but that is just not a relevant way to think about an IETF working group. Many of the O(100) IETF working group lists have hundreds of subscribers, and dozens of active mailing list participants. And all people (with an email address) are welcome to participate at any time - the main requirement being that one's contributions need to be technically sound or they will be ignored. Those working groups have no real membership and no real voting (there being no enumerable electorate) so many of the concepts associated with committees (including by you below when you say "elected") are not applicable. Yes, the reality is not perfect but the real imperfect dynamics are just not those described by the (here pejorative) term committee. And before one argues to discard a significant part of such a process, especially on the basis of an invisible hand on the scales, I do think one has a duty to at least accurately describe what one is arguing to discard. And you have not done that. That is another part of why I think your argument here is ill-informed. > being > difficult without any help, it is of course possible for a committee to > act the same way even in the absence of an attacker. This is what makes > the attack so neat - as long as the attacker just acts as disorganised > and catty as a normal engineer, there is no observable difference. The > attack is invisible, and the hand that guides is also invisible, but not > the invisible hand of economic progress. So let me see, you argue that there's an attack that can always be invisible, and that therefore we should surrender to that attacker. I don't find that at all convincing. (Separately, I never said economic "progress" - I said interests which is just not the same:-) Cheers, S. > > Learning that these two things exist - that we alone can stall the > process by being bad at committee, and that others can use this badness > against us - is a really tough lesson. However, I have discovered a > rather elegant way that at least leads the horse (ass?) to water. > > > > Way back in WWII, the USA's OSS was engaged in the process of sabotaging > the German production machine.
To assist its agents it created a manual > [0] which was distributed out to the field. This manual has since been > declassified as it was presumably only of historical interest. > > As it was a comprehensive look at how to interfere with the enemy, it > also exhorted the common factory worker to do his or her part. And it > created a set of tactics to slow everything down. This is chapter 11 of > the manual, which has such gems as "engage in long correspondence" :) > > It turns out that Chapters 11 and 12 [1] are a rather poignant > reflection of what can go wrong in committee. So when I found myself as > part of such a committee back in late 2000s, I copied the manual in and > I euphemistically named it "the manual for our committee" [2]. > > Then, every time there was a new committee elected, I would pop up and > say "and don't forget to read the manual on how you do board meetings" > or some such. New members would then diligently read it, and quietly > chuckle and figure out I was having a joke or something. > > But the seed is planted. Not only can we stuff up with histrionics > ("Cry and sob hysterically at every occasion") and bad behaviour, this > can be used against us by an enemy. > > > > iang > > > > [0] > http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/OSS_Simple_Sabotage_Manual.pdf > > [1] > http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html > [2] > The board of CAcert, a community certification authority that changes > its board around every year. > > From watsonbladd at gmail.com Sun Aug 2 15:55:36 2015 From: watsonbladd at gmail.com (Watson Ladd) Date: Sun, 2 Aug 2015 12:55:36 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE0003.60508@cs.tcd.ie> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> Message-ID: On Sun, Aug 2, 2015 at 4:33 AM, Stephen Farrell wrote: > > On 02/08/15 05:27, ianG wrote: >> It turns out that there is a really nice attack. > > Also trying to keep away from specifics of any one protocol. That way, no one can actually argue about what's going on, as they have no idea what sources you are examining and how you are drawing the conclusions you are drawing. Unless we talk about specifics, we can't actually come to grips with what is, as opposed to what we think is. > > In general you assume that the attacker (who I agree exists) is active > as part of the process. There's no way to know the probability of > that. I do know that people have the ability and propensity to disagree > with one another for all sorts of reasons that are nothing to do with > the posited attacker. Perhaps especially the kind of people who > currently dominate discussions about new Internet protocols. And even > more especially in fully open environments where anyone can try to > participate. And since the new work represents change, and for some > folks, significant change, it's entirely likely that genuine > differences of opinion will exist even without any action from the > attacker. Yes, it's true that some people will not consider the costs of any change to deploy. But that's not the situation we're talking about. Rather its when you have 2 proposals, one with running code, and another with no running code, both very similar properties, and yet we can't pick the one that works. Not reacting to known defects until it is too late is a distinct failure mode. > > There is also the fact that any rough consensus process has to be > run by fallible humans. 
Not everyone is good at herding cats so that > the cats agree they have arrived at rough consensus. So in addition > to genuine technical disagreement one also has to take into account > the chances of accidental mis-management. IMO, that probability is > also quite high - not every engineer ends up being good at cat > herding sadly;-) > > Lastly, given there are a whole bunch of proposals and bits of work > being done in parallel, it's entirely to be expected that at least > one of those gets stuck because of some process-stupidity. > > All of the above argues that we need be very realistic and quite > well informed to arrive at a realistic evaluation of whether or not > they might be an active attack being attempted against a specific > proposal. We know that the NSA spent millions of dollars on influencing standards. We know some of these activities involved NIST and ISO. Why wouldn't they also target IETF? We also know that the TLS WG repeatedly ignored email messages concerning holes TLS that were later exploited, as well as papers and documents outlining these problems for years. The process needs to stand up to subversion. > > To move to slightly more specifics, you mention rough consensus so > I assume you're talking mainly about the IETF, since the IETF is > afaik the only set of folks that use "rough consensus" as a term of > art. > > In the case of the many bits of good work that are being done to improve > security and privacy in the IETF, I do think it's quite likely > that some but not all people working for signals intelligence agencies, > and/or companies who work with them, do disagree with some of that > work. Some of that disagreement is openly expressed I'm sure and > that's just fine - we can handle openly expressed technical > disagreement fairly easily, if not perfectly. > > Since there are only a tiny number of direct employees of signals > intelligence agencies who participate in the IETF, and those folks > are generally not trying to game the system in obvious ways, (I > think I would notice if they were, 'cause yes I look out for it:-), > I think we can ignore them here. Three are however a lot of esp. > large companies who work with/for signals intelligence agencies > and who do participate in the IETF, so I'll focus on those since > any sensible attack would be done via a player like that. Are you capable of determining backdoors in protocols yourself? No. Does the IETF process catch crypto vulnerabilities in protocols? No. So why are you confident that you can find disruption of the process by intelligence agencies? I agree it might seem more visible, but consider that the complexity of X509 lead to holes, and X509 was pushed by governments heavily over simpler options. Was this part of the thinking? (The NSA also shapes how grants are paid out in the US to discourage some kinds of research: this is openly discussed on their website) > > In any such case, my experience is that perceived commercial advantage > (which may be long term) is what causes such participants to try > to game the system. And indeed working with signals intelligence > agencies is presumably profitable, so there is the potential for > this attack. (One can argue that individuals within such enterprises > may be used in an attack by leveraging their inflated egos etc, > and that's true, but is indistinguishable from other personality > related reasons to disagree so is covered above I think.) John Kelsey had no reason to believe the NSA was pulling anything over on him. 
But ultimately he ended up defending the inclusion of Dual_EC_DRBG, despite having questioned it internally. Consider that as the prototypical example of an attack. > > The remaining question then is whether or not people from commercial > enterprises are, in addition to openly participating as expected, > attempting to manipulate the open process to their own commercial > advantage. And the answer is yes, of course they are, as always. But > is that only because of the signals intelligence agencies? No it is > not. For any of the relevant players, which includes basically all > large companies, they have many more interests in play and it's not > possible to disentangle those from the outside. (Or even from inside > sometimes I bet:-) > > So it's impossible to tell what has motivated any particular bit of > process gaming, and it's mostly silly to bother asking. That's just a > part of operating in the big bad world, once you get beyond the playing- > with-friends stage of any project you can't worry about the motivations > of all participants. (You can decide if specific folks are worth > worrying more about and pay more attention to technically examining > their inputs, that's IMO fine, and I do that, but not based on current > employer, rather based on a pattern of contributions.) > > Basically, we can describe what we consider good behaviour but we need > to recognise that clever people will figure out ways to try to game any > system for reasons we can't know, so worrying about all motivations is > counter-productive, we need to examine visible actions and not worry > about the unknowable. But we know that some IETF protocols have had better track records on security than others, and that many changes > >> I think this really puts a >> marker on the map - you simply can't do a security/crypto protocol under >> rough consensus in open committee, when there is an attacker out there >> willing to put in the resources to stop it. >> >> Thoughts? > > Your argument is ill-informed and incomplete and your conclusion is > erroneous. (That's my thought anyway:-) Can you point to a correctly designed protocol done by rough consensus? It's clear most successful protocols have actually been designed by small teams, and adopted through consensus. Asking a committee to design something is a proverbially bad idea. > > Cheers, > S. > > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography -- "Man is born free, but everywhere he is in chains". --Rousseau. From stephen.farrell at cs.tcd.ie Sun Aug 2 16:58:15 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sun, 02 Aug 2015 21:58:15 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> Message-ID: <55BE8467.1060703@cs.tcd.ie> Hiya, On 02/08/15 20:55, Watson Ladd wrote: > On Sun, Aug 2, 2015 at 4:33 AM, Stephen Farrell > wrote: >> >> On 02/08/15 05:27, ianG wrote: >>> It turns out that there is a really nice attack. >> >> Also trying to keep away from specifics of any one protocol. > > That way, no one can actually argue about what's going on, If you're implying Ian or I wanted to obfuscate something that's nonsense. He chose to try to generalise, and I'm fine with that. The alternative would be to repeat arguments that are currently already being (pointlessly IMO) repeated on the relevant IETF list. 
> as they > have no idea what sources you are examining and how you are drawing > the conclusions you are drawing. Unless we talk about specifics, we > can't actually come to grips with what is, as opposed to what we think > is. > >> >> In general you assume that the attacker (who I agree exists) is active >> as part of the process. There's no way to know the probability of >> that. I do know that people have the ability and propensity to disagree >> with one another for all sorts of reasons that are nothing to do with >> the posited attacker. Perhaps especially the kind of people who >> currently dominate discussions about new Internet protocols. And even >> more especially in fully open environments where anyone can try to >> participate. And since the new work represents change, and for some >> folks, significant change, it's entirely likely that genuine >> differences of opinion will exist even without any action from the >> attacker. > > Yes, it's true that some people will not consider the costs of any > change to deploy. But that's not the situation we're talking about. > Rather its when you have 2 proposals, one with running code, and > another with no running code, both very similar properties, and yet we > can't pick the one that works. Not reacting to known defects until it > is too late is a distinct failure mode. > >> >> There is also the fact that any rough consensus process has to be >> run by fallible humans. Not everyone is good at herding cats so that >> the cats agree they have arrived at rough consensus. So in addition >> to genuine technical disagreement one also has to take into account >> the chances of accidental mis-management. IMO, that probability is >> also quite high - not every engineer ends up being good at cat >> herding sadly;-) >> >> Lastly, given there are a whole bunch of proposals and bits of work >> being done in parallel, it's entirely to be expected that at least >> one of those gets stuck because of some process-stupidity. >> >> All of the above argues that we need be very realistic and quite >> well informed to arrive at a realistic evaluation of whether or not >> they might be an active attack being attempted against a specific >> proposal. > > We know that the NSA spent millions of dollars on influencing > standards. S/spent/wasted/ but yes. (And I don't mean wasted in terms of did/didn't get what they want, I mean in terms of it being a really really stupid way to mis-use money.) > We know some of these activities involved NIST and ISO. Why > wouldn't they also target IETF? I'd be surprised if some of that money wasn't mis-spent on trying to muck up IETF work. And I ack'd that already. My point is that Ian's supposed defence is surrender. I am not trying to deny that there may be an attack. The point is though that we will never know if any specific action is part of such an attack and we therefore have to react via our normal processes that aim to counter that and other kinds of gaming. We do have to be more alert/vigilant for some aspects of what is proposed but mostly we just need to run the processes well. (There are I'm sure some improvements that can be suggested too, but that's again not at all the same as Ian's surrender proposal.) > We also know that the TLS WG > repeatedly ignored email messages concerning holes TLS that were later > exploited, as well as papers and documents outlining these problems > for years. The process needs to stand up to subversion. I don't agree with the above characterisation. 
While some of the history of TLS hasn't been great, I doubt that's down to this kind of attack. >> To move to slightly more specifics, you mention rough consensus so >> I assume you're talking mainly about the IETF, since the IETF is >> afaik the only set of folks that use "rough consensus" as a term of >> art. >> >> In the case of the many bits of good work that are being done to improve >> security and privacy in the IETF, I do think it's quite likely >> that some but not all people working for signals intelligence agencies, >> and/or companies who work with them, do disagree with some of that >> work. Some of that disagreement is openly expressed I'm sure and >> that's just fine - we can handle openly expressed technical >> disagreement fairly easily, if not perfectly. >> >> Since there are only a tiny number of direct employees of signals >> intelligence agencies who participate in the IETF, and those folks >> are generally not trying to game the system in obvious ways, (I >> think I would notice if they were, 'cause yes I look out for it:-), >> I think we can ignore them here. Three are however a lot of esp. >> large companies who work with/for signals intelligence agencies >> and who do participate in the IETF, so I'll focus on those since >> any sensible attack would be done via a player like that. > > Are you capable of determining backdoors in protocols yourself? No. I've no idea what you mean (that could be relevant). > Does the IETF process catch crypto vulnerabilities in protocols? No. Bad question and a wrong answer anyway. The question is bad because it's people (or maybe programs written by people) who discover vulnerabilities. Whether they chose to feed that information into the IETF process in a usefully timely manner is a different question. As is how well or badly the IETF process handles such input. And even though it's a bad question, I think the example of DKG figuring out issues with 0-RTT in the TLS1.3 proposals is a case that comes close to providing a "yes, stuff works sometimes" answer to your bad question. (Not that I'm yet very happy with the results so far on that score:-) > So why are you confident that you can find disruption of the process > by intelligence agencies? If you think I said I was, you mis-read what I wrote or I wrote badly. My main point is that we ought treat this as another kind of gaming the system and ensure that we handle it well, just as we have to with other more purely commercially motivated attempts to game the system. > I agree it might seem more visible, but > consider that the complexity of X509 lead to holes, and X509 was > pushed by governments heavily over simpler options. Was this part of > the thinking? (The NSA also shapes how grants are paid out in the US > to discourage some kinds of research: this is openly discussed on > their website) I doubt it. X.509 was part of X.500, all of which was similarly baroque, as was X.400 at the time. And that all started back in the mid-1980's too when using strong crypto was hard to impossible in most applications. Seems pretty unlikely to me that X.509-complexity was part of any such attack. >> In any such case, my experience is that perceived commercial advantage >> (which may be long term) is what causes such participants to try >> to game the system. And indeed working with signals intelligence >> agencies is presumably profitable, so there is the potential for >> this attack. 
(One can argue that individuals within such enterprises >> may be used in an attack by leveraging their inflated egos etc, >> and that's true, but is indistinguishable from other personality >> related reasons to disagree so is covered above I think.) > > John Kelsey had no reason to believe the NSA was pulling anything over > on him. But ultimately he ended up defending the inclusion of > Dual_EC_DRBG, despite having questioned it internally. Consider that > as the prototypical example of an attack. The dual-ec fiasco isn't a good model for a similar attack on a piece of IETF work IMO. The setup there was much more vulnerable to capture by just a few parties for many reasons. That problem affects the IETF much less - it's still an issue but far less of an issue so long as we have enough capable folks participating. I do agree that other standards development organisations can be very vulnerable to that kind of capture though. As are industry consortia and small-team projects. The scale of the IETF is a PITA in many ways, but for this aspect it helps. >> The remaining question then is whether or not people from commercial >> enterprises are, in addition to openly participating as expected, >> attempting to manipulate the open process to their own commercial >> advantage. And the answer is yes, of course they are, as always. But >> is that only because of the signals intelligence agencies? No it is >> not. For any of the relevant players, which includes basically all >> large companies, they have many more interests in play and it's not >> possible to disentangle those from the outside. (Or even from inside >> sometimes I bet:-) >> >> So it's impossible to tell what has motivated any particular bit of >> process gaming, and it's mostly silly to bother asking. That's just a >> part of operating in the big bad world, once you get beyond the playing- >> with-friends stage of any project you can't worry about the motivations >> of all participants. (You can decide if specific folks are worth >> worrying more about and pay more attention to technically examining >> their inputs, that's IMO fine, and I do that, but not based on current >> employer, rather based on a pattern of contributions.) >> >> Basically, we can describe what we consider good behaviour but we need >> to recognise that clever people will figure out ways to try to game any >> system for reasons we can't know, so worrying about all motivations is >> counter-productive, we need to examine visible actions and not worry >> about the unknowable. > > But we know that some IETF protocols have had better track records on > security than others, and that many changes > >> >>> I think this really puts a >>> marker on the map - you simply can't do a security/crypto protocol under >>> rough consensus in open committee, when there is an attacker out there >>> willing to put in the resources to stop it. >>> >>> Thoughts? >> >> Your argument is ill-informed and incomplete and your conclusion is >> erroneous. (That's my thought anyway:-) > > Can you point to a correctly designed protocol done by rough > consensus? My argument doesn't require one and I even acknowledged that starting from the output of a small team is often a good way to end up with better IETF output. The IETF isn't great at starting from a blank sheet of paper, but is often good at improving various aspects of small-team output. > It's clear most successful protocols have actually been > designed by small teams, and adopted through consensus. 
Asking a > committee to design something is a proverbially bad idea. Wrt "committee" see my earlier mail to the list. S. > >> >> Cheers, >> S. >> >> >> _______________________________________________ >> The cryptography mailing list >> cryptography at metzdowd.com >> http://www.metzdowd.com/mailman/listinfo/cryptography > > > From iang at iang.org Sun Aug 2 17:17:54 2015 From: iang at iang.org (ianG) Date: Sun, 02 Aug 2015 22:17:54 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE6AE0.6050601@cs.tcd.ie> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> Message-ID: <55BE8902.2060900@iang.org> So, just to forestall any thoughts in a particular direction. 1. It is fruitless to name a person who might be a shill. The reason is quite logical - the attacker is better at this game than you are, and will use your attempt to name a shill as a way to create discord, and will (eg) also use the same noise to name YOU as a shill. Or worse. In case you're wondering, this is known art, I'm not just talking out my posterior. tl;dr don't name a shill, you'll lose. Attacker is better at it. 2. Naming a WG is also amusing but distracting. This is the security area. The attacker exists. He spends millions of dollars on this, he has been caught with his finger in the cookie jar before (nod to Watson on these points), and he's said in revealed docs he's going to do it. We all know that. So, it's a systemic problem. It might be happening today in a group, but actually it's more likely a honed process across 10 or more groups. What's the systemic response? 3. This only applies to security when there is a known attacker who's decided to stop this particular protocol from interfering with his actions. That's a fairly narrow slice of WGs. Probably less than 10 (speculation). I.e., I'm not arguing to dispose of the entirety of the IETF. Not today at least :) On 2/08/2015 20:09 pm, Stephen Farrell wrote: > And before one argues to discard a significant part of such a process, > especially on the basis of an invisible hand on the scales, I do think > one has a duty to at least accurately describe what one is arguing to > discard. And you have not done that. So, assumptions: 1. The attacker exists. 2. The attacker has approximately infinite resources and is prepared to spend them. 3. The attacker can call on a large network of people, including ones who might not agree with the call, and ones who don't spot the motives. 4. The attacker cares not to be spotted, but not that much. You're not going to sue him. 5. The attacker has decided that deployment of protocol X on wide-spread basis is to be stopped. (Somehow.) Then the attack. As described, attacker eases the WG into rough anti-consensus, a balance between two opposing forces by (i) proposing an alternate protocol, and (ii) stacking the group so there is roughly enough opposition. The defence *I proposed* was to drop rough consensus. I stopped there. Stephen pointed out that any replacement of rough consensus with a directional method ("one czar" or AD or ...) would then shift the burden of the attack to another place. I.e., could very will just work in the attacker's favour. A very good point. Jerry described the coin toss. This "addresses" Stephen's dual-attack at some level. What it does is actually give a 50% chance of the good protocol, and a 50% chance of the challenger. 
So now we can refine our attack by saying, the challenger should be also a non-optimal protocol. We've now got a 50% chance of killing it by putting in a non-working protocol, a familiar scenario to everyone who's been engaged in these efforts, sadly. Now I'll propose another way, just thought of it: Split the protocols. Group A proceeds, so does group B. Then both are standardised. Now, the market works both over. There is now a betamax story to get through as the market gets to have a second call on the rough consensus. If you believe in rough consensus that much, let the market vote ;-) Engineers of course will be horrified. "We can do better!" But actually, maybe we can't. Betamax resolved more quickly in the marketplace than many standards groups took to come to rough consensus and produce their standards. Maybe the question here is, where is the pain? And perhaps a bit of user pain is the price we pay? (None of these points are entirely new!) > That is another part of why I think your argument here is ill-informed. This is all by way of a thought experiment. I set some parameters. Everyone's free to knock it down, and/or change the parameters to be more interesting (someone has already proposed an entirely new set of parameters in private email). Where it gets "interesting" is when we inform a particular situation in reality. That's of course a crapshoot. But we don't know how close the thought experiment gets to reality unless we try. > (Separately, I never said economic "progress" - I said interests which > is just not the same:-) (Right. In this case, I reckon the interests are directly opposed. Attacker's mission is pretty clear.) iang From iang at iang.org Sun Aug 2 17:20:09 2015 From: iang at iang.org (ianG) Date: Sun, 02 Aug 2015 22:20:09 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: <55BE8989.7090108@iang.org> On 2/08/2015 16:20 pm, Tom Ritter wrote: > On 1 August 2015 at 21:27, ianG wrote: >> Can anyone suggest a way to get around this? I think this really puts a >> marker on the map - you simply can't do a security/crypto protocol under >> rough consensus in open committee, when there is an attacker out there >> willing to put in the resources to stop it. >> >> Thoughts? > > My opinion is that the rough consensus can be counter-balanced by > "running code". If the original group moves forward, deploys, gets > early adopters, shows it's working, and perhaps wonder-of-wonders gets > it picked up by one of the big behemoths that could jump-start > deployment (maybe Google, or Akamai, or CloudFlare) - well they can > document as an informational document at least. And you can > interoperate with the folks who have deployed. ftr, I read this post earlier, and it inspired me in my mind to copy it and think I'd thought of it myself... iang From stephen.farrell at cs.tcd.ie Sun Aug 2 17:36:15 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sun, 02 Aug 2015 22:36:15 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE8902.2060900@iang.org> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> Message-ID: <55BE8D4F.3080805@cs.tcd.ie> On 02/08/15 22:17, ianG wrote: > So, just to forestall any thoughts in a particular direction. > > 1. It is fruitless to name a person who might be a shill. 
The reason > is quite logical - the attacker is better at this game than you are, and > will use your attempt to name a shill as a way to create discord, and > will (eg) also use the same noise to name YOU as a shill. Or worse. In > case you're wondering, this is known art, I'm not just talking out my > posterior. > > tl;dr don't name a shill, you'll lose. Attacker is better at it. Strongly agree. > > 2. Naming a WG is also amusing but distracting. This is the security > area. The attacker exists. He spends millions of dollars on this, he > has been caught with his finger in the cookie jar before (nod to Watson > on these points), and he's said in revealed docs he's going to do it. We > all know that. Also agree. > > So, it's a systemic problem. It might be happening today in a group, > but actually it's more likely a honed process across 10 or more groups. > What's the systemic response? > > 3. This only applies to security when there is a known attacker who's > decided to stop this particular protocol from interfering with his > actions. That's a fairly narrow slice of WGs. Probably less than 10 > (speculation). I disagree there. I think the attacker is probably more interested in there being protocols for which turning on any security is hard. That could be attempted by making some specific security protocol hard to deploy, (*) but equally by making e.g. a protocol that requires that every node have the ability to add/subtract/change PDUs. That way it's hard to add any e2e security features, no matter how well designed those are. So I think it'd be as likely that lots of non-security-area WGs would be targets. The latter might also be easier to influence, as many participants could be commercially motivated to not want better security and privacy as that has a cost. > > I.e., I'm not arguing to dispose of the entirety of the IETF. Not today > at least :) I'm afraid you did suggest just that. The rough consensus thing and the open-ness thing are inextricably intertwined and necessary for the IETF. Take away one or both and you're no longer dealing with the IETF. So while I'm interested in feasible ways to improve IETF process, I'm not interested in surrender, but I said that already I guess:-) Cheers, S. (*) I think I'm on record as saying that the IETF has in the past failed in developing security protocols that were too hard to deploy. It could be that this attack was a part of the cause of that. But my take is that perfectionism and inexperience with scale on the part of security folks was a bigger factor. In any case I think we're improving in that respect, but have a ways to go. > > > On 2/08/2015 20:09 pm, Stephen Farrell wrote: >> And before one argues to discard a significant part of such a process, >> especially on the basis of an invisible hand on the scales, I do think >> one has a duty to at least accurately describe what one is arguing to >> discard. And you have not done that. > > > > So, assumptions: > > 1. The attacker exists. > 2. The attacker has approximately infinite resources and is prepared to > spend them. > 3. The attacker can call on a large network of people, including ones > who might not agree with the call, and ones who don't spot the motives. > 4. The attacker cares not to be spotted, but not that much. You're not > going to sue him. > 5. The attacker has decided that deployment of protocol X on > wide-spread basis is to be stopped. (Somehow.) > > Then the attack. 
> > As described, attacker eases the WG into rough anti-consensus, a balance > between two opposing forces by > (i) proposing an alternate protocol, and > (ii) stacking the group so there is roughly enough opposition. > > The defence *I proposed* was to drop rough consensus. I stopped there. > > Stephen pointed out that any replacement of rough consensus with a > directional method ("one czar" or AD or ...) would then shift the burden > of the attack to another place. I.e., could very will just work in the > attacker's favour. A very good point. > > Jerry described the coin toss. This "addresses" Stephen's dual-attack > at some level. What it does is actually give a 50% chance of the good > protocol, and a 50% chance of the challenger. So now we can refine our > attack by saying, the challenger should be also a non-optimal protocol. > We've now got a 50% chance of killing it by putting in a non-working > protocol, a familiar scenario to everyone who's been engaged in these > efforts, sadly. > > Now I'll propose another way, just thought of it: > > Split the protocols. Group A proceeds, so does group B. Then both are > standardised. Now, the market works both over. There is now a betamax > story to get through as the market gets to have a second call on the > rough consensus. > > If you believe in rough consensus that much, let the market vote ;-) > > Engineers of course will be horrified. "We can do better!" But > actually, maybe we can't. Betamax resolved more quickly in the > marketplace than many standards groups took to come to rough consensus > and produce their standards. > > Maybe the question here is, where is the pain? And perhaps a bit of > user pain is the price we pay? > > (None of these points are entirely new!) > > > >> That is another part of why I think your argument here is ill-informed. > > > This is all by way of a thought experiment. I set some parameters. > Everyone's free to knock it down, and/or change the parameters to be > more interesting (someone has already proposed an entirely new set of > parameters in private email). > > Where it gets "interesting" is when we inform a particular situation in > reality. That's of course a crapshoot. > > But we don't know how close the thought experiment gets to reality > unless we try. > > > >> (Separately, I never said economic "progress" - I said interests which >> is just not the same:-) > > > (Right. In this case, I reckon the interests are directly opposed. > Attacker's mission is pretty clear.) > > > > iang > > From cryptography at dukhovni.org Mon Aug 3 00:35:00 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Mon, 3 Aug 2015 04:35:00 +0000 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> <20150802074242.GE17984@tyrion> Message-ID: <20150803043459.GL19228@mournblade.imrryr.org> On Sun, Aug 02, 2015 at 09:28:06AM -0700, Ron Garret wrote: > > What you're losing is collision resilience. > > I think it's important to note here that the collision resilience you are > losing is resilience against collisions in the underlying hash H. Ed25519 > *is* a hash of M and the secret key, and it obviously cannot be resilient > against collisions in *that* hash (i.e. collisions in ed25519 itself). > So if you hash first, you now have two collision risks whereas before you > only had one. 
But the output of Ed25519 is 256 bits, so if H is, say, > SHA512 the incremental risks of collisions in H over the inherent risk of > collisions in Ed25519 are (almost certainly) pretty darn low. Almost > certainly the least of your worries in any real-world application. > > If you're really worried about collisions, you can probably produce an > overall more collision-resistent signature scheme by concatenating the > signatures of two different hashes of M. (But I am not an expert so don't > do this until someone who actually knows what they're doing has analyzed > it.) This analysis is too naive. The risk is internal collisions in the hash function, which might enable extension attacks. The Ed25519 construct is resistant against internal collisions and extension attacks, while SHA-2 is not. Now of course internal collisions on the full SHA-2 are far from feasible at present, but not depending on unexpected progress on that front is reasonable defense in depth. -- Viktor. From ryacko at gmail.com Mon Aug 3 00:57:49 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Sun, 2 Aug 2015 21:57:49 -0700 Subject: [Cryptography] 420,000 devices connected to internet hacked Message-ID: http://www.spiegel.de/international/world/hacker-measures-the-internet-illegally-with-carna-botnet-a-890413.html Well, it's an old article, but as things move on, it seems like people are seriously planning on creating an internet of things. A billion new devices are connected to the internet each year. On the plus side, the next internet census can be conducted much faster now that we have so much more vulnerable devices. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at flownet.com Mon Aug 3 02:04:56 2015 From: ron at flownet.com (Ron Garret) Date: Sun, 2 Aug 2015 23:04:56 -0700 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <032301d0cd45$b61659e0$22430da0$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> Message-ID: On Aug 2, 2015, at 10:07 AM, Allen wrote: >> So if you hash first, you now have two collision risks whereas before you > only had one. ... Almost certainly the least of your worries in any > real-world application. > > I see it basically the same way. Performing two full hashes of the message > seems to buy only a very small marginal security benefit (maybe something on > the order of 1 additional bit of security in the overall scheme?). Even if > I thought the additional computational/probabilistic security were needed, I > could probably find a way to use those CPU cycles that would yield a better > payoff (using a stronger curve or a more complicated hash function > perhaps?). I'm comfortable signing the hash(message) rather than the > message itself. This is probably obvious, but I thought it might be worth stating explicitly for the benefit of lurkers: it’s important that the hash you sign be at least 256 bits. 512 is probably better just to give yourself a little more margin. If you sign a hash narrower than 256 bits then you really do lose. (And, as long as I’m stating the obvious, these numbers are for Ed25519. If you are using a generalized EdDSA signature scheme you should sign a hash that is at least as wide as the signature you are producing. Making it wider is probably not a bad idea.) -------------- next part -------------- A non-text attachment was scrubbed... 
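For anyone implementing along at home, here is a minimal sketch of the two conventions being compared -- handing the signature scheme the whole message versus handing it a 512-bit pre-hash -- assuming Python with the PyNaCl binding and SHA-512 from the standard library. The library choice and all names below are illustrative only; nothing in this thread prescribes them.

    # Sketch only: contrasts signing the full message with signing a
    # SHA-512 pre-hash of it.  PyNaCl is assumed purely for illustration.
    import hashlib
    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()
    verify_key = signing_key.verify_key

    message = b"example message of arbitrary length" * 1000

    # Convention 1: hand Ed25519 the message itself; the scheme hashes it
    # internally together with material derived from the secret key.
    sig_full = signing_key.sign(message).signature

    # Convention 2: hand Ed25519 only H(M), a 512-bit digest, which is the
    # shortcut whose collision resilience is being debated in this thread.
    digest = hashlib.sha512(message).digest()
    sig_prehash = signing_key.sign(digest).signature

    # The verifier must apply the same convention as the signer.
    verify_key.verify(message, sig_full)
    verify_key.verify(hashlib.sha512(message).digest(), sig_prehash)

Either way, signer and verifier have to agree up front on which byte string is "the message", which makes the pre-hash a protocol decision rather than a private optimisation.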
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From fungi at yuggoth.org Mon Aug 3 08:27:00 2015 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 3 Aug 2015 12:27:00 +0000 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: <20150803122700.GA2731@yuggoth.org> On 2015-08-03 01:35:27 +0900 (+0900), Lodewijk andré de la porte wrote: > Beneficial dictator. I think the term you're looking for is "benevolent dictator" (at least that's how it's typically phrased in free software communities). > It's not uncommon for a single person to be able to very justly parse > arguments and make a choice. A good "beneficial dictator" will pull rank > only with regards to "roadmap" and as a Deus Ex Machina decision-maker. [...] Agreed, at least this is the mechanism I've seen work out most often. I'm involved in collaborative development with a free software community who embrace (perhaps sometimes even enshrine) consensus/distributed decision-making. Often this works to our benefit, but there are somewhat frequent cases where a particular group fails to reach any clear agreement and for this we have leaders elected by the community to make those decisions (team leads within subgroups, and a separate body of technical leaders within the larger collective community). It doesn't _always_ help because there can be a pressure within the collective for a leader to not alienate any one set of opinion holders, and so a default non-decision outcome can still happen. In those cases, "start implementing all relevant proposals and see which turns out to be better/easier" is a useful fall-back position. This has the benefit that the race to a solution will most often be won by those with either the simpler solution or the most development support (either of which are great proxies for identifying which choice was actually the superior one). This thread has reminded me that I should make groups aware on controversial decisions when they appear to cross the line into that realm where the decision-making process has become more important than the decision being made. -- Jeremy Stanley From ron at flownet.com Mon Aug 3 13:01:23 2015 From: ron at flownet.com (Ron Garret) Date: Mon, 3 Aug 2015 10:01:23 -0700 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <20150803043459.GL19228@mournblade.imrryr.org> References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> <20150802074242.GE17984@tyrion> <20150803043459.GL19228@mournblade.imrryr.org> Message-ID: <470A3146-3843-4FE3-88BA-CC2F844A2D28@flownet.com> On Aug 2, 2015, at 9:35 PM, Viktor Dukhovni wrote: > On Sun, Aug 02, 2015 at 09:28:06AM -0700, Ron Garret wrote: > >>> What you're losing is collision resilience. >> >> I think it's important to note here that the collision resilience you are >> losing is resilience against collisions in the underlying hash H. Ed25519 >> *is* a hash of M and the secret key, and it obviously cannot be resilient >> against collisions in *that* hash (i.e. collisions in ed25519 itself). >> So if you hash first, you now have two collision risks whereas before you >> only had one. But the output of Ed25519 is 256 bits, so if H is, say, >> SHA512 the incremental risks of collisions in H over the inherent risk of >> collisions in Ed25519 are (almost certainly) pretty darn low. 
Almost >> certainly the least of your worries in any real-world application. >> >> If you're really worried about collisions, you can probably produce an >> overall more collision-resistent signature scheme by concatenating the >> signatures of two different hashes of M. (But I am not an expert so don't >> do this until someone who actually knows what they're doing has analyzed >> it.) > > This analysis is too naive. The risk is internal collisions in > the hash function, which might enable extension attacks. The > Ed25519 construct is resistant against internal collisions and > extension attacks, while SHA-2 is not. I don’t see how Ed25519 is resistant against length extension attacks. It is true that collisions in H do not produce collisions in Ed25519 because Ed25519 applies H twice to two different inputs. But it seems to me that a collision in Ed25519 itself could be length-extended if that collision resulted from two collisions in H, because both applications of H put M at the end. > Now of course internal collisions on the full SHA-2 are far from > feasible at present, but not depending on unexpected progress on > that front is reasonable defense in depth. If you are really worried about future collisions in SHA-512 you can sign an HMAC instead of a simple hash. (In fact, if I’m right and Ed25519 really is vulnerable to length-extension attacks on two collisions in H, then signing an HMAC might actually be (very slightly) more secure than signing the message directly.) rg From cryptography at dukhovni.org Mon Aug 3 14:26:24 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Mon, 3 Aug 2015 18:26:24 +0000 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <032301d0cd45$b61659e0$22430da0$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> Message-ID: <20150803182624.GW19228@mournblade.imrryr.org> On Sun, Aug 02, 2015 at 01:07:27PM -0400, Allen wrote: > > So if you hash first, you now have two collision risks whereas before you > only had one. ... Almost certainly the least of your worries in any > real-world application. > > I see it basically the same way. Performing two full hashes of the message > seems to buy only a very small marginal security benefit (maybe something on > the order of 1 additional bit of security in the overall scheme?). Even if > I thought the additional computational/probabilistic security were needed, I > could probably find a way to use those CPU cycles that would yield a better > payoff (using a stronger curve or a more complicated hash function > perhaps?). I'm comfortable signing the hash(message) rather than the > message itself. So long as the full hash function remains resistant to internal collisions, the extra care is not required. The Ed25519 proposal however survives failures in internal collision resistance. It is a more conservative design. You might conjecture it to be too conservative, but that's no excuse for arguing that there's no added robustness from defending against as yet impractical attacks. -- Viktor. From allenpmd at gmail.com Mon Aug 3 14:40:51 2015 From: allenpmd at gmail.com (Allen) Date: Mon, 3 Aug 2015 14:40:51 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? 
In-Reply-To: <470A3146-3843-4FE3-88BA-CC2F844A2D28@flownet.com> References: <01a701d0ccb0$6e2540f0$4a6fc2d0$@gmail.com> <20150802074242.GE17984@tyrion> <20150803043459.GL19228@mournblade.imrryr.org> <470A3146-3843-4FE3-88BA-CC2F844A2D28@flownet.com> Message-ID: <050401d0ce1b$eca8d1f0$c5fa75d0$@gmail.com> > If you are really worried about future collisions in SHA-512 you can sign an HMAC instead of a simple hash. I think for my application I'm going to end up signing a short input that consists of the concatenation of (the 512 bit hash of the message || the length of the message || a few small values that in my application tie the message to its context). In theory, the extra values aren't necessary, but it is a low cost way to harden the algorithm slightly and counter the potential perception that I took a shortcut in implementing the Ed25519 algorithm. From leichter at lrw.com Mon Aug 3 15:19:09 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 3 Aug 2015 15:19:09 -0400 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: <7D94E679-63AB-4562-BA99-23D9DD11BAAB@lrw.com> > By a quick reading, and by peeking at the implementation, it provides > strong mutual authentication of both client and server through a > "shared secret", which is stored as a one way hash on the server, and > never exchanged on the wire. ...Has drawbacks - but certainly sounds like an improvement compared to > existing protocols? ... Are there / why are not similar technologies used for web? There's a history of issues involving patents with SRP and similar protocols. (The underlying EKE patents were owned by Lucent, which didn't seem to want to make them broadly available. SRP was allegedly designed to avoid the EKE patents, but there were enough doubts about whether it did to keep people away.) The EKE patents have recently expired, so perhaps its time to go look at this again. -- Jerry From allenpmd at gmail.com Mon Aug 3 15:21:50 2015 From: allenpmd at gmail.com (Allen) Date: Mon, 3 Aug 2015 15:21:50 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <20150803182624.GW19228@mournblade.imrryr.org> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> Message-ID: <050601d0ce21$a60fa470$f22eed50$@gmail.com> > > I see it basically the same way. Performing two full hashes of the > > message seems to buy only a very small marginal security benefit > > (maybe something on the order of 1 additional bit of security in the > > overall scheme?). Even if I thought the additional > > computational/probabilistic security were needed, I could probably > > find a way to use those CPU cycles that would yield a better payoff > > (using a stronger curve or a more complicated hash function perhaps?). > > I'm comfortable signing the hash(message) rather than the message itself. > So long as the full hash function remains resistant to internal collisions, the extra care is not required. > The Ed25519 proposal however survives failures in internal collision resistance. It is a more conservative design. > You might conjecture it to be too conservative, but that's no excuse for arguing that there's no added robustness > from defending against as yet impractical attacks. Who claimed there is "no added robustness"? It certainly wasn't me. 
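(For concreteness, the short input described a couple of messages back -- the hash of the message, its length, and a few context values -- could be assembled along the lines of the sketch below. The field order, the 8-byte length encoding and the particular context fields are illustrative assumptions, not the actual format, which this thread does not spell out.)

    # Sketch of the "hash || length || context" signing input described
    # up-thread.  All encoding choices here are illustrative assumptions.
    import hashlib
    import struct

    def signing_input(message, sender_id, sequence):
        h = hashlib.sha512(message).digest()             # 512-bit hash of the message
        length = struct.pack(">Q", len(message))         # message length, 8 bytes big-endian
        context = sender_id + struct.pack(">Q", sequence)  # values tying the message to its context
        return h + length + context

    # The resulting short byte string is what would be handed to the
    # Ed25519 signing primitive in place of the full message.
    to_sign = signing_input(b"the actual message", b"example-sender-id", 42)
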
I specifically said there was a very small marginal benefit, but that I thought it was not the best use of the resources required. It would also be more conservative to use five different 1024-bit hash functions in parallel and to sign messages twelve times using RSA, DSA, ECDSA and EdDSA with various curves and key lengths. But it's not our job to be as conservative as possible without considering the costs and benefits. You are correct that I believe the double-hashing in Ed25519 design is overly conservative for many application. I also think that in some if not many cases the additional CPU cycles required to hash the full message twice could be put to other uses that would give a better security payoff. From leichter at lrw.com Mon Aug 3 15:45:29 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 3 Aug 2015 15:45:29 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BE8902.2060900@iang.org> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> Message-ID: <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> > Jerry described the coin toss. This "addresses" Stephen's dual-attack at some level. What it does is actually give a 50% chance of the good protocol, and a 50% chance of the challenger. You're changing the nature of the attack. I took your attack to be "find two essentially equal protocols and keep the decision procedure stuck on deciding between them". If one of the protocols is actually *better* along the agreed-upon dimensions - for example, if one has a security flaw - the whole assumption of the "rough consensus" approach is that this will be found eventually and the better protocol will win on the technical merits. If you can't determine that one of the proposed protocols is actually unacceptable according to the agreed criteria, you have a very different problem, which has nothing to do with rough consensus, working code, committee procedures, or what have you. -- Jerry From simon at josefsson.org Mon Aug 3 16:34:17 2015 From: simon at josefsson.org (Simon Josefsson) Date: Mon, 03 Aug 2015 22:34:17 +0200 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: (Tom Ritter's message of "Sun, 2 Aug 2015 08:20:26 -0700") References: <55BD9C20.5090205@iang.org> Message-ID: <87614w6rhy.fsf@latte.josefsson.org> Tom Ritter writes: > On 1 August 2015 at 21:27, ianG wrote: >> Can anyone suggest a way to get around this? I think this really puts a >> marker on the map - you simply can't do a security/crypto protocol under >> rough consensus in open committee, when there is an attacker out there >> willing to put in the resources to stop it. >> >> Thoughts? > > My opinion is that the rough consensus can be counter-balanced by > "running code". If the original group moves forward, deploys, gets > early adopters, shows it's working, and perhaps wonder-of-wonders gets > it picked up by one of the big behemoths that could jump-start > deployment (maybe Google, or Akamai, or CloudFlare) - well they can > document as an informational document at least. And you can > interoperate with the folks who have deployed. 
+1 The "running code" approch could lead the attacker to change its modus operandi to 1) attempt to get implementers/deployers out of the decision making process, or at least sufficiently balanced with people who never writes code or deploy code, combined with 2) attempts to stall publication of implemented protocols. Both are relatively cheap to achieve in a good-faith organization by a bad-faith participant, and is possible to do indirectly without being identified. In the end you get "rough consensus" decision-making, with the problems discussed here, without the "running code" mitigator. Success for the attacker. The IETF is, I would argue, extremely good at refining/documenting deployed protocols and resolving identified problems with them. It has never (or at least not as long as far as I've been around) been good at designing things from scratch when the use-case is not clearly expressed and agreed on. Sadly it has not been good at learning from this history either. /Simon -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From iang at iang.org Mon Aug 3 16:51:36 2015 From: iang at iang.org (ianG) Date: Mon, 03 Aug 2015 21:51:36 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> Message-ID: <55BFD458.6040507@iang.org> On 3/08/2015 20:45 pm, Jerry Leichter wrote: >> Jerry described the coin toss. This "addresses" Stephen's dual-attack at some level. What it does is actually give a 50% chance of the good protocol, and a 50% chance of the challenger. > You're changing the nature of the attack. I took your attack to be "find two essentially equal protocols and keep the decision procedure stuck on deciding between them". It's the latter - generate the deadlock on decision. That could be done in theory with two essentially equal protocols, then fine, but I expect this Buridan's Ass story to collapse; it's a dynamic world, and either two essentially equal protocols are not equal tomorrow with more analysis or news, /or/ the engineers know it and go with a coin toss. > If one of the protocols is actually *better* along the agreed-upon dimensions - for example, if one has a security flaw - the whole assumption of the "rough consensus" approach is that this will be found eventually and the better protocol will win on the technical merits. I'm expecting the two protocols to be quite different and difficult to compare. This is in order to preserve the tribe that supports each; the two protocols have to be oriented to their own tribe in ways that they appeal and horrify in equal measure. Also, the nature of the attack is that the attacker will change the nature of the attack, if it suits... The essence is the outcome, not the inputs, and this attacker cheats. So I'd fully expect the attacker to actually improve the underdog if it was losing support. > If you can't determine that one of the proposed protocols is actually unacceptable according to the agreed criteria, you have a very different problem, which has nothing to do with rough consensus, working code, committee procedures, or what have you. I think even in real life that's not easy. 
Two protocols can score highly on different criteria, thus setting off an argument as to which criteria is more important. iang From leichter at lrw.com Mon Aug 3 17:33:24 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 3 Aug 2015 17:33:24 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BFD458.6040507@iang.org> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> <55BFD458.6040507@iang.org> Message-ID: >> If one of the protocols is actually *better* along the agreed-upon dimensions - for example, if one has a security flaw - the whole assumption of the "rough consensus" approach is that this will be found eventually and the better protocol will win on the technical merits. > I'm expecting the two protocols to be quite different and difficult to compare. This is in order to preserve the tribe that supports each; the two protocols have to be oriented to their own tribe in ways that they appeal and horrify in equal measure. > > Also, the nature of the attack is that the attacker will change the nature of the attack, if it suits... The essence is the outcome, not the inputs, and this attacker cheats. So I'd fully expect the attacker to actually improve the underdog if it was losing support.... You never need to outright compare the protocols. In fact, the nature of arguments of this sort - whether a deliberate attack or just arising by themselves - is that each side simply trumpets the virtues of its own approach, with only minor mention of the other approach. My assumption is that if there are any significant problems with an approach, they will be found and called out, and that approach will be removed from the running. If an approach has survived all attempts to knock it out for some appropriate length of time, we assume that it's "good enough". We usually operate on the assumption that the ranking of approaches induces a total order. I claim that's not true: When you get to the point where each side is simply listing the ways it's better than the others - and the lists are comparable - then you have two approaches that are simply not ordered with respect to each other. At that point, you toss a coin. In terms of actual operation, I'd have the procedure work like this: 1. Anyone can enter a proposal, or make an argument about proposals that have been entered. 2. There's a (pre-determined) cutoff time after which no new proposals can be entered. 3. Some arguments about proposals are classed as "significant attacks". It's usually obvious which these are; but should there be any debate, an argument shall *not* be deemed a "significant attack on a proposal" unless a super-majority agrees that it is. 4. Any proposal that's the subject of a "significant attack" is taken out of the running. 5. If more than one proposal remains in the running, and no "significant attacks" have been mooted in some (pre-determined) amount of time, a random choice among the remaining proposals is made. This procedure is guaranteed to terminate in a pre-determined amount of time, having chosen 0 or 1 proposals. It is vulnerable to a "heckler's veto" in clause 3: A sufficiently large (but under a super-majority) group can block all attempts to kill off a bad proposal. 
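To make the shape of that procedure concrete, here is a toy model of steps 1-5. It is purely illustrative: the cutoff timers are abstracted away, and deciding what counts as a "significant attack" (and whether a super-majority agrees it is one) remains a human judgement, not something a program can do.

    # Toy model of the selection procedure sketched above.  Everything
    # here is a stand-in for a human process, not a proposal to automate it.
    import random

    def select_proposal(proposals, significantly_attacked):
        """proposals: names entered before the cutoff (step 2).
        significantly_attacked: names a super-majority agreed are the
        subject of a significant attack (steps 3 and 4)."""
        surviving = [p for p in sorted(proposals) if p not in significantly_attacked]
        if not surviving:
            return None                    # zero proposals chosen
        return random.choice(surviving)    # step 5: random choice among survivors

    # Example: three entries, one knocked out, coin-toss between the rest.
    print(select_proposal({"proposal-A", "proposal-B", "proposal-C"},
                          {"proposal-C"}))
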
But it's hard to hide that you're doing this - this is a technical discussion, and if you say "no, that's not an attack" without being able to advance a good reason, people will quickly figure out what you're up to. I said "a super-majority agrees", not "a super-majority votes" - I'm maintaining the "rough consensus" nature of the interaction. (Note that it *may* be the case that there are genuinely two distinct audiences with different needs, and no one proposal can really satisfy both. In that case, you may really not *want* there to be a single winner. In some cases, providing two alternative approaches, each covering part of the space of application - with perhaps substantial overlap - may simply be the best you can do.) -- Jerry From stephen.farrell at cs.tcd.ie Mon Aug 3 19:41:07 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Tue, 04 Aug 2015 00:41:07 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> <55BFD458.6040507@iang.org> Message-ID: <55BFFC13.1010704@cs.tcd.ie> Hiya, On 03/08/15 22:33, Jerry Leichter wrote: > 2. There's a (pre-determined) cutoff time after which no new proposals can be entered. That could work in some place but not in the IETF. (Although there are timers and cutoffs involved in the nominal IETF process.) In the IETF we have a theory, which is actually fairly well reflected by practice, that any decision can be overturned by a sufficiently compelling new fact. That has not infrequently resulted in work being sent back to working groups at IETF last call time when a different set of folks not involved in the working group get to describe their views of the downsides of some thing or other. I think overall the benefit of being fact-based regardless of how much it buggers up progress is more significant than the potential for fixed timings such as you've suggested to mitigate an action taken as part of an invisible bullrun attack. (Once again, I assert that we need to not try consider bullrun in isolation, but we need to try our best to counter all methods of gaming the system without worrying about the unkonwable details as to why someone may be gaming the system.) That said, I do agree that there's usually a giant debate about what are in fact the facts in most such situations, so YMMV in terms of what reasonable folks can conclude on this point. S. From bascule at gmail.com Mon Aug 3 19:44:42 2015 From: bascule at gmail.com (Tony Arcieri) Date: Mon, 3 Aug 2015 16:44:42 -0700 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <050601d0ce21$a60fa470$f22eed50$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> Message-ID: On Mon, Aug 3, 2015 at 12:21 PM, Allen wrote: > Who claimed there is "no added robustness"? It certainly wasn't me. I > specifically said there was a very small marginal benefit, but that I > thought it was not the best use of the resources required. Exploiting hash collisions in digital signature algorithms have lead to real-world attacks. See e.g. Flame MD5 collision. -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allenpmd at gmail.com Mon Aug 3 20:11:52 2015 From: allenpmd at gmail.com (Allen) Date: Mon, 3 Aug 2015 20:11:52 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> Message-ID: <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> > Exploiting hash collisions in digital signature algorithms have led to real-world attacks. See e.g. Flame MD5 collision. Would hashing twice with MD5 be the best way to prevent that attack, or might it be better to use a stronger hash function? See also my earlier comment: "I could probably find a way to use those CPU cycles that would yield a better payoff (using a stronger curve or a more complicated hash function perhaps?)." From leichter at lrw.com Mon Aug 3 20:38:18 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 3 Aug 2015 20:38:18 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BFFC13.1010704@cs.tcd.ie> References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> <55BFD458.6040507@iang.org> <55BFFC13.1010704@cs.tcd.ie> Message-ID: On Aug 3, 2015, at 7:41 PM, Stephen Farrell wrote: > Hiya, > > On 03/08/15 22:33, Jerry Leichter wrote: >> 2. There's a (pre-determined) cutoff time after which no new proposals can be entered. > > That could work in some place but not in the IETF. (Although there are > timers and cutoffs involved in the nominal IETF process.) > > In the IETF we have a theory, which is actually fairly well reflected > by practice, that any decision can be overturned by a sufficiently > compelling new fact. That has not infrequently resulted in work being > sent back to working groups at IETF last call time when a different > set of folks not involved in the working group get to describe their > views of the downsides of some thing or other. If you have a new proposal that's sufficiently better than all the existing ones, showing that it is so amounts to an attack against the existing proposals, knocking them out of the competition. (There's no formal definition of "an attack". It doesn't *have* to show a weakness - showing that you can do much better is sufficient.) Presumably you then restart the competition. Just allowing new proposals - even proposals that will rapidly be knocked out of the running - to be mooted forever allows anyone to delay the process indefinitely. In effect, what I'm suggesting is that after the cutoff, the only way to get a proposal in is by getting it recognized as significantly better than what's already there - a deliberately high barrier. Treating this as an attack just lets you do it without invoking some new ad hoc mechanism for judging what's "better by enough": It's decided by the same people, in the same way, that they judge when an attack that looks like and attack is "significant". > I think overall the benefit of being fact-based regardless of how much > it buggers up progress is more significant than the potential for fixed > timings such as you've suggested to mitigate an action taken as part of > an invisible bullrun attack. 
(Once again, I assert that we need to not > try to consider bullrun in isolation, but we need to try our best to > counter all methods of gaming the system without worrying about the > unknowable details as to why someone may be gaming the system.) These deadlocks arise often enough even without a deliberate attack that having a way to deal with them is important. At some point, *any* decision is better than *no* decision. (Or it isn't. Sometimes what you learn from the process is that this is a decision you don't need to make about a protocol - or whatever - that you don't need. The way this tends to play out is that eventually most of the participants who were so eager at the start fade away, realizing that the effort is just not worth their while.) > That said, I do agree that there's usually a giant debate about what > are in fact the facts in most such situations, so YMMV in terms of > what reasonable folks can conclude on this point. -- Jerry From bascule at gmail.com Mon Aug 3 20:52:47 2015 From: bascule at gmail.com (Tony Arcieri) Date: Mon, 3 Aug 2015 17:52:47 -0700 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> Message-ID: On Mon, Aug 3, 2015 at 5:11 PM, Allen wrote: > Would hashing twice with MD5 be the best way to prevent that attack, or > might it be better to use a stronger hash function? Your question is a false dichotomy. The "best" answer is "do both". If we could wave a magic wand and magically upgrade everything that's using older algorithms, that would clearly be the best solution. But here in the real world, we don't have magic wands. Indeed the clients that were exploited by the Flame MD5 collision were capable of using better algorithms (e.g. SHA1), but since MD5 was supported, it became the weakest link. -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Mon Aug 3 20:54:10 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Tue, 4 Aug 2015 00:54:10 +0000 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> Message-ID: <20150804005409.GF19228@mournblade.imrryr.org> On Mon, Aug 03, 2015 at 08:11:52PM -0400, Allen wrote: > Would hashing twice with MD5 be the best way to prevent that attack, or > might it be better to use a stronger hash function? See also my earlier > comment: "I could probably find a way to use those CPU cycles that would > yield a better payoff (using a stronger curve or a more complicated hash > function perhaps?)." If what the curve is signing is the message hash, a stronger curve can't help. There is merit in constructs that rely on fewer unproven assumptions about the underlying primitives. Few of us expect SHA2-256 to SHAKE-256 to fail soon. Attacking them may even lie beyond human ingenuity, but history has been on the side of the pessimists. In many applications the message size is bounded by protocol constraints, and hashing twice is not a significant bottleneck.
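To make the trade-off concrete, here is a minimal sketch of the two options - handing the whole message to the signer versus signing only a SHA-512 pre-hash of it - assuming PyNaCl as the Ed25519 implementation (any Ed25519 library would do):

    import hashlib
    from nacl.signing import SigningKey   # assumption: PyNaCl is installed

    sk = SigningKey.generate()
    message = b"some long protocol message " * 1000

    # Full message goes to the signer: Ed25519 hashes it twice internally
    # (once to derive the nonce r, once for H(R,A,M)), so a hash collision
    # alone does not hand an attacker a forgery.
    signed_full = sk.sign(message)

    # Pre-hashed variant: only the 64-byte digest is signed, so the signer
    # sees a short, fixed-size input, but a SHA-512 collision now yields
    # two messages carrying the same valid signature.
    digest = hashlib.sha512(message).digest()
    signed_prehashed = sk.sign(digest)

    # Verification mirrors whichever convention was used when signing.
    sk.verify_key.verify(signed_full)
    sk.verify_key.verify(signed_prehashed)

The sketch is only about the shape of the APIs; in the pre-hashed variant the choice of hash effectively becomes part of the signature scheme's security.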
A typical use might be signing the parameters for an ephemeral key exchange, where the message size is quite small. Another is signing X.509 certificates. In neither case is it onerous to hash twice. The real obstacle is existing combined digest+sign IUF (initialize, update, final) APIs. If one is to plug the new signatures into a generic framework, the new algorithm would have to buffer the data. In some cases libraries also offer an all-in-one primitive that combines I, U and F, and that would be more efficient with the full message already in memory, by avoiding memory allocation and copying overhead. So the reason why the proposal might get traction is not the modest CPU cost, but API impedance mismatch. We've learned to expect signing APIs to turn meat into sausage in a single pass, and changing the model is difficult. The proposal seems sound on its merits, but may be too difficult to adopt. Thus CFRG seems to have decided to preserve IUF, but internally the signature algorithm may still take H(M) as the message, and sign that more securely (yes even though it is too late). Separately, new interfaces might be made available for conservative designs that choose to bypass IUF and not bolt the door after the horse has left the barn. -- Viktor. From bascule at gmail.com Mon Aug 3 23:19:29 2015 From: bascule at gmail.com (Tony Arcieri) Date: Mon, 3 Aug 2015 20:19:29 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli wrote: > Are there / why are not similar technologies used for web? Two words: user experience -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Tue Aug 4 07:42:34 2015 From: rsalz at akamai.com (Salz, Rich) Date: Tue, 4 Aug 2015 11:42:34 +0000 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: <8c60b345f5a04714a5270297c60db6aa@ustx2ex-dag1mb2.msg.corp.akamai.com> > My opinion is that the rough consensus can be counter-balanced by "running > code". If the original group moves forward, deploys, gets early adopters, > shows it's working, and perhaps wonder-of-wonders gets it picked up by one > of the big behemoths that could jump-start deployment (maybe Google, or We used to be very unhappy with Microsoft's "embrace and extend" strategy and upset that they did not work things through open processes. Satayana is smiling. From iang at iang.org Tue Aug 4 09:01:19 2015 From: iang at iang.org (ianG) Date: Tue, 04 Aug 2015 14:01:19 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <87614w6rhy.fsf@latte.josefsson.org> References: <55BD9C20.5090205@iang.org> <87614w6rhy.fsf@latte.josefsson.org> Message-ID: <55C0B79F.2050309@iang.org> On 3/08/2015 21:34 pm, Simon Josefsson wrote: > Tom Ritter writes: > >> On 1 August 2015 at 21:27, ianG wrote: >>> Can anyone suggest a way to get around this? I think this really puts a >>> marker on the map - you simply can't do a security/crypto protocol under >>> rough consensus in open committee, when there is an attacker out there >>> willing to put in the resources to stop it. >>> >>> Thoughts? >> >> My opinion is that the rough consensus can be counter-balanced by >> "running code". 
If the original group moves forward, deploys, gets >> early adopters, shows it's working, and perhaps wonder-of-wonders gets >> it picked up by one of the big behemoths that could jump-start >> deployment (maybe Google, or Akamai, or CloudFlare) - well they can >> document as an informational document at least. And you can >> interoperate with the folks who have deployed. > > +1 So in essence, the group forks, and the running code teams pursue informational track rather than standards track. The consensus battle shifts to the marketplace. The attacker has presumably concentrated his forces on people who don't write the code. So that thrust is only valuable in WG if the decision is kept within the WG. As the running code teams leaves, they achieve a hollow victory. OK, I get it. This approach works if the big companies look at things from an engineering pov, and adopt the Informational doc on the merits. It fails if the big majors hold out and it doesn't achieve scale. (So if this attacker can also impact the majors, it's harder. Although, one has to note, that even in the case of the WG coming to a conclusion and publishing the RFC, we still depend on the majors to deploy in order to reach market consensus / critical mass / effective deployment. The majors are more likely to respond to a formal RFC; eg at least one major declares only to follow standards, it won't save its users unless someone tells it how to do it by standard.) > The "running code" approch could lead the attacker to change its modus > operandi to 1) attempt to get implementers/deployers out of the decision > making process, or at least sufficiently balanced with people who never > writes code or deploy code, combined with 2) attempts to stall > publication of implemented protocols. Both are relatively cheap to > achieve in a good-faith organization by a bad-faith participant, and is > possible to do indirectly without being identified. In the end you get > "rough consensus" decision-making, with the problems discussed here, > without the "running code" mitigator. Success for the attacker. Right, so attacker downgrades the running code aspect in WG. I think I see the tactic. > The IETF is, I would argue, extremely good at refining/documenting > deployed protocols and resolving identified problems with them. It has > never (or at least not as long as far as I've been around) been good at > designing things from scratch when the use-case is not clearly expressed > and agreed on. Sadly it has not been good at learning from this history > either. Yup. No "democratic group" ever is. Sadly, a battle of dictators has more success in designing new stuff. A "democratic group" is only useful when the group decides they are better off agreeing in a room what is the combined way forward, when the alternative is open warfare. iang ps; There's something of hubris in the standards world. They all seem to believe they can do more than resolving market battles into standards. E.g., the British Standards Institute just recently came up and said they wish to start standardising cryptocurrencies. The bind moggles. 
From iang at iang.org Tue Aug 4 09:29:41 2015 From: iang at iang.org (ianG) Date: Tue, 04 Aug 2015 14:29:41 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <55BE0003.60508@cs.tcd.ie> <55BE5E8E.4010907@iang.org> <55BE6AE0.6050601@cs.tcd.ie> <55BE8902.2060900@iang.org> <152688EE-07CD-47C7-A185-794DEE7CF2C4@lrw.com> <55BFD458.6040507@iang.org> Message-ID: <55C0BE45.5030807@iang.org> On 3/08/2015 22:33 pm, Jerry Leichter wrote: > (Note that it *may* be the case that there are genuinely two distinct audiences with different needs, and no one proposal can really satisfy both. In that case, you may really not *want* there to be a single winner. In some cases, providing two alternative approaches, each covering part of the space of application - with perhaps substantial overlap - may simply be the best you can do.) I think I'm seeing the fallacy in my thought experiment. My false assumption here is that the decision will succeed. If we do a good job, prepare a good document, then profit and happiness will ensue. Unfortunately that assumption is so far from useful that it actually raises questions. The crux of the difficulty comes down to this, I think: The biggest issue by far is deployment of the protocol - how likely it is that the various erstwhile users of the protocol are going to pick it up, write it, deploy it. Success in deployment is approximately an unknowable, a priori. There are so many factors involved that from the group's perspective it is unpredictable. We are seriously looking at anything from approximately 0% to approximately 100% without any real scientific tool that helps us further. In effect, the factors that affect success of the efforts are outside the group's control. And outside written requirements. We can't "require" deployment. But, the unwritten requirement is imposed on us - we still have to evaluate every proposal from the point of view of later deployment. Indeed, if I'm right, it is the most important, and only, criterion. Even though it is unstated and cannot be stated. Hence, this is fertile ground for two groups to do what you state - bifurcate on favourites, and only list the benefits of their choice. Because as soon as we get into the real question - which will deploy better - the useful question is withdrawn because we're crystal ball gazing. iang From iang at iang.org Tue Aug 4 09:39:21 2015 From: iang at iang.org (ianG) Date: Tue, 04 Aug 2015 14:39:21 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> Message-ID: <55C0C089.7070600@iang.org> On 2/08/2015 16:56 pm, Dan McDonald wrote: > On 1 August 2015 at 21:27, ianG wrote: >> Can anyone suggest a way to get around this? I think this really puts a >> marker on the map - you simply can't do a security/crypto protocol under >> rough consensus in open committee, when there is an attacker out there >> willing to put in the resources to stop it. >> >> Thoughts? > > It's a problem, like terrorism is a real problem. ALSO like terrorism, the mere threat of such a problem can be used by people with strong NIH infections to push their own terrible alternatives simply by waving the threat of the "rough consensus attacker" around. > > This has happened in Real Life before, and it will happen again.
It doesn't diminish the actual problem of a rough-consensus attack, but the concept is rife for hiding other abuses. (Were I real tinfoil-hat-wearer, I might argue a rough consensus attacker would use NIH fanatics as a second prong.) This is a good point. Were I to start accusing of a rough consensus attack on some WG, I'd probably be assisting that very same rough consensus attack... NIH == not invented here? Yes, I see that. iang From ccontavalli at gmail.com Tue Aug 4 10:29:13 2015 From: ccontavalli at gmail.com (Carlo Contavalli) Date: Tue, 4 Aug 2015 07:29:13 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Mon, Aug 3, 2015 at 8:19 PM, Tony Arcieri wrote: > On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli > wrote: >> >> Are there / why are not similar technologies used for web? > > Two words: user experience > It's 2015 - I'm sure we could figure something out? Without thinking much... some support for "styled authentication" would not be that hard to add to a browser? we introduce an: with a type="" attribute specifying how / what protocol to use to authenticate, determines some fixed fields (eg, username, password, otp, hw token, ...) that the browser is able to recognize and display in a special way (example: url bar expands to show the inputs under the ssl lock)? Can have a link to recover password, no javascript, but some CSS for styling? It does not have to be an ugly window like the 404 authorization required. If authentication is successful, based on the type used, can include a cookie, start using some special encryption, or include a WWW-authenticate field. Sharing the cookie / encryption / ... across multiple requests / responses should not be hard, similar to SSL session reuse? Carlo From phill at hallambaker.com Tue Aug 4 11:09:48 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Tue, 4 Aug 2015 11:09:48 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> Message-ID: On Sun, Aug 2, 2015 at 11:56 AM, Dan McDonald wrote: > On 1 August 2015 at 21:27, ianG wrote: > > Can anyone suggest a way to get around this? I think this really puts a > > marker on the map - you simply can't do a security/crypto protocol under > > rough consensus in open committee, when there is an attacker out there > > willing to put in the resources to stop it. > > > > Thoughts? > > It's a problem, like terrorism is a real problem. ALSO like terrorism, > the mere threat of such a problem can be used by people with strong NIH > infections to push their own terrible alternatives simply by waving the > threat of the "rough consensus attacker" around. > > This has happened in Real Life before, and it will happen again. It > doesn't diminish the actual problem of a rough-consensus attack, but the > concept is rife for hiding other abuses. (Were I real tinfoil-hat-wearer, > I might argue a rough consensus attacker would use NIH fanatics as a second > prong.) I am very sure I have seen exactly that. Back in 2000, after VeriSign bought Network Solutions, Warwick Ford and myself took a look at what it would take to deploy DNSSEC which was one of the main reasons behind the purchase. There was a huge scalability problem in the spec which required an NSEC record to be inserted for every record in the zone. 
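To picture what that meant on disk and in memory: in a signed zone of that era, roughly every name gained an NSEC record pointing at the next name in the zone, plus the accompanying RRSIGs. A hypothetical zone-file fragment (names, TTLs and signature contents are made up for illustration only) looks like:

    ; illustrative fragment of a signed zone
    alpha.example.com.  3600  IN  A     192.0.2.1
    alpha.example.com.  3600  IN  RRSIG A ...
    alpha.example.com.  3600  IN  NSEC  beta.example.com. A RRSIG NSEC
    alpha.example.com.  3600  IN  RRSIG NSEC ...
    beta.example.com.   3600  IN  A     192.0.2.2
    beta.example.com.   3600  IN  NSEC  example.com. A RRSIG NSEC

Multiply that overhead by every name in a registry-sized zone and the capacity concern described here is easy to see.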
The DNSSEC code was written and would have deployed when VeriSign deployed ATLAS in 2002. The only reason that code was pulled was that a faction in the IETF refused to allow a very minor change to the DNSSEC spec so that NSEC would only cover signed zones. The extra cost of the original approach was over $30 million as it would require the use of 64 bit machines rather than 32 bits. The choice was between modified DNSSEC and no DNSSEC at all. But the NSA BULLRUN folk were able to derail the discussion and block the change. They also made sure ICANN would not permit deployment of any DNSSEC scheme that was not IETF approved. The spec was eventually fixed, many years later. But that is why you don't have security in the DNS today. The difficulty of deploying an infrastructure change like that goes up the longer deployment is delayed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at kebe.com Tue Aug 4 11:22:10 2015 From: danmcd at kebe.com (Dan McDonald) Date: Tue, 4 Aug 2015 11:22:10 -0400 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> Message-ID: <20150804152210.GC34147@everywhere.local> > The extra cost of the original approach was over $30 million as it would > require the use of 64 bit machines rather than 32 bits. The choice was > between modified DNSSEC and no DNSSEC at all. But the NSA BULLRUN folk were > able to derail the discussion and block the change. They also made sure > ICANN would not permit deployment of any DNSSEC scheme that was not IETF > approved. Whoa. I've only seen the strong-NIH version -- you're saying the modified DNSSEC was sabotaged by "NSA BULLRUN" folk? Do you have mailing-list archive pointers for folks in the audience? That might be some fascinating reading. Dan From wilson at math.wisc.edu Tue Aug 4 12:37:40 2015 From: wilson at math.wisc.edu (Robert L. Wilson) Date: Tue, 4 Aug 2015 11:37:40 -0500 Subject: [Cryptography] Rough Consensus Attack Message-ID: <55C0EA54.5030801@math.wisc.edu> > In fact, the nature of arguments of this sort - whether a deliberate attack or just arising by themselves - is that each side simply trumpets the virtues of its own approach, with only minor mention of the other approach. Several posts have noted how these arguments apply far outside cryptography. This quote was directly about crypto, but it reminds me of what an outside "expert" told us when I recently was on a local campaign committee, approximately this: Never say something the opponent has said is bad, just trumpet your own virtues. (I did not like the idea of letting our opponents have veto power over what we could say...) The idea is sort of: When somebody goes into the voting booth, you can't predict whether he/she will remember all of what you said or just the part you quoted from your opponent, thereby reinforcing his/her ads. (I'll leave it for you to compare that advice to the negative political ads that are so prevalent...) The IETF and other standards groups we have to care about don't typically have voting booths (although they may implement other schemes intended to give ballot privacy). But regardless of whether you are the good guy or the attacker, you still run the risk of reinforcing your opponent's arguments when you mention them. So this strategy has some basis. 
Bob Wilson From pawel.veselov at gmail.com Tue Aug 4 13:24:03 2015 From: pawel.veselov at gmail.com (Pawel Veselov) Date: Tue, 4 Aug 2015 10:24:03 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli wrote: > Hello, > > haven't seen many conversations or much noise about SRP, from > http://srp.stanford.edu/ on this mailing list. > > By a quick reading, and by peeking at the implementation, it provides > strong mutual authentication of both client and server through a > "shared secret", which is stored as a one way hash on the server, and > never exchanged on the wire. > Yes. Wikipedia article https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol also has a reasonable explanation on how they work. There is also an RFC 5054 that plops SRP on top of TLS. > Eg, if used with ssh, checking the fingerprint when connecting would > be significantly less relevant, the fact that the server can establish > an encrypted session at all proves that the server knows a hash of the > shared secret. > But, with ssh, or protocols like ssh, session is established per TCP connection. You wouldn't ask the user the password every time an HTTP(s) connection is established. > Has drawbacks - but certainly sounds like an improvement compared to > existing protocols? > > Are there / why are not similar technologies used for web? > There certainly are for the web. Not so much for HTTP(s) though. You can't reasonably (from the UE perspective) implement SRP/ZKP for HTTP because it needs to span multiple HTTP requests. The full SRP web solutions do require javascript (or other) code to retrieve any protected data (there are solutions that only check the password, after which it's standard HTTP session) > I see two separate needs x509 certificates and TLS typically try to > address: > 1) establishing the identity of a site you connect to. > 2) maintaining privacy and preventing mangling of the data exchanged. > > If I think about my typical workflow, ... x509 and certificates would > still play a role the first time I end up on a site. > > Eg, the first time I go to uber.com, or first time I register to use > my health plan benefits online, I would check that the certificate > matches who the site claims to be. > > But from then on... once registered, and once I have a password, SRP > would allow me to establish that the remote end is who they claim to > be based on their ability to prove that they know a hash of my > password, the certificate would just be an additional protection? > If the X509 certificate checks out that just means nobody stole the private key of the holder, or a CA you trust. If the SRP checks out, that means that the server has the verifier code derived from your password. So in case of the password database breach, it would be very hard to derive your password, but still as easy to impersonate the server. Though they won't get your password even if you give it to the impersonator. > Seems like a significant improvement over what we have today? Reducing > exposure, and need to trust certification authorities? > If you don't use PKI, then you'd have problems trusting whom to establish the passwords with in the first place. > For example: a rogue certificate authority creates a false uber / > false health plan management site. Or a rogue certificate is installed > on my laptop. I try to login after this fake has been created, ... 
I > would not be able to login? Your client must terminate the connection if the server's proof doesn't check out, or if it can't decrypt the data from the server. > or notice immediately? Or if they proxy my > connection acting as a MITM, they would not be able to decrypt my > data? > Yes, neither fake servers no MITM can get into your data unless they have the verifier. > Opinions? > SRP really helps in: - preventing password from ever being transmitted - preventing guessing the password even if the password is really simple Besides it's ability to encrypt communications, it in no way deprecates any of the functionality of the PKI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Tue Aug 4 13:24:22 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Tue, 4 Aug 2015 10:24:22 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <8c60b345f5a04714a5270297c60db6aa@ustx2ex-dag1mb2.msg.corp.akamai.com> References: <55BD9C20.5090205@iang.org> <8c60b345f5a04714a5270297c60db6aa@ustx2ex-dag1mb2.msg.corp.akamai.com> Message-ID: On Tue, Aug 4, 2015 at 4:42 AM, Salz, Rich wrote: > > My opinion is that the rough consensus can be counter-balanced by Two personal hot buttons. 1) The use of consensus is an astounding risk. 2) The shortening of "informed opinion" to "opinion". I can endure "consensus of informed experts in the field". There has been some very good information here lately by experts that share their knowledge and experience but to pick on a TLA if the NSA asserts with no test that foo() is better than bar() the students and other teachers should shout "show your work". My nightmare is "uninformed opinion by experts not in the field". This is way too common in congress and other fields where failure is not an option like cryptographic security. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Tue Aug 4 14:14:36 2015 From: bear at sonic.net (Ray Dillinger) Date: Tue, 04 Aug 2015 11:14:36 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: <55C1010C.5060805@sonic.net> On 08/04/2015 07:29 AM, Carlo Contavalli wrote: > Sharing the cookie / encryption / ... across multiple requests / > responses should not be hard, similar to SSL session reuse? I consider SSL session reuse to be a vulnerability. It gives an attacker additional time to break the SSL key before cutting in with a "reuse". We have already seen downgrade attacks that put SSL keys within reach given an amount of compute power that can be achieved by a modest cluster in a matter of a few minutes. Session reuse can give an attacker literally hours to break an SSL key. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From dj at deadhat.com Tue Aug 4 15:06:37 2015 From: dj at deadhat.com (dj at deadhat.com) Date: Tue, 4 Aug 2015 19:06:37 -0000 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55C0C089.7070600@iang.org> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> Message-ID: > On 2/08/2015 16:56 pm, Dan McDonald wrote: >> On 1 August 2015 at 21:27, ianG wrote: > > NIH == not invented here? 
Yes, I see that. > > I'm currently in the process of developing a security protocol spec in a standards group, that will be deployed everywhere. The reverse seems to be true. There is a desire to do some things new (specifically to avoid X.509 and NIST curves and make things as brutally simple as possible), but there is a NISE (Not invented somewhere else) crowd that calls for external specs we can point to for all crypto things. This leads down the slippery path to NIST, DSA and X.509. From ben at links.org Tue Aug 4 21:57:30 2015 From: ben at links.org (Ben Laurie) Date: Wed, 05 Aug 2015 01:57:30 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Tue, 4 Aug 2015 at 18:09 Carlo Contavalli wrote: > On Mon, Aug 3, 2015 at 8:19 PM, Tony Arcieri wrote: > > On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli > > wrote: > >> > >> Are there / why are not similar technologies used for web? > > > > Two words: user experience > > > > It's 2015 - I'm sure we could figure something out? > > Without thinking much... Right, because why bother to think about one of the longest standing security problems we have on the 'net? Obviously you should be able to fix that in your sleep. How about you don't think about this much: how do you prevent phishing in your scheme? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccontavalli at gmail.com Tue Aug 4 22:24:40 2015 From: ccontavalli at gmail.com (Carlo Contavalli) Date: Tue, 4 Aug 2015 19:24:40 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Tue, Aug 4, 2015 at 6:57 PM, Ben Laurie wrote: > On Tue, 4 Aug 2015 at 18:09 Carlo Contavalli wrote: >> >> On Mon, Aug 3, 2015 at 8:19 PM, Tony Arcieri wrote: >> > On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli >> > wrote: >> >> >> >> Are there / why are not similar technologies used for web? >> > >> > Two words: user experience >> > >> >> It's 2015 - I'm sure we could figure something out? >> >> Without thinking much... > > > Right, because why bother to think about one of the longest standing > security problems we have on the 'net? Obviously you should be able to fix > that in your sleep. meh :-( I just associated "user experience" with the stigma associated with http authentication and various schemes based on it, which, among many other drawbacks, look horrible to the end user, and just lead to bad user experience. But the bad user experience is not implicit to the use of an authentication scheme, imho it's more the result of old standards, lack of investment / interest / push in the implementations and implicit difficulty of changing the current status. > How about you don't think about this much: how do you prevent phishing in > your scheme? What I had in mind is the browser has built in support for . Just like the "https lock" or "green url bar", which can hardly be manipulated by javascript or other code in the page, presence of tags results in a graphic feature _on the browser_ that can't be manipulated by server provided code. For example, URL bar becomes thicker, and displays an "username" and "password" with a "lock next to it". With a scheme like SRP, once the user presses enter to confirm username and password, _the browser_ tries to perform SRP authentication with the remote site. 
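For concreteness, the algebra behind that SRP exchange fits in a few lines. The following toy sketch (Python, with illustrative parameters that are an assumption of this example - not the RFC 5054 groups - and with none of the hardening a real implementation needs) shows the verifier the server stores at registration and how both ends reach the same session key without the password ever crossing the wire:

    import hashlib, os

    def H(*parts):
        # hash a concatenation of byte strings down to an integer
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return int.from_bytes(h.digest(), 'big')

    def I2B(n):
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, 'big')

    # Toy group parameters, purely for illustration; use a published safe-prime group in practice.
    N = 2**255 - 19
    g = 2
    k = H(I2B(N), I2B(g))                              # SRP-6a multiplier

    # Registration: the client derives a verifier, the server stores only (salt, v).
    username, password = b"alice", b"correct horse"
    salt = os.urandom(16)
    x = H(salt, hashlib.sha256(username + b":" + password).digest())
    v = pow(g, x, N)

    # Login: fresh ephemeral values on both sides.
    a = int.from_bytes(os.urandom(32), 'big') % N      # client secret
    b = int.from_bytes(os.urandom(32), 'big') % N      # server secret
    A = pow(g, a, N)                                   # client -> server
    B = (k * v + pow(g, b, N)) % N                     # server -> client
    u = H(I2B(A), I2B(B))

    # Both sides compute the same value; neither the password nor v was sent.
    # (The client recomputes x from the salt and the typed password.)
    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow((A * pow(v, u, N)) % N, b, N)
    assert S_client == S_server

A real implementation then exchanges proofs of the shared value so each side learns the other arrived at the same key; that proof step is what fails against an impersonating server.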
If the site is not who it claims to be, or not the site the user registered against, authentication would fail. Without disclosing the password of the user, and making it really hard for a site to impersonate a different one. The cost on the user is in making sure he is entering the username and password only in "secure boxes", rather than random ones on the web site. Again, x509 and certificates would still play an important role, but would reduce the surface of attack on returning users? There is no perfect solution of course... is the incremental benefit worth the work? Carlo From ron at flownet.com Wed Aug 5 00:10:42 2015 From: ron at flownet.com (Ron Garret) Date: Tue, 4 Aug 2015 21:10:42 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: <2B002639-2AC0-4DA6-B72C-742516B3DD1E@flownet.com> On Aug 4, 2015, at 7:24 PM, Carlo Contavalli wrote: > On Tue, Aug 4, 2015 at 6:57 PM, Ben Laurie wrote: >> On Tue, 4 Aug 2015 at 18:09 Carlo Contavalli wrote: >>> >>> On Mon, Aug 3, 2015 at 8:19 PM, Tony Arcieri wrote: >>>> On Sun, Aug 2, 2015 at 9:54 AM, Carlo Contavalli >>>> wrote: >>>>> >>>>> Are there / why are not similar technologies used for web? >>>> >>>> Two words: user experience >>>> >>> >>> It's 2015 - I'm sure we could figure something out? >>> >>> Without thinking much... >> >> >> Right, because why bother to think about one of the longest standing >> security problems we have on the 'net? Obviously you should be able to fix >> that in your sleep. > > meh :-( I just associated "user experience" with the stigma associated > with http authentication and various schemes based on it, which, among > many other drawbacks, look horrible to the end user, and just lead to > bad user experience. FYI/FWIW I took a whack a re-inventing authentication a few years back and came up with this: http://dswi.net It’s essentially browser certs implemented in Javascript, which essentially delegates authentication to a trusted third party. It was designed to be more secure than usernames and passwords (which is a pretty low bar) but super-easy for both users and relying-parties to use. If there’s any interest in this I’d be happy to provide more details. rg From crypto.jmk at gmail.com Wed Aug 5 10:16:39 2015 From: crypto.jmk at gmail.com (John Kelsey) Date: Wed, 5 Aug 2015 09:16:39 -0500 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55BD9C20.5090205@iang.org> References: <55BD9C20.5090205@iang.org> Message-ID: Rule of thumb: Suppose you have two systems. One runs along perfectly nearly all the time. The other just barely works in the best of times and often breaks down under its own weight. It's pretty easy to tell when the first system has been sabotaged, but really hard to tell when the second one has been sabotaged. Most of our mechanisms for developing crypto standards are more like the second system than the first. --John > On Aug 1, 2015, at 11:27 PM, ianG wrote: > > There's a group working on a new crypto protocol. I don't need to name them because it's a general issue, but we're talking about one of those "rough consensus and working code" rooms where dedicated engineers do what they most want to do - create new Internet systems. > > This new crypto protocol will take a hitherto totally open treasure trove of data and hide it. Not particularly well but well enough to make the attacker work at it. 
The attacker will have to actually do something, instead of just hoovering. > > Doing something will be dangerous - because those packets could be spotted - so it will be reserved for those moments and targets where it's worthwhile. It's not as if the attacker cares that much about being spotted, but embarrassment is best avoided. > > So this could be kind of a big deal - we go from 100% open on this huge data set, down to 99% closed, over some time and some deployment curve. > > > > Now, let's assume the attacker is pissed at this. And takes it's attitudinal inspiration from Hollywood, or other enlightened sources like NYT on how to retaliate in cyberwar (OPM, anyone?) [0]. Which is to say, it decides to fight back. Game on. > > How to fight back seems easy to say: Stop the group from launching its protocol. How? > > It turns out that there is a really nice attack. If the group has a protocol in mind, then all the attacker has to do is: > > a) suggest a new alternate protocol. > b) balance the group so that there is disagreement, roughly evenly balanced between the original and the challenger. > > Suggesting an alternate is really easy - as we know there are dozens of prototypes out there, just gotta pick one that's sufficiently different. In this case I can think of 3 others without trying, and 6 people on this group could design 1 in a month. > > Balancing the group is just a matter of phone calls and resources. Call in favours. So many people out there who would love to pop in and utter an opinion. So many friends of friends, willing to strut their stuff. > > > > Because of the rules of rough consensus, if a rough balance is preserved, then it stops all forward movement. This is a beautiful attack. If the original side gets disgusted and walks, the attacker can simply come up with a new challenger. If the original team quietens down, the challenger can quieten down too - it doesn't want to win, it wants to preserve the conflict. > > The attack can't even be called, because all contributors are doing is uttering an opinion as they would if asked. The attack simply uses the time-tested rules which the project is convinced are the only way to do these things. > > > > The only defence I can see is to drop rough consensus. By offering rough consensus, it's almost a gilt-edged invitation to the attacker. The attacker isn't so stupid as to not use it. > > Can anyone suggest a way to get around this? I think this really puts a marker on the map - you simply can't do a security/crypto protocol under rough consensus in open committee, when there is an attacker out there willing to put in the resources to stop it. > > Thoughts? > > > > iang > > > > [0] you just can't make this stuff up... > http://mobile.nytimes.com/2015/08/01/world/asia/us-decides-to-retaliate-against-chinas-hacking.html > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography From ben at links.org Wed Aug 5 06:07:31 2015 From: ben at links.org (Ben Laurie) Date: Wed, 05 Aug 2015 10:07:31 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, 5 Aug 2015 at 03:24 Carlo Contavalli wrote: > The cost on the user is in making sure he is entering the username and > password only in "secure boxes", rather than random ones on the web > site. 
> This is the core problem - if we could get users to only type their passwords into the one true password box, then there are many viable solutions to "the password problem". But all attempts to do this so far have been dismal failures. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccontavalli at gmail.com Wed Aug 5 10:39:38 2015 From: ccontavalli at gmail.com (Carlo Contavalli) Date: Wed, 5 Aug 2015 07:39:38 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, Aug 5, 2015 at 3:07 AM, Ben Laurie wrote: > On Wed, 5 Aug 2015 at 03:24 Carlo Contavalli wrote: >> >> The cost on the user is in making sure he is entering the username and >> password only in "secure boxes", rather than random ones on the web >> site. > > > This is the core problem - if we could get users to only type their > passwords into the one true password box, then there are many viable > solutions to "the password problem". But all attempts to do this so far have > been dismal failures. Out of curiosity, do you have more details about previous attempts? Thank you, Carlo From ron at flownet.com Wed Aug 5 13:09:18 2015 From: ron at flownet.com (Ron Garret) Date: Wed, 5 Aug 2015 10:09:18 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Aug 5, 2015, at 7:39 AM, Carlo Contavalli wrote: > On Wed, Aug 5, 2015 at 3:07 AM, Ben Laurie wrote: >> On Wed, 5 Aug 2015 at 03:24 Carlo Contavalli wrote: >>> >>> The cost on the user is in making sure he is entering the username and >>> password only in "secure boxes", rather than random ones on the web >>> site. >> >> >> This is the core problem - if we could get users to only type their >> passwords into the one true password box, then there are many viable >> solutions to "the password problem". But all attempts to do this so far have >> been dismal failures. > > Out of curiosity, do you have more details about previous attempts? And in particular, has there ever been an attempt that was integrated into the browser so that the user could actually have a hope of knowing whether or not they were dealing with the One True Password Box? (No, browser certificates don’t count. Certs got the underlying auth right but dropped the ball in a big way on the UX.) rg From leichter at lrw.com Wed Aug 5 13:09:55 2015 From: leichter at lrw.com (Jerry Leichter) Date: Wed, 5 Aug 2015 13:09:55 -0400 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: >> This is the core problem - if we could get users to only type their >> passwords into the one true password box, then there are many viable >> solutions to "the password problem". But all attempts to do this so far have >> been dismal failures. > > Out of curiosity, do you have more details about previous attempts? Safari actually implements such a mechanism: If the remote site asks for authentication in "the right way" - and, frankly, I have no idea what it is; some sites do manage to trigger the mechanism; most don't - a special box "unrolls" from the top chrome over the page. I don't know if the effect can be duplicated in Javascript; it would take some effort, I would think. But since people don't expect this anyway ... there's little point. 
-- Jerry From bascule at gmail.com Wed Aug 5 13:51:33 2015 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 5 Aug 2015 10:51:33 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, Aug 5, 2015 at 10:09 AM, Ron Garret wrote: > And in particular, has there ever been an attempt that was integrated into > the browser so that the user could actually have a hope of knowing whether > or not they were dealing with the One True Password Box? (No, browser > certificates don’t count. Certs got the underlying auth right but dropped > the ball in a big way on the UX.) FIDO U2F derives origin-specific ECC keys (derived using a hardware token) which are effectively "unphishable": https://fidoalliance.org/specifications/overview/ It's integrated into Chrome. Support for other browsers has not been forthcoming though -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at links.org Wed Aug 5 14:51:44 2015 From: ben at links.org (Ben Laurie) Date: Wed, 05 Aug 2015 18:51:44 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, 5 Aug 2015 at 18:51 Tony Arcieri wrote: > On Wed, Aug 5, 2015 at 10:09 AM, Ron Garret wrote: > >> And in particular, has there ever been an attempt that was integrated >> into the browser so that the user could actually have a hope of knowing >> whether or not they were dealing with the One True Password Box? (No, >> browser certificates don’t count. Certs got the underlying auth right but >> dropped the ball in a big way on the UX.) > > > FIDO U2F derives origin-specific ECC keys (derived using a hardware token) > which are effectively "unphishable": > > https://fidoalliance.org/specifications/overview/ > > It's integrated into Chrome. Support for other browsers has not been > forthcoming though > I use one of those, but it doesn't really help with my other devices. And I'm screwed if I lose it (well, I'm not, because I'll be given another, but if I were a member of the public I would be). -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.bozon at cesnet.cz Wed Aug 5 17:41:03 2015 From: michal.bozon at cesnet.cz (Michal Bozon) Date: Wed, 5 Aug 2015 23:41:03 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security Message-ID: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> Hi. There is new fresh FIPS-202 standardizing SHA-3. In addition to SHA3-{224,256,384,512}, SHAKE-{256,512} were expected. However, we got SHAKE-{128,256} instead. So in addition to four fixed hash functions with 224 up to 512 bit security, there are two "expandable-output" functions (XOF) with only max. 128 vs max. 256 bit security. So what is the point of their expansion? (In the Example docs linked in FIPS-202 appendix E, their output values are expanded to impressive 4096 bits.) And regarding to SHAKE security.. according to the FIPS-202's Table 4, unlike SHA3 functions, where the collision resistance is half of the (2nd) preimage resistance, as expected, SHAKE functions have these resistances equal (for sufficient output lengths). Interesting.. Birthday paradox does not apply here? Do I have a mistake somewhere? Do they? 
thanks for any insider comments, Michal Bozon From peter at cryptojedi.org Wed Aug 5 22:30:04 2015 From: peter at cryptojedi.org (Peter Schwabe) Date: Thu, 6 Aug 2015 04:30:04 +0200 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> Message-ID: <20150806023004.GO3033@tyrion> Allen wrote: Dear Allen, dear all, > > Exploiting hash collisions in digital signature algorithms have led > > to real-world attacks. See e.g. Flame MD5 collision. > > Would hashing twice with MD5 be the best way to prevent that attack, > or might it be better to use a stronger hash function? Please note that Ed25519 is not just simply "hashing with MD5 twice". > See also my > earlier comment: "I could probably find a way to use those CPU cycles > that would yield a better payoff (using a stronger curve or a more > complicated hash function perhaps?)." I have the impression that there are two aspects of EdDSA that get confused in this discussion: the issue of pre-hashing (and losing collision resilience) and the issue of deterministic signing (which requires the two hash computations). The two hashes computed for Ed25519 signing are 1.) r = H(h_b,...,h_{2b-1},M) and 2.) H(R,A,M) Instead of computing the first hash, one *could* pick r as a uniformly random scalar modulo the group order l. This is in fact what Schnorr signatures (and similarly ECDSA signatures) do. However, then the security of the scheme then largely depends on the RNG, which is outside the scope of testing of the signature scheme. Also, cryptographically secure random numbers are not always easy to obtain and RNG failures have been the reason for multiple real-world attacks in the past, so EdDSA is designed to not rely on a secure RNG for signing. The second hash includes R at the beginning, which is not attacker-controlled and makes the scheme collision-resilient. The point is that an attacker who can compute an internal or external collision of H cannot forge a signature. Obviously, this feature is lost when replacing M with H(M) (i.e., when pre-hashing the message). The cost of not pre-hashing depends on the length of the message. For signatures on short messages (like, e.g., public keys) the overhead is neglible, for very long messages it approaches a factor of 2. I certainly agree that it's a good idea to use a stronger hash function than MD5. When suggesting a better payoff for CPU cycles, then the questions are what message lengths you're dealing with and what attacks you're worried about. Personally, until quantum computers are built, I am less concerned about a DLP attack against Curve25519 than about a collision in SHA-2. Best regards, Peter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From dj at deadhat.com Thu Aug 6 01:01:59 2015 From: dj at deadhat.com (David Johnston) Date: Wed, 05 Aug 2015 22:01:59 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> Message-ID: <55C2EA47.4010200@deadhat.com> On 8/5/15 1:25 PM, John Kelsey wrote: > I wonder what fraction of the time people invent their own crypto algorithms and protocols, and the result is better than the standard stuff. I'm guessing the fraction is small enough that it needs quite a few significant digits to be distinguishable from zero. > > --John In the case of the UPB (unnamed peripheral bus) that I'm referring to, It's not so much creating new crypto algorithms as composing normal algorithms in a system that is simpler than the complex specs like X.509 that lead to complex software that lead to bugs. Or like DSA that is very fragile in the face of biased random numbers. Also avoiding NIST curves with the unexplained constants and trying to use algorithms that aren't thought to be subject to government interference which could cause export problems in international markets. Many standards (e.g. from IETF, IEEE 802 (before they learned to stop asking the government for help), SP800-90, X.509 etc. ) have proven toxic, either cryptographically, structurally or in terms of implementation complexity. Doing a good job of interoperability standards writing these days involves taking this into account and being very circumspect about what parts of what standards can be considered safe and what parts should be composed in a new fashion that achieves something simpler, or more scalable or more efficient or all three. Standards are a minefield. We need to learn to tread carefully. DJ > > > > On Aug 4, 2015, at 2:06 PM, dj at deadhat.com wrote: > >>>> On 2/08/2015 16:56 pm, Dan McDonald wrote: >>>>> On 1 August 2015 at 21:27, ianG wrote: >>> >>> NIH == not invented here? Yes, I see that. >> I'm currently in the process of developing a security protocol spec in a >> standards group, that will be deployed everywhere. >> >> The reverse seems to be true. There is a desire to do some things new >> (specifically to avoid X.509 and NIST curves and make things as brutally >> simple as possible), but there is a NISE (Not invented somewhere else) >> crowd that calls for external specs we can point to for all crypto things. >> This leads down the slippery path to NIST, DSA and X.509. >> >> >> >> _______________________________________________ >> The cryptography mailing list >> cryptography at metzdowd.com >> http://www.metzdowd.com/mailman/listinfo/cryptography From dave at horsfall.org Thu Aug 6 02:02:13 2015 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 6 Aug 2015 16:02:13 +1000 (EST) Subject: [Cryptography] Book of possible interest Message-ID: Spreading the word, as it were... The list is where RTTY idiots like me hang out. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." Watson never said: "I think there is a world market for maybe five computers." 
---------- Forwarded message ---------- Date: Wed, 05 Aug 2015 09:57:13 -0400 From: Jim Reeds To: greenkeys@ Subject: [GreenKeys] Book of possible interest I am a long-time lurker, and have just helped publish a book that might be of interest to list members: Breaking Teleprinter Ciphers at Bletchley Park: An edition of I.J. Good, D. Michie and G. Timms: General Report on Tunny with Emphasis on Statistical Methods (1945)" James A. Reeds (Editor), Whitfield Diffie (Editor), J. V. Field (Editor) IEEE/Wiley Press, July 2015. (See http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470465891,subjectCd-STZ0.html and http://www.amazon.com/Breaking-Teleprinter-Ciphers-Bletchley-Park/dp/0470465891 for details.) -- Jim Reeds reeds at idaccr.org ______________________________________________________________ From watsonbladd at gmail.com Thu Aug 6 12:05:10 2015 From: watsonbladd at gmail.com (Watson Ladd) Date: Thu, 6 Aug 2015 09:05:10 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> Message-ID: On Aug 5, 2015 9:54 AM, "John Kelsey" wrote: > > Rule of thumb: Suppose you have two systems. One runs along perfectly nearly all the time. The other just barely works in the best of times and often breaks down under its own weight. It's pretty easy to tell when the first system has been sabotaged, but really hard to tell when the second one has been sabotaged. > > Most of our mechanisms for developing crypto standards are more like the second system than the first. Notice how IEEE 743 and RnRS don't have these problems. Then look at the authors and compare what they know to what the people making crypto standards know, and you'll have the explanation. I'd never design controls without knowing the theory or a house without sesmic engineering. Somehow we've accepted this in crypto standards. > > --John > > > > > On Aug 1, 2015, at 11:27 PM, ianG wrote: > > > > There's a group working on a new crypto protocol. I don't need to name them because it's a general issue, but we're talking about one of those "rough consensus and working code" rooms where dedicated engineers do what they most want to do - create new Internet systems. > > > > This new crypto protocol will take a hitherto totally open treasure trove of data and hide it. Not particularly well but well enough to make the attacker work at it. The attacker will have to actually do something, instead of just hoovering. > > > > Doing something will be dangerous - because those packets could be spotted - so it will be reserved for those moments and targets where it's worthwhile. It's not as if the attacker cares that much about being spotted, but embarrassment is best avoided. > > > > So this could be kind of a big deal - we go from 100% open on this huge data set, down to 99% closed, over some time and some deployment curve. > > > > > > > > Now, let's assume the attacker is pissed at this. And takes it's attitudinal inspiration from Hollywood, or other enlightened sources like NYT on how to retaliate in cyberwar (OPM, anyone?) [0]. Which is to say, it decides to fight back. Game on. > > > > How to fight back seems easy to say: Stop the group from launching its protocol. How? > > > > It turns out that there is a really nice attack. If the group has a protocol in mind, then all the attacker has to do is: > > > > a) suggest a new alternate protocol. 
> > b) balance the group so that there is disagreement, roughly evenly balanced between the original and the challenger. > > > > Suggesting an alternate is really easy - as we know there are dozens of prototypes out there, just gotta pick one that's sufficiently different. In this case I can think of 3 others without trying, and 6 people on this group could design 1 in a month. > > > > Balancing the group is just a matter of phone calls and resources. Call in favours. So many people out there who would love to pop in and utter an opinion. So many friends of friends, willing to strut their stuff. > > > > > > > > Because of the rules of rough consensus, if a rough balance is preserved, then it stops all forward movement. This is a beautiful attack. If the original side gets disgusted and walks, the attacker can simply come up with a new challenger. If the original team quietens down, the challenger can quieten down too - it doesn't want to win, it wants to preserve the conflict. > > > > The attack can't even be called, because all contributors are doing is uttering an opinion as they would if asked. The attack simply uses the time-tested rules which the project is convinced are the only way to do these things. > > > > > > > > The only defence I can see is to drop rough consensus. By offering rough consensus, it's almost a gilt-edged invitation to the attacker. The attacker isn't so stupid as to not use it. > > > > Can anyone suggest a way to get around this? I think this really puts a marker on the map - you simply can't do a security/crypto protocol under rough consensus in open committee, when there is an attacker out there willing to put in the resources to stop it. > > > > Thoughts? > > > > > > > > iang > > > > > > > > [0] you just can't make this stuff up... > > http://mobile.nytimes.com/2015/08/01/world/asia/us-decides-to-retaliate-against-chinas-hacking.html > > _______________________________________________ > > The cryptography mailing list > > cryptography at metzdowd.com > > http://www.metzdowd.com/mailman/listinfo/cryptography > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography -------------- next part -------------- An HTML attachment was scrubbed... URL: From allenpmd at gmail.com Thu Aug 6 07:55:06 2015 From: allenpmd at gmail.com (Allen) Date: Thu, 6 Aug 2015 07:55:06 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? Message-ID: <012001d0d03e$bd4686d0$37d39470$@gmail.com> Hi Peter, I understand what you are saying. Regardless of how the two hashes are used internally, at the end of the day, the algorithm is hashing the entire message twice (with the same hash algorithm and two different IV's), and the potential advantage over hashing just once is that you gain resistance to collisions in the hash function. Hashing an entire message twice with two different IV's is a simple way to gain collision resistance in any context, but I also think it is relatively expensive compared to other potential approaches. This is the approach we're considering for our application, and I would suggest it might also be suitable for other applications with long messages: 1. Hash the entire message once using a hash function that outputs at least 512 bits. If you want fast, Skein-512-512, Blake-512 or Blake2b might be good choices. If you want more security and/or collision resistance, Keccak-512 or Keccak-576 might be good choices. 
2. Form a short string by concatenating the hash computed in Step 1 with the original message length and other key message metadata, such as the message id, timestamp, sender id, reply-to id, etc. 3. Sign the short string from Step 2 using the Ed25519 algorithm as published, pairing the elliptic curve with a strong 512 bit hash function such as Keccak-512, or possibly even two strong hash functions such as the Skein-512(Keccak-576(x)). Depending on the application and the nature of the messages being signed, I think that sufficiently strengthens the algorithm against potential collisions in the hash function without requiring the entire (long) message to be hashed twice. It also opens up the possibility of using a faster and/or wider hash function in Step 1, with a more computationally complex hash function in Step 3, which I think puts the CPU cycles to work where they can provide the greatest benefit. Just my 2 cents. Other people might have different security/cost tradeoffs, which is fine by me. From wk at gnupg.org Thu Aug 6 03:43:18 2015 From: wk at gnupg.org (Werner Koch) Date: Thu, 06 Aug 2015 09:43:18 +0200 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <20150806023004.GO3033@tyrion> (Peter Schwabe's message of "Thu, 6 Aug 2015 04:30:04 +0200") References: <032301d0cd45$b61659e0$22430da0$@gmail.com> <20150803182624.GW19228@mournblade.imrryr.org> <050601d0ce21$a60fa470$f22eed50$@gmail.com> <057401d0ce4a$2a7692a0$7f63b7e0$@gmail.com> <20150806023004.GO3033@tyrion> Message-ID: <87io8srhex.fsf@vigenere.g10code.de> On Thu, 6 Aug 2015 04:30, peter at cryptojedi.org said: > not pre-hashing depends on the length of the message. For signatures on > short messages (like, e.g., public keys) the overhead is neglible, for > very long messages it approaches a factor of 2. There is another point to consider. When using a smartcard it is obviously better to implement the entire signature algorithm in the smartcard. The whole point of using a smartcard is to better protect the private key. Now, smartcards have a very limited I/O bandwidth and thus it is impossible to feed the card with large data so that the EdDSA algorithm in the card can do its work. You want to feed the card only with a hash. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. From alfonso.degregorio at gmail.com Thu Aug 6 09:16:05 2015 From: alfonso.degregorio at gmail.com (Alfonso De Gregorio) Date: Thu, 6 Aug 2015 13:16:05 +0000 Subject: [Cryptography] What is the format to add multiple signatures (Would PKCS#7 work?) In-Reply-To: References: Message-ID: On Thu, Aug 6, 2015 at 12:19 PM, Puneet Bakshi wrote: ... > Where (means where in ASN1 grammer) can I put name of the signed document in > PKCS7 (or CMS) ? The standard describes how to work with arbitrary octet strings; it doesn't have any notion of file. Which is to say that there is no such thing as 'file name' field in the PKCS#7 / CMS syntax. > When p7s-file is opened using p7s-viewer > (http://www.signfiles.com/p7s-viewer/), it shows "Signed document name" as > "Test Document.docx". This is also shown in screenshot pasted at this link. I guess what that utility does is to remove the PKCS#7 file name extension from the file where the CMS happens to be stored. The viewer then uses the resulting string as the suggested file name for the data content, when in its turn it is stored in a file. 
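
To make Allen's Step 1-3 outline above concrete, here is a minimal sketch of the hash-then-sign flow. It assumes the PyNaCl binding for the Ed25519 part (stock Ed25519, i.e. SHA-512 inside, rather than the Keccak pairing suggested above), uses BLAKE2b-512 from the Python standard library as the wide outer hash, and the field names are placeholders rather than a proposed format:

import hashlib, json, os
from nacl.signing import SigningKey   # assumption: the PyNaCl binding is installed

def sign_long_message(sk: SigningKey, message: bytes, metadata: dict) -> bytes:
    nonce = os.urandom(32)                              # the "Step 0" nonce suggested later in the thread
    digest = hashlib.blake2b(message + nonce).digest()  # Step 1: one pass over the long message, 512-bit output
    short = json.dumps({                                # Step 2: short string = hash + length + metadata
        "hash": digest.hex(),
        "nonce": nonce.hex(),
        "len": len(message),
        **metadata,
    }, sort_keys=True).encode()
    return bytes(sk.sign(short))                        # Step 3: only the short string is signed

sk = SigningKey.generate()
signed = sign_long_message(sk, b"a long message " * 10000,
                           {"msg_id": "42", "sender": "alice"})
sk.verify_key.verify(signed)   # verifies the Ed25519 signature over the short string only

The verifier must also recompute the BLAKE2b digest of the received message plus nonce and compare it with the "hash" field inside the signed string; the signature check alone only covers the short string.
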
On a related note: It may be trivial to trick the viewer application into associating arbitrary file names and/or extensions to the CMS data content. As the interpretation of those octet string is left to the viewer application, extra care should be placed on handling those contents, *including* when the Content Type is signed-data. > Regards, > ~Puneet Alfonso From allenpmd at gmail.com Thu Aug 6 09:51:35 2015 From: allenpmd at gmail.com (Allen) Date: Thu, 6 Aug 2015 09:51:35 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? Message-ID: <014001d0d04f$02b6e9c0$0824bd40$@gmail.com> P.S, I might add that for many applications it would be good to include: Step 0. A pseudo-random nonce is generated and appended to the message. This step would go a long way toward thwarting chosen message attacks, and is a good defense measure in any signature scheme. From henrypaulmadore at gmail.com Thu Aug 6 10:35:43 2015 From: henrypaulmadore at gmail.com (Paul Madore) Date: Thu, 06 Aug 2015 09:35:43 -0500 Subject: [Cryptography] FBI Cracked TrueCrypt (lol) Message-ID: <55C370BF.70402@gmail.com> Hadn't noticed any mention of this on the list, but it appears a Florida newspaper really misunderstands the difference between cracking a password and "hacking TrueCrypt." http://www.sun-sentinel.com/news/fl-christopher-glenn-sentenced-20150731-story.html Glenn read up on the art of espionage and used an elaborate encryption system, TrueCrypt, with a decoy computer drive to distract investigators from another hidden drive that he protected with a complex 30-character password, army counterintelligence expert Gerald Parsons testified. Though prosecutors said Glenn emailed a friend a link to an article headlined "FBI hackers fail to crack TrueCrypt" in October 2011, he wasn't as lucky in his efforts. The FBI's counterintelligence squad in South Florida was able to crack Glenn's code, Parsons said. How much residue is left after a person's brain melts on the revelation that there is some stuff government authority can't corrupt? -Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Thu Aug 6 11:24:18 2015 From: iang at iang.org (ianG) Date: Thu, 06 Aug 2015 16:24:18 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> Message-ID: <55C37C22.2060806@iang.org> On 5/08/2015 21:25 pm, John Kelsey wrote: > I wonder what fraction of the time people invent their own crypto algorithms and protocols, and the result is better than the standard stuff. I'm guessing the fraction is small enough that it needs quite a few significant digits to be distinguishable from zero. It's a little bit difficult to tell because often there is substantial cross-fertilisation, and sometimes successful protocols go from base invention and then into standardisation (some would argue that is the meaning of the word standardisation...). Eg., In terms of successes, SSL, SSH, Skype, PGP, Bitcoin, OTR, were all invented outside standards bodies. Most of those went into standards, but arguably their best work [1] was done before hand. Practically all ciphers are done outside standards bodies, although one could argue that AES was done within a "standards" context. In terms of failures, IPSec, DNSec, Secure Telnet, were invented inside the standards process. Wifi 802.11? 
S/MIME was inside, as far as I know, and could be called as much a success as PGP at invading the email world, debatable. GSM was inside a standards process, and was a success, notwithstanding the bugs and interferences found. So all in all, for my count, the answer is closer to 100% than 0%. The difference might be in the way we define 'better'. I define 'better security' as what is delivered and deployed and protected to users, as opposed to what they miss out on. So SSL is a failure in my definition because it only covers about 1% of browsing [2], and its authentication is too easily bypassed. Whereas others define 'better security' according to some standard model such as CIA in a lab setting. In which case they define SSL as a success because it meets that criteria. Yet others might go further and define 'better' as a loss-rate difference, but we don't have the data to support that as yet, IMHO, except in the case of phishing. It's certainly a very good question and it should be widely debated. I'd even go so far as to say it's a topic that should be researched and mined. Someone needs to do a big table with protocols down the side, and metrics of success across the top.... [3] A masters project? iang [1] by "best" I mean the best bang for buck. [2] may be higher by now, haven't seen any figures on this lately. [3] like this: http://iang.org/ssl/security_metrics.html#balance > On Aug 4, 2015, at 2:06 PM, dj at deadhat.com wrote: > >>>> On 2/08/2015 16:56 pm, Dan McDonald wrote: >>>>> On 1 August 2015 at 21:27, ianG wrote: >>> >>> >>> NIH == not invented here? Yes, I see that. >> >> I'm currently in the process of developing a security protocol spec in a >> standards group, that will be deployed everywhere. >> >> The reverse seems to be true. There is a desire to do some things new >> (specifically to avoid X.509 and NIST curves and make things as brutally >> simple as possible), but there is a NISE (Not invented somewhere else) >> crowd that calls for external specs we can point to for all crypto things. >> This leads down the slippery path to NIST, DSA and X.509. From iang at iang.org Thu Aug 6 11:28:54 2015 From: iang at iang.org (ianG) Date: Thu, 06 Aug 2015 16:28:54 +0100 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55C2EA47.4010200@deadhat.com> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> <55C2EA47.4010200@deadhat.com> Message-ID: <55C37D36.1020205@iang.org> On 6/08/2015 06:01 am, David Johnston wrote: > On 8/5/15 1:25 PM, John Kelsey wrote: >> I wonder what fraction of the time people invent their own crypto >> algorithms and protocols, and the result is better than the standard >> stuff. I'm guessing the fraction is small enough that it needs quite >> a few significant digits to be distinguishable from zero. >> >> --John > In the case of the UPB (unnamed peripheral bus) that I'm referring to, > It's not so much creating new crypto algorithms as composing normal > algorithms in a system that is simpler than the complex specs like X.509 > that lead to complex software that lead to bugs. Just avoiding x.509 and CA stuff is probably the biggest win in terms of ROI, and is enough to justify bringing in a high-paid resource who can do that. It took me about 1 month to write a custom equivalent, and another month to roll it through all my code. Since then, peace on earth. 
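
For a sense of how small a "custom equivalent" can be once the CA machinery is dropped, here is a sketch of the pin-the-certificate shape such replacements often take. This is not ianG's actual scheme, and the host and fingerprint below are placeholders that would have to be distributed, and rotated, out of band:

import hashlib, socket, ssl

PINNED_SHA256 = "0" * 64   # placeholder: the peer's certificate fingerprint, obtained out of band

def peer_matches_pin(host: str, port: int = 443) -> bool:
    """Compare the presented certificate against a pinned SHA-256 fingerprint."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # no X.509 name checking ...
    ctx.verify_mode = ssl.CERT_NONE  # ... and no CA chain; the pin is the only trust root
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256

The usual trade-off applies: the CA problem becomes a key-distribution and key-rotation problem.
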
Replacing both OpenPGP (and x.509) sits right up there on the top investments I've ever made. > Or like DSA that is > very fragile in the face of biased random numbers. Also avoiding NIST > curves with the unexplained constants and trying to use algorithms that > aren't thought to be subject to government interference which could > cause export problems in international markets. > > Many standards (e.g. from IETF, IEEE 802 (before they learned to stop > asking the government for help), SP800-90, X.509 etc. ) have proven > toxic, either cryptographically, structurally or in terms of > implementation complexity. Doing a good job of interoperability > standards writing these days involves taking this into account and being > very circumspect about what parts of what standards can be considered > safe and what parts should be composed in a new fashion that achieves > something simpler, or more scalable or more efficient or all three. > > Standards are a minefield. We need to learn to tread carefully. Amen to that. iang From bear at sonic.net Thu Aug 6 12:46:00 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 06 Aug 2015 09:46:00 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55C2EA47.4010200@deadhat.com> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> <55C2EA47.4010200@deadhat.com> Message-ID: <55C38F48.1020302@sonic.net> > On 8/5/15 1:25 PM, John Kelsey wrote: >> I wonder what fraction of the time people invent their own crypto >> algorithms and protocols, and the result is better than the standard >> stuff. I'm guessing the fraction is small enough that it needs quite >> a few significant digits to be distinguishable from zero. I'm guessing only around three significant digits. A lot of programmers are actually both conscientious and smart. More than one out of a thousand will actually do the research and the homework and do a good job of protocol design. Cipher or RNG design I'm less sure about. It seems that people invent three of these a week and I've essentially never heard of anybody other than a math/crypto pro or a university researcher inventing ANYTHING in that line that there is a good reason to use. Rarely, one of the better ones is secure AFAICT, but none so far are both secure AFAICT and have any other advantage over existing and well-examined secure algorithms. On 08/05/2015 10:01 PM, David Johnston wrote: > Many standards (e.g. from IETF, IEEE 802 (before they learned to stop > asking the government for help), SP800-90, X.509 etc. ) have proven > toxic, either cryptographically, structurally or in terms of > implementation complexity. This is true. But it is true any time software is designed by committee. If one is trying to write one's very first crypto application, I think designing with the specific goal of compliance with any standards over about 10-15 pages long is likely to cause enormous numbers of bugs. The committees do not tend to produce anything simple enough to fully understand all at once. The usually-good programmer habit of breaking things into subproblems and thinking about them separately does not work well in this case. In fact it is, in crypto software, possibly the primary origin of most serious failures. MANY attacks are perpetrated by using "distant" parts of the system together whose interaction the programmer (or sometimes the committee whose standard the programmer was slavishly following) never considered. 
Fewer moving parts would be fewer opportunities for that to happen. So I'm with David here. Standards are good design input but mainly in terms of reminding you of the attacks that the standards are designed to defend against. If you read them merely as instructions on how to build something secure, you will fail. Design it carefully. Then build it. Be sure it's working as designed. Then if and ONLY if you can do it without compromising the design, see if you can actually comply with those standards without breaking its security. If you can't, then either the standard does not describe what you intend to do and need your software to do, or it was a bad standard and you shouldn't be following it anyway. In fact in the case of a bad standard, following it slavishly to be interoperable with all the other things that follow it, is simply adding your application to a pile of toxic waste that will eventually have to be cleared away for the sake of public safety. (X.509 CA process, I'm looking at you....) > Standards are a minefield. We need to learn to tread carefully. Indeed. And that includes conscientiously not implementing any which you know to be bad, insecure, inappropriate for your specific application, or have too many details to hold in your head all at once. Sometimes standards documents are inapplicable, dangerous, or just plain wrong. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From cloos at jhcloos.com Thu Aug 6 13:43:25 2015 From: cloos at jhcloos.com (James Cloos) Date: Thu, 06 Aug 2015 13:43:25 -0400 Subject: [Cryptography] More efficient and just as secure to sign message hash using Ed25519? In-Reply-To: <014001d0d04f$02b6e9c0$0824bd40$@gmail.com> (Allen's message of "Thu, 6 Aug 2015 09:51:35 -0400") References: <014001d0d04f$02b6e9c0$0824bd40$@gmail.com> Message-ID: >>>>> "A" == Allen writes: A> P.S, I might add that for many applications it would be good to include: A> Step 0. A pseudo-random nonce is generated and appended to the message. The recent thread on cfrg suggests that the nonce needs to be prepended rather than apended to avoid attacks. -JimC -- James Cloos OpenPGP: 0x997A9F17ED7DAEA6 From ron at flownet.com Thu Aug 6 18:09:35 2015 From: ron at flownet.com (Ron Garret) Date: Thu, 6 Aug 2015 15:09:35 -0700 Subject: [Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack In-Reply-To: <55C38F48.1020302@sonic.net> References: <55BD9C20.5090205@iang.org> <1FD7D63A-88FA-4456-BBAC-3222FD6F76FD@kebe.com> <55C0C089.7070600@iang.org> <55C2EA47.4010200@deadhat.com> <55C38F48.1020302@sonic.net> Message-ID: <8DD16908-193E-4F93-99C8-D3B36AB6AF30@flownet.com> On Aug 6, 2015, at 9:46 AM, Ray Dillinger wrote: > Design it carefully. Then build it. Be sure it's working as > designed. Then if and ONLY if you can do it without compromising > the design, see if you can actually comply with those standards > without breaking its security. In keeping with this advice, I am pleased to announce that my super-simple (<1000 LOC + TweetNaCl) PGP replacement, SC4, now has a command-line version written in Python. If crypto in the browser made you queasy, this is for you. https://github.com/Spark-Innovations/SC4 NOTE: This is an ALPHA release. It has undergone only very cursory testing (I would really appreciate some help with that, actually). 
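
Not SC4's actual interface -- just a sketch of the kind of TweetNaCl-level round trip that cursory testing needs to cover, written against the PyNaCl binding (an assumption; SC4 itself calls TweetNaCl directly):

from nacl.public import Box, PrivateKey
from nacl.signing import SigningKey

alice, bob = PrivateKey.generate(), PrivateKey.generate()
msg = b"encrypt/decrypt and sign/verify must both round-trip"

# crypto_box round trip: Alice encrypts to Bob, Bob decrypts from Alice
ciphertext = Box(alice, bob.public_key).encrypt(msg)
assert Box(bob, alice.public_key).decrypt(ciphertext) == msg

# signature round trip
signer = SigningKey.generate()
assert signer.verify_key.verify(signer.sign(msg)) == msg

A real test pass would then exercise SC4's own armoring and key-file handling on top of these primitives.
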
The web version of SC4 has been audited, but the Python version has not (though it was mostly ported directly from the Javascript implementation, so it should not have any gaping holes). Feedback of all sorts very much appreciated. rg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ron at flownet.com Thu Aug 6 18:13:27 2015 From: ron at flownet.com (Ron Garret) Date: Thu, 6 Aug 2015 15:13:27 -0700 Subject: [Cryptography] Announcing a command-line version of SC4 Message-ID: <9C3E4972-687D-4FAE-B95A-B8CB2CC847FA@flownet.com> SC4 is my attempt to produce a minimalist and super-easy-to-use replacement for PGP using TweetNaCl as the core crypto. The original SC4 was a web application. Since crypto in the browser makes a lot of people queasy, I have produced a command-line version written in Python. It uses the C TweetNaCl library (the web version obviously had to use a Javascript port). Python is only used to implement the UI. You can find SC4-PY, along with the original web version of SC4, on github: https://github.com/Spark-Innovations/SC4 NOTE: This is an ALPHA release. It has undergone only very cursory testing (I would really appreciate some help with that, actually). The web version of SC4 has been audited, but the Python version has not (though it was mostly ported directly from the Javascript implementation, so it should not have any gaping holes). Feedback of all sorts very much appreciated. rg From iang at iang.org Thu Aug 6 14:31:20 2015 From: iang at iang.org (ianG) Date: Thu, 06 Aug 2015 19:31:20 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> Message-ID: <55C3A7F8.2020802@iang.org> On 5/08/2015 22:41 pm, Michal Bozon wrote: > Hi. > There is new fresh FIPS-202 standardizing SHA-3. It would be useful if someone more informed could post the proper URLs for this. This is what DuckDuckGo MITM'd for me: http://csrc.nist.gov/publications/drafts/fips-202/fips_202_draft.pdf but croogleanalysis says it might be this one: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf I don't think it is this one, which is not revised to include Keccak although it is advertised as revised: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf iang From jonathan.berliner at gmail.com Thu Aug 6 18:58:34 2015 From: jonathan.berliner at gmail.com (Jonathan Berliner) Date: Thu, 6 Aug 2015 18:58:34 -0400 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55C3A7F8.2020802@iang.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55C3A7F8.2020802@iang.org> Message-ID: On Thu, Aug 6, 2015 at 2:31 PM, ianG wrote: > On 5/08/2015 22:41 pm, Michal Bozon wrote: >> >> Hi. >> There is new fresh FIPS-202 standardizing SHA-3. > > > It would be useful if someone more informed could post the proper URLs for > this. 
This is what DuckDuckGo MITM'd for me: > > http://csrc.nist.gov/publications/drafts/fips-202/fips_202_draft.pdf > > but croogleanalysis says it might be this one: > > http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf > > I don't think it is this one, which is not revised to include Keccak > although it is advertised as revised: > > http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf > > > > iang > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography This is the official announcement on the Federal Register: https://www.federalregister.gov/articles/2015/08/05/2015-19181/announcing-approval-of-federal-information-processing-standard-fips-202-sha-3-standard This is the official latest FIPS document: http://dx.doi.org/10.6028/NIST.FIPS.202 This is the official NIST announcement at the Hash Forum listserv: FYI – NIST announces FIPS 202 (the SHA-3 Standard) and FIPS 180-4 in the Federal Register today. Please see the Federal Register Notice for details and for NIST’s comment resolutions for DRAFT FIPS 202 and DRAFT FIPS 180-4. FIPS 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions August 5, 2015 NIST published a Federal Register Notice, 80 FR 46543, on August 5, 2015 announcing the approval of FIPS 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, and a Revision of the Applicability Clause of FIPS 180-4, Secure Hash Standard. FIPS 202 specifies the SHA-3 family of hash functions, as well as mechanisms for other cryptographic functions to be specified in the future. The revision to the Applicability Clause of FIPS 180-4 approves the use of hash functions specified in either FIPS 180-4 or FIPS 202 when a secure hash function is required for the protection of sensitive, unclassified information in Federal applications, including as a component within other cryptographic algorithms and protocols. For details on NIST’s cryptographic hash project, please refer to this page: http://csrc.nist.gov/groups/ST/hash/index.html For details on the SHA-3 standardization effort, please refer to this page: http://csrc.nist.gov/groups/ST/hash/sha-3/sha-3_standardization.html. From ben at links.org Fri Aug 7 07:39:44 2015 From: ben at links.org (Ben Laurie) Date: Fri, 07 Aug 2015 11:39:44 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, 5 Aug 2015 at 15:39 Carlo Contavalli wrote: > On Wed, Aug 5, 2015 at 3:07 AM, Ben Laurie wrote: > > On Wed, 5 Aug 2015 at 03:24 Carlo Contavalli > wrote: > >> > >> The cost on the user is in making sure he is entering the username and > >> password only in "secure boxes", rather than random ones on the web > >> site. > > > > > > This is the core problem - if we could get users to only type their > > passwords into the one true password box, then there are many viable > > solutions to "the password problem". But all attempts to do this so far > have > > been dismal failures. > > Out of curiosity, do you have more details about previous attempts? > Here's a paper that gives a pretty fair overview of the problem: https://cups.cs.cmu.edu/soups/2005/2005proceedings/p77-dhamija.pdf Unfortunately I can't find the study they claim they're going to do in that paper, but I do remember seeing it: it didn't work very well. Which is probably why I can't find it anymore. 
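
Since the sub-thread takes SRP's properties as read, here is a self-contained sketch of the SRP-6a algebra with toy parameters -- not the RFC 5054 groups, and with the padding and confirmation-message details omitted -- showing that the password never crosses the wire yet both ends derive the same key material:

import hashlib, secrets

N = 2**127 - 1          # toy modulus -- far too small for real use
g = 3

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def ib(x: int) -> bytes:
    return x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")

I, P = b"alice", b"correct horse battery staple"
s = secrets.token_bytes(16)                    # salt, stored server-side next to v
x = H(s, ib(H(I, b":", P))) % N                # client-side secret derived from the password
v = pow(g, x, N)                               # verifier: all the server ever stores
k = H(ib(N), ib(g)) % N

a = secrets.randbelow(N); A = pow(g, a, N)                # client ephemeral
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N  # server ephemeral

u = H(ib(A), ib(B)) % N

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
assert S_client == S_server                    # same key material on both sides, password never sent

The server stores only the salt and the verifier v, so a leaked database does not directly yield passwords, and a fake server that never obtained v cannot complete the exchange -- which is the mutual-authentication property under discussion. None of this solves the "one true password box" problem above: the user still has to type the password into the right piece of UI.
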
-------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Fri Aug 7 17:16:27 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Fri, 7 Aug 2015 21:16:27 +0000 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> Message-ID: <20150807211627.GT9139@mournblade.imrryr.org> On Wed, Aug 05, 2015 at 11:41:03PM +0200, Michal Bozon wrote: > In addition to SHA3-{224,256,384,512}, SHAKE-{256,512} were expected. > However, we got SHAKE-{128,256} instead. SHAKE-128 is essentially SHA3-256 with variable length output. SHAKE-256 is essentially SHA3-512 with variable length output. > So in addition to four fixed hash functions with 224 up to 512 bit > security, there are two "expandable-output" functions (XOF) with only > max. 128 vs max. 256 bit security. Not "only", rather "as expected". The name reflects the collision resistance, not the output width, because the latter is variable. > So what is the point of their expansion? (In the Example docs linked in > FIPS-202 appendix E, their output values are expanded to impressive 4096 > bits.) Most likely use case is as DRBG, but perhaps also as a keystream for a stream cipher. > Interesting.. Birthday paradox does not apply here? > Do I have a mistake somewhere? Do they? Variable length output d, with security min(128, d/2). No surprises. -- Viktor. From bear at sonic.net Fri Aug 7 21:41:25 2015 From: bear at sonic.net (Ray Dillinger) Date: Fri, 07 Aug 2015 18:41:25 -0700 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150807211627.GT9139@mournblade.imrryr.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> Message-ID: <55C55E45.4080302@sonic.net> On 08/07/2015 02:16 PM, Viktor Dukhovni wrote: > On Wed, Aug 05, 2015 at 11:41:03PM +0200, Michal Bozon wrote: > Not "only", rather "as expected". The name reflects the collision > resistance, not the output width, because the latter is variable. > >> So what is the point of their expansion? (In the Example docs linked in >> FIPS-202 appendix E, their output values are expanded to impressive 4096 >> bits.) > > Most likely use case is as DRBG, but perhaps also as a keystream > for a stream cipher. > > Variable length output d, with security min(128, d/2). No surprises. It seems counterproductive to me to specify a "hash" function that can produce output longer than its security provides collision resistance for. People are going to make this mistake - and get less collision resistance than they're designing for - because the muddled use case and unfortunate terminology of calling this a "hash" invite this mistake. When people want a hash of some level of collision resistance MANY of them are going to think they should be looking for a hash function that produces a hash of some bit length. There shouldn't be a "hash function" they can select which gives them that bit length without giving them that level of collision resistance. It invites avoidable design errors. Keep the primitives simple so everybody knows exactly what they do and more importantly what they don't do. If you want a PRNG initialized from a hash on some document, you take the hash and use it to initialize a PRNG. Bear -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From cryptography at dukhovni.org Fri Aug 7 22:47:32 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Sat, 8 Aug 2015 02:47:32 +0000 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55C55E45.4080302@sonic.net> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <55C55E45.4080302@sonic.net> Message-ID: <20150808024732.GU9139@mournblade.imrryr.org> On Fri, Aug 07, 2015 at 06:41:25PM -0700, Ray Dillinger wrote: > > Most likely use case is as DRBG, but perhaps also as a keystream > > for a stream cipher. > > > > Variable length output d, with security min(128, d/2). No surprises. > > It seems counterproductive to me to specify a "hash" function that > can produce output longer than its security provides collision > resistance for. People are going to make this mistake - and get > less collision resistance than they're designing for - because > the muddled use case and unfortunate terminology of calling this > a "hash" invite this mistake. The hash functions are the SHA3 ones, the SHAKE functions serve a different (useful) purpose, and generate variable width output at the stated security. NIST is doing the right thing. If you find the novelty disturbing, I expect you'll get used to it. -- Viktor. From michal.bozon at cesnet.cz Sat Aug 8 06:59:07 2015 From: michal.bozon at cesnet.cz (Michal Bozon) Date: Sat, 8 Aug 2015 12:59:07 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150807211627.GT9139@mournblade.imrryr.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> Message-ID: <20150808105906.GA4122@carbon.w2lan.cesnet.cz> On 2015-08-07 Fri 21:16, Viktor Dukhovni wrote: > On Wed, Aug 05, 2015 at 11:41:03PM +0200, Michal Bozon wrote: > > > In addition to SHA3-{224,256,384,512}, SHAKE-{256,512} were expected. > > However, we got SHAKE-{128,256} instead. > > SHAKE-128 is essentially SHA3-256 with variable length output. > SHAKE-256 is essentially SHA3-512 with variable length output. Not sure I can agree here. SHA3-256 =~ Keccak[512](d=256) SHA3-512 =~ Keccak[1024](d=512) SHAKE128(d) =~ Keccak[256](d) SHAKE256(d) =~ Keccak[512](d) (d is output length; Keccak[c]: c is capacity) Best SHA-3 (SHA3-512) is essentially Keccak with capacity 1024 (output fixed to 512 bits though), best SHAKE (SHAKE256) is essentially Keccak with capacity 512. I was just wondering why the Keccak capacity for best extendable output hash function was not chosen to be at least as big as for the best fixed hash function. Michal Bozon > > > So in addition to four fixed hash functions with 224 up to 512 bit > > security, there are two "expandable-output" functions (XOF) with only > > max. 128 vs max. 256 bit security. > > Not "only", rather "as expected". The name reflects the collision > resistance, not the output width, because the latter is variable. > > > So what is the point of their expansion? (In the Example docs linked in > > FIPS-202 appendix E, their output values are expanded to impressive 4096 > > bits.) > > Most likely use case is as DRBG, but perhaps also as a keystream > for a stream cipher. > > > Interesting.. Birthday paradox does not apply here? > > Do I have a mistake somewhere? Do they? > > Variable length output d, with security min(128, d/2). No surprises. 
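
The "variable length output d, with security min(128, d/2)" bookkeeping is easy to poke at directly: Python's hashlib (3.6 and later) exposes the approved XOFs as shake_128 and shake_256, with the output length requested at digest time. The message below is an arbitrary example:

import hashlib

msg = b"example input"

d32  = hashlib.shake_128(msg).hexdigest(32)    # 256 bits of output
d512 = hashlib.shake_128(msg).hexdigest(512)   # 4096 bits of output, DRBG/MGF style
sha  = hashlib.sha3_256(msg).hexdigest()       # fixed-width SHA-3 for comparison

# Stretching the XOF just extends the same output stream; the extra bits do not
# buy security beyond min(128, d/2), because the capacity stays at 256 bits.
assert d512.startswith(d32)

# SHAKE128 truncated to 256 bits is not the same function as SHA3-256, even
# though both are Keccak producing 256 bits here (different capacity and padding).
assert d32 != sha
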
> > -- > Viktor. > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography From pinterkr at gmail.com Sat Aug 8 16:52:30 2015 From: pinterkr at gmail.com (=?utf-8?Q?Kriszti=C3=A1n_Pint=C3=A9r?=) Date: Sat, 8 Aug 2015 22:52:30 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150808105906.GA4122@carbon.w2lan.cesnet.cz> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> Message-ID: <835832901.20150808225230@gmail.com> Michal Bozon (at Saturday, August 8, 2015, 12:59:07 PM): > I was just wondering why the Keccak capacity for best extendable output > hash function was not chosen to be at least as big as for the best fixed > hash function. the reason for the SHAKE's is exactly to have something reasonable, unlike the SHA3 instances, which are not. as it happened, the keccak team submitted stupid parameters, because the NIST call for submissions was unclear, and they didn't want to be disqualified. old hash functions often have larger security against preimage attacks than collision attacks. NIST wanted something that has at least the same security as the SHA2 variants. so the keccak team had to replicate the 256 bit preimage and 128 collision for the SHA-256 drop-in. that requires 512 bit capacity. it is especially crazy for the SHA3-512 version, which now has 512 bit preimage security, which is for all intents and purposes a nonsensical securit level. this comes at a terrible performance hit. it is completely useless. you want one general security against everything. therefore NIST proposed to change the parametrization to have 256bit output, 256 bit capacity for the SHA3-256. that would have a general 128 bit security. this was in agreement with the keccak team's intent. they actually discussed it, and agreed to it. this is how you use keccak if you are a sane person. here comes the crypto celebrity mob. schneier and the like were quick to jump on the "NIST weakens crypto again" bandwagon. the entire thing was shameful. to save its nonexistent reputation, NIST backed off, and decided to standardize the original stupid parameters. congrats to everyone involved, djb included! so to save the day, they added the SHAKE instances as a workaround. they are pretty much what SHA3 should have been. if you don't understand how a sponge works, you are very much free to use the SHA3 instances. but if you want to do actual cryptography, you should choose the SHAKE's. From watsonbladd at gmail.com Sat Aug 8 18:18:38 2015 From: watsonbladd at gmail.com (Watson Ladd) Date: Sat, 8 Aug 2015 15:18:38 -0700 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <835832901.20150808225230@gmail.com> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> Message-ID: On Sat, Aug 8, 2015 at 1:52 PM, Krisztián Pintér wrote: > > Michal Bozon (at Saturday, August 8, 2015, 12:59:07 PM): > >> I was just wondering why the Keccak capacity for best extendable output >> hash function was not chosen to be at least as big as for the best fixed >> hash function. > > > the reason for the SHAKE's is exactly to have something reasonable, > unlike the SHA3 instances, which are not. 
> > as it happened, the keccak team submitted stupid parameters, because > the NIST call for submissions was unclear, and they didn't want to be > disqualified. old hash functions often have larger security against > preimage attacks than collision attacks. NIST wanted something that > has at least the same security as the SHA2 variants. so the keccak > team had to replicate the 256 bit preimage and 128 collision for the > SHA-256 drop-in. that requires 512 bit capacity. > > it is especially crazy for the SHA3-512 version, which now has 512 bit > preimage security, which is for all intents and purposes a nonsensical > securit level. this comes at a terrible performance hit. > > it is completely useless. you want one general security against > everything. therefore NIST proposed to change the parametrization to > have 256bit output, 256 bit capacity for the SHA3-256. that would have > a general 128 bit security. this was in agreement with the keccak > team's intent. they actually discussed it, and agreed to it. this is > how you use keccak if you are a sane person. > > here comes the crypto celebrity mob. schneier and the like were quick > to jump on the "NIST weakens crypto again" bandwagon. the entire thing > was shameful. to save its nonexistent reputation, NIST backed off, and > decided to standardize the original stupid parameters. congrats to > everyone involved, djb included! That's missing part of the story. NIST had eliminated CubeHash on the basis that its preimage resistance was insufficient, in favor of Keccak parameters which had been designed for their ridiculous requirements. This elimination happened going into the final round. Once that requirement was dropped, they would have had to redo a bunch of things to be fair to everyone. > > so to save the day, they added the SHAKE instances as a workaround. > they are pretty much what SHA3 should have been. if you don't > understand how a sponge works, you are very much free to use the SHA3 > instances. but if you want to do actual cryptography, you should > choose the SHAKE's. > > > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography -- "Man is born free, but everywhere he is in chains". --Rousseau. From pinterkr at gmail.com Sat Aug 8 18:53:12 2015 From: pinterkr at gmail.com (=?utf-8?Q?Kriszti=C3=A1n_Pint=C3=A9r?=) Date: Sun, 9 Aug 2015 00:53:12 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> Message-ID: <1047617402.20150809005312@gmail.com> Watson Ladd (at Sunday, August 9, 2015, 12:18:38 AM): >> here comes the crypto celebrity mob. schneier and the like were quick >> to jump on the "NIST weakens crypto again" bandwagon. the entire thing >> was shameful. to save its nonexistent reputation, NIST backed off, and >> decided to standardize the original stupid parameters. congrats to >> everyone involved, djb included! > That's missing part of the story. NIST had eliminated CubeHash on the > basis that its preimage resistance was insufficient, in favor of > Keccak parameters which had been designed for their ridiculous > requirements. This elimination happened going into the final round. 
> Once that requirement was dropped, they would have had to redo a bunch > of things to be fair to everyone. if NIST had restarted the competition, that would have been a good solution, though time consuming. standardizing a subpar algorithm (parameter set) is not a good solution. crypto celebrities babbling just to get clicks on their blogs is also not good. From s.gesemann at gmail.com Sun Aug 9 10:08:35 2015 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Sun, 9 Aug 2015 16:08:35 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> Message-ID: On Wed, Aug 5, 2015 at 11:41 PM, Michal Bozon wrote: > Hi. > There is new fresh FIPS-202 standardizing SHA-3. > > In addition to SHA3-{224,256,384,512}, SHAKE-{256,512} were expected. > However, we got SHAKE-{128,256} instead. > > So in addition to four fixed hash functions with 224 up to 512 bit > security, That's not their security. Their security is 112 up to 256. We don't use 512 bits of output because we need a preimage resistance of 2^512. We use 512 bits of output because they are necessary for collision resistance of 2^256. > there are two "expandable-output" functions (XOF) with only > max. 128 vs max. 256 bit security. 128 and 256 are the "standard security levels" we know from AES already. Even in the quantum computer context, 256 is perfectly fine and 512 rather meaningless. > So what is the point of their expansion? The SHAKEs can be used as a DRBG (deterministic random bit generator) or an MGF (mask generation function, something you use in RSA-OAEP and RSA-PSS, for example). You can also use them if you need a faster hash. Just pick the desired security level (s=128 or s=256), an appropriate digest length d and use SHAKE-s with d bits of output. If you care about collision resistance use d=2s, otherwise d=s should be fine. So, with the SHAKEs you are pretty flexible in that you can choose the security level s and output length d independently for a better security/speed trade-off. Given SHAKE-s with d bits of output you get a 1st and 2nd preimage resistance of 2^min(s, d) and a collision resistance of 2^min(s, d/2). In the quantum computer context (using Grover's algorithm) this should drop down to preimage resistance of 2^min(2s/3, d/2) and a collision resistance of 2^min(2s/3, d/3) I believe. Cheers! sg From iang at iang.org Sun Aug 9 11:26:09 2015 From: iang at iang.org (ianG) Date: Sun, 09 Aug 2015 16:26:09 +0100 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network Message-ID: <55C77111.7060207@iang.org> There's a long post by "cryptostorm_team" that describes a capture of the activity of a CIN or Corruptor-Injector Network. https://cryptostorm.org/viewtopic.php?f=67&t=8713 The short story appears to be malware injected into the router which then proceeds to present a false view of many things, including google sites and chrome downloads. That last part again - the CIN appears to be capable of injecting a special download of Chrome which then participates in the false presentation to user. Given the complexity of modern software I'd say this to be an impossible task except for a very well funded, long term adversary. The implied conclusion is nothing good - if this attack is scalable and scaled, the secure web system (HTTPS+CAs, etc) is no longer capable of defending. 
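
On the "injected download of Chrome" point, the one check still available to an end user is to compare the downloaded installer against a digest obtained over a separate channel the router cannot rewrite. A minimal sketch, with the file name and the published digest as placeholders:

import hashlib

def sha256_file(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# usage (both values are placeholders):
#   sha256_file("chrome_installer.exe") == "<digest published out of band>"

If the reference digest is fetched over the same compromised path, the check proves nothing, so the out-of-band step is the whole point.
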
The implied limitations: the attack works through a pwned router (no hope there), and it may rely on downloading a new pwned browser (slight hope!?). It's pretty clear I don't follow the ins & outs, and could be well off base. But worse than that, the team that wrote the blog post don't have the confidence to say what's really happening. The story is full of "we don't know what's happening here, but..." If true -- if this isn't some monumental failure to follow some new google gyratory security system -- then we have the spectre of a very bad situation: A team that claims to spend their full endeavours on this security stuff is also not able to be certain of what's going on. Even if they're a mediocre bunch of undergrad dropouts with the hubris of gamers, even if you know better, or Kaspersky's got it in the bag, they're still likely more informed than 99% of the corps and 99.99% of the users. What hope the rest? iang From iang at iang.org Mon Aug 10 06:55:36 2015 From: iang at iang.org (ianG) Date: Mon, 10 Aug 2015 11:55:36 +0100 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <0B6DE85F-246E-4492-A60D-05F7B6C413A6@bmt-online.org> References: <55C77111.7060207@iang.org> <0B6DE85F-246E-4492-A60D-05F7B6C413A6@bmt-online.org> Message-ID: <55C88328.4060801@iang.org> On 10/08/2015 05:24 am, Bertrand Mollinier Toublet wrote: > Mmh… they’re off to a bad start: "This certificate identifies itself (via CN field) as *.google.com despite being served during a putative session with google.fr (again, this kind of obvious certificate misconfiguration is all but impossible to imagine google doing in production systems):” > > The certificate served for https://www.google.fr/ *does* have a CN of *.google.com. It does also, correctly, have a SubjectAltName extension including, correctly, *.google.fr. So, no misconfiguration here, nor cause to call it as such. With the rest of the “there’s stuff going on, but we don’t want to talk about it here” general tone of the document, my bullshit detector is ringing loudly. > > And they continue "However, it's notable that the connection does not appear to represent an EV-class certificate. In other words, there's no 'green lock' as we see in any of google's other services.” Ah, nope, it’s not notable. Google does not use EV. > > > Huh. Apparently (surprise!), I’m not the first one to _not_ be convinced by this: https://news.ycombinator.com/item?id=10030820. Happy reads! Yep. Plenty of skepticism there. Let's step back from the situation and ask what's really happening here? Let's say the guys are mostly pretty competent. They've gone down the rabbit hole. Come out hyperventilating. Can't see what's what and what's not. They could have just made a mistake and a better team would have figured it out. But actually ... likely not. If you take a random team across the world and try and figure it out, my guess is you would come up with the same situation: "don't know." Imagine a Security Certified Engineer's exam. Or, contrast this to your vehicle, where you drop it into the mechanic for a checkup. He's supposed to come back and say it's safe and operating fine. The brakes work and will continue to work. The engine won't blow up, the tires are safe. All these things. Even with the snafus of recent Jeep rides - that's still pretty much true. Whereas here - if a customer had been mostly infected, we've got a situation where a mostly competent mechanic (an assumption, I grant) cannot figure out what's happening. 
Can't even point at the correct path. Now, granted, everyone knows their favourite lab that can handle this question, but at what cost? And, here's the clanger - your car mechanic will issue a certificate that it's safe to drive (does so every year for registration) - but those labs aren't going to issue a certificate that the network is clean for any reasonable cost. Google isn't going to declare formally that "everything is clear, it's good." Note how they are fixing a few misconceptions about certs, but to go further than that is probably out of reach. Google is not saying there is no injection. There isn't a CA in sight that is going to put its nose above the parapet. I think we've hit and passed the peak of complexity that is tractable for security. We know that attacks and breaches have been rising rapidly in the last 5 years or so; complexity has been rising since the web was invented. Have we created a situation where only very large players can muster the ability to defend themselves, large attackers can do what they want, and the rest are sheep for slaughter? iang From leichter at lrw.com Mon Aug 10 11:10:36 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 10 Aug 2015 11:10:36 -0400 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55C77111.7060207@iang.org> References: <55C77111.7060207@iang.org> Message-ID: Seen https://news.ycombinator.com/item?id=10030820 for some knowledgable responses. -- Jerry From leichter at lrw.com Mon Aug 10 12:26:10 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 10 Aug 2015 12:26:10 -0400 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55C88328.4060801@iang.org> References: <55C77111.7060207@iang.org> <0B6DE85F-246E-4492-A60D-05F7B6C413A6@bmt-online.org> <55C88328.4060801@iang.org> Message-ID: <82AAA635-BA71-49E3-AA6E-269981B3E3FC@lrw.com> > ...I think we've hit and passed the peak of complexity that is tractable for security. Definitely - but why limit this to "security"? Our ability to *correctly* build *working*, large computerized systems hasn't kept up with our desire for them. Security is an area where this happens to stand out, for a number of reasons - but it's an endemic problem, it's been around for years, and it's not clear how to do better. > We know that attacks and breaches have been rising rapidly in the last 5 years or so; complexity has been rising since the web was invented. Have we created a situation where only very large players can muster the ability to defend themselves, large attackers can do what they want, and the rest are sheep for slaughter? What makes you think even the large players can defend themselves? Complexity - it's not alone - has led to a situation where the attack/defend tradeoff is is all on the attacker's side. This probably won't last - it never has - though one has to be careful about the lessons of history: Network and system architectures may prove more pervasive and thus much harder to change than things like military strategy. -- Jerry From mitch at niftyegg.com Mon Aug 10 17:11:51 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 10 Aug 2015 14:11:51 -0700 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55C77111.7060207@iang.org> References: <55C77111.7060207@iang.org> Message-ID: On Sun, Aug 9, 2015 at 8:26 AM, ianG wrote: > There's a long post by "cryptostorm_team" that describes a capture of the > activity of a CIN or Corruptor-Injector Network. 
> > https://cryptostorm.org/viewtopic.php?f=67&t=8713 > > The short story appears to be malware injected into the router which then > proceeds to present a false view of many things, including google sites and > chrome downloads. > Wow.. trouble .... One short term hack is to find ways to discover these bad certificates and black list them. Another is to cache "good" certificates for famous hosts for a gosh long time and deal with black listed credentials via a pool of trusted neighbors. There is no reason to discard a cert in five min if it is good for five years. At this point I make a point of keeping most bootstrap install download tools. Download of A--> revised to B--> revised to C --> revised to ... N seems a risk. Vendors could improve these downloaders with a mix of hard crypto and some layers of unique knock knock like tricks some with hardwired addresses and layers of keys. https:{Google.com, download.google.com, time.google.com keys.google.com} should not share a common key, key management or even local routers. Same for 100 more big international companies. Next vendors and users will need physical media to anchor to something solidly in their control. Hardware flash may need a hard wire jumper to hobble invading software. The hardware flash for SSD and more seems a future requirement. The only bright side is the attention this stuff is getting. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From frantz at pwpconsult.com Mon Aug 10 18:06:49 2015 From: frantz at pwpconsult.com (Bill Frantz) Date: Mon, 10 Aug 2015 15:06:49 -0700 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <82AAA635-BA71-49E3-AA6E-269981B3E3FC@lrw.com> Message-ID: On 8/10/15 at 9:26 AM, leichter at lrw.com (Jerry Leichter) wrote: >Network and system architectures may prove more pervasive and >thus much harder to change than things like military strategy. I think it is too late for capability model OSs. The change in thinking needed to program in the KeyKOS, CapRos, Coyotos, etc. model is too far from the way people put applications together with Apache, shell scripts etc. and the Unix file system and security models. Never mind the the capability model is almost exactly the object model without globally available objects, a model that most programmers have used. That's how you write a program, not integrate a system. I agree with Tom that the only bright side is the attention these issues are getting. It seems to me that the TLS 1.3 effort is greatly simplifying the protocol. Of course if you must be backward compatible, then it won't help much. We didn't get into this mess in a day and it will take many days to get out of it. A wise man once said, "If you find yourself in a hole, the first thing to do is stop digging." Cheers - Bill ----------------------------------------------------------------------- Bill Frantz | If the site is supported by | Periwinkle (408)356-8506 | ads, you are the product. | 16345 Englewood Ave www.pwpconsult.com | | Los Gatos, CA 95032 From leichter at lrw.com Mon Aug 10 18:35:57 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 10 Aug 2015 18:35:57 -0400 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: References: Message-ID: <8056AEEE-E7BE-482C-9AB6-F1F631071723@lrw.com> >> Network and system architectures may prove more pervasive and thus much harder to change than things like military strategy. 
> > I think it is too late for capability model OSs. The change in thinking needed to program in the KeyKOS, CapRos, Coyotos, etc. model is too far from the way people put applications together with Apache, shell scripts etc. and the Unix file system and security models. Maybe. Both Android and, to an even greater extent, iOS (and to a degree MacOS before that, and on a continuing basis), have done violence to many of the traditional assumptions people have about OS's, processes, file systems, and so on. Note that I'm *not* saying any of these are implementing a capability model - though MacOS with its xpc's is moving in that direction - just that they've implemented *different* models, and programmers have adjusted. So perhaps there's some hope. -- Jerry From jmg at funkthat.com Mon Aug 10 20:31:34 2015 From: jmg at funkthat.com (John-Mark Gurney) Date: Mon, 10 Aug 2015 17:31:34 -0700 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55C77111.7060207@iang.org> References: <55C77111.7060207@iang.org> Message-ID: <20150811003134.GD68509@funkthat.com> ianG wrote this message on Sun, Aug 09, 2015 at 16:26 +0100: > There's a long post by "cryptostorm_team" that describes a capture of > the activity of a CIN or Corruptor-Injector Network. > > https://cryptostorm.org/viewtopic.php?f=67&t=8713 Considering they state: "However, it's notable that the connection does not appear to represent an EV-class certificate. In other words, there's no 'green lock' as we see in any of google's other services. For example:' And then proceed to provide a snapshot of mozilla.org which does present an EV cert... Never a shot of a Google site presenting an EV cert... They failed at even the most basic level.. The implied that Google uses EV certs, which AGL (in the other thread) says Google does not.. -- John-Mark Gurney Voice: +1 415 225 5579 "All that I will do, has been done, All that I have, has not." From jmg at funkthat.com Mon Aug 10 20:38:16 2015 From: jmg at funkthat.com (John-Mark Gurney) Date: Mon, 10 Aug 2015 17:38:16 -0700 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: References: <55C77111.7060207@iang.org> Message-ID: <20150811003816.GE68509@funkthat.com> Tom Mitchell wrote this message on Mon, Aug 10, 2015 at 14:11 -0700: > One short term hack is to find ways to discover these bad certificates and > black list them. There are lots of these projects out there... Might want to look at: https://www.eff.org/observatory http://tack.io/ And Chrome already does this for their own properties: http://googleonlinesecurity.blogspot.com/2011/08/update-on-attempted-man-in-middle.html and: http://blog.chromium.org/2011/06/new-chromium-security-features-june.html Chromium has Google's certs preloaded and pinned to prevent invalid certificates from being used... -- John-Mark Gurney Voice: +1 415 225 5579 "All that I will do, has been done, All that I have, has not." From bascule at gmail.com Tue Aug 11 21:24:35 2015 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 11 Aug 2015 18:24:35 -0700 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, Aug 5, 2015 at 11:51 AM, Ben Laurie wrote: > I use one of those, but it doesn't really help with my other devices. > U2F is just a protocol. Your "other devices" could also act as U2F tokens themselves (e.g. your SmartWatch could act as a U2F token for your SmartPhone). 
Or (potentially) something like a Yubikey could provide U2F over Bluetooth or NFC. > And I'm screwed if I lose it (well, I'm not, because I'll be given > another, but if I were a member of the public I would be). > Buy two and keep another as a backup, then revoke the first when you lose it. But losing credentials is a general problem with any authentication system. -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at links.org Wed Aug 12 00:56:26 2015 From: ben at links.org (Ben Laurie) Date: Wed, 12 Aug 2015 04:56:26 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: On Wed, 12 Aug 2015 at 02:24 Tony Arcieri wrote: > On Wed, Aug 5, 2015 at 11:51 AM, Ben Laurie wrote: > >> I use one of those, but it doesn't really help with my other devices. >> > > U2F is just a protocol. Your "other devices" could also act as U2F tokens > themselves (e.g. your SmartWatch could act as a U2F token for your > SmartPhone). > I don't wear a watch. > Or (potentially) something like a Yubikey could provide U2F over Bluetooth > or NFC. > I'm not sure potential logins are much use to me. :-) > > >> And I'm screwed if I lose it (well, I'm not, because I'll be given >> another, but if I were a member of the public I would be). >> > > Buy two and keep another as a backup, then revoke the first when you lose > it. > So, if I'm on holiday, I do without access for the remaining 2 weeks? > But losing credentials is a general problem with any authentication system. > True, but that doesn't give you a licence to ignore it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at links.org Wed Aug 12 01:33:29 2015 From: ben at links.org (Ben Laurie) Date: Wed, 12 Aug 2015 05:33:29 +0000 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55C77111.7060207@iang.org> References: <55C77111.7060207@iang.org> Message-ID: On Sun, 9 Aug 2015 at 20:25 ianG wrote: > There's a long post by "cryptostorm_team" that describes a capture of > the activity of a CIN or Corruptor-Injector Network. > > https://cryptostorm.org/viewtopic.php?f=67&t=8713 > > The short story appears to be malware injected into the router which > then proceeds to present a false view of many things, including google > sites and chrome downloads. > > That last part again - the CIN appears to be capable of injecting a > special download of Chrome which then participates in the false > presentation to user. Given the complexity of modern software I'd say > this to be an impossible task except for a very well funded, long term > adversary. > Or, actually, it is impossible. That article appears to be complete nonsense. For example: "This certificate identifies itself (via CN field) as *.google.com despite being served during a putative session with google.fr(again, this kind of obvious certificate misconfiguration is all but impossible to imagine google doing in production systems):" Impossible to imagine, but ... true. The certificate is fine, google.fr is a SAN. This supposedly fake certificate, btw, is well known to CT: https://crt.sh/?q=4B9D33E64EF6104E2043BF1E0928924F6D41337A Another example: "http://clients1.google.com/ocsp 404s when loaded.This is not the sort of thing one will find in a legitimately Google-issued certificate, created less than 10 days ago." Oh yes it is. That is completely correct behaviour for an OCSP responder. 
The alleged bad certificate, btw, for future record is: -----BEGIN CERTIFICATE----- MIIGxTCCBa2gAwIBAgIIa4/pt17tKWYwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl cm5ldCBBdXRob3JpdHkgRzIwHhcNMTUwNTA2MTAzMzE1WhcNMTUwODA0MDAwMDAw WjBmMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEVMBMGA1UEAwwMKi5n b29nbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6qywJ47uyuZZh7I4 4f3qvA9T+u3Zy6fI3V0M2W1sQ/fWd9hgs2Ieobbo9lDh3wM912o++qSsLUKA/zud +wa5uqOCBF0wggRZMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjCCAyYG A1UdEQSCAx0wggMZggwqLmdvb2dsZS5jb22CDSouYW5kcm9pZC5jb22CFiouYXBw ZW5naW5lLmdvb2dsZS5jb22CEiouY2xvdWQuZ29vZ2xlLmNvbYIWKi5nb29nbGUt YW5hbHl0aWNzLmNvbYILKi5nb29nbGUuY2GCCyouZ29vZ2xlLmNsgg4qLmdvb2ds ZS5jby5pboIOKi5nb29nbGUuY28uanCCDiouZ29vZ2xlLmNvLnVrgg8qLmdvb2ds ZS5jb20uYXKCDyouZ29vZ2xlLmNvbS5hdYIPKi5nb29nbGUuY29tLmJygg8qLmdv b2dsZS5jb20uY2+CDyouZ29vZ2xlLmNvbS5teIIPKi5nb29nbGUuY29tLnRygg8q Lmdvb2dsZS5jb20udm6CCyouZ29vZ2xlLmRlggsqLmdvb2dsZS5lc4ILKi5nb29n bGUuZnKCCyouZ29vZ2xlLmh1ggsqLmdvb2dsZS5pdIILKi5nb29nbGUubmyCCyou Z29vZ2xlLnBsggsqLmdvb2dsZS5wdIISKi5nb29nbGVhZGFwaXMuY29tgg8qLmdv b2dsZWFwaXMuY26CFCouZ29vZ2xlY29tbWVyY2UuY29tghEqLmdvb2dsZXZpZGVv LmNvbYIMKi5nc3RhdGljLmNugg0qLmdzdGF0aWMuY29tggoqLmd2dDEuY29tggoq Lmd2dDIuY29tghQqLm1ldHJpYy5nc3RhdGljLmNvbYIMKi51cmNoaW4uY29tghAq LnVybC5nb29nbGUuY29tghYqLnlvdXR1YmUtbm9jb29raWUuY29tgg0qLnlvdXR1 YmUuY29tghYqLnlvdXR1YmVlZHVjYXRpb24uY29tggsqLnl0aW1nLmNvbYILYW5k cm9pZC5jb22CBGcuY2+CBmdvby5nbIIUZ29vZ2xlLWFuYWx5dGljcy5jb22CCmdv b2dsZS5jb22CEmdvb2dsZWNvbW1lcmNlLmNvbYIKdXJjaGluLmNvbYIIeW91dHUu YmWCC3lvdXR1YmUuY29tghR5b3V0dWJlZWR1Y2F0aW9uLmNvbTALBgNVHQ8EBAMC B4AwaAYIKwYBBQUHAQEEXDBaMCsGCCsGAQUFBzAChh9odHRwOi8vcGtpLmdvb2ds ZS5jb20vR0lBRzIuY3J0MCsGCCsGAQUFBzABhh9odHRwOi8vY2xpZW50czEuZ29v Z2xlLmNvbS9vY3NwMB0GA1UdDgQWBBRYmgbDFeI+6yulnYNz+u8RSD6b7TAMBgNV HRMBAf8EAjAAMB8GA1UdIwQYMBaAFErdBhYbvPZotXb1gba7Yhq6WoEvMBcGA1Ud IAQQMA4wDAYKKwYBBAHWeQIFATAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtp Lmdvb2dsZS5jb20vR0lBRzIuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQCSRnI2r+DE aeRcZNOWvOrf9XlRnQVRiBjC46eRWp4aP2IU/au5wh8w7hXK8044hcjrlVXl/Z1K oL65aEyFwdKM33Mx7Dle74jL12aSHPitnFJQsFkDQ+oB6ydMz1bk8fH3A5Lq3L03 yIgNwF+pU1MlKL5rbhZ8ekQOw4EwGXVd4PsgAxT0KESx3MD/K9CgSZxf/Z7D00m2 3wHvx9WPjiWBqjqoHBG0YU+asMtPa0GplNpDlTU0qfxFQlhG05446DbjIAAZ1JTQ jhV5+ga4YI/Mvnt4Xf2qEi8Jj1HsdB2Vz94V4NqjyBI2gjPKu5uZFLXHYJY8olUK fPfn9P6xBumP -----END CERTIFICATE----- To be clear, it isn't fake. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oshwm at openmailbox.org Wed Aug 12 02:44:50 2015 From: oshwm at openmailbox.org (oshwm) Date: Wed, 12 Aug 2015 07:44:50 +0100 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: References: <55C77111.7060207@iang.org> Message-ID: <143E44B6-4233-434C-88AE-D889F1B70452@openmailbox.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 yeh but then... crt.sh - owned by comodo comodo involved with privdog mitm comodo issues certs for cloudflare ben laurie works for google none of above is a killer but would suggest not necessarily proof of no wrongdoing. also, is injecting a modified version of chrome into an http stream impossible - i dont think so. do ppl in general check md5 or other sums - nope, only paranoid cpunks :D as for cryptostorm, they generally have been reliable and i would need to read more about CIN before i either dismiss or agree with them on this topic. cheers, oshwm. 
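(On the point about checking sums above: verifying a download against a published digest really is only a few lines. A minimal sketch in Python -- the file name and reference value here are made up, substitute the vendor's published SHA-256:)

    import hashlib

    expected = "..."  # the SHA-256 the vendor publishes out of band (placeholder)
    h = hashlib.sha256()
    with open("chrome-installer.exe", "rb") as f:          # hypothetical file name
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            h.update(chunk)
    print("OK" if h.hexdigest() == expected else "MISMATCH")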
On 12 August 2015 06:33:29 BST, Ben Laurie wrote: >On Sun, 9 Aug 2015 at 20:25 ianG wrote: > >> There's a long post by "cryptostorm_team" that describes a capture of >> the activity of a CIN or Corruptor-Injector Network. >> >> https://cryptostorm.org/viewtopic.php?f=67&t=8713 >> >> The short story appears to be malware injected into the router which >> then proceeds to present a false view of many things, including >google >> sites and chrome downloads. >> >> That last part again - the CIN appears to be capable of injecting a >> special download of Chrome which then participates in the false >> presentation to user. Given the complexity of modern software I'd >say >> this to be an impossible task except for a very well funded, long >term >> adversary. >> > >Or, actually, it is impossible. > >That article appears to be complete nonsense. > >For example: > >"This certificate identifies itself (via CN field) as *.google.com >despite >being served during a putative session with google.fr(again, this kind >of >obvious certificate misconfiguration is all but impossible to imagine >google doing in production systems):" > >Impossible to imagine, but ... true. The certificate is fine, google.fr >is >a SAN. > >This supposedly fake certificate, btw, is well known to CT: > >https://crt.sh/?q=4B9D33E64EF6104E2043BF1E0928924F6D41337A > >Another example: > >"http://clients1.google.com/ocsp 404s when loaded.This is not the sort >of >thing one will find in a legitimately Google-issued certificate, >created >less than 10 days ago." > >Oh yes it is. That is completely correct behaviour for an OCSP >responder. > >The alleged bad certificate, btw, for future record is: > >-----BEGIN CERTIFICATE----- >MIIGxTCCBa2gAwIBAgIIa4/pt17tKWYwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE >BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl >cm5ldCBBdXRob3JpdHkgRzIwHhcNMTUwNTA2MTAzMzE1WhcNMTUwODA0MDAwMDAw >WjBmMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN >TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEVMBMGA1UEAwwMKi5n >b29nbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6qywJ47uyuZZh7I4 >4f3qvA9T+u3Zy6fI3V0M2W1sQ/fWd9hgs2Ieobbo9lDh3wM912o++qSsLUKA/zud >+wa5uqOCBF0wggRZMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjCCAyYG >A1UdEQSCAx0wggMZggwqLmdvb2dsZS5jb22CDSouYW5kcm9pZC5jb22CFiouYXBw >ZW5naW5lLmdvb2dsZS5jb22CEiouY2xvdWQuZ29vZ2xlLmNvbYIWKi5nb29nbGUt >YW5hbHl0aWNzLmNvbYILKi5nb29nbGUuY2GCCyouZ29vZ2xlLmNsgg4qLmdvb2ds >ZS5jby5pboIOKi5nb29nbGUuY28uanCCDiouZ29vZ2xlLmNvLnVrgg8qLmdvb2ds >ZS5jb20uYXKCDyouZ29vZ2xlLmNvbS5hdYIPKi5nb29nbGUuY29tLmJygg8qLmdv >b2dsZS5jb20uY2+CDyouZ29vZ2xlLmNvbS5teIIPKi5nb29nbGUuY29tLnRygg8q >Lmdvb2dsZS5jb20udm6CCyouZ29vZ2xlLmRlggsqLmdvb2dsZS5lc4ILKi5nb29n >bGUuZnKCCyouZ29vZ2xlLmh1ggsqLmdvb2dsZS5pdIILKi5nb29nbGUubmyCCyou >Z29vZ2xlLnBsggsqLmdvb2dsZS5wdIISKi5nb29nbGVhZGFwaXMuY29tgg8qLmdv >b2dsZWFwaXMuY26CFCouZ29vZ2xlY29tbWVyY2UuY29tghEqLmdvb2dsZXZpZGVv >LmNvbYIMKi5nc3RhdGljLmNugg0qLmdzdGF0aWMuY29tggoqLmd2dDEuY29tggoq >Lmd2dDIuY29tghQqLm1ldHJpYy5nc3RhdGljLmNvbYIMKi51cmNoaW4uY29tghAq >LnVybC5nb29nbGUuY29tghYqLnlvdXR1YmUtbm9jb29raWUuY29tgg0qLnlvdXR1 >YmUuY29tghYqLnlvdXR1YmVlZHVjYXRpb24uY29tggsqLnl0aW1nLmNvbYILYW5k >cm9pZC5jb22CBGcuY2+CBmdvby5nbIIUZ29vZ2xlLWFuYWx5dGljcy5jb22CCmdv >b2dsZS5jb22CEmdvb2dsZWNvbW1lcmNlLmNvbYIKdXJjaGluLmNvbYIIeW91dHUu >YmWCC3lvdXR1YmUuY29tghR5b3V0dWJlZWR1Y2F0aW9uLmNvbTALBgNVHQ8EBAMC >B4AwaAYIKwYBBQUHAQEEXDBaMCsGCCsGAQUFBzAChh9odHRwOi8vcGtpLmdvb2ds >ZS5jb20vR0lBRzIuY3J0MCsGCCsGAQUFBzABhh9odHRwOi8vY2xpZW50czEuZ29v 
>Z2xlLmNvbS9vY3NwMB0GA1UdDgQWBBRYmgbDFeI+6yulnYNz+u8RSD6b7TAMBgNV >HRMBAf8EAjAAMB8GA1UdIwQYMBaAFErdBhYbvPZotXb1gba7Yhq6WoEvMBcGA1Ud >IAQQMA4wDAYKKwYBBAHWeQIFATAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtp >Lmdvb2dsZS5jb20vR0lBRzIuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQCSRnI2r+DE >aeRcZNOWvOrf9XlRnQVRiBjC46eRWp4aP2IU/au5wh8w7hXK8044hcjrlVXl/Z1K >oL65aEyFwdKM33Mx7Dle74jL12aSHPitnFJQsFkDQ+oB6ydMz1bk8fH3A5Lq3L03 >yIgNwF+pU1MlKL5rbhZ8ekQOw4EwGXVd4PsgAxT0KESx3MD/K9CgSZxf/Z7D00m2 >3wHvx9WPjiWBqjqoHBG0YU+asMtPa0GplNpDlTU0qfxFQlhG05446DbjIAAZ1JTQ >jhV5+ga4YI/Mvnt4Xf2qEi8Jj1HsdB2Vz94V4NqjyBI2gjPKu5uZFLXHYJY8olUK >fPfn9P6xBumP >-----END CERTIFICATE----- > >To be clear, it isn't fake. > > >------------------------------------------------------------------------ > >_______________________________________________ >The cryptography mailing list >cryptography at metzdowd.com >http://www.metzdowd.com/mailman/listinfo/cryptography -----BEGIN PGP SIGNATURE----- Version: APG v1.1.1 iQI7BAEBCgAlBQJVyutiHhxvc2h3bSA8b3Nod21Ab3Blbm1haWxib3gub3JnPgAK CRAqeAcYSpG1iPIxD/0YSVCvVamvnkyTg86a4MWMKSGcmSXuAwfTi4YxPh4aUk37 zMWqp9sYqld1GoH7hJjRUDdJILjVwWSdCztGjIqCTl8dBlJChva7LfMQCYTC6K6d O1dHvBVAaOTJ5iBk8ZdfSlDIoJnLU1aNAe+Fd7hXsbMFBzH885WZaK+A6wMuMqP1 ZltsBUFP44MO/qOU8Y2MRj7viG+hX2ol/GsVd/M4SYwPTKXR2eAjyRyNyNbYUX9b rKlhF8ERFa04PSK8wsYaXGNSTvyP3J81h0MXG1eKPizIqKiyhw1xqCaOxN5s2iY3 nfccZ9+vVd8KC4zbpO6TWJbGNFld/eHIe7E63CbvivYlqKcjU/TQynWP+zIHigwK et1zDi/XiKdDlumwYstx3IDrirIwr+VAx+IZohKYQxNn9G0hg2seoZ7pSKWiYavw 5zLZf/6Wbo3XXrOHlS+w0vG5twx66bM57QuCc0Zof9/bxlKw3Y1mESvhnk1SKVVi K1x1/XjWHFXn67JcfGBynPKml4drQQhV87rE5reMeunGHe8vISYJVYhgPeIz8wPu MYOUepfkqpgYdisswEkskjl5vZcgAagpEXUmz+EEygMpsD3yCUSeNSAcfU0wRJia Z/+b4zfM7y8NOcJvdnkYVZdBVLA5/gzuGPcoTlEJZ7aBCMSfV6igXmTFm8j2wQ== =q6vf -----END PGP SIGNATURE----- From michal.bozon at cesnet.cz Wed Aug 12 05:47:24 2015 From: michal.bozon at cesnet.cz (Michal Bozon) Date: Wed, 12 Aug 2015 11:47:24 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <835832901.20150808225230@gmail.com> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> Message-ID: <20150812094724.GE4122@carbon.w2lan.cesnet.cz> On 2015-08-08 Sat 22:52, Krisztián Pintér wrote: > > Michal Bozon (at Saturday, August 8, 2015, 12:59:07 PM): > > > I was just wondering why the Keccak capacity for best extendable output > > hash function was not chosen to be at least as big as for the best fixed > > hash function. > > > the reason for the SHAKE's is exactly to have something reasonable, > unlike the SHA3 instances, which are not. > > as it happened, the keccak team submitted stupid parameters, because > the NIST call for submissions was unclear, and they didn't want to be > disqualified. old hash functions often have larger security against > preimage attacks than collision attacks. NIST wanted something that > has at least the same security as the SHA2 variants. so the keccak > team had to replicate the 256 bit preimage and 128 collision for the > SHA-256 drop-in. that requires 512 bit capacity. > > it is especially crazy for the SHA3-512 version, which now has 512 bit > preimage security, which is for all intents and purposes a nonsensical > securit level. this comes at a terrible performance hit. > > it is completely useless. you want one general security against > everything. 
therefore NIST proposed to change the parametrization to > have 256bit output, 256 bit capacity for the SHA3-256. that would have > a general 128 bit security. this was in agreement with the keccak > team's intent. they actually discussed it, and agreed to it. this is > how you use keccak if you are a sane person. > > here comes the crypto celebrity mob. schneier and the like were quick > to jump on the "NIST weakens crypto again" bandwagon. the entire thing > was shameful. to save its nonexistent reputation, NIST backed off, and > decided to standardize the original stupid parameters. congrats to > everyone involved, djb included! Thanks for brief history intro. However, I do not think that overkill security is useless, nonsensical and stupid. The algorithm becomes useless when the algorithm is sufficiently broken. > > so to save the day, they added the SHAKE instances as a workaround. > they are pretty much what SHA3 should have been. if you don't > understand how a sponge works, you are very much free to use the SHA3 > instances. but if you want to do actual cryptography, you should > choose the SHAKE's. > > > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography From pinterkr at gmail.com Wed Aug 12 07:23:32 2015 From: pinterkr at gmail.com (=?UTF-8?B?S3Jpc3p0acOhbiBQaW50w6ly?=) Date: Wed, 12 Aug 2015 13:23:32 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150812094724.GE4122@carbon.w2lan.cesnet.cz> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> <20150812094724.GE4122@carbon.w2lan.cesnet.cz> Message-ID: On Wed, Aug 12, 2015 at 11:47 AM, Michal Bozon wrote: > However, I do not think that overkill security is useless, nonsensical > and stupid. The algorithm becomes useless when the algorithm is sufficiently > broken. well, in a sense, it is. overkill is by definition means no added value. if it has any significant chance to be useful, we call it security margin. overkill means you pay with loss of performance, for nothing. it also means that some people will not be able to use it (performance budget does not allow), so need to fall back on older or less used algorithms. a broken algorithm is broken. it is much worse than stupid or useless in cryptography. stupid has some value. broken has none. addition: afaik nist at one point considered adding a remark that shakes are the preferred primitives. it is apparently missing from the final document. which i find unfortunate. From thierry.moreau at connotech.com Wed Aug 12 10:25:31 2015 From: thierry.moreau at connotech.com (Thierry Moreau) Date: Wed, 12 Aug 2015 14:25:31 +0000 Subject: [Cryptography] SRP for mutual authentication - as an alternative / addition to certificates? In-Reply-To: References: Message-ID: <55CB575B.3080601@connotech.com> On the relevance of the fidoalliance.org initiative, On 08/12/15 04:56, Ben Laurie and Tony Arcieri wrote: > > > On Wed, 12 Aug 2015 at 02:24 Tony Arcieri > wrote: > > On Wed, Aug 5, 2015 at 11:51 AM, Ben Laurie > wrote: > > I use one of those, but it doesn't really help with my other > devices. > > > U2F is just a protocol. Your "other devices" could also act as U2F > tokens themselves (e.g. your SmartWatch could act as a U2F token for > your SmartPhone). > > > I don't wear a watch. 
> > Or (potentially) something like a Yubikey could provide U2F over > Bluetooth or NFC. > > > I'm not sure potential logins are much use to me. :-) > > > And I'm screwed if I lose it (well, I'm not, because I'll be > given another, but if I were a member of the public I would be). > > > Buy two and keep another as a backup, then revoke the first when you > lose it. > > > So, if I'm on holiday, I do without access for the remaining 2 weeks? > > But losing credentials is a general problem with any authentication > system. > > > True, but that doesn't give you a licence to ignore it. > My two points: 1) general feedback of UAF / U2F 2) specific comment on Ben objections 1) general feedback From a very superficial look at the technology, the UAF / U2F approach has some good points: - by relying on a public key digital signature for routine authentication (login), it rests on a very effective password phishing countermeasure (maybe the only effective countermeasure), - it uses client private signature keys without the concept of client security certificate, something I refer to as the "first party certification" paradigm, - it managed to get some momentum as an industry alliance. It may also have limitations: - the proof of possession at the registration phase provides no protection against impersonation attack at this phase, - while the client side device (could be a software emulation) is touted as requiring a form of biometric or PIN enablement, this requirement is hardly enforceable by the protocol. Both of these limitations appear minor if we envision the technology as a replacement for password-only authentication for minimal security applications, but may become more serious when the added security attracts higher valued services and/or when the added perceived security induces a relaxation of vigilance for the limitations. 2) Specific comment on recovery of lost authenticator The client-side implementation may include private signature key backup facilities. Obviously, there is an implicit vulnerability with this avenue, but nonetheless that's an ever-present mitigation strategy for the ever present operational threat of a private key loss incident. Maybe Ben should have figured it by himself; likely he did but was arguing from an ordinary user perspective. The UAF / U2F approach seems to defer the difficult user interface for managing a private signature key to the implementation at the client side. Do we have some anecdotal experience in user interface for managing bare private keys (not linked to a security certificate, and without cryptoperiod expiration requirements)? - Thierry Moreau From dj at deadhat.com Wed Aug 12 19:55:55 2015 From: dj at deadhat.com (dj at deadhat.com) Date: Wed, 12 Aug 2015 23:55:55 -0000 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> <20150812094724.GE4122@carbon.w2lan.cesnet.cz> Message-ID: > addition: afaik nist at one point considered adding a remark that > shakes are the preferred primitives. it is apparently missing from the > final document. which i find unfortunate. However in my experience so far, the shakes are the preferred primitives. 
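(As an illustration of how light the XOF interface is in practice -- a minimal sketch, assuming a Python build whose hashlib exposes the SHAKEs:)

    import hashlib

    x = hashlib.shake_128(b"some message")   # SHAKE-128 used as an XOF
    print(x.hexdigest(16))   # take 128 bits of output
    print(x.hexdigest(32))   # or 256 bits; the shorter output is a prefix of the longer one

The caller picks the output length d to suit the protocol, independently of the capacity.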
When you're getting a room of people in a standards group to first agree on a minimum security strength (say O(2^128)) then to agree on a hash, taking recent history into account and looking to the future deployments, the shakes are the obvious choice and shake128 has already been adopted in one standards body I'm involved in. From waywardgeek at gmail.com Wed Aug 12 21:12:13 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 12 Aug 2015 18:12:13 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <20150630182121.GG14121@mournblade.imrryr.org> References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> Message-ID: I took my geometry based attack further today and found some things I think are very cool. In particular, in an Edwards curve with negative d (squished circle, not fat one), I set z = -sqrt(d)xy, so that the Edwards curve points map onto the unit sphere. I found that when I add a small delta (like the point (0.0001, ~0), then measure the distance traversed on the sphere, it is always equal no matter what point I start from, once I divide by 1/sqrt(x^2 + y^2). I computed the line integral on the sphere from the Edwards curve origin (0, 1, 0) to an arbitrary point using Wolfram's awesome integration toolkit. It resulted in a closed form solution, but unfortunately involves an Elliptic integral. This was the only part that I can't compute using modular arithmetic. Had it resulted in an equation that was modular arithmetic friendly, I think that might result in a significant break of elliptic curve crypto that can be mapped to Edwards curves. The idea would have been to find the modular distance from the origin of the generator point, and also of the user's publiic key point. At that point, I think we've mapped the problem to regular modular arithmetic in one variable. But... it didn't work. Was fun, though :) I used Wolfram to evaluate the path integral for several multiples of the generator point, and indeed, they are clearly multiples of a constant. Bill Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Thu Aug 13 02:34:45 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Wed, 12 Aug 2015 23:34:45 -0700 Subject: [Cryptography] Why is ECC secure? Message-ID: Quite bluntly, millennia have been spent towards prime numbers. The history of ECC is quite short. The history of post-quantum prime is even shorter. Prime numbers came before modern machine-assisted cryptanalysis. The thing that really makes me nervous is AES. The biclique attack shows that attacks can be combined. The mix columns only provide diffusion if each byte is not equal, as a result, the weak key schedule prevent inter-round symmetry. AES seems to have too many weak components. And comments in the Rijndael specification that "The cipher is fully 'self-supporting'. It does not make use of another cryptographic component, S-boxes 'lent' from well-reputed ciphers, bits obtained from Rand tables, digits of π or any other such jokes." Or that "The polynomial m ( x ) (‘11B’) for the multiplication in GF(2 8 ) is the first one of the list of irreducible polynomials of degree 8, given in [LiNi86, p. 378]." I even find Speck to be suspicious. Even SHA-1 is a block cipher, and it is ARX. But the NSA says that without Threefish's design of several sequential operations, they wouldn't have developed Speck. 
They certainly have the resources to bruteforce every possible ARX function to see if it meets any tests they themselves developed. There also seems to be a persistent insistence on having as a maximum, 128-bit block widths. When there is a logical contradiction, suspicions must be raised and items looked into. Math teaches reasoning, yes? But for Dual EC, I think it is best to use Blum Blum Shub instead. From ryacko at gmail.com Thu Aug 13 02:37:30 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Wed, 12 Aug 2015 23:37:30 -0700 Subject: [Cryptography] Speculation about Baton Block Cipher Message-ID: https://en.wikipedia.org/wiki/BATON I think in modern terms, according to the above wikipedia page: BATON is a family of authenticated encryption ciphers, with a variable block width, and accepts a tweak as an input? To think that since 1995 the NSA has a cipher that the civilian cryptographic community is on the verge of accepting! And this is before the AES competition as well! It's only been 20 years anyway. From waywardgeek at gmail.com Thu Aug 13 11:29:13 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Thu, 13 Aug 2015 08:29:13 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> Message-ID: Just for completeness, here's my notes on the math: x^2 + y^2 = 1 + dx^2y^2 Define a curve C as follows: let z' = -sqrt(d)xy x^2 + y^2 + z^2 = 1 So, it is the unit sphere. Let (0, 1, 0) be the origin point on the curve C is a path inscribed on a unit sphere with a cool property. Any point on the Edwards curve corresponds to a point on C and can be trivially computed using modular arithmetic. Edwards curve addition is equivalent adding the lengths from the origin to the two points on the sphere, weighted by a simple weighting factor. The weight is 1/|(x, y)|. If the length, computed in modular arithmetic, of both the generator and public key point are known, then computing the discrete log can be done using regular techniques such as index calculus. If this where to happen, the strength of Edwards curve compatible EC crypto would plummet, as we typically only use 256 bits in EC, while we need more like 2048 bit to defend against index calculus. So, can we find the line integral from the origin to (x, y, z) given x and y using modular arithmetic? y = sqrt((1 - x^2)/(1 + x^2)) |(x, y)| = sqrt(x^2 + (1-x^2)/(1+x^2)) = sqrt((x^2 + x^4 + 1 - x^2)/(1+x^2)) = sqrt((x^4 + 1)/(x^2 + 1)) z = xy = x*sqrt((1 - x^2)/(1 + x^2)) x' = 1 y' = -(2 x)/(Sqrt[(1 - x^2)/(1 + x^2)] (1 + x^2)^2) z' = (-x^4-2 x^2+1)/(sqrt((1-x^2)/(x^2+1)) (x^2+1)^2) integrate sqrt(x'^2 + y'^2 + z'^2)/sqrt(x^2 + y^2) = sqrt(1 + ((2 x)/(Sqrt[(1 - x^2)/(1 + x^2)] (1 + x^2)^2))^2 + ((-x^4-2 x^2+1)/(sqrt((1-x^2)/(x^2+1)) (x^2+1)^2))^2)/sqrt((x^4 + 1)/(x^2 + 1)) Plugging the above into Wolfram's integral calculator results in: = (Sqrt[2 - 2 x^4] Sqrt[-((1 + x^4)/((-1 + x^2) (1 + x^2)^2))] EllipticF[ArcSin[x], -1])/Sqrt[(1 + x^4)/(1 + x^2)] The ArcSin and especially the EllipticF are functions that I don't know how to compute using modular arithmetic. There are various modular-arithmetic friendly infinite series expansions. Are there any where we can reduce to a reasonable finite equation using modular arithmetic? Bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cryptography at dukhovni.org Thu Aug 13 11:33:30 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Thu, 13 Aug 2015 15:33:30 +0000 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: Message-ID: <20150813153330.GY9139@mournblade.imrryr.org> On Wed, Aug 12, 2015 at 11:34:45PM -0700, Ryan Carboni wrote: > Quite bluntly, millennia have been spent towards prime numbers. > > The history of ECC is quite short. The history of post-quantum prime > is even shorter. The history of both RSA and ECC is quite short. Prime numbers indeed go back to antiquity, but deep insights into prime number theory start with Fermat, and the foundations of modern number theory are laid by Euler, Legendre, Gauss and Dirichlet in the late 18th and the 19th century. Analytic number theory and group theory are both 19th century advances. Around the same time Elliptic curves are studied by Abel and Weierstrass in the 19th century. And yes, many of the major advances in the arithmetic of Elliptic Curves are then made in the early to mid 20th centuries. Significant attention to and progress in factoring algorithms (quadratic sieve, ECM, and GNFS) is quite recent. So just because the concept of prime number dates back to antiquity, while Elliptic curves do not, it is I think a false meme that therefore we have a multi-millennium lead on understanding RSA vs. ECC. (It is amusing in this context to note the role of ECM in factoring composites). > Prime numbers came before modern machine-assisted cryptanalysis. This seems irrelevant; elliptic curves date back to the mid 18th century, so what? -- Viktor.

From waywardgeek at gmail.com Thu Aug 13 13:34:59 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Thu, 13 Aug 2015 10:34:59 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> Message-ID: I simplified some of the math above. For d == -1 (it's only slightly more complex for d != -1), the equation to integrate in the path from the origin to the elliptic curve point simplifies to:

    sqrt(2)/sqrt(1 - x^4)

The line integral simplifies to:

    Sqrt[2] EllipticF[ArcSin[x], -1]

Evaluating this at the first four multiples of the generator (which has x = 0.4): path integral over 0 .. 1 = 1.85407, which is Sqrt[2] EllipticK[-1]

    x              y               path integral   ratio
    0.4            0.8509629434    0.567149
    0.7699820705   0.5252257314    1.1343          2.0000035264
    0.9884198756   0.1079220526    1.70145         3.0000052896
    0.9175950827   -0.2928953491   2.26859         3.9999894208

Now, if I could just figure out how to evaluate sqrt(2)*EllipticF(arcsin(x), -1) mod p :) -------------- next part -------------- An HTML attachment was scrubbed... URL:

From dave at horsfall.org Thu Aug 13 14:10:12 2015 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 14 Aug 2015 04:10:12 +1000 (EST) Subject: [Cryptography] The Code Talkers Message-ID: Today (well, in this timezone) is National Navajo Code Talkers Day. Remember them? Well, the Japanese certainly couldn't, because it was an obscure language spoken by only a few people, so it was, well, crypto in the clear, I suppose... -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer" RIP Cecil the Lion; he was in pain for two days, thanks to some brave hunter.
From mitch at niftyegg.com Thu Aug 13 14:54:19 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Thu, 13 Aug 2015 11:54:19 -0700 Subject: [Cryptography] The Code Talkers In-Reply-To: References: Message-ID: Gack I top posted my previous... this is an NPR article on the last Talker. http://www.npr.org/sections/thetwo-way/2014/06/04/318873830/last-of-the-navajo-code-talkers-dies-at-93 -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Thu Aug 13 16:46:28 2015 From: bascule at gmail.com (Tony Arcieri) Date: Thu, 13 Aug 2015 13:46:28 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <20150813153330.GY9139@mournblade.imrryr.org> References: <20150813153330.GY9139@mournblade.imrryr.org> Message-ID: On Thu, Aug 13, 2015 at 8:33 AM, Viktor Dukhovni wrote: > On Wed, Aug 12, 2015 at 11:34:45PM -0700, Ryan Carboni wrote: > > > Quite bluntly, millennia have been spent towards prime numbers. > > > > The history of ECC is quite short. The history of post-quantum prime > > is even shorter. > > The history of both RSA and ECC is quite short. Prime numbers > indeed go back to antiquity, but deep insights into prime number > theory start with Fermat, and the foundations of modern number > theory are laid by Euler, Legendre, Gauss and Dirichlet in the late > 18th and the 19th century. Analytic number theory and group theory > are both 19th century advances. Around the same time Elliptic > curves are studied by Abel and Weierstress in the 19th century. It's also important to note that both RSA and ECC use prime numbers (specifically prime fields in the latter's case). In many ways I think they can be seen as special cases of a single bigger problem (which I believe is the hidden subgroup problem, correct me if I'm wrong) When it comes to cryptanalysis, the real question is the amount of work that has gone into breaking things like the RSA trapdoor function as opposed to just "prime numbers" or factoring. Both ECC and RSA use prime numbers, and the discrete logarithm problem has probably received a similar amount of study to factoring (and indeed there are spooky similarities between these problems too). -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From fergdawgster at mykolab.com Thu Aug 13 16:57:56 2015 From: fergdawgster at mykolab.com (Paul Ferguson) Date: Thu, 13 Aug 2015 13:57:56 -0700 Subject: [Cryptography] Medieval Sword contains Cryptic Code. British Library appeals for help to crack it. Message-ID: <55CD04D4.3030609@mykolab.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Interesting article, pointer via Schneier's blog led me to: http://www.ancient-origins.net/news-general/medieval-sword-contains-cryp tic-code-british-library-appeals-help-crack-it-003571 Via: https://www.schneier.com/blog/archives/2015/08/cryptography_fr.html FYI, - - ferg - -- Paul Ferguson PGP Public Key ID: 0x54DC85B2 Key fingerprint: 19EC 2945 FEE8 D6C8 58A1 CE53 2896 AC75 54DC 85B2 -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iF4EAREIAAYFAlXNBNQACgkQKJasdVTchbJDtQD/aT47inzPelp3w0rEahO2aTnh KOMQ7Ace+KwlhNMEIg8BAIF3qxymBVJc5eZg2b9JgXGvR3YYjUMo7J4+EuODWC1P =4vd7 -----END PGP SIGNATURE----- From mitch at niftyegg.com Thu Aug 13 18:06:07 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Thu, 13 Aug 2015 15:06:07 -0700 Subject: [Cryptography] The Code Talkers In-Reply-To: References: Message-ID: We explored the difference between code and cryptography previously... 
That should not diminish the work that our Code Talkers did when needed. I believe the last WW2 talker has passed. I hope I am wrong but to remember them and why we needed them is a sobering memory. My Dad fought in the islands in a unit that suffered 600% casualties and was even bombarded by friendly naval gun fire because of a communication screwup. Code, languages, cryptography all have a common need and that is communication and reliability. Secrecy also gets involved but without communication and reliability .... Well enough said by me... it is worth looking at the Youtube and other on line communication including alternate words, authentication, and other quality work that made it all work. The life long learning investment in an uncommon language was an important part of the whole. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Thu Aug 13 19:26:23 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Thu, 13 Aug 2015 23:26:23 +0000 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> Message-ID: <20150813232623.GZ9139@mournblade.imrryr.org> On Wed, Aug 12, 2015 at 06:12:13PM -0700, Bill Cox wrote: > I used Wolfram to evaluate the path integral for several multiples of the > generator point, and indeed, they are clearly multiples of a constant. An Edwards curve over the reals is a compact subset of R^2 bounded away from (0,0). Therefore, your scaled metric gives the image of the curve on S^2 a finite diameter, but there are points of infinite order on the curve when the generator "G" is not a torsion element. Therefore, any proportionality between "n" and the "distance" of "nG" from some reference point (pick any continuous metric), fails for large enough "n". Thus, before we even consider whether any of this applies to the discrete case, it seems clear that this must fail in the continuous case. -- Viktor. From bear at sonic.net Thu Aug 13 22:57:32 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 13 Aug 2015 19:57:32 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150813153330.GY9139@mournblade.imrryr.org> Message-ID: <55CD591C.2050708@sonic.net> On 08/13/2015 01:46 PM, Tony Arcieri wrote: > When it comes to cryptanalysis, the real question is the amount of work > that has gone into breaking things like the RSA trapdoor function as > opposed to just "prime numbers" or factoring. Both ECC and RSA use prime > numbers, and the discrete logarithm problem has probably received a similar > amount of study to factoring (and indeed there are spooky similarities > between these problems too). One thing to point out is that RSA is at least as hard as factoring because if you can solve RSA, you can use the solution to factor its modulus. IOW, there can be no shortcut that makes it easier than factoring. But that doesn't rule out the possibility that factoring may still be easier than the best way we know how to do it now. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bear at sonic.net Thu Aug 13 23:06:36 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 13 Aug 2015 20:06:36 -0700 Subject: [Cryptography] Medieval Sword contains Cryptic Code. 
British Library appeals for help to crack it. In-Reply-To: <55CD04D4.3030609@mykolab.com> References: <55CD04D4.3030609@mykolab.com> Message-ID: <55CD5B3C.2080200@sonic.net> On 08/13/2015 01:57 PM, Paul Ferguson wrote: > Interesting article, pointer via Schneier's blog led me to: > > http://www.ancient-origins.net/news-general/medieval-sword-contains-cryp > tic-code-british-library-appeals-help-crack-it-003571 > > Via: > > https://www.schneier.com/blog/archives/2015/08/cryptography_fr.html > > FYI, 18 characters isn't a lot to go on. Even for a monoalphabetic substitution cipher, that's likely below the unicity distance. We'll probably never know what the inscription says, unless we find it as part of a longer text somewhere else. I really have to admire the metallurgy though; the thing was sunk in a river for seven centuries, is still sharp, still has a very legible (if mysterious) inscription, and didn't even significantly rust. Seriously. That's better than most of the stainless steel we make today. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From cryptography at dukhovni.org Fri Aug 14 00:11:06 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Fri, 14 Aug 2015 04:11:06 +0000 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <20150813232623.GZ9139@mournblade.imrryr.org> References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> Message-ID: <20150814041106.GC9139@mournblade.imrryr.org> On Thu, Aug 13, 2015 at 11:26:23PM +0000, Viktor Dukhovni wrote: > On Wed, Aug 12, 2015 at 06:12:13PM -0700, Bill Cox wrote: > > > I used Wolfram to evaluate the path integral for several multiples of the > > generator point, and indeed, they are clearly multiples of a constant. > > An Edwards curve over the reals is a compact subset of R^2 bounded > away from (0,0). Therefore, your scaled metric gives the image of > the curve on S^2 a finite diameter, but there are points of infinite > order on the curve when the generator "G" is not a torsion element. > > Therefore, any proportionality between "n" and the "distance" of > "nG" from some reference point (pick any continuous metric), fails > for large enough "n". Thus, before we even consider whether any > of this applies to the discrete case, it seems clear that this must > fail in the continuous case. Note that, in essence, what you're trying to do is contruct a group homomorphism from the real Edwards curve to the circle. If "d" is not a real square (i.e. d is negative), then the Edwards group law is a differtiable function of its arguments, and the Edwards curve is a compact one dimensional "Lie group". Such a group is necessarily diffeomorphic to the unit circle, via the "exponential map" (for which you've stumbled into a closed form via Elliptic functions). More precisely, if e() is the exponential map on the Edwards curve mapping real numbers t to group elements e(t), and T > 0 is the smallest number with e(T) = identity, then we get a group isomorphism to the unit circle: e(t) <-> exp(2*i*pi*t/T) where e() is the exponential map (from theory of Lie groups) on the Edwards curve, and exp() is the exponential map on the unit circle which (not coincidentally) maps t -> e^{it}. 
If G is not a torsion element, and you know a bound for "nG" and a bound for "n", then sufficiently precise preimages for the exponential map, log(G) and log(nG) (both known only modulo T) are in principle sufficient to find "n". This does not carry over to the discrete case. A discrete Edwards curve is not a Lie group, and the functions you're looking for are the logarithms, which if you had at hand, would trivially identify any cyclic subgroup of the curve abelian group with Z/nZ (discrete circle). These functions of course exist, but that does not make them computable in practice. -- Viktor. From cryptography at dukhovni.org Fri Aug 14 00:21:20 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Fri, 14 Aug 2015 04:21:20 +0000 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <55CD591C.2050708@sonic.net> References: <20150813153330.GY9139@mournblade.imrryr.org> <55CD591C.2050708@sonic.net> Message-ID: <20150814042120.GD9139@mournblade.imrryr.org> On Thu, Aug 13, 2015 at 07:57:32PM -0700, Ray Dillinger wrote: > > When it comes to cryptanalysis, the real question is the amount of work > > that has gone into breaking things like the RSA trapdoor function as > > opposed to just "prime numbers" or factoring. Both ECC and RSA use prime > > numbers, and the discrete logarithm problem has probably received a similar > > amount of study to factoring (and indeed there are spooky similarities > > between these problems too). > > One thing to point out is that RSA is at least as hard as factoring > because if you can solve RSA, you can use the solution to factor its > modulus. IOW, there can be no shortcut that makes it easier than > factoring. That's backwards, factoring trivially solves RSA. I'm not aware of any proof that solving RSA necessarily factors the modulus. However, there are partial results in that direction: https://www.iacr.org/archive/eurocrypt2009/54790037/54790037.pdf -- Viktor. From waywardgeek at gmail.com Fri Aug 14 09:46:12 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 14 Aug 2015 06:46:12 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <20150813232623.GZ9139@mournblade.imrryr.org> References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> Message-ID: On Thu, Aug 13, 2015 at 4:26 PM, Viktor Dukhovni wrote: > > Therefore, any proportionality between "n" and the "distance" of > "nG" from some reference point (pick any continuous metric), fails > for large enough "n". Thus, before we even consider whether any > of this applies to the discrete case, it seems clear that this must > fail in the continuous case. > > -- > Viktor. > The power of visualization seems to be under-rated in group theory :) Not only does all this work, it gives me a way to create new additive groups easily, which is something I've wanted to know how to do for a while now. For example, here's a group I just came up with using this line integral stuff: a @ b = sqrt(((4a^2 + 1)^(3/2) + (4b^2 + 1)^(3/2))^(2/3) - 1)/2 Plug it into a spread sheet, and you'll see it works. I created it by doing the line integral of 12x over the curve y = x^2. If the line integral is called L(x), then the addition rule is simply Linv(L(a) + L(b)). I'm not sure if the cube-root is friendly modulo a prime, but if it is, we could probably use this to do crypto :) We can create groups on path or function using this technique. 
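(A quick numerical sanity check of that recipe -- a sketch, helper names are mine; F below matches the closed form above, i.e. the line integral of 12x along y = x^2 up to an additive constant:)

    def F(x):
        return (4 * x * x + 1) ** 1.5                 # forward map

    def Finv(y):
        return ((y ** (2.0 / 3.0)) - 1) ** 0.5 / 2    # inverse map

    def add(a, b):
        return Finv(F(a) + F(b))                      # the proposed operation

    a, b, c = 0.3, 1.7, 2.9
    print(add(add(a, b), c), add(a, add(b, c)))   # associative, up to floating point
    print(add(a, b), add(b, a))                   # and commutative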
The unit circle group is the simplest case, where point multiplication is simply angle addition. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From watsonbladd at gmail.com Fri Aug 14 10:07:18 2015 From: watsonbladd at gmail.com (Watson Ladd) Date: Fri, 14 Aug 2015 07:07:18 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: <55CD591C.2050708@sonic.net> References: <20150813153330.GY9139@mournblade.imrryr.org> <55CD591C.2050708@sonic.net> Message-ID: On Thu, Aug 13, 2015 at 7:57 PM, Ray Dillinger wrote: > > > On 08/13/2015 01:46 PM, Tony Arcieri wrote: > >> When it comes to cryptanalysis, the real question is the amount of work >> that has gone into breaking things like the RSA trapdoor function as >> opposed to just "prime numbers" or factoring. Both ECC and RSA use prime >> numbers, and the discrete logarithm problem has probably received a similar >> amount of study to factoring (and indeed there are spooky similarities >> between these problems too). > > One thing to point out is that RSA is at least as hard as factoring > because if you can solve RSA, you can use the solution to factor its > modulus. IOW, there can be no shortcut that makes it easier than > factoring. It's true that given phi I can factor the modulus. But it's not known that being able to decrypt an RSA ciphertext that I can factor the modulus. Some schemes like Rabin-Williams are provably equivalent to factoring, but as far as I know a similar result hasn't been shown: the best results in this direction involve unrealistic models of algorithms, and there are proofs that certain classes of reduction would make factoring easy. > > But that doesn't rule out the possibility that factoring may still be > easier than the best way we know how to do it now. > > Bear > > > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography -- "Man is born free, but everywhere he is in chains". --Rousseau. From waywardgeek at gmail.com Fri Aug 14 12:35:09 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 14 Aug 2015 09:35:09 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> Message-ID: Duh... any operator of the form Finv(F(a) + F(b)) forms a group. It is associative and commutative. The identity element is Finv(0). The inverse of x is Finv(-F(x)). I really would have enjoyed the class where they teach this :) Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From outer at sympatico.ca Fri Aug 14 14:11:11 2015 From: outer at sympatico.ca (Richard Outerbridge) Date: Fri, 14 Aug 2015 14:11:11 -0400 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> Message-ID: > On 2015-08-14 (226), at 12:35:09, Bill Cox wrote: > > Duh... any operator of the form Finv(F(a) + F(b)) forms a group. It is associative and commutative. The identity element is Finv(0). The inverse of x is Finv(-F(x)). I really would have enjoyed the class where they teach this :) Somewhat sarcastic are we feeling this morning, Sir? Usual shave? 
__outer From waywardgeek at gmail.com Fri Aug 14 15:15:00 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 14 Aug 2015 12:15:00 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150613134103.GI2050@mournblade.imrryr.org> <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> Message-ID: On Fri, Aug 14, 2015 at 11:11 AM, Richard Outerbridge wrote: > > > On 2015-08-14 (226), at 12:35:09, Bill Cox > wrote: > > > > Duh... any operator of the form Finv(F(a) + F(b)) forms a group. It is > associative and commutative. The identity element is Finv(0). The inverse > of x is Finv(-F(x)). I really would have enjoyed the class where they > teach this :) > > Somewhat sarcastic are we feeling this morning, Sir? Usual shave? > __outer > > Actually, there's a sad story here. I love group theory. However, in the class I took at Berkeley on intro group theory, they allowed a prof who had recently had a stroke teach it, but the guy had lost all of his mathematical ability. So... we all failed the final, and the math department gave us all A's anyway. I was unable to continue with the second course since I had learned nothing from the first. So, all I have now is a Wikipedia level of knowledge of group theory. Anyway, I had fun this morning coming up with all sorts of group addition laws of the form Finv(F(a) + F(b)). Here's one of the simplest: let F(x) = 1/x for x != 0, and 0 for x = 0 Finv(x) = 1/x for x != 0, and 0 for x = 0 Addition rule: 1/(1/a + 1/b), or 0 if a == 0, b == 0, or 1/a + 1/b == 0 We need a "0" element, which is why we need F(0) = 0. Now check for inverse: inv(a) = Finv(-F(a)) = 1/(-1/a) = -a We can define this on the open interval (-1, 1), which excludes 1, and -1, which map to themselves, which would cause problems. This seems to satisfy the definition of a group. To make it faster for modular arithmetic, we only have to do the modular inverse at end of computation if we track numerator/denominator separately: an/ad @ bn/bd = 1/(ad/an + bd/bn) = 1/((ad*bn + an*bd)/(an*bn) = an*bn/(ad*bn + an*bd) This is only 3 multiplications and one addition per group operation, which I think is quite a bit faster than Edwards curve performance with protective coords. Have simpler groups like this one been broken? I'm trying to get my head around why we need the full complexity of elliptic curve crypto to get the security we need, and understanding weaknesses in this system and why such weaknesses are not in regular elliptic crypto would give me some more confidence in elliptic crypto. I wrote some simple Python code to implement it, which is attached. An interesting property in this group is that anything multiplied (not added) by itself is 1. One way to think of it is that the group operation is putting n resistors in parallel, and the result is the resistance of the parallel structure. Are there standard attacks I can try against it? Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: simple.py Type: text/x-python Size: 2929 bytes Desc: not available URL: From iang at iang.org Fri Aug 14 15:15:06 2015 From: iang at iang.org (ianG) Date: Fri, 14 Aug 2015 20:15:06 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> Message-ID: <55CE3E3A.2020209@iang.org> On 9/08/2015 15:08 pm, Sebastian Gesemann wrote: > On Wed, Aug 5, 2015 at 11:41 PM, Michal Bozon wrote: >> Hi. >> There is new fresh FIPS-202 standardizing SHA-3. >> >> In addition to SHA3-{224,256,384,512}, SHAKE-{256,512} were expected. >> However, we got SHAKE-{128,256} instead. >> >> So in addition to four fixed hash functions with 224 up to 512 bit >> security, > > That's not their security. Their security is 112 up to 256. We don't > use 512 bits of output because we need a preimage resistance of 2^512. > We use 512 bits of output because they are necessary for collision > resistance of 2^256. > >> there are two "expandable-output" functions (XOF) with only >> max. 128 vs max. 256 bit security. > > 128 and 256 are the "standard security levels" we know from AES already. > > Even in the quantum computer context, 256 is perfectly fine and 512 > rather meaningless. > >> So what is the point of their expansion? > > The SHAKEs can be used as a DRBG (deterministic random bit generator) > or an MGF (mask generation function, something you use in RSA-OAEP and > RSA-PSS, for example). > > You can also use them if you need a faster hash. Just pick the desired > security level (s=128 or s=256), an appropriate digest length d and > use SHAKE-s with d bits of output. If you care about collision > resistance use d=2s, otherwise d=s should be fine. So, with the SHAKEs > you are pretty flexible in that you can choose the security level s > and output length d independently for a better security/speed > trade-off. Given SHAKE-s with d bits of output you get a 1st and 2nd > preimage resistance of 2^min(s, d) and a collision resistance of > 2^min(s, d/2). so picking the smaller convenient s=128, and d of 128 for ephemeral purposes, I get these strengths: collision = 64. 1,2preimage = 128. Good enough for government work. For longer term general stuff, s=128 and d=256 collision = 128 1,2preimage = 128. > In the quantum computer context (using Grover's algorithm) this should > drop down to preimage resistance of 2^min(2s/3, d/2) and a collision > resistance of 2^min(2s/3, d/3) I believe. Taking the s=128 and d=256 above I would then have: collision = 85 1,2preimage = 85. Which is probably enough in the short term for most purposes to create a breathing room for upgrade. So if we live in a world where Quantum is a threat we need to deal with, we'd like to jump to s=256 and up to say d=512: collision = 170 1,2preimage = 170. Does all that make sense? One of the things that has emerged in the last N years or so is that it is up to the protocol designer to present a good cipher suite that is balanced. Letting a user choose different algorithms has proven to be a bad idea -- the user doesn't know more than us, and to a pretty good confidence level knows much less. So to some extent we've lent on this idea that for a 128 bit strength we need a 256 bit hash, a 128 bit cipher, etc etc as is now popularised by the Suite B list. But Sponge is challenging us to get a bit more precise about our calculations. The good thing here is that the tools described above aren't hard to deal with. 
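(For instance, the strength arithmetic above is easy to mechanise. A throwaway sketch -- helper names are mine -- that reproduces the numbers worked out above:)

    def classical(s, d):
        # preimage 2^min(s, d), collision 2^min(s, d/2)
        return {"preimage": min(s, d), "collision": min(s, d // 2)}

    def quantum(s, d):
        # per the Grover-style estimate quoted above: min(2s/3, d/2) and min(2s/3, d/3)
        return {"preimage": min(2 * s // 3, d // 2), "collision": min(2 * s // 3, d // 3)}

    print(classical(128, 128))   # preimage 128, collision 64
    print(classical(128, 256))   # preimage 128, collision 128
    print(quantum(128, 256))     # both about 85
    print(quantum(256, 512))     # both about 170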
The annoying thing might be that the idea of us being able to deliver an algorithmic smorgasbord is receding, but nobody I know has come up with a good reason for preserving that as a feature (as opposed to the more general argument that we need the flexibility to swap out entire suites). So if people want to go full IoT, can we ask: what does that mean? Can we draw the line and say the OpenPGP offering here is CipherSuiteIoT which means x/y/z in numbers and params and no more no less? PHB: > IOT looks set to create a demand > for an absolutely minimal cryptographic > suite. One signature algorithm, one > exchange algorithm, both on the same > curve, one authenticated encryption > mode, one digest/pseudorandom function. Or are we offering full cipher flexibility to those IoT designers, and thus forcing them to implement all the multiples, because they won't know what other designers will choose, etc? My thinking right now is that (assuming we're doing this) we should put in the draft a recommendation that precisely identifies a minimum most-popular obligatory to implement suite that covers as far down as we can get it. And leave the rest up to the market? iang From iang at iang.org Fri Aug 14 15:49:26 2015 From: iang at iang.org (ianG) Date: Fri, 14 Aug 2015 20:49:26 +0100 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <143E44B6-4233-434C-88AE-D889F1B70452@openmailbox.org> References: <55C77111.7060207@iang.org> <143E44B6-4233-434C-88AE-D889F1B70452@openmailbox.org> Message-ID: <55CE4646.60201@iang.org> On 12/08/2015 07:44 am, oshwm wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > yeh but then... > > crt.sh - owned by comodo > comodo involved with privdog mitm > comodo issues certs for cloudflare > ben laurie works for google > > none of above is a killer but would suggest not necessarily proof of no wrongdoing. > > also, is injecting a modified version of chrome into an http stream impossible - i dont think so. > do ppl in general check md5 or other sums - nope, only paranoid cpunks :D > > as for cryptostorm, they generally have been reliable and i would need to read more about CIN before i either dismiss or agree with them on this topic. Basically, yes. The situation we are looking at isn't verifiable from the outside. It's like the financial system, without auditing. (And we all know where that's gone.) It all works perfectly fine when nobody's doing anything wrong, and the insiders know what they're getting out of it. We get verbal assurances that all is good, go back to sleep. But as soon as something goes wrong, we get another complicated description, and no assurances of any value - we'll fix it, go back to sleep. It used to be that a standard techie - say a university student - could come in, check what the browser and server was up to, and declare it safe and secure. The user could take on some risk, be part of the process. Now, we can't even rely on a crypto-security org to come in and verify the situation. Audit is no longer tractable. The barriers to entry are written so high that only specialist insiders at every point can check these things. iang > On 12 August 2015 06:33:29 BST, Ben Laurie wrote: >> On Sun, 9 Aug 2015 at 20:25 ianG wrote: >> >>> There's a long post by "cryptostorm_team" that describes a capture of >>> the activity of a CIN or Corruptor-Injector Network. 
>>> >>> https://cryptostorm.org/viewtopic.php?f=67&t=8713 >>> >>> The short story appears to be malware injected into the router which >>> then proceeds to present a false view of many things, including >> google >>> sites and chrome downloads. >>> >>> That last part again - the CIN appears to be capable of injecting a >>> special download of Chrome which then participates in the false >>> presentation to user. Given the complexity of modern software I'd >> say >>> this to be an impossible task except for a very well funded, long >> term >>> adversary. >>> >> >> Or, actually, it is impossible. >> >> That article appears to be complete nonsense. >> >> For example: >> >> "This certificate identifies itself (via CN field) as *.google.com >> despite >> being served during a putative session with google.fr(again, this kind >> of >> obvious certificate misconfiguration is all but impossible to imagine >> google doing in production systems):" >> >> Impossible to imagine, but ... true. The certificate is fine, google.fr >> is >> a SAN. >> >> This supposedly fake certificate, btw, is well known to CT: >> >> https://crt.sh/?q=4B9D33E64EF6104E2043BF1E0928924F6D41337A >> >> Another example: >> >> "http://clients1.google.com/ocsp 404s when loaded.This is not the sort >> of >> thing one will find in a legitimately Google-issued certificate, >> created >> less than 10 days ago." >> >> Oh yes it is. That is completely correct behaviour for an OCSP >> responder. >> >> The alleged bad certificate, btw, for future record is: >> >> -----BEGIN CERTIFICATE----- >> MIIGxTCCBa2gAwIBAgIIa4/pt17tKWYwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE >> BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl >> cm5ldCBBdXRob3JpdHkgRzIwHhcNMTUwNTA2MTAzMzE1WhcNMTUwODA0MDAwMDAw >> WjBmMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN >> TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEVMBMGA1UEAwwMKi5n >> b29nbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6qywJ47uyuZZh7I4 >> 4f3qvA9T+u3Zy6fI3V0M2W1sQ/fWd9hgs2Ieobbo9lDh3wM912o++qSsLUKA/zud >> +wa5uqOCBF0wggRZMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjCCAyYG >> A1UdEQSCAx0wggMZggwqLmdvb2dsZS5jb22CDSouYW5kcm9pZC5jb22CFiouYXBw >> ZW5naW5lLmdvb2dsZS5jb22CEiouY2xvdWQuZ29vZ2xlLmNvbYIWKi5nb29nbGUt >> YW5hbHl0aWNzLmNvbYILKi5nb29nbGUuY2GCCyouZ29vZ2xlLmNsgg4qLmdvb2ds >> ZS5jby5pboIOKi5nb29nbGUuY28uanCCDiouZ29vZ2xlLmNvLnVrgg8qLmdvb2ds >> ZS5jb20uYXKCDyouZ29vZ2xlLmNvbS5hdYIPKi5nb29nbGUuY29tLmJygg8qLmdv >> b2dsZS5jb20uY2+CDyouZ29vZ2xlLmNvbS5teIIPKi5nb29nbGUuY29tLnRygg8q >> Lmdvb2dsZS5jb20udm6CCyouZ29vZ2xlLmRlggsqLmdvb2dsZS5lc4ILKi5nb29n >> bGUuZnKCCyouZ29vZ2xlLmh1ggsqLmdvb2dsZS5pdIILKi5nb29nbGUubmyCCyou >> Z29vZ2xlLnBsggsqLmdvb2dsZS5wdIISKi5nb29nbGVhZGFwaXMuY29tgg8qLmdv >> b2dsZWFwaXMuY26CFCouZ29vZ2xlY29tbWVyY2UuY29tghEqLmdvb2dsZXZpZGVv >> LmNvbYIMKi5nc3RhdGljLmNugg0qLmdzdGF0aWMuY29tggoqLmd2dDEuY29tggoq >> Lmd2dDIuY29tghQqLm1ldHJpYy5nc3RhdGljLmNvbYIMKi51cmNoaW4uY29tghAq >> LnVybC5nb29nbGUuY29tghYqLnlvdXR1YmUtbm9jb29raWUuY29tgg0qLnlvdXR1 >> YmUuY29tghYqLnlvdXR1YmVlZHVjYXRpb24uY29tggsqLnl0aW1nLmNvbYILYW5k >> cm9pZC5jb22CBGcuY2+CBmdvby5nbIIUZ29vZ2xlLWFuYWx5dGljcy5jb22CCmdv >> b2dsZS5jb22CEmdvb2dsZWNvbW1lcmNlLmNvbYIKdXJjaGluLmNvbYIIeW91dHUu >> YmWCC3lvdXR1YmUuY29tghR5b3V0dWJlZWR1Y2F0aW9uLmNvbTALBgNVHQ8EBAMC >> B4AwaAYIKwYBBQUHAQEEXDBaMCsGCCsGAQUFBzAChh9odHRwOi8vcGtpLmdvb2ds >> ZS5jb20vR0lBRzIuY3J0MCsGCCsGAQUFBzABhh9odHRwOi8vY2xpZW50czEuZ29v >> Z2xlLmNvbS9vY3NwMB0GA1UdDgQWBBRYmgbDFeI+6yulnYNz+u8RSD6b7TAMBgNV >> HRMBAf8EAjAAMB8GA1UdIwQYMBaAFErdBhYbvPZotXb1gba7Yhq6WoEvMBcGA1Ud >> 
IAQQMA4wDAYKKwYBBAHWeQIFATAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtp >> Lmdvb2dsZS5jb20vR0lBRzIuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQCSRnI2r+DE >> aeRcZNOWvOrf9XlRnQVRiBjC46eRWp4aP2IU/au5wh8w7hXK8044hcjrlVXl/Z1K >> oL65aEyFwdKM33Mx7Dle74jL12aSHPitnFJQsFkDQ+oB6ydMz1bk8fH3A5Lq3L03 >> yIgNwF+pU1MlKL5rbhZ8ekQOw4EwGXVd4PsgAxT0KESx3MD/K9CgSZxf/Z7D00m2 >> 3wHvx9WPjiWBqjqoHBG0YU+asMtPa0GplNpDlTU0qfxFQlhG05446DbjIAAZ1JTQ >> jhV5+ga4YI/Mvnt4Xf2qEi8Jj1HsdB2Vz94V4NqjyBI2gjPKu5uZFLXHYJY8olUK >> fPfn9P6xBumP >> -----END CERTIFICATE----- >> >> To be clear, it isn't fake. >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> The cryptography mailing list >> cryptography at metzdowd.com >> http://www.metzdowd.com/mailman/listinfo/cryptography > -----BEGIN PGP SIGNATURE----- > Version: APG v1.1.1 > > iQI7BAEBCgAlBQJVyutiHhxvc2h3bSA8b3Nod21Ab3Blbm1haWxib3gub3JnPgAK > CRAqeAcYSpG1iPIxD/0YSVCvVamvnkyTg86a4MWMKSGcmSXuAwfTi4YxPh4aUk37 > zMWqp9sYqld1GoH7hJjRUDdJILjVwWSdCztGjIqCTl8dBlJChva7LfMQCYTC6K6d > O1dHvBVAaOTJ5iBk8ZdfSlDIoJnLU1aNAe+Fd7hXsbMFBzH885WZaK+A6wMuMqP1 > ZltsBUFP44MO/qOU8Y2MRj7viG+hX2ol/GsVd/M4SYwPTKXR2eAjyRyNyNbYUX9b > rKlhF8ERFa04PSK8wsYaXGNSTvyP3J81h0MXG1eKPizIqKiyhw1xqCaOxN5s2iY3 > nfccZ9+vVd8KC4zbpO6TWJbGNFld/eHIe7E63CbvivYlqKcjU/TQynWP+zIHigwK > et1zDi/XiKdDlumwYstx3IDrirIwr+VAx+IZohKYQxNn9G0hg2seoZ7pSKWiYavw > 5zLZf/6Wbo3XXrOHlS+w0vG5twx66bM57QuCc0Zof9/bxlKw3Y1mESvhnk1SKVVi > K1x1/XjWHFXn67JcfGBynPKml4drQQhV87rE5reMeunGHe8vISYJVYhgPeIz8wPu > MYOUepfkqpgYdisswEkskjl5vZcgAagpEXUmz+EEygMpsD3yCUSeNSAcfU0wRJia > Z/+b4zfM7y8NOcJvdnkYVZdBVLA5/gzuGPcoTlEJZ7aBCMSfV6igXmTFm8j2wQ== > =q6vf > -----END PGP SIGNATURE----- > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > From leichter at lrw.com Fri Aug 14 17:26:38 2015 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 14 Aug 2015 17:26:38 -0400 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: <55CE4646.60201@iang.org> References: <55C77111.7060207@iang.org> <143E44B6-4233-434C-88AE-D889F1B70452@openmailbox.org> <55CE4646.60201@iang.org> Message-ID: <9F256C47-6588-4796-86F0-0E37555468AB@lrw.com> >> yeh but then... >> >> crt.sh - owned by comodo >> comodo involved with privdog mitm >> comodo issues certs for cloudflare >> ben laurie works for google >> >> none of above is a killer but would suggest not necessarily proof of no wrongdoing. >> >> also, is injecting a modified version of chrome into an http stream impossible - i dont think so. >> do ppl in general check md5 or other sums - nope, only paranoid cpunks :D >> >> as for cryptostorm, they generally have been reliable and i would need to read more about CIN before i either dismiss or agree with them on this topic. > > > Basically, yes. The situation we are looking at isn't verifiable from the outside. Perhaps this is true in some generic sense, but it's bizarre to say this in this case. Someone *from Google* tells you stuff about stuff *done by Google* that's readily checked. Is google.fr a SAN in the certificate in question? Simply convert the damn cert into readable form and check. Is this what Google *intended*? Who should you ask other than someone *at Google*? An OSCP at Google 404's if connected to by a browser. Did Google intend that to happen? Again ... who would you ask other than someone at Google. 
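(For anyone who would rather run that check than take anyone's word for
it, here is a minimal sketch - Python 3 with a recent version of the
third-party "cryptography" package, and "cert.pem" is just a hypothetical
filename for the PEM block quoted earlier in this thread:

from cryptography import x509

# Load the PEM certificate that was pasted into the thread.
pem = open("cert.pem", "rb").read()
cert = x509.load_pem_x509_certificate(pem)

# Print the subject and every DNS name in the subjectAltName extension.
print(cert.subject)
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
for name in san.value.get_values_for_type(x509.DNSName):
    print(name)

If the reading upthread is right, *.google.fr shows up in that list and the
alleged "misconfiguration" evaporates.  Plain openssl x509 -noout -text
does the same job from the command line.)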
You then have the second level question: Is this a reasonable
configuration?  And that's not hard to check.  For the google.fr case ...
this is exactly what SANs are for.  For OCSP ... there's documentation out
there, but what possible security vulnerability is returning a 404
supposed to represent, even if other OCSP providers choose to do something
different.

Then there's all the weird stuff about Comodo and CT.  The question is
whether this is a legitimate Google cert.  Someone from Google says it is.
Who else could make a stronger claim for that fact?  CT can provide some
evidence that others have seen that cert from Google, which you can accept
or not.

Really, this is an absurd claim at this point.  Think about it: Someone
claims that if you try to get stuff from Google, you'll be MITM'ed and
will actually get something else.  Someone at Google says, no, what you're
seeing is exactly what you should expect to see.  Let's look at the cases
here:

1. Google and the person at Google know that what they are telling you is
the truth as they understand it, and they are correct:  There's no MITM
attack.

2. Google and the person at Google know that what they are telling you is
the truth as they understand it, and they are *in*correct:  There really
is a MITM attack.

3. Google and the person at Google are, honestly or dishonestly, telling
you all is OK; and in fact there is no MITM attack.

4. Google and the person at Google are, honestly or dishonestly, telling
you all is OK; but in fact there *is* a MITM attack.

In case 1, all is golden.  In case 2, it makes no difference who makes the
statements - there's no point looking for conspiracies.  The attackers are
just too good; go back to notes on paper left at dead drops.  In case 3,
Google's incompetent, but in fact you're safe anyway.  (But it's hard to
square this situation with the actual observations.)  That leaves us case
4 ... but it makes no sense.  If Google is complicit or has just had the
wool pulled over its eyes - why would anyone bother with a MITM attack?
Just have Google distribute the "bad" versions of Chrome directly.

There's taking care, and there's tin-hattery.  If you fall into the latter
pit ... The Terrorists/Eavesdroppers Have Won.  :-(

                                                        -- Jerry

From cryptography at dukhovni.org  Fri Aug 14 17:31:31 2015
From: cryptography at dukhovni.org (Viktor Dukhovni)
Date: Fri, 14 Aug 2015 21:31:31 +0000
Subject: [Cryptography] Why is ECC secure?
In-Reply-To: 
References: <20150630145621.GF14121@mournblade.imrryr.org>
 <20150630182121.GG14121@mournblade.imrryr.org>
 <20150813232623.GZ9139@mournblade.imrryr.org>
Message-ID: <20150814213131.GK9139@mournblade.imrryr.org>

On Fri, Aug 14, 2015 at 12:15:00PM -0700, Bill Cox wrote:

> Addition rule: 1/(1/a + 1/b), or 0 if a == 0, b == 0, or 1/a + 1/b == 0

The neutral element is infinity, not 0 (makes sense in terms of
the parallel resistor model).

As for cryptography: point doubling gives: a + a -> 1/(1/a + 1/a)
= a/2.  And (a/n + a) = 1/(n/a + 1/a) = a/(n+1).

So scalar multiplication is just division.  Not especially useful
for DH! :-)

-- 
	Viktor.
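Viktor's observation is easy to sanity-check numerically.  A rough sketch
in Python 3 (the prime and the test values below are arbitrary choices for
illustration, nothing more):

# "Parallel resistor" group law: x (+) y = 1/(1/x + 1/y) = xy/(x+y) mod p,
# with the point at infinity (represented as None) as the neutral element.
p = 2**61 - 1                    # an arbitrary prime for the demo
a = 1234567890
n = 12345

def inv(x):
    return pow(x, p - 2, p)      # modular inverse, valid since p is prime

def add(x, y):
    s = (inv(x) + inv(y)) % p
    return None if s == 0 else inv(s)

acc = a
for _ in range(n - 1):           # the n-fold sum of a, computed the slow way
    acc = add(acc, a)

assert acc == (a * inv(n)) % p   # n*a really is just a divided by n
assert (a * inv(acc)) % p == n   # so the "discrete log" falls out directly

Which is exactly the point: anyone can recover the scalar n from a and n*a
with a single modular division, so this group is useless for DH.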
From pinterkr at gmail.com Fri Aug 14 16:24:28 2015 From: pinterkr at gmail.com (=?utf-8?Q?Kriszti=C3=A1n_Pint=C3=A9r?=) Date: Fri, 14 Aug 2015 22:24:28 +0200 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55CE3E3A.2020209@iang.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> Message-ID: <131849658.20150814222428@gmail.com> ianG (at Friday, August 14, 2015, 9:15:06 PM): > One of the things that has emerged in the last N years or so is that it > is up to the protocol designer to present a good cipher suite that is > balanced. Letting a user choose different algorithms has proven to be a > bad idea -- the user doesn't know more than us, and to a pretty good > confidence level knows much less. So to some extent we've lent on this > idea that for a 128 bit strength we need a 256 bit hash, a 128 bit > cipher, etc etc as is now popularised by the Suite B list. and the designer did http://keccak.noekeon.org/tune.html don't forget keccak > sha3. they have a lot of interesting stuff that is not standardized. like one-pass authenticated encryption. From seanl at literati.org Fri Aug 14 16:25:50 2015 From: seanl at literati.org (Sean Lynch) Date: Fri, 14 Aug 2015 20:25:50 +0000 Subject: [Cryptography] Threatwatch: CIN - Corruptor-Injector Network In-Reply-To: References: <82AAA635-BA71-49E3-AA6E-269981B3E3FC@lrw.com> Message-ID: On Mon, Aug 10, 2015 at 6:33 PM Bill Frantz wrote:[snip] > I think it is too late for capability model OSs. The change in > thinking needed to program in the KeyKOS, CapRos, Coyotos, etc. > model is too far from the way people put applications together > with Apache, shell scripts etc. and the Unix file system and > security models. > > Never mind the the capability model is almost exactly the object > model without globally available objects, a model that most > programmers have used. That's how you write a program, not > integrate a system. > It seems to me that your second paragraph contradicts your first. In my experience, there are plenty of programmers who don't know or care much about the systems-level stuff anyway; they get devops people to handle it for them. We get shell scripts, etc, *because* we don't have a system that extends the object model down to the system level. By and large the operating system is going away as a consideration for deploying apps. If I want to deploy something, I write a Dockerfile and run a couple commands to build the docker image and deploy it to the cloud. There's no Apache config because the web server is built in to the application. And there are no shell scripts because Docker handles starting the application for me. Any changes to the OS would be handled by changes to the base Docker image I depend on. If anything, I think switching to an object-capability model at the OS level would eliminate an impedance mismatch and make it easier for developers to deploy code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Fri Aug 14 18:49:58 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 14 Aug 2015 15:49:58 -0700 Subject: [Cryptography] Why is ECC secure? 
In-Reply-To: <20150814213131.GK9139@mournblade.imrryr.org> References: <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> <20150814213131.GK9139@mournblade.imrryr.org> Message-ID: On Fri, Aug 14, 2015 at 2:31 PM, Viktor Dukhovni wrote: > On Fri, Aug 14, 2015 at 12:15:00PM -0700, Bill Cox wrote: > > > Addition rule: 1/(1/a + 1/b), or 0 if a == 0, b == 0, or 1/a + 1/b == 0 > > The neutral element is infinity, not 0 (makes sense in terms of > the parallel resistor model). > > As for cryptography: point doubling gives: a + a -> 1/(1/a + 1/a) > = a/2. And (a/n + a) = 1/(n/a + 1/a) = a/(n+1)). > > So scalar multiplication is just division. Not especially useful > for DH! :-) > > -- > Viktor. > > Ha! It took me all afternoon to figure out what you must have figured out in seconds or minutes. Thanks for that critique. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Fri Aug 14 23:06:54 2015 From: iang at iang.org (ianG) Date: Sat, 15 Aug 2015 04:06:54 +0100 Subject: [Cryptography] NSA has just recommended that Quantum is a threat Message-ID: <55CEACCE.8070207@iang.org> https://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml Background IAD recognizes that there will be a move, in the not distant future, to a quantum resistant algorithm suite. Based on experience in deploying Suite B, we have determined to start planning and communicating early about the upcoming transition to quantum resistant algorithms. Our ultimate goal is to provide cost effective security against a potential quantum computer. We are working with partners across the USG, vendors, and standards bodies to ensure there is a clear plan for getting a new suite of algorithms that are developed in an open and transparent manner that will form the foundation of our next Suite of cryptographic algorithms. Until this new suite is developed and products are available implementing the quantum resistant suite, we will rely on current algorithms. For those partners and vendors that have not yet made the transition to Suite B algorithms, *we recommend not making a significant expenditure to do so at this point* but instead to prepare for the upcoming quantum resistant algorithm transition. For those vendors and partners that have already transitioned to Suite B, we recognize that this took a great deal of effort on your part, and we thank you for your efforts. We look forward to your continued support as we work together to improve information security for National Security customers against the threat of a quantum computer being developed. Unfortunately, *the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing*, necessitating a re-evaluation of our cryptographic strategy. ... From iang at iang.org Fri Aug 14 23:30:12 2015 From: iang at iang.org (ianG) Date: Sat, 15 Aug 2015 04:30:12 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55CE3E3A.2020209@iang.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> Message-ID: <55CEB244.4030702@iang.org> On 14/08/2015 20:15 pm, ianG wrote: > So if people want to go full IoT, can we ask: what does that mean? Can > we draw the line and say the OpenPGP offering here is CipherSuiteIoT > which means x/y/z in numbers and params and no more no less? 
> > PHB: > > IOT looks set to create a demand > > for an absolutely minimal cryptographic > > suite. One signature algorithm, one > > exchange algorithm, both on the same > > curve, one authenticated encryption > > mode, one digest/pseudorandom function. > > > Or are we offering full cipher flexibility to those IoT designers, and > thus forcing them to implement all the multiples, because they won't > know what other designers will choose, etc? > > My thinking right now is that (assuming we're doing this) we should put > in the draft a recommendation that precisely identifies a minimum > most-popular obligatory to implement suite that covers as far down as we > can get it. And leave the rest up to the market? Wait - I'm on the wrong bloody list .. this was supposed to be a message to OpenPGP. Oh well. iang From iang at iang.org Fri Aug 14 23:39:01 2015 From: iang at iang.org (ianG) Date: Sat, 15 Aug 2015 04:39:01 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <20150807211627.GT9139@mournblade.imrryr.org> <20150808105906.GA4122@carbon.w2lan.cesnet.cz> <835832901.20150808225230@gmail.com> <20150812094724.GE4122@carbon.w2lan.cesnet.cz> Message-ID: <55CEB455.5090601@iang.org> On 13/08/2015 00:55 am, dj at deadhat.com wrote: > >> addition: afaik nist at one point considered adding a remark that >> shakes are the preferred primitives. it is apparently missing from the >> final document. which i find unfortunate. > > However in my experience so far, the shakes are the preferred primitives. > When you're getting a room of people in a standards group to first agree > on a minimum security strength (say O(2^128)) then to agree on a hash, > taking recent history into account and looking to the future deployments, > the shakes are the obvious choice and shake128 has already been adopted in > one standards body I'm involved in. Ever since the Lenstra & Verheul 2001 paper, people have been arguing about how to match up strengths. To little consistent and methodological effect. Keccak may be the most significant step since then. It at least claims an internally consistent methodology. (I might be wrong. Maybe they stole if from somewhere else. But keylength.com is testament to a 15 year history of ad hoc methods.) iang From phill at hallambaker.com Sat Aug 15 12:11:03 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 15 Aug 2015 12:11:03 -0400 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55CEB244.4030702@iang.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> Message-ID: On Fri, Aug 14, 2015 at 11:30 PM, ianG wrote: > On 14/08/2015 20:15 pm, ianG wrote: > > So if people want to go full IoT, can we ask: what does that mean? Can >> we draw the line and say the OpenPGP offering here is CipherSuiteIoT >> which means x/y/z in numbers and params and no more no less? >> >> PHB: >> > IOT looks set to create a demand >> > for an absolutely minimal cryptographic >> > suite. One signature algorithm, one >> > exchange algorithm, both on the same >> > curve, one authenticated encryption >> > mode, one digest/pseudorandom function. >> >> >> Or are we offering full cipher flexibility to those IoT designers, and >> thus forcing them to implement all the multiples, because they won't >> know what other designers will choose, etc? 
>>
>> My thinking right now is that (assuming we're doing this) we should put
>> in the draft a recommendation that precisely identifies a minimum
>> most-popular obligatory to implement suite that covers as far down as we
>> can get it.  And leave the rest up to the market?
>>
>
>
>
> Wait - I'm on the wrong bloody list .. this was supposed to be a message
> to OpenPGP.  Oh well.

Actually, it might be better to have that conversation here.

Something that really worries me about the OpenPGP discussion is that the
tone of the discussion is 'prove to me that this attack is a problem'
rather than 'prove to me that this attack is not a concern'.

I think the IoT space is so diffuse that we risk ending up talking
nonsense.  I see three distinct classes of machine:

1) Effectively unconstrained.  Any desktop, smartphone or tablet.
Anything at or above Raspberry Pi capabilities.

2) Demanding thought and care

3) Ridiculously underpowered.  Anything with an 8 bit core.

Yes, there will be devices in the third category.  But guess what, they
don't have to do public key at all.  Or if they do, they only need to do
it during one-time initialization.

The hard bit is the bit in the middle.  And even Windows 10 IoT is likely
to pose issues.  Yes, you can use a Raspberry Pi 2 to develop and the chip
at the center of the device only costs a buck.  But that is a development
environment.  If you went into production you would want to go for the
lowest power, lowest cost or otherwise best chip you can find.

A Raspberry Pi can easily do AES256.  But you might well want to ask
yourself if you really, really need both AES128 and AES256.  Every module
you add to your device means more memory, longer startup times and so on.

Right now we do have a defacto consensus algorithm suite:

SHA-2-256
HMAC-SHA-2-256
AES128 CBC
RSA-2048
ECDH-256

The main problem with this set is the RSA part and in particular key
generation, which is difficult and painful.  The strength is not ideal
either, and RSA really hits diminishing returns above 2048 bits.

I think we will settle on a new defacto consensus.  But I think it's going
to be centered on the 256 bit algorithms:

SHA-3-512
AES256-GCM
CFRG-SIG-448
CFRG-DH-448
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pinterkr at gmail.com  Sat Aug 15 14:21:45 2015
From: pinterkr at gmail.com (=?utf-8?Q?Kriszti=C3=A1n_Pint=C3=A9r?=)
Date: Sat, 15 Aug 2015 20:21:45 +0200
Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128;
 confusing SHAKE security
In-Reply-To: 
References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz>
 <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org>
Message-ID: <371456871.20150815202145@gmail.com>

Phillip Hallam-Baker (at Saturday, August 15, 2015, 6:11:03 PM):

> I think we will settle on a new defacto consensus.  But I think it's
> going to be centered on the 256 bit algorithms:
> SHA-3-512
> AES256-GCM
> CFRG-SIG-448
> CFRG-DH-448

first of all, using sha-3-512 seems very weird, it is the one primitive
suffering the most severe performance hit from the oversized preimage
resistance.  SHAKE256 seems to be the better option.

but the more interesting question is: why aes-gcm if you already have
keccak in there?  keccak supports a one-pass authenticated encryption
scheme.

http://keccak.noekeon.org/KeccakDIAC2012.pdf

getting rid of gcm seems to be a good thing in itself.
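For anyone who has not played with the SHAKEs, the difference being
pointed at here is visible straight from Python's hashlib (3.6 or later);
a small sketch, nothing authoritative:

import hashlib

msg = b"IoT cipher suite test"

# SHA3-512: fixed 64-byte digest.  Its 1024-bit capacity is what buys the
# oversized preimage resistance, and also what costs it throughput.
print(hashlib.sha3_512(msg).hexdigest())

# SHAKE256: extendable output.  Ask for exactly the 32 bytes a 256-bit
# application needs, or 64, or whatever the protocol slot calls for.
print(hashlib.shake_256(msg).hexdigest(32))
print(hashlib.shake_256(msg).hexdigest(64))

The Keccak duplex modes (the one-pass authenticated encryption mentioned
above) are not exposed by hashlib, so the sketch stops at plain hashing.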
From gnu at toad.com Sat Aug 15 14:54:09 2015 From: gnu at toad.com (John Gilmore) Date: Sat, 15 Aug 2015 11:54:09 -0700 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> Message-ID: <201508151854.t7FIs9f1027094@new.toad.com> > Right now we do have a defacto consensus algorithm suite: > > SHA-2-256 > HMAC-SHA-2-256 > AES128 CBC > RSA-2048 > ECDH-256 > > The main problem with this set is the RSA part and in particular key > generation which is difficult and painful. The strength is not ideal either > and RSA really hits diminishing returns above 2048 bits. This seems like yet another example of Binary RSA Myopia. If the cost of RSA at 2048 bits is too high, why not use 2016 bits? Or 1984 bits? Or 1600 bits? Or 1216 bits? (NSA's 1024-bit RSA- cracker won't work on a 1216-bit prime. It probably won't even work on a 1056-bit prime, since myopia has caused fools to 'standardize' on 1024-bit keys and now a huge majority of TLS keys are 1024 bits.) And I'm not sure why you say 'RSA really hits diminishing returns above 2048 bits". Do you mean, using myopia, that you don't think the price/performance of 4096 bits is worthwhile? Then why didn't you say so? My RSA OpenPGP key has 3200 bits and it seems to have no difficulties in price/performance or interoperation. John From stephen.farrell at cs.tcd.ie Sat Aug 15 15:42:50 2015 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sat, 15 Aug 2015 20:42:50 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <201508151854.t7FIs9f1027094@new.toad.com> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> Message-ID: <55CF963A.8020105@cs.tcd.ie> On 15/08/15 19:54, John Gilmore wrote: > now a huge majority of TLS keys are 1024 bits. Isn't that out of date? I think 2048 RSA is now more common than 1024 bit RSA and ECDH is or has become more common than RSA key transport. We had presentations on various measurements at the saag session at the last IETF. [1,2,3] Slide 4 of [3] says that 96% of TLS certs (seen in use I assume) are "2KRSA" and that 70% of servers (web servers I think) now use ECDH with p256. S. [1] https://www.ietf.org/proceedings/93/slides/slides-93-saag-2.pdf [2] https://www.ietf.org/proceedings/93/slides/slides-93-saag-3.pdf [3] https://www.ietf.org/proceedings/93/slides/slides-93-saag-4.pdf From phill at hallambaker.com Sun Aug 16 13:49:50 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sun, 16 Aug 2015 13:49:50 -0400 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <201508151854.t7FIs9f1027094@new.toad.com> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> Message-ID: On Sat, Aug 15, 2015 at 2:54 PM, John Gilmore wrote: > > Right now we do have a defacto consensus algorithm suite: > > > > SHA-2-256 > > HMAC-SHA-2-256 > > AES128 CBC > > RSA-2048 > > ECDH-256 > > > > The main problem with this set is the RSA part and in particular key > > generation which is difficult and painful. The strength is not ideal > either > > and RSA really hits diminishing returns above 2048 bits. 
> > This seems like yet another example of Binary RSA Myopia. > > If the cost of RSA at 2048 bits is too high, why not use 2016 bits? > Or 1984 bits? Or 1600 bits? Or 1216 bits? (NSA's 1024-bit RSA- > cracker won't work on a 1216-bit prime. It probably won't even work on > a 1056-bit prime, since myopia has caused fools to 'standardize' on > 1024-bit keys and now a huge majority of TLS keys are 1024 bits.) > Read and respond to what I wrote if you want to accuse others of myopia. Your eyesight is clearly faulty. "The strength is not ideal" RSA2048 is reckoned to present a work factor of 2^112 which falls short of the 128 we prefer. To get to 128 bits we need 3072 bits. And even then that is only 128 bits against the best attack currently known. "RSA really hits diminishing returns above 2048 bits." If we want to get to 2^256 work factor we need to more than double the number of bits, we need 15360 bits which is ridiculous. And I'm not sure why you say 'RSA really hits diminishing returns > above 2048 bits". Do you mean, using myopia, that you don't think the > price/performance of 4096 bits is worthwhile? Then why didn't you say > so? My RSA OpenPGP key has 3200 bits and it seems to have no > difficulties in price/performance or interoperation. > > John > The tone of your response suggests that you need to consider the fact that if someone is saying something that appears to be stupid, you are reading it wrong rather than the other person wrote something stupid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Sun Aug 16 20:56:19 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Sun, 16 Aug 2015 17:56:19 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> <20150814213131.GK9139@mournblade.imrryr.org> Message-ID: I just realized what is either an obvious attack against the circle group - probably the usual attack, or maybe I'm making a mistake. In short, represent the group generator g as a 2x2 rotation matrix. In computing m*g, we just raise the matrix to the power of m and multiply it by (1, 0). This is simple linear matrix based crypto. This has been shown to be equivalent to regular DLP. You take the characteristic equation of the matrix, and using this compute an equivalent regular DLP problem with some polynomial manipulation magic. This is good news to me for the security of elliptic curve crypto. My fear was that we simply have not yet figured out how to do invsin(x) mod p. If we did, we'd reduce the circle group to a regular additive group with zero bits of security. Showing it is equivalent to regular DLP means that we can never invert arcsin mod p, at least not in less effort than it would take to solve DLP. This inverse is well defined once you scale the circle. The reason I care about the security of elliptic curves is that I'm now in a group at Google that is working on Token Binding, and we have to pick a default prefered encryption mode. It is not the end of the world if Token Binding gets broken, and we have the flexibility to switch, but pretty much any crypto decision made at Google impacts a billion people. In particular, we're leaning towards P256 as the default. What do we know about this curve? Should there be any concern that there may be a back door of any kind? 
For example, what happens if the prime modulus minus 1 has factors that are only known to the NSA? What if they are purposely small? Do we know enough about P256 to know this sort of thing is not the case? Thanks, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Sun Aug 16 21:12:30 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Sun, 16 Aug 2015 18:12:30 -0700 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security Message-ID: A Rasperry Pi 2 can encrypt a million AES blocks per second using a single thread. Can saturate the 100 megabit ethernet port on the chip (if one includes packet overhead). The Raspberry Pi 2 has reasonable and satisfactory performance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Mon Aug 17 01:21:07 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Mon, 17 Aug 2015 05:21:07 +0000 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> <20150814213131.GK9139@mournblade.imrryr.org> Message-ID: <20150817052107.GC24426@mournblade.imrryr.org> On Sun, Aug 16, 2015 at 05:56:19PM -0700, Bill Cox wrote: > This is simple linear matrix based crypto. This has been shown to be > equivalent to regular DLP. You take the characteristic equation of the > matrix, and using this compute an equivalent regular DLP problem with some > polynomial manipulation magic. No matrix algebra is needed here. See my messages from June 30th. Message-ID: <20150630145621.GF14121 at mournblade.imrryr.org> Message-ID: <20150630182121.GG14121 at mournblade.imrryr.org> For p = 1 mod 4, the circle group is isomorphic to the multiplication group in F_p, which is cyclic of order p-1. For p = 3 mod 4, the circle group is isomorphic to a cyclic subgroup of order p+1 of F_{p^2}. DLP for both is IIRC believed roughly comparable in difficulty to RSA with moduli of the same size. > This is good news to me for the security of elliptic curve crypto. That's not so clear. A reduction of EC to circle arithmetic would substantially weaken EC, because the primes in EC are much smaller than the primes in regular finite-field DLP (or degree-2 extensions thereof). Also, it is hypothetically possible that EC groups are easier to attack than the circle group, and we just have not figured out how just yet. > My fear > was that we simply have not yet figured out how to do invsin(x) mod p. That's just one dead-end, a failure of naive intuition from "real" analytic geometry to carry over to the discrete case. This does not rule out more sophisticated attacks, based on deeper theory. Now for the discrete circle, we have computationally efficient isomorphisms from the circle group to the finite-field DLP problem, so any successful attack on the circle group is a successful attack on finite-field DLP with primes of the same size. For elliptic curves, there is no analogous isomorphism, which is a feature, not a bug, since we're trying to avoid giving the attacker a leg up via "smooth bases". > In particular, we're leaning towards P256 as the default. What do we know > about this curve? Should there be any concern that there may be a back > door of any kind? For example, what happens if the prime modulus minus 1 > has factors that are only known to the NSA? What if they are purposely > small? 
Do we know enough about P256 to know this sort of thing is not the > case? Folks like Adam Langley at Google should be able to provide better guidance than you'd get from naive analysis of circle groups and EC. New curves (newer than P256) for EC are under development in CFRG. The ECDH curves are done IIRC, but the signature scheme is still under discussion. Depending on your timetable, you might be better off going with the new CFRG curves. There is no published weakness in P256, just implementation pitfalls. Of course there's no way to know that the curve is not "cooked" by an adversary with a much deeper theoretical grasp of EC crypto. -- Viktor. From ryacko at gmail.com Mon Aug 17 01:35:33 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Sun, 16 Aug 2015 22:35:33 -0700 Subject: [Cryptography] Why is ECC secure? Message-ID: Why do you need public key cryptography for token binding? Does it provide protection from passive eavesdropping? If so, why not use HMAC? Maybe I would understand why public keys would be used if client certificates were also signed, but otherwise I do not see Why is token binding secure? -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Mon Aug 17 05:12:41 2015 From: iang at iang.org (ianG) Date: Mon, 17 Aug 2015 10:12:41 +0100 Subject: [Cryptography] Speculation about Baton Block Cipher In-Reply-To: References: Message-ID: <55D1A589.4070606@iang.org> On 13/08/2015 07:37 am, Ryan Carboni wrote: > https://en.wikipedia.org/wiki/BATON > > I think in modern terms, according to the above wikipedia page: > > BATON is a family of authenticated encryption ciphers, with a variable > block width, and accepts a tweak as an input? I'm not sure how you get that it is has a variable blockwidth from the wikipedia page? But yes, I see the hint about the checksum: "160 bits of the key are checksum material." > To think that since 1995 the NSA has a cipher that the civilian > cryptographic community is on the verge of accepting! > > And this is before the AES competition as well! > > It's only been 20 years anyway. Yeah, interesting point. Although it's not really a "cipher" in the old terms, it's more a cipher suite, and maybe the composition just got lost in the bureaucracy of creating the standard? iang From bear at sonic.net Mon Aug 17 10:59:26 2015 From: bear at sonic.net (Ray Dillinger) Date: Mon, 17 Aug 2015 07:59:26 -0700 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> Message-ID: <55D1F6CE.7090709@sonic.net> On 08/16/2015 10:49 AM, Phillip Hallam-Baker wrote: > RSA2048 is reckoned to present a work factor of 2^112 which falls short of > the 128 we prefer. > > To get to 128 bits we need 3072 bits. And even then that is only 128 bits > against the best attack currently known. > > > > "RSA really hits diminishing returns above 2048 bits." > > If we want to get to 2^256 work factor we need to more than double the > number of bits, we need 15360 bits which is ridiculous. I don't believe it's ridiculous. I mean, yes, large, but still under 2k. We already had keys of such a length that nobody was going to enter them by hand, and 2k is near-epsilon with regard to today's protocols. It probably lets the bottom tier devices have a decent excuse not to implement it, but other than that it's fine. 
Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From pgut001 at cs.auckland.ac.nz Mon Aug 17 11:09:25 2015 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Mon, 17 Aug 2015 15:09:25 +0000 Subject: [Cryptography] Speculation about Baton Block Cipher In-Reply-To: <55D1A589.4070606@iang.org> References: , <55D1A589.4070606@iang.org> Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4ADDE87@uxcn10-5.UoA.auckland.ac.nz> ianG writes: >On 13/08/2015 07:37 am, Ryan Carboni wrote: >> https://en.wikipedia.org/wiki/BATON >> >> I think in modern terms, according to the above wikipedia page: >> >> BATON is a family of authenticated encryption ciphers, with a variable >> block width, and accepts a tweak as an input? > >But yes, I see the hint about the checksum: > >"160 bits of the key are checksum material." >But yes, I see the hint about the checksum: > >"160 bits of the key are checksum material." That's not a tweak, it's just a way of making the crypto capture-proof, you can only key it using an NSA-supplied fill device. The Clipper/Capstone chip did the same thing (although not very well, as Matt Blaze demonstrated). So what you've got is... a block cipher. Nothing magic about it. Peter. From bear at sonic.net Mon Aug 17 11:28:45 2015 From: bear at sonic.net (Ray Dillinger) Date: Mon, 17 Aug 2015 08:28:45 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> <20150814213131.GK9139@mournblade.imrryr.org> Message-ID: <55D1FDAD.5020103@sonic.net> On 08/16/2015 05:56 PM, Bill Cox wrote: > In particular, we're leaning towards P256 as the default. What do we know > about this curve? Should there be any concern that there may be a back > door of any kind? For example, what happens if the prime modulus minus 1 > has factors that are only known to the NSA? What if they are purposely > small? Do we know enough about P256 to know this sort of thing is not the > case? > > Thanks, > Bill > Actually we specifically don't know that about P256. http://safecurves.cr.yp.to/rigid.html raises a concern that the P256 curve may be manipulatable by an attacker. There is a large unexplained input to the curve parameters, and it comes from NIST which has been subverted by attackers before. Also.... http://safecurves.cr.yp.to/complete.html raises a concern that the P256 curve has properties that make standard Weierstrass addition formulas not work on this curve (fails doublings) and that there are identity points (positive vs. negative values of identical absolute value) for some parameters that produce the same results, which increases the attack surface. (not by much if I'm reading it right, but it may be one of the contributing factors to the next concern). http://safecurves.cr.yp.to/ind.html raises a concern that the normal method of making elliptic-curve strings indistinguishable from random is not defined on NIST P256. Distinguishability today leaves open an increased probability of an attack tomorrow. I highly recommend poking around safecurves.cr.yp.to a lot if you're selecting elliptic curves for widespread use. I think it's the most comprehensive collection of specific information about the security and efficiency properties of particular elliptic curves available. 
They may (or may not) be overly concerned with relatively minor issues that can be mitigated by careful coding. But these "relatively minor" issues do sometimes lead to later attacks. And even if the "lead" or "killer" application driving adoption of the curve is carefully and correctly coded, hundreds of people out there will be programming things that are intended for compatibility with it, and some of them won't be as careful or as capable. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jya at pipeline.com Mon Aug 17 12:53:17 2015 From: jya at pipeline.com (John Young) Date: Mon, 17 Aug 2015 12:53:17 -0400 Subject: [Cryptography] CNSS Issues Memo on Shift to Quantum-Resistant Cryptography Message-ID: CNSS Advisory Memo on Use of Public Standards for Secure Sharing of Information Among NatSec Systems 08/11/15 https://www.cnss.gov/CNSS/openDoc.cfm?DLuhIVBMUGJh7R8iXAWwIQ== This Advisory expands on the guidance contained in CNSS Policy No. 15, National Information Assurance Policy on the Use of Public Standards for the Secure Sharing of Information Among National Security Systems (Reference a). Based on analysis of the effect of quantum computing on Information Assurance (IA) and IA-enabled Information Technology (IT) products, the policy's set of authorized algorithms is expanded to provide vendors and IT users more near-term flexibility in meeting their IA interoperability requirements. The purpose behind this additional flexibility is to avoid vendors and customers making two major transitions in a relatively short timeframe, as we anticipate a need to shift to quantum-resistant cryptography in the near future. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Mon Aug 17 13:56:49 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Mon, 17 Aug 2015 17:56:49 +0000 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55D1F6CE.7090709@sonic.net> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net> Message-ID: <20150817175649.GO24426@mournblade.imrryr.org> On Mon, Aug 17, 2015 at 07:59:26AM -0700, Ray Dillinger wrote: > > "RSA really hits diminishing returns above 2048 bits." > > > > If we want to get to 2^256 work factor we need to more than double the > > number of bits, we need 15360 bits which is ridiculous. > > I don't believe it's ridiculous. I mean, yes, large, but still under > 2k. We already had keys of such a length that nobody was going to > enter them by hand, and 2k is near-epsilon with regard to today's > protocols. > > It probably lets the bottom tier devices have a decent excuse not to > implement it, but other than that it's fine. The performance cost is ridiculous: sign verify sign/s verify/s rsa 1024 bits 0.000467s 0.000022s 2143.0 44570.3 rsa 2048 bits 0.002530s 0.000074s 395.3 13592.8 rsa 4096 bits 0.014179s 0.000198s 70.5 5047.2 What sort of numbers do you expect for RSA at 15k bits? I would conjecture around 2 signatures per second, and thus entirely unsuitable for key agreement. Perhaps still usable for verifying certificate signatures, but with enough such certificates in a chain, the chain will exceed TLS message size limits. 
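To put a rough number on the 15k-bit conjecture: private-key RSA
operations grow roughly with the cube of the modulus size, so one can
extrapolate from the 4096-bit row above.  A back-of-envelope sketch in
Python, not a benchmark:

# Crude extrapolation from the openssl speed figures quoted above,
# assuming private-key ops scale with the cube of the modulus size.
base_bits, base_sign_seconds = 4096, 0.014179

for bits in (8192, 15360):
    est = base_sign_seconds * (bits / base_bits) ** 3
    print(f"rsa {bits} bits  ~{est:.3f}s per sign  ~{1/est:.1f} sign/s")

That lands around 0.7-0.8 seconds per 15360-bit signature on the same
hardware, i.e. roughly one or two signatures per second, in line with the
conjecture, give or take the crudeness of the model.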
For the record I don't see a compelling difference between a 112-bit work-factor and a 128-bit work-factor, provided the estimates hold up reasonably well. Also it seems that memory requirement for the matrix stage of GNFS for large moduli are quite prohibitive. Are the work-factor estimates for large RSA moduli too conservative? -- Viktor. From sneves at dei.uc.pt Mon Aug 17 15:29:02 2015 From: sneves at dei.uc.pt (Samuel Neves) Date: Mon, 17 Aug 2015 20:29:02 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150817175649.GO24426@mournblade.imrryr.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net> <20150817175649.GO24426@mournblade.imrryr.org> Message-ID: <55D235FE.3050807@dei.uc.pt> On 08/17/2015 06:56 PM, Viktor Dukhovni wrote: > For the record I don't see a compelling difference between a 112-bit > work-factor and a 128-bit work-factor, provided the estimates hold > up reasonably well. Also it seems that memory requirement for the > matrix stage of GNFS for large moduli are quite prohibitive. Are > the work-factor estimates for large RSA moduli too conservative? It is arguable that the metric used, which only cares about operation counts, is not the right one. For typical parameter choices the storage cost (aka machine size) of the NFS is roughly proportional to the square root of the computational cost. To see this, note that the cost of the matrix step is matrix_size^(2 + o(1)), and the matrix step is asymptotically as costly as sieving. So at 15k bits, we get ~2^256 time complexity and ~2^128 memory. Multiplying area and time, the asymptotic cost here is L[1/3, 2.85], much larger than the L[1/3, 1.901] usually advertised. Taking the machine size into account gets you to the circuit or batch NFS, whose complexity is worked out in the AT (area-time) metric. For a single 15k-bit factorization this gets you time ~2^165 on a machine of size ~2^110 (as usual, ignoring o(1) factors in the asymptotic complexities). The asymptotic AT cost here is L[1/3, 1.976]. 12288-bit keys would suffice to thwart an attack of AT cost significantly below 2^256; 5120-bit keys would be enough for 256-bit AT security against the conventional, non-circuit, NFS. From pzbowen at gmail.com Mon Aug 17 16:12:00 2015 From: pzbowen at gmail.com (Peter Bowen) Date: Mon, 17 Aug 2015 13:12:00 -0700 Subject: [Cryptography] Speculation about Baton Block Cipher In-Reply-To: <9A043F3CF02CD34C8E74AC1594475C73F4ADDE87@uxcn10-5.UoA.auckland.ac.nz> References: <55D1A589.4070606@iang.org> <9A043F3CF02CD34C8E74AC1594475C73F4ADDE87@uxcn10-5.UoA.auckland.ac.nz> Message-ID: On Mon, Aug 17, 2015 at 8:09 AM, Peter Gutmann wrote: > ianG writes: >>On 13/08/2015 07:37 am, Ryan Carboni wrote: >>> https://en.wikipedia.org/wiki/BATON >>> >>> I think in modern terms, according to the above wikipedia page: >>> >>> BATON is a family of authenticated encryption ciphers, with a variable >>> block width, and accepts a tweak as an input? > >>"160 bits of the key are checksum material." > > That's not a tweak, it's just a way of making the crypto capture-proof, you > can only key it using an NSA-supplied fill device. The Clipper/Capstone chip > did the same thing (although not very well, as Matt Blaze demonstrated). > > So what you've got is... a block cipher. Nothing magic about it. 
And various docs have indicated that BATON is used in CBC mode with BIP32 for integrity while the newer MEDLEY algorithm is used in galois/counter mode. So both look like common block ciphers. From phill at hallambaker.com Tue Aug 18 08:20:24 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Tue, 18 Aug 2015 08:20:24 -0400 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55D1F6CE.7090709@sonic.net> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net> Message-ID: On Mon, Aug 17, 2015 at 10:59 AM, Ray Dillinger wrote: > > > On 08/16/2015 10:49 AM, Phillip Hallam-Baker wrote: > > > RSA2048 is reckoned to present a work factor of 2^112 which falls short > of > > the 128 we prefer. > > > > To get to 128 bits we need 3072 bits. And even then that is only 128 bits > > against the best attack currently known. > > > > > > > > "RSA really hits diminishing returns above 2048 bits." > > > > If we want to get to 2^256 work factor we need to more than double the > > number of bits, we need 15360 bits which is ridiculous. > > I don't believe it's ridiculous. I mean, yes, large, but still under > 2k. We already had keys of such a length that nobody was going to > enter them by hand, and 2k is near-epsilon with regard to today's > protocols. > > It probably lets the bottom tier devices have a decent excuse not to > implement it, but other than that it's fine. > Speed is not fine and many of the libraries don't support RSA keysizes above 4096 bits. What I originally said was that RSA hits diminishing returns and the math completely justifies that statement. There certainly wasn't any reason for the type of response I got from Gilmore. It is not clear to me what 'Binary RSA Myopia' might be or why it would be appropriate to use such language. People are going to be using RSA for a very long time. It is not exactly broken but there are very good reasons that people are using it at 2048 bits rather than 4096 and the industry is looking for ECC based schemes rather than even larger keys. -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Tue Aug 18 11:19:06 2015 From: iang at iang.org (ianG) Date: Tue, 18 Aug 2015 16:19:06 +0100 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <20150817175649.GO24426@mournblade.imrryr.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net> <20150817175649.GO24426@mournblade.imrryr.org> Message-ID: <55D34CEA.400@iang.org> On 17/08/2015 18:56 pm, Viktor Dukhovni wrote: > On Mon, Aug 17, 2015 at 07:59:26AM -0700, Ray Dillinger wrote: > >>> "RSA really hits diminishing returns above 2048 bits." >>> >>> If we want to get to 2^256 work factor we need to more than double the >>> number of bits, we need 15360 bits which is ridiculous. >> >> I don't believe it's ridiculous. I mean, yes, large, but still under >> 2k. We already had keys of such a length that nobody was going to >> enter them by hand, and 2k is near-epsilon with regard to today's >> protocols. >> >> It probably lets the bottom tier devices have a decent excuse not to >> implement it, but other than that it's fine. 
> > The performance cost is ridiculous: > > sign verify sign/s verify/s > rsa 1024 bits 0.000467s 0.000022s 2143.0 44570.3 > rsa 2048 bits 0.002530s 0.000074s 395.3 13592.8 > rsa 4096 bits 0.014179s 0.000198s 70.5 5047.2 > > What sort of numbers do you expect for RSA at 15k bits? I would > conjecture around 2 signatures per second, and thus entirely > unsuitable for key agreement. Perhaps still usable for verifying > certificate signatures, but with enough such certificates in a > chain, the chain will exceed TLS message size limits. NSA is now pushing the notion that quantum vulnerable algorithms are to be avoided [0] [1]. fwiw, my understanding is in responding to quantum, we prefer large RSA in the medium term (8k?) and switch to NTRU [2] in the longer term. We avoid ECC. > For the record I don't see a compelling difference between a 112-bit > work-factor and a 128-bit work-factor, provided the estimates hold > up reasonably well. Also it seems that memory requirement for the > matrix stage of GNFS for large moduli are quite prohibitive. Are > the work-factor estimates for large RSA moduli too conservative? Right, dial down to 128 level. Or, we go to second order risk analysis -- who is our likely attacker, and is he likely to have quantum attack? For most people most of the time, NSA isn't our attacker, so maybe we accept this risk. Problem is, once the NSA has shifted in this direction, NIST comes out with standards for USG. Then, people who don't do their own security risk analysis copy NIST and the sheep move to protecting whatever it is that NSA was worried about. iang [0] I posted this hint last week http://www.metzdowd.com/pipermail/cryptography/2015-August/026287.html [1] John Young posted this hint too: http://www.metzdowd.com/pipermail/cryptography/2015-August/026303.html CNSS Advisory Memo on Use of Public Standards for Secure Sharing of Information Among NatSec Systems 08/11/15 This Advisory expands on the guidance contained in CNSS Policy No. 15, National Information Assurance Policy on the Use of Public Standards for the Secure Sharing of Information Among National Security Systems (Reference a). Based on analysis of the effect of quantum computing on Information Assurance (IA) and IA-enabled Information Technology (IT) products, the policy’s set of authorized algorithms is expanded to provide vendors and IT users more near-term flexibility in meeting their IA interoperability requirements. The purpose behind this additional flexibility is to avoid vendors and customers making two major transitions in a relatively short timeframe, as we anticipate a need *to shift to quantum-resistant cryptography in the near future*. [2] NTRU: https://en.wikipedia.org/wiki/NTRU From waywardgeek at gmail.com Tue Aug 18 11:36:58 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 18 Aug 2015 08:36:58 -0700 Subject: [Cryptography] Why is ECC secure? In-Reply-To: References: <20150630145621.GF14121@mournblade.imrryr.org> <20150630182121.GG14121@mournblade.imrryr.org> <20150813232623.GZ9139@mournblade.imrryr.org> <20150814213131.GK9139@mournblade.imrryr.org> Message-ID: I think I'm finally getting the basics of why we like elliptic curve crypto. Here's my attempt to explain it in regular English. These are curves such that they fit into this form: a @ b = Finv(F(a) + F(b)) The @ is the group "addition" operator and the + is some group operation, likely addition or multiplication. In the case of Edwards elliptic curves, F(a) is a line integral along a path on the unit sphere. 
These would all be trivially broken except: 1) F(a) is a transcendental function, with no modular arithmetic equivalent 2) Finv(F(a) + F(b)) is algebraic This means we can easily compute the result, but not the magic in the middle, where we're doing addition or multiplication on the transcendental functions. In elliptic curve crypto, during DH key agreement, Alice publishes Finv(am*F(g)) mod p, where am is Alice's secret and g is the group generator. If we could compute Finv(x) mod p, Alice's secret would be revealed. It's a rare function that fits in this form and is algebraic when built with a transcendental function F. The circle group is in this form, when F = arcsin(x). It turns out that: sin(arcsin(x1) + arcsin(x2)) = x1x2 - y1y2 where y1 = sqrt(1 - x1^2), and y2 = sqrt(1 - x2*2) If this weren't the same as simple matrix multiplication, we might have trouble analyzing it, but it is, and we can convert this easily to regular DLP. So, naturally, we want to know what functions are in this form but are not equivalent to some well understood linear system. Elliptic curves seem to fit the bill. In unrelated noodling... I was trying to figure out what the NSA could possibly hide in a random looking group seed. It might be the case that they know the original arithmetic representation of the group seed on the curve, and no one else does. If I were attacking either the circle group or any Edwards curve, and I knew the generator's original arithmetic (x, y) position on the curve, and if I could find an arithmetic point on the curve which is in the group and maps to Alice's public key point, then it becomes trivial to reveal Alice's secret. Could it be helful to know the arithmetic representation of g on the curve to map Alice's pubic key to such a point? I think we've shown that finding such a point in the circle group is equivalent to DLP, since I can convert the circle group problem into regular DLP, and find the multiple of g that generates Alice's pubic key point. So, using DLP, I can find the arithmetic point on the curve. If instead, I were given an arithmetic representation (it would be a big operator tree corresponding to the multiplication of g by am), I can easily find am just by doing line integrals to am and dividing by the line integral to g. Therefore, on the circle group mapping a public key point to an arithmetic point in the group on the circle is equivalent to DLP. We can pose the same problem for Edwards curves. If we can map Alice's public key point to an arithmetic representation of the point on the curve that is in the group, then we can trivially reveal Alice's secret. This problem is equivalent to finding Alice's secret on Edwards curves. Would knowing the arithmetic representation for g help? I it might... That's the only potential use for obscuring the original point that I can think of... Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Tue Aug 18 13:18:30 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Tue, 18 Aug 2015 10:18:30 -0700 Subject: [Cryptography] Speculation about Baton Block Cipher Message-ID: Baton has: 12 byte block size 16 byte block size 24 byte initialization vector 20 byte key 20 byte checksum Let's play a what does not belong game. Which number does not belong? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From cryptography at dukhovni.org  Tue Aug 18 14:15:23 2015
From: cryptography at dukhovni.org (Viktor Dukhovni)
Date: Tue, 18 Aug 2015 18:15:23 +0000
Subject: [Cryptography] Why is ECC secure?
In-Reply-To: 
References: <20150813232623.GZ9139@mournblade.imrryr.org>
 <20150814213131.GK9139@mournblade.imrryr.org>
Message-ID: <20150818181523.GK24426@mournblade.imrryr.org>

On Tue, Aug 18, 2015 at 08:36:58AM -0700, Bill Cox wrote:

> I think I'm finally getting the basics of why we like elliptic curve
> crypto.  Here's my attempt to explain it in regular English.
>
> These are curves such that they fit into this form:
>
> a @ b = Finv(F(a) + F(b))
>
> The @ is the group "addition" operator and the + is some group operation,
> likely addition or multiplication.  In the case of Edwards elliptic curves,
> F(a) is a line integral along a path on the unit sphere.
>
> These would all be trivially broken except:
>
> 1) F(a) is a transcendental function, with no modular arithmetic equivalent
> 2) Finv(F(a) + F(b)) is algebraic

I'm afraid this argument is largely misguided.  The security of
Elliptic curves rests on deeper mathematics than mere lack of a
birational equivalence to the circle group.

Such a birational equivalence, if it existed, would of course spell
trouble for EC, but lack thereof just precludes the carrying over
of geometric attacks from continuous to discrete curves.  Even
though the Lie group isomorphism of "d < 0" real Edwards curves to
the circle group is no use mod p, we can't immediately jump to the
conclusion that DH on Elliptic curves mod p is adequately strong.

> That's the only potential use for obscuring the original point that I can
> think of...

The reasoning (which I did not quote) is much too naive.  You'll
just have to trust experts (not me, I just know enough more to know
that I don't know enough) on the security of ECC.

-- 
	Viktor.

From agr at me.com  Tue Aug 18 15:02:21 2015
From: agr at me.com (Arnold Reinhold)
Date: Tue, 18 Aug 2015 15:02:21 -0400
Subject: [Cryptography] Speculation on the origin of Speck and Simon
Message-ID: 

I have been playing with NSA's Speck cipher on ATtiny85 microprocessors,
and I happened upon Bruce Schneier's 2013 blog post on the introduction of
Speck and Simon.  He asks "Why was the work done, and why is it being made
public? I'm curious."  This question provoked a long discussion thread.
The comments fell pretty much into two schools of thought: the sneaky NSA
must have some backdoor, or the noble NSA is acting in its communications
security role, protecting the future Internet of Things.

Here is a third possible explanation, based on the Snowden leak of NSA's
Tailored Access Operations catalog of implantable devices for compromising
communications.  Jacob Appelbaum revealed that NSA is using RC6 in those
implants.  Initial reactions to that tidbit included claims that NSA must
not trust AES, but it turns out the leaked documents were written before
the AES selection process was completed.  RC6 was a finalist in that
competition.

Since the implants, once deployed, are out of NSA's physical control, it
is inevitable that some will be discovered by their targets and reverse
engineered.  So it makes sense to use a publicly available algorithm
rather than a classified one.  But RC6 (and AES) have relatively large
code footprints.  Presumably NSA wants those implants to be as small and
inexpensive as possible.  Small commercially available microprocessors
like the Atmel AVR line have limited program space and even more limited
RAM.
So I can easily imagine that NSA would develop a suite of lighter-weight ciphers for use with its implants. Publishing those ciphers eliminates any need to treat devices carrying the ciphers as sensitive material. It also provides some deniability if a device is captured. Note that NSA not only published the algorithms themselves (https://eprint.iacr.org/2013/404.pdf), but several ways to implement them in AVR assembly code: https://eprint.iacr.org/2014/947.pdf. If my reasoning is correct, it suggests that these ciphers are not just curiosities from the research lab but important production tools that NSA would have put considerable effort into validating, and that these ciphers deserve to be taken seriously. BTW, for what it's worth, the Speck 128/128 source code in the Wikipedia article works without modification on an ATtiny85. The published 128/128 test vector validates once one realizes that the 128-bit constants must be stored with the low order word as the zero element of the long long word arrays. Decryption is a little trickier since one has to run the round keys in reverse. As far as I can see, the NSA has not published decryption pseudocode. Perhaps their implants have no need to decrypt, so decryption is left as an (easy enough) exercise for the reader. Arnold Reinhold -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Tue Aug 18 16:54:52 2015 From: bear at sonic.net (Ray Dillinger) Date: Tue, 18 Aug 2015 13:54:52 -0700 Subject: [Cryptography] Speculation about Baton Block Cipher In-Reply-To: References: Message-ID: <55D39B9C.9010804@sonic.net> On 08/18/2015 10:18 AM, Ryan Carboni wrote: > Baton has: > > 12 byte block size > 16 byte block size > 24 byte initialization vector > 20 byte key > 20 byte checksum > > Let's play a what does not belong game. > > Which number does not belong? Heh. Is this a trick question? The checksum size is of course ludicrous with respect to the key and block size. They don't need more than 4 bytes for a checksum, if that. BATON is implemented in hardware with a secret algorithm, so virtually anything could be encoded in the remaining 16 bytes and nobody would be the wiser. The fact, however, doesn't lead me to any specific speculations, except that it's probably some kind of deliberate side channel. But it's not at all clear what such a side channel would be useful for. It's a Type 1 product. Why do you suppose the NSA would install a side channel on their own communications? Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From rsalz at akamai.com Tue Aug 18 18:32:12 2015 From: rsalz at akamai.com (Salz, Rich) Date: Tue, 18 Aug 2015 22:32:12 +0000 Subject: [Cryptography] SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: <55D34CEA.400@iang.org> References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net> <20150817175649.GO24426@mournblade.imrryr.org> <55D34CEA.400@iang.org> Message-ID: <0196280e70e444b6aa99d0dfcfe56788@ustx2ex-dag1mb2.msg.corp.akamai.com> > Problem is, once the NSA has shifted in this direction, NIST comes out with > standards for USG. The law used to say that the NSA was the "expert" for NIST cryptography. 
After it became known that NSA gamed the system, I believe NIST no longer feels beholden to do what NSA says. I think some kind of law or regulation changed, but I could well be wrong on that last part. Perhaps Tim Polk can speak up here? From pzbowen at gmail.com Tue Aug 18 19:17:37 2015 From: pzbowen at gmail.com (Peter Bowen) Date: Tue, 18 Aug 2015 16:17:37 -0700 Subject: [Cryptography] Speculation about Baton Block Cipher In-Reply-To: <55D39B9C.9010804@sonic.net> References: <55D39B9C.9010804@sonic.net> Message-ID: On Tue, Aug 18, 2015 at 1:54 PM, Ray Dillinger wrote: > On 08/18/2015 10:18 AM, Ryan Carboni wrote: >> Baton has: >> >> 12 byte block size >> 16 byte block size >> 24 byte initialization vector >> 20 byte key >> 20 byte checksum >> >> Let's play a what does not belong game. >> >> Which number does not belong? > > > Heh. Is this a trick question? > > The checksum size is of course ludicrous with respect to the key > and block size. > > They don't need more than 4 bytes for a checksum, if that. BATON > is implemented in hardware with a secret algorithm, so virtually > anything could be encoded in the remaining 16 bytes and nobody > would be the wiser. > > The fact, however, doesn't lead me to any specific speculations, > except that it's probably some kind of deliberate side channel. > > But it's not at all clear what such a side channel would be > useful for. It's a Type 1 product. Why do you suppose the > NSA would install a side channel on their own communications? I think this is confused. It is a 20 byte (160 bit key) combined with 20 byte "checksum" which is tied to the key. I would guess this is a keyed checksum used to ensure that only authorized keys are used. I'm guessing BIP32 is a bit-interleaved parity algorithm, so you are only looking at 4 bytes of checksum for bulk data. From grarpamp at gmail.com Wed Aug 19 02:21:01 2015 From: grarpamp at gmail.com (grarpamp) Date: Wed, 19 Aug 2015 02:21:01 -0400 Subject: [Cryptography] [cryptography] CNSS Issues Memo on Shift to Quantum-Resistant Cryptography In-Reply-To: References: Message-ID: On Mon, Aug 17, 2015 at 12:53 PM, John Young wrote: > CNSS Advisory Memo on Use of Public Standards for Secure Sharing of > Information Among NatSec Systems 08/11/15 > > https://www.cnss.gov/CNSS/openDoc.cfm?DLuhIVBMUGJh7R8iXAWwIQ== > iang wrote: > John, that document blocked due to session variable or something. Is there an open copy? It's here (and on cryptome under the official filename)... https://www.cnss.gov/CNSS/issuances/Memoranda.cfm CNSS_Advisory_Memo_02-15.pdf All ur linkclicks are... From pgut001 at cs.auckland.ac.nz Wed Aug 19 03:44:57 2015 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Wed, 19 Aug 2015 07:44:57 +0000 Subject: [Cryptography] [FORGED] Re: SHA-3 FIPS-202: no SHAKE512 but SHAKE128; confusing SHAKE security In-Reply-To: References: <20150805214103.GB13929@carbon.w2lan.cesnet.cz> <55CE3E3A.2020209@iang.org> <55CEB244.4030702@iang.org> <201508151854.t7FIs9f1027094@new.toad.com> <55D1F6CE.7090709@sonic.net>, Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4ADF769@uxcn10-5.UoA.auckland.ac.nz> Phillip Hallam-Baker writes: >It is not clear to me what 'Binary RSA Myopia' might be or why it would be >appropriate to use such language. It's what's more usually called crypto numerology, I'm not sure where "binary RSA myopia" came from (it's an apt enough name, but doesn't cover other uses, e.g. with DH or DSA). Peter. 
From pgut001 at cs.auckland.ac.nz Wed Aug 19 03:49:59 2015 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Wed, 19 Aug 2015 07:49:59 +0000 Subject: [Cryptography] [FORGED] Re: Speculation about Baton Block Cipher In-Reply-To: References: Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4ADF78A@uxcn10-5.UoA.auckland.ac.nz> Ryan Carboni writes: >Baton has: > >12 byte block size >16 byte block size >24 byte initialization vector >20 byte key >20 byte checksum > >Let's play a what does not belong game. > >Which number does not belong? The IV is a bit odd; it hints at a LEAF-like capability a la Clipper/Capstone. However, it could be a completely ordinary composite nonce value as outlined in e.g. RFC 5116, "An Interface and Algorithms for Authenticated Encryption". Peter. From hbaker1 at pipeline.com Sat Aug 22 11:22:53 2015 From: hbaker1 at pipeline.com (Henry Baker) Date: Sat, 22 Aug 2015 08:22:53 -0700 Subject: [Cryptography] Augmented Reality Encrypted Displays Message-ID: FYI -- True end2END encryption: your eyes & brain do the decoding; the displays show only garbage. https://www.usenix.org/system/files/conference/soups2015/soups15-paper-andrabi.pdf Usability of Augmented Reality for Revealing Secret Messages to Users but Not Their Devices We evaluate the possibility of a human receiving a secret message while trusting no device with the contents of that message, by using visual cryptography (VC) implemented with augmented-reality displays (ARDs). ... Visual cryptography [26] is a cryptographic secret-sharing scheme where visual information is split into multiple shares, such that one share by itself is indiscernible from random noise. (Equivalently, one of these shares constitutes a one-time pad as discussed in Section 1, and the other represents the ciphertext of the secret message.) The human visual system performs a logical OR of the shares to decode the secret message being shared. From natanael.l at gmail.com Sat Aug 22 17:21:30 2015 From: natanael.l at gmail.com (Natanael) Date: Sat, 22 Aug 2015 23:21:30 +0200 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: Message-ID: Den 22 aug 2015 20:04 skrev "Henry Baker" : > > FYI -- True end2END encryption: your eyes & brain do the decoding; the displays show only garbage. > > https://www.usenix.org/system/files/conference/soups2015/soups15-paper-andrabi.pdf > > Usability of Augmented Reality for Revealing Secret Messages to Users but Not Their Devices > > We evaluate the possibility of a human receiving a secret > message while trusting no device with the contents of that > message, by using visual cryptography (VC) implemented > with augmented-reality displays (ARDs). > ... > Visual cryptography [26] is a cryptographic secret-sharing > scheme where visual information is split into multiple shares, > such that one share by itself is indiscernible from random > noise. (Equivalently, one of these shares constitutes a one- > time pad as discussed in Section 1, and the other represents > the ciphertext of the secret message.) The human visual > system performs a logical OR of the shares to decode the > secret message being shared. Neat. Another application of visual cryptography that I'd like to see is for 2FA dynamic authentication cards with transparent displays. Push a button, hold the card over the corresponding pattern on the screen, type what you see into the 2FA field. 
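To make the share-splitting in the scheme quoted above concrete, here is a minimal Python sketch of the classic (2,2) visual cryptography construction for a black-and-white image (a toy illustration, not the paper's implementation): every white pixel gets the same random two-subpixel pattern in both shares, every black pixel gets complementary patterns, so stacking the transparencies (a per-subpixel OR) renders black pixels fully dark and white pixels half dark, while each share on its own is just random noise.

    import secrets

    def make_shares(image):
        # image: rows of 0 (white) / 1 (black) pixels. Each pixel becomes two
        # subpixels in each share; stacking the shares (OR) reveals the image.
        share1, share2 = [], []
        for row in image:
            r1, r2 = [], []
            for pixel in row:
                pattern = [0, 1] if secrets.randbits(1) else [1, 0]
                complement = [1 - s for s in pattern]
                r1 += pattern
                r2 += pattern if pixel == 0 else complement  # white: same, black: complementary
            share1.append(r1)
            share2.append(r2)
        return share1, share2

    def stack(s1, s2):
        # What the eye does with two transparencies: a subpixel is dark if either share is dark.
        return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

    secret_img = [[1, 0, 1],
                  [0, 1, 0]]
    s1, s2 = make_shares(secret_img)
    print(stack(s1, s2))  # black pixels stack to [1, 1]; white pixels to one dark, one light subpixel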
Could also be used for highly sensitive/private notifications to the user, which you don't need to auto-clear from view electronically as it only is visible when the patterns are aligned. Accurate alignment and scaling would be the tricky part, but I've got some ideas there. (Think I've partially described this before. Gonna look up the details.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From natanael.l at gmail.com Sun Aug 23 06:08:57 2015 From: natanael.l at gmail.com (Natanael) Date: Sun, 23 Aug 2015 12:08:57 +0200 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: Message-ID: Den 23 aug 2015 02:42 skrev "Joshua Marpet" : > > Wait, seriously like the red cards from 80's cartoons? Hold it over the screen, read out the "secret message"? > > Because a photo of the card makes it totally not a one time pad. > > Sorry, I am amused, and think it's a cool talk, but not practical in the slightest. Dynamic transparent screen, not static film. I guess I should have clarified that the pattern would be derived using deterministic CSPRNGs with a shared secret between the server and card. Using time as a component would make that photo absolutely useless unless the attacker also got a photo within X seconds of the computer screen. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Sun Aug 23 07:28:43 2015 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 23 Aug 2015 07:28:43 -0400 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: Message-ID: <9DAC1E09-EEE9-45EB-9A7A-27905C2A59DC@lrw.com> > FYI -- True end2END encryption: your eyes & brain do the decoding; the displays show only garbage. > > https://www.usenix.org/system/files/conference/soups2015/soups15-paper-andrabi.pdf > > Usability of Augmented Reality for Revealing Secret Messages to Users but Not Their Devices > > We evaluate the possibility of a human receiving a secret > message while trusting no device with the contents of that > message, by using visual cryptography (VC) implemented > with augmented-reality displays (ARDs). > ... Cute idea, but let's do a simple threat analysis: The device doing the displaying necessarily has access to both images. Such a device necessarily has access to reasonably amounts of computing power. Just what is the extra security threat in letting the device do the combining for you? You might try to argue that the images to be combined would be hidden in other data, so the device wouldn't know what to combine. But in fact the images are highly distinctive and easy to recognize - they have to be for humans to readily be able to combine them. I just don't see a situation where having the user combine the images adds anything to the security. -- Jerry From webdawg at gmail.com Sun Aug 23 17:07:07 2015 From: webdawg at gmail.com (WebDawg) Date: Sun, 23 Aug 2015 14:07:07 -0700 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: <9DAC1E09-EEE9-45EB-9A7A-27905C2A59DC@lrw.com> References: <9DAC1E09-EEE9-45EB-9A7A-27905C2A59DC@lrw.com> Message-ID: On Sun, Aug 23, 2015 at 4:28 AM, Jerry Leichter wrote: > > Cute idea, but let's do a simple threat analysis: The device doing the > displaying necessarily has access to both images. Such a device > necessarily has access to reasonably amounts of computing power. Just what > is the extra security threat in letting the device do the combining for you? 
> > You might try to argue that the images to be combined would be hidden in > other data, so the device wouldn't know what to combine. But in fact the > images are highly distinctive and easy to recognize - they have to be for > humans to readily be able to combine them. > > I just don't see a situation where having the user combine the images adds > anything to the security. > > -- Jerry > > It could be a way to display more than one encrypted thing at once though. Encrypting sections of a video stream, so not completely useless. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Wed Aug 26 20:07:57 2015 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 26 Aug 2015 17:07:57 -0700 Subject: [Cryptography] 3DES security? Message-ID: What's the current best estimate for the (in)security of 3DES, in bits ? From derek at ihtfp.com Wed Aug 26 20:36:31 2015 From: derek at ihtfp.com (Derek Atkins) Date: Wed, 26 Aug 2015 20:36:31 -0400 Subject: [Cryptography] 3DES security? In-Reply-To: References: Message-ID: <8ce68ffd0a56deb738002718e8aa01ff.squirrel@mail2.ihtfp.org> Henry, On Wed, August 26, 2015 8:07 pm, Henry Baker wrote: > What's the current best estimate for the (in)security of 3DES, in bits ? 2-key or 3-key 3DES? Generally 3DES implies 2-key EDE, which equates to 112-bit security. 3-Key 3DES uses more key bits, but my recollection is that it doesn't significantly increase the security. So I would treat 3DES as 112-bit security. To date, the best known attack against DES is brute force. The REAL issue with 3DES is that it's still only a 64-bit block size so you have a 1 in 2^64 chance of randomly guessing the mapping from a plaintext block to a cipher block, regardless of the keys. Of course you need to repeat this mapping on every block, so it doesn't necessarily buy you anything. -derek -- Derek Atkins 617-623-3745 derek at ihtfp.com www.ihtfp.com Computer and Internet Security Consultant From anton at titov.net Wed Aug 26 22:05:19 2015 From: anton at titov.net (Anton Titov) Date: Thu, 27 Aug 2015 05:05:19 +0300 Subject: [Cryptography] 3DES security? In-Reply-To: References: Message-ID: <55DE705F.1030201@titov.net> On 27.08.2015 03:07, Henry Baker wrote: > What's the current best estimate for the (in)security of 3DES, in bits ? > The answer probably depends on how many known plaintexts you have and could range from (could!) 168 bits for 0 known plaintexts to 0 bits for 2^64 known (different) plaintexts, as for any 64-bit cipher. It is widely believed that the security is 112 bits because of the meet in the middle attack. This attack however needs a solid known plaintext, not a knowledge that the plaintext is "English text" or any other vague idea about it. Due to the fact that a 64-bit block cipher with a 168-bit key has many (2^104?) keys that yield the same ciphertext for the same plaintext, you obviously need more than one plaintext or the ability to tell if blocks other than the known one decrypt to sensible data, and that may not always be the case. Also this attack needs 2^56 * block size (64 bits) of storage which is 512 petabytes. That is $5b if RAM is used (without the cost of other components), $400m if SSDs are used or $35m if HDDs are used. You also need to perform 2^112 lookups in these 2^56 blocks. One can argue that the lookup can be considered constant (as opposed to log N) if many computers do that task in parallel, but this is also expensive. 
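For those who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python (the per-gibibyte prices are rough 2015-era assumptions for illustration, not quotes):

    # Meet-in-the-middle table for 2-key 3DES: one 8-byte (64-bit) block per candidate DES key.
    entries     = 2**56                  # all single-DES keys
    entry_bytes = 8                      # one 64-bit block stored per entry
    table_bytes = entries * entry_bytes  # 2^59 bytes
    print(table_bytes / 2**50, "PiB")    # 512.0 PiB, about 576 * 10^15 bytes

    # Rough 2015-era unit prices, purely illustrative assumptions.
    usd_per_gib = {"RAM": 9.00, "SSD": 0.75, "HDD": 0.065}
    for medium, price in usd_per_gib.items():
        cost = table_bytes / 2**30 * price
        print(medium, "roughly $%.0f million" % (cost / 1e6))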
Frankly if I'm given one (or 10) modern computers my feeling is that it will brute-force one 128bit AES key faster than 3DES key (1 known plaintext+constant time check for correct key for 3DES). However both are unrealistic as of today. Best, Anton From outer at interlog.com Wed Aug 26 23:30:35 2015 From: outer at interlog.com (Richard Outerbridge) Date: Wed, 26 Aug 2015 23:30:35 -0400 Subject: [Cryptography] 3DES security? In-Reply-To: References: Message-ID: > On 2015-08-26 (238), at 20:07:57, Henry Baker wrote: > > What’s the current best estimate for the (in)security of 3DES, in bits ? Out of 112? Maybe 104? I’m guessing only Shamir knows for ”sure” these daze, for some value of sure. __outer — ”Incurably agnostic but prone to unpredictable relapses into faith.” From scott at hyperthought.com Thu Aug 27 09:10:58 2015 From: scott at hyperthought.com (Scott Kelly) Date: Thu, 27 Aug 2015 06:10:58 -0700 Subject: [Cryptography] 3DES security? In-Reply-To: <8ce68ffd0a56deb738002718e8aa01ff.squirrel@mail2.ihtfp.org> References: <8ce68ffd0a56deb738002718e8aa01ff.squirrel@mail2.ihtfp.org> Message-ID: <0ED003D0-E9E4-42A3-AB32-02B3BD057981@hyperthought.com> On Aug 26, 2015, at 5:36 PM, Derek Atkins wrote: > Henry, > > On Wed, August 26, 2015 8:07 pm, Henry Baker wrote: >> What's the current best estimate for the (in)security of 3DES, in bits ? > > 2-key or 3-key 3DES? Generally 3DES implies 2-key EDE, which equates to > 112-bit security. 3-Key 3DES uses more key bits, but my recollection is > that it doesn't significantly increase the security.. So I would treat > 3DES as 112-bit security. To date, the best known attack against DES is > brute force. > There are at least two other known attacks: meet in the middle, and related keys. These are described in RFC4772 and elsewhere. One of the MITM attacks (by Lucks) reduces the strength to 108 bits. > The REAL issue with 3DES is that it's still only a 64-bit block size so > you have a 1 in 2^64 chance of randomly guessing the mapping from a > plaintext block to a cipher block, regardless of the keys. Of course you > need to repeat this mapping on every block, so it doesn't necessarily buy > you anything. > > -derek > > -- > Derek Atkins 617-623-3745 > derek at ihtfp.com www.ihtfp.com > Computer and Internet Security Consultant > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography From phill at hallambaker.com Thu Aug 27 10:03:46 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Thu, 27 Aug 2015 10:03:46 -0400 Subject: [Cryptography] 3DES security? In-Reply-To: References: Message-ID: On Wed, Aug 26, 2015 at 11:30 PM, Richard Outerbridge wrote: > > On 2015-08-26 (238), at 20:07:57, Henry Baker > wrote: > > > > What’s the current best estimate for the (in)security of 3DES, in bits ? > > Out of 112? Maybe 104? > > I’m guessing only Shamir knows for ”sure” these daze, for some value of > sure. > __outer > It is probably OK for any practical use where you are still allowed to use it. Yes there are ways to tradeoff performance for memory and maybe shave the key space a little with a vast number of known ciphertext/plaintext pairs. But unless you are doing something really, really secret, the chance anyone is going to go after your crypto is very small. Carders didn't find a way to reverse engineer PINs by cracking the DES crypto, they did it by driving bulldozers into an ATM and reverse engineering the hardware. 
So it is probably OK to use in the same way that SHA-1 is still OK to use. If you have a legacy system that would cost a lot of money to upgrade, you can probably spend the $$$$ on other improvements. But the main risk you will face in doing that is that everyone else in the industry considers it obsolete and that imposes costs on you as well. Being secure isn't enough these days, you have to show you are secure. And SHA-1 or 3DES is going to require an extra review every single time the system is audited. If it takes a day each time at $3,000 a day, that starts to mount up. At some point it is going to be difficult to find the library support. You don't make a system more secure by adding new stronger ciphers, you make it more secure by taking out the duff ones. DES should be gone already, I would want to take 3DES out along with SHA-1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From derek at ihtfp.com Thu Aug 27 10:05:30 2015 From: derek at ihtfp.com (Derek Atkins) Date: Thu, 27 Aug 2015 10:05:30 -0400 Subject: [Cryptography] 3DES security? In-Reply-To: <0ED003D0-E9E4-42A3-AB32-02B3BD057981@hyperthought.com> (Scott Kelly's message of "Thu, 27 Aug 2015 06:10:58 -0700") References: <8ce68ffd0a56deb738002718e8aa01ff.squirrel@mail2.ihtfp.org> <0ED003D0-E9E4-42A3-AB32-02B3BD057981@hyperthought.com> Message-ID: Scott Kelly writes: > On Aug 26, 2015, at 5:36 PM, Derek Atkins wrote: > >> Henry, >> >> On Wed, August 26, 2015 8:07 pm, Henry Baker wrote: >>> What's the current best estimate for the (in)security of 3DES, in bits ? >> >> 2-key or 3-key 3DES? Generally 3DES implies 2-key EDE, which equates to >> 112-bit security. 3-Key 3DES uses more key bits, but my recollection is >> that it doesn't significantly increase the security.. So I would treat >> 3DES as 112-bit security. To date, the best known attack against DES is >> brute force. >> > > There are at least two other known attacks: meet in the middle, and > related keys. These are described in RFC4772 and elsewhere. One of the > MITM attacks (by Lucks) reduces the strength to 108 bits. The MITM attacks are why 3-key DES isn't much better than 2-key DES, and also why EDE is preferred over EEE. The reduction from 112 to 108 is something I didn't know about, so thank you for that reference. >> The REAL issue with 3DES is that it's still only a 64-bit block size so >> you have a 1 in 2^64 chance of randomly guessing the mapping from a >> plaintext block to a cipher block, regardless of the keys. Of course you >> need to repeat this mapping on every block, so it doesn't necessarily buy >> you anything. -derek -- Derek Atkins 617-623-3745 derek at ihtfp.com www.ihtfp.com Computer and Internet Security Consultant From hubert at levangong.org Thu Aug 27 11:18:28 2015 From: hubert at levangong.org (Hubert A. Le Van Gong) Date: Thu, 27 Aug 2015 08:18:28 -0700 Subject: [Cryptography] RC4 and SHA2 Message-ID: <55DE6529.3060405@levangong.org> Greetings, Are there any cryptographic reasons that would forbid a (TLS) ciphersuite to combine RC4 and a SHA2 MAC? Note: I do know RC4 is being deprecated so this is purely a theoretical question. Thanks, Hubert From ryacko at gmail.com Thu Aug 27 15:04:40 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Thu, 27 Aug 2015 12:04:40 -0700 Subject: [Cryptography] Is MD4 as secure as Poly1305 in an AEAD scheme? Message-ID: Is MD4 as secure as Poly1305 in an AEAD scheme? 
I notice it consumes roughly the same amount of cycles, and any forgery attempts would be nearly as difficult without knowledge of the state of the MAC. Afterall, most AEAD schemes also encrypt the MAC which essentially negate many attacks. I can't help but feel that MD4 is set to unfair standards while everything else is set to more logical standards. From hanno at hboeck.de Thu Aug 27 15:52:59 2015 From: hanno at hboeck.de (Hanno =?UTF-8?B?QsO2Y2s=?=) Date: Thu, 27 Aug 2015 21:52:59 +0200 Subject: [Cryptography] RC4 and SHA2 In-Reply-To: <55DE6529.3060405@levangong.org> References: <55DE6529.3060405@levangong.org> Message-ID: <20150827215259.0395e358@pc1> On Thu, 27 Aug 2015 08:18:28 -0700 "Hubert A. Le Van Gong" wrote: > Are there any cryptographic reasons that would forbid a (TLS) > ciphersuite to combine RC4 and a SHA2 MAC? You need to be more precise with the question: What do you mean by "forbid". There is no technical reason that would prevent that. Right now as far as I know RC4 is specified with SHA1 and MD5 MACs, you could replace that with sha2, would increase your MAC blocks, but it's certainly possible. But of course it would suffer from all the known attacks on RC4. And there is an RFC "forbidding" RC4. Also there is pretty much agreement that future TLS ciphers should have AEAD modes. So even if RC4 wasn't broken your new construction probably wouldn't be welcomed. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bascule at gmail.com Thu Aug 27 17:07:27 2015 From: bascule at gmail.com (Tony Arcieri) Date: Thu, 27 Aug 2015 14:07:27 -0700 Subject: [Cryptography] RC4 and SHA2 In-Reply-To: <55DE6529.3060405@levangong.org> References: <55DE6529.3060405@levangong.org> Message-ID: On Thu, Aug 27, 2015 at 8:18 AM, Hubert A. Le Van Gong wrote: > Are there any cryptographic reasons that would forbid a (TLS) ciphersuite > to combine RC4 and a SHA2 MAC? > Don't use broken crypto. Note: I do know RC4 is being deprecated so this is purely a theoretical > question. I don't care. Don't use broken crypto! -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Thu Aug 27 17:08:01 2015 From: bascule at gmail.com (Tony Arcieri) Date: Thu, 27 Aug 2015 14:08:01 -0700 Subject: [Cryptography] Is MD4 as secure as Poly1305 in an AEAD scheme? In-Reply-To: References: Message-ID: On Thu, Aug 27, 2015 at 12:04 PM, Ryan Carboni wrote: > Is MD4 as secure as Poly1305 in an AEAD scheme? > Don't use broken crypto. > I notice it consumes roughly the same amount of cycles, and any > forgery attempts would be nearly as difficult without knowledge of the > state of the MAC. Afterall, most AEAD schemes also encrypt the MAC > which essentially negate many attacks. I can't help but feel that MD4 > is set to unfair standards while everything else is set to more > logical standards. That's because MD4 is broken. Don't use broken crypto! -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bear at sonic.net Thu Aug 27 21:55:45 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 27 Aug 2015 18:55:45 -0700 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: Message-ID: <55DFBFA1.50803@sonic.net> On 08/22/2015 08:22 AM, Henry Baker wrote: > Visual cryptography [26] is a cryptographic secret-sharing > scheme where visual information is split into multiple shares, > such that one share by itself is indiscernible from random > noise. (Equivalently, one of these shares constitutes a one- > time pad as discussed in Section 1, and the other represents > the ciphertext of the secret message.) The human visual > system performs a logical OR of the shares to decode the > secret message being shared. No matter how hard you squeeze the snakes, you just can't get enough oil to make it worth the effort. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bear at sonic.net Thu Aug 27 22:45:48 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 27 Aug 2015 19:45:48 -0700 Subject: [Cryptography] 3DES security? In-Reply-To: References: Message-ID: <55DFCB5C.9000206@sonic.net> On 08/26/2015 05:07 PM, Henry Baker wrote: > What's the current best estimate for the (in)security of 3DES, in bits ? > 3DES (with 168-bit keys) is considered to be 112 bit secure. Per Wikipedia, the best known attack requires 2^32 known- plaintext messages, 2^113 steps, 2^90 single-DES encryptions, and 2^88 memory. That's well out of reach, I think. AES is very much more secure in raw numbers, but in practical terms both AES and 3DES are in the "The Sun Won't Last That Long" category, so it doesn't matter very much. It can be trusted to the extent that you trust your implementation of it, your protocols using it, your key handling, your users, etc. The weak point in any system using either of the two will not be the cipher. The advantage of AES is bigger blocks and lower resource usage. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bear at sonic.net Thu Aug 27 22:59:46 2015 From: bear at sonic.net (Ray Dillinger) Date: Thu, 27 Aug 2015 19:59:46 -0700 Subject: [Cryptography] AES Broken? Message-ID: <55DFCEA2.2020101@sonic.net> Is this guy blowing smoke, or did he find a real attack on AES? https://mjos.fi/doc/gavekort_kale.pdf The fact that he proposes a self-invented cipher in the same paper makes me very suspicious that his attack on AES is imaginary. It matches too much past 'crackpot' behavior. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From alfonso.degregorio at gmail.com Fri Aug 28 01:50:40 2015 From: alfonso.degregorio at gmail.com (Alfonso De Gregorio) Date: Fri, 28 Aug 2015 05:50:40 +0000 Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: On Fri, Aug 28, 2015 at 2:59 AM, Ray Dillinger wrote: > > Is this guy blowing smoke, or did he find a real attack on AES? > > https://mjos.fi/doc/gavekort_kale.pdf Hey dude, this is a tongue-in-cheek pseudo paper written under a nom de plume just to have some fun -- and invite to reflection. May I suggest a short break? 
I surely need to grab a cup of tea. -- Alfonso From julien.bringer at gmail.com Fri Aug 28 02:34:23 2015 From: julien.bringer at gmail.com (Julien Bringer) Date: Fri, 28 Aug 2015 08:34:23 +0200 Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: 2015-08-28 4:59 GMT+02:00 Ray Dillinger : > Is this guy blowing smoke, or did he find a real attack on AES? > > https://mjos.fi/doc/gavekort_kale.pdf > > The fact that he proposes a self-invented cipher in the same paper > makes me very suspicious that his attack on AES is imaginary. It > matches too much past 'crackpot' behavior. > well, he says something true, that the AES S-box (SubBytes) is based on the inverse transformation in GF(2^8) (which is known since Rinjdael submission...). For the other claims in the document, this is another story... Your first guess is the right one! Julien -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgrabovs at redhat.com Fri Aug 28 02:35:10 2015 From: mgrabovs at redhat.com (Matej Grabovsky) Date: Fri, 28 Aug 2015 02:35:10 -0400 (EDT) Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: <684279352.9825435.1440743710464.JavaMail.zimbra@redhat.com> Hi. This seems to be a contestant in the Snake Oil Crypto contest. I love the last paragraph: > The security of KALE against post-quantum [4], neuromorphic and > optogenetic [7], and other postmodern attacks is unknown at this point. > However, the use of complex numbers should rule out any real or rational > quantum attacks. Matěj ----- Original Message ----- From: "Ray Dillinger" To: cryptography at metzdowd.com, "Crypto-practicum" Sent: Friday, August 28, 2015 4:59:46 AM Subject: [Cryptography] AES Broken? Is this guy blowing smoke, or did he find a real attack on AES? https://mjos.fi/doc/gavekort_kale.pdf The fact that he proposes a self-invented cipher in the same paper makes me very suspicious that his attack on AES is imaginary. It matches too much past 'crackpot' behavior. Bear _______________________________________________ The cryptography mailing list cryptography at metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography From hanno at hboeck.de Fri Aug 28 03:37:35 2015 From: hanno at hboeck.de (Hanno =?UTF-8?B?QsO2Y2s=?=) Date: Fri, 28 Aug 2015 09:37:35 +0200 Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: <20150828093735.42c34534@pc1> On Thu, 27 Aug 2015 19:59:46 -0700 Ray Dillinger wrote: > Is this guy blowing smoke, or did he find a real attack on AES? Rule of thumb: If anyone finds a real attack on AES, RSA or any other major cipher you will read about it on the frontpage of the new york times and everyone will run around trying to shutdown IT infrastructure. Try to actually read the paper. You'll find things like this. "In this note we show that a key component of AES in fact contains a backdoor the allows the Belgian Government and The Catholic Church (the forces behind Rijndael / AES design, who obviously hid the backdoor in the cipher) to secretly eavesdrop on all AES communications." This is clearly a joke paper. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From pinterkr at gmail.com Fri Aug 28 03:54:38 2015 From: pinterkr at gmail.com (=?UTF-8?B?S3Jpc3p0acOhbiBQaW50w6ly?=) Date: Fri, 28 Aug 2015 09:54:38 +0200 Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: On Fri, Aug 28, 2015 at 4:59 AM, Ray Dillinger wrote: > https://mjos.fi/doc/gavekort_kale.pdf i'm not entirely sure whether it is trolling, hoax-attempt or joke. but one interesting use of this article is for the classroom. how many errors you can collect? the student collecting the most errors wins. any error spotted by only one student wins some extra points. etc. From dave at horsfall.org Fri Aug 28 11:08:17 2015 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 29 Aug 2015 01:08:17 +1000 (EST) Subject: [Cryptography] AES Broken? In-Reply-To: References: <55DFCEA2.2020101@sonic.net> Message-ID: On Fri, 28 Aug 2015, Krisztián Pintér wrote: > > https://mjos.fi/doc/gavekort_kale.pdf > > i'm not entirely sure whether it is trolling, hoax-attempt or joke. but > one interesting use of this article is for the classroom. how many > errors you can collect? the student collecting the most errors wins. any > error spotted by only one student wins some extra points. etc. Without even going into the crypto aspects, I was writhing on the floor after just the first page. Cartoons? Catholic church? Etc. I mean, in a Monty Python sort of way, this is how not to submit a paper. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." Concerned about shark attacks? Then don't go swimming in their food bowl... From waywardgeek at gmail.com Fri Aug 28 13:00:14 2015 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 28 Aug 2015 10:00:14 -0700 Subject: [Cryptography] AES Broken? In-Reply-To: References: <55DFCEA2.2020101@sonic.net> Message-ID: On Fri, Aug 28, 2015 at 8:08 AM, Dave Horsfall wrote: > On Fri, 28 Aug 2015, Krisztián Pintér wrote: > > > > https://mjos.fi/doc/gavekort_kale.pdf > > > > i'm not entirely sure whether it is trolling, hoax-attempt or joke. but > > one interesting use of this article is for the classroom. how many > > errors you can collect? the student collecting the most errors wins. any > > error spotted by only one student wins some extra points. etc. > > Without even going into the crypto aspects, I was writhing on the floor > after just the first page. Cartoons? Catholic church? Etc. > > I mean, in a Monty Python sort of way, this is how not to submit a paper. > I liked this part. They estimate the "security" of AES at: 2^127.88476373519208801711541761570483401788 with 2^127 precomputation. Is there any reason for stating a shot-in-the-dark security guess to 42 significant digits? Anyway, the method used in AES to generate the S-boxes is a bit scary in the first place. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From tytso at mit.edu Fri Aug 28 13:04:40 2015 From: tytso at mit.edu (Theodore Ts'o) Date: Fri, 28 Aug 2015 13:04:40 -0400 Subject: [Cryptography] A thought about backdoors and quantuum-resistant encryption Message-ID: <20150828170440.GA15982@thunk.org> I don't know if this is possible, because I don't know enough about quantum computing, and I don't know enough about "quantum resistant encryption". 
Suppose quantuum computing is a thing, and suppose NSA^H^H^H NIST supplies us with a quantuum-resistant encryption algorithm. Would it be possible to create an encryption algorithm which is resistant to quantuum computing --- except for someone with a quantuum computer *AND* knowledge of some secret quantuum state stored in a quantuum computer only available to the NSA. Even more, would it be possible to create such a thing in such a way that NSA^H^H^H NIST could introduce non-transparently in such a way that the public world *thinks* that that the encryption algorithm against all quantuum computers, but in fact there is a trapdoor that only the NSA could utilize --- but no one knows this? Of course, people wouldn't have to use the new quantuum resistant encryption algorithms, but if quantuum computer were a thing, they would be screwed if they kept on using AES, so the NSA would be quite happy with that outcome. And of course, if it was introduced non-transparently, then China and Russia and Iran would be able to demand that a backdoor be engineered for them, because no one would know that the backdoor existed. And if someone future Snowden leaks this, all of the current fear-mongering from James Comey and Keith Alexander would help prepare the ground in case it does leak. (Or maybe they plan to introduce this transparently, if they've learned their lesson from the Snowden disclosures.) All of this is premised by the hypothesis that it is possible to create quantuum-resistant encryption system for everyone but NSA, and preferably (for the NSA) in such a way that it's not possible to modify the encryption system so that backdoor can't be removed or changed so that China and Russia could have their own quantuum-backdoored encryption algorithm, and force companies who want to do business in those countries to use their alternate-backdoored encryption. Is this possible? - Ted From phill at hallambaker.com Fri Aug 28 13:09:42 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Fri, 28 Aug 2015 13:09:42 -0400 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: <55DFBFA1.50803@sonic.net> References: <55DFBFA1.50803@sonic.net> Message-ID: On Thu, Aug 27, 2015 at 9:55 PM, Ray Dillinger wrote: > > > On 08/22/2015 08:22 AM, Henry Baker wrote: > > > Visual cryptography [26] is a cryptographic secret-sharing > > scheme where visual information is split into multiple shares, > > such that one share by itself is indiscernible from random > > noise. (Equivalently, one of these shares constitutes a one- > > time pad as discussed in Section 1, and the other represents > > the ciphertext of the secret message.) The human visual > > system performs a logical OR of the shares to decode the > > secret message being shared. > > No matter how hard you squeeze the snakes, you just can't get > enough oil to make it worth the effort. > That is overly negative. Chaum did some interesting stuff a while back for voting. Where I think the augmented reality bit will fall apart is that augmented reality tends to depend on things like cameras analyzing the environment so that it can register an overlay. So unlike Chaum's system in which you have a piece of inanimate paper and an inanimate overlay foil, in augmented reality you have at best an inanimate hacker-proof physical display and an eminently hackable IoT device that knows both parts of the puzzle. Yes, yes, you can split the signal paths, yada yada. 
But you are still back in the problem of trusted but untrustworthy devices and you are vulnerable. Crypto is hard... -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Fri Aug 28 13:26:36 2015 From: rsalz at akamai.com (Salz, Rich) Date: Fri, 28 Aug 2015 17:26:36 +0000 Subject: [Cryptography] AES Broken? In-Reply-To: <55DFCEA2.2020101@sonic.net> References: <55DFCEA2.2020101@sonic.net> Message-ID: It's hilarious. "Schneier (in joint work with Euclid)" The address is an actual bar. Gavekort is Norwegian for gift certificate (google it). Etc. Go to the top-level of the website, https://mjos.fi. Someone's made a very good joke. -- Senior Architect, Akamai Technologies IM: richsalz at jabber.at Twitter: RichSalz From phill at hallambaker.com Fri Aug 28 14:28:47 2015 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Fri, 28 Aug 2015 14:28:47 -0400 Subject: [Cryptography] A thought about backdoors and quantuum-resistant encryption In-Reply-To: <20150828170440.GA15982@thunk.org> References: <20150828170440.GA15982@thunk.org> Message-ID: On Fri, Aug 28, 2015 at 1:04 PM, Theodore Ts'o wrote: > I don't know if this is possible, because I don't know enough about > quantuum computing, and I don't know enough a about "quantuum > resistant encryption". > > Suppose quantuum computing is a thing, and suppose NSA^H^H^H NIST > supplies us with a quantuum-resistant encryption algorithm. Would it > be possible to create an encryption algorithm which is resistant to > quantuum computing --- except for someone with a quantuum computer > *AND* knowledge of some secret quantuum state stored in a quantuum > computer only available to the NSA. > > Even more, would it be possible to create such a thing in such a way > that NSA^H^H^H NIST could introduce non-transparently in such a way > that the public world *thinks* that that the encryption algorithm > against all quantuum computers, but in fact there is a trapdoor that > only the NSA could utilize --- but no one knows this? > > Of course, people wouldn't have to use the new quantuum resistant > encryption algorithms, but if quantuum computer were a thing, they > would be screwed if they kept on using AES, so the NSA would be quite > happy with that outcome. > > And of course, if it was introduced non-transparently, then China and > Russia and Iran would be able to demand that a backdoor be engineered > for them, because no one would know that the backdoor existed. And if > someone future Snowden leaks this, all of the current fear-mongering > from James Comey and Keith Alexander would help prepare the ground in > case it does leak. (Or maybe they plan to introduce this > transparently, if they've learned their lesson from the Snowden > disclosures.) > > All of this is premised by the hypothesis that it is possible to > create quantuum-resistant encryption system for everyone but NSA, and > preferably (for the NSA) in such a way that it's not possible to > modify the encryption system so that backdoor can't be removed or > changed so that China and Russia could have their own > quantuum-backdoored encryption algorithm, and force companies who want > to do business in those countries to use their alternate-backdoored > encryption. Is this possible? > It is certainly possible to design a protocol that has effective countermeasures to prevent this. Consider the problem from the attacker's point of view. 
A public key encryption scheme has three sets of parameters: Private Key K Public Key P Shared parameters. S A crypto system is QM secure if someone with a quantum computer cannot obtain the private key or decrypt messages using the Public key and shared public parameters using a QC. What you are suggesting here is a backdoor in the shared parameters such that these are chosen so that the attacker has leverage but other users of the system are not. In effect we have a second public key crypto system built into the shared parameters and the attacker has generated these shared parameters as some function of a master secret X so that S = f(X). Call this type of system 'backdoor QM insecure'. Note that X has to be sufficiently large that the system is still secure if the attacker doesn't know it. Otherwise this isn't a cipher with a hidden NSA backdoor, it is a cipher that has a set of weak keys that enable an attack method known to the attacker and not anyone else. Call this system 'unknown attack QM insecure' While it is certainly possible to imagine that a function f(X) might exist, it is fairly clear that from this point on, any and all parameters used in any standard for public key cryptography must be rigid so that an attacker cannot choose a set that provides them with a backdoor. MD4 introduced the notion of using parameters taken from an arithmetic function (e, pi, etc). The CFRG is looking at fast primes. So yes there is a risk here but we already have an effective control: rigid parameters. According to my source, the relevant NSA doctrine is NOBUS 'nobody but us'. So they might peddle an unknown attack QM insecure system like they did with Dual_ECRNG, but unknown attack would leave US IT systems vulnerable to attack by China, Russia, etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at kjorling.se Fri Aug 28 14:38:02 2015 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Fri, 28 Aug 2015 18:38:02 +0000 Subject: [Cryptography] A thought about backdoors and quantuum-resistant encryption In-Reply-To: <20150828170440.GA15982@thunk.org> References: <20150828170440.GA15982@thunk.org> Message-ID: <20150828183802.GV7218@yeono.kjorling.se> On 28 Aug 2015 13:04 -0400, from tytso at mit.edu (Theodore Ts'o): > Would it > be possible to create an encryption algorithm which is resistant to > quantuum computing --- except for someone with a quantuum computer > *AND* knowledge of some secret quantuum state stored in a quantuum > computer only available to the NSA. I don't know, and I don't know enough to even know where to begin speculating. > /.../ but if quantuum computer were a thing, they > would be screwed if they kept on using AES, so the NSA would be quite > happy with that outcome. Why does that follow? It is my understanding that based on current knowledge, quantum computing, when applied to symmetric cryptography, causes the security level to drop to the square root of what it used to be. So a cipher offering a 128-bit security level now offers a 64-bit security level (because sqrt(2^128) = 2^64) against an adversary that has a sufficiently powerful quantum computer that they are willing to throw at the problem. Which is a Bad Thing (tm). However, a today 256-bit security level cipher in this hypothetical quantum computing world "only" offers the equivalent of 128-bit security, which is Not Great (tm) but certainly not Terrible (r). 
So _symmetric_ cryptography is the easy part to solve: we just need to double the key lengths, and figure out what that means in practice. In situations where in a no-quantum-computing world we might have used AES-256 or AES-128, we might use XES-512 and XES-256 [1] respectively for a similar effective security level. In this hypothetical future world, symmetric keys remain short enough that key management is not significantly complicated compared to what it is like today. [1] XES is obviously the next great thing after sliced bread: eXtensible Encryption Standard. Because anything extensible is by definition great. I mean, just look at how easy XML is! -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se OpenPGP B501AC6429EF4514 https://michael.kjorling.se/public-keys/pgp “People who think they know everything really annoy those of us who know we don’t.” (Bjarne Stroustrup) From ji at tla.org Fri Aug 28 14:40:55 2015 From: ji at tla.org (John Ioannidis) Date: Fri, 28 Aug 2015 14:40:55 -0400 Subject: [Cryptography] AES Broken? In-Reply-To: References: <55DFCEA2.2020101@sonic.net> Message-ID: On Fri, Aug 28, 2015 at 1:26 PM, Salz, Rich wrote: > It's hilarious. "Schneier (in joint work with Euclid)" The address is an > actual bar. Gavekort is Norwegian for gift certificate (google it). > Etc. > Go to the top-level of the website, https://mjos.fi. Someone's made a > very good joke. > My favorite was the last keyword: serpessence :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Fri Aug 28 14:48:39 2015 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Fri, 28 Aug 2015 18:48:39 +0000 Subject: [Cryptography] AES Broken? In-Reply-To: References: <55DFCEA2.2020101@sonic.net> Message-ID: <20150828184839.GP9021@mournblade.imrryr.org> On Fri, Aug 28, 2015 at 10:00:14AM -0700, Bill Cox wrote: > I liked this part. They estimate the "security" of AES at: > > 2^127.88476373519208801711541761570483401788 > with 2^127 precomputation. > Is there any reason for stating a shot-in-the-dark security guess to 42 > significant digits? Yes, this matches many significant figures of $\pi \cdot 10^{38}$: Wolframalpha's free interface displays log_2(pi * 10^38) as: 127.88476373519208801711541761570483401788 206986169710 (I added a space before the additional digits that show the security of AES even more precisely). -- Viktor. From hbaker1 at pipeline.com Fri Aug 28 14:53:11 2015 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 28 Aug 2015 11:53:11 -0700 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: <55DFBFA1.50803@sonic.net> Message-ID: At 10:09 AM 8/28/2015, Phillip Hallam-Baker wrote: >On Thu, Aug 27, 2015 at 9:55 PM, Ray Dillinger wrote: > >On 08/22/2015 08:22 AM, Henry Baker wrote: > >> Visual cryptography [26] is a cryptographic secret-sharing >> scheme where visual information is split into multiple shares, >> such that one share by itself is indiscernible from random >> noise. The web may be *forced* into using something like "visual cryptography" in order to get around "clickjacking", whereby the user is tricked into clicking on the wrong button (or the right button for the wrong reasons). It's getting harder & harder for a web site to know & guarantee that what it thinks is being displayed is actually what a user sees & is agreeing to. 
See Dan Kaminsky's recent DEFCON talk for more info: https://www.youtube.com/watch?v=9wx2TnaRSGs DEF CON 23 - Dan Kaminsky - I Want These * Bugs off My * Internet From bear at sonic.net Fri Aug 28 15:36:10 2015 From: bear at sonic.net (Ray Dillinger) Date: Fri, 28 Aug 2015 12:36:10 -0700 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: <55DFBFA1.50803@sonic.net> Message-ID: <55E0B82A.3030205@sonic.net> On 08/28/2015 10:09 AM, Phillip Hallam-Baker wrote: > On Thu, Aug 27, 2015 at 9:55 PM, Ray Dillinger wrote: > >> No matter how hard you squeeze the snakes, you just can't get >> enough oil to make it worth the effort. > That is overly negative. Chaum did some interesting stuff a while back for > voting. My issue was that the proposed system requires both halves of the secret to be present, in the same hardware, at the same time. This nullifies the value of separating them at all. It is no more secure than any other cryptosystem where the data are present in toto, and will be attacked by the same methods. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From dj at deadhat.com Fri Aug 28 16:57:38 2015 From: dj at deadhat.com (dj at deadhat.com) Date: Fri, 28 Aug 2015 20:57:38 -0000 Subject: [Cryptography] Augmented Reality Encrypted Displays In-Reply-To: References: <55DFBFA1.50803@sonic.net> Message-ID: <8b7ca4387d0291850aaab17d663435fe.squirrel@www.deadhat.com> > At 10:09 AM 8/28/2015, Phillip Hallam-Baker wrote: It's getting harder & > harder for a web site to know & guarantee that what it thinks > is being displayed is actually what a user sees & is agreeing > to. See Dan Kaminsky's recent DEFCON talk for more info: > Roughly 100% of the times I click on a banner ad, it is not because the ad is hiding. It is because the ad renders and reformats the screen while I'm aiming for something else. So everything shifts down and my click hits the ad. I would not be at all surprised to find people doing this deliberately (delaying the rendering with scripts to resize late) to get more clicks. From tytso at mit.edu Fri Aug 28 23:14:17 2015 From: tytso at mit.edu (Theodore Ts'o) Date: Fri, 28 Aug 2015 23:14:17 -0400 Subject: [Cryptography] AES Broken? In-Reply-To: References: <55DFCEA2.2020101@sonic.net> Message-ID: <20150829031417.GF10211@thunk.org> On Sat, Aug 29, 2015 at 01:08:17AM +1000, Dave Horsfall wrote: > > I mean, in a Monty Python sort of way, this is how not to submit a paper. This would be a great paper to submit to one of those SPAM Conferences in China --- to see if a paper that contains the following would get accepted by their program committee: We see that the algebraic degree of the underlying transform is deg S ′ = −1. This degree is very low, in fact lower than zero, the degree recommended by the Chinese Government (“sinkhole transform”). For reference, the U.S. National Security Agency recommends degree 1 (“the identity transform”) for all but the most confidential data. AES has been clearly designed to offer even lower security than these proposals against Algebraic Attacks of Courtois. 
- Ted From iang at iang.org Sat Aug 29 08:20:36 2015 From: iang at iang.org (ianG) Date: Sat, 29 Aug 2015 13:20:36 +0100 Subject: [Cryptography] A thought about backdoors and quantuum-resistant encryption In-Reply-To: <20150828170440.GA15982@thunk.org> References: <20150828170440.GA15982@thunk.org> Message-ID: <55E1A394.6020509@iang.org> On 28/08/2015 18:04 pm, Theodore Ts'o wrote: > I don't know if this is possible, because I don't know enough about > quantuum computing, and I don't know enough a about "quantuum > resistant encryption". > > Suppose quantuum computing is a thing, and suppose NSA^H^H^H NIST > supplies us with a quantuum-resistant encryption algorithm. Would it > be possible to create an encryption algorithm which is resistant to > quantuum computing --- except for someone with a quantuum computer > *AND* knowledge of some secret quantuum state stored in a quantuum > computer only available to the NSA. As PHB indicates, prevailing thesis is that this means the algorithm is a public key algorithm with the NSA holding the private key. Or perhaps the weak keys argument. It's certainly possible. If it was done using paramaters, that would be one thing. If it was done using some scientific understanding, that would be another, more risky thing, because others could figure it out. > Even more, would it be possible to create such a thing in such a way > that NSA^H^H^H NIST could introduce non-transparently in such a way > that the public world *thinks* that that the encryption algorithm > against all quantuum computers, but in fact there is a trapdoor that > only the NSA could utilize --- but no one knows this? It's certainly something they would try if they could get away with it. If one has followed the DUAL_EC story, and recent revelations about Crypto AG and the NSA mission statements that directly seek to pervert commercial cryptography, one can only conclude they would do it if they thought they could get away with it. > Of course, people wouldn't have to use the new quantuum resistant > encryption algorithms, but if quantuum computer were a thing, they > would be screwed if they kept on using AES, so the NSA would be quite > happy with that outcome. > > And of course, if it was introduced non-transparently, then China and > Russia and Iran would be able to demand that a backdoor be engineered > for them, because no one would know that the backdoor existed. And if > someone future Snowden leaks this, all of the current fear-mongering > from James Comey and Keith Alexander would help prepare the ground in > case it does leak. (Or maybe they plan to introduce this > transparently, if they've learned their lesson from the Snowden > disclosures.) > > All of this is premised by the hypothesis that it is possible to > create quantuum-resistant encryption system for everyone but NSA, and > preferably (for the NSA) in such a way that it's not possible to > modify the encryption system so that backdoor can't be removed or > changed so that China and Russia could have their own > quantuum-backdoored encryption algorithm, and force companies who want > to do business in those countries to use their alternate-backdoored > encryption. Is this possible? I think on the whole it is possible. It is also likely that they have thought of it. And they are spending money on that area in a big way. Whether it happens or not is too many hypotheticals for us to seriously predict at this stage. Which is to say, what the risk level is and whether to mitigate is too hard to tell. 
And normal Occam's razor logic on risk analysis would say that if you can't model it, treat it as if it doesn't exist. iang From kevin.w.wall at gmail.com Sat Aug 29 14:13:08 2015 From: kevin.w.wall at gmail.com (Kevin W. Wall) Date: Sat, 29 Aug 2015 14:13:08 -0400 Subject: [Cryptography] Using crypto to address clickjacking (was "Re: Augmented Reality Encrypted Displays") Message-ID: On Fri, Aug 28, 2015 at 2:53 PM, Henry Baker wrote: > At 10:09 AM 8/28/2015, Phillip Hallam-Baker wrote: >>On Thu, Aug 27, 2015 at 9:55 PM, Ray Dillinger wrote: >> >>On 08/22/2015 08:22 AM, Henry Baker wrote: >> >>> Visual cryptography [26] is a cryptographic secret-sharing >>> scheme where visual information is split into multiple shares, >>> such that one share by itself is indiscernible from random >>> noise. > > The web may be *forced* into using something like "visual > cryptography" in order to get around "clickjacking", whereby > the user is tricked into clicking on the wrong button (or the > right button for the wrong reasons). It's getting harder & > harder for a web site to know & guarantee that what it thinks > is being displayed is actually what a user sees & is agreeing > to. See Dan Kaminsky's recent DEFCON talk for more info: Slightly OT for crypto, but I'll toss this out. I just think crypto is overkill for addressing clickjacking attacks. Just about every recent version of every modern browser supports the X-Frame-Options HTTP response header which, when used correctly and consistently, is effective in preventing all known clickjacking (aka, UI redress) attacks. It's also dirt simple to deploy and can even be deployed separately from the application in a reverse proxy (e.g., in Apache HTTPD using mod_headers and mod_proxy). Defeating clickjacking is not going to require some complicated crypto-based solution. In fact, it is among the simplest web-based attacks to prevent. The reason it is so prevalent has more to do with developer ignorance than anything else. (That, and many applications think they are already defeating it with very simplistic anti-framing JavaScript code that is usually easily defeated. A JavaScript solution is also possible [and useful for older browsers], but it requires more than the naive JavaScript solutions normally deployed.) Using Content Security Policy is another way to address clickjacking attacks that can provide finer-grained control, but CSP is much more difficult to deploy. If you're concerned about clickjacking in the malvertising that Kaminsky refers to because the developers are lax in protecting you, you can always use the NoScript plugin in Firefox. (That protects you from some pretty nasty Cross-Site Scripting attacks as well.) Ultimately though, a solution similar to W3C's IronFrame proposal that Kaminsky talked about will probably become THE clickjacking solution if only because we can't count on web developers to secure their applications from this attack. (If we could, this attack would have been wiped out by now.) And while NoScript is great for security, it's a little more intrusive than most users are willing to tolerate and it only works on Firefox, so we need a general solution that is part of the browser. It was hoped that X-Frame-Options would be that solution, but unfortunately that requires the cooperation of web developers who are actually aware of the clickjacking issues and care enough to fix it. -kevin -- NSA: All your crypto bits are belong to us. 
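P.S. To show just how dirt simple the header deployment can be, here is a rough sketch using nothing but Python's standard library (the handler name, port, and page content are made up for illustration; a real deployment would more likely set the same headers in the app framework or the reverse proxy):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class NoFramingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><button>Confirm payment</button></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Refuse to be rendered inside any frame or iframe.
            self.send_header("X-Frame-Options", "DENY")
            # CSP equivalent for browsers that honor it; finer-grained
            # policies such as frame-ancestors 'self' are also possible.
            self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), NoFramingHandler).serve_forever()

The point is just that the whole defense is two response headers sent with every page; whether they come from the application or a proxy in front of it is an operational choice.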
From 17Q4MX2hmktmpuUKHFuoRmS5MfB5XPb at Safe-mail.net Sat Aug 29 22:13:43 2015 From: 17Q4MX2hmktmpuUKHFuoRmS5MfB5XPb at Safe-mail.net (17Q4MX2hmktmpuUKHFuoRmS5MfB5XPb at Safe-mail.net) Date: Sat, 29 Aug 2015 22:13:43 -0400 Subject: [Cryptography] Drop Zone v0.1 released Message-ID: Announcing the first release of Drop Zone, a new e-commerce system that uses The Bitcoin network to preserve the opacity of supply chain activity. It is completely decentralized because it is built on The Blockchain. You can see screenshots of the project at the links below: Github Link: https://github.com/17Q4MX2hmktmpuUKHFuoRmS5MfB5XPbhod/dropzone_ruby Rubygem Link: https://rubygems.org/gems/dropzone_ruby It is a command-line-only client for now, but it is perfectly functional, written in Ruby, and tested to the hilt. - Install Ruby. This program was built on version 2.2.1 - gem install dropzone_ruby - Run "dropzone help" to get started - Fund a testnet and mainnet key with BTC/tBTC - Create a seller or buyer profile, and start using Drop Zone The basic commands of the protocol are simple to execute, and very simple to use, though I anticipate that the average non-dev will not find picking it up an intuitive experience. Obviously, just as Bitcoin was when it was released, this is an alpha release and it is experimental. As is the case with even the most meticulous code, there will have to be significant refactoring and updates as bugs are found. Below, and within the code itself, I have listed the bugs that are already known. The project is built with separation of concerns in mind, serves as a de facto reference for implementation in any language, and is trivially extensible by any interested programmer. This project leverages the anonymity, time-stamping, and persistence of the Bitcoin test network for its buyer/seller communications. The security, immutability, and "network effects" of the Bitcoin mainnet are leveraged for the advertisement and discovery of goods and reputational events by market participants. Bitcoin makes the security of message passing within Drop Zone a trivial concern. It also allows users to hide behind their addresses. Simply, while this is the first release of Drop Zone, it achieves what I outline in the white paper, and outsources all scalability concerns to The Blockchain. While development is a humble, sometimes thankless exercise, I would like Drop Zone to stand on its own as the project that inspired other developers to begin their work on The Blockchain. If this works, it is a new era for Bitcoin - the protocol that shows that The Blockchain is made for more than simple value transfers. It is a robust and valuable application for the purpose of disintermediating counterparty risk amongst parties engaging in high-risk transactions. While I have no certainty of it, I think that Drop Zone stands a chance of being the first Blockchain protocol to achieve widespread use. It is a scalable application that is built to use the Blockchain for nothing more than risk mitigation by way of an immutable queue of messages. Unlike many of the systems proposed today, Drop Zone allows for uptime of stores; as long as The Blockchain is up, a store is as well. This prevents sellers from having to take on the risk of hosting their own store or signing up for hosted services in the cloud. Drop Zone works as a secure message passing protocol, a marketplace, and has elements of a reputation system built in. 
While this first, command-line version is not for the faint of heart, I hope to see an ecosystem of mobile integrations, block explorers, and subsequent commerce as the community realizes the potential of Drop Zone. Today is a Beautiful Day, Miracle Max (17Q4MX2hmktmpuUKHFuoRmS5MfB5XPbhod / mw8Ge8HDBStKyn8u4LTkUwueheFNhuo7Ch ) From ryacko at gmail.com Sun Aug 30 04:36:14 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Sun, 30 Aug 2015 01:36:14 -0700 Subject: [Cryptography] Does Simon effectively use the Toffoli Gate as its Feistel function? Message-ID: Does Simon effectively use the Toffoli Gate as its Feistel function? From bascule at gmail.com Sun Aug 30 19:06:08 2015 From: bascule at gmail.com (Tony Arcieri) Date: Sun, 30 Aug 2015 16:06:08 -0700 Subject: [Cryptography] Using crypto to address clickjacking (was "Re: Augmented Reality Encrypted Displays") In-Reply-To: References: Message-ID: On Sat, Aug 29, 2015 at 11:13 AM, Kevin W. Wall wrote: > > The web may be *forced* into using something like "visual > > cryptography" in order to get around "clickjacking", whereby > > the user is tricked into clicking on the wrong button (or the > > right button for the wrong reasons). It's getting harder & > > harder for a web site to know & guarantee that what it thinks > > is being displayed is actually what a user sees & is agreeing > > to. See Dan Kaminsky's recent DEFCON talk for more info: > > Slightly OT for crypto, but I'll toss this out. I just think crypto > is overkill for addressing clickjacking attacks. > 100% agree with this. > Just about every recent version of every modern browser supports > the X-Frame-Options HTTP response header which, when used > correctly and consistently, is effective in preventing all known > clickjacking (aka, UI redress) attacks. There are a few deficiencies with X-Frame-Options, which is what Dan Kaminsky's talk was about. X-Frame-Options: DENY is the nuclear option to prevent clickjacking. This prevents content embedding. But what if we want to embed a clickable widget in another page, but prevent clickjacking? What's really needed is a way for iframes to reason about how they're being embedded in other content. This was the actual subject matter of Dan Kaminsky's talk. He presented a concept called "IronFrame", but more recent W3C work seems to be around a Position Observer API: https://github.com/slightlyoff/PositionObserver/blob/master/explainer.md Anyway, no crypto necessary. And that's good: I am all for solving web security problems *without* crypto! -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From brennan.tobias at openmailbox.org Mon Aug 31 04:28:52 2015 From: brennan.tobias at openmailbox.org (brennan.tobias at openmailbox.org) Date: Mon, 31 Aug 2015 09:28:52 +0100 Subject: [Cryptography] Ratcheting Message-ID: <5b4008bb9f9bbfc2c023631fa1fa283d@openmailbox.org> Hello! I'd like to open a thread about ratcheting in messaging between two parties. In particular, the security it aims to provide and what threats it protects against. # Background It looks like any good encrypted messenger must include a cryptographic ratchet [1]: the part of a cryptographic protocol that constantly refreshes keys to reduce the impact of a key being leaked. A couple of examples of this are the OTR ratchet [2] and Axolotl [3] (also used in Pond [4]). I found these protocols a bit complicated and, in the interest of designing something simple, got thinking about the purpose of ratchets. 
Also correct me if I'm wrong, but some of the complexity in Axolotl is because it's designed to work over a lot of different transports, including SMS? So if restricted to TCP, a slightly simpler protocol should be possible. There seem to be 4 levels of security that are provided by increasingly strong ratchets: * (A) None. Alice and Bob just use their public keys to exchange a secret upon which to build a secure channel. * (B) Ephemeral key establishment. Alice and Bob generate and exchange ephemeral keys at the beginning and use those to establish a shared secret. * (C) Consistent key update. Like B, but generating or deriving a new shared secret every few messages. * (D) Consistent ephemeral key reestablishment. Doing a new ephemeral key establishment every few messages. The problem with (A) is that a passive capture of the traffic can be decrypted if Alice's or Bob's key is compromised in the future; (B) fixes this. (B) should be secure against a man-in-the-middle attack, given that the public-key communication they use is authenticated. The worry about (B) that demands stronger security is that entire conversations (that might last days) are encrypted using a single key; if that key were leaked, the entire conversation could be decrypted. (C) attempts to fix this by "ratcheting" a key forwards (e.g. using a key derivation function); in that case a compromise at message 'm' only leads to decryption of later messages: m, m+1, m+2, ... The aim of (D) is to limit the extent of a compromise to only a small number of messages after the compromised one: m, m+1, m+2 (say). # Threat Modelling My understanding is that (A) is already secure against the Dolev-Yao model/semantic security, so we are looking for realistic threats outside of that model. The main one is conversations where Alice and Bob give up their keys after the conversation, and (B) is secure against this. What sorts of attacks exist that might be able to compromise temporary secret keys during a conversation? Some ideas: * (T1) someone could take the device Bob is using to talk to Alice. * (T2) an active attacker might perform an exploit on the software and do a Heartbleed-style memory read to compromise the keys * (T3) same as T2 but full remote code execution * anything missing here? Trying to judge the importance of C and D relative to the cost of implementing them and what resources they require: I don't think anything cryptographic can protect against T1, so users should attempt to protect against it with physical security. C is quite easy to achieve in a couple of different ways, only requiring an erasing PRNG or a key derivation function (a rough sketch of what I mean is below, after the questions). That protects past messages against T2 and T3, since the older keys will have been erased from memory. D is what things like OTR and Axolotl do; one very heavyweight way to do it is to perform the full ephemeral establishment for every message. Or you could stagger it. This gets complicated and hard to get right. I think it only protects against T2 though, not T3, so I doubt this is worth it. A much better way to protect against T2 and T3 would be to secure the implementation from vulnerabilities. Of course this is hard. # Questions What do you think? Is my analysis realistic? What important points am I missing? Do we know of many real-life examples of attacks that ratchets have or haven't protected against? 
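Here is the rough sketch of (C) I mentioned, just to be concrete about how little machinery it needs. This is Python with only the standard library; the labels and the starting secret are made up, there is no authentication or transport, and a real design would use a proper KDF negotiated by the protocol:

    import hashlib, hmac

    def ratchet(chain_key: bytes):
        """Derive a one-off message key and the next chain key from the
        current chain key; the caller then forgets the old chain key."""
        message_key = hmac.new(chain_key, b"message-key", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"chain-key", hashlib.sha256).digest()
        return message_key, next_chain_key

    # Both sides start from the shared secret established in step (B).
    chain = hashlib.sha256(b"shared secret from the ephemeral handshake").digest()

    for i in range(3):
        mk, chain = ratchet(chain)   # old chain key is overwritten/erased
        print(i, mk.hex()[:16])

Because the derivation is one-way, stealing the chain key at message m tells the attacker nothing about the keys for m-1, m-2, ...; it does, as I said, expose m+1 onwards, which is what (D) tries to limit.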
# Links * [1] https://whispersystems.org/blog/advanced-ratcheting/ * [2] https://otr.cypherpunks.ca/Protocol-v3-4.0.0.html (No explicit mention of "ratchet" but the section about Exchanging data covers it) * [3] https://github.com/trevp/axolotl/wiki * [4] https://pond.imperialviolet.org/tech.html From peter at m-o-o-t.org Mon Aug 31 09:28:17 2015 From: peter at m-o-o-t.org (Peter Fairbrother) Date: Mon, 31 Aug 2015 14:28:17 +0100 Subject: [Cryptography] Does Simon effectively use the Toffoli Gate as its Feistel function? In-Reply-To: References: Message-ID: <55E45671.1060802@m-o-o-t.org> On 30/08/15 09:36, Ryan Carboni wrote: > Does Simon effectively use the Toffoli Gate as its Feistel function? No. The Simon round function takes three rotated copies of its input, S1, S8 and S2, but produces only a single output word: the AND of S1 and S8 is XORed with S2. Nor should it - reversibility in a "Feistel function" is at best suspect. -- Peter Fairbrother From erikgranger at gmail.com Mon Aug 31 10:41:08 2015 From: erikgranger at gmail.com (Erik Granger) Date: Mon, 31 Aug 2015 10:41:08 -0400 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ I read this article and as a non-expert in quantum computing, I'm wondering what sort of impact quantum computing will have on our encryption. Will it just make brute forcing easier, thus requiring certificates to have a shorter shelf life? Or is it something more worrying? Less worrying? From steveweis at gmail.com Mon Aug 31 12:34:06 2015 From: steveweis at gmail.com (Steve Weis) Date: Mon, 31 Aug 2015 09:34:06 -0700 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: On Mon, Aug 31, 2015 at 7:41 AM, Erik Granger wrote: > > www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ > > I read this article and as a non-expert in quantum computing, I'm wondering what sort of impact quantum computing will have on our encryption. Will it just make brute forcing easier, thus requiring certificates to have a shorter shelf life? Or is it something more worrying? Less worrying? Here's a good summary on post-quantum crypto: http://cr.yp.to/talks/2008.10.18/slides.pdf I am not losing sleep over quantum computing, but it's prudent to have some standards to fall back on. Academia has been having conferences on post-quantum crypto (http://pqcrypto.org/) for 10 years, so the NSA is not saying anything new or scary. In terms of impact, large quantum computers will break public key factoring-based cryptography (e.g. RSA), discrete logarithm-based crypto (e.g. DSA), and elliptic curve crypto. Symmetric key algorithms aren't directly broken, but may require doubling the key length to maintain the same level of security. Cryptosystems based on lattices, codes, hash functions, and multivariate quadratic equations are not expected to be impacted. The PQ Crypto conference is talking about defining standards for cryptosystems based on these primitives. As far as I know, the record for factoring with Shor's algorithm is the number 21. Larger numbers have been factored (like 143 or 56,153), but those are special cases and not relevant to RSA keys. As a side note, D-Wave is talking about 1000-qubit adiabatic quantum computers. 
I don't think that is at all relevant to running Shor's algorithm or crypto. From leichter at lrw.com Mon Aug 31 12:36:47 2015 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 31 Aug 2015 12:36:47 -0400 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: > On Aug 31, 2015, at 10:41 AM, Erik Granger wrote: > > www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ > I read this article and as a non-expert in quantum computing, I'm wondering what sort of impact quantum computing will have on our encryption. Will it just make brute forcing easier, thus requiring certificates to have a shorter shelf life? Or is it something more worrying? Less worrying? > There are generic attacks, and there are attacks on specific techniques. The best general attack known is Grover's algorithm, which allows you to search through N elements to find the one for which some predicate is true (e.g., the one encryption key that produces a known decrypted result) in O(sqrt(N)) time (as opposed to O(N) classical time). There are some restrictions on the predicate, but it gets really complicated to determine with certainty what can and can't be computed this way (assuming, of course, you have a quantum computer!). So it's probably safer to just assume that "brute force search on a quantum computer is O(sqrt(N))". The net effect is that to retain the same level of security against brute force search that you had on a classical machine, you have to double the number of bits in your key. This isn't all that big a deal: No (classical) technology we can conceive of today could brute force an AES-128 key, so simply going to AES-256 gives you the same safety in a quantum world. The specific attacks depend on the details of the cryptographic algorithm. What got this whole field started is Shor's algorithm, which allows factoring on a quantum computer in polynomial time. (The best known classical technique runs in sub-exponential, but still super-polynomial, time, and it's not actually known whether factoring can be done in classical polynomial time.) The detailed numbers, again if one could build an appropriate quantum machine, would require that secure RSA keys be large enough to be impractical. Related algorithms are effective against the Discrete Logarithm problem for integers mod N (used in Diffie-Hellman key exchange) and the related algorithms that use elliptic curves rather than integers mod N. So most currently-used public key algorithms are attackable if quantum computers are practical. (Just for comparison: The current record for factoring an integer using a quantum computer is 56153, which you could factor by hand. Then again, 20-odd years ago, we didn't know that quantum factoring algorithms existed, much less that they could be implemented at all, so we can't be smug about this - the field is evolving *very* rapidly.) We have no idea whether there are specific quantum attacks against, say, AES. It seems highly unlikely. (The same is, in some sense, true of classical specific attacks against AES. But the theory and experience in support of the claim that such a classical attack is highly unlikely is much better developed, and has been around a lot longer.) If you're being paranoid, all you can really say is "we don't know"; but that doesn't give you much in the way of alternatives. -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hbaker1 at pipeline.com Mon Aug 31 13:49:50 2015 From: hbaker1 at pipeline.com (Henry Baker) Date: Mon, 31 Aug 2015 10:49:50 -0700 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: At 07:41 AM 8/31/2015, Erik Granger wrote: >www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ > >I read this article and as a non-expert in quantum computing, I'm wondering what sort of impact quantum computing will have on our encryption. > >Will it just make brute forcing easier, thus requiring certificates to have a shorter shelf life? > >Or is it something more worrying? > >Less worrying? Perhaps the NSA is more worried about people who *might* run a quantum computation? http://arxiv.org/abs/quant-ph/9907007 Counterfactual Computation Suppose that we are given a quantum computer programmed ready to perform a computation if it is switched on. Counterfactual computation is a process by which the result of the computation may be learnt *without actually running the computer*. [Something D-Wave seems to have become quite good at...] http://arxiv.org/pdf/quant-ph/9907007v2 From crypto at senderek.ie Mon Aug 31 14:16:50 2015 From: crypto at senderek.ie (Ralf Senderek) Date: Mon, 31 Aug 2015 20:16:50 +0200 (CEST) Subject: [Cryptography] Ratcheting Message-ID: On Mon, 31 Aug 2015 10:28:52 Tobias Brennan writes: > What sorts of attacks exist that might be able to compromise temporary > secret keys during a conversation? Some ideas: > * (T1) someone could take the device Bob is using to talk to Alice. > * (T2) an active attacker might perform an exploit on the software and > do a Heartbleed-style memory read to compromise the keys > * (T3) same as T2 but full remote code execution > * anything missing here? You asked for a realistic threat model. In the case of T2, when someone is able to read substantial parts of memory, there is not much left to protect. But I think the more realistic attack is malware on a user's computer that allows "full remote code execution" with the user's access permissions. In this case I find it an interesting idea to *separate* the secret key used to encrypt the conversation from the process environment the user (and the malware) has access to. There are several methods to perform this separation. One is to put the message keys on separate hardware (like the Crypto Bone); another is to place the secret key where only a process with higher access permissions can read it, and to let the user (and the malware) authorize the use of these secrets only through carefully crafted code that elevates its permissions for the required action. It is essential that neither the user nor the malware can change this code. In this case the user can be impersonated by the malware, but the secret keys remain unknown to the attacker, so that m2, m3, ... cannot be decrypted. And of course there is a combination of both. --ralf From ryacko at gmail.com Mon Aug 31 17:03:39 2015 From: ryacko at gmail.com (Ryan Carboni) Date: Mon, 31 Aug 2015 14:03:39 -0700 Subject: [Cryptography] Does Simon effectively use the Toffoli Gate as its Feistel function? Message-ID: Good. It would be terrible if Simon had the same security level as a series of rotations and XORs alone. 
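For anyone following along, here is a quick sketch of the round for the 16-bit word size (Simon32), per the published spec; the key schedule is omitted and the test values are arbitrary. It shows where that single AND sits and why the Feistel structure never needs to invert the function:

    def rol16(x, r):
        """Rotate a 16-bit word left by r bits."""
        return ((x << r) | (x >> (16 - r))) & 0xFFFF

    def simon_f(x):
        """Simon's mixing function: (x<<<1 & x<<<8) ^ x<<<2.
        The AND is the only non-linear operation in the cipher."""
        return (rol16(x, 1) & rol16(x, 8)) ^ rol16(x, 2)

    def simon_round(x, y, k):
        """One Feistel round: f(x) and the round key are XORed into the
        other half, then the halves swap."""
        return y ^ simon_f(x) ^ k, x

    def simon_unround(x2, y2, k):
        """Invert a round without ever inverting f itself -- the Feistel
        structure only needs f to be computable, not reversible."""
        return y2, x2 ^ simon_f(y2) ^ k

    # Round-trip check with arbitrary (illustrative) values:
    x, y, k = 0x1234, 0xABCD, 0x0F0F
    assert simon_unround(*simon_round(x, y, k), k) == (x, y)

So the (S1 & S8) ^ S2 shape does look like the third wire of a Toffoli gate, but the other two wires are simply discarded: 3 inputs in, 1 word out, and decryption never has to undo it.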
From brk7bx at virginia.edu Mon Aug 31 20:19:32 2015 From: brk7bx at virginia.edu (Benjamin Kreuter) Date: Mon, 31 Aug 2015 20:19:32 -0400 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: <1441066772.20111.120.camel@krauser.san> On Mon, 2015-08-31 at 10:41 -0400, Erik Granger wrote: > www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ > > I read this article and as a non-expert in quantum computing, I'm wondering > what sort of impact quantum computing will have on our encryption. Will it > just make brute forcing easier, thus requiring certificates to have a > shorter shelf life? Or is it something more worrying? Less worrying? More worrying. A scalable quantum computer would mean that cryptosystems based on RSA and discrete logarithms (and related assumptions), including elliptic curves, could not be considered secure. It would mean almost all of the public-key crypto in use today would need to be replaced. The good news is that we have candidate cryptosystems that are secure against quantum computers. The bad news is that in many cases it is unclear what the real security level of those systems is, and performance is a possible concern (huge public keys and sometimes lots of computation). We also do not have much real-world experience with those cryptosystems. Also, remember that the key word is *scalable*. There are tons of quantum computers out there, but none of them scale to arbitrarily large problem sizes (not counting limited computers like D-Wave, since that is irrelevant to crypto). We know how to increase key sizes arbitrarily, so a quantum computer that does not scale is not hard to defeat. -- Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From mitch at niftyegg.com Mon Aug 31 22:15:13 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 31 Aug 2015 19:15:13 -0700 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: On Mon, Aug 31, 2015 at 7:41 AM, Erik Granger wrote: > www.engadget.com/2015/08/30/nsa-quantum-resistant-encryption/ > > I read this article and as a non-expert in quantum computing, I'm > wondering what sort of impact quantum computing will have on our > encryption. Will it just make brute forcing easier, thus requiring > certificates to have a shorter shelf life? Or is it something more > worrying? Less worrying? > If you are the NSA you need to worry. A review of history is instructive. Example: the computers and tools at Bletchley Park were unknown (or under-appreciated) to the Germans. The effective decryption of messages was a critical aspect that influenced the outcome of WW2. One tender spot in history was the early days and onset of WW2, a lesson that some are apparently forgetting. Decrypting a critical communication one hour too late... is not a goal. We are seeing glimpses of comsec failure in all corners of industrial and government systems. Some is simple foolish blundering in management. Some is encryption failure that allowed undetected access to data and data transfers. One questionable strategy is the central servers that some agencies use for mail. Simply too many eggs in one basket. 
Quantum computing does not help the recent failures of management, policy and bugs but it does need to be something the NSA is ignorant about. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Mon Aug 31 22:20:40 2015 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 31 Aug 2015 19:20:40 -0700 Subject: [Cryptography] NSA looking for quantum-computing resistant encryption. How will encryption be affected by quantum computing In-Reply-To: References: Message-ID: On Mon, Aug 31, 2015 at 7:15 PM, Tom Mitchell wrote something inside out: > Quantum computing does not help the recent failures of management, policy and bugs > but it does need to be something the NSA is ignorant about. Rather they _need_ to be expert. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: