From ron at flownet.com Sun May 1 02:23:35 2016 From: ron at flownet.com (Ron Garret) Date: Sat, 30 Apr 2016 23:23:35 -0700 Subject: [Cryptography] Mathematics of variable substitutions? In-Reply-To: References: Message-ID: On Apr 30, 2016, at 3:57 PM, Watson Ladd wrote: > On Sat, Apr 30, 2016 at 4:17 AM, Bill Cox wrote: >> I was hoping someone could point me in the direction of relevant mathematics >> where we examine what equations can be converted to other equations using >> variable substitutions, in ways that are efficiently computable modulo a >> prime. For example, we can easily convert an Edwards curve into a circle >> with the substitution z^2 = x^2(1 + y^2). However, this substitution does >> not cause the Edwards addition law to become the circle group addition law. >> It becomes something cool, but the equations are no more efficient than >> computing the regular Edwards addition law. >> >> Has it been proven that no birational substitution can convert the Edwards >> addition law into the circle group addition law? The circle group addition >> law is: > > See any book on algebraic geometry which proves the genus is a > birational invariant. Can you please elaborate on that a bit? How does the non-existence of a birational substitution that converts Edwards to circles follow from the fact that the genus is a birational invariant? rg From gnu at toad.com Sun May 1 03:13:20 2016 From: gnu at toad.com (John Gilmore) Date: Sun, 01 May 2016 00:13:20 -0700 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> Message-ID: <201605010713.u417DLVH007291@new.toad.com> > This may look odd. There are reasons. This spec is the first > part. It's addressing the authenticity of PD (power delivery) devices > by checking that they have been provisioned with certs under a root > controlled by the certification body.
These devices may not have USB > data capability. The PD wires carry a low speed protocol to negotiate > volts and amps. The 'problem' is counterfeit chargers and defective > cables that can and do damage expensive computers and phones. I love the concept of the new Power Delivery modes (100w of power, by sending up to 20v at 5A over suitable cables). If done right, I can see people wiring their house and business wall outlets (and cars) with much safer, more compact, and Internet-enabled USB-PD sockets, replacing 110v or 220v wiring for a lot of uses. Particularly in places where the power source is DC anyway (like solar or cars) and/or where they want data or video connectivity as well as power. But I don't see how authentication fits in technically. It looks like it's there to build monopolies. The alleged problem statement seems to be: Some expensive devices will decline to spend the money to protect themselves from overvoltage or overcurrent situations, thereby being damaged by out-of-spec power supplies. We need to authenticate chargers so this won't happen. Let's examine this from an engineering point of view, then look at the politics. The One Laptop Per Child folks built their ~$100 laptops with power inputs that accept 11 to 24 V usable, -32 to +40V tolerated without damage. This lets them be used with all kinds of janky third world power, direct plug-ins to solar panels (the laptop does MPPT to optimize charging direct from solar too), etc. So it'll charge at +12 thru +24V, will decline to accept power at +35V or -12V or -24V but be undamaged. If you exceed this, e.g. by feeding 220V AC power to that input by accident, it will blow an internal fuse that's easy for a hardware tech to repair. But that's a well designed yet cheap device. Expensive USB3-PD devices could use similar circuitry to protect themselves from overvoltage or overcurrent. Or, they could spend years in standards committees designing authentication.
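(A back-of-envelope sketch, not OLPC firmware: the measurement-based protection described above boils down to a few comparisons. The thresholds below are the illustrative values from the OLPC example, not a real spec.)

```python
# Hypothetical sketch of measurement-driven power-input protection.
# Voltage windows are the ones described for the OLPC laptop above;
# they are illustrative values, not taken from any actual firmware.

def power_input_behavior(volts: float) -> str:
    """Classify an input voltage the way the OLPC-style input is described."""
    if 11.0 <= volts <= 24.0:
        return "charge"   # usable input range: draw power normally
    if -32.0 <= volts <= 40.0:
        return "reject"   # out of range but tolerated: disconnect, no damage
    return "fuse"         # e.g. 110/220V AC by accident: sacrificial fuse blows

# No certificates involved: the decision is driven entirely by what the
# wire actually measures, which also covers the case of a "certified"
# charger whose output goes bad.
```

The same logic protects against a cloned cert, a failed genuine charger, or a short, because it never asks who the power source claims to be.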
But I don't see how the standards committee solves the problem. Let's suppose that an expensive phone does USB3 authentication of its putative power source and decides that the authentication FAILS. Oh my god, it's been attached to a "counterfeit" charger or a "defective" cable! How does it protect itself? If it doesn't have circuitry that disconnects it from the power wires, it will fry anyway. But if it does have circuitry that disconnects it from the power wires, why not trigger that disconnect based on measuring overvoltage or overcurrent, rather than triggering it on failed authentication? It seems to me that a counterfeit charger could short 110V down the USB3 cable, with or without authentication. What protects the phone from that? Similarly, what prevents a counterfeit charger from using a chip and a flash image (including a signed certificate) that's identical to the one in a certified, tested, approved, paid-up charger? The counterfeiter only has to clone that real chip one time, then they can put it in all their products. Or they could actually buy the real chips on the open market, and just clone the firmware and the cert. Yet their shoddy wiring, Grade Z external components, faulty housing, etc, around that chip could still short 110V down the cable during the wrong phase of the moon. So the authentication will pass, but the voltages and currents will at sudden times be dangerous. I guess your expensive phone will fry anyway, despite the crypto, because you didn't spend 20c on protective components in the phone. What am I missing here? It looks like the alleged solution doesn't solve the alleged problem. Perhaps there's something else going on here. > The 'problem' is counterfeit chargers and defective > cables that can and do damage expensive computers and phones. "Can and do" is an overstatement here. There are no counterfeit USB3-PD chargers, because there are essentially no USB3-PD chargers on the market yet. I've been looking.
So there isn't a problem "yet" from fake USB3-PD gear... Perhaps you are talking about counterfeit USB2 chargers, that don't even negotiate the voltage, just have resistor / capacitor networks that signal the option to draw >500ma power at 5v? Now let's look at the politics. It is well understood in the consumer electronics industry how to use authentication requirements to exert market power. To be able to build a peripheral device that plugs into an iPhone, you have to include a chip made only by Apple. The phone won't talk to you without having that chip in your thingy to answer a crypto challenge sent by the iPhone. Apple will only sell the chip to you if you give them a significant part of the purchase price of the peripheral. The chip authentication is the technical hook that drives you to sign a contract with Apple to become an "Apple Certified Peripheral". There are no "Apple uncertified peripherals" in the market, they don't sell because they don't work, because Apple forces them to not work, using Apple's control over the iPhone firmware to not let them work. Didn't you wonder why every iPhone dock and iPhone charger and iPhone cable was vastly overpriced? Even from a variety of competing third-party manufacturers? That's Apple raking in their 40% or whatever. And if your gadget competes too well against one of Apple's peripherals, maybe they won't certify you at all. Like the authentication-checking on apps in the "app store": at any time, Apple can put you right out of business, at their whim, and you have no recourse. My initial suspicion is that THIS is what the USB3 "authentication" spec is for. A very similar scheme is the technical hook that forces you to sign a contract to put DRM into your products in order to be able to make an HDMI product that will interoperate with other HDMI products. In that case it isn't even to extract money for a single vendor -- it is to exert market power on behalf of a group that doesn't even make devices -- Hollywood.
They used business pressure ("negotiation") against Intel to convince Intel to build this into the HDMI support in their motherboards, to deny every competitor the ability to build products that do things that consumers want but that Hollywood doesn't. So just like the bastards who are trying to put DRM into the HTML standards at the W3C, I suspect "someone" is also trying to put DRM into the USB standards. This "USB Authentication" is the "hook" that means you have to do whatever the "certification body" says you have to do. Hey dj, who runs the org that will keep the master keys? Or is that a political issue that's conveniently outside the scope of the technical USB Authentication specs? Or will the certification be vendor-by-vendor, e.g. Apple devices will look for a cert signed by key X, while Blu-Ray devices will look for a cert signed by key Y? How convenient -- a generic "hook" that ANY vendor can use to make their USB products deliberately incompatible, unless you enter into a one-sided coerced contract with them! What a sneaky way to undermine the intent of the "Universal" Serial Bus! But never fear, it's all to prevent fried phones from those dastardly "counterfeiters". The leaders of our tech industry would NEVER use this power for evil, only for good. John Gilmore From ikizir at gmail.com Sun May 1 03:58:59 2016 From: ikizir at gmail.com (Ismail Kizir) Date: Sun, 1 May 2016 10:58:59 +0300 Subject: [Cryptography] WhatsApp, Curve25519 workspace etc. Message-ID: Hello, I want to state my thought more clearly. Curve25519 has 2^128 workspace for brute force attacks. Correct me if I am wrong please. Also, as far as I remember, -I don't remember where I read-, a supercomputer today is able to break 56 bit DES encryption in ~400 seconds. This is still brute-force. I am certain there may be room for eliminating some possibilities, or even breaking it completely, as Bill Cox has pointed out.
Moreover, the symmetric keys used are derived from this asymmetric key, which may be another source of vulnerability and another way to eliminate possibilities. Moreover, and more important: WhatsApp uses AES 256 in CBC mode, which is excluded from the TLS 1.3 draft. And there are some articles about it: http://link.springer.com/chapter/10.1007%2F3-540-45708-9_2 I want to repeat my question again: Isn't it highly suspicious to take so many risks, instead of simply using a larger key space? Curve25519, especially when used in combination with AES CBC, looks highly suspicious to me. Regards Ismail Kizir From hanno at hboeck.de Sun May 1 07:16:03 2016 From: hanno at hboeck.de (Hanno =?UTF-8?B?QsO2Y2s=?=) Date: Sun, 1 May 2016 13:16:03 +0200 Subject: [Cryptography] WhatsApp, Curve25519 workspace etc. In-Reply-To: References: Message-ID: <20160501131603.7b8594d5@pc1> Hi, On Sun, 1 May 2016 10:58:59 +0300 Ismail Kizir wrote: > I want to state my thought more clearly. > Curve25519 has 2^128 workspace for brute force attacks. Correct me if > I am wrong please. > > Also, as far as I remember, -I don't remember where I read-, a > supercomputer today is able to break 56 bit DES encryption in ~400 > seconds. Not sure where you're going with this. 56 bit security is broken, 128 is not (and most likely never will be). Maybe your line of thinking is that 128 is "only" a bit more than twice the size of 56. But that's not the case: each added bit doubles the work, so the complexity grows exponentially. 128 bit is not (a bit more than) twice the security of 56, it's another universe of security. > Moreover, and more important: WhatsApp uses AES 256 in CBC mode, which is > excluded from the TLS 1.3 draft. And there are some articles about it: > http://link.springer.com/chapter/10.1007%2F3-540-45708-9_2 Ok, I must say I was surprised that Whatsapp uses CBC (I had expected either gcm or chacha20-poly1305), but there is no risk here either.
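(To put rough numbers on the 56-vs-128-bit point: take the thread's ~400-second DES figure at face value and scale it naively, with no shortcuts assumed.)

```python
# Naive back-of-envelope only: assumes the "~400 seconds for 2^56 DES"
# figure from the thread, and a pure brute-force attack with no shortcuts.
des_seconds = 400                     # claimed time to sweep a 2^56 keyspace
ratio = 2 ** (128 - 56)               # a 2^128 keyspace is 2^72 times larger
years = des_seconds * ratio / (365.25 * 24 * 3600)
print(f"scaled brute-force time: about {years:.1e} years")
```

That comes out on the order of 10^16 years, which is the "another universe of security" point in numbers.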
The known weaknesses of CBC don't affect the mode itself; they come from bad combinations of CBC+HMAC. Quickly skimming the WhatsApp whitepaper, they use CBC+HMAC with encrypt-then-MAC. That's safe. What's unsafe is doing it the other way round, or some wacky encrypt-and-mac construction. > I want to repeat my question again: Isn't it highly suspicious to take > so many risks, instead of simply using a larger key space? It seems to me that what you classify as "so many risks" are just two misunderstandings. Neither the 128 bit security of curve25519 nor cbc in encrypt-then-mac mode is a risk. -- Hanno Böck https://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From hbaker1 at pipeline.com Sun May 1 11:00:01 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Sun, 01 May 2016 08:00:01 -0700 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: At 02:54 PM 4/30/2016, Mark Steward wrote: >Tell me you cached the files first. > >Mark > >On Sat, Apr 30, 2016 at 5:00 PM, Henry Baker wrote: >I just ran Linux's 'sha1sum' on a number of very large files, and the calculation took significantly longer than I expected. > >'sha1sum' is only modestly faster on a very large file than copying the file. > >I noticed that >1) the cpu meter wasn't pinned at 100%; and >2) multiple cores weren't being fully utilized. > >BTW, I don't care about "SHA1", per se; I could just as easily have used "SHA256" or some other hash function. After caching this large file (i.e., on the 2nd & subsequent timings), sha1sum took 24 seconds. sha3sum (default algorithm) took 54 seconds. sha256sum took 54 seconds. b2sum-i686-linux took 35.7 seconds. b2sum-amd64-linux took 27.3 seconds. cksum took 22.8 seconds.
cfv -C -tsha1 -f- took 19.5 seconds cfv -C -tcrc -f- took 8.8 seconds cfv -C -tmd5 -f- took 15.6 seconds ALL of these timings are single-threaded on the same Ubuntu box. Soooo, there are substantial differences in calculation times -- even among SHA1 implementations. Off-hand, I'd say that 'cksum' and 'sha1sum' could use a little TLC to improve their performance. From ron at flownet.com Sun May 1 11:12:53 2016 From: ron at flownet.com (Ron Garret) Date: Sun, 1 May 2016 08:12:53 -0700 Subject: [Cryptography] Mathematics of variable substitutions? In-Reply-To: References: Message-ID: <8B75DEB9-88C5-4A83-BBA6-087729FFD3DF@flownet.com> On Apr 30, 2016, at 11:23 PM, Ron Garret wrote: > > On Apr 30, 2016, at 3:57 PM, Watson Ladd wrote: > >> On Sat, Apr 30, 2016 at 4:17 AM, Bill Cox wrote: >>> I was hoping someone could point me in the direction of relevant mathematics >>> where we examine what equations can be converted to other equations using >>> variable substitutions, in ways that are efficiently computable modulo a >>> prime. For example, we can easily convert an Edwards curve into a circle >>> with the substitution z^2 = x^2(1 + y^2). However, this substitution does >>> not cause the Edwards addition law to become the circle group addition law. >>> It becomes something cool, but the equations are no more efficient than >>> computing the regular Edwards addition law. >>> >>> Has it been proven that no birational substitution can convert the Edwards >>> addition law into the circle group addition law? The circle group addition >>> law is: >> >> See any book on algebraic geometry which proves the genus is a >> birational invariant. > > Can you please elaborate on that a bit? How does the non-existence of a birational substitution that converts Edwards to circles follow from the fact that the genus is a birational invariant? Oh, never mind. The genus of the two curves must be different. Duh. 
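(Spelling out the genus argument, since it settles the original question: the circle is a rational curve, the Edwards curve is not. A sketch in the usual notation:)

```latex
% The unit circle is rational: the substitution
\[
  x = \frac{1 - t^2}{1 + t^2}, \qquad y = \frac{2t}{1 + t^2}
\]
% parametrizes x^2 + y^2 = 1 by the projective line, so the circle has
% genus g = 0.
%
% An Edwards curve x^2 + y^2 = 1 + d\,x^2 y^2 (d \neq 0, 1) is birationally
% equivalent to an elliptic curve, so it has genus g = 1.
%
% Since genus is a birational invariant, no birational substitution can
% carry one curve to the other -- in particular, none can turn the Edwards
% addition law into the circle-group addition law.
```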
rg From kevin.w.wall at gmail.com Sun May 1 11:41:13 2016 From: kevin.w.wall at gmail.com (Kevin W. Wall) Date: Sun, 1 May 2016 11:41:13 -0400 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: <201605010713.u417DLVH007291@new.toad.com> References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> Message-ID: On Sun, May 1, 2016 at 3:13 AM, John Gilmore wrote: [snip] > > But I don't see how authentication fits in technically. It looks like > it's there to build monopolies. > > The alleged problem statement seems to be: Some expensive devices will > decline to spend the money to protect themselves from overvoltage or > overcurrent situations, thereby being damaged by out-of-spec power > supplies. We need to authenticate chargers so this won't happen. > Let's examine this from an engineering point of view, then look at > the politics. If that is the concern, then certainly there are cheaper ways to achieve that. [snip] > But if it does have circuitry that disconnects it from the power > wires, why not trigger that disconnect based on measuring overvoltage or > overcurrent, rather than triggering it on failed authentication? > > It seems to me that a counterfeit charger could short 110V down > the USB3 cable, with or without authentication. What protects > the phone from that? > > Similarly, what prevents a counterfeit charger from using a chip and a > flash image (including a signed certificate) that's identical to the > one in a certified, tested, approved, paid-up charger. The > counterfeiter only has to clone that real chip one time, then they can > put it in all their products. Or they could actually buy the real > chips on the open market, and just clone the firmware and the cert. > Yet their shoddy wiring, Grade Z external components, faulty housing, > etc, around that chip could still short 110V down the cable during the > wrong phase of the moon. 
So the authentication will pass, but the > voltages and currents will at sudden times be dangerous. I guess your > expensive phone will fry anyway, despite the crypto, because you > didn't spend 20c on protective components in the phone. > > What am I missing here? It looks like the alleged solution doesn't > solve the alleged problem. Perhaps there's something else going on here. Is perhaps the (alleged) reason for the authentication to prevent altered chargers from delivering malware, as was described at Blackhat USA 2013? E.g., see . Just a thought. If nothing else, this might be the pretense of requiring authentication even though it indeed might not be the true motive. -kevin -- Blog: http://off-the-wall-security.blogspot.com/ | Twitter: @KevinWWall NSA: All your crypto bit are belong to us. From dj at deadhat.com Sun May 1 13:56:47 2016 From: dj at deadhat.com (David Johnston) Date: Sun, 1 May 2016 10:56:47 -0700 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> Message-ID: <5726435F.60105@deadhat.com> On 5/1/16 8:41 AM, Kevin W. Wall wrote: > Is perhaps the (alleged) reason for the authentication to prevent > altered chargers > from delivering malware, as was described at Blackhat USA 2013? E.g., > see . > > Just a thought. If nothing else, this might be the pretense of requiring > authentication even though it indeed might not be the true motive. > > -kevin The basic mechanisms are already deployed in proprietary ways. The USB PD authentication spec is just a standardization of existing practice - which I'm told works just fine at limiting counterfeit chargers. The spec is not a copy and paste of any existing protocol though. It's a clean sheet design by members of the USB-IF.
The PD auth spec is not fit for purpose for preventing the delivery of malware, except in specific cases that an enterprising malware distributor would just work around by using the USB data wires instead of the PD wires. The malware threat is principally on the USB data wires, both by exploiting vulnerabilities in known drivers ("Hi I'm an xyz-corp mouse, load my Swiss cheese driver") and exploiting overly trusting operating systems. That is for the other, as yet unwritten, spec which would do the auth before a driver is loaded and would enable different certification models (think corporate CA provisioning devices received through a secure supply chain). There are plenty of motives for a USB security spec without inventing hypothetical ones. Car park flash attacks, BadUSB, MITM loggers and other USB vectors all provide the motivation for a security spec on the data wires, but that simply isn't done yet. On PD it is entirely possible to make a device that lies and causes more volts or amps to be presented or pulled respectively than is compatible with the continued functioning of the device. This happens today with resistors on Type-C connectors, but with PD that negotiation is done via a protocol. The other thing the PD auth spec does is provide a means to see that specific electrical certifications (UL, EC, etc.) have been attested to and who is doing the attesting. Also to see that specific USB certifications have been granted. So the 'hidden' motive you suggest is not a motive for this spec, but it is a motive for the second part. As with any standards development, this can change until the final draft is approved. From gschultz at kc.rr.com Sun May 1 14:12:59 2016 From: gschultz at kc.rr.com (Grant Schultz) Date: Sun, 1 May 2016 13:12:59 -0500 Subject: [Cryptography] cryptography Digest, Vol 36, Issue 29 In-Reply-To: References: Message-ID: <5726472B.1010701@kc.rr.com> Subject: [Cryptography] More speculation on cryptographic breakthroughs.
On 4/30, Ray Dillinger wrote: >The "major crypto breakthrough" that we keep hearing about, may >be just a giant database of audio recordings of people typing >passwords. A few questions for thought: - With most cell phones in pockets, is the audio quality good enough for this kind of attack? - Would an effective countermeasure be to enter passwords via less audible means, such as touchscreens or mice, picking out buttons on the screen? - What if we simply press the keys more slowly, so that they don't reach their maximum stroke and don't cause as much audible output? - I've wondered about the following: If the NSA/FBI/etc. capabilities are as far-reaching as we fear, wouldn't they (FBI in particular) have the ability to conquer organized crime by now? Wouldn't they pwn any computer being used inside organized crime, in other words? (Of course if we're really paranoid, we could speculate that they are not putting organized crime out of business because it would tip their hand as to their capabilities...) Grant Schultz From alexander.kjeldaas at gmail.com Sun May 1 14:40:51 2016 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Sun, 1 May 2016 20:40:51 +0200 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: On Sat, Apr 30, 2016 at 6:00 PM, Henry Baker wrote: > I just ran Linux's 'sha1sum' on a number of very large files, and the > calculation took significantly longer than I expected. > > 'sha1sum' is only modestly faster on a very large file than copying the > file. > Yes, it's actually significantly slower than BLAKE2b, in Javascript, in Firefox. Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Sun May 1 16:40:58 2016 From: bear at sonic.net (Ray Dillinger) Date: Sun, 1 May 2016 13:40:58 -0700 Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <201605010713.u417DLVH007291@new.toad.com> References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> Message-ID: <572669DA.5060207@sonic.net> On 05/01/2016 12:13 AM, John Gilmore wrote: > I love the concept of the new Power Delivery modes (100w of power, by > sending up to 20v at 5A over suitable cables). If done right, I can > see people wiring their house and business wall outlets (and cars) > with much safer, more compact, and Internet-enabled USB-PD sockets, > replacing 110v or 220v wiring for a lot of uses. That would be deeply impractical in terms of materials costs. Increasing amperage means you have to use heavier cables. To the extent that copper ain't cheap and space inside walls for wiring is often limited, a higher voltage/lower amperage is always a more effective use of materials. What you're talking about may be put off for USB 4.0 or something when negotiating services includes negotiating which of a dozen or so standard voltages is desired. I'm not sure I like that idea though, because now it means every last outlet in your home has electronic devices listening to it and people have to trust that that's all they're doing. Of course this whole discussion is about establishing that trust, but it's still true that it's easier to trust a copper wire screwed into a terminal socket with a hardware-store screw than it is to trust a chip with logic you can't see, made by somebody you don't know. Bear -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From hbaker1 at pipeline.com Sun May 1 20:33:02 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Sun, 01 May 2016 17:33:02 -0700 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: At 01:01 PM 5/1/2016, Bill Cox wrote: >On Sun, May 1, 2016 at 8:00 AM, Henry Baker wrote: >sha1sum took 24 seconds. >sha3sum (default algorithm) took 54 seconds. >sha256sum took 54 seconds. >b2sum-i686-linux took 35.7 seconds. >b2sum-amd64-linux took 27.3 seconds. > >This shows a major problem we face in Linux distros: we like everyone to run the same binary, so everyone is forced to use the oldest supported CPU instruction set. > >The program sha1sum is from the coreutils package, which AFAICT contains zero vector-optimization of any kind. Here's what I get with my version of b2sum, compiled with AVX2 support, vs sha1sum shipping with Ubuntu. randfile is a 300MiB file, already cached: > >$ time sha1sum randfile >30b42c8894b108d65db90090c98c0a9c8cd63cb9 randfile > >real 0m0.845s >user 0m0.784s >sys 0m0.056s > >$ time b2sum randfile >e2cb7410dcbe11930909f144da7c2121f22100d7825614d640fa63e14a2da01265da779030a250e718ed30250221157992567d7cee4c4b4a28f77bcbbe4df514 randfile > >real 0m0.432s >user 0m0.396s >sys 0m0.036s > >BLAKE2 is almost twice as fast, and the parallel version is faster (for large hashing, not < 1KiB): > >$ time b2sum -a blake2bp randfile >4d33a9488a3a197a7179350b7c000296c231129679bc11ab024b11fda1f583cb957980e4c8e8cd6fb751ad3406842e54e7246675118d857342dbc8a60e4a84f2 randfile > >real 0m0.324s >user 0m0.996s >sys 0m0.068s > >Not only that, but Samuel Neves (who wrote the optimized BLAKE2 code) has an optimized version of BLAKE2bp using more of the available parallelism per core to get around 1 byte/cycle throughput. Thanks, Bill.
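(For anyone who wants to reproduce this kind of comparison without hunting down binaries, a rough single-threaded sketch using only Python's stdlib hashlib. Absolute numbers will of course differ from the coreutils sha1sum and AVX2 b2sum binaries discussed in this thread.)

```python
# Rough single-threaded throughput comparison of stdlib hash algorithms
# over one in-memory buffer. Illustrative only: results depend on CPU,
# Python build (OpenSSL backend), and buffer size.
import hashlib
import time

def throughput(name: str, data: bytes) -> float:
    """Return hashing speed in MB/s for hashlib algorithm `name` over `data`."""
    h = hashlib.new(name)
    start = time.perf_counter()
    h.update(data)
    h.digest()
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

data = b"\x00" * (64 * 1024 * 1024)   # 64 MiB stand-in for a large file
for algo in ("sha1", "sha256", "blake2b"):
    print(f"{algo:8s} {throughput(algo, data):8.1f} MB/s")
```

Hashing a cached buffer rather than reading from disk keeps I/O out of the measurement, which is the same caching caveat Mark raised earlier in the thread.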
I compiled b2sum on my (old) Ubuntu system and am now getting 15 secs (compare with #'s above), which is almost 2X faster than the 27.3 seconds that I got with the single-threaded x86-64 code. Note that '-a blake2bp' uses *all 3 cores.* From pgut001 at cs.auckland.ac.nz Sun May 1 16:07:58 2016 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Sun, 1 May 2016 20:07:58 +0000 Subject: [Cryptography] [FORGED] Re: USB 3.0 authentication: market power and DRM? In-Reply-To: References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com>, Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4C77D42@uxcn10-5.UoA.auckland.ac.nz> Kevin W. Wall writes: >Is perhaps the (alleged) reason for the authentication to prevent altered >chargers from delivering malware, One genuine reason, although it's not clear that the auth achieves it, is to prevent problems due to cables that lie about their capabilities. The typical USB cable is 28 AWG, which can't carry anywhere near the power that USB 3 power delivery is rated for. So you get a cheap Chinese cable that lies about its capabilities, which then melts or catches fire when the device attached to it tries to draw the advertised amount of power. Or shorts out at full power, or fries the power source when the device attached to it tries to draw 5x what it's rated for based on what the cable told it. Or a zillion other failure modes induced by something lying about its capabilities. Peter. From waywardgeek at gmail.com Sun May 1 16:01:40 2016 From: waywardgeek at gmail.com (Bill Cox) Date: Sun, 1 May 2016 13:01:40 -0700 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: On Sun, May 1, 2016 at 8:00 AM, Henry Baker wrote: > sha1sum took 24 seconds. > sha3sum (default algorithm) took 54 seconds. > sha256sum took 54 seconds. > b2sum-i686-linux took 35.7 seconds. > b2sum-amd64-linux took 27.3 seconds.
This shows a major problem we face in Linux distros: we like everyone to run the same binary, so everyone is forced to use the oldest supported CPU instruction set. The program sha1sum is from the coreutils package, which AFAICT contains zero vector-optimization of any kind. Here's what I get with my version of b2sum, compiled with AVX2 support, vs sha1sum shipping with Ubuntu. randfile is a 300MiB file, already cached: $ time sha1sum randfile 30b42c8894b108d65db90090c98c0a9c8cd63cb9 randfile real 0m0.845s user 0m0.784s sys 0m0.056s $ time b2sum randfile e2cb7410dcbe11930909f144da7c2121f22100d7825614d640fa63e14a2da01265da779030a250e718ed30250221157992567d7cee4c4b4a28f77bcbbe4df514 randfile real 0m0.432s user 0m0.396s sys 0m0.036s BLAKE2 is almost twice as fast, and the parallel version is faster (for large hashing, not < 1KiB): $ time b2sum -a blake2bp randfile 4d33a9488a3a197a7179350b7c000296c231129679bc11ab024b11fda1f583cb957980e4c8e8cd6fb751ad3406842e54e7246675118d857342dbc8a60e4a84f2 randfile real 0m0.324s user 0m0.996s sys 0m0.068s Not only that, but Samuel Neves (who wrote the optimized BLAKE2 code) has an optimized version of BLAKE2bp using more of the available parallelism per core to get around 1 byte/cycle throughput. Now, any sane person who needs a whole lot of speed and has a processor supporting SSE4 or newer instruction set should just use BLAKE2. That said, the future looks bright for even more speed using HighwayHash-like parallel multiplications and byte shuffles. We're seeing some pretty sick speed in prototype code. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at flownet.com Mon May 2 01:43:35 2016 From: ron at flownet.com (Ron Garret) Date: Sun, 1 May 2016 22:43:35 -0700 Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <572669DA.5060207@sonic.net> References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> Message-ID: On May 1, 2016, at 1:40 PM, Ray Dillinger wrote: > I'm not sure I like that idea though, because now it means every > last outlet in your home has electronic devices listening to it > and people have to trust that that's all they're doing. Of course > this whole discussion is about establishing that trust, but it's > still true that it's easier to trust a copper wire screwed into > a terminal socket with a hardware-store screw than it is to trust > a chip with logic you can't see, made by somebody you don't know. The day is not far off when every hardware-store screw will have an NFC chip in it. rg From grarpamp at gmail.com Mon May 2 04:10:08 2016 From: grarpamp at gmail.com (grarpamp) Date: Mon, 2 May 2016 04:10:08 -0400 Subject: [Cryptography] Craig Wright is Satoshi Nakamoto Message-ID: http://www.bbc.co.uk/news/technology-36168863 Australian entrepreneur Craig Wright has publicly identified himself as Bitcoin creator Satoshi Nakamoto. From leichter at lrw.com Mon May 2 07:27:21 2016 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 2 May 2016 07:27:21 -0400 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: <572669DA.5060207@sonic.net> References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> Message-ID: <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com> > >> I love the concept of the new Power Delivery modes (100w of power, by >> sending up to 20v at 5A over suitable cables). If done right, I can >> see people wiring their house and business wall outlets (and cars) >> with much safer, more compact, and Internet-enabled USB-PD sockets, >> replacing 110v or 220v wiring for a lot of uses. 
> That would be deeply impractical in terms of materials costs.
> Increasing amperage means you have to use heavier cables. To
> the extent that copper ain't cheap and space inside walls for
> wiring is often limited, a higher voltage/lower amperage is
> always a more effective use of materials....

It would make no sense to *distribute* 20v from a central point. But putting a converter into the wall socket ... that makes more sense. In fact, we don't have to speculate much: USB 2 versions of such things are already readily available.

Stepping back, there's an interesting bit of evolution going on here. The proliferation of power distribution standards - different voltages, different frequencies - led to the development of different, incompatible plugs and sockets, which protected you against accidentally plugging a 110V device into a 220V socket (but also sometimes prevented you from making a connection that *did* make sense - but that's another story). As the international market in more sophisticated devices developed in the 1970's, it became expensive to produce different devices for different markets. So the IEC 60320 standards - for all those power cords you've seen on computers that connect a local plug to a standard 3-wire jack, not to mention smaller versions for things like electric shavers - emerged.

I used to joke that these were devices to help you destroy your equipment: typically, on earlier devices, you had to throw a switch on the device to configure the correct voltage. Forgetting to do that could be a very expensive mistake. Eventually - partly as a result of pressure from Europe (Germany, in particular), which required that the device come with the switch *already set in the appropriate position for the country of delivery* - auto-ranging power supplies were developed and became pretty much universal.
Wall jacks continue to come in a large variety of configurations around the world, but pretty much all electronics "just works" if you have a simple pass-through adapter.

The USB standard demonstrated the need for a lighter, lower-power standard. Portable devices don't want to pay the size and weight cost of a power-line-to-low-voltage-DC convertor, so they rely on an external "bug", and those "bugs" have over time pretty much settled on the USB voltages/amperages (though less firmly on those) and physical configurations, just for power.

USB 3 may be killing the goose here. The great thing about USB is that first letter: Universal. We completed the effective evolution from USB 1.0 to 1.1 to 2.0 quite some time ago. There was a period of disagreement about how phones should tell chargers they could supply "more than USB 2.0" power for a couple of years, but that's hidden by smarter silicon these days. If you see a USB socket, you know what you're getting (modulo malware, of course), both on the power side and on the data side.

USB 3, unfortunately, has introduced variety. The initial nonsense of 3.0 vs. 3.1 and just what speed you can get (5Gb/s or 10Gb/s) probably didn't poison the market because the market so far is fairly small, but it was a bad sign. The whole notion of extensible uses for the USB wires is great technically, but it's incompatible with the notion that "I just connect it and it works".

My concern with the whole USB authentication process is that it will fragment the market even more. You'll have situations where device A can connect to B using cable C, A and D can also talk over C - but B won't talk to D over C. Users will have no way to figure out why: Everything will, by design, be labeled the same to maintain the illusion that it *is* the same. But under the covers, it won't be.
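That A/B/C/D scenario is easy to model. Here is a toy sketch (the device names and the "each endpoint accepts only cert roots on its own whitelist" policy are invented for illustration; this is not the actual USB-IF scheme). Compatibility becomes a property of the *pair* of endpoints, so no label on any single device or cable can predict it:

```python
# Toy model of per-device certificate policy (hypothetical; NOT the USB-IF scheme).
# Each party presents a cert chained to some root; an endpoint only talks to
# peers and cables whose roots appear on its own trust whitelist.

def link_works(a, b, cable):
    """A link works iff both endpoints trust the cable's root and each other's."""
    return (cable["root"] in a["trusts"] and cable["root"] in b["trusts"]
            and b["root"] in a["trusts"] and a["root"] in b["trusts"])

A = {"root": "usb-if",  "trusts": {"usb-if", "vendorX"}}  # laptop
B = {"root": "vendorX", "trusts": {"usb-if", "vendorX"}}  # vendor-locked phone
D = {"root": "usb-if",  "trusts": {"usb-if"}}             # dock
C = {"root": "usb-if"}                                    # certified cable

print(link_works(A, B, C))  # True:  A talks to B over cable C
print(link_works(A, D, C))  # True:  A talks to D over cable C
print(link_works(B, D, C))  # False: B won't talk to D over the very same cable
```

All three devices and the cable can carry the same logo; the failure only shows up when two particular endpoints meet.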
I'm afraid engineers' love for "generality" - combined with varying business drivers, from IP protection to the ability to avoid the current commodity market in USB parts and carve out protected areas for rent-seekers - will lead us back to an era of confusion - and various increased costs.

The easiest and best way to block dangerous USB cables that fry equipment is through reputation - reputation maintained through enforceable trademarks. And, of course, more robust equipment that blocks voltages and currents going to "the wrong place" to begin with, protecting itself from a much wider array of faults than any certification process possibly could.

This whole effort strikes me as wrong-headed.

-- Jerry

From johnl at iecc.com Mon May 2 09:52:42 2016
From: johnl at iecc.com (John Levine)
Date: 2 May 2016 13:52:42 -0000
Subject: [Cryptography] Craig Wright is Satoshi
Message-ID: <20160502135242.13675.qmail@ary.lan>

Multiple press reports say Australian cryptographer Craig Wright created Bitcoin. This time it seems to be for real: he showed he has Satoshi's keys.

http://www.bbc.com/news/technology-36168863

R's,
John

From hbaker1 at pipeline.com Mon May 2 10:31:11 2016
From: hbaker1 at pipeline.com (Henry Baker)
Date: Mon, 02 May 2016 07:31:11 -0700
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To:
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net>
Message-ID:

At 10:43 PM 5/1/2016, Ron Garret wrote:
>On May 1, 2016, at 1:40 PM, Ray Dillinger wrote:
>> I'm not sure I like that idea though, because now it means every
>> last outlet in your home has electronic devices listening to it
>> and people have to trust that that's all they're doing.
>> Of course this whole discussion is about establishing that trust, but it's
>> still true that it's easier to trust a copper wire screwed into
>> a terminal socket with a hardware-store screw than it is to trust
>> a chip with logic you can't see, made by somebody you don't know.
>
>The day is not far off when every hardware-store screw will have an NFC chip in it.

The CIA is already dusting people & things with synthetic DNA to track them. (No sh*t; check DARPA research from a few years ago.) Synthetic DNA is a heck of a lot cheaper.

BTW, has anyone followed the installation of car chargers? I'd be willing to bet that these public car chargers log every charge by some unique identifier from the car itself. Heck, they don't even have to try very hard: simply log the Bluetooth & WiFi IDs. Yes, a lot of people pay for gas by credit card (which is obviously tracked), but most public car charging stations are free (for now). Of course, they have to get in line. Tesla already logs just about everything about your car & uploads the data to Tesla.

From pzbowen at gmail.com Mon May 2 11:31:25 2016
From: pzbowen at gmail.com (Peter Bowen)
Date: Mon, 2 May 2016 08:31:25 -0700
Subject: [Cryptography] sha1sum speed
In-Reply-To:
References:
Message-ID:

On Sun, May 1, 2016 at 5:33 PM, Henry Baker wrote:
> At 01:01 PM 5/1/2016, Bill Cox wrote:
>>On Sun, May 1, 2016 at 8:00 AM, Henry Baker wrote:
>>sha1sum took 24 seconds.
>>sha3sum (default algorithm) took 54 seconds.
>>sha256sum took 54 seconds.
>>b2sum-i686-linux took 35.7 seconds.
>>b2sum-amd64-linux took 27.3 seconds.
>>
>>This shows a major problem we face in Linux distros: we like everyone to run the same binary, so everyone is forced to use the oldest supported CPU instruction set.
>>
>>The program sha1sum is from the coreutils package, which AFAICT contains zero vector-optimization of any kind. Here's what I get with my version of b2sum, compiled with AVX2 support, vs sha1sum shipping with Ubuntu.
>>randfile is a 300MiB file, already cached:

How do the times compare to using 'openssl dgst -sha1'? I *think* that will use the optimized version.

Thanks,
Peter

From bear at sonic.net Mon May 2 13:08:21 2016
From: bear at sonic.net (Ray Dillinger)
Date: Mon, 2 May 2016 10:08:21 -0700
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com>
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com>
Message-ID: <57278985.1090000@sonic.net>

I'm seeing this whole thing as an attempt to prop up CA's which are otherwise essentially looking at a failed business model. Even if CA's did what they're supposed to do there would be no way for that business to function in the market of USB equipment.

CA's were supposed to verify identities, respond to authentication attacks, handle revocations, etc. The race to the bottom and their business "need" to support stupid security decisions ("compatibility" means, if someone is stupid once, therefore everybody must be stupid forever!) meant, inevitably, that they only verify that their payments clear.

Certification of USB equipment doesn't even pretend to have key revocation capabilities or any way of responding to authorization attacks. By design it pretty much can't. Which means that there is literally nothing CA's can contribute to it. You can tell some piece of kit presents a key which was valid, for somebody, once. Woo. Does that, in some way, help?

Bear

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL:

From pg at futureware.at Mon May 2 13:25:34 2016
From: pg at futureware.at (Philipp Gühring)
Date: Mon, 02 May 2016 19:25:34 +0200
Subject: [Cryptography] [FORGED] Re: USB 3.0 authentication: market power and DRM?
In-Reply-To: <9A043F3CF02CD34C8E74AC1594475C73F4C77D42@uxcn10-5.UoA.auckland.ac.nz>
References: <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com>, <9A043F3CF02CD34C8E74AC1594475C73F4C77D42@uxcn10-5.UoA.auckland.ac.nz>
Message-ID:

Hi,

> One genuine reason, although it's not clear that the auth achieves it,
> is to prevent problems due to cables that lie about their capabilities.
> The typical USB cable is 28 AWG, which can't carry anywhere near the
> power that USB 3 power delivery is rated for [...]

I agree with that problem, but it seems to me that this spec will not help, since it does end-to-end authentication, but you have a cable-in-the-middle problem. So you have to authenticate the whole chain, and I currently do not see any technology that could do that.

Perhaps a slow-start mechanism like TCP's would be a better idea (ramp up the power slowly), and measure the temperature of the cable all the time.

Best regards,
Philipp

From dj at deadhat.com Mon May 2 13:34:01 2016
From: dj at deadhat.com (dj at deadhat.com)
Date: Mon, 2 May 2016 17:34:01 -0000
Subject: [Cryptography] [FORGED] Re: USB 3.0 authentication: market power and DRM?
In-Reply-To:
References: <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com>, <9A043F3CF02CD34C8E74AC1594475C73F4C77D42@uxcn10-5.UoA.auckland.ac.nz>
Message-ID:

> Hi,
>
>> One genuine reason, although it's not clear that the auth achieves it,
>> is to prevent problems due to cables that lie about their capabilities.
>> The typical USB cable is 28 AWG, which can't carry anywhere near the
>> power that USB 3 power delivery is rated for [...]
>
> I agree with that problem, but it seems to me that this spec will not
> help, since it does end-to-end authentication, but you have a
> cable-in-the-middle problem. So you have to authenticate the whole chain,
> and I currently do not see any technology that could do that.

No. This is wrong. The PD spec is adjacent point to adjacent point. The USB data spec is end-to-end through hubs. This is one of the many reasons for treating the PD part separately from the USB data part.

> Perhaps a slow start mechanism like TCP would be a better idea (to ramp up
> the power slowly), and to measure the temperature of the cable all the
> time.

Measurement and electrical defense are orthogonal to establishing whether quality and compliance have been attested to cryptographically. You can still do measurement if you want to.

DJ

From dj at deadhat.com Mon May 2 13:35:06 2016
From: dj at deadhat.com (dj at deadhat.com)
Date: Mon, 2 May 2016 17:35:06 -0000
Subject: [Cryptography] Craig Wright is Satoshi
In-Reply-To: <20160502135242.13675.qmail@ary.lan>
References: <20160502135242.13675.qmail@ary.lan>
Message-ID: <13ef806a6bb1a2f5886b2af9c7e3dce3.squirrel@deadhat.com>

> Multiple press reports say Australian cryptographer Craig Wright
> created Bitcoin. This time it seems to be for real, he showed
> he has Satoshi's keys.
>
> http://www.bbc.com/news/technology-36168863

Or did he? The skepticism seems well founded for now..
https://news.ycombinator.com/item?id=11609707

DJ

From ron at flownet.com Mon May 2 13:56:37 2016
From: ron at flownet.com (Ron Garret)
Date: Mon, 2 May 2016 10:56:37 -0700
Subject: [Cryptography] Craig Wright is Satoshi
In-Reply-To: <20160502135242.13675.qmail@ary.lan>
References: <20160502135242.13675.qmail@ary.lan>
Message-ID:

On May 2, 2016, at 6:52 AM, John Levine wrote:

> Multiple press reports say Australian cryptographer Craig Wright
> created Bitcoin. This time it seems to be for real, he showed
> he has Satoshi's keys.
>
> http://www.bbc.com/news/technology-36168863

Lots of evidence that the press has been snookered:

https://news.ycombinator.com/item?id=11609611
https://np.reddit.com/r/btc/comments/4hfyyo/gavin_can_you_please_detail_all_parts_of_the/d2plygg

TL;DR: the “proof” Craig provided is not publicly verifiable. It was given credence only because Gavin Andresen vouched for Craig’s “proof” and Gavin has to date been considered trustworthy in the bitcoin community. My guess is that he just squandered that trust.

rg

From bascule at gmail.com Mon May 2 14:02:24 2016
From: bascule at gmail.com (Tony Arcieri)
Date: Mon, 2 May 2016 11:02:24 -0700
Subject: [Cryptography] Craig Wright is Satoshi
In-Reply-To: <20160502135242.13675.qmail@ary.lan>
References: <20160502135242.13675.qmail@ary.lan>
Message-ID:

On Mon, May 2, 2016 at 6:52 AM, John Levine wrote:

> This time it seems to be for real, he showed he has Satoshi's keys.
>
> http://www.bbc.com/news/technology-36168863

Or did he?

https://dankaminsky.com/2016/05/02/validating-satoshi-or-not/

--
Tony Arcieri
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From erikgranger at gmail.com Mon May 2 14:17:45 2016
From: erikgranger at gmail.com (Erik Granger)
Date: Mon, 2 May 2016 14:17:45 -0400
Subject: [Cryptography] Craig Wright is Satoshi Nakamoto
In-Reply-To:
References:
Message-ID:

I'll believe it when he signs arbitrary messages with Satoshi's key. No signature, no story.
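The test Erik asks for is cheap and publicly checkable. A self-contained, textbook sketch of ECDSA over secp256k1 (the curve Bitcoin uses) shows what "sign an arbitrary message" means mechanically. This is illustration-grade code only: not constant-time, random rather than RFC 6979 nonces, Python 3.8+ for `pow(x, -1, m)`, and the key below is freshly generated, not Satoshi's:

```python
import hashlib, secrets

# secp256k1 domain parameters (the curve Bitcoin uses)
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                   # P + (-P)
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def msg_hash(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(d, msg):
    z = msg_hash(msg)
    while True:
        k = secrets.randbelow(n - 1) + 1              # fresh random nonce
        r = scalar_mult(k, G)[0] % n
        s = pow(k, -1, n) * (z + r * d) % n
        if r and s:
            return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n):
        return False
    z, w = msg_hash(msg), pow(s, -1, n)
    R = point_add(scalar_mult(z * w % n, G), scalar_mult(r * w % n, Q))
    return R is not None and R[0] % n == r

priv = secrets.randbelow(n - 1) + 1                   # throwaway key, NOT Satoshi's
pub = scalar_mult(priv, G)
sig = sign(priv, b"I am Satoshi. (No, I'm not.)")
print(verify(pub, b"I am Satoshi. (No, I'm not.)", sig))   # True
print(verify(pub, b"a different message", sig))            # False
```

Anyone holding the known public key can run the verify step themselves; that is the whole point of the demand, and why a non-public "proof" is not one.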
On May 2, 2016 1:29 PM, "grarpamp" wrote:

> http://www.bbc.co.uk/news/technology-36168863
> Australian entrepreneur Craig Wright has publicly identified
> himself as Bitcoin creator Satoshi Nakamoto.
> _______________________________________________
> The cryptography mailing list
> cryptography at metzdowd.com
> http://www.metzdowd.com/mailman/listinfo/cryptography
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dj at deadhat.com Mon May 2 14:18:27 2016
From: dj at deadhat.com (dj at deadhat.com)
Date: Mon, 2 May 2016 18:18:27 -0000
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <57278985.1090000@sonic.net>
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com> <57278985.1090000@sonic.net>
Message-ID: <1c3b0bab54260b7f0983791985100550.squirrel@deadhat.com>

> I'm seeing this whole thing as an attempt to prop
> up CA's which are otherwise essentially looking at
> a failed business model. Even if CA's did what
> they're supposed to do there would be no way for
> that business to function in the market of USB
> equipment.

The CA that needs to exist would be the USB-IF. That's a consequence of the spec that says the mandatory cert is one signed under the USB-IF root cert. The USB data security spec (not yet released) leaves other slots open for organizational certs: some organization (a household, a corp, a government, etc.) could provision devices with an org cert, so that its internal devices can enforce a policy of not working with anything that lacks the organizational cert. This was principally my idea. It has been present in a draft that I wrote a few years ago, which sat moribund until the USB manufacturers felt pressure to have some sort of security spec.
It doesn't include a model for using the normal CAs used for web certification, since they have proven themselves ineffective so many times. My experience with normal CAs when trying to get them to support device certificates is that they expect too much money. They want $100 per year, per cert, rather than a couple of cents per device, one time. That's why for WiMAX we had to initially deploy our own CA as a corporate CA, before the task could be passed on to an external CA that would accept the business. I see no difference here. The USB-IF is going to have to set up a CA somehow, because that's what the spec implies.

> CA's were supposed to verify identities, respond
> to authentication attacks, handle revocations, etc.

The USB-IF already does that for the (relatively small) population of USB silicon vendors.

> The race to the bottom and their business "need"
> to support stupid security decisions ("compatibility"
> means, if someone is stupid once, therefore everybody
> must be stupid forever!) meant, inevitably, that
> they only verify that their payments clear.

If you think something is stupid in the spec, please email specifics in response to the release of the spec, to the email addresses at the bottom of the page with the spec on it.

> Certification of USB equipment doesn't even
> pretend to have key revocation capabilities or
> any way of responding to authorization attacks.
> By design it pretty much can't. Which means that
> there is literally nothing CA's can contribute
> to it. You can tell some piece of kit presents
> a key which was valid, for somebody, once. Woo.
> Does that, in some way, help?

Your PC or phone authenticating a charger certainly can do revocation using the usual mechanisms, but it has been my assertion that these things tend to be done by policy download from OS vendors and browser vendors. Why would this be any different?
Browsers and OSes contain whitelists and blacklists as policy to be enforced, because revocation is rarely fit for purpose.

From hettinga at gmail.com Mon May 2 14:43:39 2016
From: hettinga at gmail.com (Robert Hettinga)
Date: Mon, 2 May 2016 14:43:39 -0400
Subject: [Cryptography] Craig Wright is Satoshi Nakamoto
In-Reply-To:
References:
Message-ID: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com>

> On May 2, 2016, at 2:17 PM, Erik Granger wrote:
>
> I'll believe it when he signs arbitrary messages with satoshis key. No signature, no story.

Spend the coins.

Pics or it didn’t happen.

Cheers,
RAH

From sneakypete81 at gmail.com Mon May 2 15:36:25 2016
From: sneakypete81 at gmail.com (Pete)
Date: Mon, 02 May 2016 19:36:25 +0000
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To:
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com>
Message-ID:

> Let's suppose that an expensive phone does USB3 authentication of its
> putative power source and decides that the authentication FAILS. Oh
> my god, it's been attached to a "counterfeit" charger or a "defective"
> cable! How does it protect itself?

If authentication fails, the phone can choose not to charge at the maximum rate. Current draw is totally within the control of the phone's internal battery charging circuitry. Lower current, less chance of fire. The charger presents its power capabilities via USB-PD; it's up to the phone to select which one it wants to use.

> It seems to me that a counterfeit charger could short 110V down
> the USB3 cable, with or without authentication. What protects
> the phone from that?

Right, no amount of magic crypto will protect a device from this. And a USB cable with VBUS/GND swapped is usually enough to fry a laptop[1].

> It is well understood in the consumer electronics industry how to use
> authentication requirements to exert market power.
[snip]
> My initial suspicion is that THIS is what the USB3 "authentication"
> spec is for.

You're right to be suspicious, and I imagine there's some truth in this. But as you say, these companies are already using their own proprietary charger authentication protocols. This spec is at least providing a common protocol to allow different vendors' products to authenticate, if they wish to do so.

[1] https://plus.google.com/+BensonLeung/posts/EBGMagC46fN
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ikizir at gmail.com Tue May 3 00:33:00 2016
From: ikizir at gmail.com (Ismail Kizir)
Date: Tue, 3 May 2016 07:33:00 +0300
Subject: [Cryptography] WhatsApp, Curve25519 workspace etc.
In-Reply-To: <20160501131603.7b8594d5@pc1>
References: <20160501131603.7b8594d5@pc1>
Message-ID:

>Maybe your line of thinking is that 128 is "only" a bit more than
>twice the size of 56. But that's not the case. You're counting bits

That's the point: I didn't mean there is any vulnerability in 56-bit DES encryption. A supercomputer tries all possibilities and breaks it via brute force in 399 seconds! In fact, by concentrating solely on brute-force attack scenarios, you confirm my concerns: you speak as if there were no possibility of reducing the workspace, e.g. via protocol codes, via EXIF data, via mathematical properties, etc. Especially considering that we're talking about 10-20 years in the future, in my humble opinion a 128-bit workspace is "highly suspicious". I also respect your opinion. But that's what I think.

Thank you
Ismail Kizir

On Sun, May 1, 2016 at 2:16 PM, Hanno Böck wrote:
> Hi,
>
> On Sun, 1 May 2016 10:58:59 +0300
> Ismail Kizir wrote:
>
>> I want to state my thought more clearly.
>> Curve25519 has 2^128 workspace for brute force attacks. Correct me if
>> I am wrong please.
>>
>> Also, as far as I remember, -I don't remember where I read-, a
>> supercomputer today, is able to break 56 bit DES encryption in ~400
>> seconds.
>
> Not sure where you're going with this. 56 bit security is broken, 128
> is not (and most likely never will be).
> Maybe your line of thinking is that 128 is "only" a bit more than
> twice the size of 56. But that's not the case. You're counting bits
> here that exponentially increase the complexity. 128 bit is not (a bit
> more than) twice the security of 56, it's another universe of security.
>
>> Moreover, more important: WhatsApp uses AES 256 in CBC mode, which is
>> excluded from the TLS 1.3 draft. And there are some articles about it:
>> http://link.springer.com/chapter/10.1007%2F3-540-45708-9_2
>
> Ok, I must say I was surprised that Whatsapp uses CBC (I had expected
> either gcm or chacha20-poly1305), but there is no risk here either.
> All the weaknesses of CBC don't affect the mode itself, but a bad
> combination of cbc+hmac. Quickly skimming the whatsapp whitepaper,
> they use cbc+hmac with encrypt-then-mac. That's safe. What's unsafe is
> the other way round, or some wacky encrypt-and-mac constructions.
>
>> I want to repeat my question again: Isn't it highly suspicious to take
>> so many risks, instead of simply using a larger key space?
>
> It seems to me that what you classify as "so many risks" are just two
> misunderstandings. Neither the 128 bit security of curve25519 nor cbc
> in encrypt-then-mac mode is a risk.
>
> --
> Hanno Böck
> https://hboeck.de/
>
> mail/jabber: hanno at hboeck.de
> GPG: BBB51E42
>
> _______________________________________________
> The cryptography mailing list
> cryptography at metzdowd.com
> http://www.metzdowd.com/mailman/listinfo/cryptography

From asanso at adobe.com Tue May 3 01:44:33 2016
From: asanso at adobe.com (Antonio Sanso)
Date: Tue, 3 May 2016 05:44:33 +0000
Subject: [Cryptography] WhatsApp, Curve25519 workspace etc.
In-Reply-To: <20160501131603.7b8594d5@pc1>
References: <20160501131603.7b8594d5@pc1>
Message-ID:

hi

On May 1, 2016, at 1:16 PM, Hanno Böck wrote:

> Hi,
>
> On Sun, 1 May 2016 10:58:59 +0300
> Ismail Kizir wrote:
>
>> I want to state my thought more clearly.
>> Curve25519 has 2^128 workspace for brute force attacks. Correct me if
>> I am wrong please.
>>
>> Also, as far as I remember, -I don't remember where I read-, a
>> supercomputer today, is able to break 56 bit DES encryption in ~400
>> seconds.
>
> Not sure where you're going with this. 56 bit security is broken, 128
> is not (and most likely never will be).
> Maybe your line of thinking is that 128 is "only" a bit more than
> twice the size of 56. But that's not the case. You're counting bits
> here that exponentially increase the complexity. 128 bit is not (a bit
> more than) twice the security of 56, it's another universe of security.
>
>> Moreover, more important: WhatsApp uses AES 256 in CBC mode, which is
>> excluded from the TLS 1.3 draft. And there are some articles about it:
>> http://link.springer.com/chapter/10.1007%2F3-540-45708-9_2
>
> Ok, I must say I was surprised that Whatsapp uses CBC (I had expected
> either gcm or chacha20-poly1305),

FWIW I am not :) both GCM and chacha20-poly1305 are not nonce-misuse resistant, and standard AES-GCM (with a 96-bit nonce) can safely be used “only” 2^32 times :)

regards

antonio

> but there is no risk here either.
> All the weaknesses of CBC don't affect the mode itself, but a bad
> combination of cbc+hmac. Quickly skimming the whatsapp whitepaper,
> they use cbc+hmac with encrypt-then-mac. That's safe. What's unsafe is
> the other way round, or some wacky encrypt-and-mac constructions.
>
>> I want to repeat my question again: Isn't it highly suspicious to take
>> so many risks, instead of simply using a larger key space?
>
> It seems to me that what you classify as "so many risks" are just two
> misunderstandings.
Neither the 128 bit security of curve25519 nor cbc > in encrypt-then-mac mode are a risk. > > -- > Hanno Böck > https://hboeck.de/ > > mail/jabber: hanno at hboeck.de > GPG: BBB51E42 > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography From brk7bx at virginia.edu Tue May 3 08:57:43 2016 From: brk7bx at virginia.edu (Benjamin Kreuter) Date: Tue, 03 May 2016 08:57:43 -0400 Subject: [Cryptography] WhatsApp, Curve25519 workspace etc. In-Reply-To: References: <20160501131603.7b8594d5@pc1> Message-ID: <1462280263.2923.41.camel@virginia.edu> On Tue, 2016-05-03 at 07:33 +0300, Ismail Kizir wrote: > Especially considering we're talking about 10-20 years future, my > humble opinion, 128 bit workspace is "highly suspicious". Well, if you want something larger, there are other Edwards curves you can use; for example, Curve41417: https://pure.tue.nl/ws/files/3937646/687849301558882.pdf The nice part about modern cryptography is that you are free to choose the security / computation cost trade-off that makes sense for you. -- Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From michel.arboi at gmail.com Tue May 3 11:00:57 2016 From: michel.arboi at gmail.com (Michel Arboi) Date: Tue, 3 May 2016 17:00:57 +0200 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: On 1 May 2016 at 00:20, Henry Baker wrote: > I did say I compared sha1sum with *copying* the file, not just *reading* the file. You are comparing apples (IO) and oranges (CPU) From dj at deadhat.com Tue May 3 16:49:53 2016 From: dj at deadhat.com (dj at deadhat.com) Date: Tue, 3 May 2016 20:49:53 -0000 Subject: [Cryptography] WhatsApp, Curve25519 workspace etc. 
In-Reply-To: <1462280263.2923.41.camel@virginia.edu>
References: <20160501131603.7b8594d5@pc1> <1462280263.2923.41.camel@virginia.edu>
Message-ID:

> On Tue, 2016-05-03 at 07:33 +0300, Ismail Kizir wrote:
>
>> Especially considering we're talking about 10-20 years future, my
>> humble opinion, 128 bit workspace is "highly suspicious".
>
> Well, if you want something larger, there are other Edwards curves you
> can use; for example, Curve41417:
>
> https://pure.tue.nl/ws/files/3937646/687849301558882.pdf
>
> The nice part about modern cryptography is that you are free to choose
> the security / computation cost trade-off that makes sense for you.

I rather like 128 bits for a key size, especially when the data size is also 128 bits. This may be mostly because I implement crypto in hardware. For an O(2**128) problem attacked by 100,000,000 custom circuits in an NSA data center, each circuit trying 10,000,000,000 combinations a second (parameters we might hypothetically reach in 20 years if we're optimistic), it would take 10,790,283,070,806 years to complete. So my estimate of attack strength could be off by a factor of 1,000,000 and we would still be ok. I'd really like to get the contract for supplying those chips.

The odds of a cryptographic attack seem higher than those of a brute-force attack. I am unafraid of quantum computers. They cannot and will not happen in the way imagined by the media. I'm having some of my time occupied in 'preparing' for quantum computers. I think it'll be a backwards step, moving us away from simple public key schemes to really complex ones that will be more vulnerable to cryptographic failures that would not happen to, say, Curve25519.

If you're increasing the key strength to 256 or 512 bits to increase security, you are failing to achieve your goals.
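The brute-force arithmetic above is easy to reproduce; the circuit count and per-circuit rate are the hypothetical parameters from the paragraph above, not real hardware:

```python
# Sanity-check the 2**128 brute-force estimate quoted above.
circuits = 100_000_000             # hypothetical custom ASICs in a data center
tries_per_second = 10_000_000_000  # hypothetical rate per circuit
seconds_per_year = 365.25 * 24 * 3600

seconds = 2**128 / (circuits * tries_per_second)
years = seconds / seconds_per_year

print(f"{years:.3e} years")  # on the order of 1e13, i.e. about 10.8 trillion years
```

Even a millionfold error in those parameters still leaves the exhaustive search at around ten million years, which is the "off by a factor of 1,000,000 and we still would be ok" point.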
Your weakest link lies elsewhere, and by focusing on key size beyond 128 bits you are missing the opportunity to address the weakest link, or spending extra time lobbying NIST for 256 bit block sizes in their block ciphers (am I the only one doing this?).

From ge at weijers.org Tue May 3 18:52:47 2016
From: ge at weijers.org (Gé Weijers)
Date: Tue, 3 May 2016 15:52:47 -0700
Subject: [Cryptography] WhatsApp, Curve25519 workspace etc.
In-Reply-To:
References: <20160501131603.7b8594d5@pc1>
Message-ID:

> FWIW I am not :)
> both GCM and chacha20-poly1305 are not nonce-misuse resistant, and standard
> AES-GCM (with a 96-bit nonce) can safely be used “only” 2^32 times :)

To quote SP-800-38D:

    In other words, unless an implementation only uses 96-bit IVs that are
    generated by the deterministic construction: The total number of
    invocations of the authenticated encryption function shall not exceed
    2^32, including all IV lengths and all instances of the authenticated
    encryption function with the given key.

Or: if you use a counter for the nonce after doing a Diffie-Hellman exchange to generate fresh keys, you can safely go beyond 2^32. The requirement is that repeating a nonce should have a probability < 2^-32, and using a counter you can trivially meet that requirement.

--
Gé
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agr at me.com Wed May 4 12:25:34 2016
From: agr at me.com (Arnold Reinhold)
Date: Wed, 04 May 2016 12:25:34 -0400
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
Message-ID: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com>

Authentication chips in every computer cable are a disaster in the making. It’s an opportunity for cyber-sabotage on a grand scale too good for major powers to pass up. A small fraction of cables in service containing a chip that can be triggered remotely to fail or to load malware could wreak havoc on a modern economy.
Where will the cable chips be made and who will control the final masks? The master private signing key owned by the USB Implementers Forum will be an incredibly valuable target, on the order of an NSA core secret. There is no way a trade association will spend the kind of money needed to secure this asset, nor will they have the layers of legal and other protections that the NSA enjoys (security clearances, long prison terms for leaks, threat of covert action, etc). What are the penalties these days for leaking a corporate trade secret? That’s assuming the leaker is caught; a few thousand bits passed to a contact or overnight access to an HSM in exchange for a suitcase of cash or freedom for a relative in the old country, and no one is the wiser. And if the USB-IF does discover a leak, what can they do about it?

It may be time to stock up on computers that can be powered by a pair of wires and can talk over chip-free copper cables. They are the only ones that should be used for critical infrastructure.

Madness, irresponsible madness.

Arnold Reinhold
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iang at iang.org Wed May 4 18:44:46 2016
From: iang at iang.org (ianG)
Date: Thu, 5 May 2016 08:44:46 +1000
Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof.
Of the sort that we've all been talking about since forever on this list and many others. Let's break it down. Firstly, we all on this list know that cryptographic keys prove that a private key did a maths transform that a public key can confirm. Full stop. What cryptographic proofs do not confirm is that a human said something meaningful to another human. Indeed, the more that the Bitcoin community and the tabloid press demand a proof-of-spend and examine the results they're given, the more it demonstrates how humans seem to be isolated by cryptography, not joined. In theory, keys are mathware, humans are wetware, and the two do not easily mix. How does this play out in real life? We know that the human experiment known as cryptographic signing has failed. We know that there is at least one tiny little country - Estonia - clinging to the European dream of using smart cards to identify humans, but statistically the world has failed to make human signing with public key cryptography work. People write books about this; I simply point it out as a significant data point of where many thousands of people really, really tried to use keys to prove meaningful human things. And failed. Let's get more topical. There are strident, demanding calls for people who make statements concerning the identity of one said Satoshi Nakamoto to back those statements up with cryptographic proof. Yet these demands are .. unfounded, and that is the kindest thing that could be said about them. Why? Anyone offering information to the world has no necessary call to offer more information. When I say that Craig Wright was the leader of the team known as Satoshi Nakamoto, I do not contract to say more. Nor did Gavin or Jon or others in any sense contract to say more than they did. They don't owe anyone anything. Even if they made errors, it is not on them to correct them.
"Extraordinary claims call for extraordinary proof" is only a standard for academia; it has little place in human affairs, whether in that democratic tradition known as open discourse or in the human standards of proof that have been honed over a thousand years of legal history. In fact, I contracted to say less - as we all do: when we join the encryption business, we covenant to keep people's privacy. When I started what became Project Prometheus a few years ago, I promoted their privacy as a goal - because the team known as Satoshi Nakamoto asked for their privacy by posting here in 2008 and disappearing entirely 2 years later. Now, when I come out and say that Craig Wright was the leader of Satoshi Nakamoto, it is only because he himself finally announced it. I remain committed to privacy even if the community Satoshi wrought is revealing themselves to be a pack of rabid statist wolves looking to rip the wool off of the backs of the sheep that they call their customers and future users. Sorry, guys, it gets worse, and I hope the Bitcoin community dissolves itself in collective shame as to their inability to even contemplate protecting their own. As we know in cryptographic affairs, key management is hard. Keys can be lost. Misplaced. Traded. Breached and stolen. Keys can be spoofed - we have an entire cryptographic security system called SSL/HTTPS which is blighted by phishing, based on misuse of cryptographic proof of identity. Let's not go into the details, but I shall revise here FTR the claim of secure browsing: the identities are cryptographically proven. Which apparent claim does not reveal itself to the humans with sufficient reliability to defeat basic common-or-garden social engineering. If the IETF's biggest, bravest and most educated can fail to protect the browsing public from the obvious, known and counted threat, what hope the rest?
Even if the above were not sufficient, let me get precise and particular as to why the Proof-of-Satoshi is dead-on-arrival. There are several facts which apply in this case. Firstly, Satoshi Nakamoto is not one human being. It is or was a team. Craig Wright named one person in his recent communications, being the late Dave Kleiman. Craig did not name others, nor should I. While he was the quintessential genius who had the original idea for Bitcoin and wrote the lion's share of the code, Craig could not have done it alone. Satoshi Nakamoto was a team effort. Indeed, a sort of proof is right there in front of you - when you look at Craig Wright, you do not see Satoshi. When you look at Satoshi Nakamoto, you're seeing some measure of the influence of Dave Kleiman, and it isn't possible for Dave to prove anything anymore to anyone. Team Satoshi is ephemeral, and no cryptographic multisig can now capture those that aren't around any more. This team effort was one of a most severe cost to all members of that team, and only privacy is holding us back from recognising it. Further, the keys that controlled critical parts were moved several times between various persons. Which is to say that control of the keys does not indicate more than the holder being trustworthy to the goals of the team at a point in time. Even if Craig manages to sign over a coin, it does not and cannot prove he is "the one," only that he was at one point in time a trusted member of the team. Albeit, the team that he founded, but a wise leader controls for all risks, including those risks posed by the leader himself. More: control at any time does not necessarily indicate ownership, either in the minds of the team or in the eyes of the law. Recalling the reports of late 2015, can you rule out that the keys were stolen? Finally, as has been reported, the headline bulk of the value is controlled by a trust.
Any movement of those coins needs to operate according to trust rules; if not, then we are in a state of sin. What that means is not something that can be described in mathematical terms, but it can certainly be described in hysterical terms - the logic du jour of the Bitcoin community. As an aside, I really strongly suggest that the Bitcoin community not press for the breaking of the trust. If unsure on this point, ask your miners to explain that old curse "be careful what you wish for." Breaking the trust is way off the scale of what anyone will desire. I suggest that it is therefore impossible for any reasonable person to conclude that a "spend" of a Bitcoin coin proves anything beyond that the erstwhile signer was at some point in some way related to a key. A host of factors make the 'proof' too impractical to describe at a press or media level. And, if we have to call in opposing experts to argue the case, what's the point of the "proof"? It is with incredible sadness that I watch an entire community misunderstand the lesson that Satoshi originally taught - trust in mathematics to prove accountancy. Yes, cryptography can prove that a coin is available and disposable pending an attempt to further dispose it. But the Bitcoin design was deliberately weak when it came to proof of persons. Especially when it comes to known and now revealed weaknesses in the persona once known as Satoshi Nakamoto, there is no proof in mathematics that can satisfy that community's yearning for yet another meal. By all means, take that lamb for yet another feast of slaughter, but do not soil the good name of mathematics for your Pavlovian hunger. iang, CARS. ps; after writing this, I stumbled across: http://hackingdistributed.com/2016/05/04/logical-fallacies-hunt-satoshi/ pps; This post reflects no commercial agenda or position of myself or any person related to me. I have no position in BTC and have never had any BTC other than a few pence lost in some test wallet somewhere.
From leichter at lrw.com Wed May 4 22:35:03 2016 From: leichter at lrw.com (Jerry Leichter) Date: Wed, 4 May 2016 22:35:03 -0400 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com> References: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com> Message-ID: > Authentication chips in every computer cable are a disaster in the making. It’s an opportunity for cyber-sabotage on a grand scale too good for major powers to pass up. A small fraction of cables in service containing a chip that can be triggered remotely to fail or to load malware could wreak havoc on a modern economy.... You're confusing two things. *Authentication* chips in computer cables are supposed to be a way of protecting users from cheap and potentially dangerous cables. In reality, they are more likely to be a kind of DRM, maintaining the profits of the cable makers. We didn't need them to have safe power cables - we looked for the UL label. We don't need them for computer cables. *Chips* in cables are a done deal. The kinds of speeds we are pushing through copper today are impossible without active components to do pulse shaping and various other kinds of adaptation. Why it's worse to have those chips in the cables than in the jacks (that may be possible, but you really need to be right near the cable for this kind of thing to work) is beyond me. Of course, if you go fiber rather than copper, you need a chip to do the translation between the electronic and photonic domains. At least some part of it has to physically couple to the fiber. Every chip can potentially be compromised. Chips in cables seem neither more nor less vulnerable than others. I see little basis for singling out chips in cables as particularly hazardous. -- Jerry -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hbaker1 at pipeline.com Thu May 5 11:32:34 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 05 May 2016 08:32:34 -0700 Subject: [Cryptography] TLS proxies popped Message-ID: http://users.encs.concordia.ca/~mmannan/publications/ssl-interception-ndss2016.pdf http://www.theregister.co.uk/2016/05/05/tls_proxies_are_insecure/ TLS proxies: insecure by design say boffins 5 May 2016 at 07:15, Richard Chirgwin Have you ever suspected filters that decrypt traffic of being insecure? Canadian boffins agree with you, saying TLS proxies – commonly deployed in both business and home networks for traffic inspection – open up cans of worms. In their tests, "not a single TLS proxy implementation is secure with respect to all of our tests, sometimes leading to trivial server impersonation under an active man-in-the-middle attack, as soon as the product is installed on a system," write Xavier de Carné de Carnavalet and Mohammad Mannan of the Concordia Institute for Information Systems Engineering in Montreal. The pair's paper (PDF) goes on to say that users could be exposed to man-in-the-middle attacks or other CA-based impersonations. We found that four products are vulnerable to full server impersonation under an active man-in-the-middle (MITM) attack out-of-the-box, and two more if TLS filtering is enabled. Several of these tools also mislead browsers into believing that a TLS connection is more secure than it actually is, by e.g., artificially upgrading a server’s TLS version at the client. There's also the matter of how products protect their root certificates' private key. It's not pretty, as the table ... shows. From hbaker1 at pipeline.com Thu May 5 11:51:30 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 05 May 2016 08:51:30 -0700 Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com> References: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com> Message-ID: At 09:25 AM 5/4/2016, Arnold Reinhold wrote: >Authentication chips in every computer cable are a disaster in the making. > >It's an opportunity for cyber-sabotage on a grand scale too good for major powers to pass up. > >A small fraction of cables in service containing a chip that can be triggered remotely to fail or to load malware could wreak havoc on a modern economy. > >Where will the cable chips be made and who will control the final masks? > >The master private signing key owned by the USB Implementers Forum will be an incredibly valuable target, on the order of an NSA core secret. > >There is no way a trade association will spend the kind of money needed to secure this asset, nor will they have the layers of legal and other protections that the NSA enjoys (security clearances, long prison terms for leaks, threat of covert action, etc). > >What are the penalties these days for leaking a corporate trade secret? > >That's assuming the leaker is caught; a few thousand bits passed to a contact or overnight access to an HSM in exchange for a suitcase of cash or freedom for a relative in the old country and no one is the wiser. > >And if the USB-IF does discover a leak, what can they do about it? > >It may be time to stock up on computers that can be powered by a pair of wires and can talk over chip-free copper cables. > >They are the only ones that should be used for critical infrastructure. > >Madness, irresponsible madness. > >Arnold Reinhold This horse left the barn with USB. Hacked USB HID devices can cause PCs to download malware from the Internet, or infect them directly. Check out NSA TAO's playset (Thanks, Ed Snowden). The only "non-smart"/non-hacked HW protocol left is a UART.
From pg at futureware.at Thu May 5 13:58:00 2016 From: pg at futureware.at (Philipp =?iso-8859-1?Q?G=FChring?=) Date: Thu, 05 May 2016 19:58:00 +0200 Subject: [Cryptography] USB 3.0 authentication: market power and DRM? In-Reply-To: References: <7D05622B-19E1-4937-B584-EDC28FC5A850@me.com> Message-ID: Hi, > This horse left the barn with USB. Hacked USB HID devices can cause > PC's to download malware from the Internet, or infect them directly. > Check out NSA TAO's playset (Thanks, Ed Snowden). The only > "non-smart"/non-hacked HW protocol left is a UART. And even UART got a Plug&Play extension, to enable automatic driver loading, at least for Mice and Modems: http://www.osdever.net/documents/PNP-ExternalSerial-v1.00.pdf Since Microsoft Windows 95 is referenced in the standard, I guess that at least Windows 95 and perhaps some more versions would have implemented it, but I am not sure. Best regards, Philipp Gühring From awd at ddg.com Thu May 5 14:40:43 2016 From: awd at ddg.com (Andrew Donoho) Date: Thu, 5 May 2016 13:40:43 -0500 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.] In-Reply-To: <20160501131603.7b8594d5@pc1> References: <20160501131603.7b8594d5@pc1> Message-ID: > On May 1, 2016, at 06:16 , Hanno Böck wrote: > >> Moreover, more important: WhatsApp uses AES 256 in CBC mode, which is >> excluded from TLS 1.3 draft. And there are some articles about it: >> http://link.springer.com/chapter/10.1007%2F3-540-45708-9_2 > > Ok, I must say I was surprised that Whatsapp uses CBC (I had expected > either gcm or chacha20-poly1305), but there is no risk here either. > All the weaknesses of CBC don't affect the mode itself, but a bad > combination of cbc+hmac. Quickly skimming into the whatsapp whitepaper > they use cbc+hmac with encrypt-then-mac. Gentle folk, I have a question about the WhatsApp protocol. On page 6 of the WhatsApp Security Whitepaper, they describe their end to end encryption for media and attachments. 
To support encrypting in AES-CBC mode, they generate an ephemeral 256 bit key and a 128 bit IV. Then they go further and generate a second 256 bit ephemeral key for calculating the HMAC-SHA256. As the first key already has a significant amount of entropy and is only used once, why isn’t it reused for the HMAC-SHA256 calculation? On the face of it, it looks redundant for a single use key. Anon, Andrew ____________________________________ Andrew W. Donoho Donoho Design Group, L.L.C. awd at DDG.com, +1 (512) 750-7596, twitter.com/adonoho Essentially, all models are wrong, but some are useful. — George E.P. Box From dave at horsfall.org Thu May 5 19:42:05 2016 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 6 May 2016 09:42:05 +1000 (EST) Subject: [Cryptography] Happy birthday, Ron Rivest! Message-ID: Born on this day in 1947, he is the "R" in RSA. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From phill at hallambaker.com Thu May 5 22:33:58 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Thu, 5 May 2016 22:33:58 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: <572A7B5E.20904@iang.org> References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> Message-ID: Which algorithm was used to sign the 'proof'? I tried to work it out but couldn't tell for sure. But it looks to me like it was some form of ECDSA. [Trying to simplify for an audience not familiar, have I gone too far?] Now if you sign a document X with RSA, the signature will be the same every time. But with all forms of DH-based signatures, a random number is generated and that affects the signature value. In effect, every signature has a salt value. Which means that a document X will only have the same signature a second time if the same random number is used. And if that same random number is ever used to sign a different document, it allows an attacker to work out the private key.
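The algebra behind that last point can be sketched without any real curve at all. The toy Python fragment below is not a usable signature scheme: the group order, keys and nonce are made-up constants, and the curve-derived value r is replaced by a stand-in. It shows only how two DSA-style signatures that share a nonce hand over the private key:

```python
# Toy sketch of DSA/ECDSA nonce reuse (NOT a real curve).
# Signing: s = k^-1 * (h + r*d) mod n, with private key d, nonce k,
# r derived from k, and h the hash of the document.
n = (1 << 61) - 1        # a prime "group order" (toy choice)
d = 0x12345678           # the "private key"
k = 0xDEADBEEF           # the nonce, wrongly reused for two documents
r = pow(7, k, n)         # stand-in for the curve-derived value r

def sign(h):
    s = (pow(k, -1, n) * (h + r * d)) % n
    return (r, s)

h1, h2 = 0x1111, 0x2222                  # hashes of two different documents
(r1, s1), (r2, s2) = sign(h1), sign(h2)
assert r1 == r2                          # same nonce => same r, visible to all

# Attacker's recovery, from public values only:
# s1 - s2 = k^-1 * (h1 - h2), so k = (h1 - h2) / (s1 - s2) mod n,
# and then d = (s1*k - h1) / r mod n.
k_rec = ((h1 - h2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - h1) * pow(r1, -1, n)) % n
assert d_rec == d                        # private key recovered
```

This is exactly the hazard that deterministic nonce derivation (as in Ed25519, which comes up later in the thread) is designed to remove.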
So anyone doing DSA has to be very careful to avoid that. So not only is it very suspicious that 'Satoshi' would choose to prove who he is with an authentication proof that the real Satoshi would laugh at, there is really no way that legitimate signature software would produce the same signature twice. This is not proof that the guy is not Satoshi. But it is definitive proof that he is lying when he makes the demonstration. The only circumstance in which the real Satoshi would do this rigmarole would be to attempt to squash rumors that it was him. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Fri May 6 09:53:21 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 06 May 2016 06:53:21 -0700 Subject: [Cryptography] sha1sum speed In-Reply-To: References: Message-ID: At 08:31 AM 5/2/2016, Peter Bowen wrote: >randfile is a 300MiB file, already cached: How do the times compare to using 'openssl dgst -sha1'? > >I *think* that will use the optimized version. You are correct: time openssl dgst -sha1 18.677 seconds ***the winner (for SHA1)*** Note that this is only a single core. For completeness, here are the rest of the timings: openssl dgst -sha256 took 41 seconds. sha1sum took 24 seconds. sha3sum (default algorithm) took 54 seconds. sha256sum took 54 seconds. b2sum-i686-linux took 35.7 seconds. b2sum-amd64-linux took 27.3 seconds. cksum took 22.8 seconds. cfv -C -tsha1 -f- took 19.5 seconds cfv -C -tcrc -f- took 8.8 seconds cfv -C -tmd5 -f- took 15.6 seconds b2sum took 29.4 seconds b2sum -a blake2bp took 15.034 seconds (all 3 cores) From brk7bx at virginia.edu Fri May 6 12:23:11 2016 From: brk7bx at virginia.edu (Benjamin Kreuter) Date: Fri, 06 May 2016 12:23:11 -0400 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.]
In-Reply-To: References: <20160501131603.7b8594d5@pc1> Message-ID: <1462551791.8915.11.camel@virginia.edu> On Thu, 2016-05-05 at 13:40 -0500, Andrew Donoho wrote: > > Gentle folk, > > > > I have a question about the WhatsApp protocol. On page 6 of the > WhatsApp Security Whitepaper, they describe their end to end > encryption for media and attachments. To support encrypting in AES- > CBC mode, they generate an ephemeral 256 bit key and a 128 bit IV. > Then they go further and generate a second 256 bit ephemeral key for > calculating the HMAC-SHA256. As the first key already has a > significant amount of entropy and is only used once, why isn’t it > reused for the HMAC-SHA256 calculation? On the face of it, it looks > redundant for a single use key. Typically it is considered bad practice to use one key for two different purposes.  Also the proof of security for encrypt-then-MAC relies in subtle ways on the keys being different, so reusing the key can be insecure -- certainly true for CBC-MAC when the same block cipher is used for encryption.  Entropy is not really the issue here, since the encryption and MAC keys can safely be generated using a PRNG. -- Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From jon at callas.org Fri May 6 15:05:36 2016 From: jon at callas.org (Jon Callas) Date: Fri, 6 May 2016 12:05:36 -0700 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.] In-Reply-To: References: <20160501131603.7b8594d5@pc1> Message-ID: <55629778-8F9F-457D-B067-A877B988545B@callas.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 > I have a question about the WhatsApp protocol. On page 6 of the WhatsApp Security Whitepaper, they describe their end to end encryption for media and attachments. 
To support encrypting in AES-CBC mode, they generate an ephemeral 256 bit key and a 128 bit IV. Then they go further and generate a second 256 bit ephemeral key for calculating the HMAC-SHA256. As the first key already has a significant amount of entropy and is only used once, why isn’t it reused for the HMAC-SHA256 calculation? On the face of it, it looks redundant for a single use key. The concept here in a broad form is called "key hygiene." The idea is that you should only use a key for one purpose. If you're going to encrypt with it, you shouldn't also use it for integrity. Sometimes there are vague reasons for it, and sometimes specific. Sometimes there are weak reasons, and sometimes there are strong reasons. Here's an example of a vague reason. With RSA, since decryption and signing are the same operation just using the public and private keys, using the same key for both turns each operation into an oracle for the other. I'm not sure that there's ever been a non-contrived attack, but there you have it. For Elgamal, there are generators that are desirable for performance, like 2. g^x is convenient to compute when g is 2. This is a fine generator for encryption, but trivially broken with signing. So don't use your encryption key for signing. In this specific case, it's also why people will say it's best not even to do Elgamal signatures. DSA, for all its own issues (nonces, hold on to that thought), is a better choice for an orthogonal reason -- signature size is proportional to the size of the hash rather than the key. In symmetric land, particularly with AE modes and constructions, the weak reasons are that there are proofs of security that assume that the auth key and crypto keys are independent. If you do that, you don't have to prove anything about the mixing. So it's a weak thing in that we have no statements about security if you're using the same key in different places. It might be secure, but we don't know. It might also blow up in your face.
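The two-key, encrypt-then-MAC shape being discussed can be sketched in a few lines. This is a stdlib-only illustration, not the WhatsApp construction: a SHA-256 counter-mode keystream stands in for AES-CBC so the snippet is self-contained, and the only point is the key separation and the verify-before-decrypt order:

```python
# Sketch of two-key encrypt-then-MAC. The "cipher" is a hash-based
# keystream standing in for AES-CBC; do not use it as a real cipher.
import hmac
import hashlib
import secrets

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # XOR data against SHA-256(key || iv || counter) blocks.
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + iv + off.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[off:off + 32], pad))
    return bytes(out)

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    iv = secrets.token_bytes(16)
    ct = keystream_xor(enc_key, iv, plaintext)
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()  # MAC covers the IV too
    return iv + ct + tag

def mac_then_decrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    iv, ct, tag = msg[:16], msg[16:-32], msg[-32:]
    expect = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):     # verify before touching the ciphertext
        raise ValueError("bad MAC")
    return keystream_xor(enc_key, iv, ct)

# Two independent keys, as the proofs assume:
enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
blob = encrypt_then_mac(enc_key, mac_key, b"attack at dawn")
assert mac_then_decrypt(enc_key, mac_key, blob) == b"attack at dawn"
```

With this shape, a leak of mac_key alone tells an attacker nothing about enc_key, which is precisely the property that collapses when one key serves both roles.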
Then there are the strong ones, where there is an actual attack on one part from the other. There are a whole lot of these in various places. In particular, this happens because we consider auth keys to be less valuable than encryption keys, for a lot of good and bad reasons. Let me contrive a hypothetical. Suppose Alice and Bob are talking and we're passively observing. Let us also presume that there's some sort of slow leak of authenticity through -- whatever. Timing, passive oracle, whatever. But let's suppose that they are on a noisy line and because of something we can learn the auth key with ~2^30 retransmits. This leak is somewhere between irritating and fatal to them. Let's suppose that the leak makes it so that we can know that Alice's message N is broken. Well, we were going to learn that anyway, most likely, because it's going to get a retransmit. It's interesting, but not overly useful. Also interesting but not overly useful is that if we see a good message whiz by us, but then see it retransmitted, we learn something about where the noise in their comm channel is happening. On the other end of the scale, if they're using the same key, the auth leaks turn into crypto leaks. In the random case where they leak the auth key, we can decrypt their messages. In the case where we learn *part* of the auth key, we learn part of the crypto key and thus get an advantage in decryption. These auth errors might turn an intractable crypto break into a tractable one. And of course, if we end up learning the crypto+auth key, we can impersonate either of them to the other, and gain all the fun from that you can imagine. Beyond even this, we now have an *incentive* to stop being a passive listener and start injecting auth faults into the system. There's a systems break that happens because of bad key hygiene that escalates something unfortunate into something catastrophic, and potentially subtly catastrophic. Here's a slightly differently contrived thing.
Key reuse, as you know, is bad. It's sometimes necessary (like with block-level disk encryption), but it's never desirable. Sometimes it also just happens for one reason or another. And also often there's the attacker-level problem of how do you know that a key was reused? If you consider Counter Mode and key reuse, then known plaintext leads to a plaintext leak on key reuse, even. Well, if they're generating a public parameter like an IV/nonce deterministically[1] from the key, then you're giving away a key reuse because the IV is often a public parameter. At the very least, an IV reuse broadcasts a reused key and tells the attacker where to look, and in some systems, like GCM, an IV reuse is far, far worse than a key reuse. So all of this is why key hygiene is a good thing. There are plenty of places where it doesn't matter. In a perfect system, it shouldn't matter. But in real systems it can and does matter, from a crypto standpoint as well as engineering standpoints. It's good practice to get into. It's probably okay to use the same key a lot of the time. But it's always okay to have two separate keys, and sometimes maximally bad not to. There are many subtle issues that you just don't have to worry about if auth keys and crypto keys are independent. Jon [1] I'm using "deterministically" in an affected manner. You know what I mean. 
-----BEGIN PGP SIGNATURE----- Version: PGP Universal 3.3.0 (Build 9060) Charset: utf-8 wsBVAwUBVyzrAvD9H+HfsTZWAQhAOgf9GVKaxSsz5fIlP6q5Ozpt6ZsZgml6NCin TsAMnoB6OU5YeaSGMoT3xWBI0jkNY/ep31czVPmLoyLArpZbP5waD5g5anoNNb3x 2nPFPBSX0eVZSURmEdE7dQ+GCh8oiOxlppy17bK6WaybWbcebrwNTVNzDAvR8pj6 oVLoAJS5ZpEAzKDLv05bX0TiPqNqedWuIT6wDGi5QbEHBVLXnltQf4gPVxjyoRyp YMcy6kPYPoAIk+LhQMb4OsLh9GyJIW1udiSPdFHviW8os4Nopk12CoDbablGOGwz fnxwowHVyHb8u7+ZHoN/r7devNtpPVJpa9TOKdcQfSJ4LyWa409EXw== =fU6X -----END PGP SIGNATURE----- From ron at flownet.com Fri May 6 15:06:51 2016 From: ron at flownet.com (Ron Garret) Date: Fri, 6 May 2016 12:06:51 -0700 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> Message-ID: <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> On May 5, 2016, at 7:33 PM, Phillip Hallam-Baker wrote: > But with all forms of DH based signatures, a random number is generated and that affects the signature value. In effect, every signature has a salt value. No, that’s not true. Ed25519 signatures use a hash of the content being signed as the “random” value and so are deterministic. This is one of the things that makes Ed25519 better than standard ECDSA. rg From awd at ddg.com Fri May 6 16:57:25 2016 From: awd at ddg.com (Andrew Donoho) Date: Fri, 6 May 2016 15:57:25 -0500 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.] In-Reply-To: <55629778-8F9F-457D-B067-A877B988545B@callas.org> References: <20160501131603.7b8594d5@pc1> <55629778-8F9F-457D-B067-A877B988545B@callas.org> Message-ID: <397DC1E3-2902-43CB-B943-045DF0C958C3@ddg.com> >> On Thu, 2016-05-05 at 13:40 -0500, Andrew Donoho wrote: >> >> As the first key already has a >> significant amount of entropy and is only used once, why isn’t it >> reused for the HMAC-SHA256 calculation? On the face of it, it looks >> redundant for a single use key. 
Ben and Jon, Thank you both for sharing your insight. > On May 6, 2016, at 11:23 , Benjamin Kreuter wrote: > > Typically it is considered bad practice to use one key for two > different purposes. Also the proof of security for encrypt-then-MAC > relies in subtle ways on the keys being different, so reusing the key > can be insecure -- certainly true for CBC-MAC when the same block > cipher is used for encryption. Entropy is not really the issue here, > since the encryption and MAC keys can safely be generated using a PRNG. The rules of good and bad crypto practice are truly hidden. Ferguson, Schneier and Kohno are leaving some things out. Is there another book I should be consulting for added guidance? This rule, in particular, makes a great deal of sense. Using a second, independent high-entropy key provides a kind of cryptographic “firewall” between the two operations. OTOH, I do not want to get into the habit of Creeping Cargo Cult Crypto. I want to implement the minimum to be secure. More complexity just adds places for lurking errors. Hence, in my system, I’ve just used a single key for both operations. Of course, making sure that the HMAC key is truly independent of the signing key is also possibly a problem. For example, when I prepare to encrypt a file, I can easily imagine naïvely creating the IV, AES key, and HMAC key sequentially. If you can predict the PRNG, then the public IV gives you some clues to the keys. Hence, inverting the order and, to guard against prediction, allocating an IV to throw away before creating the ephemeral keys has some value. Or am I being too suspicious about the low levels of entropy in my PRNG? (In my case, these keys will be generated on a mobile device. Mobile devices tend to have, from what I can see, some of the best entropy sources in systems today.)
If you're going to encrypt with it, you shouldn't also use it for integrity. > > Sometimes there are vague reasons for it, and sometimes specific. Sometimes there are weak reasons, and sometimes there are strong reasons. [An excellent discussion of why to implement key hygiene snipped. Thank you for sharing that.] > Let me contrive a hypothetical. Suppose Alice and Bob are talking and we're passively observing. Let us also presume that there's some sort of slow leak of authenticity through -- whatever. Timing, passive oracle, whatever. But let's suppose that they are on a noisy line and because of something we can learn the auth key with ~2^30 retransmits. In the case of WhatsApp, these ephemeral keys are generated per file being transmitted. Hence, I think this attack is largely moot. But I take your point. Which is: develop good design habits that are robust when your protocol is used in unforeseen ways. [Further snippage. Blame Tamzen/Perry for my editing of your fine answer.] > So all of this is why key hygiene is a good thing. There are plenty of places where it doesn't matter. In a perfect system, it shouldn't matter. But in real systems it can and does matter, from a crypto standpoint as well as engineering standpoints. It's good practice to get into. It's probably okay to use the same key a lot of the time. But it's always okay to have two separate keys, and sometimes maximally bad not to. There are many subtle issues that you just don't have to worry about if auth keys and crypto keys are independent. This pattern of inserting randomness into a system's design is clearly key. My current system, which emits fewer than 5K messages per user-year, appears to emit orders of magnitude fewer messages than are necessary for your attack. Nonetheless, I’ll be revising future versions to have an independent authentication key. Again, thank you both for sharing your time and expertise. Anon, Andrew ____________________________________ Andrew W.
Donoho Donoho Design Group, L.L.C. awd at DDG.com, +1 (512) 750-7596, twitter.com/adonoho Essentially, all models are wrong, but some are useful. — George E.P. Box From jmg at funkthat.com Fri May 6 19:06:51 2016 From: jmg at funkthat.com (John-Mark Gurney) Date: Fri, 6 May 2016 16:06:51 -0700 Subject: [Cryptography] Pragmatic, column-level data encryption at rest. Possible? In-Reply-To: <1461782936.3093878.591461817.28CAA774@webmail.messagingengine.com> References: <1461782936.3093878.591461817.28CAA774@webmail.messagingengine.com> Message-ID: <20160506230651.GR82472@funkthat.com> Jaycevee wrote this message on Wed, Apr 27, 2016 at 11:48 -0700: > The way I see it, there are two issues. I can't work out how you can > make the data searchable without storing an index of unsalted hashes, > which obviously becomes the weak point in the cryptosystem protecting > the data. Even with the index of hashes, you can't search on partial > values, but let's put that aside for a moment. How searchable do you need it to be? If you just need equality, use SIV[1] and you don't have the issue w/ unsalted hashes. [1] https://tools.ietf.org/html/rfc5297 -- John-Mark Gurney Voice: +1 415 225 5579 "All that I will do, has been done, All that I have, has not." From leichter at lrw.com Fri May 6 20:09:20 2016 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 6 May 2016 20:09:20 -0400 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.] In-Reply-To: <397DC1E3-2902-43CB-B943-045DF0C958C3@ddg.com> References: <20160501131603.7b8594d5@pc1> <55629778-8F9F-457D-B067-A877B988545B@callas.org> <397DC1E3-2902-43CB-B943-045DF0C958C3@ddg.com> Message-ID: <0E1E69AA-3F21-419D-A264-4125F2708A01@lrw.com> >> Typically it is considered bad practice to use one key for two >> different purposes. 
Also the proof of security for encrypt-then-MAC >> relies in subtle ways on the keys being different, so reusing the key >> can be insecure -- certainly true for CBC-MAC when the same block >> cipher is used for encryption. Entropy is not really the issue here, >> since the encryption and MAC keys can safely be generated using a PRNG. > The rules of good and bad crypto practice are truly hidden. Ferguson, Schneier and Kohno are leaving some things out. Is there another book I should be consulting for added guidance? > > This rule, in particular, makes a great deal of sense. Using a second, independent high entropy key provides a kind of cryptographic “firewall" between the two operations. OTOH, I do not want to get into the habit of Creeping Cargo Cult Crypto. I want to implement the minimum to be secure. More complexity just adds places for lurking errors. Hence, in my system, I’ve just used a single key for both operations. Note that "additional entropy" is not generally a requirement here. Mainly what's necessary is that knowing one key gives you no help in learning the other. A common technique is to derive keys for different purposes from a common master key. For example, given master key M, you might use E = H(0 || M) as your encryption key and A = H(1 || M) as your authentication key. H, obviously, has to be a cryptographically secure one-way hash. M must have at least as many bits as E and A, but could well have more. But there's little reason to go nuts about this. 0 and 1 are arbitrary - people will often do things like E = H("Encrypt" || M). Is this "as secure" as using different random E and A? Clearly not. Perhaps there's a weakness in H which allows one to get A from E or E from A. Or perhaps there's some really odd connection between your encryption and authentication algorithms and H, used this way, that allows one to break the combination. In practice ... this is rather unlikely. Of course, you now have to trust that H's properties are secure. 
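[Ed. note: a minimal sketch of the derivation Jerry describes, with SHA-256 standing in for H and all names illustrative. It is not anyone's production code, just the E = H(0 || M), A = H(1 || M) pattern made concrete:]

```python
import hashlib
import secrets

def derive_keys(master: bytes) -> tuple[bytes, bytes]:
    """Derive independent encryption and authentication keys from one master key M.

    E = H(0 || M), A = H(1 || M): if H is a secure one-way hash, knowing E
    gives no help in learning A (or M), and vice versa.
    """
    enc_key = hashlib.sha256(b"\x00" + master).digest()
    auth_key = hashlib.sha256(b"\x01" + master).digest()
    return enc_key, auth_key

# M should have at least as many bits as E and A (here 256).
master = secrets.token_bytes(32)
enc_key, auth_key = derive_keys(master)
assert enc_key != auth_key
```

The same pattern extends to new purposes later (E2 = H(2 || M), and so on) without generating or storing any new random bits.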
You could, as an alternative, build H from a primitive (like your encryption function) in such a way that the security properties of H that you rely on can be derived from the security properties of the encryption algorithm you're already relying on. That avoids adding any new things you have to trust - but perhaps increases the likelihood of an interaction between your hash and encryption functions. The main advantage of this approach is that you can easily derive new keys for new purposes without having to generate and securely store new random bits. The more you can isolate different cryptographic contexts from each other - so that even a leak of a key in one context provides no help in breaking messages in another - the better. -- Jerry From phill at hallambaker.com Fri May 6 21:04:40 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Fri, 6 May 2016 21:04:40 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> Message-ID: On Fri, May 6, 2016 at 3:06 PM, Ron Garret wrote: > > On May 5, 2016, at 7:33 PM, Phillip Hallam-Baker > wrote: > > > But with all forms of DH based signatures, a random number is generated > and that affects the signature value. In effect, every signature has a salt > value. > > No, that’s not true. Ed25519 signatures use a hash of the content being > signed as the “random” value and so are deterministic. This is one of the > things that makes Ed25519 better than standard ECDSA. > > rg > > Damn, forgot about that bit. Yes, have to make sure that the thing doesn't leak the private key... -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bascule at gmail.com Sat May 7 01:25:25 2016 From: bascule at gmail.com (Tony Arcieri) Date: Fri, 6 May 2016 22:25:25 -0700 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> Message-ID: On Fri, May 6, 2016 at 12:06 PM, Ron Garret wrote: > > But with all forms of DH based signatures, a random number is generated > and that affects the signature value. In effect, every signature has a salt > value. Interesting sidebar: ECDSA nonces were one of the sources of Bitcoin's transaction malleability. The (massive pile of hacks that is) segregated witness feature being added to Bitcoin has an added side effect of removing signatures from the hash of a transaction, and with it the associated malleability. All that said, if you're designing a new system today, pick Ed25519. -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From allenpmd at gmail.com Sat May 7 07:06:50 2016 From: allenpmd at gmail.com (Allen) Date: Sat, 7 May 2016 07:06:50 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. Message-ID: > Interesting sidebar: ECDSA nonces were one of the sources of Bitcoin's transaction malleability. > The (massive pile of hacks that is) segregated witness feature being added to Bitcoin has an added > side effect of removing signatures from the hash of a transaction, and with it the associated malleability. > All that said, if you're designing a new system today, pick Ed25519. FYI, while Ed25519 specifies that the nonce should be set deterministically, a signer can set it randomly and the signature will still verify. In fact, I don't see any way for a verifier to know if a signature was generated with a deterministic or a random nonce, so using Ed25519 might not solve malleability. 
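[Ed. note: Allen's points (a) and (b) are easy to see in any Schnorr-style scheme. The toy below uses a deliberately tiny, insecure multiplicative group rather than Ed25519 itself - same algebraic shape only - to show that a signature verifies for any nonce, deterministic or random, and that different nonces yield different valid signatures:]

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1, g = 4 generates the order-q subgroup.
# Tiny and insecure on purpose; illustration only, NOT Ed25519.
p, q, g = 2039, 1019, 4

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def sign(x: int, msg: bytes, r: int) -> tuple[int, int]:
    """Schnorr-style signature with an explicit, caller-chosen nonce r."""
    R = pow(g, r, p)
    e = H(R.to_bytes(2, "big"), msg) % q
    s = (r + e * x) % q
    return R, s

def verify(y: int, msg: bytes, sig: tuple[int, int]) -> bool:
    R, s = sig
    e = H(R.to_bytes(2, "big"), msg) % q
    return pow(g, s, p) == (R * pow(y, e, p)) % p   # g^s == R * y^e

x = secrets.randbelow(q - 1) + 1      # secret key
y = pow(g, x, p)                      # public key
msg = b"example transaction"

r_det = H(x.to_bytes(2, "big"), msg) % q or 1   # deterministic (EdDSA-style)
r_rnd = secrets.randbelow(q - 1) + 1            # random (a rogue signer's choice)
# (a) both verify; (b) they differ, so the signature is malleable by the
# signer; (c) nothing in verification reveals which nonce policy was used.
assert verify(y, msg, sign(x, msg, r_det))
assert verify(y, msg, sign(x, msg, r_rnd))
```

Since g^s = g^(r + e·x) = R · y^e for every r, the verifier sees only that the equation holds, never how r was picked.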
-------------- next part -------------- An HTML attachment was scrubbed... URL: From awd at ddg.com Sat May 7 08:54:30 2016 From: awd at ddg.com (Andrew Donoho) Date: Sat, 7 May 2016 07:54:30 -0500 Subject: [Cryptography] Why two keys? [was: Re: WhatsApp, Curve25519 workspace etc.] In-Reply-To: <0E1E69AA-3F21-419D-A264-4125F2708A01@lrw.com> References: <20160501131603.7b8594d5@pc1> <55629778-8F9F-457D-B067-A877B988545B@callas.org> <397DC1E3-2902-43CB-B943-045DF0C958C3@ddg.com> <0E1E69AA-3F21-419D-A264-4125F2708A01@lrw.com> Message-ID: <3227F445-99CE-42DE-BE92-62C5F991D00F@ddg.com> > On May 6, 2016, at 19:09 , Jerry Leichter wrote: > > Note that "additional entropy" is not generally a requirement here. Mainly what's necessary is that knowing one key gives you no help in learning the other. Jerry, Yes, I was imprecise. Clearly, the two keys need to be independent and mutually uncorrelated. As my mobile device has a high entropy pool, I’ve gotten into the habit of using it in place of more subtle constructions. I suspect the WhatsApp people have a similar habit. It is probably much cheaper to generate the two keys and IV than to run things through a hash. > A common technique is to derive keys for different purposes from a common master key. For example, given master key M, you might use E = H(0 || M) as your encryption key and A = H(1 || M) as your authentication key. H, obviously, has to be a cryptographically secure one-way hash. M must have at least as many bits as E and A, but could well have more. But there's little reason to go nuts about this. 0 and 1 are arbitrary - people will often do things like E = H("Encrypt" || M). The above technique gives me a simple mechanism to evolve my keying strategy moving forward. Thank you for sharing it. > Is this "as secure" as using different random E and A? Clearly not. > In practice ... this is rather unlikely. 
> That avoids adding any new things you have to trust - but perhaps increases the likelihood of an interaction between your hash and encryption functions. These are the kinds of properties you crypto algorithm folks must assert/prove. My job is to ensure I don’t violate your conditions of use. As I am coming to learn though, there is a largely undocumented art in applying crypto functions. A few rules of thumb seem to guide me well: 1) Use just a few crypto algorithms. Learn how they interact. Isolate dependencies. 2) Many crypto sins are made moot by using another random number. 3) Don’t trust yourself. You'll never see the attack coming. Ask friends for guidance. > The main advantage of this approach is that you can easily derive new keys for new purposes without having to generate and securely store new random bits. The more you can isolate different cryptographic contexts from each other - so that even a leak of a key in one context provides no help in breaking messages in another - the better. My big takeaway is that this is a mechanism that might help me enhance the security of my system over time. My main key can be migrated behind a hash function in a future version of my app without having to redistribute keys. While that won’t fix my sin of using the key for both encryption and authentication on earlier data, it will stop the ability for new messages to contribute to discovering the key. Thank you for sharing your insight. Anon, Andrew ____________________________________ Andrew W. Donoho Donoho Design Group, L.L.C. awd at DDG.com, +1 (512) 750-7596, twitter.com/adonoho New: Spot marks the taX™ App, Retweever Family: , "To take no detours from the high road of reason and social responsibility." -- Marcus Aurelius From ron at flownet.com Sat May 7 17:30:15 2016 From: ron at flownet.com (Ron Garret) Date: Sat, 7 May 2016 14:30:15 -0700 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. 
In-Reply-To: References: Message-ID: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> On May 7, 2016, at 4:06 AM, Allen wrote: > > Interesting sidebar: ECDSA nonces were one of the sources of Bitcoin's transaction malleability. > > The (massive pile of hacks that is) segregated witness feature being added to Bitcoin has an added > > side effect of removing signatures from the hash of a transaction, and with it the associated malleability. > > All that said, if you're designing a new system today, pick Ed25519. > > FYI, while Ed25519 specifies that the nonce should be set deterministically, a signer can set it randomly and the signature will still verify. In fact, I don't see any way for a verifier to know if a signature was generated with a deterministic or a random nonce, so using Ed25519 might not solve malleability. No, that’s not true either. Ed25519 is not merely ECDSA with a specified nonce, it has structural changes from ECDSA specifically to prevent the kind of attack you are suggesting. The message content is hashed twice, once to produce the nonce, and again with the secret key as a prefix to produce the signature. Not only does this prevent malleability attacks, but it also protects against collisions in the underlying hash. Two different messages can actually have hash collisions and still produce different signatures. (The converse is also possible: two messages which do not collide in the underlying hash can collide in the signatures, but this is extremely unlikely because Ed25519 is, essentially, a keyed hash construction.) (I hereby coin Ron's first law of cryptography: if you think you’ve found a flaw in a DJB design, you are almost certainly wrong.) rg From allenpmd at gmail.com Sat May 7 17:45:17 2016 From: allenpmd at gmail.com (Allen) Date: Sat, 7 May 2016 17:45:17 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. 
In-Reply-To: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> References: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> Message-ID: > No, that’s not true either. Ed25519 is not merely ECDSA with a specified nonce, it has structural changes > from ECDSA specifically to prevent the kind of attack you are suggesting. The message content is hashed > twice, once to produce the nonce, and again with the secret key as a prefix to produce the signature. I'm not sure we're talking the same language. I'm saying: what if, instead of following the Ed25519 spec to compute the nonce deterministically from the hash of the message, the signer simply sets the nonce to a random value, and then proceeds with the rest of signing equations. AFAICS from the Ed25519 equations: (a) the signature produced with a random nonce will verify; (b) the signature produced with a random nonce will be "malleable", i.e., different random nonces will produce different signatures; and (c) there is no way for a verifier (i.e., anyone who does not know the signer's secret key) to tell if the signer followed the Ed25519 spec or used a random nonce. Of course, (a) and (b) could be tested fairly quickly by modifying the code, and if I really cared about and was relying on non-malleability, I would try this. The last assertion (c) however cannot be proven simply with code, it would take math. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at flownet.com Sat May 7 18:16:28 2016 From: ron at flownet.com (Ron Garret) Date: Sat, 7 May 2016 15:16:28 -0700 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: References: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> Message-ID: On May 7, 2016, at 2:45 PM, Allen wrote: > > No, that’s not true either. Ed25519 is not merely ECDSA with a specified nonce, it has structural changes > > from ECDSA specifically to prevent the kind of attack you are suggesting. 
The message content is hashed > > twice, once to produce the nonce, and again with the secret key as a prefix to produce the signature. > > I'm not sure we're talking the same language. I'm saying: what if, instead of following the Ed25519 spec to compute the nonce deterministically from the hash of the message, the signer simply sets the nonce to a random value, and then proceeds with the rest of signing equations. AFAICS from the Ed25519 equations: (a) the signature produced with a random nonce will verify; (b) the signature produced with a random nonce will be "malleable", i.e., different random nonces will produce different signatures; and (c) there is no way for a verifier (i.e., anyone who does not know the signer's secret key) to tell if the signer followed the Ed25519 spec or used a random nonce. Of course, (a) and (b) could be tested fairly quickly by modifying the code, and if I really cared about and was relying on non-malleability, I would try this. The last assertion (c) however cannot be proven simply with code, it would take math. > OMG, you’re right. My apologies. rg From iang at iang.org Sun May 8 08:24:44 2016 From: iang at iang.org (ianG) Date: Sun, 8 May 2016 22:24:44 +1000 Subject: [Cryptography] russian spies using steganography? Message-ID: <572F300C.5010106@iang.org> http://www.theguardian.com/world/2016/may/07/discovered-our-parents-were-russian-spies-tim-alex-foley?CMP=share_btn_tw Bezrukov and Vavilova communicated with the SVR using digital steganography: they would post images online that contained messages hidden in the pixels, encoded using an algorithm written for them by the SVR. A message the FBI believes was sent in 2007 to Bezrukov by SVR headquarters was decoded as follows: “Got your note and signal. No info in our files about E.F., BT, DK, RR. Agree with your proposal to use ‘Farmer’ to start building network of students in DC. Your relationship with ‘Parrot’ looks very promising as a valid source of info from US power circles. 
To start working on him professionally we need all available details on his background, current position, habits, contacts, opportunities, etc.” From kevin.w.wall at gmail.com Sun May 8 17:15:12 2016 From: kevin.w.wall at gmail.com (Kevin W. Wall) Date: Sun, 8 May 2016 17:15:12 -0400 Subject: [Cryptography] russian spies using steganography? In-Reply-To: <572F300C.5010106@iang.org> References: <572F300C.5010106@iang.org> Message-ID: On Sun, May 8, 2016 at 8:24 AM, ianG wrote: > http://www.theguardian.com/world/2016/may/07/discovered-our-parents-were-russian-spies-tim-alex-foley?CMP=share_btn_tw > > Bezrukov and Vavilova communicated with the SVR using digital steganography: > they would post images online that contained messages hidden in the pixels, > encoded using an algorithm written for them by the SVR. A message the FBI > believes was sent in 2007 to Bezrukov by SVR headquarters was decoded as > follows: “Got your note and signal. No info in our files about E.F., BT, DK, > RR. Agree with your proposal to use ‘Farmer’ to start building network of [rest of message deleted] Dumb question....if this was done in 2007, why not encrypt a short message that is itself a URL shortener (e.g., bit.ly) and embed THAT encrypted URL into the image and then have the spies retrieve the encrypted URL, decrypt it, and then use the URL to retrieve the actual message (which could require authentication or itself be encrypted). That seems like it would be a lot more secure and could be built into the software that was allowing them to retrieve the embedded hidden message in the first place. And if it seems obvious to me, surely I would have thought a spy agency would have thought of it. -kevin -- Blog: http://off-the-wall-security.blogspot.com/ | Twitter: @KevinWWall NSA: All your crypto bit are belong to us. 
From adam at cypherspace.org Sun May 8 19:54:54 2016 From: adam at cypherspace.org (Adam Back) Date: Mon, 9 May 2016 00:54:54 +0100 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> References: <25BD23E4-2962-45D0-AB40-1D1B4A287431@flownet.com> Message-ID: On 7 May 2016 at 22:30, Ron Garret wrote: > No, that’s not true either. Ed25519 is not merely ECDSA with a specified nonce, it has structural changes from ECDSA specifically to prevent the kind of attack you are suggesting. The message content is hashed twice, once to produce the nonce, and again with the secret key as a prefix to produce the signature. Not only does this prevent malleability attacks, but it also protects against collisions in the underlying hash. Ed25519 (which I believe denotes EdDSA with Edwards 25519 curve) is actually Elliptic Curve Schnorr and not DSA at all. It does however use the analog of RFC6979 though much simpler. Neither is protected against signer malleability because the use of the deterministic nonce is not detectable to the verifier in either case. On 7 May 2016 at 06:25, Tony Arcieri wrote: > Interesting sidebar: ECDSA nonces were one of the sources of Bitcoin's > transaction malleability. The (massive pile of hacks that is) segregated > witness feature being added to Bitcoin has an added side effect of removing > signatures from the hash of a transaction, and with it the associated > malleability. I consider Segregated Witness quite elegant and the robust solution to malleability (which extends beyond signatures). The best way to avoid malleability is to omit the Script Signature from the hash which forms the transaction ID - that is what Segregated Witness does. The other changes it introduces are architecturally quite useful. 
Adam From huitema at huitema.net Sun May 8 19:12:01 2016 From: huitema at huitema.net (Christian Huitema) Date: Sun, 8 May 2016 16:12:01 -0700 Subject: [Cryptography] russian spies using steganography? In-Reply-To: References: <572F300C.5010106@iang.org> Message-ID: <036d01d1a97f$0862abc0$19280340$@huitema.net> On Sunday, May 8, 2016 2:15 PM, Kevin W. Wall wrote: > > On Sun, May 8, 2016 at 8:24 AM, ianG wrote: > > http://www.theguardian.com/world/2016/may/07/discovered-our-parents- > were-russian-spies-tim-alex-foley?CMP=share_btn_tw > > > > Bezrukov and Vavilova communicated with the SVR using digital > steganography: > > they would post images online that contained messages hidden in the > pixels, > > encoded using an algorithm written for them by the SVR.... > > Dumb question....if this was done in 2007, why not encrypt a short message > that is itself a URL shortener (e.g., bit.ly) and embed THAT encrypted URL into > the image and then have the spies retrieve the encrypted URL, decrypt it, > and then use the URL to retrieve the actual message (which could require > authentication or itself be encrypted). That seems like it would be > a lot more secure and could be built into the software that was > allowing them to retrieve the embedded hidden message in the first place. > And if it seems obvious to me, surely I would have thought a spy agency > would have thought of it. The "URL" variation would have the interest of minimizing the length of the message. It has some obvious drawbacks, if access to shortened URL is somehow monitored by the adversaries. But using few bits is a very good thing. Of course, the dumb way is to put the information in the least significant bit of every pixel, and that gives plenty of bits. But that is very easy to detect, in particular because it is not robust to a simple decompression/recompression. If you want that kind of robustness, you are in the domain of "robust undetectable watermarks," and the bandwidth is very limited. 
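[Ed. note: the "dumb way" Christian describes is only a few lines of code. A sketch over raw 8-bit sample values - no image library; in a real image these would be pixel channel bytes - which also makes the fragility obvious: any recompression rewrites exactly the low bits the payload lives in:]

```python
def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload, MSB-first, in the least significant bit of each sample."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear LSB, set it to the payload bit
    return out

def extract_lsb(pixels: bytes, nbytes: int) -> bytes:
    """Read nbytes back out of the low bits, MSB-first."""
    out = bytearray()
    for j in range(nbytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[8 * j + i] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 4            # stand-in for pixel data
stego = embed_lsb(cover, b"Got your note")
assert extract_lsb(stego, 13) == b"Got your note"
```

Each sample changes by at most 1, which is why the statistics of the LSB plane (and a decompress/recompress cycle) give the channel away so easily.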
-- Christian Huitema From kentborg at borg.org Sun May 8 18:33:24 2016 From: kentborg at borg.org (Kent Borg) Date: Sun, 8 May 2016 18:33:24 -0400 Subject: [Cryptography] russian spies using steganography? In-Reply-To: References: <572F300C.5010106@iang.org> Message-ID: <572FBEB4.4000801@borg.org> On 05/08/2016 05:15 PM, Kevin W. Wall wrote: > Dumb question....if this was done in 2007, why not encrypt a short > message that is itself a URL shortener (e.g., bit.ly) and embed THAT > encrypted URL into the image Maybe because the NSA watches everything that goes into and out of bit.ly. I have a different question: Shouldn't steganography always be preceded with an encryption that outputs something indistinguishable from noise? (That is, no file format that frames the "secure" part.) Don't expect the steganography to obscure perfectly well, help it out by feeding it input that looks much like noise! -kb From pete at petertodd.org Sun May 8 15:39:18 2016 From: pete at petertodd.org (Peter Todd) Date: Sun, 8 May 2016 15:39:18 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> <1BAB8392-3AFE-4D07-9B09-52BD4CEC575E@flownet.com> Message-ID: <20160508193918.GA9047@fedora-21-dvm> On Fri, May 06, 2016 at 10:25:25PM -0700, Tony Arcieri wrote: > On Fri, May 6, 2016 at 12:06 PM, Ron Garret wrote: > > > > But with all forms of DH based signatures, a random number is generated > > and that affects the signature value. In effect, every signature has a salt > > value. > > > Interesting sidebar: ECDSA nonces were one of the sources of Bitcoin's > transaction malleability. The (massive pile of hacks that is) segregated > witness feature being added to Bitcoin has an added side effect of removing > signatures from the hash of a transaction, and with it the associated > malleability. > > All that said, if you're designing a new system today, pick Ed25519. 
While ECDSA nonces are a "source" of Bitcoin tx malleability, they aren't a source that can be fixed, even by Ed25519, because you can't force the signer to use the deterministic signing method vs. using another number; if Bitcoin had used Ed25519 from the start we would still have a signature malleability problem. -- https://petertodd.org 'peter'[:-1]@petertodd.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Digital signature URL: From kentborg at borg.org Sun May 8 21:18:45 2016 From: kentborg at borg.org (Kent Borg) Date: Sun, 8 May 2016 21:18:45 -0400 Subject: [Cryptography] russian spies using steganography? In-Reply-To: <036d01d1a97f$0862abc0$19280340$@huitema.net> References: <572F300C.5010106@iang.org> <036d01d1a97f$0862abc0$19280340$@huitema.net> Message-ID: <572FE575.3020707@borg.org> A steganographic channel I have long thought is under appreciated is spam. I guess the flaw here is the real senders of spam seem to be pretty few (I see it come in in batches, some one person pushed a button, I see the results, not a lot of little mom-and-pop retail spam senders). Also, unfortunately to the spies, gmail does such a good job of filtering spam that who sees all that spam anymore? Twitter also seems an interesting one-to-many channel. But I guess that depends on Twitter not letting the TLAs into their network, and not letting them carefully monitor all the comings and goings. -kb, the Kent who runs his own e-mail server, so all his spam does make it all the way to his computer--he doesn't know how the rest live in the luxury of gmail. P.S. I am convinced there are only a few big spammers, "Who will rid me of this troublesome priest?", is a quote that comes to mind... -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kentborg at borg.org Sun May 8 21:51:33 2016 From: kentborg at borg.org (Kent Borg) Date: Sun, 8 May 2016 21:51:33 -0400 Subject: [Cryptography] russian spies using steganography? In-Reply-To: <036d01d1a97f$0862abc0$19280340$@huitema.net> References: <572F300C.5010106@iang.org> <036d01d1a97f$0862abc0$19280340$@huitema.net> Message-ID: <572FED25.50101@borg.org> On 05/08/2016 07:12 PM, Christian Huitema wrote: > The "URL" variation would have the interest of minimizing the length > of the message. It has some obvious drawbacks, if access to shortened > URL is somehow monitored by the adversaries. Another problem with shortened URLs is that where regular URLs might be thought of as a sparse address space (though /very/ non-uniform) shortened URLs are very dense address spaces. Try one that is real, then increment and see what you get. Some juicy stuff is stashed in there. -kb, the Kent who actually did some let's-explore-the-space playing years before he (last week-ish) saw--and appreciated--the story on the security risk there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Sun May 8 23:57:36 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Sun, 08 May 2016 20:57:36 -0700 Subject: [Cryptography] russian spies using steganography? In-Reply-To: <572FE575.3020707@borg.org> References: <572F300C.5010106@iang.org> <036d01d1a97f$0862abc0$19280340$@huitema.net> <572FE575.3020707@borg.org> Message-ID: At 06:18 PM 5/8/2016, Kent Borg wrote: >A steganographic channel I have long thought is under appreciated is spam. I guess the flaw here is the real senders of spam seem to be pretty few (I see it come in in batches, some one person pushed a button, I see the results, not a lot of little mom-and-pop retail spam senders). Also, unfortunately to the spies, gmail does such a good job of filtering spam that who sees all that spam anymore? > >Twitter also seems an interesting one-to-many channel. 
But I guess that depends on Twitter not letting the TLAs into their network, and not letting them carefully monitor all the comings and goings. Intel agencies & LEO's always take advantage of any "noise" and/or chaos in the system to hide under (reminiscent of spread spectrum radio, where the signal hides many dB down in the noise). One problem with "broken window" policing is that at some point *all* of the broken windows & other crimes are committed by undercover operatives and/or informants; e.g., in the 1950's, supposedly >50% of the dues to the American Communist Party were paid by FBI/CIA informants. Supposedly, the biggest fallout from the OPM hack has been to take 20+ million people off the eligible rolls for "secret"/undercover agents, because all their bio info -- including their fingerprints -- has been compromised. The FBI has had a number of money-laundering stings which involved setting up fake banks around the world. I believe that NPR recently
These kinds of operations only work > if there are already a lot of other shady operators for the FBI to > hide among. It was not the FBI, it was the DEA and the IRS.  The NPR episode aired recently, but it was a re-broadcast of a 2012 episode of Planet Money: http://www.npr.org/sections/money/2016/03/09/469674457/episode-418-how- the-government-set-up-a-fake-bank-to-launder-drug-money So, yeah, we need to start thinking about the capability of any government agency to go undercover, not just the FBI/CIA. -- Rui Paulo From stick at gk2.sk Mon May 9 10:17:28 2016 From: stick at gk2.sk (Pavol Rusnak) Date: Mon, 9 May 2016 16:17:28 +0200 Subject: [Cryptography] Show Crypto: prototype USB HSM In-Reply-To: AD09C42C-B713-4E49-A3F6-92D4E1D4EF03@flownet.com Message-ID: <57309BF8.10308@gk2.sk> Hi everybody! TREZOR co-author here. I saw a lot of questions popping up in this thread that we've already solved during our development. I recommend checking our FAQ: https://doc.satoshilabs.com/trezor-faq/ or User manual: https://doc.satoshilabs.com/trezor-user/ or watching the TREZOR talk I had at Chaos Communication Congress: https://www.youtube.com/watch?v=CgaBKNus1n0 To see how we solved the issues. Also feel free to contact me on-list or off-list if you have more questions. -- Best Regards / S pozdravom, Pavol Rusnak From ben at links.org Tue May 10 05:25:58 2016 From: ben at links.org (Ben Laurie) Date: Tue, 10 May 2016 10:25:58 +0100 Subject: [Cryptography] [cryptography] Show Crypto: prototype USB HSM In-Reply-To: <47F4A807-0289-4F3A-99ED-3BB026568F0D@lrw.com> References: <610F53D1-6111-46DF-8945-83D407CDF65A@flownet.com> <570E0C14.2060204@connotech.com> <47F4A807-0289-4F3A-99ED-3BB026568F0D@lrw.com> Message-ID: On 14 April 2016 at 00:16, Jerry Leichter wrote: >>> Yes, make it significantly smaller than the current form factor. >> >> Ah. OK, well, that is certainly doable, though how small you can make it is ultimately limited by the size of the display. 
How small do you want it, and how much are you willing to pay? > I wonder if one could get rid of the display per se and add some kind of MEMS steerable laser to it. The output would be projected onto some nearby surface. This could be physically much smaller. > > People have built "virtual keyboards" using this idea -- here's a random one: http://www.amazon.com/AGS-Wireless-Projection-Bluetooth-Smartphone/dp/B00MR26TUO/ref=sr_1_1?ie=UTF8&qid=1460589277&sr=8-1&keywords=laser+projector+keyboard Oh no they haven't - that's simply projecting a static image, it's not steerable. From hbaker1 at pipeline.com Wed May 11 15:30:42 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 11 May 2016 12:30:42 -0700 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto Message-ID: FYI -- https://motherboard.vice.com/read/the-second-amendment-case-for-the-right-to-bear-encryption The Second Amendment Case for the Right to Bear Crypto Written by Susan McGregor May 11, 2016 // 05:00 AM EST On November 9, 1994, an American software engineer named Philip Zimmermann was detained by customs agents in Dulles International Airport as he returned from a speaking engagement in Europe. His luggage was searched and he was interrogated at length regarding his possible illegal export of "dangerous munitions." Though Zimmermann was carrying no guns, bombs, or chemical agents, he was carrying one item considered a weapon in the eyes of the US government: the strong cryptographic software of his own making known as "Pretty Good Privacy," or PGP. While today it may seem surprising that software like PGP was ever considered a weapon, the US government has long viewed strong crypto--typically any encryption mechanism that cannot be bypassed efficiently--as a dangerous technology in civilian hands. 
Legally, in fact, the right of individuals to strong cryptographic technology has never been affirmed, even as privacy and surveillance concerns have prompted companies like Google, Apple and, more recently, WhatsApp and WordPress, to encrypt their devices and platforms by default. Thanks to the "Crypto Wars" of the 1990s, legal scholars have debated the ways in which cryptographic research and technology might qualify for constitutional protection. Typically, however, these reflections have focused on interpretations of the First Amendment and the Fifth Amendment: the First, through the reasoning that code is speech, and the Fifth for its particular protection of the "liberty" to pursue one's chosen profession. http://readingroom.law.gsu.edu/cgi/viewcontent.cgi?article=2264&context=gsulr Yet the federal government's own decision to regard encryption technology as a weapon seems to suggest another constitutional lens: the Second Amendment, via the "right to bear arms." *** In the United States, restrictions on non-governmental uses of cryptography go back to at least 1977, when a member of the National Security Agency sent a letter to the IEEE warning that some of the material to be presented at a Cornell cryptography conference might run afoul of weapons export regulations. http://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=1136&context=cilj At the time, even the concepts of strong crypto appeared on the Munitions List of the International Traffic in Arms Regulations or ITAR, which govern US weapons exports. http://readingroom.law.gsu.edu/cgi/viewcontent.cgi?article=2264&context=gsulr In fact, it was not until almost two decades later that the US began to move some of the most common encryption technologies off the Munitions List. Without these changes, it would have been virtually impossible to secure commercial transactions online, stifling the then-nascent internet economy. 
http://www.crypto.com/papers/policy.txt Additional regulatory changes in the early 2000s further relaxed the export restrictions, making the use of PGP and other open-source software legal for individuals to transport and use. But legal proceedings over the right to encrypt have been largely inconclusive. For example, the three-year investigation of Zimmermann was eventually dropped, but without explanation. And while in 1999 researcher Daniel Bernstein secured a win against the Department of Justice in the 9th circuit, a series of legal technicalities accompanied by a big change in ITAR meant that the case was ultimately dismissed in 2003 without being decided. https://www.philzimmermann.com/EN/news/PRZ_case_dropped.html Meanwhile, "Software (including their cryptographic interfaces) capable of maintaining secrecy or confidentiality of information or information systems" remains on the ITAR Munitions List today--and the export of more sophisticated encryption software is still subject to both government oversight and a complex licensing process. http://www.ecfr.gov/cgi-bin/text-idx?node=pt22.1.121#se22.1.121_11 *** So what are the chances of encryption technologies being viewed as a "bearable arm" under the Second Amendment? "It's an interesting argument," says Mike McLively, a staff attorney at the Law Center to Prevent Gun Violence and a Second Amendment expert. "You can make the case, certainly." Doing so is no simple task, however. According to McLively, 94 percent of the more than 1000 Second Amendment infringement suits brought since the landmark District of Columbia v. Heller case in 2008 have been rejected. As always in law, details matter. "It would depend on which technology you're talking about," says McLively. "Is it on everyone's phone, for example? Is it commonly used for self-defense, not just to defend your information? I think that the Court would be more inclined to say that the Second Amendment is for protecting your physical person." 
Of course, for the millions of people who currently encrypt their phones, adequately protecting the sensitive data they contain is absolutely a form of physical protection. A weakly encrypted phone, if lost or stolen, is a Pandora's Box of dangerously personal information: names, addresses, contacts, and photographs, to say nothing of the detailed calendar and appointment information that can act as a map to an individual's daily activities, or those of their children. Likewise, with the increasing prevalence of app-based "smart devices" for the home, including security systems, a poorly encrypted phone is essentially a remote-controlled index to doing one physical harm. https://canary.is/ *** Typically, of course, the "arms" considered for Second Amendment protection have been traditional firearms. Last month, however, the Supreme Court rejected the state's arguments in the case of Caetano v Massachusetts, in which the state held that the plaintiff did not have a Second Amendment right to a "stun" gun, in part because it was "a thoroughly modern invention." In a per curiam decision, however, the Supreme Court sent the case back to the state, with the forceful assertion that "the Second Amendment extends, prima facie, to all instruments that constitute bearable arms, even those that were not in existence at the time of the founding." Even this inclusive view of the Second Amendment, however, does not preclude limitations on the types of weapons that individuals can use. As McLively points out, "The Second Amendment doesn't protect the right to pick the gun we want." Does that mean that the government could eventually mandate the use of only the "smart" guns described in President Obama's executive order earlier this year? Could the right to bear encryption be limited to only allow encryption that allows "exceptional access" for law enforcement, as the recent Burr-Feinstein bill would require? 
https://www.whitehouse.gov/the-press-office/2016/01/04/fact-sheet-new-executive-actions-reduce-gun-violence-and-make-our http://www.feinstein.senate.gov/public/index.cfm?a=files.serve&File_id=5B990532-CC7F-427F-9942-559E73EB8BFB "If it's reliable and it works," says McLively, "all the government would have to do is show that access to only smart guns is just as respectful of self-defense." The problem, of course, is that to date neither smart gun nor "exceptional access" encryption technologies have been able to meet that challenge of being "just as respectful of self-defense." Despite decades of research and development on both fronts, smart guns "can and will be jailbroken," as Ars Technica co-founder Jon Stokes put it in an LA Times op-ed in January, and encryption that allows access for law enforcement is no longer protective. A group of prominent computer security specialists--among them Whitfield Diffie, an author of an essential protocol for securing internet connections--recently detailed why "exceptional access" to encryption technologies is no more tenable today than it was 20 years ago. http://www.latimes.com/opinion/op-ed/la-oe-0117-stokes-smart-gun-problems-20160117-story.html http://cybersecurity.oxfordjournals.org/content/cybers/1/1/69.full.pdf *** Considering crypto under the Second Amendment is more than a semantic trick. It can help shed light on whether--and where--appropriate limits on individuals' right to encryption should lie. Traditional firearms and encryption technologies both have the capacity to protect and destroy; and both can be put to both lawful and criminal use. At SXSW this year, President Obama asserted that technologies like strong encryption "can empower folks who are very dangerous to spread dangerous messages," and there is little doubt that encryption technologies, like guns, can be used in dangerous ways. 
http://techcrunch.com/2016/03/11/obama-sxsw/ But of course most of the technologies protected by the Second Amendment are inherently dangerous--otherwise they wouldn't be much good for self-defense. Sensibly, then, the test for whether or not a weapon is protected by the Second Amendment does not rest on whether or not it is dangerous. Instead, it is the dominant application of a technology that affects its eligibility for constitutional protection. As Justices Alito and Thomas wrote in their Caetano opinion, "the relative dangerousness of a weapon is irrelevant when the weapon belongs to a class of arms commonly used for lawful purposes." So unless online banking is suddenly outlawed, the "dangerous" uses of strong encryption are somewhat beside the point. http://www.supremecourt.gov/opinions/15pdf/14-10078_aplc.pdf Whether "weapons of offense, or armor of defense"--whether firearms or encryption technologies--the Second Amendment "extends...to all instruments that constitute bearable arms." https://www.law.cornell.edu/supct/html/07-290.ZO.html In the fight to protect ourselves from having our medical records altered through identity theft or our physical whereabouts tracked from a stolen phone or laptop, the stakes can truly be life or death. When it comes to self-defense in the digital age, strong encryption is the only weapon we have--we need to protect it. 
http://www.wsj.com/articles/how-identity-theft-sticks-you-with-hospital-bills-1438966007 Topics: encryption, crypto, right to bear arms, second amendment, opinion From xander.sherry at gmail.com Wed May 11 22:47:33 2016 From: xander.sherry at gmail.com (Xander Sherry) Date: Wed, 11 May 2016 22:47:33 -0400 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: On Wed, May 11, 2016 at 3:30 PM, Henry Baker wrote: > > The Second Amendment Case for the Right to Bear Crypto > > Written by Susan McGregor > > May 11, 2016 // 05:00 AM EST > > On November 9, 1994, an American software engineer named Philip Zimmermann > was detained by customs agents in Dulles International Airport as he > returned from a speaking engagement in Europe. > > His luggage was searched and he was interrogated at length regarding his > possible illegal export of "dangerous munitions." > > Though Zimmermann was carrying no guns, bombs, or chemical agents, he was > carrying one item considered a weapon in the eyes of the US government: the > strong cryptographic software of his own making known as "Pretty Good > Privacy," or PGP. > > While today it may seem surprising that software like PGP was ever > considered a weapon, the US government has long viewed strong > crypto--typically any encryption mechanism that cannot be bypassed > efficiently--as a dangerous technology in civilian hands. > > Legally, in fact, the right of individuals to strong cryptographic > technology has never been affirmed, even as privacy and surveillance > concerns have prompted companies like Google, Apple and, more recently, > WhatsApp and WordPress, to encrypt their devices and platforms by default. > > Thanks to the "Crypto Wars" of the 1990s, legal scholars have debated the > ways in which cryptographic research and technology might qualify for > constitutional protection. 
Typically, however, these reflections have > focused on interpretations of the First Amendment and the Fifth Amendment: > the First, through the reasoning that code is speech, and the Fifth for its > particular protection of the "liberty" to pursue one's chosen profession. > > > http://readingroom.law.gsu.edu/cgi/viewcontent.cgi?article=2264&context=gsulr > > Yet the federal government's own decision to regard encryption technology > as a weapon seems to suggest another constitutional lens: the Second > Amendment, via the "right to bear arms." > > I believe it is a mischaracterization to try to equate crypto with a firearm. As munitions go, crypto is *not* like a firearm. The definition of munition is exceptionally broad, ranging from the weapons used to execute war, to anything that may be used to sustain and maintain waging war. Under this broad definition, crypto is indeed arguably a munition. So, arguably, is oil. And food. And water. However, there is a definite line between weapons like firearms, or artillery shells, or nuclear weapons and crypto. The former are useful when employed to actively attack and harm an enemy. A stun gun, as was an example in the article, is also used in this way. Crypto is not. Crypto is designed to be used to protect *against* harm from an enemy, more like body armor, or an armored car. In the software world, it would be reasonable, possibly, to compare an exploit to a weapon like a firearm. Or more of a stretch, a DDoS vector, perhaps. To argue for 2nd Amendment protections for crypto, however, is to buy into the argument that it is indeed a weapon that is designed and intended to be most effectively used for harm (even in defense) and that simply is not the case. The additional confusion that would be caused by accepting that position and advocating for the rights of crypto-the-weapon might well do more harm than good. Best regards, -Xander Sherry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phill at hallambaker.com Thu May 12 09:09:26 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Thu, 12 May 2016 09:09:26 -0400 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: Never use the second amendment when you can use the first. The interpretation of both amendments has changed over time. Less than a hundred years ago, Eugene Debs was in prison in the US for exercising his free speech right. But there is a strong consensus on the first amendment that is really not under serious challenge in any part of the establishment. Politicians and judges agree on an interpretation that is very close to being absolute. The current SCOTUS interpretation of the second amendment is very recent, it was handed down by the Rehnquist court and even that interpretation is far from absolute. Even Scalia probably wouldn't have found for a personal right to have hand grenades. And what is coming with DIY drones carrying IEDs is likely to scare future courts into being very very restrictive indeed. Cryptography was being regulated as a munition during the Rehnquist era. The case was won on first amendment, free speech grounds. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Thu May 12 09:44:24 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 12 May 2016 06:44:24 -0700 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: At 06:09 AM 5/12/2016, Phillip Hallam-Baker wrote: >Never use the second amendment when you can use the first. > >The interpretation of both amendments has changed over time. Less than a hundred years ago, Eugene Debs was in prison in the US for exercising his free speech right. But there is a strong consensus on the first amendment that is really not under serious challenge in any part of the establishment. 
Politicians and judges agree on an interpretation that is very close to being absolute. > >The current SCOTUS interpretation of the second amendment is very recent, it was handed down by the Rehnquist court and even that interpretation is far from absolute. Even Scalia probably wouldn't have found for a personal right to have hand grenades. And what is coming with DIY drones carrying IEDs is likely to scare future courts into being very very restrictive indeed. > >Cryptography was being regulated as a munition during the Rehnquist era. The case was won on first amendment, free speech grounds. Yes, but... Since ancient times, *defensive* systems like shields, chain mail, body armor, etc. have been considered "arms" & "armament". So far as I know, there is *no* law prohibiting anyone in the U.S. from purchasing a bulletproof car with bulletproof windows. I know of one Mercedes dealer in Los Angeles that does indeed sell such cars. (These cars run up against another problem: in some cases, they weigh more than 6000 pounds, and are therefore no longer considered "cars", but that's a different issue. These cars cannot have blacked out front windows (in California), which may be a privacy violation, but once again that's a different issue.) I'm not aware of any law prohibiting me from wearing a "bulletproof" vest, although I understand they are bulky, uncomfortable and hot. Suppose that there were much better "bulletproof" armor which was ubiquitously built into the clothing that we all wear. The LEO's would freak out (just like they did after the "North Hollywood shootout" in 1997 in Los Angeles [1]), but -- as a public policy matter -- wouldn't everyone wearing such *armor* be a much better use of the 2nd Amendment than everyone carrying *offensive* weapons? When was the last time in the entire history of the universe where kids were killed in a shooting in which the bullets were random numbers? So I think that the Second Amendment *does* cover *defensive* armor. 
[1] https://en.wikipedia.org/wiki/North_Hollywood_shootout From jm at porup.com Thu May 12 13:20:48 2016 From: jm at porup.com (J.M. Porup) Date: Thu, 12 May 2016 13:20:48 -0400 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: <20160512172048.GD1537@fedora-21-dvm> On Thu, May 12, 2016 at 06:44:24AM -0700, Henry Baker wrote: > Suppose that there were much better "bulletproof" armor which was ubiquitously built into the clothing that we all wear. The LEO's would freak out (just like they did after the "North Hollywood shootout" in 1997 in Los Angeles [1]), but -- as a public policy matter -- wouldn't everyone wearing such *armor* be a much better use of the 2nd Amendment than everyone carrying *offensive* weapons? Tailor Miguel Caballero of Bogotá, Colombia sells bespoke bulletproof clothing to people who can afford it: https://en.wikipedia.org/wiki/Miguel_Caballero_%28company%29 http://www.theguardian.com/world/2008/nov/21/colombia-fashion http://colombiareports.com/miguelo-romano-fashion-flaunt-protect/ One wonders when the price will drop and everyone can afford to wear such clothing. Not pie-in-the-sky, surely? jmp From fungi at yuggoth.org Thu May 12 14:30:07 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 May 2016 18:30:07 +0000 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: <20160512183007.GH15295@yuggoth.org> On 2016-05-12 06:44:24 -0700 (-0700), Henry Baker wrote: [...] > I'm not aware of any law prohibiting me from wearing a > "bulletproof" vest, although I understand they are bulky, > uncomfortable and hot. [...] It can be illegal in parts of the USA depending on your criminal conviction history. Also there's still H.R. 378[*] (Responsible Body Armor Possession Act) up for consideration, though it's not been visibly active for over a year and will hopefully die in committee. 
[*] https://www.congress.gov/bill/114th-congress/house-bill/378/text -- Jeremy Stanley From johnl at iecc.com Thu May 12 14:46:10 2016 From: johnl at iecc.com (John Levine) Date: 12 May 2016 18:46:10 -0000 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: Message-ID: <20160512184610.22227.qmail@ary.lan> >So far as I know, there is *no* law prohibiting anyone in the U.S. from purchasing a bulletproof car with bulletproof >windows. You're probably right, but that only tells us that the government hasn't regulated them, not that it can't. Historically, the 2nd amendment was interpreted to refer to the state militias, i.e. the National Guard. In recent years the revisionist insurrectionist theory has become popular, and it's been interpreted to refer to personal ownership of some set of weapons. The exact boundaries of what weapons are included remain fuzzy; shotguns are pretty clearly included, machine guns and nuclear weapons are not. I'm not aware of any 2nd amendment cases where the "arms" weren't conventional guns, or maybe knives. I understand the metaphorical appeal of applying it to crypto software, but I think it'd be very tough to sell it to a judge. R's, John From hbaker1 at pipeline.com Thu May 12 15:14:06 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 12 May 2016 12:14:06 -0700 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: <20160512184610.22227.qmail@ary.lan> References: <20160512184610.22227.qmail@ary.lan> Message-ID: At 11:46 AM 5/12/2016, John Levine wrote: >>So far as I know, there is *no* law prohibiting anyone in the U.S. from purchasing a bulletproof car with bulletproof >>windows. > >You're probably right, but that only tells us that the government hasn't regulated them, >not that it can't. > >Historically, the 2nd amendment was interpreted to refer to the state militias, i.e. the National >Guard. 
In recent years the revisionist insurrectionist theory has become popular, and it's >been interpreted to refer to personal ownership of some set of weapons. The exact boundaries >of what weapons are included remain fuzzy; shotguns are pretty clearly included, machine guns >and nuclear weapons are not. > >I'm not aware of any 2nd amendment cases where the "arms" weren't conventional guns, or maybe >knives. I understand the metaphorical appeal of applying it to crypto software, but I think >it'd be very tough to sell it to a judge. It takes *years*/*generations* to change people's attitudes towards laws & interpretations of the Constitution. Even though "privacy", per se, never shows up in the Constitution, it is implicit in a number of the Amendments, and has been crystallized during the past 70 years in quite a number of decisions. Achieving this goal required the efforts of huge numbers of people over many decades. Ditto for gay rights, and ditto for Second Amendment rights. All this means is that we in the encryption community have significant work to do, which will probably take the rest of our lives to gain enough traction. Ditto for using the Third Amendment to argue against govt backdoors and implants in our devices (and soon, ourselves!). No, it hasn't been used for this purpose in the past, but if you look to the *rationale* for the 3rd Amendment, it would certainly seem (at least to me) to apply to stationing appendages of the govt in my digital devices and networks. As everything becomes digitalized, the importance of the First Amendment becomes stronger and stronger -- e.g., when I can download and 3D print a gun in my own home, the First Amendment starts to subsume the Second Amendment. 
From leichter at lrw.com Thu May 12 15:59:13 2016 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 12 May 2016 15:59:13 -0400 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: References: Message-ID: > So far as I know, there is *no* law prohibiting anyone in the U.S. from purchasing a bulletproof car with bulletproof windows.... I can't speak for bulletproof *cars* - didn't bother to do the simple Google search - but bulletproof *vests* are, in fact, regulated. From http://www.criminaldefenselawyer.com/resources/criminal-defense/criminal-offense/when-its-illegal-to-own-a-bullet-proof-vest : "Under federal law, a bulletproof vest is considered “body armor,” which is regulated by statute, 18 U.S.C.A. Section 931. That law forbids anyone convicted of a violent felony to own or possess a vest, unless the person wearing the vest is an employee who is doing so in order to perform a lawful business activity and who has obtained prior written certification from the employer. A violation incurs a maximum of three years in prison. And using a vest during the commission of a federal crime of violence or a federal drug-trafficking crime will result in an enhanced sentence. (42 U.S.C. Section 3796ll-3(d)(1).) The federal law has been challenged on several grounds, all of them unsuccessfully." (The last of the challenges discussed was on Second Amendment grounds.) The states also have regulations: "A few states prohibit the use or possession in specified situations or circumstances, without regard to the criminal background of the wearer. One state prohibits wearing armor on school property or school-sponsored functions (Louisiana), while in Connecticut, sale of body armor must be done in person—Internet and phone purchases are illegal." We now return you to our regular fact-based discussions.... 
-- Jerry From ron at flownet.com Thu May 12 16:40:12 2016 From: ron at flownet.com (Ron Garret) Date: Thu, 12 May 2016 13:40:12 -0700 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: <20160512184610.22227.qmail@ary.lan> References: <20160512184610.22227.qmail@ary.lan> Message-ID: On May 12, 2016, at 11:46 AM, John Levine wrote: > Historically, the 2nd amendment was interpreted to refer to the state militias, i.e. the National > Guard. In recent years the revisionist insurrectionist theory has become popular, and it's > been interpreted to refer to personal ownership of some set of weapons. It’s done more than become popular, it is the law of the land: https://en.wikipedia.org/wiki/District_of_Columbia_v._Heller https://en.wikipedia.org/wiki/McDonald_v._City_of_Chicago Personally, I think the supreme court got this right. The structure of the 2nd amendment is, “Because A, therefore B.” The argument that because A is no longer applicable in today’s world then neither is B makes the tacit assumption that A is the *only* justification for B. I see no basis for making that leap. If the American people want to clarify the matter (or change it outright) they can amend the Constitution. But polls show an overwhelming majority supporting the Heller decision. 
rg From leichter at lrw.com Thu May 12 16:57:55 2016 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 12 May 2016 16:57:55 -0400 Subject: [Cryptography] "Chinese ARM vendor left developer backdoor in kernel for Android, “Pi” devices" Message-ID: <069A1DFA-D74B-456A-917C-35EEE655C512@lrw.com> From Ars Technica (http://arstechnica.com/security/2016/05/chinese-arm-vendor-left-developer-backdoor-in-kernel-for-android-pi-devices/) Allwinner, a Chinese system-on-a-chip company that makes the processor used in many low-cost Android tablets, set-top boxes, ARM-based PCs, and other devices, apparently shipped a version of its Linux kernel with a ridiculously easy-to-use backdoor built in. All any code needs to do to gain root access is send the text "rootmydevice" to an undocumented debugging process. The backdoor code may have inadvertently been left in the kernel after developers completed debugging. But the company has been less than transparent about it: information about the backdoor was released and then apparently deleted through Allwinner's own Github account. The kernel, linux-3.4-sunxi, which was originally developed to support Android on Allwinner's ARM processors for tablets, has also been used to develop a community version. The kernel was also the basis for porting over various versions of Linux to Allwinner's processors, which are used in the Orange Pi and Banana Pi micro-PCs (developer boards compatible with Raspberry Pi) along with a number of other devices. The way Allwinner has distributed its Linux kernel has been frustrating to many developers. The company has not encouraged or participated in community development and has been accused of numerous violations of the GPL license for the Linux kernel. The kernel "drops" by Allwinner include a number of binaries that are essentially closed source, as well as code released under other licenses—largely to support the graphics engines of its processors. 
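[Moderator's note aside, the "undocumented debugging process" mentioned above was, per public write-ups of the bug by the linux-sunxi community, a debug file under /proc; the exact path used below is taken from those reports and is an assumption here, not something verified against the kernel source. A minimal read-only check might look like this:]

```shell
#!/bin/sh
# Hedged sketch: reports of the Allwinner backdoor name the debug interface
# as /proc/sunxi_debug/sunxi_debug (path taken from those reports, assumed
# here, not verified). This script only CHECKS for its presence; the actual
# trigger is shown commented out and should not be run on hardware you
# care about.

SUNXI_DEBUG=/proc/sunxi_debug/sunxi_debug

if [ -e "$SUNXI_DEBUG" ]; then
    echo "sunxi_debug interface present: kernel likely carries the backdoor"
    # Reported trigger (the writing process becomes root on vulnerable kernels):
    # echo "rootmydevice" > "$SUNXI_DEBUG"
else
    echo "sunxi_debug interface not found"
fi
```

On anything other than an affected Allwinner device the script simply reports the interface as not found; the point is that no privilege, no exploit chain, and no secret beyond a magic string are needed on vulnerable kernels.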
-- Jerry From hbaker1 at pipeline.com Thu May 12 17:14:19 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 12 May 2016 14:14:19 -0700 Subject: [Cryptography] 2nd Amendment Case for the Right to Bear Crypto In-Reply-To: <2025885315.1261493.1463072559298.JavaMail.yahoo@mail.yahoo.com> References: <2025885315.1261493.1463072559298.JavaMail.yahoo@mail.yahoo.com> Message-ID: At 10:02 AM 5/12/2016, peter.thoenen at yahoo.com wrote: >> I'm not aware of any law prohibiting me from wearing a "bulletproof" vest, although I understand they are bulky, uncomfortable and hot. > >Certain states ban it if you are an ex-felon just FYI, and it's a sentencing enhancement in others. Hmmm... Based on the # of police shootings of more-or-less innocent and/or unarmed people in the last several years (e.g., Black Lives Matter), wearing body armor may soon become a necessity for all of us. Wearing body armor may at least preserve your life so you can appeal your conviction for wearing body armor. It's curious that wearing a motorcycle or bicycle *helmet* is *required by law*, while wearing *body armor* is *prohibited by law*. Given the current shortage of organ donors, perhaps the law should be flipped? BTW, the newest mountain biking & skiing body armor *inflates like an airbag*. Very cool! From fedor.brunner at azet.sk Fri May 13 10:39:04 2016 From: fedor.brunner at azet.sk (Fedor Brunner) Date: Fri, 13 May 2016 16:39:04 +0200 Subject: [Cryptography] NTRU Prime Message-ID: <5735E708.8000002@azet.sk> NTRU Prime, by Daniel J. 
Bernstein and Chitchanok Chuengsatiansup and Tanja Lange and Christine van Vredendaal https://ntruprime.cr.yp.to/ntruprime-20160511.pdf From tamzen at cannoy.org Fri May 13 13:32:29 2016 From: tamzen at cannoy.org (Tamzen Cannoy) Date: Fri, 13 May 2016 10:32:29 -0700 Subject: [Cryptography] Admin: Back on Track Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I’m halting the 2nd amendment discussions (including the one on body armor, etc) except where it pertains to Crypto. The discussion has wandered far afield. Thanks Tamzen -----BEGIN PGP SIGNATURE----- Version: PGP Universal 3.3.0 (Build 9060) Charset: utf-8 wj8DBQFXNg+u5/HCKu9Iqw4RAiEaAKDlcDGpPFe3MV/dvuePuKkSjqqH2gCbBoQx XTuAlqsk9C/y7WWwEyNQY6g= =59G7 -----END PGP SIGNATURE----- From grisu at guru.at Fri May 13 13:18:48 2016 From: grisu at guru.at (Christoph Gruber) Date: Fri, 13 May 2016 19:18:48 +0200 Subject: [Cryptography] DataGateKeeper: The FIRST Impenetrable Anti-Hacking Software by MyDataAngel.com, Inc. — Kickstarter Message-ID: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> Hi all! I just want to bring your attention to this project. If you want to collect some buzzwords for your next bullshit-bingo, here you will find them: > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description Looking forward to hearing your comments about this “impenetrable software” ROFL Regards -- Christoph Gruber By not reading this email you don't agree you're not in any way affiliated with any government, police, ANTI- Piracy Group, RIAA, MPAA, or any other related group, and that means that you CANNOT read this email. By reading you are not agreeing to these terms and you are violating code 431.322.12 of the Internet Privacy Act signed by Bill Clinton in 1995. 
(which doesn't exist) PGP-Key-ID: E7DC7F8D PGP-Key-Fingerprint: 2BFE 3BC7 848F D669 A823 1E19 7096 CCA5 E7DC 7F8D -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hettinga at gmail.com Thu May 12 18:49:33 2016 From: hettinga at gmail.com (Robert Hettinga) Date: Thu, 12 May 2016 18:49:33 -0400 Subject: [Cryptography] Proof-of-Satoshi fails Proof-of-Proof. In-Reply-To: <572A7B5E.20904@iang.org> References: <2CF6A1BE-9912-4FE6-98F4-202E539D9795@gmail.com> <572A7B5E.20904@iang.org> Message-ID: > On May 4, 2016, at 6:44 PM, ianG wrote: > > Keys can be lost. So, the dog ate his homework. Right. Pics or it didn’t happen. Cheers, RAH From natanael.l at gmail.com Fri May 13 14:45:10 2016 From: natanael.l at gmail.com (Natanael) Date: Fri, 13 May 2016 20:45:10 +0200 Subject: [Cryptography] =?utf-8?q?DataGateKeeper=3A_The_FIRST_Impenetrable?= =?utf-8?q?_Anti-Hacking_Software_by_MyDataAngel=2Ecom=2C_Inc=2E_?= =?utf-8?b?4oCUIEtpY2tzdGFydGVy?= In-Reply-To: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> References: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> Message-ID: Den 13 maj 2016 19:37 skrev "Christoph Gruber" : > > Hi all! > > I just want to bring your attention to this project. If you want to collect some buzzwords for your next bullshit-bingo, here you will find them: > > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description < https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description > > > Looking forward to hear you comments about this “impenetrable software” > ROFL Posted on Reddit here, with plenty of comments; http://www.reddit.com/r/crypto/comments/4j5yso/_/ (I'm one of the moderators of that sub) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pawel.veselov at gmail.com Fri May 13 19:04:17 2016 From: pawel.veselov at gmail.com (Pawel Veselov) Date: Fri, 13 May 2016 16:04:17 -0700 Subject: [Cryptography] =?utf-8?q?DataGateKeeper=3A_The_FIRST_Impenetrable?= =?utf-8?q?_Anti-Hacking_Software_by_MyDataAngel=2Ecom=2C_Inc=2E_?= =?utf-8?b?4oCUIEtpY2tzdGFydGVy?= In-Reply-To: References: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> Message-ID: On Fri, May 13, 2016 at 11:45 AM, Natanael wrote: > Den 13 maj 2016 19:37 skrev "Christoph Gruber" : > > > > Hi all! > > > > I just want to bring your attention to this project. If you want to > collect some buzzwords for your next bullshit-bingo, here you will find > them: > > > > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description > < > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description > > > > > > Looking forward to hear you comments about this “impenetrable software” > > ROFL > Posted on Reddit here, with plenty of comments; > > http://www.reddit.com/r/crypto/comments/4j5yso/_/ > I just reported this project to Kickstarter and encourage all to do the same. KS does suspend hot air projects. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.w.wall at gmail.com Sat May 14 00:37:49 2016 From: kevin.w.wall at gmail.com (Kevin W. Wall) Date: Sat, 14 May 2016 00:37:49 -0400 Subject: [Cryptography] =?utf-8?q?DataGateKeeper=3A_The_FIRST_Impenetrable?= =?utf-8?q?_Anti-Hacking_Software_by_MyDataAngel=2Ecom=2C_Inc=2E_?= =?utf-8?b?4oCUIEtpY2tzdGFydGVy?= In-Reply-To: References: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> Message-ID: On Fri, May 13, 2016 at 2:45 PM, Natanael wrote: > > Den 13 maj 2016 19:37 skrev "Christoph Gruber" : >> >> Hi all! >> >> I just want to bring your attention to this project. 
If you want to >> collect some buzzwords for your next bullshit-bingo, here you will find >> them: >> > >> > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description >> > >> >> Looking forward to hear you comments about this “impenetrable software” >> ROFL > > Posted on Reddit here, with plenty of comments; > > http://www.reddit.com/r/crypto/comments/4j5yso/_/ Darn; I'm *soooo* disappointed that I didn't know about this earlier. I would have loved to donate $1 so I could have received that "personalized hand written thank you note from Founding Data Angel Frankie. He’ll even throw in a handsome sticker of himself." Now I'll have to waste my time to track down his picture and create my own handsome sticker of him. Sigh. Oh well, they probably didn't tell us, but they probably encrypted the picture of Founding Data Angel Frankie anyway. -kevin -- Blog: http://off-the-wall-security.blogspot.com/ | Twitter: @KevinWWall NSA: All your crypto bit are belong to us. From phill at hallambaker.com Sat May 14 13:17:40 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 14 May 2016 13:17:40 -0400 Subject: [Cryptography] =?utf-8?q?DataGateKeeper=3A_The_FIRST_Impenetrable?= =?utf-8?q?_Anti-Hacking_Software_by_MyDataAngel=2Ecom=2C_Inc=2E_?= =?utf-8?b?4oCUIEtpY2tzdGFydGVy?= In-Reply-To: References: <2C034E73-62B4-4364-8DD0-54029C46249E@guru.at> Message-ID: We should probably have someone make a point of debunking snakeoil crypto on Kickstarter. Recently there was a guy claiming to have 'unbreakable' crypto based on an OTP. So I asked how he exchanged the keystream. "Encrypted under AES256." Despite my many attempts, he was unable to understand the fact that if the keystream is disclosed in any form, encrypted or not, the proof of unbreakability is lost. While the encrypted keystream and the ciphertext are unbreakable on their own, they are not unbreakable if an attacker has both.
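The collapse Phillip describes can be sketched in a few lines of Python. This is a toy illustration of the composition, not anyone's actual product: toy_encrypt and the other names are invented for the example, and the "cipher" is a deliberately breakable stand-in for whatever wraps the pad. Once the wrapping cipher falls, the pad falls, and the "unbreakable" OTP falls with it:

```python
import os

def xor(a, b):
    # byte-wise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key, data):
    # Stand-in for the wrapping cipher (the "AES256" of the anecdote).
    # Repeating-key XOR, so applying it twice with the same key decrypts.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return xor(data, stream)

msg = b"attack at dawn"
pad = os.urandom(len(msg))   # one-time pad: information-theoretically secure alone
key = os.urandom(16)         # key for the cipher that wraps the pad

ciphertext = xor(msg, pad)            # the OTP ciphertext
wrapped_pad = toy_encrypt(key, pad)   # the pad, sent along "encrypted"

# Both parts travel, so the transmitted volume is twice the message.
assert len(ciphertext) + len(wrapped_pad) == 2 * len(msg)

# An attacker who breaks the wrapping cipher recovers the pad, and
# pad + ciphertext yields the plaintext: the system's security has
# collapsed to that of the wrapping cipher.
recovered_pad = toy_encrypt(key, wrapped_pad)
assert xor(ciphertext, recovered_pad) == msg
```

The two asserts are the whole argument: the scheme doubles the traffic and is exactly as strong as the cipher protecting the pad, no stronger.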
The only effect of the scheme was to double the data volume and reduce AES to stream cipher robustness. Stream ciphers can be secure but they are fragile as heck. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amccullagh at live.com Sat May 14 01:36:59 2016 From: amccullagh at live.com (Adrian McCullagh) Date: Sat, 14 May 2016 15:36:59 +1000 Subject: [Cryptography] =?utf-8?q?DataGateKeeper=3A_The_FIRST_Impenetrable?= =?utf-8?q?_Anti-Hacking_Software_by_MyDataAngel=2Ecom=2C_Inc=2E_=E2=80=94?= =?utf-8?q?_Kickstarter?= Message-ID: From: Natanael Sent: Saturday, 14 May 2016 8:03 AM To: Christoph Gruber Cc: Cryptography Mailing List Subject: Re: [Cryptography] DataGateKeeper: The FIRST Impenetrable Anti-Hacking Software by MyDataAngel.com, Inc. — Kickstarter Den 13 maj 2016 19:37 skrev "Christoph Gruber" : > > Hi all! > > I just want to bring your attention to this project. If you want to collect some buzzwords for your next bullshit-bingo, here you will find them: > > https://www.kickstarter.com/projects/datagatekeeper/datagatekeeper-the-first-impenetrable-anti-hacking/description > > Looking forward to hear you comments about this “impenetrable software” > ROFL The real problem with this is that many people will be caught by the marketing blurb and really think that they are 100% safe. Maybe the FTC should be notified about this type of advertising. Unless the software is thoroughly tested independently and the source code reviewed, or at least the specification, there is no way of knowing what hidden vulnerabilities exist. Too many used car salesmen are getting in on the security bandwagon because they think it is an easy sell. Kind Regards Dr. Adrian McCullagh Ph.D (IT Sec). LL.B. (Hons) B.App. Sc. (Computing) ODMOB Lawyers Email: ajmccullagh57 at gmail.com Email: amccullagh at live.com MOB: +61 401 646 486 SKYPE: admac57 The contents of this email are confidential between the sender and the intended recipient.
If you are not the intended recipient then no rights are granted to you because of this error and you are requested to promptly inform the sender of the error and to promptly destroy all copies of the email in your power, possession or control. The sender reserves all rights concerning this email including any privilege, copyright and confidentiality associated with this email. Even though an email signature block has been appended to this email, and despite the Electronic Transactions Act (Qld) or the Electronic Transactions Act (Cth), the signature block does not exhibit the sender's intention to be bound by an offer previously sent by the intended recipient, unless the email specifically states otherwise. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mok-kong.shen at t-online.de Sun May 15 15:49:58 2016 From: mok-kong.shen at t-online.de (mok-kong shen) Date: Sun, 15 May 2016 21:49:58 +0200 Subject: [Cryptography] On a paper on a new probabilistic public-key encryption based on RSA Message-ID: <5738D2E6.8080809@t-online.de> V. A. Roman'kov, New probabilistic public-key encryption based on the RSA cryptosystem. http://u.math.biu.ac.il/~tsaban/Pdf/NewProbab.pdf The paper is in De Gruyter's "Groups Complexity Cryptology", an apparently sufficiently serious scientific journal. What I currently managed with my humble poor knowledge to capture (or conjecture) from it is the following: It could provide certain security enhancements over RSA but has corresponding trade-offs: (1) while for RSA with modulus n the message space is [0, n-1], the message space of the scheme with the same n is smaller (being the size of the subgroup M in the paper, i.e. for the same message space size one needs to use a larger modulus), (2) The processing is more complex. (Both are not surprising in view of the Principle of No Free Lunch.) I hope that some experts would provide competent evaluations of that paper. M. K.
Shen From grarpamp at gmail.com Wed May 18 02:38:42 2016 From: grarpamp at gmail.com (grarpamp) Date: Wed, 18 May 2016 02:38:42 -0400 Subject: [Cryptography] RNG Breakthrough: Explicit Two-Source Extractors and Resilient Functions Message-ID: Let's do another 100 post round on the favorite subject shall we... because serious RNG is serious. Academics Make Theoretical Breakthrough in Random Number Generation https://news.ycombinator.com/item?id=11719543 https://threatpost.com/academics-make-theoretical-breakthrough-in-random-number-generation/118150/ Explicit Two-Source Extractors and Resilient Functions. http://eccc.hpi-web.de/report/2015/119/ We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C n for a large enough constant C. Our extractor outputs one bit and has error n^{-Omega(1)}. The best previous extractor, by Bourgain, required each source to have min-entropy .499n. A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^{1-delta}, for any delta > 0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q = n^{1-delta} bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q = n^{1/2-delta}. Our explicit two-source extractor directly implies an explicit construction of a 2^{(log log N)^{O(1)}}-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen. From grarpamp at gmail.com Wed May 18 06:36:12 2016 From: grarpamp at gmail.com (grarpamp) Date: Wed, 18 May 2016 06:36:12 -0400 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp keying...]
Message-ID: On 4/29/16, Ray Dillinger wrote: > > On 04/28/2016 05:41 PM, grarpamp wrote: >> On 4/28/16, david wong wrote: >>> so as long as we don't discover a crazy breakthrough. >> >> This "breakthrough" hasn't yet been further identified / described... >> https://www.wired.com/2012/03/ff_nsadatacenter > > > I keep hearing rumors about this "breakthrough." I don't > know how seriously to take them, but I suspect that if it > exists it's more likely to be deliberate sabotage at the > hardware/software/firmware level than it is to be the > often-implicated Quantum Supercomputer or major mathematical > insight. > But I keep hearing noises about a fundamental breakthrough > in cryptology, with the strong implication that it's some > kind of new cryptanalytic technique, mathematical insight, > or design principle for special-purpose custom hardware. If you actually read and reassemble all the references in the article (which I won't do herein), they all refer to a 'cryptanalytic' breakthrough over modern crypto, further assisted with compute power, and deployed. That is obviously not just academic powers of two yielding moot partial solutions over limited rounds. And not sabotage, exploits, etc. Of course those are widespread, but they are not part of the 'cryptanalytic breakthrough' subthread of the article. > Assuming they can get four orders of magnitude of hardware > efficiency for purpose-built AES cracking silicon, and back > it up with scores of billions of dollars per year investment > in constantly updating overwhelming volumes of this custom > hardware -- I still don't see anybody cracking AES-128 any > time soon without either a mathematical insight so profound > as to be completely unexpected Maths and crackpots love a nice quiet life with everything taken care of so they can spend decades working their hard problems and crazy angles. The NSA provides that, and protects it and its results as their crown jewels. Do not underestimate it. 
> or a fundamentally new > computing technology like large scale Quantum Computers. This begins to matter when basic research yields a point where a secret investment of say $100B or less pays off. https://en.wikipedia.org/wiki/Quantum_computing https://en.wikipedia.org/wiki/List_of_megaprojects http://www.visualcapitalist.com/death-taxes-2015-visual-guide-tax-dollars-go/ > If the fundamental mathematical breakthrough is real, it's > very surprising that it hasn't leaked See crown jewels... > or been duplicated yet See Maths... > but in that case it's only a matter of time before one or the > other or both occur. Leaks can occur until time forgotten. Math occurs randomly. Snowden did not have access to the crypto compartments. No leaker seems to have had relevant access to post-WWII modern crypto. > Speculating about the effect of a > fundamental mathematical breakthrough is at best hard to do > meaningfully Those subject to the dark must speculate, those with knowledge of it can execute. > Physicists: > "A large-scale quantum supercomputer is very doubtful." > > Mathematicians: > "A mathematical insight of such magnitude is very doubtful." Wagering against physics is one thing, against the human mind... that may not be a wise investment. "...the ability to crack current public encryption." Some investigative journalist should be all over following up the crypto part of Bamford's piece as the scoop of a lifetime. For that matter, where is Bamford's own followup? Details of such a breakthrough are likely to serve and advance public knowledge and application by providing a solution to some long desirable hard problem or going off somewhere new that we've never gone before. Keeping those kinds of secrets for yourself is an affront to Humankind. Till then, everyone, including the keepers, rots in the Dark Ages.
From hbaker1 at pipeline.com Wed May 18 12:14:25 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 18 May 2016 09:14:25 -0700 Subject: [Cryptography] Theoretical Breakthrough in Random Number Generation Message-ID: FYI -- https://threatpost.com/academics-make-theoretical-breakthrough-in-random-number-generation/118150/ Academics Make Theoretical Breakthrough in Random Number Generation by Michael Mimoso May 17, 2016, 12:25 pm Two University of Texas academics have made what some experts believe is a breakthrough in random number generation that could have longstanding implications for cryptography and computer security. David Zuckerman, a computer science professor, and Eshan Chattopadhyay, a graduate student, published a paper in March that will be presented in June at the Symposium on Theory of Computing. The paper describes how the academics devised a method for the generation of high quality random numbers. The work is theoretical, but Zuckerman said down the road it could lead to a number of practical advances in cryptography, scientific polling, and the study of other complex environments such as the climate. "We show that if you have two low-quality random sources--lower quality sources are much easier to come by--two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number," Zuckerman said. "People have been trying to do this for quite some time. Previous methods required the low-quality sources to be not that low, but more moderately high quality." "We improved it dramatically," Zuckerman said. The technical details are described in the academics' paper "Explicit Two-Source Extractors and Resilient Functions." The academics' introduction of resilient functions into their new algorithm built on numerous previous works to arrive at a landmark moment in theoretical computer science.
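For a concrete feel of what a two-source extractor does, here is a toy Python simulation of the classical inner-product extractor of Chor and Goldreich, a far weaker predecessor of the Chattopadhyay-Zuckerman construction (it needs the two sources' min-entropies to sum to more than the source length, versus their polylogarithmic requirement). The biased-coin sources below are just an illustrative stand-in for "low-quality randomness"; the actual guarantee covers arbitrary independent sources with enough min-entropy:

```python
import random

def inner_product_bit(x_bits, y_bits):
    # <x, y> mod 2 over GF(2): one output bit from two independent weak sources
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

def biased_source(n, p, rng):
    # A low-quality source: n independent bits, each 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(1)  # seeded only so the demo is reproducible
n, trials = 64, 10000
ones = sum(inner_product_bit(biased_source(n, 0.8, rng),
                             biased_source(n, 0.8, rng))
           for _ in range(trials))
bias = abs(ones / trials - 0.5)
assert bias < 0.05  # the extracted bit is very close to a fair coin
```

Each source here is heavily biased toward 1, yet the inner product of two independent samples is nearly unbiased: the bias of the output shrinks exponentially in n. The new result plays the same game but with sources of far lower quality than any prior explicit construction could handle.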
Already, one other leading designer of randomness extractors, Xin Li, has built on their work to create sequences of many more random numbers. "You expect to see advances in steps, usually several intermediate phases," Zuckerman said. "We sort of made several advances at once. That's why people are excited." In fact, academics worldwide have taken notice. Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel, called it a fantastic result. "It would have been great to see any explicit two-source extractor for min-entropy rate below one half, let alone one that beats Bourgain's rate of 0.499," Goldreich said on the Weizmann website. "Handling any constant min-entropy rate would have been a feast (see A Challenge from the mid-1980s), and going beyond that would have justified a night-long party." MIT's Henry Yuen, a PhD student in theoretical computer science, called the paper "pulse-quickening." "If the result is correct, then it really is -- shall I say it -- a breakthrough in theoretical computer science," Yuen said. The study of existing random number generators used in commercial applications has intensified since the Snowden documents were published; sometimes random numbers aren't so random. Low quality random numbers are much easier to predict, and if they're used, they lower the integrity of the security and cryptography protecting data, for example. Right now, Zuckerman's and Chattopadhyay's result is theoretical and work remains in lowering the margins of error, Zuckerman said. Previous work on randomness extractors, including advances made by Zuckerman, required that one sequence used by the algorithm be truly random, or that both sources be close to random. The academics' latest work hurdles those restrictions, allowing the use of sequences that are only weakly random. Their method requires fewer computational resources and results in higher quality randomness.
Today's random number systems, for example, are fast, but are much more ad-hoc. "This is a problem I've come back to over and over again for more than 20 years," says Zuckerman. "I'm thrilled to have solved it." http://eccc.hpi-web.de/report/2015/119/ http://eccc.hpi-web.de/report/2015/119/revision/2/download/ Explicit Two-Source Extractors and Resilient Functions Revision #2 Authors: Eshan Chattopadhyay, David Zuckerman Accepted on: 20th March 2016 00:40 Downloads: 6384 Keywords: 2-source, collective coin-flipping, explicit construction, extractor, Pseudorandomness, Ramsey Graph Abstract: We explicitly construct an extractor for two independent sources on $n$ bits, each with min-entropy at least $\log^C n$ for a large enough constant~$C$. Our extractor outputs one bit and has error $n^{-\Omega(1)}$. The best previous extractor, by Bourgain, required each source to have min-entropy $.499n$. ... From hbaker1 at pipeline.com Wed May 18 16:53:23 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 18 May 2016 13:53:23 -0700 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp keying...] In-Reply-To: References: Message-ID: At 03:36 AM 5/18/2016, grarpamp wrote: >If you actually read and reassemble all the references in the article (which I won't do herein), they all refer to a 'cryptanalytic' breakthrough over modern crypto, further assisted with compute power, and deployed. > >"...the ability to crack current public encryption." I tend to agree with Nadia Heninger's conjecture that NSA has broken discrete logs of certain types. It has the right flavor: NOBUS acres of computers. "Logjam" attack on discrete logs: https://weakdh.org/imperfect-forward-secrecy.pdf Note that achieving this discrete log breakthrough doesn't rule out other approaches: elliptic curve backdoors, more-than-modest improvements in integer factoring with non-quantum computers, back doors in Intel/AMD/Broadcom/TI/Qualcomm crypto hardware, etc. 
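The precomputation economics behind the Logjam conjecture can be seen even in a toy discrete-log algorithm. The sketch below is baby-step/giant-step on a tiny prime, not the number field sieve pipeline the paper actually describes, but it shows the same shape: the expensive table depends only on the group (p, g), so building it once pays off against every key exchange that reuses that group.

```python
from math import isqrt

def precompute_table(p, g, m):
    # The expensive, per-group step: done once for a fixed (p, g).
    return {pow(g, j, p): j for j in range(m)}

def dlog(p, g, h, table, m):
    # The cheap, per-target step: solve g^x = h (mod p) using the table.
    g_inv_m = pow(g, (p - 2) * m, p)  # g^(-m) mod p, via Fermat's little theorem
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * g_inv_m) % p
    return None

# Toy group; Logjam's point is that real deployments likewise share a
# handful of standardized 512- and 1024-bit primes.
p, g = 101, 2
m = isqrt(p) + 1
table = precompute_table(p, g, m)   # amortized over all targets below
for secret in (5, 23, 77):
    h = pow(g, secret, p)
    assert dlog(p, g, h, table, m) == secret
```

Scaled up, this asymmetry is the NOBUS argument: whoever can afford the one-time per-group computation can cheaply read every session that keeps using that group.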
I think that the most recent complaints about Chinese interest in examining "proprietary" HW/SW of American manufacture might shake out some of these back doors (which I estimate to exist with nearly 100% probability). The reason why these HW/SW back doors almost certainly exist: NSA's lime-in-the-cleats arrogance -- together with rubber-stamp NSL's from a round-heeled FISA non-court -- means that the NSA spooks simply can't stop themselves. When you're addicted to the power to force American companies to "Click It or Ticket" (perhaps you have to live in California to understand this imperative to buckle under) like the NSA, your addiction makes you powerless to resist without outside intervention. Yes, when these back doors are eventually revealed by some post-Snowden patriot, they will destroy the rest of any credibility that remains in American chip and computer vendors, resulting in multi billions of $$$$ in losses (including job losses). I suspect that this is why Hayden has said that the health of the U.S. IT industry is more important than weakening encryption; he wants to "get ahead of the story" when these back doors are finally revealed. Notice that Hayden only changed his tune *after* Snowden, and Hayden now understands that more Snowdens are not only possible, but likely. It is possible that the US may let the Chinese in on these back door secrets in order to preserve its ability to keep using them against everyone else, but this fall-back strategy can't possibly be a long-term stable solution. (There is an historical precedent for this strategy: the U.S./Swiss continued to sell broken Enigma-style crypto equipment to the non-first-world nations in the 1950's.) 
The US will enable the Chinese to use these backdoors to suppress internal dissent for two reasons: the US thinks that a "stable" China -- even with massive human rights violations -- is vastly preferable to a chaotic democratic China (or a multiplicity of Chinas); and the US finds these back doors exceedingly useful for its own purposes both inside & outside the US. From rwilson at wisc.edu Thu May 19 14:09:44 2016 From: rwilson at wisc.edu (Bob Wilson) Date: Thu, 19 May 2016 13:09:44 -0500 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp, keying...] In-Reply-To: References: Message-ID: <8b5c9ba3-e94c-f7ca-ebca-630168e1fe3d@wisc.edu> On 5/19/2016 11:00 AM, cryptography-request at metzdowd.com wrote: > [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp > keying...]: ... Already, one other leading designer of randomness extractors, Xin Li, has built on their work to create sequences of many more random numbers. ... > but in that case it's only a matter of time before one or the > other or both occur. Leaks can occur until time forgotten. Math occurs randomly. ------------------------------------------- As a mathematician, I could not pass up: (1) I work on (among other things) non-associative systems. The phrase "many more random numbers" is a delightful example showing English needs parentheses: One has to wonder, how much more random are they? (2) "Math occurs randomly": Maybe to some extent, but highly correlated with other things. E.g., the effect of Sputnik on research funding... Bob Wilson -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Thu May 19 17:14:51 2016 From: bear at sonic.net (Ray Dillinger) Date: Thu, 19 May 2016 14:14:51 -0700 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp keying...] 
In-Reply-To: References: Message-ID: <573E2CCB.7020707@sonic.net> On 05/18/2016 01:53 PM, Henry Baker wrote: > I tend to agree with Nadia Heninger's conjecture that NSA has broken > discrete logs of certain types. > > It has the right flavor: NOBUS acres of computers. > "Logjam" attack on discrete logs: > > https://weakdh.org/imperfect-forward-secrecy.pdf Well, after reading, I suppose you and Bamford are probably right about what the breakthrough here probably is, but I strongly dispute NOBUS in this case. An attack based on known mathematical techniques will have been deployed by many state-level adversaries elsewhere, and besides, once the database the precomputation generates exists, it can be stolen, bought, shared through diplomatic channels with other nations, or otherwise acquired through extortion, blackmail, or bribery by criminal actors. The potential financial payoffs to a criminal organization of having that database are immense. It might even justify the expenditure needed to do the computation themselves. It doesn't even have to be the USA that gets compromised. Even if one supposes that the USA may have kept its database secret, it is unreasonable to expect that several governments have done so - or that they will continue to do so in the future. China builds good supercomputers and undertakes such giant projects, so they've probably built this database without any NSA help. And they might not guard theirs so well against crooks, or might even willingly share it with business interests. In unrelated news, I read that SWIFT has been cracked yet again, and that a database of some millions of LinkedIn IDs is available on the black market this week. > Note that achieving this discrete log breakthrough doesn't rule out > other approaches: elliptic curve backdoors, more-than-modest > improvements in integer factoring with non-quantum computers, back > doors in Intel/AMD/Broadcom/TI/Qualcomm crypto hardware, etc.
It is characteristic of these agencies, worldwide, that they pursue all available avenues of information compromise, never just one and never just a few dozen. I have no doubt that even as I type this, China is putting backdoors in chips, and North Korea is building a database of audio recordings of people typing passwords, and Iran is repurposing Stuxnet to attack facilities in other nations, and England is examining public video recordings to extract security codes whenever people publicly enter them into smartphones, and Venezuelan government hackers are examining the guts of Microsoft TLS implementations looking for holes. And on, and on, and on in every possible combination. NOBUS is a fiction, and none of these agencies are ever satisfied with any number of sources less than ALL OF THEM. Meanwhile crooks are busy ripping off bitcoin from online poker games that use good encryption but shuffle their decks using 32-bit random seeds. Crooks are in many ways more reasonable people; if they get one break that makes them money, they're usually happy with that until it gets shut down. > It is possible that the US may let the Chinese in on these back door > secrets in order to preserve its ability to keep using them against > everyone else, but this fall-back strategy can't possibly be a > long-term stable solution. I'm pretty sure that won't happen. The US and China, indeed, are the leaders of the two primary political coalitions contending for world domination, and as such the two most likely adversaries in espionage or in any future large-scale warfare. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From hbaker1 at pipeline.com Thu May 19 18:34:23 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 19 May 2016 15:34:23 -0700 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp keying...] 
In-Reply-To: <573E2CCB.7020707@sonic.net> References: <573E2CCB.7020707@sonic.net> Message-ID: At 02:14 PM 5/19/2016, Ray Dillinger wrote: >NOBUS is a fiction, and none of these agencies are ever satisfied with >any number of sources less than ALL OF THEM. I don't know the emoticon for "irony", else I would have used it liberally. NOBUS is a *conceit*, and a very expensive one. I haven't gone through all the Snowden material, but surely there is a slide somewhere in there that talks about "stuff we can do that no one else can do". NOBUS-like conceit is a common theme for American exceptionalists (& Fortune 500 B-school types) -- that "scale" and "big data" can overcome any obstacles. These people conveniently forget that there are *disadvantages* to size & scale -- e.g., O(n^2) (or even O(n*logn)) communication effects. And they conveniently ignore the fact that you can more easily *drown* in "big data" than just about any other outcome. I was always warned that "PhD" was an acronym for "piled higher and deeper"; NSA prides itself on its excess. NOBUS is used on Congress to increase budgets for Congresspersons who don't have the clearances to understand anything else. NOBUS is also used to cement the military-industrial-intel complex with big $$$$$ contracts -- whether they achieve any net increase in security or not. From grarpamp at gmail.com Fri May 20 02:51:27 2016 From: grarpamp at gmail.com (grarpamp) Date: Fri, 20 May 2016 02:51:27 -0400 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp, keying...] In-Reply-To: References: <8b5c9ba3-e94c-f7ca-ebca-630168e1fe3d@wisc.edu> Message-ID: On 5/19/16, Bob Wilson wrote: > (2) "Math occurs randomly": Maybe to some extent, but highly correlated > with other things. E.g., the effect of Sputnik on research funding... Research funding is usually highly directed. To say "here's a house, car, living expenses... get back to us in a decade with something at least relevant" is different.
Even if all it does is secure brain cells away from the competition that's a decent side win, and relatively low cost. From bear at sonic.net Thu May 19 20:49:07 2016 From: bear at sonic.net (Ray Dillinger) Date: Thu, 19 May 2016 17:49:07 -0700 Subject: [Cryptography] NSA Crypto Breakthrough Bamford [was: WhatsApp keying...] In-Reply-To: <573E2CCB.7020707@sonic.net> References: <573E2CCB.7020707@sonic.net> Message-ID: <573E5F03.5030508@sonic.net> On 05/19/2016 02:14 PM, Ray Dillinger wrote: > > > On 05/18/2016 01:53 PM, Henry Baker wrote: > >> I tend to agree with Nadia Heninger's conjecture that NSA has broken >> discrete logs of certain types. >> >> It has the right flavor: NOBUS acres of computers. > >> "Logjam" attack on discrete logs: >> >> https://weakdh.org/imperfect-forward-secrecy.pdf > > Well, after reading, I suppose you and Bamford are probably right about > what the breakthrough here probably is, but I strongly dispute NOBUS in > this case. Oh crap. I could be wrong here but I think maybe it's worse than that. The Number Field Sieve algorithm for finding vectors of coefficients for index calculus has a lot of sub-parts which don't depend on the particular modulus being considered. Those intermediate results can be applied to precalculations on multiple moduli. Some fraction of the work you do when precomputing for modulus x can be reused when doing precomputation for another modulus y. It's a small fraction, but an opponent trying to build these databases for a large number of different moduli (on the order of a few thousand groups) could eventually realize a benefit from many such small fractions that reduces the compute time required by ??? Uh, back of the envelope says maybe two orders of magnitude? The costs in data storage and the degree to which your calculations get I/O bound, get steeper the more of a speedup you ask for. 
I don't quite know if this is a practical technique; it depends on whether the I/O requirements are light enough that the computation can proceed at speed. It might be so I/O bound for a significant advantage that it's not worth it. Also I'm still trying to figure out whether the intermediate data storage requirements for a 2-order speedup are merely very large (a few exabytes or less) or ludicrous (larger than a few yottabytes).

ObNothingInParticular, we're getting close to needing more SI prefixes to describe our storage media.

Bear

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From leichter at lrw.com  Sat May 21 07:00:51 2016
From: leichter at lrw.com (Jerry Leichter)
Date: Sat, 21 May 2016 07:00:51 -0400
Subject: [Cryptography] "60 Minutes" hacks Congressman's phone
In-Reply-To: 
References: 
Message-ID: 

>> http://www.cbsnews.com/news/60-minutes-hacking-your-phone/
> [big snip]
>> Rep. Ted Lieu: You cannot have 300-some million Americans-- and really, right, the global citizenry be at risk of having their phone conversations
>> intercepted with a known flaw, simply because some intelligence agencies might get some data. That is not acceptable.
>
> If these are the same SS7 vulnerabilities that were widely discussed
> in the WP (e.g.,
> https://www.washingtonpost.com/news/the-switch/wp/2014/12/18/german-researchers-discover-a-flaw-that-could-let-anyone-listen-to-your-cell-calls-and-read-your-texts/)
> and other news media outlets in Dec 2014 (and it certainly sounds like
> it) then the
> only explanation is that the intelligence community are responsible
> for their still
> not being fixed....

Hardly. While I don't doubt the intelligence community provides a word in the ear here and there, the fundamental problem here is much more deep-seated. The telcos and their systems were built in an age of mutual trust - well-placed or otherwise.
Before SS7, all switching was based on in-band signaling: Various tones transmitted over the same lines as voice. That's why the Blue and Red Boxes of the late 1960's/early 1970's could be built: There were tones you could send over a phone line that gave you direct control over the switching equipment. "No one would do that" ... until they did. Before that, everything was controlled by human beings (operators) - mainly over the same lines. If you knew the right lingo, you could fool operators into treating you as telco employees, letting you manipulate all kinds of stuff. "No one would do that".

SS7 solved the in-band signaling problem by moving the signaling out of band. Nothing you sent on the line went to the switching equipment. Hook into the network used by SS7, though, and you were completely trusted. After all - who could hook into those lines? Just the telcos - initially AT&T and a few small companies in the US, and government-run PTT's in the rest of the world. "NOBUS" in a different sense. We're all friends here; we trust each other.

Retrofitting SS7 with a system that's not based on trust would be a huge undertaking - but even that pales compared to the organizational changes needed. The telcos world-wide work as a fairly closed community. They would have to move to a system of mutual distrust and verification.

Another place you can see this issue is in some of the billing abuses that the system has historically been rife with. In the US, you can switch LD carriers. As initially set up, your new carrier told your old one that you had switched - and by law, they had to allow the switch to take place. After all, all LD carriers are trustworthy and wouldn't take over an account without permission. Mutual trust, NOBUS. Similar things happened with third-party LD charges.

The telcos are hardly alone here. Once you're accepted as a bank - anywhere in the world, vetted by any local government - you've historically had insider access to the entire banking system.
After all, one bank wouldn't abuse another's trust, right?

Our world was built on these kinds of trust relationships. The diamond trade is an example where this is very explicit. Before you can be accepted into the community, your picture is circulated to and posted very visibly at all the major trading floors for some period of time. If anyone recognizes your picture as that of someone they don't trust, you won't be accepted. Once you're in, you're in - deals for millions in diamonds are made with the shake of a hand. Abuse that trust and you're tossed out of the community. The word is spread very quickly; the community isn't that large. Here you clearly see the extension into an institution of the way individuals maintain their trust relationships.

Of course, the entire Internet was built on similar ideas. Enter the IS-IS network and grab nearby packets for yourself. Get accepted as a BGP speaker and grab packets on a world-wide scale.

Today's institutions work at scales and at speeds way beyond human abilities to judge trust. Global interconnectivity has removed the need for physical presence to carry out many attacks. And attackers have become much more technologically sophisticated. Changing what is often a century or more of design and practice is difficult and will take a long time, even given the best of intentions and the strongest motivations.

In fact, many of these institutions will never change. Instead, solutions will get built "over the top". Do end-to-end encryption - realistic for phone conversations on a mass scale only in the last decade - and leakage of phone conversations by the SS7-based network becomes irrelevant. (Notice, BTW, that the cellphone network encrypts - for better or worse - *between cell and base station*. Once it's on a landline ... it's NOBUS, "our lines are secure".
And people believe this stuff: There's a quote at the end of the WaPo article in which someone says he won't trust his cellphone any more; for confidential stuff he'll use a landline. Right.)

Metadata is much harder because that's the stuff SS7 is saying *to itself*. Tor is an over-the-top solution for metadata on the Internet, but it's probably not the right solution for phone conversations. And the location information is entirely between your phone and the SS7 infrastructure - it's not clear that any over-the-top solution is possible. And ... if you eliminate the notion that all telcos trust each other to exchange location information, how do you do roaming? (You can get that effect for non-real-time communication using "dead drops", but real-time is much harder.)

Very difficult problems. A golden age for the intelligence guys, and they only have to tap into it, not get it designed for them.

-- Jerry

From grarpamp at gmail.com  Sat May 21 15:55:10 2016
From: grarpamp at gmail.com (grarpamp)
Date: Sat, 21 May 2016 15:55:10 -0400
Subject: [Cryptography] US Case: Infinite Jail Contempt for Disk Crypto, 5th Amndmnt, All Writs, FileVault, Freenet CHKs
In-Reply-To: <20160429075147.GH32679@Hirasawa>
References: <20160429075147.GH32679@Hirasawa>
Message-ID: 

On 4/29/16, Yui Hirasawa wrote:
> grarpamp wrote:
>> https://yro.slashdot.org/story/16/04/27/2357253/child-porn-suspect-jailed-indefinitely-for-refusing-to-decrypt-hard-drives
>> http://thehackernews.com/2016/04/decrypt-hard-drive.html
>> https://www.scribd.com/doc/310741233/Francis-Rawls-Case
>> http://arstechnica.com/tech-policy/2016/04/child-porn-suspect-jailed-for-7-months-for-refusing-to-decrypt-hard-drives/
>>
>> Amici Curiae by EFF and ACLU
> This could set a VERY dangerous precedent.

More links for those following the legal filings / Freenet crypto / commentary ...
https://www.reddit.com/r/Freenet/
https://www.techdirt.com/articles/20160404/19300434101/using-all-writs-act-to-route-around-fifth-amendment.shtml
https://www.techdirt.com/articles/20160517/11340134464/government-argues-that-indefinite-solitary-confinement-perfectly-acceptable-punishment-failing-to-decrypt-devices.shtml
https://cdn.arstechnica.net/wp-content/uploads/2016/04/comply.pdf
https://assets.documentcloud.org/documents/2783581/Granting-All-Writs.pdf
https://assets.documentcloud.org/documents/2783585/Motion-to-Seal.pdf
http://arstechnica.com/wp-content/uploads/2016/05/govporn.pdf
http://ia601303.us.archive.org/0/items/gov.uscourts.paed.507511/gov.uscourts.paed.507511.8.0.pdf
https://twitter.com/bradheath/status/714885413249351680
http://arstechnica.com/wp-content/uploads/2016/04/effamicus.pdf
http://www.knowconnect.com/MIRLN

From kentborg at borg.org  Sat May 21 12:55:31 2016
From: kentborg at borg.org (Kent Borg)
Date: Sat, 21 May 2016 12:55:31 -0400
Subject: [Cryptography] Entropy Needed for SSH Keys?
Message-ID: <57409303.6080808@borg.org>

Embedded devices are frequently starved for entropy, and frequently want to generate SSH keys on first boot when the entropy might be in particularly short supply.

How much entropy does modern openssh key generation need?

In a case I am playing with I want my own 512-bits of entropy after the ssh keys are generated. If I can come up with a nice plump 4096-bits at boot (common pool size these days for Linux urandom), and then generate the ssh keys, how many bits will be left over?

This might be an elementary question, but embedded people are always getting this stuff terribly wrong, so my excuse is that a little repetition is good.

Thanks,

-kb

From grarpamp at gmail.com  Sun May 22 00:52:28 2016
From: grarpamp at gmail.com (grarpamp)
Date: Sun, 22 May 2016 00:52:28 -0400
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: <1c3b0bab54260b7f0983791985100550.squirrel@deadhat.com>
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com> <57278985.1090000@sonic.net> <1c3b0bab54260b7f0983791985100550.squirrel@deadhat.com>
Message-ID: 

On 5/2/16, dj at deadhat.com wrote:
> The CA that needs to exist would be the USB-IF.

Who cares what CA exists. So long as it is *optional* to the user.

Remember SecureBoot... You can find many motherboards with SecureBoot that have the Microsoft PK's locked in the BIOS. Best you can do is 'disable' it and not be 'secure' anymore.

The Linux crowd went apeshit over it, and rightfully so. Then they dropped to their knees and wrote silly stub loaders and submitted them to their Microsoft Overlord for signing. They are still submitting to this scheme today, even though they don't have to... Because if you look, you can find boards that allow completely deleting the Microsoft keys and installing and managing your own in the BIOS, and open-source tools to sign and authorize your own loaders do exist. Buy those boards instead.

Tech can be useful, but you better fight for, select, and maintain your private right and control over it.

From grarpamp at gmail.com  Sun May 22 13:03:07 2016
From: grarpamp at gmail.com (grarpamp)
Date: Sun, 22 May 2016 13:03:07 -0400
Subject: [Cryptography] FBI Gripes About Crypto on Facebook/WhatsApp/Everywhere, People Rebuffing and Accepting "Costs"?
Message-ID: 

https://it.slashdot.org/story/16/04/06/1929201/top-fbi-attorney-worried-about-whatsapp-encryption

WhatsApp on Tuesday announced that all types of messages on the latest version of its app are now automatically protected by end-to-end encryption, and the FBI's top attorney is worried some of the platform's more than 1 billion global users will take advantage of the move to hide their crime- or terrorism-related communications. FBI General Counsel James Baker said in Washington on Tuesday that the decision by the Facebook-owned messaging platform to encrypt its global offerings "presents us with a significant problem" because criminals and terrorists could "get ideas." "If the public does nothing, encryption like that will continue to roll out," he said. "It has public safety costs. Folks have to understand that, and figure out how they are going to deal with that. Do they want the public to bear those costs? Do they want the victims of terrorism to bear those costs?"

Maybe the government shouldn't have imposed so many surveillance programs on its citizens -- and kept quiet about it for years -- that they now feel the need to use sophisticated security technologies.

From grarpamp at gmail.com  Sun May 22 13:53:03 2016
From: grarpamp at gmail.com (grarpamp)
Date: Sun, 22 May 2016 13:53:03 -0400
Subject: [Cryptography] US Govt Reveals Bill Forcing Assistance, Chumping Crypto, Secrets Beget...
Message-ID: 

https://yro.slashdot.org/story/16/04/06/2256257/fbi-telling-congress-how-it-hacked-iphone

According to a new report in National Journal, the FBI has already briefed Senator Diane Feinstein (D-CA) on the methods used to break into the iPhone at the center of Apple's recent legal fight. Senator Richard Burr (R-NC) is also scheduled to be briefed on the topic in the days to come.

TOP SECRET briefings to the two most worthless SECRET critters, at the top of the most worthless SECRET committee... is worthless to the people. And that's no SECRET.
Yet somehow people continue to treat that as... notice served, all absolved. Don't forget... Torture, Murder, Surveillance, Databasing all supposedly went through that committee of the guilty too.

Feinstein and Burr are both working on a new bill to limit the use of encryption in consumer technology, expected to be made public in the weeks to come.

Much of the USA, online, even the world seem to be rather pissed with the stance and activities of the US Government against crypto. Cryptos should not just be in their own circles, but educating those they know... random public around them... that do not already know about the crypto / privacy issues, the things that haven't made the nightly news, etc. The battle against the second coming of Clipper is going to get uglier and riskier before it gets better. In part because new lawmaking is at stake, not just interpretation over old. Supposedly an educated populace has more influence over new laws than interpretation...

The disclosures come amid widespread calls for the attack to be made public, particularly from privacy and technology groups. However the FBI's new method works, the ability to unlock an iPhone without knowing its passcode represents a significant break in Apple's security measures, one Apple would surely like to protect against.

From alserkli at inbox.ru  Sun May 22 12:19:03 2016
From: alserkli at inbox.ru (Alexander Klimov)
Date: Sun, 22 May 2016 19:19:03 +0300
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57409303.6080808@borg.org>
References: <57409303.6080808@borg.org>
Message-ID: 

On Sat, 21 May 2016, Kent Borg wrote:
> Embedded devices are frequently starved for entropy, and frequently want to
> generate SSH keys on first boot when the entropy might be in particularly
> short supply.
>
> How much entropy does modern openssh key generation need?
>
> In a case I am playing with I want my own 512-bits of entropy after the ssh
> keys are generated.
> If I can come up with a nice plump 4096-bits at boot
> (common pool size these days for Linux urandom), and then generate the ssh
> keys, how many bits will be left over?

The proper design is to use TRNG to seed DRBG (aka PRNG) and use only DRBG for crypto purposes. The idea that entropy of DRBG state can be lost due to its use is misleading. Once you have enough bits to seed DRBG (say, 384 bits for 256-bit security) you can use DRBG to generate all the keys you need.

The only reason one may want to reseed DRBG (by getting more bits from TRNG) is if he is afraid that someone learned the DRBG state (say, by reading kernel memory). I guess it is not your case.

-- 
Regards,
ASK

From dj at deadhat.com  Sun May 22 15:35:13 2016
From: dj at deadhat.com (dj at deadhat.com)
Date: Sun, 22 May 2016 19:35:13 -0000
Subject: [Cryptography] USB 3.0 authentication: market power and DRM?
In-Reply-To: 
References: <696C0627-33CE-4FF5-9137-CAB97F5A3181@lrw.com> <201604150721.u3F7LNfc007004@new.toad.com> <201605010713.u417DLVH007291@new.toad.com> <572669DA.5060207@sonic.net> <0BE930DF-1F0E-4609-90DB-5D8B5957D5BF@lrw.com> <57278985.1090000@sonic.net> <1c3b0bab54260b7f0983791985100550.squirrel@deadhat.com>
Message-ID: 

> On 5/2/16, dj at deadhat.com wrote:
>> The CA that needs to exist would be the USB-IF.
>
> Who cares what CA exists.
> So long as it is *optional* to the user.

Indeed. Up to 8 cert chain slots are addressable - the slot number occupies 3 bits in the protocol. The USB-IF slot 0 is mandatory for USB certified devices and gets filled with a cert chain rooted in the USB-IF. That is what I meant - it's mandatory for the USB-IF to set up a CA to support that. It's not mandatory for manufacturers to get certified (unless you want to use the trademarks), and not mandatory for you to use or honor the USB-rooted credentials.
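The slot/policy distinction just described can be made concrete with a toy model of the host-side decision. This is not the USB-PD wire format or the real spec's data structures -- all names and shapes here are invented -- it only illustrates that "which roots do I honor" is a policy choice:

```python
# Toy model of the 8 authentication-slot idea (slot index fits in 3 bits).
# This is NOT the USB-PD protocol -- just the trust-policy logic, with
# invented names, showing that "who do I trust" is a host-side choice.

USB_IF_ROOT = "usb-if-root"   # hypothetical identifier for the USB-IF root

def accept_device(presented, trusted_roots_by_slot):
    """presented: dict slot -> root id the device's cert chain terminates in.
    trusted_roots_by_slot: the *host's* policy: slot -> set of root ids.
    Accept if any presented slot chains up to a root this host trusts."""
    for slot, root in presented.items():
        assert 0 <= slot <= 7, "slot number must fit in 3 bits"
        if root in trusted_roots_by_slot.get(slot, set()):
            return True
    return False

# A host that only trusts its own PKI in slot 3, ignoring slot 0 entirely:
own_policy = {3: {"my-company-root"}}
print(accept_device({0: USB_IF_ROOT, 3: "my-company-root"}, own_policy))  # True
print(accept_device({0: USB_IF_ROOT}, own_policy))                        # False
```

The point of the sketch: nothing about the mandatory slot-0 chain forces a host to honor it, so long as the policy table stays under the owner's control.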
The USB rooted credential chain tells you that the USB-IF tested and certified the device design, subject to the usual ways that might be circumvented, which have already been discussed on this mailing list (the metzdowd crypto list at least). This is a mechanism that has proprietary solutions today, but it's not based on those solutions as far as I'm aware. Other slots can be used for whatever your purpose.

What you need to be concerned about is control over policy. If the policy is hardwired to 'must have a USB-IF certified device cert' then you will be limited to devices certified by the USB-IF. If you want to roll your own USB device, you might want to wrest control of the policy setting.

The other caveat is that if you want to certify your devices under your own PKI, then you need 'provisionable' devices. Meaning they support the provisioning protocol that's in the spec, have enough non-volatile storage and a means to securely hold keys. I can envisage companies charging extra for 'provisionable' devices, even though the silicon might support it by default. That's the way of things.

The other wrinkle is it might be the silicon that's certified, not the box it is in. Think about stand-alone USB-RS232 chips. They get certified by the USB-IF and the integrator doesn't meddle with the insides.

> Remember SecureBoot...

I do.

> You can find many motherboards with SecureBoot
> that have the Microsoft PK's locked in the BIOS. Best
> you can do is 'disable' it and not be 'secure' anymore.
>
> The Linux crowd went apeshit over it, and rightfully so. Then they
> dropped to their knees and wrote silly stub loaders and submitted
> them to their Microsoft Overlord for signing. They are still submitting
> to this scheme today, even though they don't have to...
>
> Because if you look, you can find boards that allow completely
> deleting the Microsoft keys and installing and managing your
> own in the BIOS, and open-source tools to sign and authorize
> your own loaders do exist.
> Buy those boards instead.
>
> Tech can be useful, but you better fight for, select, and
> maintain your private right and control over it.

I have no argument with that. Again, in the USB context, this is a matter of devices being provisionable to support your desire to enforce your own policy. The analogy to the secure boot situation would be finding that a USB vendor had pre-provisioned slots above zero and had hard-coded a policy relating to the pre-provisioned slot. Don't buy those devices.

The spec has words to say on not using the auth protocol for vendor lock-in, and so enforcement could happen with the certification process. Whether or not it will is not something I know. Of course a non-certified device could do as it pleases.

From hanno at hboeck.de  Sun May 22 18:27:59 2016
From: hanno at hboeck.de (Hanno Böck)
Date: Mon, 23 May 2016 08:27:59 +1000
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57409303.6080808@borg.org>
References: <57409303.6080808@borg.org>
Message-ID: <20160523082759.1754d767@pc1>

On Sat, 21 May 2016 12:55:31 -0400
Kent Borg wrote:

> Embedded devices are frequently starved for entropy, and frequently
> want to generate SSH keys on first boot when the entropy might be in
> particularly short supply.

This is a real problem; Nadia Heninger and others found countless devices producing breakable keys due to this:
https://factorable.net/

> How much entropy does modern openssh key generation need?

~128 bits of entropy are enough for everything with a reasonable safety margin. (As long as you can be sure that your 128 bits are really random. If you are not, add some more.)

> In a case I am playing with I want my own 512-bits of entropy after
> the ssh keys are generated. If I can come up with a nice plump
> 4096-bits at boot (common pool size these days for Linux urandom),
> and then generate the ssh keys, how many bits will be left over?

Here you have a fundamental misunderstanding (albeit a common one).
Entropy bits don't get used up (although Linux's /dev/random manpage tries to tell you so). Once your rng is properly initialized with enough entropy you can use it practically forever.

-- 
Hanno Böck
https://hboeck.de/
mail/jabber: hanno at hboeck.de
GPG: BBB51E42

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From kentborg at borg.org  Sun May 22 21:18:10 2016
From: kentborg at borg.org (Kent Borg)
Date: Sun, 22 May 2016 21:18:10 -0400
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <20160523082759.1754d767@pc1>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1>
Message-ID: <57425A52.5090900@borg.org>

Dammit, I can neither remember nor find that quote about how using a deterministic process to make up random numbers is against nature, or grace, or the universe. Like I say, I can't find it.

On 05/22/2016 06:27 PM, Hanno Böck wrote:
> Here you have a fundamental misunderstanding (albeit a common one).
> Entropy bits don't get used up (although Linux's /dev/random manpage
> tries to tell you so). Once your rng is properly initialized with
> enough entropy you can use it [...]

That agrees with another answer I got, but the worrywart in me frowns on putting so much faith in the perfection of SHA-1 (to pick a random version of Linux's drivers/char/random.c). Especially when it can be so easy to stir the pot and make a guessing observer's life a theoretical hell and not just a practical hell.

> [...] practically forever.

You hedge. Why? If the crypto is good, if it hides the pool state, what's the problem? At how many bits of draw does it become a problem? And why then? Why the hedge?

Another response I got also referred me to https://factorable.net/ but it looks completely experimental, watching keys degrade as the system is starved of entropy.
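For what it's worth, the failure mode factorable.net documented is almost embarrassingly easy to demonstrate: if two entropy-starved devices draw the same prime, their RSA moduli share a factor and a single GCD breaks both. A toy demonstration with small primes (the real study ran a batch-GCD tree over millions of ~1024-bit moduli, but the principle is identical):

```python
from math import gcd

# Two "devices", both starved of entropy at first boot, end up drawing
# the same prime p. Their RSA moduli then share a factor, and anyone
# holding both public keys can factor them with one GCD call.
# Tiny primes for illustration; real keys use ~1024-bit primes.

p, q1, q2 = 10007, 10009, 10037    # p is the shared prime
n1 = p * q1                        # device 1's modulus
n2 = p * q2                        # device 2's modulus

shared = gcd(n1, n2)
print(shared == p)                 # True: the common factor falls right out
print(n1 // shared, n2 // shared)  # 10009 10037 -- both keys fully factored
```

Note this only bites when the pool is poorly seeded at key-generation time; it says nothing against using a well-seeded pool "practically forever".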
Let me try my own experiment:

# strace ssh-keygen -t rsa

Lots of output, only one mention of the string "random":

[...]
open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0
poll([{fd=3, events=POLLIN}], 1, 10) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\255J\373\231\323\256\251^\314\207MqkC\332\222^\352\275\307\373\351bM\267\273\260$G\232\301\r", 32) = 32
close(3) = 0
[...]

(Was I supposed to say "dsa"? Okay...tried that too, same result.)

Looks to me like it read 256 bits. I would have expected it to read more, just to waste if nothing else.

Nowhere near using up 4096-bits (if "using up" even is real). Maybe do both DSA and RSA? It still would only "use" 1/8 of a 4096-bit pool.

-kb

From dave at horsfall.org  Mon May 23 00:21:56 2016
From: dave at horsfall.org (Dave Horsfall)
Date: Mon, 23 May 2016 14:21:56 +1000 (EST)
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57425A52.5090900@borg.org>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org>
Message-ID: 

On Sun, 22 May 2016, Kent Borg wrote:

> Dammit, I can neither remember nor find that quote about how using a
> deterministic process to make up random numbers is against nature, or
> grace, or the universe. Like I say, I can't find it.

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. John Von Neumann, 1951

-- 
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."

From dj at deadhat.com  Mon May 23 00:13:20 2016
From: dj at deadhat.com (David Johnston)
Date: Sun, 22 May 2016 21:13:20 -0700
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57425A52.5090900@borg.org>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org>
Message-ID: <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com>

On 5/22/16 6:18 PM, Kent Borg wrote:
> Dammit, I can neither remember nor find that quote about how using a
> deterministic process to make up random numbers is against nature, or
> grace, or the universe. Like I say, I can't find it.

While I'm gainfully employed as an RNG designer and general crypto security person, I hold the opinion that ignorance beats entropy.

In one sense, ignorance of the state of a system can be equated to that system having entropy relative to the thing that is ignorant of the state of the system. However we tend to think of entropy as being an intrinsic thing, arising from underlying quantum uncertainty, rather than a relative thing. Yet we know we don't have a complete understanding of quantum physics or quantum uncertainty, whereas we know all about ignorance.

You can rely on ignorance. If someone is ignorant of your key, the key works just fine in a crypto system that is intended to prevent that person undermining security in some way. Deterministic processes are just fine at taking samples from a complex system and turning them into a state that is hard to predict.

While having 'full entropy' numbers that therefore have no algorithmic connection between them is a fine thing for random numbers, the whole concept of full entropy comes from the assumption that the randomness of quantum uncertainty is a real thing. If not, that is, if the rules of the universe are actually deterministic, then we have to fall back on ignorance of the state of complex systems in order to create unpredictable numbers. So in that sense, ignorance beats quantum uncertainty. You can rely on ignorance, but have to trust the assumption that quantum uncertainty is real.
If you make your crypto system such that it's secure provided that either ignorance of a complex system state or quantum uncertainty holds, then the assumptions on which the security of the system are based will be more robust.

DJ

From dkp at ldd.org  Mon May 23 01:21:58 2016
From: dkp at ldd.org (David Kane-Parry)
Date: Sun, 22 May 2016 22:21:58 -0700
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57425A52.5090900@borg.org>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org>
Message-ID: <227E153C-AE95-4638-A0F2-E9F59B459FA0@ldd.org>

On May 22, 2016, at 6:18 PM, Kent Borg wrote:
> Dammit, I can neither remember nor find that quote about how using a deterministic process to make up random numbers is against nature, or grace, or the universe. Like I say, I can't find it.

"Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin." From "Various techniques used in connection with random digits" by John von Neumann in Monte Carlo Method (1951) edited by A.S. Householder, G.E. Forsythe, and H.H. Germond.

- d.

From jsd at av8n.com  Mon May 23 00:27:14 2016
From: jsd at av8n.com (John Denker)
Date: Sun, 22 May 2016 21:27:14 -0700
Subject: [Cryptography] immortal quote about randomness
In-Reply-To: <57425A52.5090900@borg.org>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org>
Message-ID: <574286A2.2090709@av8n.com>

On 05/22/2016 06:18 PM, Kent Borg wrote:
> Dammit, I can neither remember nor find that quote about how using a
> deterministic process to make up random numbers is against nature, or
> grace, or the universe. Like I say, I can't find it.

``Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
For, as has been pointed out several times, there is no such thing as a random number -- there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.''

John von Neumann, ``Various Techniques Used in Connection With Random Digits''
page 36 in _Monte Carlo Method_
proceedings of a symposium held June 29 -- July 1, 1949
A.S. Householder, G.E. Forsythe, and H.H. Germond (eds.)
Institute for Numerical Analysis (published 1951)

Actually it's a twofer. I use the second sentence more often than the first. Here's my version:

There's no such thing as a random number. If it's random, it's not a number. If it's a number, it's not random. You can have a random distribution over numbers, but then the randomness is in the distribution, not in any particular number that may have been drawn from such a distribution.

From yaronf.ietf at gmail.com  Mon May 23 10:08:23 2016
From: yaronf.ietf at gmail.com (Yaron Sheffer)
Date: Mon, 23 May 2016 17:08:23 +0300
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57425A52.5090900@borg.org>
References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org>
Message-ID: <57430ED7.5030104@gmail.com>

> Let me try my own experiment:
>
> # strace ssh-keygen -t rsa
>
> Lots of output, only one mention of the string "random":
>
> [...]
> open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
> fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0
> poll([{fd=3, events=POLLIN}], 1, 10) = 1 ([{fd=3, revents=POLLIN}])
> read(3,
> "\255J\373\231\323\256\251^\314\207MqkC\332\222^\352\275\307\373\351bM\267\273\260$G\232\301\r",
> 32) = 32
> close(3) = 0
> [...]
>
> (Was I supposed to say "dsa"? Okay...tried that too, same result.)
>
> Looks to me like it read 256 bits. I would have expected it to read
> more, just to waste if nothing else.
>
> Nowhere near using up 4096-bits (if "using up" even is real). Maybe do
> both DSA and RSA?
> It still would only "use" 1/8 of a 4096-bit pool.
>
> -kb

Yes, interesting. I repeated the experiment on my Ubuntu 16.04, and ssh read 48 bytes. Still way too little. I can only speculate that they have their own PRNG which they seed from /dev/urandom.

Thanks,
Yaron

From bear at sonic.net  Mon May 23 14:09:46 2016
From: bear at sonic.net (Ray Dillinger)
Date: Mon, 23 May 2016 11:09:46 -0700
Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: 
References: <57409303.6080808@borg.org>
Message-ID: <5743476A.3060903@sonic.net>

On 05/22/2016 09:19 AM, Alexander Klimov wrote:
> The proper design is to use TRNG to seed DRBG (aka PRNG) and use only
> DRBG for crypto purposes. The idea that entropy of DRBG state can be
> lost due to its use is misleading. Once you have enough bits to seed
> DRBG (say, 384 bits for 256-bit security) you can use DRBG to
> generate all the keys you need.
>
> The only reason one may want to reseed DRBG (by getting more bits from
> TRNG) is if he is afraid that someone learned the DRBG state (say, by
> reading kernel memory). I guess it is not your case.

This is very close to true. It is certainly true if one trusts the algorithm and coding of one's DRBG and intends to produce less than a few trillion keys.

But, honestly, I sincerely question the idea that you need random numbers "early" in the boot process. It's like thinking that you have to be in the middle of a long-distance call before you can hook up your phone. We were building operating systems that could finish booting up without network connections a long time ago. Thinking that we've lost that technology is silly. A non-networked operating system on a machine with sensors can run a program capable of gathering entropy, gather entropy, and *then* start using the network.

So, if you're looking at a situation where anything is asking for key generation before bootup is even complete, you're looking at a design failure.
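The seed-once design quoted above can be illustrated with a minimal sketch. This is the principle only, not a vetted DRBG (use the kernel CSPRNG or a NIST SP 800-90A construction in real systems); SHAKE-256 here merely stands in for any strong keyed expansion:

```python
import hashlib
import os

# Seed once from the system TRNG, then derive as many keys as needed
# deterministically. Illustrates that DRBG state is not "used up" by
# producing output; NOT a substitute for a vetted DRBG construction.

seed = os.urandom(32)   # one-time investment of real entropy (256 bits)

def derive_key(seed, index, nbytes=32):
    """Derive the index-th key from the seed via SHAKE-256."""
    h = hashlib.shake_256()
    h.update(seed + index.to_bytes(8, "big"))
    return h.digest(nbytes)

keys = [derive_key(seed, i) for i in range(10000)]
print(len(set(keys)))   # 10000 distinct keys from one 256-bit seed
```

The caveat is exactly the one above: everything rests on the seed really containing that much unpredictability, and on trusting the expansion function.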
It is bad design to do something the hard way when there is an easy way that is more reliable. Bear -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bear at sonic.net Mon May 23 14:34:00 2016 From: bear at sonic.net (Ray Dillinger) Date: Mon, 23 May 2016 11:34:00 -0700 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <57425A52.5090900@borg.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> Message-ID: <57434D18.2090602@sonic.net> On 05/22/2016 06:18 PM, Kent Borg wrote: > Dammit, I can neither remember nor find that quote about how using a > deterministic process to make up random numbers is against nature, or > grace, or the universe. Like I say, I can't find it. > "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." -- John von Neumann >> [...] practically forever. > > You hedge. Why? If the crypto is good, if it hides the pool state, > what's the problem? At how many bits of draw does it become a problem? > And why then? Why the hedge? Even if the crypto is perfect, you still want an extra bit of state every time you double the amount of output you're going to produce. So, if making a few trillion additional keys, you'd want ~50 or so extra bits of state. Also, if making bigger individual reads of /dev/urandom. If you've got anything that's reading 2Kbytes at a time of output, then you want an extra 2Kbytes of RNG state. Try redirecting /var/log/* to /dev/random, like TAILS does, if you're really concerned about topping up state. But for TAILS that's more about not writing /var/log/* than it is about keeping the RNG pool topped up. Bear ____ "The real problem is not whether machines think but whether men do." -- B.F.
Skinner -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bear at sonic.net Mon May 23 14:52:22 2016 From: bear at sonic.net (Ray Dillinger) Date: Mon, 23 May 2016 11:52:22 -0700 Subject: [Cryptography] FBI Gripes About Crypto on Facebook/WhatsApp/Everywhere, People Rebuffing and Accepting "Costs"? In-Reply-To: References: Message-ID: <57435166.5080001@sonic.net> On 05/22/2016 10:03 AM, grarpamp wrote: > https://it.slashdot.org/story/16/04/06/1929201/top-fbi-attorney-worried-about-whatsapp-encryption > WhatsApp on Tuesday announced that all types of messages on the latest > version of its app are now automatically protected by end-to-end > encryption, and the FBI's top attorney is worried some of the > platform's more than 1 billion global users will take advantage of the > move to hide their crime- or terrorism-related communications. Um. One billion people is 14% of the world. Meaning, one person out of every seven. Assuming for the moment that there are more than seven criminals or terrorists, it seems likely that someone will in fact use it to hide crime or terrorism-related activities; otherwise criminals and terrorists would be underrepresented in the sample. On the other hand, there are vastly more people who will use it to secure information that criminals would otherwise steal and keep secure information that terrorists would otherwise use to plan attacks. Because there are vastly more people engaged in those activities than there are criminals and terrorists. Bear -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From kentborg at borg.org Mon May 23 12:51:18 2016 From: kentborg at borg.org (Kent Borg) Date: Mon, 23 May 2016 12:51:18 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> Message-ID: <57433506.7000602@borg.org> On 05/23/2016 12:13 AM, David Johnston wrote: > While I'm gainfully employed as an RNG designer and general crypto > security person, I hold the opinion that ignorance beats entropy. Hear, hear! I have long argued that an important consideration is the distance at which your foe is forced to observe. Consider the timing of a network interrupt. A CPU's system clock doesn't even exist outside the CPU chip (clean GHz-plus clock distribution is hard). So these digital chips go to the extra effort of including an analog PLL to multiply up from a far lower external oscillator that itself is fed only a very short distance to one of the chip leads. So if it is a fast Intel-ish chip, an observer just a few inches away will have a hard time knowing what the Time Stamp Counter's LSB will be at the instant the CPU reads it. And as the observer's distance increases, low-order bits become unknowable--to that observer. An observer at a couple meters is worse off than an observer hovering over the CPU, and an observer just at the other end of my last-mile DSL link (millisecond-order latency) is going to be in the dark about a lot of low-order bits in the TSC. That observer likely can estimate a little beyond the number of high-order zeros in the TSC (i.e., uptime). When building an RNG, merely putting it in a big metal box--say, the size of a computer--accomplishes a lot.
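Kent's low-order-bits argument can be sketched in a few lines of Python. Here time.perf_counter_ns() is a userland stand-in for the TSC, and yielding to the scheduler stands in for interrupt arrival; the pool construction is illustrative, not how any real kernel mixes entropy:

```python
import hashlib
import time

def harvest(pool: bytes, events: int = 64) -> bytes:
    """Fold the low-order bits of a fast counter into a hash pool.

    Only the low 16 bits of each sample are kept: the bits a distant
    observer cannot track.  In a kernel this sample would be the TSC
    read in an interrupt handler."""
    h = hashlib.sha256(pool)
    for _ in range(events):
        t = time.perf_counter_ns()
        h.update((t & 0xFFFF).to_bytes(2, "big"))
        time.sleep(0)          # yield; scheduling noise varies the samples
    return h.digest()

pool = harvest(b"initial pool")
```

How many of those 16 bits are actually unknowable depends on how far away the observer sits, which is the whole point of the distance argument.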
Unfortunately, ARM chips don't have counters running as fast as a TSC, so you get far less ignorance per interrupt per meter-to-which-you-can-push-off-your-foe. But this ignorance is still significant, if not entropy. -kb From leichter at lrw.com Mon May 23 06:02:07 2016 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 23 May 2016 06:02:07 -0400 Subject: [Cryptography] immortal quote about randomness In-Reply-To: <574286A2.2090709@av8n.com> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <574286A2.2090709@av8n.com> Message-ID: > There's no such thing as a random number. > If it's random, it's not a number. > If it's a number, it's not random. > You can have a random distribution over numbers, > but then the randomness is in the distribution, > not in any particular number that may have been > drawn from such a distribution. And of course parallel statements apply to entropy. Entropy is a property of a source of bits (well, values drawn from some set, but in most cases bits is good enough). If you have a bunch of bits ... you have a bunch of bits. Any entropy is in the source you drew it from, not in the bits you have. (A bit oversimplified, both for entropy and randomness, since the connection between a particular set of bits, a particular source, and what exactly you measure may be subtle - see Kolmogorov complexity - but the point remains. English is not a precise way to express mathematical concepts.) -- Jerry From tytso at mit.edu Mon May 23 11:35:56 2016 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 23 May 2016 11:35:56 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> Message-ID: <20160523153556.GC12817@thunk.org> On Sun, May 22, 2016 at 09:13:20PM -0700, David Johnston wrote: > While I'm gainfully employed as an RNG designer and general crypto security > person, I hold the opinion that ignorance beats entropy. > > In one sense, ignorance of the state of a system can be equated to that > system having entropy relative to the thing that is ignorant of the state of > the system. > > However we tend to think of entropy as being an intrinsic thing, arising > from underlying quantum uncertainty, rather than a relative thing. > > However we know we don't have a complete understanding of quantum physics or > quantum uncertainty, whereas we know all about ignorance. You can rely on > ignorance. If someone is ignorant of your key, the key works just fine in a > crypto system that is intended to prevent that person undermining security > in some way. I agree with this, and there are ways in which this can be useful --- for example, using the relative strength from multiple access points to seed a random number generator may be useful because the NSA analyst sitting in Fort Meade might not know whether the mobile phone in your knapsack is sitting on top of the desk or below it, and this would change the RSSI numbers that you might get. However, I do worry about this a bit to the extent that we sometimes don't know what we don't know, or more importantly, we don't know what the adversary might be able to find out. For example if you read the claims made by the CPU Jitter "True Random Number Generator", it essentially (albeit perhaps slightly unfairly) boils down to "The L1/L2 cache algorithms of an Intel CPU are horribly complex, and no one can figure them out, so we can treat the timing results as true randomness."
Well, maybe you and I can't figure them out, but maybe someone with a more detailed understanding of the implementation details of the Intel CPU could do a better job. So while I think it is a useful engineering tool, and it's something I've relied upon myself, to use it as a fundamental design principle could be dangerous. Cheers, - Ted From natanael.l at gmail.com Mon May 23 06:02:20 2016 From: natanael.l at gmail.com (Natanael) Date: Mon, 23 May 2016 12:02:20 +0200 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> Message-ID: On 23 May 2016 at 7:25 am, "David Johnston" wrote: > > While I'm gainfully employed as an RNG designer and general crypto security person, I hold the opinion that ignorance beats entropy. > > In one sense, ignorance of the state of a system can be equated to that system having entropy relative to the thing that is ignorant of the state of the system. Information IS surprise: https://plus.maths.org/content/information-surprise Meaning that ignorance actually is the source of entropy. We can't learn anything new if we know the seed and position of a deterministic system. Entropy (bits) is how much we can learn about a system from a given amount of information about that system. In cryptography we tend to settle for a small pool of secret entropy and derive computational entropy from it (i.e. an adversary with unbounded computational power can break it and find the seed, but not a limited one). -------------- next part -------------- An HTML attachment was scrubbed... URL: From sidney at sidney.com Mon May 23 03:06:29 2016 From: sidney at sidney.com (Sidney Markowitz) Date: Mon, 23 May 2016 19:06:29 +1200 Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <57425A52.5090900@borg.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> Message-ID: Kent Borg wrote on 23/05/16 1:18 PM: > Looks to me like it read 256 bits. I would have expected it would have > read more, just to waste if nothing else. > > Nowhere near using up 4096 bits (if "using up" even is real). Maybe do > both DSA and RSA? It still would only "use" 1/8 of a 4096-bit pool. There is a difference between checking every one of the 2^256 4096-bit numbers that could have been generated from that 256 bits of entropy and going through all the calculations needed to factor a 4096-bit number. However both will take you more time and resources than you have. Both take much longer than factoring a 256-bit RSA key. Which is why 256 bits is enough entropy to generate the key but the key has to be 4096 bits. From cryptography at lakedaemon.net Mon May 23 15:52:07 2016 From: cryptography at lakedaemon.net (Jason Cooper) Date: Mon, 23 May 2016 19:52:07 +0000 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <5743476A.3060903@sonic.net> References: <57409303.6080808@borg.org> <5743476A.3060903@sonic.net> Message-ID: <20160523195206.GD24391@io.lakedaemon.net> Hi Ray, On Mon, May 23, 2016 at 11:09:46AM -0700, Ray Dillinger wrote: ... > But, honestly, I sincerely question the idea that you need random > numbers "early" in the boot process. The caveat here is kernel ASLR. The address space is set up when the decompressor is run. It either needs an architecture-specific function like RDRAND/RDSEED, or to be handed a seed by the bootloader. There's also the whole suite of kernel self-protection mechanisms like stack canaries and so on. thx, Jason. From ben at links.org Mon May 23 17:07:32 2016 From: ben at links.org (Ben Laurie) Date: Mon, 23 May 2016 22:07:32 +0100 Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> Message-ID: On 23 May 2016 at 05:13, David Johnston wrote: > On 5/22/16 6:18 PM, Kent Borg wrote: > >> Dammit, I can neither remember nor find that quote about how using a >> deterministic process to make up random numbers is against nature, or grace, >> or the universe. Like I say, I can't find it. > > While I'm gainfully employed as an RNG designer and general crypto security > person, I hold the opinion that ignorance beats entropy. > > In one sense, ignorance of the state of a system can be equated to that > system having entropy relative to the thing that is ignorant of the state of > the system. > > However we tend to think of entropy as being an intrinsic thing, arising > from underlying quantum uncertainty, rather than a relative thing. > > However we know we don't have a complete understanding of quantum physics or > quantum uncertainty, whereas we know all about ignorance. You can rely on > ignorance. If someone is ignorant of your key, the key works just fine in a > crypto system that is intended to prevent that person undermining security > in some way. > > Deterministic processes are just fine at taking samples from a complex system > and turning them into a state that is hard to predict. While having 'full > entropy' numbers that therefore have no algorithmic connection between them > is a fine thing for random numbers, the whole concept of full entropy comes > from the assumption that the randomness of quantum uncertainty is a real > thing. If not, and the rules of the universe are actually deterministic, then > we have to fall back on ignorance of the state of complex systems in order > to create unpredictable numbers. > > So in that sense, ignorance beats quantum uncertainty.
You can rely on > ignorance, but have to trust the assumption that quantum uncertainty is > real. Gotta say, I really like this analysis. > If you make your crypto system such that it's secure providing either one of > ignorance of a complex system state or quantum uncertainty is true, then the > assumptions on which the security of the system is based will be more > robust. Why limit yourself to these two possibilities? From mitch at niftyegg.com Mon May 23 20:29:15 2016 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 23 May 2016 17:29:15 -0700 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <57425A52.5090900@borg.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> Message-ID: On Sun, May 22, 2016 at 6:18 PM, Kent Borg wrote: > > Let me try my own experiment: > > # strace ssh-keygen -t rsa .... > read(3, > "\255J\373\231\323\256\251^\314\207MqkC\332\222^\352\275\307\373\351bM\267\273\260$G\232\301\r", > 32) = 32 > close(3) = 0 > [...] > > (Was I supposed to say "dsa"? Okay...tried that too, same result.) > ..... > Looks to me like it read 256 bits. I would have expected it would have > read more, just to waste if nothing else. > Nowhere near using up 4096 bits (if "using up" even is real). Maybe do > both DSA and RSA? It still would only "use" 1/8 of a 4096-bit pool. Since the read() returns the count, a solution is for the process to sleep for some reasonable rand() seconds (or nanosleep()), do some math to know how many more bits to request, and then request them. As others have noted, this is a known issue, yet it is not a common computation. Where it might be common, other tricks can leverage the modest entropy bit counts returned by the read. And if it is both common and important, additional hardware and local services make sense. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed...
URL: From grarpamp at gmail.com Mon May 23 20:31:45 2016 From: grarpamp at gmail.com (grarpamp) Date: Mon, 23 May 2016 20:31:45 -0400 Subject: [Cryptography] Text of Burr-Feinstein encryption backdoor bill In-Reply-To: References: Message-ID: On 4/8/16, Henry Baker wrote: > https://assets.documentcloud.org/documents/2797124/Burr-Feinstein-Encryption-Bill-Discussion-Draft.pdf > > "Compliance with Court Orders Act of 2016" Current version... https://www.burr.senate.gov/imo/media/doc/BAG16460.pdf From hbaker1 at pipeline.com Tue May 24 11:43:50 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Tue, 24 May 2016 08:43:50 -0700 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? Message-ID: FYI -- https://www.maximintegrated.com/en/app-notes/index.mvp/id/1995 "In 1975 the Federal Communications Commission (FCC), the government agency that regulates radio frequency (RF) emissions in the United States, enacted new regulations called FCC Part 15. These were not directed at controlling equipment such as radio and TV transmitters, or aircraft-navigation and emergency beacons that deliberately radiate high-power RF energy. Instead, these regulations sought to control equipment that did not deliberately radiate RF energy such as televisions, automobiles, and low-power, unregulated RF radiators such as walkie-talkies and electronic remote controls. During the 1980s and 1990s, electronic devices from microwaves to cell phones proliferated. Cross interference between these devices became a problem. Traditional methods to address radiated emissions issues consisted of shielding, careful board layout, as well as filtering to reduce undesired radiated emissions. As electronics became smaller, another technique, Spread Spectrum, borrowed from communication applications was used. This article gives a background and history of spread spectrum and describes how it is used today as a technique to reduce radiated emissions in consumer electronic equipment." 
Q: How hard is it to diddle with the spreading codes on these clocking sources? I'd like to experiment with some longer codes. From kentborg at borg.org Tue May 24 10:05:16 2016 From: kentborg at borg.org (Kent Borg) Date: Tue, 24 May 2016 10:05:16 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <20160523153556.GC12817@thunk.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> Message-ID: <57445F9C.5030906@borg.org> On 05/23/2016 11:35 AM, Theodore Ts'o wrote: > I agree with this, and there are ways in which this can be useful --- > for example, using the relative strength from multiple access points > to seed a random number generator may be useful because the NSA > analyst sitting in Fort Meade might not know whether the mobile phone > in your knapsack is sitting on top of the desk or below it, and this > would change the RSSI numbers that you might get. I like that. Not a lot of bits, but some. > For example if you read the claims made by the CPU Jitter "True Random > Number Generator", it essentially (albeit perhaps slightly unfairly) > boils down to "The L1/L2 cache algorithms of an Intel CPU are horribly > complex, and no one can figure them out, so we can treat the timing > results as true randomness." Assume there is no jitter. Just consider that the TSC is running at over 2GHz. For an observer to know what value the CPU will read, that observer will have to know not only how the CPU might jitter (and let's assume zero), but also the observer needs to know the state of the clock. Not just how many ticks have gone by (hard already), but exactly *where* the edges of those ticks are or an LSB value will slip by. The observer needs precise phase information. Isn't this essentially an exercise in clean distribution of a crappy clock?
That clock only exists over the space of a few millimeters--but enough span that the "correct" phase information starts out ambiguous. Tracking a good clock is hard (the right answer is a win), tracking a crappy clock is harder (gotta know the specific wrong answer). GPS is designed to be as accurate as possible, yet its time distribution accuracy at best is nanoseconds. Frequency accuracy (an easier problem) is still only ten times better via GPS. But an observer of my TSC needs to do still better, tracking a crappy clock, without my cooperation, from how far away? Thought experiment: Best-case design a system that can precisely track the phase of a 2GHz CPU clock over a distance of meters. A clock that is referenced to a crystal that is not temperature compensated, multiplied up by a PLL that is designed only to be good enough, and then intentionally made worse with a spread-spectrum smear varying the frequency. Spend millions if you have to, be big and bulky, but track that clock edge. How confident are you that it can be done at all? And if it can be done, to a distance of how many meters? Now do it covertly and cheaply. -kb From leichter at lrw.com Mon May 23 22:44:18 2016 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 23 May 2016 22:44:18 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <20160523195206.GD24391@io.lakedaemon.net> References: <57409303.6080808@borg.org> <5743476A.3060903@sonic.net> <20160523195206.GD24391@io.lakedaemon.net> Message-ID: >> But, honestly, I sincerely question the idea that you need random >> numbers "early" in the boot process. > > The caveat here is kernel ASLR. The address space is set up when the > decompressor is run. It either needs an architecture-specific function > like RDRAND/RDSEED, or to be handed a seed by the bootloader. > > There's also the whole suite of kernel self-protection mechanisms like > stack canaries and so on. Let's think this through a bit.
Kernel ASLR, stack canaries, and so on, are there to protect against external code that finds holes. Early during boot, *there's no external code running*. We're before network initialization, so there's nothing coming in from the network links. Basically, if an attacker has managed to get code running at this point during boot, you don't have much hope anyway. So it seems to me you want to address a different issue: Not how do I get enough randomness to set up kernel ASLR and related mechanisms early in boot, but how do I *put off* setting up kernel ASLR and related mechanisms until I have a usable source of randomness? -- Jerry From mitch at niftyegg.com Mon May 23 21:25:54 2016 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 23 May 2016 18:25:54 -0700 Subject: [Cryptography] Text of Burr-Feinstein encryption backdoor bill In-Reply-To: References: Message-ID: On Mon, May 23, 2016 at 5:31 PM, grarpamp wrote: > On 4/8/16, Henry Baker wrote: > > > https://assets.documentcloud.org/documents/2797124/Burr-Feinstein-Encryption-Bill-Discussion-Draft.pdf > > > > "Compliance with Court Orders Act of 2016" > > Current version... > > https://www.burr.senate.gov/imo/media/doc/BAG16460.pdf Catch-22: "(4) all providers of communications services and products (including software) should protect the privacy of United States persons through implementation of appropriate data security and still respect the rule of law and comply with all legal requirements and court orders; " The technology to protect data and privacy from modern electronic threats domestic and foreign does not currently have any back door magic. The including of obfuscated is troubling to all that cannot spall. I suspect code like this is also at risk. https://www.yahoo.com/beauty/disney-world-staffers-share-dark-225731043.html The authors of the bill need to be reminded that "Tora Tora Tora" was sent in the clear. Tiger Tiger Tiger...
But wait, Yahoo Answers tells me: "tora is Japanese for "tiger", but in this case, "To" is the initial syllable of the Japanese word totsugeki, meaning "charge" or "attack", and "ra" is the initial syllable of raigeki, meaning "torpedo attack" This also implies that all data on compute services like *Amazon* EC2 is now a liability for Amazon. All web services including Etsy where multiple vendors have hosted credit card and transaction services are covered. And then there are tattoos. A whole show is crafted on secrets hidden in tattoos. Shave the skull and read the secret message. i.e. we must all be bald and naked... It is a difficult problem but this smells like a novice attempt at legislating something that they see done on TV. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From natanael.l at gmail.com Tue May 24 15:18:37 2016 From: natanael.l at gmail.com (Natanael) Date: Tue, 24 May 2016 21:18:37 +0200 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: References: <57409303.6080808@borg.org> <5743476A.3060903@sonic.net> <20160523195206.GD24391@io.lakedaemon.net> Message-ID: On 24 May 2016 7:15 pm, "Jerry Leichter" wrote: > > > >> But, honestly, I sincerely question the idea that you need random > >> numbers "early" in the boot process. > > > > The caveat here is kernel ASLR. The address space is set up when the > > decompressor is run. It either needs an architecture-specific function > > like RDRAND/RDSEED, or to be handed a seed by the bootloader. > > > > There's also the whole suite of kernel self-protection mechanisms like > stack canaries and so on. > Let's think this through a bit. Kernel ASLR, stack canaries, and so on, are there to protect against external code that finds holes. Early during boot, *there's no external code running*. We're before network initialization, so there's nothing coming in from the network links.
Basically, if an attacker has managed to get code running at this point during boot, you don't have much hope anyway. > > So it seems to me you want to address a different issue: Not how do I get enough randomness to set up kernel ASLR and related mechanisms early in boot, but how do I *put off* setting up kernel ASLR and related mechanisms until I have a usable source of randomness? Let the bootloader / UEFI do it? If the question is how to load and run code complex enough to interact with potentially arbitrary hardware to fetch meaningful entropy, before the kernel has had a chance to prepare anything, without making the boot process more complex, it seems like that's the only clear answer. The bootloader has to already know a thing or two anyway about the hardware and get a few things up and running, and letting it pass on entropy to the kernel as it loads it to memory keeps the overhead low. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mok-kong.shen at t-online.de Tue May 24 17:06:52 2016 From: mok-kong.shen at t-online.de (mok-kong shen) Date: Tue, 24 May 2016 23:06:52 +0200 Subject: [Cryptography] On a paper on a new probabilistic public-key encryption based on RSA In-Reply-To: <5738D2E6.8080809@t-online.de> References: <5738D2E6.8080809@t-online.de> Message-ID: <5744C26C.4020803@t-online.de> [Addendum:] It seems further debatable whether the choice of the subgroups M and H, as stated in the paper, is optimal for the security to be achieved by the scheme. M. K. Shen From pgut001 at cs.auckland.ac.nz Tue May 24 22:14:31 2016 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Wed, 25 May 2016 02:14:31 +0000 Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: References: <57409303.6080808@borg.org> <5743476A.3060903@sonic.net> <20160523195206.GD24391@io.lakedaemon.net>, Message-ID: <9A043F3CF02CD34C8E74AC1594475C73F4C89DF3@uxcn10-5.UoA.auckland.ac.nz> Jerry Leichter writes: >So it seems to me you want to address a different issue: Not how do I get >enough randomness to set up kernel ASLR and related mechanisms early in boot, >but how do I *put off* setting up kernel ASLR and related mechanisms until I >have a usable source of randomness? You don't need a usable (where I assume "usable" means "capable of generating crypto keys") source of randomness; for ASLR and stack canaries and the like you just need enough to make it hard for an attacker (meaning a dumb piece of code, not an active, adaptive attack) to guess. 16 bits should be fine (see various analyses of this topic, in practice it's anywhere from 12 to 24 bits, based more on hardware limits than anything else). Peter. From grarpamp at gmail.com Wed May 25 02:44:11 2016 From: grarpamp at gmail.com (grarpamp) Date: Wed, 25 May 2016 02:44:11 -0400 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: References: Message-ID: On 5/24/16, Henry Baker wrote: > https://www.maximintegrated.com/en/app-notes/index.mvp/id/1995 > > Q: How hard is it to diddle with the spreading codes on these clocking > sources? I'd like to experiment with some longer codes. See their spec sheet... https://datasheets.maximintegrated.com/en/ds/DS1086-DS1086Z.pdf I'm looking for links to different whitepapers... where dither driving the spread is not pretty triangle frequency and amplitude, but is a random shared key. And it's driving an RF tx/rx capable of extremely wide spread range. The other option is to tx/rx faux wideband noise modulo a random spectrum key. Pointers?
In-Reply-To: References: Message-ID: > On 5/24/16, Henry Baker wrote: >> https://www.maximintegrated.com/en/app-notes/index.mvp/id/1995 >> >> Q: How hard is it to diddle with the spreading codes on these clocking >> sources? > > See their spec sheet... > https://datasheets.maximintegrated.com/en/ds/DS1086-DS1086Z.pdf > > I'm looking for links to different whitepapers... where dither driving > the spread is not pretty triangle frequency and amplitude, but > is a random shared key. And it's driving an RF tx/rx capable of > extremely wide spread range. The other option is to tx/rx faux > wideband noise modulo a random spectrum key. Pointers? It depends on the application. Cryptographic spreading codes and wide bandwidths are seen in military radiations. CDMA spread-spectrum mobile phones used Walsh codes. I haven't kept up with WCDMA and LTE since I got sucked into crypto but the principle is the same. Those little clock oscillators typically use an LFSR since the goal is simply to smear out the clock peak to keep within emissions limits. But any long random-looking sequence into a VCO would do. Bluetooth's frequency hopping spread spectrum was not designed to resist predicting the sequence. Quite the opposite. This is wireless 101. Any modern wireless comms textbook should cover it. If you're talking side channel mitigation or FI tolerance, then it's currently open season on clever ideas. But that sounds too much like my day job. CAZAC codes for stealth canaries anyone? From hbaker1 at pipeline.com Wed May 25 13:25:01 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 25 May 2016 10:25:01 -0700 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: References: Message-ID: At 11:44 PM 5/24/2016, grarpamp wrote: >On 5/24/16, Henry Baker wrote: >> https://www.maximintegrated.com/en/app-notes/index.mvp/id/1995 >> >> Q: How hard is it to diddle with the spreading codes on these clocking >> sources?
I'd like to experiment with some longer codes. > >See their spec sheet... > >https://datasheets.maximintegrated.com/en/ds/DS1086-DS1086Z.pdf > >I looking for links to different whitepapers... where dither driving the spread is not pretty triangle frequency and amplitude, but is a random shared key. > >And it's driving an RF tx/rx capable of extremely wide spread range. > >Other option is to tx/rx faux wideband noise modulo a random spectrum key. > >Pointers? Thanks for the link. This particular chip can be hacked, but perhaps not enough. It's possible that none of the existing 'spread spectrum' clock chips can be hacked enough. In that case, it may be necessary to 'emulate' one using a free-running state machine -- perhaps synchronized in some way (phase locked loop?) to a low frequency time standard. From hbaker1 at pipeline.com Wed May 25 20:03:24 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 25 May 2016 17:03:24 -0700 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: <87y46xogjq.fsf@setec.io> References: <87y46xogjq.fsf@setec.io> Message-ID: At 04:37 PM 5/25/2016, Harlan Lieberman-Berg wrote: >Henry Baker writes: >> Q: How hard is it to diddle with the spreading codes on these clocking >> sources? I'd like to experiment with some longer codes. > >It's worth noting, you'll very probably be in violation of whatever >country you belong to's laws if you mess with those codes without the >proper license. > >In the US, for example, even amateur radio operators otherwise allowed >to do way more than the general public aren't necessarily allowed to >transmit spread spectrum outside certain spreading codes. > >cf https://www.tapr.org/ss_fcc.html and WT 97-12. >-- >Harlan Lieberman-Berg >~hlieberman Thanks for the info, but this device isn't a *radio*, it's a *computer*, and a really low power one at that. So far as I know, there's no antenna anywhere on this device. I'm only interested in hacking its f'ing clock! 
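The LFSR-driven dither dj described upthread can be sketched concretely. This is a minimal illustration of the mechanism, not the DS1086's actual (undocumented) circuit; the 4-bit register and taps here are chosen purely so the maximal-length behaviour is easy to verify by eye:

```python
def lfsr_states(seed: int, tap_bits: tuple, nbits: int, steps: int) -> list:
    """Fibonacci LFSR, shifting right each step; tap_bits are 0-indexed
    bit positions XORed together to form the bit fed back into the MSB."""
    state, out = seed, []
    for _ in range(steps):
        out.append(state)
        fb = 0
        for b in tap_bits:
            fb ^= (state >> b) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# 4-bit register with feedback = bit0 XOR bit1 (x^4 + x^3 + 1, primitive):
# the register walks through all 15 nonzero states before repeating,
# which is what smears the clock's spectral peak across a band.
states = lfsr_states(0b1001, (0, 1), nbits=4, steps=16)
print(states[0] == states[15], len(set(states[:15])))  # -> True 15
```

A real spread-spectrum clock feeds bits like these (from a longer register) into the VCO's control input; replacing the fixed taps with a keyed sequence is exactly the modification being discussed in this thread.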
If this device is "transmitting" anything at all, then there's something dreadfully wrong with it, and/or someone else has already hacked it. From hlieberman at setec.io Wed May 25 19:37:29 2016 From: hlieberman at setec.io (Harlan Lieberman-Berg) Date: Wed, 25 May 2016 19:37:29 -0400 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: References: Message-ID: <87y46xogjq.fsf@setec.io> Henry Baker writes: > Q: How hard is it to diddle with the spreading codes on these clocking > sources? I'd like to experiment with some longer codes. It's worth noting, you'll very probably be in violation of whatever country you belong to's laws if you mess with those codes without the proper license. In the US, for example, even amateur radio operators otherwise allowed to do way more than the general public aren't necessarily allowed to transmit spread spectrum outside certain spreading codes. cf https://www.tapr.org/ss_fcc.html and WT 97-12. -- Harlan Lieberman-Berg ~hlieberman From grarpamp at gmail.com Wed May 25 14:50:45 2016 From: grarpamp at gmail.com (grarpamp) Date: Wed, 25 May 2016 14:50:45 -0400 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: References: Message-ID: On 5/25/16, dj at deadhat.com wrote: > It depends on the application. > CAZAC codes for stealth canaries anyone? > Cryptographic spreading codes and wide bandwidths are seen in military > *radiations*. *This*, moar liek this. Imagine noise radiator capable of making your spectrum analyzer look like /dev/urandom across the board. There's no center frequency, no clock, no freq hopping, no spreading, no observables, no off the shelf wireless hardware or reference design... it's not based on that. To any viewer, it's just noise. To you and your peers who hold, say, a shared XOR key for data and a seed for DRBG noise, it looks like data... 
lots of data ;-) With achievable datarate, error correction, and unjammability governed by the range of spectrum you can generate noise over. You could even mimic within existing spectra if need be. The amplifiers and radiators to cover the spectrum are hardware. Everything else is SDR. There is at least one good paper on this, particularly involving GNURadio style SDR as the enabling basis, but I forgot the magic search terms to find it again. While not the one in mind (and not necessarily from the new SDR guerrilla crowd), these are somewhat relevant... Digital Chaotic Communications https://smartech.gatech.edu/bitstream/handle/1853/34849/michaels_alan_j_200908_phd.pdf Synchronization in Cognitive Overlay Systems http://lib.tkk.fi/Dipl/2012/urn100685.pdf Covert Ultrawideband Random Noise papers by Jack Chuang and Ram Narayanan... https://etda.libraries.psu.edu/files/final_submissions/3142 From tytso at mit.edu Wed May 25 22:25:26 2016 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 25 May 2016 22:25:26 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <57445F9C.5030906@borg.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> Message-ID: <20160526022526.GA5509@thunk.org> On Tue, May 24, 2016 at 10:05:16AM -0400, Kent Borg wrote: > > For example if you read the claims made by the CPU Jitter "True Random > > Number Generator", it essentially (albeit perhaps slightly unfairly) > > boils down to "The algorithms L1/L2 cache of an Intel CPU are horribly > > complex, and no one can figure them out, so we can treat the timing > > results as true randomness." > > Assume there is no jitter. Just consider that the TSC is running at over > 2GHz.
> > For an observer to know what value the CPU will read, that observer will > have to know not only how the CPU might jitter (and let's assume zero), but > also the observer needs to know the state of the clock. Not just how many > ticks have gone by (hard already), but exactly *where* the edge of those > ticks are or an LSB value will slip by. The observer needs precise phase > information. Right, but what are you measuring that CPU clock against? In the absence of interrupts if you are running something in a tight loop, and then periodically sampling the TSC, then if there is no jitter, the only thing which is unknown is the starting offset of the TSC. So maybe that's ten bits of entropy. But that's *all* which is unknowable. Running the jitter "true random number generator" continuously isn't going to change how many bits of initial uncertainty there were --- just how many bits you've extracted out. Keep in mind that on many hardware implementations, there is only a single crystal-controlled oscillator, and all of the clocks are generated by using various divide by N circuits. So you won't even get any uncertainty caused by two different oscillators beating against one another. Now, if you have interrupts, then you may have additional bits of uncertainty. But that's not the claim of the jitter true random number generator. The claim is that you can run in a tight loop, and continuously generate lots of high-quality, "true" random numbers. - Ted From fedor.brunner at azet.sk Thu May 26 03:59:44 2016 From: fedor.brunner at azet.sk (Fedor Brunner) Date: Thu, 26 May 2016 09:59:44 +0200 Subject: [Cryptography] "60 Minutes" hacks Congressman's phone In-Reply-To: References: Message-ID: <5746ACF0.9070507@azet.sk> Kevin W. Wall: > On Mon, Apr 18, 2016 at 10:23 AM, Henry Baker wrote: >> FYI -- >> >> http://www.cbsnews.com/news/60-minutes-hacking-your-phone/ > [big snip] >> Rep.
Ted Lieu: You cannot have 300-some million Americans-- and really, right, >> the global citizenry be at risk of having their phone conversations >> intercepted with a known flaw, simply because some intelligence agencies might >> get some data. That is not acceptable. > > If these are the same SS7 vulnerabilities that were widely discussed > in the WP (e.g., > https://www.washingtonpost.com/news/the-switch/wp/2014/12/18/german-researchers-discover-a-flaw-that-could-let-anyone-listen-to-your-cell-calls-and-read-your-texts/) > and other news media outlets in Dec 2014 (and it certainly sounds like > it) then the > only explanation is that the intelligence community is responsible > for them still > not being fixed. Thinking these vulnerabilities will remain secret is > foolish. Lieu is > right; those people should be fired. > > -kevin > There is an interesting video describing the details of SS7 manipulation: SS7: Locate. Track. Manipulate. You have a tracking device in your pocket Tobias Engel https://media.ccc.de/v/31c3_-_6249_-_en_-_saal_1_-_201412271715_-_ss7_locate_track_manipulate_-_tobias_engel From ryacko at gmail.com Thu May 26 15:29:25 2016 From: ryacko at gmail.com (Ryan Carboni) Date: Thu, 26 May 2016 12:29:25 -0700 Subject: [Cryptography] A promising method to thwart global surveillence Message-ID: The Russian Illegals spy ring in New York used steganography. The Caliphate cell in Brussels used truecrypt files uploaded to cyberlockers in Turkey. But the grugq notes that truecrypt files would probably have a fixed size (and even with a random length, it would still round to kilobyte sizes), so it wouldn't be so simple. Obviously if state-level actors use these methods against the NSA, steganography does have a good role to play. Problem is that machine learning has advanced substantially.
In a worst case scenario, it will be obvious that you have steganographic files, that is if photodna hashes are similar for many files, but fuzzy hashes aren't as similar. The best that could be done would be to make automated scans more probabilistic and less reliable (I have tens of thousands of files on my computer), by embedding encrypted data steganographically in images in the PDF file. The text and images of the PDF file could be procedurally generated. But I'm not an expert. I'm just pointing out what makes sense to me. From mitch at niftyegg.com Thu May 26 20:45:40 2016 From: mitch at niftyegg.com (Tom Mitchell) Date: Thu, 26 May 2016 17:45:40 -0700 Subject: [Cryptography] A promising method to thwart global surveillence In-Reply-To: References: Message-ID: On Thu, May 26, 2016 at 12:29 PM, Ryan Carboni wrote: > The Russian Illegals spy ring in New York used steganography. > > The Caliphate cell in Brussels used truecrypt files uploaded to > cyberlockers in Turkey. But the grugq notes that truecrypt files would > probably have a fixed size (and even with a random length, it would > still round to kilobyte sizes), so it wouldn't be so simple. > .... > in the PDF file. The text and images of the PDF file could be > procedurally generated. > If one was willing to corrupt a truecrypt file algorithmically and perhaps strip or hack checksum checking, multiple layers of hidden messages could be hidden in the apparently random bits. This applies to any encrypted or encoded message. PDF files are rich in possible steganography tricks. PostScript is even richer because it is a rich programming language as well. http://partners.adobe.com/public/developer/en/pdf/PDFReference.pdf "PDF character set is divided into three classes, called regular, delimiter, and white-space characters. This classification determines the grouping of characters into tokens, except within strings, streams, and comments; different rules apply in those contexts." Start with comments...
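The embedding step Ryan gestures at ("embedding encrypted data steganographically in images") is, at its simplest, least-significant-bit substitution. A toy sketch follows, with a plain bytearray standing in for decoded pixel bytes; a real tool would encrypt the payload first and scatter the bits with a keyed permutation, neither of which is shown here:

```python
# Bare LSB embedding: hide each payload bit in the least-significant
# bit of one cover byte (think raw pixel channel values). The cover
# bytes change by at most 1, which is what makes it hard to see.

def embed(cover: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the LSB only
    return out

def extract(stego: bytes, nbytes: int) -> bytes:
    bits = [stego[i] & 1 for i in range(nbytes * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(nbytes)
    )

cover = bytearray(range(256)) * 2          # 512 "pixel" bytes
stego = embed(cover, b"hidden")
assert extract(stego, 6) == b"hidden"      # payload round-trips
```

Tom's PDF/PostScript tricks below are the same idea one level up: the cover is document structure (comments, kerning, font programs) rather than pixel LSBs.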
Font-hackery alone makes some PDF/PostScript messages odd to read and is perhaps a handle for steganography too. i.e ignore all serif and kerning characters, or key on all serif characters. The secret key could be hidden thus: Yes *Bob* *is* not *your uncle, Fred is*. Font files can be scrambled Rot13 and beyond. A Turtle graphics program could draw a message... or a jumble depending on a simple transformation to the source. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwilson at wisc.edu Thu May 26 22:37:25 2016 From: rwilson at wisc.edu (Bob Wilson) Date: Thu, 26 May 2016 21:37:25 -0500 Subject: [Cryptography] Radios vs. computers Message-ID: > Thanks for the info, but this device isn't a *radio*, it's a *computer*, > and a really low power one at that. So far as I know, there's no antenna > anywhere on this device. I'm only interested in hacking its f'ing clock! > > If this device is "transmitting" anything at all, then there's something > dreadfully wrong with it, and/or someone else has already hacked it. There is no computer that is not a radio transmitter. For a personal computer to be sold in the US, all that fancy metal stuff around the cabinet edges, the conductive layers put on a plastic case, etc., are all intended to try to control what it radiates so as to meet FCC specifications. Your computer will have been certified to meet certain requirements, differing depending on the expected use environment. Even a little "wall wart" power supply that you plug into the wall to charge a cell phone or to run a small appliance probably has enough high-transient switching going on that it is radiating fairly strong radio signals. (Older linear supplies don't radiate much but they use more expensive copper and iron so they are disappearing.) 
I can remember hearing music played by the ORACLE at Oak Ridge in 1953, I think it was, by turning on a nearby radio and running appropriate code, and a decade later similarly for the little IBM 1620. What you want to do may not depend on your computer transmitting, but someone else wanting to hear what you are doing might well take advantage of it. Try putting a radio near your computer(s) and running a variety of programs. The radio will definitely pick up signals, but whether it puts out sounds that you can hear depends on how it is choosing to demodulate those signals as well as what frequencies it detects. AM will more likely produce a sound you can hear than FM would, but the signals are there even if they don't get demodulated in a way your ears pick them up. Bob Wilson From dave at horsfall.org Thu May 26 22:48:23 2016 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 27 May 2016 12:48:23 +1000 (EST) Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: <87y46xogjq.fsf@setec.io> References: <87y46xogjq.fsf@setec.io> Message-ID: On Wed, 25 May 2016, Harlan Lieberman-Berg wrote: > It's worth noting, you'll very probably be in violation of whatever > country you belong to's laws if you mess with those codes without the > proper license. > > In the US, for example, even amateur radio operators otherwise allowed > to do way more than the general public aren't necessarily allowed to > transmit spread spectrum outside certain spreading codes. Australian amateurs are allowed to use SS (and I have) provided the authorities have been notified of the spreading code (which sorta defeats the purpose); we can also use crypto, for sensitive search and rescue messages (and I have). On one memorable episode, one of us, knowing that the reptiles of the press were listening on their scanners, came up on CW (Morse) to announce the news that they'd found the baby's corpse... 
-- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From hbaker1 at pipeline.com Thu May 26 23:47:35 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 26 May 2016 20:47:35 -0700 Subject: [Cryptography] Radios vs. computers In-Reply-To: References: Message-ID: At 07:37 PM 5/26/2016, Bob Wilson wrote: >>Thanks for the info, but this device isn't a *radio*, it's a *computer*, >>and a really low power one at that. So far as I know, there's no antenna >>anywhere on this device. I'm only interested in hacking its f'ing clock! >> >>If this device is "transmitting" anything at all, then there's something >>dreadfully wrong with it, and/or someone else has already hacked it. > >There is no computer that is not a radio transmitter. For a personal computer to be sold in the US, all that fancy metal stuff around the cabinet edges, the conductive layers put on a plastic case, etc., are all intended to try to control what it radiates so as to meet FCC specifications. Your computer will have been certified to meet certain requirements, differing depending on the expected use environment. Even a little "wall wart" power supply that you plug into the wall to charge a cell phone or to run a small appliance probably has enough high-transient switching going on that it is radiating fairly strong radio signals. (Older linear supplies don't radiate much but they use more expensive copper and iron so they are disappearing.) > >I can remember hearing music played by the ORACLE at Oak Ridge in 1953, I think it was, by turning on a nearby radio and running appropriate code, and a decade later similarly for the little IBM 1620. What you want to do may not depend on your computer transmitting, but someone else wanting to hear what you are doing might well take advantage of it. Try putting a radio near your computer(s) and running a variety of programs. 
The radio will definitely pick up signals, but whether it puts out sounds that you can hear depends on how it is choosing to demodulate those signals as well as what frequencies it detects. AM will more likely produce a sound you can hear than FM would, but the signals are there even if they don't get demodulated in a way your ears pick them up. (Yes, in my first job as an IBM 1401 nanny, we listened to the AM radio to see what the computer was doing. Nowadays, you probably need a downconverter, but the idea is the same.) All true, but other than regulating for RFI (interference), computers aren't legally radios, and don't fall under the same reqs. I would imagine that many dimmable LED lights generate far more RFI than computers -- due to their use of pulse-width (or any other type of modulation) in order to get the dimming capability. Everyone on this list should be afraid -- be very afraid -- of LED lights in their homes, as the hack to make them into audio bugs (which can be listened to from miles away with a decent telescope) is quite trivial (due to poor design, many LED lights might already be modulating the LED with sound frequencies purely by accident!). Only slightly more difficult is converting an LED light into a wifi bug. (If the intel agencies can already listen to audio waves by logging minute vibrations of window panes, then modulating an LED light is child's play.) Such modulated LED light can still be detected and read even when window blinds are shut, in a manner similar to being able to detect which TV channel someone is watching based upon correlating the modulated light from someone's window with the overall brightness of a particular TV channel. From leichter at lrw.com Fri May 27 06:58:37 2016 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 27 May 2016 06:58:37 -0400 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? 
In-Reply-To: References: <87y46xogjq.fsf@setec.io> Message-ID: >> In the US, for example, even amateur radio operators otherwise allowed >> to do way more than the general public aren't necessarily allowed to >> transmit spread spectrum outside certain spreading codes. > > Australian amateurs are allowed to use SS (and I have) provided the > authorities have been notified of the spreading code (which sorta defeats > the purpose); Depends on what you think the purpose is. If it's to communicate through noisy channels, or just to experiment to improve or even just personally understand the technology, or to provide a public service - then it's within the ambit of the amateur services, at least as they are defined in the US. If it's defined to include secret communications, it's definitely outside the US definitions. > we can also use crypto, for sensitive search and rescue > messages (and I have). I'm not sure that's ever permitted in the US under any circumstances. -- Jerry From kentborg at borg.org Fri May 27 10:12:28 2016 From: kentborg at borg.org (Kent Borg) Date: Fri, 27 May 2016 10:12:28 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <20160526022526.GA5509@thunk.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> <20160526022526.GA5509@thunk.org> Message-ID: <574855CC.3080401@borg.org> On 05/25/2016 10:25 PM, Theodore Ts'o wrote: > Right but what are you measuring that CPU clock against? In the > absence of interrupts if you are running something in a tight loop, > and then periodically sampling the TSC, then if there is no jitter, > the only thing which is unknown is the starting offset of the TSC. Sorry, I am talking about measuring against external interrupts. I guess I am promoting that old trick of beating two clocks against each other. 
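Kent's "beating two clocks against each other" and Ted's tight-loop description can be made concrete. This is a sketch only: `time.perf_counter_ns()` stands in for a direct RDTSC read, and nothing here demonstrates actual entropy; whether the deltas are unpredictable is exactly the point in dispute, and it is decided by their distribution, not by the sampling code:

```python
import time

# What a "CPU jitter" collector does mechanically: sample a fast counter
# in a tight loop and keep the low bits of successive deltas. On real
# hardware this would be RDTSC; perf_counter_ns() is a rough stand-in.

def jitter_samples(n: int) -> list:
    samples = []
    last = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        samples.append((now - last) & 0xFF)  # low 8 bits of each delta
        last = now
    return samples

deltas = jitter_samples(1000)
print(len(set(deltas)), "distinct delta values out of 256 possible")
```

A wide spread of distinct values is necessary but nowhere near sufficient: as Ted argues, on a single-crystal system most of that variation may be fully determined by the initial TSC offset.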
But I am impressed that one clock (in the case of Intel chips) is pretty special: it is running very fast, it is physically small (does not even exist beyond a span of a few mm), it is designed to be only mostly regular and not particularly stable. It drives a counter that can be sampled in response to an interrupt. As a bonus, this interrupt servicing is itself very complex--but I don't trust that either. The other clock (interrupt) has to be much slower: The CPU is mostly for doing other work and doesn't want to spend all its time servicing interrupts, and it is physically incapable of servicing interrupts at anything very close to its internal clock speed. It also seems important here that the TSC is running fast. We aren't talking lots of big fat nanoseconds here, we are interested in the precise phase on a sub-nanosecond period. I don't think we have to pine for sloppy mechanical stuff like keyboard and mouse activity, I think any interrupt from any other subsystem will do--let's fudge it and say "subsystem with its own crystal". Certainly anything so external as a network interrupt is great. Is there a term for how far a photon can travel in a clock period? Well, whatever that might be called, if the physical distance of a second clock is on-order that far away--inches in this case--it feels like the problem changes. It seems there is real entropy in the analog aspects inside the CPU and there are theoretical problems with how well that could ever be communicated to a distance, and similar problems with how well it could ever be correlated at a distance. Or am I being overly impressed by how fast 2GHz is? -kb, the Kent who remembers kilocycles. From bear at sonic.net Fri May 27 16:17:24 2016 From: bear at sonic.net (Ray Dillinger) Date: Fri, 27 May 2016 13:17:24 -0700 Subject: [Cryptography] Entropy Needed for SSH Keys?
In-Reply-To: <574855CC.3080401@borg.org> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> <20160526022526.GA5509@thunk.org> <574855CC.3080401@borg.org> Message-ID: <5748AB54.5090608@sonic.net> On 05/27/2016 07:12 AM, Kent Borg wrote: > Is there a term for how far a photon can travel in a clock period? Well, > whatever that might be called, if the physical distance of a second > clock is on-order that far away--inches in this case--it feels like the > problem changes. IIRC, one nanosecond was once defined to me as the approximate amount of time it takes for light to travel fifteen centimeters. So, something just across the room from you is probably fifteen to thirty light-nanoseconds away, depending on the size of the room. Does this matter much, in terms of creating useful interference patterns? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From decoy at iki.fi Fri May 27 20:53:09 2016 From: decoy at iki.fi (Sampo Syreeni) Date: Sat, 28 May 2016 03:53:09 +0300 (EEST) Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <5748AB54.5090608@sonic.net> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> <20160526022526.GA5509@thunk.org> <574855CC.3080401@borg.org> <5748AB54.5090608@sonic.net> Message-ID: On 2016-05-27, Ray Dillinger wrote: > IIRC, one nanosecond was once defined to me as the approximate amount > of time it takes for light to travel fifteen centimeters. In vacuum (and mostly in dry air), you can calculate it to be exactly 299792458 m/s * 1e-9 s ~= 30 cm. So about a foot is the basic rule. 
No need to remember any of that, you can get it straight from Wikipedia. In silico it's more complicated, because of the varying dielectric constant of different kinds of doping levels, surface structure, stray capacitance, and whatnot. But based on what I've seen of the possible permittivities, I'd wager the nanosecond can range anywhere from 28cm down to as little as 2cm, depending on the substrate. > Does this matter much, in terms of creating useful interference > patterns? In interference, it matters down to the twelfth digit. -- Sampo Syreeni, aka decoy - decoy at iki.fi, http://decoy.iki.fi/front +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2 From mitch at niftyegg.com Fri May 27 21:35:38 2016 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 27 May 2016 18:35:38 -0700 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <5748AB54.5090608@sonic.net> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> <20160526022526.GA5509@thunk.org> <574855CC.3080401@borg.org> <5748AB54.5090608@sonic.net> Message-ID: On Fri, May 27, 2016 at 1:17 PM, Ray Dillinger wrote: > > > On 05/27/2016 07:12 AM, Kent Borg wrote: > > > Is there a term for how far a photon can travel in a clock period? Well, > > whatever that might be called, if the physical distance of a second > > clock is on-order that far away--inches in this case--it feels like the > > problem changes. > > IIRC, one nanosecond was once defined to me as the approximate > amount of time it takes for light to travel fifteen centimeters. > Aha, yes, Grace Hopper had an answer to the nanosecond. One key point is that it is a wire, not c in a vacuum. A trace will have a slightly different length.
https://www.youtube.com/watch?v=JEpsKnWZrJ8 <-- Grace Hopper Of interest: an external entropy generator could be quite small and only need three or four pins. A lot of energy has been given to precision components, but entropy generators could fall into an easy-to-manufacture, less precise category. Some of the phase lock loop clock recovery logic blocks could also be coaxed into generating entropy. PLL blocks are key to many of the very fast links between devices like SATA disks and DRAM. https://www.rambus.com/dllpll-on-a-dram/ http://www.ti.com.cn/cn/lit/ug/sprufw0b/sprufw0b.pdf Communication over these fast links demands a quality clock but behind the curtain will be a sub block that maintains that quality with tiny push/pull surges into the clock generator (VCO) block and other tricks. Those pulses could suffice as one component of entropy in a system. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From jthorn at astro.indiana.edu Fri May 27 22:21:17 2016 From: jthorn at astro.indiana.edu (Jonathan Thornburg) Date: Fri, 27 May 2016 22:21:17 -0400 Subject: [Cryptography] Entropy Needed for SSH Keys? In-Reply-To: <5748AB54.5090608@sonic.net> References: <57409303.6080808@borg.org> <20160523082759.1754d767@pc1> <57425A52.5090900@borg.org> <61fa3122-0243-d829-3fc7-74c0cb333950@deadhat.com> <20160523153556.GC12817@thunk.org> <57445F9C.5030906@borg.org> <20160526022526.GA5509@thunk.org> <574855CC.3080401@borg.org> <5748AB54.5090608@sonic.net> Message-ID: <20160528022101.GA25887@cobalt.astro.indiana.edu> On Fri, May 27, 2016 at 01:17:24PM -0700, Ray Dillinger wrote: > IIRC, one nanosecond was once defined to me as the approximate > amount of time it takes for light to travel fifteen centimeters. It's actually 29.98cm = 11.80 inches (in round numbers, 1 foot).
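The round numbers traded in this subthread check out; here is the two-line calculation, where the in-substrate fraction is a rough rule of thumb (on the order of 0.5c to 0.7c for common PCB materials), not a measured value:

```python
C = 299_792_458  # speed of light in vacuum, m/s (exact by definition)

def ns_distance_cm(velocity_fraction: float = 1.0) -> float:
    """Distance covered in one nanosecond, in centimetres."""
    return C * velocity_fraction * 1e-9 * 100

print(f"vacuum: {ns_distance_cm():.2f} cm")  # -> vacuum: 29.98 cm
print(f"PCB trace (~0.5c, assumed): {ns_distance_cm(0.5):.2f} cm")
```

So roughly a foot per nanosecond in free space, and noticeably less in copper on a dielectric, which is Tom's "it is a wire" point.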
-- -- "Jonathan Thornburg [remove -animal to reply]" Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA "There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time." -- George Orwell, "1984" From thierry.moreau at connotech.com Sat May 28 15:26:00 2016 From: thierry.moreau at connotech.com (Thierry Moreau) Date: Sat, 28 May 2016 19:26:00 +0000 Subject: [Cryptography] Anybody sorted out the MQV patent claims? Message-ID: <5749F0C8.4010008@connotech.com> Hi, While looking at discrete logarithm signatures in relation with Diffie-Hellman key establishment, I (re-)discovered a whole facet of public key cryptography. Certicom is aggressive in asserting intellectual property rights in this area. In a 2005 letter to a standardization body, Certicom indicated four US patents as pertaining to the MQV protocol (two "continuation in part") and one european patent. US 5,896,455 --> US 5,761,305 US 6,785,813 --> US 6,122,736 (EP 0 739 105) In all of these, the independent claims include the limitation that each party computes a digital signature value separate from the ephemeral D-H shared secret. However, the MTI (ref [47] in [0]) protocol (the seminal idea for MQV, HMQV, and OAKE [0] as well) precisely *avoids* such a signature value (and thus avoids the DSA-type vulnerability to ephemeral private random number leakage -- neat achievement). Thus, I see the above four patents as claiming something other than MQV. Anybody ever sorted this out? The question pertains to patents that appears either expired (in first-to-file jurisdictions) or about to expire (in first-to-invent jurisdiction). I ask because the technical issues at stake appear relatively simple: compare figure 2 in US 6,122,736 and/or claim 1 in EP 0 739 105 with the basic MQV operating principle. 
- Thierry Moreau [0] Andrew C. Yao and Yunlei Zhao, "A New Family of Implicitly Authenticated Diffie-Hellman Protocols", Cryptology ePrint Archive: Report 2011/035, http://eprint.iacr.org/2011/035 From rwilson at wisc.edu Sat May 28 15:35:59 2016 From: rwilson at wisc.edu (Bob Wilson) Date: Sat, 28 May 2016 14:35:59 -0500 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? Message-ID: <58809f5e-a6eb-efe7-68be-ccf21d8f6582@wisc.edu> >>> In the US, for example, even amateur radio operators otherwise allowed >>> to do way more than the general public aren't necessarily allowed to >>> transmit spread spectrum outside certain spreading codes. >> Australian amateurs are allowed to use SS (and I have) provided the >> authorities have been notified of the spreading code (which sorta defeats >> the purpose); > Depends on what you think the purpose is. If it's to communicate through noisy channels, or just to experiment to improve or even just personally understand the technology, or to provide a public service - then it's within the ambit of the amateur services, at least as they are defined in the US. If it's defined to include secret communications, it's definitely outside the US definitions. > >> we can also use crypto, for sensitive search and rescue >> messages (and I have). > I'm not sure that's ever permitted in the US under any circumstances. > > -- Jerry In the US hams can encrypt under exactly one specified circumstance: Control of a space station, meaning a radio station on a satellite. That is presumably to keep "outsiders" from taking over control. In general a US ham cannot transmit anything intended to conceal information. (That exception is the basis for a question frequently appearing on ham license exams.) Back during and at the end of WWI, the US Navy wanted control over all broadcasting. 
That by itself has little to do with cryptography, but the combination of that desire (which implied limiting what hams could do) with the fear that uncontrolled radio operations could include spying for foreign powers leads to this denial, and that is the connection to crypto. And it might explain why this restriction is not found (so far as I know) outside the US. So spread spectrum cannot be used to conceal anything. But the FCC rules governing the amateur service specifically justify the existence and encouragement of the service in large part based on the progress in communication coming from hams experimenting and developing new techniques, hence the exceptions for experiments intended to improve technology. It is hard to write enforceable rules that say you can do X but not if your intent is to accomplish Y, resulting in very specific statements about what you can and cannot do: that implies they are soon obsolete, hence need revision from time to time, and that is definitely what has happened with respect to spread spectrum use by hams. Bob Wilson (WA9D) -------------- next part -------------- An HTML attachment was scrubbed... URL: From phill at hallambaker.com Sat May 28 21:32:45 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 28 May 2016 21:32:45 -0400 Subject: [Cryptography] Anybody sorted out the MQV patent claims? In-Reply-To: <5749F0C8.4010008@connotech.com> References: <5749F0C8.4010008@connotech.com> Message-ID: On Sat, May 28, 2016 at 3:26 PM, Thierry Moreau < thierry.moreau at connotech.com> wrote: > Hi, > > While looking at discrete logarithm signatures in relation with > Diffie-Hellman key establishment, I (re-)discovered a whole facet of public > key cryptography. > > Certicom is aggressive in asserting intellectual property rights in this > area. > > In a 2005 letter to a standardization body, Certicom indicated four US > patents as pertaining to the MQV protocol (two "continuation in part") and > one european patent. 
> > US 5,896,455 --> US 5,761,305 > > US 6,785,813 --> US 6,122,736 (EP 0 739 105) > > In all of these, the independent claims include the limitation that each > party computes a digital signature value separate from the ephemeral D-H > shared secret. > > However, the MTI (ref [47] in [0]) protocol (the seminal idea for MQV, > HMQV, and OAKE [0] as well) precisely *avoids* such a signature value (and > thus avoids the DSA-type vulnerability to ephemeral private random number > leakage -- neat achievement). > > Thus, I see the above four patents as claiming something other than MQV. > Anybody ever sorted this out? > > The question pertains to patents that appear to be either expired (in > first-to-file jurisdictions) or about to expire (in first-to-invent > jurisdiction). I ask because the technical issues at stake appear > relatively simple: compare figure 2 in US 6,122,736 and/or claim 1 in EP 0 > 739 105 with the basic MQV operating principle. > > - Thierry Moreau > > [0] Andrew C. Yao and Yunlei Zhao, "A New Family of Implicitly > Authenticated Diffie-Hellman Protocols", Cryptology ePrint Archive: Report > 2011/035, http://eprint.iacr.org/2011/035 > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography I don't know and I don't care. The first patent has a priority date in 1995, so it has expired. The second mentions a signature in the independent claim. I do not believe that an authentication protocol should use a signature unless the purpose of the exchange is to provide non-repudiation, in which case they should probably just be signing the data. Diffie-Hellman is a fine key exchange protocol as it stands. All you really need to provide authentication proofs to each party is for each side to contribute a random nonce (to prevent replay attacks) and to push all the output data through a one-way function. 
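A minimal sketch of the nonce-plus-hash construction described here. The toy 64-bit group parameters, the byte layout, and the function names are illustrative assumptions only (a real deployment would use a standard MODP group or ECDH and a proper KDF), not anything specified in the thread:

```python
import hashlib
import secrets

# Toy group parameters -- far too small for real use; illustrative only.
P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a 64-bit prime (toy)
G = 5

def keypair():
    """Return (private exponent x, public value g^x mod p)."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Each side generates a DH key pair and contributes a fresh random nonce.
x, gx = keypair(); nx = secrets.token_bytes(16)
y, gy = keypair(); ny = secrets.token_bytes(16)

def agreed_key(shared, n1, n2):
    # "Push all the output data through a one-way function": hash the
    # DH shared secret together with both nonces.
    h = hashlib.sha256()
    h.update(shared.to_bytes(8, "big"))
    h.update(n1)
    h.update(n2)
    return h.digest()

# Each side derives g^xy from the peer's public value; both agree.
k_alice = agreed_key(pow(gy, x, P), nx, ny)
k_bob = agreed_key(pow(gx, y, P), nx, ny)
assert k_alice == k_bob
```

The nonces ensure the derived key is fresh per session even if a static DH key is reused, which is the replay protection being described.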
e^x, y -> e^xy
e^y, x -> e^xy
Agreed Key = H(e^xy + nx + ny)
Proof =
If you want an ephemeral then use it as a mix-in on the master key, not a replacement for it. I am sure there are examples of that protocol written down back in the 1980s. It works, it is robust. Recent Supreme Court precedent holds that replacement of like with like is 'obvious' and so upgrading from DH to ECDH isn't an enforceable claim. I am pretty conservative when it comes to patent claims. I am sure that the ContentGuard patent that I found the other month isn't actually enforceable, but why risk it when the patent expires in 18 months and I will be hard-pressed to have a working demo by then anyway. In this case, I don't think there is anything to worry about but as always, I am not a lawyer, use at your own risk. If you want to pay me I can give an expert opinion but nobody is an infallible expert in what a jury might decide. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Sat May 28 21:35:54 2016 From: hbaker1 at pipeline.com (Henry Baker) Date: Sat, 28 May 2016 18:35:54 -0700 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate Message-ID: FYI -- http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ A Controversial Surveillance Firm Was Granted a Powerful Encryption Certificate Written by Joseph Cox, Contributor May 27, 2016 // 03:25 PM EST A controversial surveillance company whose products have been detected in Iran and Sudan was recently issued a powerful encryption certificate by a US cybersecurity company. The certificate, and the authority that comes with it, could allow Blue Coat Systems to more easily snoop on encrypted traffic. But Symantec, the company that provided it, downplayed concern from the security community. Blue Coat, which sells web-monitoring software, was granted the power in September last year, but it was only widely noticed this week. 
The company's devices are used by both government and commercial customers for keeping tabs on networks or conducting surveillance. In Syria, the technology has been used to censor web sites and monitor the communications of dissidents, activists and journalists, The Washington Post reports. Certificates are used to encrypt web pages, including bank or email login screens. Certificate authorities (CA), such as cybersecurity company Symantec, act as the trust holders in the encrypted web--they sign certificates which are then used to secure websites. If a web browser comes across an untrusted certificate, then a warning may pop up, alerting the user. CAs can award ostensibly trusted organisations with the power to sign certificates too. That is what happened here: in short, Symantec has vouched for Blue Coat's legitimacy. "Think of a root CA like your super trustworthy friend who would never lie--if he or she says you can trust someone, you'd trust them," Bryan Crow wrote on WonderHowTo on Friday. But having a company known for selling surveillance equipment to authoritarian regimes getting this extra power has made people pretty damn worried. So much so that security researcher Filippo Valsorda explained how to manually set an OSX system to distrust any certificate issued by Blue Coat. Others followed with instructions for Windows. "Since they now have a trusted CA, and they're known for creating [man-in-the-middle] attack devices, they can use this certificate to issue fake certificates for any website you visit," Crow said. "To clarify, they can intercept your connection to, say, YourBank.com, open their connection to YourBank using their real certificate, but send your computer their own certificate that claims to be YourBank's, sign it with their trusted CA, and your computer won't blink an eye. It will implicitly trust it, seeing as if it checks the signing CA, it'll find that it is properly signed, and trusted on your machine," Crow added. 
"What the certificate does not give them the ability to do is issue public certificates to other organizations. That's the big misunderstanding." But Symantec and Blue Coat said that the certificate was only used for internal testing. "We provided it because companies that want to secure private servers without the risks that come with working in the public domain is a common customer request," Symantec spokesperson Jane Gideon told Motherboard in an email. "Symantec has reviewed the intermediate CA issued to Blue Coat and determined it was used appropriately. Consistent with our protocols, Symantec maintained full control of the private key and Blue Coat never had access to it. Blue Coat has confirmed it was used for internal testing and has since been discontinued. Therefore, rumors of misuse are unfounded," she wrote. When asked for comment, Blue Coat pointed to Symantec's statement. The certificate is "still valid, and they could use it for further internal testing in the future as long as the CA is valid, which is a completely legitimate use," Gideon clarified. "What the certificate does not give them the ability to do is issue public certificates to other organizations," Gideon said. "That's the big misunderstanding." "This intermediate CA is for their private servers only," she wrote. Correction: Due to a formatting issue, a quote from Symantec looked like it was attributed to Bryan Crow. This article has since been updated to correct the error. 
Topics: privacy, surveillance, Blue Coat Systems, symantec, encryption --- How to unfriend Blue Coat in OS-X: https://blog.filippo.io/untrusting-an-intermediate-ca-on-os-x/ How to unfriend in Windows: http://blogs.msmvps.com/alunj/2016/05/26/untrusting-the-blue-coat-intermediate-ca-from-windows/ The Reg article: http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ From grarpamp at gmail.com Sun May 29 01:17:17 2016 From: grarpamp at gmail.com (grarpamp) Date: Sun, 29 May 2016 01:17:17 -0400 Subject: [Cryptography] Hacking spread spectrum clocking of HW ? In-Reply-To: <58809f5e-a6eb-efe7-68be-ccf21d8f6582@wisc.edu> References: <58809f5e-a6eb-efe7-68be-ccf21d8f6582@wisc.edu> Message-ID: Various government subjects wrote: > eg: Laws in US forbid use of encrypted radio Traditional spread spectrum seem rather off the shelf and wouldn't really consider encrypted as such. What I mentioned *is* encrypted, both at the RF layer itself, and at the data layer riding on top. It's also really hard to locate random background noise (power). Even if, since random noise can't be proven to be crypto, it can't be shut down due to any crypto reason, insufficient cause. Nor do guerrilla radios care about such laws. > In the US hams can encrypt under exactly one specified circumstance: > Control of a space station, meaning a radio station on a satellite. That > is presumably to keep "outsiders" from taking over control. The history of any such takeovers, classified or otherwise, would be interesting read. 
From stephen.farrell at cs.tcd.ie Sun May 29 08:55:04 2016 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Sun, 29 May 2016 13:55:04 +0100 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: Message-ID: <574AE6A8.4040302@cs.tcd.ie> On 29/05/16 02:35, Henry Baker wrote: > FYI -- > > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ > > A Controversial Surveillance Firm Was Granted a Powerful Encryption Certificate > Written by Joseph Cox, Contributor Yeah, two things strike me: 1 - yay for certificate transparency - CAs behaving oddly being spotted and outed is good 2 - what kind of "testing" would require symantec to issue a CA cert with path-len 0 and for symantec to hold the private key? I can't figure anything that makes sense unless symantec were thinking of actively helping blue coat spoof web sites better, maybe at run-time, or on a case-by-case basis - or am I missing something? Cheers, S. > > May 27, 2016 // 03:25 PM EST > > A controversial surveillance company whose products have been detected in Iran and Sudan was recently issued a powerful encryption certificate by a US cybersecurity company. The certificate, and the authority that comes with it, could allow Blue Coat Systems to more easily snoop on encrypted traffic. But Symantec, the company that provided it, downplayed concern from the security community. > > Blue Coat, which sells web-monitoring software, was granted the power in September last year, but it was only widely noticed this week. > > The company's devices are used by both government and commercial customers for keeping tabs on networks or conducting surveillance. In Syria, the technology has been used to censor web sites and monitor the communications of dissidents, activists and journalists, The Washington Post reports. > > Certificates are used to encrypt web pages, including bank or email login screens. 
> > [...]
-------------- next part --------------
A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3840 bytes Desc: S/MIME Cryptographic Signature URL: From ikizir at gmail.com Tue May 31 07:31:59 2016 From: ikizir at gmail.com (Ismail Kizir) Date: Tue, 31 May 2016 14:31:59 +0300 Subject: [Cryptography] Wi-fi spyware injection Message-ID: Hello, Today, 3 Turkish National Police officers were taken into custody for "injecting malware and/or spyware into Turkish National Police Intelligence IT systems". It's interesting. It seems they were using Galileo. Wikileaks has published some e-mails on the bargaining over the price of the system, ~400,000 USD. And since there are inner power struggles between groups in the police intelligence service, those crazies uploaded the malware to their own servers! Then I've read many things about Galileo. Even their source code has been exposed on GitHub. But one thing is very dangerous: they could inject the malware into any phone via wi-fi hacks: http://thehackernews.com/2015/07/boeing-drone-hacking.html Does someone know about this? Any protection methods etc? Regards Ismail Kizir From phill at hallambaker.com Tue May 31 10:34:07 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Tue, 31 May 2016 10:34:07 -0400 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: <574AE6A8.4040302@cs.tcd.ie> References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: On Sun, May 29, 2016 at 8:55 AM, Stephen Farrell wrote: > > > On 29/05/16 02:35, Henry Baker wrote: > > FYI -- > > > > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ > > > > A Controversial Surveillance Firm Was Granted a Powerful Encryption > Certificate > > Written by Joseph Cox, Contributor > > Yeah, two things strike me: > > 1 - yay for certificate transparency - CAs behaving oddly being spotted > and outed is good > > 2 - what kind of "testing" would require symantec to issue a CA > cert with path-len 0 and for symantec to hold the private key? 
I > can't figure anything that makes sense unless symantec were thinking > of actively helping blue coat spoof web sites better, maybe at > run-time, or on a case-by-case basis - or am I missing something? > > Cheers, > S. For the benefit of us who can't remember, what is the effect of path-len 0? As in, what is the effect on systems out there in the wild as opposed to what does the spec say. Is there a difference and if so for what systems? Does 0 = infinity? Probably not in the spec but what about elsewhere? -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Tue May 31 12:30:49 2016 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Tue, 31 May 2016 16:30:49 +0000 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: <20160531163049.GX3300@mournblade.imrryr.org> On Tue, May 31, 2016 at 10:34:07AM -0400, Phillip Hallam-Baker wrote: > For the benefit of us who can't remember, what is the effect of path-len 0? In the specs and in OpenSSL it means that the CA can only issue EE certificates; it cannot issue subsidiary intermediates. I'd be surprised if other X.509 toolkits interpreted pathlen == 0 differently. I would not be surprised to find toolkits that completely ignore path length constraints, but don't know of any that do. The extension should be "critical", which might help with those toolkits that don't ignore unhandled critical extensions. -- Viktor. 
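The pathLenConstraint rule described here, together with the self-issued wrinkle Erwann raises later in the thread, can be sketched as a toy chain walk. The `Cert` record and `check_path` function are invented for illustration and are not a real X.509 parser or API; a real validator must implement RFC 5280 section 6.1.4 in full:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cert:
    subject: str
    issuer: str
    is_ca: bool
    path_len: Optional[int]   # None = no pathLenConstraint (unbounded)

def check_path(chain: List[Cert]) -> bool:
    """chain is ordered trust anchor first, target certificate last.
    Returns True iff no pathLenConstraint is violated (toy model of
    RFC 5280 section 6.1.4 steps (l) and (m))."""
    budget = None             # remaining non-self-issued CA certs allowed
    for i, cert in enumerate(chain):
        if i == len(chain) - 1:
            return True       # the final certificate is never barred here
        if not cert.is_ca:
            return False      # every non-final certificate must be a CA
        # A non-self-issued CA below the anchor consumes one unit of budget.
        if i > 0 and cert.subject != cert.issuer:
            if budget == 0:
                return False
            if budget is not None:
                budget -= 1
        # The cert's own constraint can only tighten the budget.
        if cert.path_len is not None:
            budget = cert.path_len if budget is None else min(budget, cert.path_len)
    return True

root = Cert("Test Root", "Test Root", True, None)
inter = Cert("Blue Coat Public Services Intermediate CA", "Test Root", True, 0)
leaf = Cert("example.com", inter.subject, False, None)
sub = Cert("Sub CA", inter.subject, True, None)

assert check_path([root, inter, leaf])       # EE under a pathlen-0 CA: OK
assert not check_path(                       # a further CA tier: rejected
    [root, inter, sub, Cert("victim.example", "Sub CA", False, None)])
selfi = Cert(inter.subject, inter.subject, True, 0)
assert check_path([root, inter, selfi, leaf])  # self-issued: no budget used
```

The last assertion models the behavior discussed below: a self-issued CA certificate (subject equals issuer) does not decrement the path length budget, which is why such a cert can sit under a pathlen-0 intermediate.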
From erwann at abalea.com Tue May 31 12:54:37 2016 From: erwann at abalea.com (Erwann ABALEA) Date: Tue, 31 May 2016 18:54:37 +0200 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: Bonjour, 2016-05-31 16:34 GMT+02:00 Phillip Hallam-Baker : > > > On Sun, May 29, 2016 at 8:55 AM, Stephen Farrell < > stephen.farrell at cs.tcd.ie> wrote: > >> >> >> On 29/05/16 02:35, Henry Baker wrote: >> > FYI -- >> > >> > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ >> > >> > A Controversial Surveillance Firm Was Granted a Powerful Encryption >> Certificate >> > Written by Joseph Cox, Contributor >> >> Yeah, two things strike me: >> >> 1 - yay for certificate transparency - CAs behaving oddly being spotted >> and outed is good >> >> 2 - what kind of "testing" would require symantec to issue a CA >> cert with path-len 0 and for symanetec to hold the private key? I >> can't figure anything that makes sense unless symantec were thinking >> of actively helping blue coat spoof web sites better, maybe at >> run-time, or on a case-by-case basis - or am I missing something? >> >> Cheers, >> S. > > > For the benefit of us who can't remember, what is the effect of path-len 0? > A CA certificate containing a BasicConstraints with pathLenConstraint=0 means that this CA certificate can only be used to verify an end-entity certificate, or a CA certificate that doesn't issue any certificate, but not a CA certificate that itself would issue another certificate (either CA or end-entity). To simplify: CA(BC:pathLenConstraint=0) -> end-entity : OK CA(BC:pathLenConstraint=0) -> CA(anything) : OK CA(BC:pathLenConstraint=0) -> CA(anything) -> any certificate : NOT OK As in, what is the effect on systems out there in the wild as opposed to > what does the spec say. Is there a difference and if so for what systems? > > Does 0 = infinity? Probably not in the spec but what about elsewhere? 
> 0 is not infinity. Infinity is expressed as the absence of the pathLenConstraint field. Some not so old versions of GnuTLS didn't correctly verify the pathLenConstraint, at least. I think it was corrected in 2014. OpenSSL, NSS, MSCAPI, and Opera are OK. Don't know about PolarSSL/mbedTLS or other smaller TLS stacks. -- Erwann. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phill at hallambaker.com Tue May 31 13:25:08 2016 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Tue, 31 May 2016 13:25:08 -0400 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: On Tue, May 31, 2016 at 12:54 PM, Erwann ABALEA wrote: > Bonjour, > > 2016-05-31 16:34 GMT+02:00 Phillip Hallam-Baker : > >> >> >> On Sun, May 29, 2016 at 8:55 AM, Stephen Farrell < >> stephen.farrell at cs.tcd.ie> wrote: >> >>> >>> >>> On 29/05/16 02:35, Henry Baker wrote: >>> > FYI -- >>> > >>> > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ >>> > >>> > A Controversial Surveillance Firm Was Granted a Powerful Encryption >>> Certificate >>> > Written by Joseph Cox, Contributor >>> >>> Yeah, two things strike me: >>> >>> 1 - yay for certificate transparency - CAs behaving oddly being spotted >>> and outed is good >>> >>> 2 - what kind of "testing" would require symantec to issue a CA >>> cert with path-len 0 and for symanetec to hold the private key? I >>> can't figure anything that makes sense unless symantec were thinking >>> of actively helping blue coat spoof web sites better, maybe at >>> run-time, or on a case-by-case basis - or am I missing something? >>> >>> Cheers, >>> S. >> >> >> For the benefit of us who can't remember, what is the effect of path-len >> 0? 
>> > > A CA certificate containing a BasicConstraints with pathLenConstraint=0 > means that this CA certificate can only be used to verify an end-entity > certificate, or a CA certificate that doesn't issue any certificate, but > not a CA certificate that itself would issue another certificate (either CA > or end-entity). > > To simplify: > CA(BC:pathLenConstraint=0) -> end-entity : OK > CA(BC:pathLenConstraint=0) -> CA(anything) : OK > CA(BC:pathLenConstraint=0) -> CA(anything) -> any certificate : NOT OK > One of the things I learned from experimental physics was that you should always ask the question even if you think you know the answer. I deliberately asked what the *effect* was, not what the specification says. The questions are not the same. What I had forgotten is: CA(BC:pathLenConstraint=0) -> CA(anything) : OK Which is kinda screwed up. I am still not seeing how to turn this into an exploit if Symantec hold the private key. > As in, what is the effect on systems out there in the wild as opposed to >> what does the spec say. Is there a difference and if so for what systems? >> >> Does 0 = infinity? Probably not in the spec but what about elsewhere? >> > > 0 is not infinity. Infinity is expressed as the absence of the > pathLenConstraint field. > OK so that possibility out. > Some not so old versions of GnuTLS didn't correctly verify the > pathLenConstraint, at least. I think it was corrected in 2014. > OpenSSL, NSS, MSCAPI, and Opera are OK. Don't know about PolarSSL/mbedTLS > or other smaller TLS stacks. > Does any browser use GnuTLS though? I don't think we need to panic if the code is being used for STARTTLS in SMTP or the like as those aren't typically tied to a root of trust in any case. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pzbowen at gmail.com Tue May 31 12:44:37 2016 From: pzbowen at gmail.com (Peter Bowen) Date: Tue, 31 May 2016 09:44:37 -0700 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: On Tue, May 31, 2016 at 7:34 AM, Phillip Hallam-Baker wrote: > On Sun, May 29, 2016 at 8:55 AM, Stephen Farrell wrote: >> On 29/05/16 02:35, Henry Baker wrote: >> > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ >> > >> > A Controversial Surveillance Firm Was Granted a Powerful Encryption >> > Certificate >> > Written by Joseph Cox, Contributor >> >> Yeah, two things strike me: >> >> 1 - yay for certificate transparency - CAs behaving oddly being spotted >> and outed is good >> >> 2 - what kind of "testing" would require symantec to issue a CA >> cert with path-len 0 and for symanetec to hold the private key? I >> can't figure anything that makes sense unless symantec were thinking >> of actively helping blue coat spoof web sites better, maybe at >> run-time, or on a case-by-case basis - or am I missing something? > > For the benefit of us who can't remember, what is the effect of path-len 0? > > As in, what is the effect on systems out there in the wild as opposed to > what does the spec say. Is there a difference and if so for what systems? > > Does 0 = infinity? Probably not in the spec but what about elsewhere? Pathlen = 0 means the CA can only issue end-entity certificates and cannot be used to sign further CA certificates. Path length of zero is a good thing and is correctly interpreted by every certificate validation library I know about. It is fairly common practice for a CA operator to create issuing CAs (e.g. pathlen=0) for customers for branding purposes or to enable authorization via issuer. In these cases the issuing CA is the same as every other CA operated by the same company (e.g. Symantec or Comodo), but the issuer name is the customer name. 
Mozilla is working on getting all CAs to add info on their issuing CAs to their database; you can see the current status at https://mozillacaprogram.secure.force.com/CA/PublicAllIntermediateCerts. If there are checkboxes under both "CP/CPS Same As Parent" and "Audit Same As Parent", then it is safe to assume that the issuing CA is just a branded CA operated by the parent. Thanks, Peter From mitch at niftyegg.com Tue May 31 13:03:13 2016 From: mitch at niftyegg.com (Tom Mitchell) Date: Tue, 31 May 2016 10:03:13 -0700 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: <574AE6A8.4040302@cs.tcd.ie> References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: On Sun, May 29, 2016 at 5:55 AM, Stephen Farrell wrote: > > > On 29/05/16 02:35, Henry Baker wrote: > > FYI -- > > > > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ > > > > A Controversial Surveillance Firm Was Granted a Powerful Encryption > Certificate > > Written by Joseph Cox, Contributor > > Yeah, two things strike me: > > 1 - yay for certificate transparency - CAs behaving oddly being spotted > and outed is good > > 2 - what kind of "testing" > .... > run-time, or on a case-by-case basis - or am I missing something? One thing I can think of is a counter move to cope with a world full of MITM attackers. Either detection, discovery, eradication.. With armies of bots out there it might take a MITM defense to shut the door on some vectors that constantly manage and refresh the millions of compromised machines. Hard IP addresses can be firewalled but once DNS is borked then firewalls have a more slippery handle on things. Just a thought... And yes it may have a compliance with law enforcement component. A very real threat is when anti virus tools are compromised. By design they run at a very high level and can see and change vast swaths of the system to the purposes of the virus. 
-- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From eabalea at gmail.com Tue May 31 14:47:11 2016 From: eabalea at gmail.com (Erwann Abalea) Date: Tue, 31 May 2016 20:47:11 +0200 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: Bonsoir, 2016-05-31 19:25 GMT+02:00 Phillip Hallam-Baker : > > > On Tue, May 31, 2016 at 12:54 PM, Erwann ABALEA wrote: > >> Bonjour, >> >> 2016-05-31 16:34 GMT+02:00 Phillip Hallam-Baker : >> >>> >>> >>> On Sun, May 29, 2016 at 8:55 AM, Stephen Farrell < >>> stephen.farrell at cs.tcd.ie> wrote: >>> >>>> >>>> >>>> On 29/05/16 02:35, Henry Baker wrote: >>>> > FYI -- >>>> > >>>> > http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ >>>> > >>>> > A Controversial Surveillance Firm Was Granted a Powerful Encryption >>>> Certificate >>>> > Written by Joseph Cox, Contributor >>>> >>>> Yeah, two things strike me: >>>> >>>> 1 - yay for certificate transparency - CAs behaving oddly being spotted >>>> and outed is good >>>> >>>> 2 - what kind of "testing" would require symantec to issue a CA >>>> cert with path-len 0 and for symanetec to hold the private key? I >>>> can't figure anything that makes sense unless symantec were thinking >>>> of actively helping blue coat spoof web sites better, maybe at >>>> run-time, or on a case-by-case basis - or am I missing something? >>>> >>>> Cheers, >>>> S. >>> >>> >>> For the benefit of us who can't remember, what is the effect of path-len >>> 0? >>> >> >> A CA certificate containing a BasicConstraints with pathLenConstraint=0 >> means that this CA certificate can only be used to verify an end-entity >> certificate, or a CA certificate that doesn't issue any certificate, but >> not a CA certificate that itself would issue another certificate (either CA >> or end-entity). 
>> >> To simplify: >> CA(BC:pathLenConstraint=0) -> end-entity : OK >> CA(BC:pathLenConstraint=0) -> CA(anything) : OK >> CA(BC:pathLenConstraint=0) -> CA(anything) -> any certificate : NOT OK >> > > One of the things I learned from experimental physics was that you should > always ask the question even if you think you know the answer. > > I deliberately asked what the *effect* was, not what the specification > says. The questions are not the same. > > What I had forgotten is: > > CA(BC:pathLenConstraint=0) -> CA(anything) : OK > > Which is kinda screwed up. I am still not seeing how to turn this into an > exploit if Symantec hold the private key. > The normative path validation algorithm takes as input a prospective certification path, and this certification path can end with a CA certificate. Which can be seen as useless, but may raise some specific implementation quirks. This CA certificate could be an X.509v1 cert, raising other potential quirks. Another behavior dictated by the norm is this: CA(BC:pathLenConstraint=0) -> self-issued CA(anything) -> end-entity : OK That is, they could issue another CA certificate named the same (C=US, O/OU..., CN=Blue Coat Public Services Intermediate CA) for which they have the private key, and then issue end-entity certificates. It works because the pathLength is decremented for each non self-issued CA certificate. I haven't tested implementations on this point. As in, what is the effect on systems out there in the wild as opposed to >>> what does the spec say. Is there a difference and if so for what systems? >>> >>> Does 0 = infinity? Probably not in the spec but what about elsewhere? >>> >> >> 0 is not infinity. Infinity is expressed as the absence of the >> pathLenConstraint field. >> > > OK so that possibility out. > > > >> Some not so old versions of GnuTLS didn't correctly verify the >> pathLenConstraint, at least. I think it was corrected in 2014. >> OpenSSL, NSS, MSCAPI, and Opera are OK. 
Don't know about PolarSSL/mbedTLS >> or other smaller TLS stacks. >> > > Does any browser use GnuTLS though? I don't think we need to panic if the > code is being used for STARTTLS in SMTP or the like as those aren't > typically tied to a root of trust in any case. > Browser, maybe none. But some Linux distributions compile and link some software with GnuTLS (I've seen some OpenLDAP in Debian/Ubuntu, for example). Some cli tools such as curl/wget, or proxies can be compiled with GnuTLS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Tue May 31 16:38:38 2016 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Tue, 31 May 2016 20:38:38 +0000 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: <20160531203837.GY3300@mournblade.imrryr.org> On Tue, May 31, 2016 at 08:47:11PM +0200, Erwann Abalea wrote: > Another behavior dictated by the norm is this: > > CA(BC:pathLenConstraint=0) -> self-issued CA(anything) -> end-entity : OK > > That is, they could issue another CA certificate named the same (C=US, > O/OU..., CN=Blue Coat Public Services Intermediate CA) for which they have > the private key, and then issue end-entity certificates. It works because > the pathLength is decremented for each non self-issued CA certificate. I > haven't tested implementations on this point. If BlueCoat had the key for the path-constrained intermediate CA they could indeed create additional self-issued intermediates. However, allegedly they don't have the key. So the self-issued intermediate would have to be issued to BlueCoat by Symantec. > Browser, maybe none. But some Linux distributions compile and link some > software with GnuTLS (I've seen some OpenLDAP in Debian/Ubuntu, for > example). Some cli tools such as curl/wget, or proxies can be compiled with > GnuTLS. 
Many distributions/builds of the Exim MTA are linked with GnuTLS. -- Viktor. From bear at sonic.net Tue May 31 16:44:17 2016 From: bear at sonic.net (Ray Dillinger) Date: Tue, 31 May 2016 13:44:17 -0700 Subject: [Cryptography] Blue Coat has been issued a MITM encryption certificate In-Reply-To: References: <574AE6A8.4040302@cs.tcd.ie> Message-ID: <574DF7A1.10104@sonic.net> (attributions unclear so I left them out). >> As in, what is the effect on systems out there in the wild as opposed >> to what does the spec say. Is there a difference and if so for what >> systems? > A CA certificate containing a BasicConstraints with > pathLenConstraint=0 means that this CA certificate can only be used > to verify an end-entity certificate, or a CA certificate that doesn't > issue any certificate, but not a CA certificate that itself would > issue another certificate (either CA or end-entity). > Symantec keeping the Blue Coat private keys is an interesting twist for a certificate at that level. In theory that should prevent them from issuing a cert for a different domain. In practice it depends on what kind of cert they were issued. If the crypto is weak enough they could crack the private key themselves and I don't know how much software still accepts that weak-ass crypto. More likely, given their business model, they'll have bribed somebody who got them the key "off the record," so they can now issue keys that software will recognize as being from Symantec, and Symantec, who didn't "officially" provide the key, isn't liable for what they do with it. Nice arrangement. Alert, I'm making standard "must be an excessively suspicious paranoid to do security" assumptions here; it may not be this bad. But actually, I think it's worse than that. I think this is probably the system operating as designed. The CA cert system used by x.509 was proposed in response to the need for an "introduction" system for customers and merchants with no prior relationship.
But what emerged from committee is self-evidently not designed to do exactly that job. If it were designed to do that job then CAs would be used when doing introductions (or establishing new keys on a new device) and key management would be done by both the parties to the contact after that, with certificate pinning on both sides. I don't doubt that this protocol was debated by people the majority of whom acted in good faith. But I have long assumed that the debate was unduly influenced by, and the swing votes cast by, actors who didn't have that job as their primary concern. Among these actors were CAs who intended to profit from increased dependency of everyone on their services, but I don't think they were the only ones. I have, for a long time, considered the CA system to be, first and foremost, a tool for concentrating the ability to perform MITM attacks to a set of known and controllable CA's, and thereby make certain that MITM attacks are available to government actors while protecting ordinary customers from all but the most influential crooks. Unfortunately the set of CA's rapidly became uncontrolled, with the effect that many CA's are now frankly run by crooks and the ability to perform MITM attacks is available on the black market for crooks with no political influence and only a little money. We've been patching X.509 for its many holes and failures for a long time, most recently with (FINALLY) certificate pinning - the beginnings of key management by the parties to the contact. Still conspicuously missing are persistent customer keys/self-certs that the businesses can "pin" and associate with particular accounts on their side. Maybe if we keep patching we'll eventually get to something that does the job that X.509 was supposed to do. But we could have started with something a hell of a lot closer to fulfilling the requirements. Bear -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL:
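[Archive editor's note: the pathLenConstraint behavior Erwann describes in the thread above, including the self-issued exemption from RFC 5280 section 6.1.4 that makes the same-name CA trick work, can be sketched in a few lines. This is an illustrative model only; the `Cert` class and `check_path_len` function are hypothetical names, not any real library's API.]

```python
# Minimal model of RFC 5280 path-length checking, as discussed in the
# thread. Chains are ordered trust anchor -> ... -> end-entity.
# Hypothetical types, not a real TLS stack's API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cert:
    subject: str
    issuer: str
    is_ca: bool
    path_len: Optional[int] = None  # None = no pathLenConstraint present

def check_path_len(chain: List[Cert]) -> bool:
    """Return True if the chain satisfies every pathLenConstraint."""
    max_path_length = len(chain)      # effectively unbounded to start
    for cert in chain[:-1]:           # every cert but the leaf must be a CA
        if not cert.is_ca:
            return False
        # RFC 5280 6.1.4 (l): self-issued certificates (subject == issuer)
        # do not consume a step of the path-length budget.
        if cert.subject != cert.issuer:
            if max_path_length <= 0:
                return False
            max_path_length -= 1
        # RFC 5280 6.1.4 (m): tighten the budget to pathLenConstraint.
        if cert.path_len is not None:
            max_path_length = min(max_path_length, cert.path_len)
    return True
```

The third case below is Erwann's quirk: a self-issued CA bearing the same name slips under a pathLenConstraint of 0 because it is never decremented, so end-entity certificates hung off it still validate.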
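[Archive editor's note: Ray's "key management by the parties to the contact" reduces, in its simplest trust-on-first-use form, to remembering a hash of the peer's public key and rejecting any later change. A minimal sketch with hypothetical names (`spki_pin`, `check_pin`); real deployments pin a digest of the DER-encoded SubjectPublicKeyInfo, in the spirit of RFC 7469.]

```python
# Trust-on-first-use key pinning sketch: remember a fingerprint of the
# peer's public key on first contact, require an exact match afterwards.
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # Fingerprint is the SHA-256 of the DER-encoded SubjectPublicKeyInfo
    # (hex here for brevity; RFC 7469 uses base64).
    return hashlib.sha256(spki_der).hexdigest()

def check_pin(pinned: dict, host: str, spki_der: bytes) -> bool:
    """Record the pin on first contact; later contacts must match it."""
    pin = spki_pin(spki_der)
    if host not in pinned:
        pinned[host] = pin          # first contact: remember the key
        return True
    return pinned[host] == pin      # a changed key fails the check
```

Note what this does and does not buy: a MITM present at first contact can pin its own key, but any CA (honest or compromised) that later issues a cert for a different key is caught, which is exactly the gap in the pure-CA model Ray describes.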