From waywardgeek at gmail.com Wed Oct 1 16:54:37 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 1 Oct 2014 16:54:37 -0400 Subject: [Cryptography] Internet of Things and small cheap ASICs? Message-ID: The Internet of Things seems to have a major security problem. I personally do not plan to hook my thermostat to the Internet any time soon, for example. Can anyone point me to the best papers describing how to actually secure the IoT? Since you guys were so helpful with feedback on my Infinite Noise Generator concept, I thought I'd go back to the well and bug you about something a bit less crypto related... I am trying to find out whether board-level designers out there have any need to create small mixed-signal ASICs. I'm not talking about an iPod-Nano on a chip, but simple arrays of capacitors, resistors, transistors, a few logic gates, and maybe some amplifiers. The die would be tiny, and each would have the same components. Designs would be configured with custom routing. The minimum order might be 1,000. So, for example, a chip you could design using, say, 100 0.1pF caps, 300 6K Ohm resistors, maybe 50 N and P MOSFETs configured for analog use (wide gates), maybe 20-ish T-gates, 20-ish logic gates (NAND/NOR/INV), a couple of op-amps, and maybe 16 pads, all in some tiny 16-pin surface-mount package. It might even have 1K-ish gates of real logic, 128 flip-flops, and even a small block of SRAM, if people think it should. The resistors would not be very accurate, but they would match well. Same thing for the other components. It would come with free design tools, likely based on existing open-source tools. Something like this can, I think, be done for under $1/chip in quantities of 1,000. I am trying to figure out whether this is a good fit for helping enable the Internet of Things. It might be useful for simple sensor interfaces, for example, or for reducing part counts and size. Would anything like that be exciting? 
Thanks Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens at ladisch.de Wed Oct 1 03:53:37 2014 From: clemens at ladisch.de (Clemens Ladisch) Date: Wed, 01 Oct 2014 09:53:37 +0200 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: References: <542A3800.8010007@iang.org> Message-ID: <542BB301.5030603@ladisch.de> Bill Cox wrote: > On Tue, Sep 30, 2014 at 7:03 AM, Natanael wrote: > > On 30 Sep 2014 09:55, "Philipp Gühring" wrote: > > > So from a marketing point of view you should put a whitener on the > > > part. > > > > Yes! > > Thanks for that suggestion. I'll whiten with some of the leftover gates. > How to do a decent job sounds like a fun problem. You need custom drivers for this device anyway, so it might be a better idea to let the software do a decent job. (You might want to add to the USB packets a header with the current settings and the actual amount of entropy; in that case there is less danger that anybody thinks this data is a perfectly random bit stream.) And why are you calling it a whitener instead of a randomness extractor? The former name could imply that the output looks random, but has less than 100% entropy. Regards, Clemens From dave at horsfall.org Wed Oct 1 01:23:30 2014 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 1 Oct 2014 15:23:30 +1000 (EST) Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: On Tue, 30 Sep 2014, Henry Baker wrote: > With California's new "yes means yes" law, how would you design a > protocol for engaging in consensual sex, which would authenticate the > parties' consents, which protected their privacy, but which couldn't be > subsequently repudiated ? When I saw the Subject: I thought I'd blundered onto the wrong list... Anyway, Bruce Schneier's tome will probably have something that could be used, and if not then I'm sure he'll include something in the next edition. 
-- Dave From hbaker1 at pipeline.com Wed Oct 1 08:45:24 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 01 Oct 2014 05:45:24 -0700 Subject: [Cryptography] Best Internet crypto clock ? Message-ID: [I'm quite new to this mailing list, so I hope that the following question isn't embarrassingly trivial.] In old B/W movies, when a person was kidnapped, the kidnapper sent a photo of the person together with a picture of the front page of today's newspaper to prove that he had the kidnapped person _on or after the date_ of the newspaper. In this case, the newspaper headlines for that date are unknowable in advance, so the recipient of the photo can establish an earliest date bound for the photo. In today's Internet world, one could presumably do the same thing with a crypto hash of the current contents of the NYTimes, but this is now quite difficult to check because the "front page of the NYTimes" is no longer very constant, various versions are served up to different viewers ("A/B testing"), and I doubt that anyone is keeping track of exactly what pages are being served up at exactly what times. One could also hash the closing bid/ask prices for N stocks on the NYSE; since this information is kept for long periods of time, it could be far more reliable. Unfortunately, these closing prices are available only once per day, 5 days a week. Another possibility would be to capture a hash of a snapshot of the Bitcoin blockchain at a particular time. However, I don't know how easily one can search backwards in the Bitcoin blockchain to check for when a particular crypto clock value occurred. It's clear that any crypto clock would have to be a simple append-only, read-only database whose future values cannot be predicted. It would be nice to be able to quickly search such a database, but since most searches would be querying about relatively recent events, even backwards linear searching wouldn't be too bad. 
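To make the idea concrete, such an append-only clock can be sketched as a linked hash chain, where each published value commits to its predecessor plus fresh data that nobody could have predicted. (This is only an illustrative sketch; the entry layout and the "prices" placeholder input are made up for the example.)

```python
import hashlib
import json

def new_entry(prev_hash, unpredictable_input, wall_time):
    """One clock tick: commits to the previous published value and to
    fresh data that could not have been known in advance."""
    body = {"prev": prev_hash, "input": unpredictable_input, "time": wall_time}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return dict(body, hash=digest)

# Build a short chain; "input" stands in for, e.g., NYSE closing prices.
chain = [new_entry("0" * 64, "genesis", 0)]
for t in range(1, 4):
    chain.append(new_entry(chain[-1]["hash"], "prices-at-%d" % t, t))

def verify(chain):
    """Re-hash every entry and re-check every back link.  Altering any
    past entry breaks all later links, which is what makes the published
    log append-only in practice."""
    for i, cur in enumerate(chain):
        body = {"prev": cur["prev"], "input": cur["input"], "time": cur["time"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != cur["hash"]:
            return False
        if i > 0 and cur["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because each entry carries its claimed wall-clock time, comparing two authenticated values reduces to comparing their "time" fields, and a lookup of a recent value is just a short backwards walk along the chain.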
Since we are free to choose the format of our authenticated crypto clock value, we can easily include an index of what wall clock time it thinks it is, so a simple RAM access to the database will be able to check the value. Assuming authenticated values, we can also trivially compare two such values to determine if one "time" precedes another "time". I would imagine that the best agency to publish such a crypto clock value would be the National Bureau of Standards, using their existing time servers. Does such a standardized "crypto clock" currently exist? From iang at iang.org Wed Oct 1 13:33:23 2014 From: iang at iang.org (ianG) Date: Wed, 01 Oct 2014 10:33:23 -0700 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <21544.64868.836051.664605@desk.crynwr.com> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> Message-ID: <542C3AE3.4090207@iang.org> On 28/09/2014 23:34 pm, Russ Nelson wrote: > Richard Outerbridge writes: > > On 2014-09-24 (267), at 02:09:17, John Denker wrote: > > > > > The entirely foreseeable result of putting out a > > > weakened cipher standard was that friends would use > > > the weakened version and enemies would very rapidly > > > come up with a non-weakened version. > > > > Y'know, I really don't believe the NSA have ever been > > that dumb. > > Why not? All of corporate America is that dumb. Corporate America has > all the incentives in the world to make money, while the NSA has the > usual bureaucratic (weaker) incentives. > > Every corporate leader who says "I will protect my IP by taking steps > which make it harder to use" is indulging in this error. 
Why should > the NSA be any different? > > http://www.crynwr.com/on-being-proprietary.html One point for: Suite A and friends, which remains a heavily shared secret. One point against: In this particular place called cryptography, there is a frequently repeated aphorism "the enemy knows my algorithm" recently attributed as Shannon's maxim and historically as Kerckhoffs' 2nd Principle. I guess the various well-funded enemies have figured out each other's secret algorithms by now, but out of politeness and common interest they cartelise the secrets. iang From Jeff.Hodges at KingsMountain.com Wed Oct 1 09:57:29 2014 From: Jeff.Hodges at KingsMountain.com (=JeffH) Date: Wed, 01 Oct 2014 06:57:29 -0700 Subject: [Cryptography] "Spy Agencies Urge Caution on Phone Deal" Message-ID: <542C0849.4060002@KingsMountain.com> > On Mon, Sep 29, 2014 at 11:51 PM, Jerry Leichter wrote: >> a special network/database - oddly never named in the article - that "rout[es] millions of phone calls and text messages in the United States". Apparently this was a system created back in the late 1990's to implement number portability. > > It's the "Number Portability Administration Center": > https://www.npac.com/ see also... North American Numbering Plan https://en.wikipedia.org/wiki/North_American_Numbering_Plan North American Numbering Plan Administration: NANPA http://www.nanpa.com/about_us/abt_nanp.html From richard at highwayman.com Wed Oct 1 05:46:50 2014 From: richard at highwayman.com (Richard Clayton) Date: Wed, 1 Oct 2014 10:46:50 +0100 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 In message , Henry Baker writes >With California's new "yes means yes" law, how would you design a protocol for >engaging in consensual sex, which would authenticate the parties' consents, >which protected their privacy, but which couldn't be subsequently repudiated ? 
http://www.cl.cam.ac.uk/~fms27/papers/2000-StajanoHar-romantic.pdf - -- richard Richard Clayton They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety. Benjamin Franklin -----BEGIN PGP SIGNATURE----- Version: PGPsdk version 1.7.1 iQA/AwUBVCvNiuINNVchEYfiEQIfKQCgpDZ4vEwXGiufP0WqkuWy4FFoYv8AnR8B VoCAjgmCEY0xXBsCS9qv6D6B =W4Zq -----END PGP SIGNATURE----- From basal at riseup.net Wed Oct 1 20:20:27 2014 From: basal at riseup.net (Basal) Date: Wed, 01 Oct 2014 17:20:27 -0700 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: <542C9A4B.4070000@riseup.net> On 09/30/2014 06:57 AM, Henry Baker wrote: > With California's new "yes means yes" law, how would you design a protocol for engaging in consensual sex, which would authenticate the parties' consents, which protected their privacy, but which couldn't be subsequently repudiated ? The bitcoin blockchain could be used as cryptographic proof that something happened at a certain time. I have to point out though that the law lets people revoke consent at any time, so it is a moot point regardless. "Affirmative consent must be ongoing throughout a sexual activity and can be revoked at any time" Sorry if this message is malformatted, it's my first time sending something to the list. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Wed Oct 1 19:28:15 2014 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 1 Oct 2014 16:28:15 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: References: Message-ID: You could take a hash of some content, use that hash as a Bitcoin private key, and send the associated public key 1 Satoshi. You could then prove you calculated that hash at a given time by looking up the associated public key in the block chain. 
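The key-derivation half of that scheme can be sketched from scratch: hash the content, treat the digest as a secp256k1 private key, and derive the corresponding public key (a toy illustration only; function names are mine, and real use would add address encoding and an actual transaction):

```python
import hashlib

# secp256k1 domain parameters (the curve Bitcoin uses)
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    """Point addition on y^2 = x^3 + 7 over GF(P); None is the identity."""
    if p is None:
        return q
    if q is None:
        return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None
    if p == q:
        lam = (3 * p[0] * p[0]) * pow(2 * p[1], P - 2, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], P - 2, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def timestamp_key(content):
    """Derive a keypair whose private key *is* the SHA-256 of the content."""
    priv = int.from_bytes(hashlib.sha256(content).digest(), "big") % N
    return priv, ec_mul(priv, G)

priv, pub = timestamp_key(b"document to be timestamped")
# Paying one satoshi to the address derived from `pub` embeds evidence of
# the document's hash (the private key) in the block chain.
```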
Here's something that does just that: https://www.btproof.com/ -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From chk at pobox.com Wed Oct 1 20:07:35 2014 From: chk at pobox.com (Harald Koch) Date: Wed, 1 Oct 2014 20:07:35 -0400 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: Trust a bunch of men to be discussing this. Yeesh. We're the problem, not the solution. This is not a technology problem, and not likely one that can be solved by technology; consent can be revoked *at any time*, including when the previously willing participant doesn't have access to any technology. -- Harald -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis.hamilton at acm.org Wed Oct 1 20:36:04 2014 From: dennis.hamilton at acm.org (Dennis E. Hamilton) Date: Wed, 1 Oct 2014 17:36:04 -0700 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: <542B5D96.7070006@iang.org> References: <542B5D96.7070006@iang.org> Message-ID: <00c101cfddd8$da3e63b0$8ebb2b10$@acm.org> below. -----Original Message----- From: cryptography [mailto:cryptography-bounces+dennis.hamilton=acm.org at metzdowd.com] On Behalf Of ianG Sent: Tuesday, September 30, 2014 18:49 To: Henry Baker; cryptography at metzdowd.com Subject: Re: [Cryptography] Cryptography for consensual sex in California ? On 30/09/2014 06:57 am, Henry Baker wrote: > With California's new "yes means yes" law, how would you design a protocol for engaging in consensual sex, which would authenticate the parties' consents, which protected their privacy, but which couldn't be subsequently repudiated ? [ ... ] Seeking a non-repudiation scheme is not going to work. There is a misunderstanding about what the law establishes. I.e., "Lack of protest or resistance does not mean consent," the law states, "nor does silence mean consent. 
Affirmative consent must be ongoing throughout a sexual activity and can be revoked at any time." From . The law is here: . Note that No still means No, even after a yes. The point is that without any explicit yes at all, initiation of sex is a very bad idea. From huitema at huitema.net Wed Oct 1 21:05:12 2014 From: huitema at huitema.net (Christian Huitema) Date: Wed, 1 Oct 2014 18:05:12 -0700 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: <000201cfdddc$ebd5e180$c381a480$@huitema.net> >> With California's new "yes means yes" law, how would you design a >> protocol for engaging in consensual sex, which would authenticate the >> parties' consents, which protected their privacy, but which couldn't be >> subsequently repudiated ? > > When I saw the Subject: I thought I'd blundered onto the wrong list... Byzantine Generals having Cryptic Sex? -- Christian Huitema From l at odewijk.nl Wed Oct 1 20:12:07 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Thu, 2 Oct 2014 02:12:07 +0200 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: References: Message-ID: Just to be clear: I'll never accept having to open an app before being allowed to have sex. In fact I think required affirmative verbalized and specific consent is indecent in a way that prevents freely enjoyed sex, it's something sporadic more often than not because it depends upon fleeting emotions. And the system wouldn't make legal sense until enforced (iow: "yes means yes if parties involved presigned a contract"), leaving it thoroughly without usecase. (That was the main point in this e-mail. The crypto might find application elsewhere, of course.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ji at tla.org Wed Oct 1 21:53:30 2014 From: ji at tla.org (John Ioannidis) Date: Wed, 1 Oct 2014 18:53:30 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: References: Message-ID: On Wed, Oct 1, 2014 at 4:28 PM, Tony Arcieri wrote: > You could take a hash of some content, use that hash as a Bitcoin private > key, and send the associated public key 1 Satoshi. You could then prove you > calculated that hash at a given time by looking up the associated public key > in the block chain. > > Here's something that does just that: > > https://www.btproof.com/ > > -- > Tony Arcieri > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography The problem was solved a very long time ago by Haber et al: http://dl.acm.org/citation.cfm?id=705358 From pgut001 at cs.auckland.ac.nz Wed Oct 1 23:03:51 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Thu, 02 Oct 2014 16:03:51 +1300 Subject: [Cryptography] Internet of Things and small cheap ASICs? In-Reply-To: Message-ID: Bill Cox writes: >Personally, the Internet of Things seems to have a major security problem. I >personally do not plan to hook my thermostat to the Internet any time soon, >for example. Can anyone point me to the best papers describing how to >actually secure the IoT? That question would take a small essay to answer (even defining IoT would take a small essay, I'm going to map it to SCADA-like systems rather than a Twitter feed to the LCD panel on your fridge), so I'll just reply with a few bullet points to cover the main issues: * The infrastructure is stuck at about the Windows 95 level of security, and isn't getting any better. * There's no obvious driver for improvement. 
With Win95 (and NT) it was global worms and the fact that you had one of these things on every desktop, but if your thermostat reboots itself every now and then because it's part of a botnet no-one will notice or care much. * Availability and safety trump security in every case. Having a hundred-ton hydraulic press take someone's fingers off because of an expired certificate (although I'm not quite sure how that particular case could happen) is a no-no. * After availability and safety comes cost. Security comes in at about position 100 in the feature priority list, with the first 80 slots being taken up by "availability/safety". * The security model for IoT (in the form of SCADA-like devices) has always been not to hook them up to a WAN. Unroutable serial protocols helped here. For more recent devices, the security model is "block it at the firewall". * Oh, and assume it's insecure by design. You'll rarely be disappointed. * To finally answer the question, see any work on securing things, the OWASP guides, static source code analysis tools, a roomful of books on secure coding and pen-testing, etc. Peter. From dennis.hamilton at acm.org Thu Oct 2 02:10:31 2014 From: dennis.hamilton at acm.org (Dennis E. Hamilton) Date: Wed, 1 Oct 2014 23:10:31 -0700 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: <00c101cfddd8$da3e63b0$8ebb2b10$@acm.org> References: <542B5D96.7070006@iang.org> <00c101cfddd8$da3e63b0$8ebb2b10$@acm.org> Message-ID: <00fd01cfde07$939cae10$bad60a30$@acm.org> addition below -----Original Message----- From: cryptography [mailto:cryptography-bounces+dennis.hamilton=acm.org at metzdowd.com] On Behalf Of Dennis E. Hamilton Sent: Wednesday, October 1, 2014 17:36 To: 'ianG'; 'Henry Baker'; cryptography at metzdowd.com Subject: Re: [Cryptography] Cryptography for consensual sex in California ? below. 
-----Original Message----- From: cryptography [mailto:cryptography-bounces+dennis.hamilton=acm.org at metzdowd.com] On Behalf Of ianG Sent: Tuesday, September 30, 2014 18:49 To: Henry Baker; cryptography at metzdowd.com Subject: Re: [Cryptography] Cryptography for consensual sex in California ? On 30/09/2014 06:57 am, Henry Baker wrote: > With California's new "yes means yes" law, how would you design a protocol for engaging in consensual sex, which would authenticate the parties' consents, which protected their privacy, but which couldn't be subsequently repudiated ? [ ... ] Seeking a non-repudiation scheme is not going to work. There is a misunderstanding about what the law establishes. I.e., "Lack of protest or resistance does not mean consent," the law states, "nor does silence mean consent. Affirmative consent must be ongoing throughout a sexual activity and can be revoked at any time." From . The law is here: . Note that No still means No, even after a yes. The point is that without any explicit yes at all, initiation of sex is a very bad idea. I should add that the law applies specifically to duties of post-secondary institutions regulated by the State of California. It is directed toward date-rape and non-consensual sex involving college students. Finally, the way one engages in a non-repudiatable agreement is of course the same way one now does so, using digital signatures or other means. It just doesn't happen to apply in the case of the badly dubbed "yes means yes" law in California. _______________________________________________ The cryptography mailing list cryptography at metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography From pgut001 at cs.auckland.ac.nz Thu Oct 2 03:18:39 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Thu, 02 Oct 2014 20:18:39 +1300 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: Message-ID: Henry Baker writes: >Does such a standardized "crypto clock" currently exist? 
Look for work on secure/authenticated audit logs, e.g. google "cryptographically secure audit logs" (there's an awful lot of material out there, too much to post individual references to). Peter. From leichter at lrw.com Thu Oct 2 11:35:46 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 2 Oct 2014 11:35:46 -0400 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <542C3AE3.4090207@iang.org> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> Message-ID: On Oct 1, 2014, at 1:33 PM, ianG wrote: > One point for: Suite A and friends, which remains a heavily shared secret. > > One point against: In this particular place called cryptography, there > is a frequently repeated aphorism "the enemy knows my algorithm" > recently attributed as Shannon's maxim and historically as Kerckhoffs' > 2nd Principle. One can read way too much into this rule. There's a countervailing principle: Defense in depth. Your data is protected (a) by the secrecy of your algorithm; (b) by the secrecy of your code. The enemy needs *both* to read your data. Why give him one for free? Granted, the algorithm lives much longer and is much more widely distributed than any given key. So in your analyses you're going assume that the probability of the algorithm leaking is much higher than that of any given key being lost. But that doesn't change the basic assumption needed for defense in depth: That failure of any given level is *independent* of failure of any other level. NSA has traditionally favored crypto embedded in hardware. The hardware itself is subject to defense in depth. 
It's kept in secure locations, and there are mechanisms for quickly destroying it if it's about to fall into enemy hands. The hardware itself resists attack. "The enemy knows my algorithm" is akin to "the enemy will figure out my attack plan". Yes, you try to keep the attack plan secret. But it will eventually become clear to the enemy, and you'd better be prepared for what happens when it does. That doesn't mean you don't do your damnedest to keep the plans secret until the last possible moment. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From leichter at lrw.com Thu Oct 2 12:04:38 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 2 Oct 2014 12:04:38 -0400 Subject: [Cryptography] Cryptography for consensual sex in California ? In-Reply-To: <542B5D96.7070006@iang.org> References: <542B5D96.7070006@iang.org> Message-ID: <44F0BBDD-9E70-41AD-BB90-568EFC7B5EBB@lrw.com> On Sep 30, 2014, at 9:49 PM, ianG wrote: > In my threat model, we are faced with intimate aggression delivered over > an IM/chat channel. So we've decided to add a mode that BCC's the > messages encrypted to an arbitrator. If a person is unsure about the > situation, then she can hit the BCC button and carry on. If/when a > dispute arises on any question, the transcript can be pulled out, > decrypted and become part of the evidence for fact finding. Phone companies have done this (in a limited way) for years: If you are faced with a harassing phone call, you hang up and enter some key sequence. Information about the caller is saved at the phone company, which will make it available only to internal investigators or the police, and only at your request. In the past, they'd sometimes require you to sign an agreement to prosecute before they'd give you the information. This pre-dates CallerID, and the information saved is typically not blockable by the caller. 
(These days, with the wide use of private PBX's and VoIP, it's probably no longer all that useful against anyone but dumb callers.) Since CallerID has made the identification of the caller almost universally available, the old "privacy" arguments that drove the design are mainly irrelevant today. In fact, the whole mechanism is probably more or less obsolete. I'm not sure if it's even offered any more. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From drwho at virtadpt.net Thu Oct 2 13:45:17 2014 From: drwho at virtadpt.net (The Doctor) Date: Thu, 02 Oct 2014 10:45:17 -0700 Subject: [Cryptography] new wiretap resistance in iOS 8? In-Reply-To: References: Message-ID: <542D8F2D.5030400@virtadpt.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 09/30/2014 07:50 PM, Ryan Carboni wrote: > I guess we shouldn't use USB anymore. It is definitely something to worry about. On that note, the PoC implementation of malicious microcontroller firmware (not a file system image) can be checked out of Github from here for experimentation, development, and weaponization: https://github.com/adamcaudill/Psychson So much for sleeping well tonight... - -- The Doctor [412/724/301/703] [ZS] Developer, Project Byzantium: http://project-byzantium.org/ PGP: 0x807B17C1 / 7960 1CDC 85C9 0B63 8D9F DD89 3BD8 FF2B 807B 17C1 WWW: https://drwho.virtadpt.net/ To the makers of music -- all worlds, all times. 
-----BEGIN PGP SIGNATURE----- iQIcBAEBCgAGBQJULY8sAAoJED1np1pUQ8RkDjwP/iRAa4cH1coWtFUwjm0SGtuS dOfw5HSQduPERmxz0MzgkZNva4m3FiUFgmAeVjpTxALTTkMdOQNGiUrDOY2K3wSU LY/0+33dbVwyQ9wgcjSlj6LOM9tT1mz/+SiNHKPJ57UHFyMv3h4mxXqoViM9R6Fz 1vBdWVgX3kYlyUaG0LNNIROLpHlD8xleenaSvZsM6wyb+e3avwNuRdBxCaMxKyv3 BAAFGT37039TFlN1gYlKomY4lqeevlQbeUaj1hjuusP2JKgVG3KDVZehhb4QVCAa bP8pCzWyPjLtXMW2ajYJlMI2JUEgF14BsE3g6EtprkK6DF3y+YOcsXOmRB1UZofB ZhxxkwEWAV9TqkPSf1OzZN6pg7Z7uBVweAUKf+fr+7p5+Xtq/EM35R2YHXkJQSyU ZizyqycxsKwSZK6X4gcsQsIKVV6yvMuvx/aQ6tzhPlWe5HgZqDv+4zklZwY+kdp1 1SfLnBOBPgA7cz/UGv5mw1qD1LSilH944N/Tdg23WuSP1lMBGEXUwFYfV0/0huN7 4lGwZcBJ3c3t0o/2ETWNr64VhWxVT38mohEoX2iCaniNBINcKEDRd5vrJ1NIdDco 6vn033XNQl87gKL72Q6EuAVtk22JSF44Bx3jM4YdUi+cuLLlgw9oiHc1b4fG1fK2 k8q7SpamxpgpDbTk64nA =sKHp -----END PGP SIGNATURE----- From gnu at toad.com Thu Oct 2 05:32:03 2014 From: gnu at toad.com (John Gilmore) Date: Thu, 02 Oct 2014 02:32:03 -0700 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <542C3AE3.4090207@iang.org> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> Message-ID: <201410020932.s929W3pE009125@new.toad.com> > One point for: Suite A and friends, which remains a heavily shared secret. > ... > I guess the various well-funded enemies have figured out each other's > secret algorithms by now, but out of politeness and common interest they > cartelise the secrets. Let's go a bit deeper into this. Politeness? Common interest? Really? Suppose Nation X reveals big Nation U's sooper secret crypto algorithms. 
Then Nation U is embarrassed -- and possibly has to go to great trouble and expense to update all their crypto algorithms. The only time Nation X has a real interest in keeping the algorithms secret is when Nation X has cracked them and doesn't want Nation U to know it yet, since they might change to an as-yet-uncracked system. But if Nation U is running its spooks on crackable crypto, in these days of gigahertz fingernail-sized embedded systems, Nation U's secret bureaucracy is sounding new lows in incompetence. It's likely that Nation X could get away with revealing the secret algorithms without implicating themselves; they could find some hacker, academic, activist, freedom-of-information maven, journalist, or someone else to actually do the public posting. They may only have to gently steer some of these folks in the direction of asking the question, or to finding information that has been left lying around on some obscure user-contributed web site from some long-dormant IP address. Or the classic brown paper envelope that "fell off a truck". So what's the real reason? "It just isn't done"? Come on, these guys do every other *&$(#)! thing they aren't supposed to do -- why not this one? John From jya at pipeline.com Thu Oct 2 07:37:49 2014 From: jya at pipeline.com (John Young) Date: Thu, 02 Oct 2014 07:37:49 -0400 Subject: [Cryptography] Retired NSA Technical Director Explains Snowden Docs Message-ID: Retired NSA Technical Director Explains Snowden Docs http://www.alexaobrien.com/secondsight/wb/binney.html Best account yet of the Snowden releases by a technically capable person. Eventually, perhaps, the other 96% will receive similar public disclosure to fully inform beyond opportunistic journalism. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natanael.l at gmail.com Thu Oct 2 14:50:15 2014 From: natanael.l at gmail.com (Natanael) Date: Thu, 2 Oct 2014 20:50:15 +0200 Subject: [Cryptography] NSA versus DES etc.... 
In-Reply-To: <201410020932.s929W3pE009125@new.toad.com> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> Message-ID: Den 2 okt 2014 20:42 skrev "John Gilmore" : > So what's the real reason? "It just isn't done"? Come on, these > guys do every other *&$(#)! thing they aren't supposed to do -- why > not this one? One plausible explanation is that their custom crypto is so mundane that nobody cares. Could be custom Rijndael variants tweaked for extremely specific use cases. Could be algorithms designed for hardware primarily, with overhead (due to tempest resistance) most people won't accept. Another is that every one of them have been leaked, but without strong evidence or any convincing reason for experienced cryptographers to take a close look, nobody noticed. You might be able to find NSA's algorithms openly on Russian forums, but without knowing where they come from or what they do, or if there's anything special about them in the first place. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Thu Oct 2 15:20:38 2014 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 3 Oct 2014 05:20:38 +1000 (EST) Subject: [Cryptography] Cryptography for consensual sex in California ? 
In-Reply-To: <44F0BBDD-9E70-41AD-BB90-568EFC7B5EBB@lrw.com> References: <542B5D96.7070006@iang.org> <44F0BBDD-9E70-41AD-BB90-568EFC7B5EBB@lrw.com> Message-ID: On Thu, 2 Oct 2014, Jerry Leichter wrote: > Phone companies have done this (in a limited way) for years: If you are > faced with a harassing phone call, you hang up and enter some key > sequence. Information about the caller is saved at the phone company, > which will make it available only to internal investigators or the > police, and only at your request. In the past, they'd sometimes require > you to sign an agreement to prosecute before they'd give you the > information. In Australia at least, you have to report it yourself (there is no magic code) and you must agree to prosecute before they'll even investigate. [...] > Since CallerID has made the identification of the caller almost > universally available, the old "privacy" arguments that drove the design > are mainly irrelevant today. In fact, the whole mechanism is probably > more or less obsolete. I'm not sure if it's even offered any more. I have a firm policy of never answering calls if the number is blocked; they get to talk to the machine instead.[0] This also means that I never answer overseas calls, but then again I don't know anyone overseas; almost all call centres are now located thus, however. [0] Which leads to an amusing exchange whenever a robo-caller gets me; they talk right over the top of the announcement[1] making it utterly illegible, and I usually wind up with "Press 1 to accept this call" on the recording. [1] I always stick with the generic announcement; that way, they don't hear my actual voice[2]. [2] Yes, I've had people stalking me. Jeeze, all they have to do is leave a sodding message, and I will call them back on *my* sixpence. I guess that they don't want me to... 
-- Dave From outer at interlog.com Thu Oct 2 15:25:18 2014 From: outer at interlog.com (Richard Outerbridge) Date: Thu, 2 Oct 2014 15:25:18 -0400 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <201410020932.s929W3pE009125@new.toad.com> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> Message-ID: <2E639FB9-D0E2-445F-A6C1-C44CAF465D38@interlog.com> On 2014-10-02 (275), at 05:32:03, John Gilmore wrote: >> One point for: Suite A and friends, which remains a heavily shared secret. >> ... >> I guess the various well-funded enemies have figured out each other's >> secret algorithms by now, but out of politeness and common interest they >> cartelise the secrets. > > Let's go a bit deeper into this. Politeness? Common interest? Really? > > Suppose Nation X reveals big Nation U's sooper secret crypto > algorithms. Then Nation U is embarrassed -- and possibly has to go to > great trouble and expense to update all their crypto algorithms. Both the USA & USSR relied on keeping rotor wirings for the KL-7 and the FIALKA secret, the USSR less so, since theirs never changed, and the USA more so, since theirs did on a regular basis. We know that the USSR regularly obtained at least some of the USA Naval KL-7 wirings through the John Walker spy ring. We don't know what those were. To this day we have no public knowledge of any actual KL-7 wiring subnet sets.
We have no evidence that the USA ever had similar cognizance of any USSR subnet wirings at the time, though the fact that we do know the wirings of the fixed Hungarian, Czechoslovakian and Polish FIALKA rotors perhaps suggests they did. __outer From leichter at lrw.com Thu Oct 2 16:17:52 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 2 Oct 2014 16:17:52 -0400 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <201410020932.s929W3pE009125@new.toad.com> References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> Message-ID: On Oct 2, 2014, at 5:32 AM, John Gilmore wrote: >> ... >> I guess the various well-funded enemies have figured out each other's >> secret algorithms by now, but out of politeness and common interest they >> cartelise the secrets. > > Let's go a bit deeper into this. Politeness? Common interest? Really? A friend (Martin Minow, in case anyone here remembers him) years ago told me a story he heard in Sweden. The rivers and bays along the Swedish coast are extremely tricky to navigate, being full of underwater canyons and ridges. For many years, the maps of some of the major ports were considered to be essential state secrets, a means of defense against (mainly Soviet) naval attack. If you wanted to sail into one of these harbors, you had best get a Swedish pilot, who had access to the maps but would ban you from watching while he used them. One day, a number of Soviet vessels arrived for some kind of political port visit.
The Soviets had always asked for Swedish pilots in the past - but for some reason this time they went ahead and sailed up river at speed, easily navigating around the not-quite-so-secret underwater obstacles. This was certainly a decision made at a high level in the Soviet Navy, if not at political levels, though as far as I know, exactly *why* they chose to do it has never been explained. The Swedes, being reasonable people, stopped classifying the maps. (If they had followed current American practice, anyone with a government position would have been required to continue to treat the information as secret.) -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From fergdawgster at mykolab.com Thu Oct 2 15:59:15 2014 From: fergdawgster at mykolab.com (Paul Ferguson) Date: Thu, 02 Oct 2014 12:59:15 -0700 Subject: [Cryptography] new wiretap resistance in iOS 8? In-Reply-To: <542D8F2D.5030400@virtadpt.net> References: <542D8F2D.5030400@virtadpt.net> Message-ID: <542DAE93.1010101@mykolab.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 10/2/2014 10:45 AM, The Doctor wrote: > On 09/30/2014 07:50 PM, Ryan Carboni wrote: > >> I guess we shouldn't use USB anymore. > > It is definitely something to worry about. On that note, the PoC > implementation of malicious microcontroller firmware (not a file > system image) can be checked out of Github from here for > experimentation, development, and weaponization: > > https://github.com/adamcaudill/Psychson > > So much for sleeping well tonight... 
See also: http://www.wired.com/2014/10/code-published-for-unfixable-usb-attack/ - - ferg - -- Paul Ferguson VP Threat Intelligence, IID PGP Public Key ID: 0x54DC85B2 Key fingerprint: 19EC 2945 FEE8 D6C8 58A1 CE53 2896 AC75 54DC 85B2 -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iF4EAREIAAYFAlQtrpMACgkQKJasdVTchbK40wEA0NfH0ulIJvhph966TkJL0SSI f+bxPnElRigDV1mkTOEA/idmv0DU+kggW18x4JCyC/JAe+sj6ZVmfzkNhX7hF1w6 =7LXS -----END PGP SIGNATURE----- From jresch at cleversafe.com Thu Oct 2 18:53:36 2014 From: jresch at cleversafe.com (Jason Resch) Date: Thu, 2 Oct 2014 17:53:36 -0500 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function Message-ID: <542DD770.2050007@cleversafe.com> Assuming there was a secure cryptographic function H() with an output of L bits, what attacks or weaknesses would exist in a protocol that did the following: Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... ^ H(B_N || C_N) ^ H(N) Where B_0 through B_N are the blocks (of size L) constituting the message and C_0 through C_N are L-bit counters. One problem seems to be that if any collision can be found for a given H(X || C_i) and H(Y || C_i), it leads to an essentially infinite number of collisions (any message that contains X as a block can have that block replaced with Y), but what other vulnerabilities does this construction have that would make it unsuitable as a general purpose cryptographic hash function? Thanks for your expertise. 
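For concreteness, the proposed construction can be sketched in a few lines of Python. SHA-256 stands in for H, and the counters C_i and length N are encoded as L-byte big-endian integers -- those encoding details are illustrative assumptions, not part of the proposal:

```python
import hashlib
from functools import reduce

L = 32  # with SHA-256 standing in for H, outputs are L = 32 bytes

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def digest(blocks):
    """Digest = H(B_0||C_0) ^ H(B_1||C_1) ^ ... ^ H(B_N||C_N) ^ H(N).
    Every term is independent of the others, which is what makes the
    scheme embarrassingly parallel."""
    N = len(blocks) - 1  # N is the index of the last block
    terms = [H(B + i.to_bytes(L, 'big')) for i, B in enumerate(blocks)]
    return reduce(xor, terms, H(N.to_bytes(L, 'big')))

msg = [bytes([i]) * L for i in range(4)]  # four L-byte blocks
d = digest(msg)
assert len(d) == L
assert digest([msg[1], msg[0]]) != d  # the counters make order matter
```

Because the per-block terms combine with plain XOR, any term can also be cancelled by XORing it back out -- which is the root of the attacks discussed in the replies.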
Jason From jkatz at cs.umd.edu Thu Oct 2 19:50:17 2014 From: jkatz at cs.umd.edu (Jonathan Katz) Date: Thu, 2 Oct 2014 19:50:17 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <542DD770.2050007@cleversafe.com> References: <542DD770.2050007@cleversafe.com> Message-ID: On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch wrote: > Assuming there was a secure cryptographic function H() with an output of L > bits, what attacks or weaknesses would exist in a protocol that did the > following: > > > Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... ^ H(B_N || > C_N) ^ H(N) > > > Where B_0 through B_N are the blocks (of size L) constituting the message > and C_0 through C_N are L-bit counters. > > One problem seems to be that if any collision can be found for a given H(X > || C_i) and H(Y || C_i), it leads to an essentially infinite number of > collisions (any message that contains X as a block can have that block > replaced with Y), but what other vulnerabilities does this construction > have that would make it unsuitable as a general purpose cryptographic hash > function? > > Thanks for your expertise. > There are several issues. Most obvious is that your hash is homomorphic, i.e., digest(B_0, B_1) ^ digest(B'_0, B_1) ^ digest(B_0, B'_1) = digest(B'_0, B'_1) Also, collisions in your hash function can be found in faster than square-root time using Wagner's generalized birthday attack. -------------- next part -------------- An HTML attachment was scrubbed... 
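[Editorial note: the homomorphism Katz describes can be checked mechanically. A small Python sketch, with SHA-256 standing in for H and an L-byte big-endian counter encoding chosen purely for illustration:]

```python
import hashlib
from functools import reduce

L = 32  # SHA-256 output length in bytes

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def digest(blocks):
    # Digest = H(B_0||C_0) ^ ... ^ H(B_N||C_N) ^ H(N), per the proposal
    N = len(blocks) - 1
    terms = [H(B + i.to_bytes(L, 'big')) for i, B in enumerate(blocks)]
    return reduce(xor, terms, H(N.to_bytes(L, 'big')))

B0, B1 = b'A' * L, b'B' * L
B0p, B1p = b'C' * L, b'D' * L

# XOR of three related digests yields the fourth; the duplicated
# per-block terms (and two of the three H(1) terms) cancel pairwise.
lhs = xor(xor(digest([B0, B1]), digest([B0p, B1])), digest([B0, B1p]))
assert lhs == digest([B0p, B1p])
```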
URL: From coruus at gmail.com Thu Oct 2 20:17:30 2014 From: coruus at gmail.com (David Leon Gil) Date: Thu, 2 Oct 2014 20:17:30 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <542DD770.2050007@cleversafe.com> References: <542DD770.2050007@cleversafe.com> Message-ID: On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch wrote: > Assuming there was a secure cryptographic function H() with an output of L > bits, what attacks or weaknesses would exist in a protocol that did the > following: You're probably better off using a construction that has been designed to be a sound "tree hashing" mode. E.g., the Keccak team's Sakura tree hash coding: http://keccak.noekeon.org/Sakura.pdf From ryacko at gmail.com Thu Oct 2 21:08:00 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Thu, 2 Oct 2014 18:08:00 -0700 Subject: [Cryptography] Best Internet crypto clock ? Message-ID: Link to one of the following: https://beacon.nist.gov/home Or use Bitcoin block 000000000000000012e01b574d2244e0182f64ed986513f0a049e01941950c2b https://blockchain.info/block-index/470728/000000000000000012e01b574d2244e0182f64ed986513f0a049e01941950c2b -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandyinchina at gmail.com Thu Oct 2 22:37:01 2014 From: sandyinchina at gmail.com (Sandy Harris) Date: Thu, 2 Oct 2014 22:37:01 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> Message-ID: There has been a lot of work on parallelizable hashing. Web search for "tree hashing" will turn up much of it. Several of the SHA-3 competition candidates, including at least the winner Keccak and finalist Skein, had discussions in their submissions of how to do a tree hash with their algorithm. 
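[Editorial note: the tree-hashing idea referred to above can be illustrated with a toy binary tree hash. This is only a sketch of the general shape -- it is not the Sakura coding or the Skein/BLAKE2 tree modes, which specify careful framing of leaves, interior nodes, and parameters:]

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(chunks):
    """Toy binary tree hash: hash each leaf (the leaves are independent
    and can be hashed in parallel), then hash pairs level by level up
    to a single root. A prefix byte domain-separates leaves from
    interior nodes, as real tree modes do, so a leaf value cannot be
    confused with a subtree root."""
    level = [H(b'\x00' + c) for c in chunks]  # leaf hashes
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i] + (level[i + 1] if i + 1 < len(level) else b'')
            nxt.append(H(b'\x01' + pair))  # interior node
        level = nxt
    return level[0]

msg = [b'block%d' % i for i in range(8)]
root = tree_hash(msg)
assert len(root) == 32
```

Note the roll-up cost: the leaves parallelize perfectly, but the levels above them impose the logarithmic combining delay discussed later in the thread.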
From dave at horsfall.org Fri Oct 3 01:14:41 2014 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 3 Oct 2014 15:14:41 +1000 (EST) Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> Message-ID: On Thu, 2 Oct 2014, Jerry Leichter wrote: > A friend (Martin Minow, in case anyone here remembers him) years ago > told me a story he heard in Sweden. The rivers and bays along the > Swedish coast are extremely tricky to navigate, being full of underwater > canyons and ridges. For many years, the maps of some of the major ports > were considered to be essential state secrets, a means of defense > against (mainly Soviet) naval attack. If you wanted to sail into one of > these harbors, you had best get a Swedish pilot, who had access to the > maps but would ban you from watching while he used them. And didn't the Swedes find a Russian sub in their waters some years back? > One day, a number of Soviet vessels arrived for some kind of political > port visit. The Soviets had always asked for Swedish pilots in the past > - but for some reason this time they went ahead and sailed up river at > speed, easily navigating around the not-quite-so-secret underwater > obstacles. I'm guessing that the Russians had been mapping the waters for years with their sonar; see above about a sub being caught. > This was certainly a decision made at a high level in the Soviet Navy, > if not at political levels, though as far as I know, exactly *why* they > chose to do it has never been explained. 
To show that they could? The old aphorism about locking a flimsy front door springs to mind. -- Dave From hbaker1 at pipeline.com Fri Oct 3 01:39:00 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Thu, 02 Oct 2014 22:39:00 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: References: Message-ID: At 06:08 PM 10/2/2014, Ryan Carboni wrote: >Link to one of the following: > >https://beacon.nist.gov/home >Or use Bitcoin block >000000000000000012e01b574d2244e0182f64ed986513f0a049e01941950c2b > >https://blockchain.info/block-index/470728/000000000000000012e01b574d2244e0182f64ed986513f0a049e01941950c2b Thanks very much for these links; I had assumed that NIST would be doing something like this. However, while these "clocks" are very interesting, how can I easily transform these values into GMT times & vice versa? Also, I'm fairly confident that the bitcoin blockchain can't be hacked, because that would take an extremely well-heeled adversary, but I have no such confidence in the NIST values. From ryacko at gmail.com Fri Oct 3 02:29:48 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Thu, 2 Oct 2014 23:29:48 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: References: Message-ID: "Each such value is sequence-numbered, time-stamped and signed, and includes the hash of the previous value to chain the sequence of values together and prevent even the source to retroactively change an output package without being detected." They are both block chains. And they both include the time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Fri Oct 3 09:42:39 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 03 Oct 2014 06:42:39 -0700 Subject: [Cryptography] Best Internet crypto clock ? 
In-Reply-To: References: Message-ID: At 11:29 PM 10/2/2014, Ryan Carboni wrote: >"Each such value is sequence-numbered, time-stamped and signed, and includes the hash of the previous value to chain the sequence of values together and prevent even the source to retroactively change an output package without being detected." >They are both block chains. > >And they both include the time. So you can easily convert from cryptotime to GMT time. What is the authenticated algorithm to convert GMT time to cryptotime for a) Bitcoin blockchain; b) NIST whatever-its-called ? And unlike Bitcoin, where millions of processors are working very hard to make sure that it can't be hacked, where are those millions of processors to make sure that the NIST chain can't be hacked? From jkatz at cs.umd.edu Fri Oct 3 13:25:34 2014 From: jkatz at cs.umd.edu (Jonathan Katz) Date: Fri, 3 Oct 2014 13:25:34 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <542EDA54.4010904@cleversafe.com> References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> Message-ID: On Fri, Oct 3, 2014 at 1:18 PM, Jason Resch wrote: > On 10/02/2014 06:50 PM, Jonathan Katz wrote: > > On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch wrote: > >> Assuming there was a secure cryptographic function H() with an output of >> L bits, what attacks or weaknesses would exist in a protocol that did the >> following: >> >> >> Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... ^ H(B_N || >> C_N) ^ H(N) >> >> >> Where B_0 through B_N are the blocks (of size L) constituting the message >> and C_0 through C_N are L-bit counters. 
>> >> One problem seems to be that if any collision can be found for a given >> H(X || C_i) and H(Y || C_i), it leads to an essentially infinite number of >> collisions (any message that contains X as a block can have that block >> replaced with Y), but what other vulnerabilities does this construction >> have that would make it unsuitable as a general purpose cryptographic hash >> function? >> >> Thanks for your expertise. >> > > There are several issues. Most obvious is that your hash is homomorphic, > i.e., > digest(B_0, B_1) ^ digest(B'_0, B_1) ^ digest(B_0, B'_1) = > digest(B'_0, B'_1) > > > But here you are not using counter values in the digest calculation. Is > there a way to determine any kind of homomorphism when no collisions can be > found in H()? > Yes, I was. By digest(., .), I meant to apply your scheme. > > Also, collisions in your hash function can be found in faster than > square-root time using Wagner's generalized birthday attack. > > > Interesting, thanks for pointing this out. If I interpret the improvement > of the GBA correctly, does that mean the time complexity to find a > collision is N^(L/2) / L vs. N^(L/2)? > > Thanks, > > Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jresch at cleversafe.com Fri Oct 3 13:18:12 2014 From: jresch at cleversafe.com (Jason Resch) Date: Fri, 3 Oct 2014 12:18:12 -0500 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> Message-ID: <542EDA54.4010904@cleversafe.com> On 10/02/2014 06:50 PM, Jonathan Katz wrote: > On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch > wrote: > > Assuming there was a secure cryptographic function H() with an > output of L bits, what attacks or weaknesses would exist in a > protocol that did the following: > > > Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... 
^ > H(B_N || C_N) ^ H(N) > > > Where B_0 through B_N are the blocks (of size L) constituting the > message and C_0 through C_N are L-bit counters. > > One problem seems to be that if any collision can be found for a > given H(X || C_i) and H(Y || C_i), it leads to an essentially > infinite number of collisions (any message that contains X as a > block can have that block replaced with Y), but what other > vulnerabilities does this construction have that would make it > unsuitable as a general purpose cryptographic hash function? > > Thanks for your expertise. > > > There are several issues. Most obvious is that your hash is > homomorphic, i.e., > digest(B_0, B_1) ^ digest(B'_0, B_1) ^ digest(B_0, B'_1) = > digest(B'_0, B'_1) But here you are not using counter values in the digest calculation. Is there a way to determine any kind of homomorphism when no collisions can be found in H()? > > Also, collisions in your hash function can be found in faster than > square-root time using Wagner's generalized birthday attack. Interesting, thanks for pointing this out. If I interpret the improvement of the GBA correctly, does that mean the time complexity to find a collision is N^(L/2) / L vs. N^(L/2)? Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jresch at cleversafe.com Fri Oct 3 13:23:16 2014 From: jresch at cleversafe.com (Jason Resch) Date: Fri, 3 Oct 2014 12:23:16 -0500 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> Message-ID: <542EDB84.8040509@cleversafe.com> On 10/02/2014 07:17 PM, David Leon Gil wrote: > On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch wrote: >> Assuming there was a secure cryptographic function H() with an output of L >> bits, what attacks or weaknesses would exist in a protocol that did the >> following: > You're probably better off using a construction that has been designed > to be a sound "tree hashing" mode. 
E.g., the Keccak team's Sakura tree > hash coding: http://keccak.noekeon.org/Sakura.pdf David, Sandy, Thanks for these resources on tree hashing. I was considering a case where small changes between very large messages M and M' could be computed efficiently to produce an updated hash value. Am I correct that tree hashing doesn't support this without using a lot of extra memory to store the intermediate hash values? Jason From leichter at lrw.com Fri Oct 3 11:15:43 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 3 Oct 2014 11:15:43 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> Message-ID: <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> On Oct 2, 2014, at 10:37 PM, Sandy Harris wrote: > There has been a lot of work on parallelizable hashing. Web search for > "tree hashing" will turn up much of it. Several of the SHA-3 > competition candidates, including at least the winner Keccak and > finalist Skein, had discussions in their submissions of how to do a > tree hash with their algorithm. Keep in mind that "parallelizable" is often taken to mean "linear in the number of available processors". No tree algorithm is "parallelizable" in this sense - it has a logarithmic delay to roll up the results. Some encryption modes - e.g., CTR mode - *are* parallelizable in this strong sense. But it shouldn't be hard to prove that hashing can't possibly be. (In fact, I suspect that just the requirement that flipping any bit of the input has a 50% chance of flipping any given bit of the output should be enough to show that.) -- Jerry -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From philipp at jovanovic.io Fri Oct 3 09:31:48 2014 From: philipp at jovanovic.io (Philipp Jovanovic) Date: Fri, 3 Oct 2014 15:31:48 +0200 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> Message-ID: <8FBA2CEE-2EC6-44C2-A249-F51BFDE7C4BC@jovanovic.io> On 03 Oct 2014, at 04:37, Sandy Harris wrote: > There has been a lot of work on parallelizable hashing. Web search for > "tree hashing" will turn up much of it. Several of the SHA-3 > competition candidates, including at least the winner Keccak and > finalist Skein, had discussions in their submissions of how to do a > tree hash with their algorithm. BLAKE2 is another hash function having a parallel hashing mode. Implementations can be found on https://blake2.net/ All the best, Philipp -------------- next part -------------- An HTML attachment was scrubbed... URL: From jresch at cleversafe.com Fri Oct 3 14:16:43 2014 From: jresch at cleversafe.com (Jason Resch) Date: Fri, 3 Oct 2014 13:16:43 -0500 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> Message-ID: <542EE80B.3090406@cleversafe.com> On 10/03/2014 12:25 PM, Jonathan Katz wrote: > On Fri, Oct 3, 2014 at 1:18 PM, Jason Resch > wrote: > > On 10/02/2014 06:50 PM, Jonathan Katz wrote: >> On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch >> > wrote: >> >> Assuming there was a secure cryptographic function H() with >> an output of L bits, what attacks or weaknesses would exist >> in a protocol that did the following: >> >> >> Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... >> ^ H(B_N || C_N) ^ H(N) >> >> >> Where B_0 through B_N are the blocks (of size L) constituting >> the message and C_0 through C_N are L-bit counters. 
>> >> One problem seems to be that if any collision can be found >> for a given H(X || C_i) and H(Y || C_i), it leads to an >> essentially infinite number of collisions (any message that >> contains X as a block can have that block replaced with Y), >> but what other vulnerabilities does this construction have >> that would make it unsuitable as a general purpose >> cryptographic hash function? >> >> Thanks for your expertise. >> >> >> There are several issues. Most obvious is that your hash is >> homomorphic, i.e., >> digest(B_0, B_1) ^ digest(B'_0, B_1) ^ digest(B_0, B'_1) = >> digest(B'_0, B'_1) > > But here you are not using counter values in the digest > calculation. Is there a way to determine any kind of homomorphism > when no collisions can be found in H()? > > > Yes, I was. By digest(., .), I meant to apply your scheme. Okay I see how that works now. That is an interesting property, but can it be used to undermine the security of any typical applications of hash functions? Thanks, Jason >> >> Also, collisions in your hash function can be found in faster >> than square-root time using Wagner's generalized birthday attack. > > Interesting, thanks for pointing this out. If I interpret the > improvement of the GBA correctly, does that mean the time > complexity to find a collision is N^(L/2) / L vs. N^(L/2)? > > Thanks, > > Jason > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd at av8n.com Fri Oct 3 14:43:12 2014 From: jsd at av8n.com (John Denker) Date: Fri, 03 Oct 2014 11:43:12 -0700 Subject: [Cryptography] cryptologic proof-of-life ... was: crypto clock In-Reply-To: References: Message-ID: <542EEE40.7070904@av8n.com> On 10/01/2014 05:45 AM, Henry Baker wrote: > In old B/W movies, when a person was kidnapped, the kidnapper sent a > photo of the person together with a picture of the front page of > today's newspaper to prove that he had the kidnapped person _on or > after the date_ of the newspaper. Hmmmm. 
> one could presumably do the same thing with a crypto hash I am not ready to presume that. None of the messages in the "Best Internet crypto clock" thread have addressed this use-case. As others have noted, there are any number of ways of applying a cryptologic time-stamp to a message (such as an image of the hostage) ... but none of them offer any assurance that the message itself is not deceptive. Post-dating is only one of innumerable possible deceptions. Specifically, I could prepare an image of Elvis holding today's New York Times in one hand and the latest NIST beacon number plus a bunch of Pick Six lottery numbers in the other hand. Alas this does not prove that Elvis is alive. As a tangentially related matter: Duress codes have been part of cryptology for centuries. Reference: Excellent book: Leo Marks _Between Silk and Cyanide_ Proof-of-life falls into a weird intermediate category: "I'm under duress so you can't trust what I say, except for my proof-of-life claim." This is a subset of the infinitely-tricky "double agent" problem: you know your guy has been captured, but you are trying to double him, and you think/wish/hope he can tell you which of his messages are believable and which not. =========== To answer a question that wasn't asked: The /opposite/ functionality exists: It is possible to use crypto to prove that a certain message was prepared /before/ a certain date and has not been tampered with since. I've been using this idea for decades. As a particularly simple example: Write up a description of an invention. Compute a HMAC. Send it to your patent attorney, with instructions to date-stamp it and save it in the files. This creates zero incremental risk of exposure, but can be used later to prove that your invention existed on or before the date of the email. Fancy online services along this line exist. See e.g. 
https://www.google.com/search?q=online+digital+notary+service http://www.ncbi.nlm.nih.gov/pmc/articles/PMC116325/ From outer at interlog.com Fri Oct 3 14:57:05 2014 From: outer at interlog.com (Richard Outerbridge) Date: Fri, 3 Oct 2014 14:57:05 -0400 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> <2E639FB9-D0E2-445F-A6C1-C44CAF465D38@interlog.com> Message-ID: <5E16D2FE-D94E-462E-83B2-47C7D68D78CE@interlog.com> On 2014-10-03 (276), at 02:17:14, Paul Reuvers wrote: > Hi Guys, > > The fact that we have the wirings for the HU, CZ and PO version of Fialka is due to the fact that we actually found these machines on the surplus market (well, CZ and PO that is), although at least one person was jailed for that. We did not get them from any US source. So the immediate lesson learned is that Kerckhoffs' principle is alive & well? Relying on the secrecy of the method is a fatal mistake, one the NSA apparently blithely committed from 1949 through the early 1980s with its KL-7 device. The much more important lesson is that key management matters far more than any crypto itself, at least for symmetric crypto. __outer From jsd at av8n.com Fri Oct 3 15:48:43 2014 From: jsd at av8n.com (John Denker) Date: Fri, 03 Oct 2014 12:48:43 -0700 Subject: [Cryptography] NSA versus DES etc....
In-Reply-To: References: <541A6E04.6050803@av8n.com> <201409191008.s8JA8e6u001629@home.unipay.nl> <201409200416.s8K4G1jW019462@new.toad.com> <541DE11B.6040305@av8n.com> <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> Message-ID: <542EFD9B.1020003@av8n.com> On Oct 1, 2014, at 1:33 PM, ianG wrote: >> One point for: Suite A and friends, which remains a heavily shared secret. >> >> One point against: In this particular place called cryptography, there >> is a frequently repeated aphorism "the enemy knows my algorithm" >> recently attributed as Shannon's maxim and historically as Kerckhoffs' >> 2nd Principle. OK, that's a balanced view. On 10/02/2014 08:35 AM, Jerry Leichter replied: > One can read way too much into this rule. There's a countervailing > principle: Defense in depth. Your data is protected (a) by the > secrecy of your algorithm; (b) by the secrecy of your code. The > enemy needs *both* to read your data. Why give him one for free? Defense in depth ("belt and suspenders") makes sense sometimes ... but if you go too far ("belt and suspenders and crazy glue") it can defeat important parts of your core mission. As a good rule of thumb, I tell my customers /not/ to pour crazy glue into their pants. As for crypto in particular: The opposite of Kerckhoffs's principle is called "security by obscurity" and is held in contempt by serious cryptographers and security experts. You don't need to make a virtue of disclosing the algorithm, but if you /rely/ on non-disclosure you are doing something wrong. Among other things, you will hesitate to put your best crypto into the field, for fear that the algorithm will be captured.
============= More generally, I am astonished by the amount of traffic on this list attempting to justify NSA actions that are by any objective standard unwise or illegal or both. Of course we want to /understand/ where the NSA is coming from, but that does not require rationalizing or justifying it. There are actually some rather simple ways of understanding the observed behavior. For starters: Follow the money. The US "black budget" is on the order of 50 billion dollars per year. Over the course of ten years, that starts to add up to real money, something like half a trillion dollars. That can be compared to the "nominal" cost of the Iraq war, namely a couple trillion expended so far (not counting various accrued liabilities). In any case, it stands to reason that bureaucrats will fight over the money, and fight intensely. I've seen people go nuts over a lot less than that. a) Of that, the amount spent on code /breaking/ completely dwarfs the amount spent on code /making/ i.e. information assurance (IA). So it stands to reason that in any bureaucratic knife-fight the IA guys are going to lose. b) As a related point, the guy who /benefits/ from codebreaking knows where the benefits are coming from, and is willing to pay. In contrast, the guy who /suffers/ from codebreaking is usually slow to find out what the problem is, and therefore unwilling to pay for security (until it's too late). So this is another reason why in any bureaucratic knife-fight, the IA guys are going to lose. I emphasize again: These explanations are /not/ justifications. The fact that the NSA feels obliged to lie to Congress about what they are doing indicates that even they know it is wrong. https://firstlook.org/theintercept/2014/10/02/the-nsa-and-me/ https://firstlook.org/theintercept/2014/09/29/new-intel-doc-led-astray-commonly-understood-definitions/ It is bizarre for the US taxpayers to be paying the NSA to spy on them and (!) to leave them open to spying by foreign powers.
The NSA has repeatedly taken actions that are self-defeating in terms of their stated mission. Their actions are unconstitutional. Even if they were constitutional they would be illegal. Even if they were legal they would be bad public policy. ++ [The method] should not require secrecy, and it should not be a problem if it falls into enemy hands. -- Auguste Kerckhoffs ++ The enemy knows the system. -- Claude Shannon ++ In the long run it is more important to secure one's own communications than to exploit those of the enemy. -- Frank Rowlett ++ Let's create a situation where our friends can be spied upon more easily than our enemies. -- NSA policy for 40+ years From leichter at lrw.com Fri Oct 3 15:56:46 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 3 Oct 2014 15:56:46 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <542DD770.2050007@cleversafe.com> References: <542DD770.2050007@cleversafe.com> Message-ID: <436CCF30-03F4-4112-888D-11AAC6B56E0F@lrw.com> On Oct 2, 2014, at 6:53 PM, Jason Resch wrote: > Assuming there was a secure cryptographic function H() with an output of L bits, what attacks or weaknesses would exist in a protocol that did the following: > > > Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... ^ H(B_N || C_N) ^ H(N) > > > Where B_0 through B_N are the blocks (of size L) constituting the message and C_0 through C_N are L-bit counters. This construction is insecure. Call your function J. Let A and B be any L-bit values. Then: J(A || B) = H(A || 0) ^ H(B || 1) ^ H(1) = (H(A || 0) ^ H(0)) ^ H(0) ^ H(B || 1) ^ H(1) = J(A) ^ (H(0) ^ H(1) ^ H(B || 1)) (Your use of N is a bit odd - it "feels" like it's the number of blocks, but in fact it's one less than that. This becomes obvious here when you have to add the constant length-dependent terms - with one block, I expected to add H(1)!) So if I'm given J(A) for an unknown A, I can compute J(A || B) for any B.
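Jerry's forgery can be checked numerically. A minimal sketch, instantiating the generic H() with SHA-256 and assuming a 32-byte block size and big-endian counter encoding (these concrete choices are illustrative assumptions, not part of the original proposal):

```python
import hashlib

L_BYTES = 32  # assumed concrete block size (256 bits)

def H(data: bytes) -> bytes:
    # Stand-in for the generic hash H(); SHA-256 chosen for illustration.
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ctr(i: int) -> bytes:
    # Assumed L-bit big-endian encoding of the counters C_i (and of N).
    return i.to_bytes(L_BYTES, 'big')

def J(blocks):
    # Digest = H(B_0||C_0) ^ ... ^ H(B_N||C_N) ^ H(N), with N = len(blocks)-1
    n = len(blocks) - 1
    d = H(ctr(n))
    for i, b in enumerate(blocks):
        d = xor(d, H(b + ctr(i)))
    return d

A, B = b'A' * L_BYTES, b'B' * L_BYTES
# Forge J(A || B) from J(A) alone, without needing to know A:
forged = xor(J([A]), xor(H(ctr(0)), xor(H(ctr(1)), H(B + ctr(1)))))
assert forged == J([A, B])
```

The assertion holds for any A and B, which is exactly the length-extension forgery in the derivation above.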
This is a form of length extension attack. In this case, I don't think the usual tricks for preventing length extension attacks, like replacing H by an HMAC based on H or just wrapping an additional H around the whole thing, will help you - they are meant for cases where there is a secret key that the attacker doesn't have access to, but here everything but the original message hashed is assumed to be available. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From jya at pipeline.com Fri Oct 3 16:31:26 2014 From: jya at pipeline.com (John Young) Date: Fri, 03 Oct 2014 16:31:26 -0400 Subject: [Cryptography] NSA releases two documents on decrypting CIA KRYPTOS sculpture Message-ID: NSA releases two documents on decrypting CIA KRYPTOS sculpture, formerly classified FVEY//20320108 The CIA Kryptos Sculpture Slides https://www.nsa.gov/public_info/_files/cia_kryptos_sculpture/KRYPTOS_Slides.pdf The CIA Kryptos Sculpture: A summary of Previous Work and New Revelations https://www.nsa.gov/public_info/_files/cia_kryptos_sculpture/KRYPTOS_Summary.pdf From jresch at cleversafe.com Fri Oct 3 18:30:33 2014 From: jresch at cleversafe.com (Jason Resch) Date: Fri, 3 Oct 2014 17:30:33 -0500 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <436CCF30-03F4-4112-888D-11AAC6B56E0F@lrw.com> References: <542DD770.2050007@cleversafe.com> <436CCF30-03F4-4112-888D-11AAC6B56E0F@lrw.com> Message-ID: <542F2389.1090406@cleversafe.com> On 10/03/2014 02:56 PM, Jerry Leichter wrote: > On Oct 2, 2014, at 6:53 PM, Jason Resch wrote: >> Assuming there was a secure cryptographic function H() with an output of L bits, what attacks or weaknesses would exist in a protocol that did the following: >> >> >> Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... 
^ H(B_N || C_N) ^ H(N) >> >> >> Where B_0 through B_N are the blocks (of size L) constituting the message and C_0 through C_N are L-bit counters. > This construction is insecure. Call your function J. Let A and B be any L-bit values. Then: > > J(A || B) = H(A || 0) ^ H(B || 1) ^ H(1) > = (H(A || 0) ^ H(0)) ^ H(0) ^ H(B || 1) ^ H(1) > = J(A) ^ (H(0) ^ H(1) ^ H(B || 1)) > > (Your use of N is a bit odd - it "feels" like it's the number of blocks, but in fact it's one less than that. This becomes obvious here when you have to add the constant length-dependent terms - with one block, I expected to add H(1)!) > > So if I'm given J(A) for an unknown A, I can compute J(A || B) for any B. This is a form of length extension attack. In this case, I don't think the usual tricks for preventing length extension attacks, like replacing H by an HMAC based on H or just wrapping an additional H around the whole thing, will help you - they are meant for cases where there is a secret key that the attacker doesn't have access to, but here everything but the original message hashed is assumed to be available. > -- Jerry > > Jerry, Very clever. I see now that this is clearly vulnerable to a length extension attack. However, it isn't clear to me why throwing the final result through H() as a final post-processing step wouldn't serve to address it. Jason From mitch at niftyegg.com Fri Oct 3 20:19:45 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 3 Oct 2014 17:19:45 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> Message-ID: On Fri, Oct 3, 2014 at 8:15 AM, Jerry Leichter wrote: > On Oct 2, 2014, at 10:37 PM, Sandy Harris wrote: > > There has been a lot of work on parallelizable hashing. Web search for > > "tree hashing" will turn up much of it. > ...... 
> Keep in mind that "parallelizable" is often taken to mean "linear in the > number of available processors". No tree algorithm is "parallelizable" in > this sense - it has a logarithmic delay to roll up the results. > A minor point that should not be ignored: to a programmer, a good hash table is not the same as a good crypto hash. A programmer simply wants a fast lookup with minimal misses and collisions. Most programmers do not care if a collision is moderately easy to fabricate, because they only need to get close, not match exactly, and will walk their way to the desired data (a short walk). Crypto hashes, by contrast, need to make it nearly impossible to alter the input data and spoof a match. Thus a fast hash for a Google webpage lookup is not the same design need as a fast hash for Google data that should be kept secret and private. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From gschultz at kc.rr.com Fri Oct 3 20:37:44 2014 From: gschultz at kc.rr.com (Grant Schultz) Date: Fri, 03 Oct 2014 19:37:44 -0500 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: <542F4158.8080907@kc.rr.com> On 10/2/2014 11:00 AM, cryptography-request at metzdowd.com wrote: > In old B/W movies, when a person was kidnapped, the kidnapper sent a photo of the person together with a picture of the front page of today's newspaper to prove that he had the kidnapped person _on or after the date_ of the newspaper. Are you planning on kidnapping someone? 8-) Grant -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hbaker1 at pipeline.com Fri Oct 3 22:34:26 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 03 Oct 2014 19:34:26 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <542F4158.8080907@kc.rr.com> References: <542F4158.8080907@kc.rr.com> Message-ID: At 05:37 PM 10/3/2014, Grant Schultz wrote: >On 10/2/2014 11:00 AM, cryptography-request at metzdowd.com wrote: > >>In old B/W movies, when a person was kidnapped, the kidnapper sent a photo of the person together with a picture of the front page of today's newspaper to prove that he had the kidnapped person _on or after the date_ of the newspaper. > >Are you planning on kidnapping someone? No. But I would like to see some simple, robust Internet crypto services, starting with a simple crypto clock with reasonable resolution that can't be hacked by anyone, not even the NSA. To a first approximation, the Bitcoin blockchain is the only current candidate, although at a much coarser resolution, a hash of all of the Fortune 500 daily closing stock prices would also function. If there are any other candidates -- e.g., NIST "beacons" with some less-corruptible authentication mechanism -- that have the same level of non-hackability, I'd be interested in finding out about them. From rsalz at akamai.com Sat Oct 4 11:04:48 2014 From: rsalz at akamai.com (Salz, Rich) Date: Sat, 4 Oct 2014 10:04:48 -0500 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> Message-ID: <2A0EFB9C05D0164E98F19BB0AF3708C71D2F8F84F7@USMBX1.msg.corp.akamai.com> > No. But I would like to see some simple, robust Internet crypto services, > starting with a simple crypto clock with reasonable resolution that can't be > hacked by anyone, not even the NSA. A blockchain of multiple (tier-1) NTP servers? GPS satellites? The DNS queries for sites like nsa.gov, nato.int, google.com and so on? 
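All of these suggestions share one shape: fold values from several independent public sources into a single digest, so the result is unpredictable unless every source colludes. A minimal sketch (the source values below are placeholders, not real beacon outputs):

```python
import hashlib

def combine_beacons(values):
    """Hash each source value, then hash the sorted concatenation, so the
    result does not depend on the order in which sources are polled.
    One honest (unpredictable) source suffices to make the output
    unpredictable to an attacker controlling the rest."""
    parts = sorted(hashlib.sha256(v).digest() for v in values)
    return hashlib.sha256(b''.join(parts)).hexdigest()

# Placeholder values; in practice these would be signed outputs fetched
# from independent sources (tier-1 NTP, a NIST beacon, GPS time, etc.).
sources = [b'ntp-pool-sample', b'nist-beacon-sample', b'gps-time-sample']
out = combine_beacons(sources)
assert out == combine_beacons(list(reversed(sources)))  # order-independent
```

This is only a sketch of the combination step; authenticating each source's value (signatures, transport) is the hard part the thread goes on to discuss.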
-- Principal Security Engineer, Akamai Technologies IM: rsalz at jabber.me Twitter: RichSalz From benl at google.com Sat Oct 4 13:21:19 2014 From: benl at google.com (Ben Laurie) Date: Sat, 4 Oct 2014 18:21:19 +0100 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <542EE80B.3090406@cleversafe.com> References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On 3 October 2014 19:16, Jason Resch wrote: > On Fri, Oct 3, 2014 at 1:18 PM, Jason Resch wrote: >> >> On 10/02/2014 06:50 PM, Jonathan Katz wrote: >> >> On Thu, Oct 2, 2014 at 6:53 PM, Jason Resch wrote: >>> >>> Assuming there was a secure cryptographic function H() with an output of >>> L bits, what attacks or weaknesses would exist in a protocol that did the >>> following: >>> >>> >>> Digest = H(B_0 || C_0) ^ H(B_1 || C_1) ^ H(B_2 || C_2) ^ ... ^ H(B_N || >>> C_N) ^ H(N) >>> >>> >>> Where B_0 through B_N are the blocks (of size L) constituting the message >>> and C_0 through C_N are L-bit counters. >>> >>> One problem seems to be that if any collision can be found for a given >>> H(X || C_i) and H(Y || C_i), it leads to an essentially infinite number of >>> collisions (any message that contains X as a block can have that block >>> replaced with Y), but what other vulnerabilities does this construction have >>> that would make it unsuitable as a general purpose cryptographic hash >>> function? >>> >>> Thanks for your expertise. >> >> >> There are several issues. Most obvious is that your hash is homomorphic, >> i.e., >> digest(B_0, B_1) ^ digest(B'_0, B_1) ^ digest(B_0, B'_1) = digest(B'_0, >> B'_1) >> >> >> But here you are not using counter values in the digest calculation. Is >> there a way to determine any kind of homomorphism when no collisions can be >> found in H()? > > > Yes, I was. By digest(., .), I meant to apply your scheme. > > > Okay I see how that works now. 
That is an interesting property, but can it > be used to undermine the security of any typical applications of hash > functions? Of course ... I get you to sign the first three, now I can sign the fourth one for you... You could fix it by adding an IV. :-) However, this is not a good way to go about designing crypto primitives. From iang at iang.org Sat Oct 4 15:08:42 2014 From: iang at iang.org (ianG) Date: Sat, 04 Oct 2014 12:08:42 -0700 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... Message-ID: <543045BA.1090400@iang.org> (some skepticism about whether there is really a break in OpenSSL, but the rumour mill will no doubt throw mud on the 1024 bit part as well...) OpenSSL bug allows RSA 1024 key factorization in 20 minutes https://www.reddit.com/r/crypto/comments/2i9qke/openssl_bug_allows_rsa_1024_key_factorization_in/ Supposedly. So just a few minutes ago a talk finished at Navaja Negra 2014, the third? most important security congress in Spain, where the speaker (a member of the organization) claimed to have found a bug in OpenSSL RSA key generation, which he is able to exploit to factorize N into p and q in around 20 minutes (on a laptop). He did a live demo. I wasn't there, but some friends were. He claimed: The bug originates in these lines of rsa_gen.c: 117 bitsp=(bits+1)/2; 118 bitsq=bits-bitsp; the main problem being that the rounding of 1025 isn't downwards but upwards, resulting in bitsp= 513 and bitsq=511, which, supposedly, later on in the code and due to compiler optimizations, causes the bug. It affects all versions of OpenSSL. He is neither going to report it to the developers, nor publish anything. I personally think he's full of shit, but the fact that he's a member of the organization and thus not only his personal prestige but also the organization's is at stake, makes you wonder. Anyhow, we'll see. I posted it yesterday to netsec but the mods removed it. Let's discuss it here!
Edit 1: so my friends talked to him today, and he's serious about it. He says he's broken 1024 keys on Amazon clusters in 18 seconds. Edit 2: he claims some guy from Argentina found the same thing 6 years ago, and has been trying to show it on cons since then, but no con accepted his talk because they wouldn't believe him. Edit 3: he also says the attack consists in trying "probable primes", whose probability is generated by said bug. Might it be some variation on Fermat's attack? From leichter at lrw.com Sat Oct 4 15:50:01 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sat, 4 Oct 2014 15:50:01 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> Message-ID: <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> On Oct 3, 2014, at 10:34 PM, Henry Baker wrote: >>> In old B/W movies, when a person was kidnapped, the kidnapper sent a photo of the person together with a picture of the front page of today's newspaper to prove that he had the kidnapped person _on or after the date_ of the newspaper. > No. But I would like to see some simple, robust Internet crypto services, starting with a simple crypto clock with reasonable resolution that can't be hacked by anyone, not even the NSA. > > To a first approximation, the Bitcoin blockchain is the only current candidate, although at a much coarser resolution, a hash of all of the Fortune 500 daily closing stock prices would also function. > > If there are any other candidates -- e.g., NIST "beacons" with some less-corruptible authentication mechanism -- that have the same level of non-hackability, I'd be interested in finding out about them. There are two issues here: The clock, and the original problem of establishing that some event occurred no later than a given time. 
The first isn't hard to solve, in the traditional way of producing trustworthy random number generators: Simply have NIST, the NSA, the EFF, the Russian and Chinese governments - whoever is willing - implement beacons. To produce a beacon you trust, choose any subset, combine the "random" numbers, and sign the result in the usual way. The subset and the method of combination are all public and committed to; all the inputs are public. Since the individual beacons can only be corrupted by entirely stopping them, or by producing predictable (to the attacker) values, unless someone corrupts *all* the sources, the combination is unpredictable. The question of replicating the "picture of the kidnapped person" scenario, however, seems impossible. Consider what it claims to deliver: Anyone looking at the photo, at any time after it was made, can be sure that the person in the photo was actually alive when the photo was taken, and the photo could not have been taken earlier than the date on the newspaper. Well, maybe that was more or less true back in the days of black-and-white photography; but there would not be the slightest difficulty in faking such a photograph today using Photoshop or similar software. You then are reduced to the battle of the photo experts - the ones who produce better and better fakes vs. the ones doing better and better detection of fakes. The fundamental thing you're trying to prove is that some *event* - the taking of the photograph - took place after some time T. This isn't the kind of thing we deal with in cryptography, where the usual starting point is "some string of bits" B. Proving that "some string of bits" could not have been produced before T seems difficult. In fact, if you pose the problem as "combine B with some other string of bits S(T), such that the result proves that B was not known before T", the problem is clearly insoluble. 
(Before you go, oh, but you can commit a hash of B to the blockchain at time T - that solves the *inverse* problem: It proves that you knew B *no later than* T.) If you instead go back to trying to solve the original problem, you can pose it a different way: I want to "apply" my victim to S(T) to produce an output that (a) only the victim could have produced; (b) could only be produced with the knowledge of S(T). For example, suppose that voice-printing were an infallible way of identifying a speaker. Then we could use a recording of the victim reading S(T) aloud. (Of course, "infallible" has to include the ability to detect splices and other ways of modifying or combining recordings made earlier to produce the "proof of life".) Having him write it out with pen and paper would work about as well. If there were a way to produce a (digital) signature based on "something you are" - assuming that this becomes unavailable after death - then the victim's signature of S(T) would serve this purpose. Some of the work on biometrics might eventually get us there, though it seems doubtful. I'm not even sure how to pose a general version of this problem. There are some special cases that work and might be useful. Extending the signature example, suppose we have a tamper-proof signing box. Using it to sign S(T) is proof of possession of the box at some time after T. Perhaps this could provide some kind of proof of receipt. -- Jerry From hanno at hboeck.de Sat Oct 4 15:50:51 2014 From: hanno at hboeck.de (Hanno Böck) Date: Sat, 4 Oct 2014 21:50:51 +0200 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <543045BA.1090400@iang.org> References: <543045BA.1090400@iang.org> Message-ID: <20141004215051.2ec5d2fc@hboeck.de> On Sat, 04 Oct 2014 12:08:42 -0700, ianG wrote: > (some skepticism about whether there is really a break in > OpenSSL, but the rumour mill will no doubt throw mud on the 1024 bit > part as well...)
I saw this earlier and got curious, but this doesn't make sense from start to end. I personally checked whether openssl will for whatever reason round 1025/2 up to 513, by inserting a printf at that point for bitsp and bitsq. It doesn't. Even if it did, it is not clear how an N that is the product of a 511-bit and a 513-bit prime would pose any significant risk. That said: There are good reasons to get rid of 1024 bit rsa. This is not one of them. It's a very vague rumor with an implausible story. However it certainly doesn't hurt if a few people look at the supposed source code and see if there's anything suspicious. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From cryptography at dukhovni.org Sat Oct 4 16:20:20 2014 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Sat, 4 Oct 2014 20:20:20 +0000 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <543045BA.1090400@iang.org> References: <543045BA.1090400@iang.org> Message-ID: <20141004202019.GM13254@mournblade.imrryr.org> On Sat, Oct 04, 2014 at 12:08:42PM -0700, ianG wrote: > He claimed: > > The bug originates in these lines of rsa_gen.c: > > 117 bitsp=(bits+1)/2; > 118 bitsq=bits-bitsp; > > the main problem being that the rounding of 1025 isn't downwards but > upwards, resulting in bitsp= 513 and bitsq=511, which, supposedly, later > on in the code and due to compiler optimizations, causes the bug.
This is plainly wrong for 1024-bit moduli, in the example below, p and q are respectively: 00d71a4e9865d1cdcdd8e6e4cbc5309971e52c121efee4a080a7d11af6fa7096e18470cbef6034c096b4170133d9edb45bd90a2906b34f58bf66278ed1dba8ffad 00d26e1e81fde36b9daec7acbee3279b70d00fe771b65dbf8786f2f006621d4b517e5970801b517be34b7c483678ac99cfa9b22b075bde85a6d069dce9ef53a3f9 both are consist of 128 nibbles after the initial 00 to ensure that the number is unsigned, the high nibble of both is "d", so both are 512 bit numbers. $ openssl genrsa 1024 2>/dev/null | openssl rsa -text Private-Key: (1024 bit) modulus: 00:b0:d0:1b:69:17:cd:68:1f:f9:d1:9e:82:a0:eb: 9f:18:76:0d:32:53:5d:2f:e9:44:4f:1e:d7:03:02: 13:7e:42:94:c5:2d:03:83:1f:07:82:50:07:f8:d3: cb:91:6d:62:9a:a5:9a:22:1f:41:f6:37:f5:f1:07: 8a:b6:3c:28:a4:cc:b6:61:31:da:c7:00:a4:f7:1b: db:ef:f6:c2:89:b0:8a:53:ba:bc:db:f0:50:f8:18: c3:ac:42:7b:e0:69:63:e3:f1:88:b3:43:b4:56:ab: 11:7a:ec:27:5a:ee:18:0a:0c:57:ed:e4:e6:d6:a6: 60:5d:04:e7:ed:aa:42:d6:45 publicExponent: 65537 (0x10001) privateExponent: 00:a4:8e:6a:94:5a:a4:bf:1d:d3:61:76:06:d9:41: b1:66:10:a8:a3:87:d6:98:ba:9e:ea:8c:27:4c:13: 68:94:ff:de:79:cc:35:12:99:94:61:81:9e:89:c4: 84:17:2b:18:b4:19:1f:e4:55:f7:0b:f2:75:21:08: 05:df:29:0a:21:1a:a2:b0:24:0e:9b:2b:31:97:34: be:22:9e:e2:73:5e:c5:ce:3f:e8:99:6f:15:68:13: fd:e7:d7:ef:18:dd:dd:6e:0f:26:f9:86:9a:f1:a1: 6d:aa:89:59:29:20:e2:26:0d:28:15:fb:4f:e7:33: 86:ea:b6:5c:86:05:e8:cd:41 prime1: 00:d7:1a:4e:98:65:d1:cd:cd:d8:e6:e4:cb:c5:30: 99:71:e5:2c:12:1e:fe:e4:a0:80:a7:d1:1a:f6:fa: 70:96:e1:84:70:cb:ef:60:34:c0:96:b4:17:01:33: d9:ed:b4:5b:d9:0a:29:06:b3:4f:58:bf:66:27:8e: d1:db:a8:ff:ad prime2: 00:d2:6e:1e:81:fd:e3:6b:9d:ae:c7:ac:be:e3:27: 9b:70:d0:0f:e7:71:b6:5d:bf:87:86:f2:f0:06:62: 1d:4b:51:7e:59:70:80:1b:51:7b:e3:4b:7c:48:36: 78:ac:99:cf:a9:b2:2b:07:5b:de:85:a6:d0:69:dc: e9:ef:53:a3:f9 exponent1: 7d:d8:30:3f:4c:e2:90:2b:6c:48:b9:76:d5:e8:f6: fd:01:7c:e5:25:29:2f:0d:0f:f8:1e:88:4e:12:7b: 28:6a:cc:17:49:d8:c4:4a:58:9a:52:c6:5a:b7:c1: 
3a:26:98:cd:c3:f9:f8:a7:93:36:72:d4:0b:34:ad: 66:7b:db:09 exponent2: 1e:2b:53:8c:67:8e:17:7b:bf:f7:38:b9:15:70:34: 44:f4:4f:93:6b:26:2e:42:ab:77:99:94:f8:15:51: 05:df:65:32:05:83:18:67:92:4f:80:1f:0d:6b:61: d9:bd:23:9c:bc:c2:96:87:81:5b:c0:12:d9:5a:a6: df:7d:2a:61 coefficient: 35:76:a3:29:95:ef:ee:98:a0:0e:3a:2e:5c:41:c0: 0f:9c:4d:48:f0:92:06:72:d9:47:36:8a:9f:89:41: 0f:4f:27:a7:c3:22:f7:ea:22:44:94:a8:20:84:73: f0:f9:a9:3b:63:70:c8:b7:d8:21:9b:64:65:67:92: 29:09:71:91 writing RSA key -----BEGIN RSA PRIVATE KEY----- MIICXAIBAAKBgQCw0BtpF81oH/nRnoKg658Ydg0yU10v6URPHtcDAhN+QpTFLQOD HweCUAf408uRbWKapZoiH0H2N/XxB4q2PCikzLZhMdrHAKT3G9vv9sKJsIpTurzb 8FD4GMOsQnvgaWPj8YizQ7RWqxF67Cda7hgKDFft5ObWpmBdBOftqkLWRQIDAQAB AoGBAKSOapRapL8d02F2BtlBsWYQqKOH1pi6nuqMJ0wTaJT/3nnMNRKZlGGBnonE hBcrGLQZH+RV9wvydSEIBd8pCiEaorAkDpsrMZc0viKe4nNexc4/6JlvFWgT/efX 7xjd3W4PJvmGmvGhbaqJWSkg4iYNKBX7T+czhuq2XIYF6M1BAkEA1xpOmGXRzc3Y 5uTLxTCZceUsEh7+5KCAp9Ea9vpwluGEcMvvYDTAlrQXATPZ7bRb2QopBrNPWL9m J47R26j/rQJBANJuHoH942udrsesvuMnm3DQD+dxtl2/h4by8AZiHUtRfllwgBtR e+NLfEg2eKyZz6myKwdb3oWm0Gnc6e9To/kCQH3YMD9M4pArbEi5dtXo9v0BfOUl KS8ND/geiE4SeyhqzBdJ2MRKWJpSxlq3wTommM3D+finkzZy1As0rWZ72wkCQB4r U4xnjhd7v/c4uRVwNET0T5NrJi5Cq3eZlPgVUQXfZTIFgxhnkk+AHw1rYdm9I5y8 wpaHgVvAEtlapt99KmECQDV2oymV7+6YoA46LlxBwA+cTUjwkgZy2Uc2ip+JQQ9P J6fDIvfqIkSUqCCEc/D5qTtjcMi32CGbZGVnkikJcZE= -----END RSA PRIVATE KEY----- It is also far from clear how having p/q ~ 4 rather than p/q ~ 1 would help the attacker. IIRC on the contrary having p and q too close to each other (sharing too many top bits) is known to be problematic. > He is neither going to report it to the developers, nor publish > anything. I have a simple proof of the Goldbach conjecture and the Riemann hypothesis. I like his approach, so I'm not going to publish either. -- Viktor. 
From cryptography at dukhovni.org Sat Oct 4 16:39:01 2014 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Sat, 4 Oct 2014 20:39:01 +0000 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <20141004215051.2ec5d2fc@hboeck.de> References: <543045BA.1090400@iang.org> <20141004215051.2ec5d2fc@hboeck.de> Message-ID: <20141004203901.GO13254@mournblade.imrryr.org> On Sat, Oct 04, 2014 at 09:50:51PM +0200, Hanno Böck wrote: > However it certainly doesn't hurt if a few people look at the supposed > source code and see if there's anything suspicious. For a modulus with 2k bits, p and q will both have k bits. For a modulus with 2k+1 bits, p will have k+1 bits and q will have k bits. The bit counts in question are completely unremarkable. If there are bugs, they are somewhere else (random number generation, prime sieving, ...). -- Viktor. From bascule at gmail.com Sat Oct 4 16:14:24 2014 From: bascule at gmail.com (Tony Arcieri) Date: Sat, 4 Oct 2014 13:14:24 -0700 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <543045BA.1090400@iang.org> References: <543045BA.1090400@iang.org> Message-ID: Using 1024-bit keys is silly, but PoC||GTFO On Sat, Oct 4, 2014 at 12:08 PM, ianG wrote: > (some skepticism about whether there is really a break in OpenSSL, > but the rumour mill will no doubt throw mud on the 1024 bit part as > well...) > > > > OpenSSL bug allows RSA 1024 key factorization in 20 minutes > > > https://www.reddit.com/r/crypto/comments/2i9qke/openssl_bug_allows_rsa_1024_key_factorization_in/ > > Supposedly. > > So just a few minutes ago a talk finished at Navaja Negra 2014, the > third? most important security congress in Spain, where the speaker (a > member of the organization) claimed to have found a bug in OpenSSL RSA > key generation, which he is able to exploit to factorize N into p and q > in around 20 minutes (on a laptop). He did a live demo. I wasn't there, > but some friends were.
> He claimed: > > The bug originates in these lines of rsa_gen.c: > > 117 bitsp=(bits+1)/2; > 118 bitsq=bits-bitsp; > > the main problem being that the rounding of 1025 isn't downwards but > upwards, resulting in bitsp= 513 and bitsq=511, which, supposedly, later > on in the code and due to compiler optimizations, causes the bug. > > It affects all versions of OpenSSL. > > He is neither going to report it to the developers, nor publish > anything. > > I personally think he's full of shit, but the fact that he's a member of > the organization and thus not only his personal prestige but also the > organization's is at stake, makes you wonder. Anyhow, we'll see. > > I posted it yesterday to netsec but the mods removed it. Let's discuss > it here! > > Edit 1: so my friends talked to him today, and he's serious about it. He > says he's broken 1024 keys on Amazon clusters in 18 seconds. > > Edit 2: he claims some guy from Argentina found the same thing 6 years > ago, and has been trying to show it on cons since then, but no con > accepted his talk because they wouldn't believe him. > > Edit 3: he also says the attack consists in trying "probable primes", > whose probability is generated by said bug. Might it be some variation > on Fermat's attack? > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From pg at futureware.at Sat Oct 4 18:56:09 2014 From: pg at futureware.at (Philipp Gühring) Date: Sun, 05 Oct 2014 00:56:09 +0200 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA...
In-Reply-To: <543045BA.1090400@iang.org> References: <543045BA.1090400@iang.org> Message-ID: Hi, > Edit 2: he claims some guy from Argentina found the same thing 6 years > ago, and has been trying to show it on cons since then, but no con > accepted his talk because they wouldn't believe him. Ok, I think there is a simple solution: Take OpenSSL, generate a 1024 bit RSA key. Extract the public key, send it to him. Ask him to factorize it. Receive the p and q from him. Verify whether they are correct. If they are, please tell us. Best regards, Philipp From hbaker1 at pipeline.com Sat Oct 4 19:14:39 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Sat, 04 Oct 2014 16:14:39 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> References: <542F4158.8080907@kc.rr.com> <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> Message-ID: At 12:50 PM 10/4/2014, Jerry Leichter wrote: >On Oct 3, 2014, at 10:34 PM, Henry Baker wrote: >> No. But I would like to see some simple, robust Internet crypto services, starting with a simple crypto clock with reasonable resolution that can't be hacked by anyone, not even the NSA. >> >> To a first approximation, the Bitcoin blockchain is the only current candidate, although at a much coarser resolution, a hash of all of the Fortune 500 daily closing stock prices would also function. >> >> If there are any other candidates -- e.g., NIST "beacons" with some less-corruptible authentication mechanism -- that have the same level of non-hackability, I'd be interested in finding out about them. >There are two issues here: The clock, and the original problem of establishing that some event occurred no later than a given time. > >The first isn't hard to solve, in the traditional way of producing trustworthy random number generators: Simply have NIST, the NSA, the EFF, the Russian and Chinese governments - whoever is willing - implement beacons. 
To produce a beacon you trust, choose any subset, combine the "random" numbers, and sign the result in the usual way. The subset and the method of combination are all public and committed to; all the inputs are public. Since the individual beacons can only be corrupted by entirely stopping them, or by producing predictable (to the attacker) values, unless someone corrupts *all* the sources, the combination is unpredictable. Yes, one could do as you say, but _checking_ this calculation isn't going to be easy unless a large number of places on the Internet _store the appropriate sequences_. You then have the problem of _checking that all (or most) of these sequences are the same_. In the case of the Fortune 500 or the Dow Jones share prices, a large number of sources _already publish_ these numbers, so you only have to go check a sufficient number (for your purposes) of probes to these public databases to convince yourself that they are all consistent, and then perform your checking calculations on the published share prices. Other sources might be sec.gov, which stores all the submissions by public companies of their quarterly reports, changes in ownership, etc. While the current sec.gov makes no attempt (that I'm aware of) to blockchain these submissions, it would be pretty easy to change sec.gov to require a "previous" hash for incorporation into any SEC report submission, and this would have the effect of partially ordering all the submissions in such a way that it would be essentially impossible for _anyone_ to change the ordering (including the SEC itself), without everyone else being able to notice that someone was trying to make a change. --- It would be nice to have an in-the-clear/public Internet database with the following properties: 1. The database is readonly, appendonly. 2. Everyone sees the same database, and can "easily" ("without an inordinate amount of effort") check this (somehow). 3.
Everyone can easily see _every element of this database_, and thus there is no possibility of tampering or censorship. 4. Because everything is cross-hashed in various ways, it becomes impossible to delete any information in this blockchain. 5. Because everything is cross-hashed & cross-coded in various ways, it becomes impossible to "redact" any information in this database. I.e., you can't even follow the chain without having _every bit_ of every item in the portion of the chain you're trying to follow. 6. If someone comes up with a piece of data that purports to come from this database, it should be easy to check this database that the data is indeed already there. 7. Everyone -- in the sec.gov case "every public company" -- can submit an addition to this database. Yes, there are issues about proper authentication of the submission to make sure that it is indeed from that public company, but this is garden-variety PKE. sec.gov itself is a pretty good example where such a database makes sense. Everything is supposed to be public; so now all we need is to make sure that it can never be tampered with--even by someone at the SEC. If someone submits a report in error, the error can be explained, but will remain ever-after in this database. The possibilities and harm from corruption are orders-of-magnitude more than the harm/embarrassment from repaired errors, so that such an incorruptible database is essential. The same idea could be used in a wide variety of instances where there is a "server" and "people" to be served. The order-of-arrival of the people served by the server is completely arbitrary, but once this serving order has occurred, it is completely nailed down into a linear order that can't later be spoofed. 
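[The wish-list above is, at its core, a hash chain: each entry commits to its predecessor, so deletion or reordering is detectable. A minimal sketch, with SHA-256 and canonical JSON standing in for whatever encoding a real system would pick:

```python
import hashlib
import json

def entry_hash(entry):
    # Canonical JSON encoding so every verifier hashes identical bytes.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AppendOnlyLog:
    """Minimal hash-chained log: each record commits to its predecessor,
    so any deletion, edit, or reordering changes every later hash."""

    def __init__(self):
        self.entries = []

    def append(self, payload):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"prev": prev, "payload": payload}
        entry["hash"] = entry_hash({"prev": prev, "payload": payload})
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            if e["hash"] != entry_hash({"prev": e["prev"], "payload": e["payload"]}):
                return False
            prev = e["hash"]
        return True

log = AppendOnlyLog()
log.append("10-Q filed by ACME Corp")
log.append("insider sale reported")
assert log.verify()
log.entries[0]["payload"] = "tampered"   # any edit breaks the chain
assert not log.verify()
```

This gives properties 1 and 4-6 directly; properties 2-3 (everyone sees the same chain) still need publication and cross-checking, which is the part the thread is debating.]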
From huitema at huitema.net Sat Oct 4 20:11:27 2014 From: huitema at huitema.net (Christian Huitema) Date: Sat, 4 Oct 2014 17:11:27 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> Message-ID: <003a01cfe030$e9047830$bb0d6890$@huitema.net> > To a programmer a good hash table is not the same as a good crypto hash. > A programmer simply wants a fast lookup with a minimum miss, collision. > Most programmers do not care if a collision is moderately easy to fabricate > because they want to get close enough not exactly and will walk their way to > the desired data (short walk). Actually, it is a bit more complex than that. In many applications, you have to be concerned about denial of service attacks. If an outsider can manufacture hash collisions, then you can end up with a serious issue, the hash resolution moving for example from O(1) to O(N). Think for example of a hash table going from TCP headers to TCP context, and a SYN attack amplifying the damage by picking combinations of address and ports that result in hash collisions. That may be why in many such applications the common practice is to compute the hash using truncated MD5. Of course, this creates a maintenance problem when MD5 is deemed "unsafe" for cryptography applications, and you have to fix your code to now use SHA256... -- Christian Huitema From l at odewijk.nl Sat Oct 4 22:50:20 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Sun, 5 Oct 2014 04:50:20 +0200 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> Message-ID: 2014-10-04 4:34 GMT+02:00 Henry Baker : > No. But I would like to see some simple, robust Internet crypto services, > starting with a simple crypto clock with reasonable resolution that can't > be hacked by anyone, not even the NSA. 
A satellite running L4 verified with a single verified userspace application that listens to radio's and publishes hash-digests coming in from them in some sort of blockchain. And a DHT like setup that allows retrieval of those digests, and the sort-of-blockchain. Of course, those blocks are also transmitted back. Make people listen for the blocks and publish them. Allow requests for republishing blocks, so that holes may be filled by the satellite. Couple it to Bitcoin to allow for payment, use a Mastercoin-like setup to transmit the full-hash. Actually, if you put a hash in the Blockchain that's pretty great too. The alternative is not having payment for it, or having a trusted party gatekeep for the satellite. The satellite should cryptographically sign all outgoing communication. The gatekeeping party should not have the key. It'd be pretty hard to prove that, but it's possible. Other than satellite I don't think anything is safe from the NSA & friends, yet observable by everyone (with a radio). Perhaps an advanced Tor Hidden service? But how would a trustable party be able to audit a secret service? -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Sun Oct 5 08:06:38 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 5 Oct 2014 08:06:38 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> Message-ID: <1AD8C685-981D-477E-939C-D3573C69B657@lrw.com> On Oct 4, 2014, at 7:14 PM, Henry Baker wrote: >> The first isn't hard to solve, in the traditional way of producing trustworthy random number generators: Simply have NIST, the NSA, the EFF, the Russian and Chinese governments - whoever is willing - implement beacons. To produce a beacon you trust, choose any subset, combine the "random" numbers, and sign the result in the usual way. 
The subset and the method of combination are all public and committed to; all the inputs are public. Since the individual beacons can only be corrupted by entirely stopping them, or by producing predictable (to the attacker) values, unless someone corrupts *all* the sources, the combination is unpredictable. > > Yes, one could do as you say, but _checking_ this calculation isn't going to be easy unless a large number of places on the Internet _store the appropriate sequences_. You then have the problem of _checking that all (or most) of these sequences are the same_. I'm not sure what it is you would want to check. The protocol for each beacon would be, at each time T, to send the triple <Tp, T, R>, where T is the current time, Tp < T is the value that T had in the last emitted triple, and R is the random value. The triple is signed using the beacon's signature. Any collection of such values can be merged to create a new triple. The Tp/T values are re-computed relative to the new triple. A careful "combining beacon" will forward the triples that went into its computation. (There are some obvious requirements for how the various input Tp/T values can be combined to produce the output Tp/T values.) A signed triple from a beacon is self-identifying - there is no need for anyone who doesn't intend to use it as part of an assertion to store it, and there's no need for a history. Chaining the values together makes it harder for a beacon to go back and "revise history", though how much that adds isn't clear - anyone who *uses* a beacon value will present a signed triple, and if the beacon ever lies about a past value that someone has used, it will get caught. (If it wishes to lie about past values that no one used ... why should we care?) (The reason for including Tp is that it makes the question "what was the triple emitted by a given beacon that had the smallest value T >= t?" unambiguously answerable.)
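[One possible merge rule satisfying the sketch above - take the latest times and hash all the input randomness together, so the output is unpredictable unless *every* input was predictable. The choice of max() for the Tp/T bookkeeping is one reading of the "obvious requirements", and signature generation/verification on the triples is elided:

```python
import hashlib

def combine(triples):
    """Merge beacon triples (Tp, T, R) into one derived triple.
    Output times are the latest the inputs certify; output R hashes
    every input, so corrupting the result requires corrupting all
    sources. Inputs are sorted so the merge is order-independent."""
    tp = max(t for (t, _, _) in triples)
    t = max(t for (_, t, _) in triples)
    h = hashlib.sha256()
    for triple in sorted(triples):
        h.update(repr(triple).encode())
    return (tp, t, h.hexdigest())
```

A careful combining beacon would also forward the input triples alongside the output, as the text notes.]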
> In the case of the Fortune 500 or the Dow Jones share prices, a large number of sources _already publish_ these numbers, so you only have to go check a sufficient number (for your purposes) of probes to these public databases to convince yourself that they are all consistent, and then perform your checking calculations on the published share prices. Someone here a couple of months back discussed an actual, real-world attempt to compute a value this way. It failed. (I searched around for it but was unable to find it....) The numbers get corrected and changed after the fact. The changes may be trivial, and they may be infrequent, but they are frequent enough to make the process fail. > Other sources might be sec.gov, which store all the submissions by public companies of their quarterly reports, changes in ownership, etc. While the current sec.gov makes no attempt (that I'm aware of) to blockchain these submissions, it would be pretty easy to change sec.gov to require a "previous" hash for incorporation into any SEC report submission, and this would have the effect of partial ordering all the submissions in such a way that it would be essentially impossible for _anyone_ to change the ordering (including the SEC itself), without everyone else being about to notice that someone was trying to make a change. While this wasn't part of the previous posting, I think the lesson to be learned is that public sources like this *make no claim that what they've published will never change*. There's no reason why they should - such a claim isn't relevant to the reason the sources exist. Errors get corrected, stuff gets reformatted to match some new standard. The nominal semantics of what's in the database is supposed to never change, but that's based on human understanding, not something you could readily create a hash from. And, in fact ... even *that's* probably not true. 
Documents get screwed up in production - someone leaves out a paragraph, or includes some material in both old and new forms by mistake, or does something else that doesn't get noticed until later. Then the document gets fixed and the database updated. *Maybe* the new one gets a "Revised" marker. Almost certainly, however, the old document gets deleted: Saving it doesn't add to - and likely detracts from - the value of the database. > It would be nice to have an in-the-clear/public Internet database with the following properties [list of 7 properties to provide a "perfect" record of documents] Ask yourself: To whom would this be valuable? Would the value exceed the cost of maintaining such a thing? You cite the SEC as an example of a potential user, but there is, as far as I can tell, nothing in any SEC regulation that would require such a thing. It would supposedly be for protection against someone producing a faked version of a document from the past. While such issues aren't common, they do occur - consider Paul Ceglia's claim that he owns half of Facebook. We already have plenty of ways of investigating the validity of such a claim. Unless you *require* that all documents be added to this database, anyone creating a fake will simply say "Oh, we didn't think it was important at the time so we didn't send it in." -- Jerry From cloos at jhcloos.com Sun Oct 5 14:03:38 2014 From: cloos at jhcloos.com (James Cloos) Date: Sun, 05 Oct 2014 14:03:38 -0400 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <543045BA.1090400@iang.org> (iang@iang.org's message of "Sat, 04 Oct 2014 12:08:42 -0700") References: <543045BA.1090400@iang.org> Message-ID: >>>>> "i" == ianG writes: i> (some skepticism about whether this there is really a break in i> OpenSSL, but the rumour mill will no doubt throw mud on the 1024 i> bit part as well...)
i> He claimed: i> The bug originates in this lines of rsa_gen.c: i> 117 bitsp=(bits+1)/2; i> 118 bitsq=bits-bitsp; i> the main problem being that the rounding of 1025 isn't downwards but i> upwards, resulting in bitsp= 513 and bitsq=511, which, supposedly, i> later on the code and due to compiler optimizations, causes the bug. In order for that mis-rounding to occur, the compiler must mis-optimize the code. The /2 will get changed to >>1 (right shift). Because bits is an int rather than an unsigned int, that will be an arithmetic right shift. The +1 might get converted to an INCrement. To round up, the optimizer would then need to swap the order of the increment and the shift. If the compiler is mis-optimizing that, it is not surprising that it might also mis-optimize something else later on, resulting in an exploitable bug. But it would have to be specific to an architecture and compiler. -JimC -- James Cloos OpenPGP: 0x997A9F17ED7DAEA6 From sandyinchina at gmail.com Sun Oct 5 14:29:29 2014 From: sandyinchina at gmail.com (Sandy Harris) Date: Sun, 5 Oct 2014 14:29:29 -0400 Subject: [Cryptography] RFC possible changes for Linux random device In-Reply-To: References: Message-ID: On Fri, Sep 12, 2014 at 7:18 PM, Sandy Harris wrote: > I have some experimental code to replace parts of random.c ... > Next two posts will be the main code and a support program it uses. Based partly on comments received both on and off list (Thanks!), I now have a rather different version. Most changes were aimed at cleaner code and clearer comments; I think I have achieved both. I also added some things. Current code is ~1800 lines, ~800 of which are comments and ~200 test scaffolding so my file compiles to a standalone test program, not a driver. I am not going to clutter the list with that, but I will happily send it along to anyone who asks. My next step will be to start submitting it as patches. (Thanks for the instructions, Jason.)
Organising that looks difficult, since what I propose is a major rewrite, not easily expressed as a series of incremental changes. However, I'll try to figure it out. From pgut001 at cs.auckland.ac.nz Mon Oct 6 01:42:30 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Mon, 06 Oct 2014 18:42:30 +1300 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: Message-ID: James Cloos writes: >In order for that mis-rounding to occur, the compiler must mis-optimize the >code. Having had to deal with waaay too many optimisation bugs in gcc (including one just found in 4.8.2 that produces totally incorrect code when optimisation is enabled) this may not be too far-fetched. It seems like every new release of gcc has a new set of code-generation bugs, but unless you run extensive test suites (which a crypto library typically will, or at least should) you won't notice them. The advantage of a crypto test suite is that slight errors are much easier to detect when everything is cryptographically secured, while they'd slip by unnoticed in other cases. That's the scary thing about this, that the buggy code that gcc can generate will appear to function just fine in 99.9% of cases [0], it's only if you include lots of internal consistency checks that you'll catch these can-never-occur cases (in my case I have both a huge test suite and large amounts of internal self-checks that caught this problem, even if I haven't been able to track down the exact location and after a day of poking through disassembled code I'm tempted to just say "get a less buggy compiler"). So a quick question for the people behind this: - What version of gcc was this present for, and what level of optimisation was used? - Does it still occur with a different gcc version? - Does it still occur if you disable optimisation? - Does it still occur if you use a less buggy compiler like clang/LLVM or MSVC, or in fact anything but gcc?
If, and that's a big if, this is real, I'd say the probability that it's yet another gcc compiler bug is far, far higher than the probability that it's some fundamental flaw in RSA or the RSA implementation. Peter. [0] Figure freely pulled out of thin air. From benl at google.com Mon Oct 6 06:59:28 2014 From: benl at google.com (Ben Laurie) Date: Mon, 6 Oct 2014 11:59:28 +0100 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> Message-ID: On 4 October 2014 03:34, Henry Baker wrote: > At 05:37 PM 10/3/2014, Grant Schultz wrote: >>On 10/2/2014 11:00 AM, cryptography-request at metzdowd.com wrote: >> >>>In old B/W movies, when a person was kidnapped, the kidnapper sent a photo of the person together with a picture of the front page of today's newspaper to prove that he had the kidnapped person _on or after the date_ of the newspaper. >> >>Are you planning on kidnapping someone? > > No. But I would like to see some simple, robust Internet crypto services, starting with a simple crypto clock with reasonable resolution that can't be hacked by anyone, not even the NSA. > > To a first approximation, the Bitcoin blockchain is the only current candidate, although at a much coarser resolution, a hash of all of the Fortune 500 daily closing stock prices would also function. That would prevent forgeries in the future, but not the past. From benl at google.com Mon Oct 6 07:03:19 2014 From: benl at google.com (Ben Laurie) Date: Mon, 6 Oct 2014 12:03:19 +0100 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <542F4158.8080907@kc.rr.com> <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> Message-ID: On 5 October 2014 00:14, Henry Baker wrote: > It would be nice to have an in-the-clear/public Internet database with the following properties: > > 1. The database is readonly, appendonly. > > 2. Everyone sees the same database, and can "easily" ("without inordinate amount of effort") check this (somehow). > > 3. 
Everyone can easily see _every element of this database_, and thus there is no possibility of tampering or censorship. > > 4. Because everything is cross-hashed in various ways, it becomes impossible to delete any information in this blockchain. > > 5. Because everything is cross-hashed & cross-coded in various ways, it becomes impossible to "redact" any information in this database. I.e., you can't even follow the chain without having _every bit_ of every item in the portion of the chain you're trying to follow. > > 6. If someone comes up with a piece of data that purports to come from this database, it should be easy to check this database that the data is indeed already there. > > 7. Everyone -- in the sec.gov case "every public company" -- can submit an addition to this database. Yes, there are issues about proper authentication of the submission to make sure that it is indeed from that public company, but this is garden-variety PKE. > > sec.gov itself is a pretty good example where such a database makes sense. Everything is supposed to be public; so now all we need is to make sure that it can never be tampered with--even by someone at the SEC. If someone submits a report in error, the error can be explained, but will remain ever-after in this database. The possibilities and harm from corruption are orders-of-magnitude more than the harm/embarrassment from repaired errors, so that such an incorruptible database is essential. > > The same idea could be used in a wide variety of instances where there is a "server" and "people" to be served. The order-of-arrival of the people served by the server is completely arbitrary, but once this serving order has occurred, it is completely nailed down into a linear order that can't later be spoofed. This is essentially the Certificate Transparency mechanism. 
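[The core of the Certificate Transparency mechanism is a Merkle tree whose root commits to every entry, plus short inclusion proofs that any single entry is under that root. A simplified sketch in the spirit of RFC 6962 - domain-separated leaf/node hashing, but not its exact encoding:

```python
import hashlib

def h(*parts):
    s = hashlib.sha256()
    for p in parts:
        s.update(p)
    return s.digest()

def merkle_levels(leaves):
    """All levels of a Merkle tree, leaves first, root last. Leaf and
    interior hashes get 0x00 / 0x01 prefixes so one can't be passed off
    as the other. An odd trailing node is promoted unchanged."""
    levels = [[h(b"\x00", leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur) - 1, 2):
            nxt.append(h(b"\x01", cur[i], cur[i + 1]))
        if len(cur) % 2:
            nxt.append(cur[-1])
        levels.append(nxt)
    return levels

def inclusion_proof(leaves, index):
    """Sibling hashes proving leaves[index] is under the root:
    O(log n) hashes instead of re-downloading the whole log."""
    proof = []
    for level in merkle_levels(leaves)[:-1]:
        sibling = index ^ 1
        if sibling < len(level):
            proof.append((index % 2, level[sibling]))
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = h(b"\x00", leaf)
    for node_is_right, sibling in proof:
        node = h(b"\x01", sibling, node) if node_is_right else h(b"\x01", node, sibling)
    return node == root
```

CT adds signed tree heads and consistency proofs between roots on top of this, which is what makes the log append-only in the sense of Baker's properties.]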
From mitch at niftyegg.com Mon Oct 6 08:09:37 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 6 Oct 2014 05:09:37 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <003a01cfe030$e9047830$bb0d6890$@huitema.net> References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> <003a01cfe030$e9047830$bb0d6890$@huitema.net> Message-ID: On Sat, Oct 4, 2014 at 5:11 PM, Christian Huitema wrote: > > To a programmer a good hash table is not the same as a good crypto hash. > ..... > > Actually, it is a bit more complex than that. In many applications, you > have to be concerned about denial of service attacks. If an outsider can > manufacture hash collisions, then you can end up with a serious issue, the > hash resolution moving for example from O(1) to O(N). Think for example of > a hash table going from TCP headers to TCP context, and a SYN attack > amplifying the damage by picking combinations of address and ports that > result in hash collisions. > Absolutely.... it is clearly necessary to understand how data can be messed with and more to the point that it can or cannot be messed with. It gets interesting when an application fully in control of data in and out is modified and opened to the world in a more general case. The initial assumptions are now invalidated and the new context needs to be reconsidered. The impact is often less obvious than one might hope (and could make your heart bleed). -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at kjorling.se Mon Oct 6 08:30:54 2014 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Mon, 6 Oct 2014 12:30:54 +0000 Subject: [Cryptography] NSA versus DES etc.... 
In-Reply-To: References: <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> Message-ID: <20141006123054.GC12902@yeono.kjorling.se> On 3 Oct 2014 15:14 +1000, from dave at horsfall.org (Dave Horsfall): > On Thu, 2 Oct 2014, Jerry Leichter wrote: >> A friend (Martin Minow, in case anyone here remembers him) years ago >> told me a story he heard in Sweden. The rivers and bays along the >> Swedish coast are extremely tricky to navigate, being full of underwater >> canyons and ridges. For many years, the maps of some of the major ports >> were considered to be essential state secrets, a means of defense >> against (mainly Soviet) naval attack. If you wanted to sail into one of >> these harbors, you had best get a Swedish pilot, who had access to the >> maps but would ban you from watching while he used them. > > And didn't the Swedes find a Russian sub in their waters some years back? While this has preciously little to do with cryptography, yes. https://en.wikipedia.org/wiki/Swedish_submarine_incidents#List_of_major_reported_incidents and https://sv.wikipedia.org/wiki/Ub%C3%A5tskr%C3%A4nkningar_i_Sverige#Rapporterade_incidenter The Swedish Wikipedia article states that "between 1981 and 1994 approximately 4700 observations" of submarine-like objects were made, presumably within Swedish territorial waters (I don't really feel like digging out the Swedish government report cited as the source for that). It's interesting to note that the lists differ; the Swedish-language list gives Sep 15 2011 as the date for what appears to be the same event that is listed by the English-language list as occuring on Sep 11 2011. It's possible that further scrutiny would uncover further differences. 
U-137/S-363 [1] is probably the most famous incident, but as can trivially be seen, far from the only one. [1]: https://en.wikipedia.org/wiki/Soviet_submarine_S-363 -- Michael Kjörling · https://michael.kjorling.se · michael at kjorling.se OpenPGP B501AC6429EF4514 https://michael.kjorling.se/public-keys/pgp "People who think they know everything really annoy those of us who know we don't." (Bjarne Stroustrup) From phill at hallambaker.com Mon Oct 6 09:24:08 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Mon, 6 Oct 2014 09:24:08 -0400 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: References: Message-ID: Optimization error or not, RSA has been tested quite extensively with mismatched p and q and it works just fine. The only reason not to do that is that the work factor depends on the size of the smaller of p and q rather than the size of the modulus. So the work factor of asymmetric p/q is not attractive unless you are also doing something else that's odd like Chaumian blinding or modulus compression or anti kleptography or the like. From fungi at yuggoth.org Mon Oct 6 10:01:06 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Oct 2014 14:01:06 +0000 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: References: Message-ID: <20141006140105.GB9816@yuggoth.org> There's now a blog post[1] and English translation[2] which have been making the rounds... [1] http://www.cristianamicelli.com.ar/blog/rsahack/ [2] http://pastebin.com/D8itq6Ff -- Jeremy Stanley From pgut001 at cs.auckland.ac.nz Mon Oct 6 10:20:17 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Tue, 07 Oct 2014 03:20:17 +1300 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: Message-ID: Phillip Hallam-Baker writes: >Optimization error or not, RSA has been tested quite extensively with >mismatched p and q and it works just fine.
Oh, I didn't mean the problem was a mismatched p and q but that it could have come about because of some other code-generation error. I've seen gcc in the past generate output code that bears no relation to the source code that's fed to it, it could be that they discovered some combination of gcc release and target platform that produces broken code. Or at least that seems a less unlikely explanation than their mismatched-p-q one. Peter. From cloos at jhcloos.com Mon Oct 6 10:27:47 2014 From: cloos at jhcloos.com (James Cloos) Date: Mon, 06 Oct 2014 10:27:47 -0400 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: (Phillip Hallam-Baker's message of "Mon, 6 Oct 2014 09:24:08 -0400") References: Message-ID: >>>>> "PH" == Phillip Hallam-Baker writes: PH> Optimization error or not, RSA has been tested quite extensively PH> with mismatched p and q and it works just fine. I presume that the real error in the compilation occurs later; the quoted part is likely just an additional, minor error which is an easy way to differentiate affected compilations. Like a canary. The note Ian quoted did not claim that the mistmatched p and q were the cause of the bug, but rather that later miscompilation were. -JimC -- James Cloos OpenPGP: 0x997A9F17ED7DAEA6 From ljcamp at indiana.edu Mon Oct 6 12:11:27 2014 From: ljcamp at indiana.edu (L Jean Camp) Date: Mon, 6 Oct 2014 12:11:27 -0400 Subject: [Cryptography] Best internet crypto clock Message-ID: Surety tried a hash tree approach whereby periodic results were published in the NYTimes. Here is an analysis of that approach: http://teal.gmu.edu/courses/ECE543/project/reports_2005/TIMESTAMPING_report.pdf Prof. L. Jean Camp http://www.ljean.com Human-Centered Security http://usablesecurity.net/ Economics of Security http://www.infosecon.net/ Congressional Fellow http://www.ieeeusa.org/policy/govfel/congfel.asp -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dave at horsfall.org Mon Oct 6 16:00:06 2014 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 7 Oct 2014 07:00:06 +1100 (EST) Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <20141006123054.GC12902@yeono.kjorling.se> References: <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> <20141006123054.GC12902@yeono.kjorling.se> Message-ID: On Mon, 6 Oct 2014, Michael Kj?rling wrote: > > And didn't the Swedes find a Russian sub in their waters some years > > back? > > While this has preciously little to do with cryptography, yes. Precious little to do with cryptography? Moderators, reject this post if you must, but: Security through obscurity. A strong lock on a paper-tissue door. Locking the front door and keeping the key under the mat. Etc. -- Dave From jschauma at netmeister.org Mon Oct 6 17:57:35 2014 From: jschauma at netmeister.org (Jan Schaumann) Date: Mon, 6 Oct 2014 17:57:35 -0400 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <543045BA.1090400@iang.org> References: <543045BA.1090400@iang.org> Message-ID: <20141006215735.GF5358@netmeister.org> ianG wrote: > (some skepticism about whether this there is really a break in OpenSSL, > but the rumour mill will no doubt throw mud on the 1024 bit part as well...) > OpenSSL bug allows RSA 1024 key factorization in 20 minutes > > https://www.reddit.com/r/crypto/comments/2i9qke/openssl_bug_allows_rsa_1024_key_factorization_in/ > > Supposedly. The author just admitted "Ok, so I took some missteps during my research. Excellent talk with @julianor, he gave me some pointers that put me...(1/2)" https://twitter.com/camicelli/status/519231503260467200 "...in the right direction in my investigation. 
One thing is certain. It is definitely NOT an OpenSSL vuln. (2/2)" https://twitter.com/camicelli/status/519231538589085696 @julianor tweeted: "talked with @camicelli :isn't OpenSSL bug, he thought with enough hardware he can make a list of every 512b prime.Demo's priv key was known" https://twitter.com/julianor/status/519230526029570048 -Jan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 478 bytes Desc: not available URL: From waywardgeek at gmail.com Mon Oct 6 18:23:42 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 6 Oct 2014 18:23:42 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On Sat, Oct 4, 2014 at 1:21 PM, Ben Laurie wrote: > However, this is not a good way to go about designing crypto primitives. > > I disagree with this point. This thread is an excellent way for people to *avoid* mistakes like this hash function. People should be *encouraged* to post their latest dumb idea about hashing here, so it can be reviewed before harming anyone. Furthermore, this thread introduces a valuable concept I was not aware of before, and it's fun :-) The uses for such a hash function are I think quite broad. For example, we could incrementally update a cryptographically strong hash of a huge database in constant time. So... here's *my* dumb hashing solution for this problem: Simply compute: Digest = H(N || H(H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p)) While I am often wrong, I have managed in my head to prove to myself that finding collisions in the generalized birthday problem over a multiplicative group modulo a prime is equivalent to solving the discrete log problem. If this proof holds up, then this should be a secure hash, so long as p is large enough, such as 2048 bits. 
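[The construction just described - including the constant-time block replacement via multiplicative inverses that Bill spells out in a follow-up - can be sketched as follows. The toy 127-bit Mersenne prime and SHA-256 stand in for the proposed ~2048-bit p and the random-oracle H; they are illustrative only:

```python
import hashlib

# Toy 127-bit Mersenne prime; the proposal calls for ~2048 bits.
P = 2**127 - 1

def term(i, block):
    """H(i || Bi), mapped into the multiplicative group mod P (never 0)."""
    d = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big") % P or 1

def product(blocks):
    """The inner value H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod P."""
    acc = 1
    for i, b in enumerate(blocks, 1):
        acc = acc * term(i, b) % P
    return acc

def digest(n, acc):
    """Digest = H(N || H(acc mod p)), SHA-256 standing in for H."""
    inner = hashlib.sha256(acc.to_bytes(16, "big")).digest()
    return hashlib.sha256(n.to_bytes(8, "big") + inner).hexdigest()

def replace_block(acc, i, old, new):
    """Constant-time update: divide out the old term via its modular
    inverse (pow(x, -1, P), Python 3.8+) and multiply in the new one."""
    return acc * pow(term(i, old), -1, P) % P * term(i, new) % P
```

Replacing one block of an n-block database this way costs one inverse and two multiplications mod P, independent of n.]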
Also, I assume that H is effectively a random oracle with no possible analysis of its function that could help us find collisions. Some hash functions, such as H(x) = g^x mod p clearly fail here. However, any respected ARX based hash should work out, as should Wolfram's Rule-30-like hashes such as Keccak. My proposed proof is simple. Instead of picking true random numbers ri < p to multiply together, looking for a collision in the usual way, compute si = g^ri mod p for i in 1 .. n. If g is a group generator, then si is just as random as ri, but we know something about si (its discrete log). Using a supposed algorithm faster than solving the discrete log, we now find the discrete log of y = g^x. Just find r1 ... rn such that g^r1 * g^r2 * ... * g^rn = y, using our fast algorithm, and then the discrete log of y is trivially found as r1 + r2 + ... + rn mod p-1. Surely, any algorithm that can find such solutions multiplying r1 * r2 * ... * rn will work just as well using ri = g^H(i) mod p as it would using the H(i) values as random numbers instead, so this step of replacing ri with si is sound. Therefore, this hash function is secure, based on the security of the discrete log problem. Does this work out? If it is secure, as I currently believe, then it should have a number of good uses in cryptography. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Oct 6 18:24:29 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Oct 2014 22:24:29 +0000 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: <20141006215735.GF5358@netmeister.org> References: <543045BA.1090400@iang.org> <20141006215735.GF5358@netmeister.org> Message-ID: <20141006222429.GF9816@yuggoth.org> On 2014-10-06 17:57:35 -0400 (-0400), Jan Schaumann wrote: [...]
> @julianor tweeted: > "talked with @camicelli :isn't OpenSSL bug, he thought with enough > hardware he can make a list of every 512b prime.Demo's priv key was > known" > https://twitter.com/julianor/status/519230526029570048 2^512/ln(2^512) is still ~2^504 primes which would need to be found and stored. That's... a _lot_ of "hardware." -- Jeremy Stanley From waywardgeek at gmail.com Mon Oct 6 18:46:16 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 6 Oct 2014 18:46:16 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On Mon, Oct 6, 2014 at 6:23 PM, Bill Cox wrote: > Simply compute: > > Digest = H(N || H(H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p)) > > In case it's not obvious to Tom or anyone not familiar with modulo arithmetic, this hash function can be updated in constant time by keeping track of the Digest, as well as the mod p result. Just multiply the old mod p result by the multiplicative inverse of the H value that changed, and then multiply by the new H value mod p, and recompute the digest. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From zooko at leastauthority.com Mon Oct 6 22:05:43 2014 From: zooko at leastauthority.com (Zooko Wilcox-OHearn) Date: Tue, 7 Oct 2014 02:05:43 +0000 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: Hello again, Jason Resch of cleversafe. I'd like to emphasize what Philipp Jovanovic said, in case he didn't express it strongly enough: just use BLAKE2! BLAKE2 has been designed by skilled cryptographers (not meaning myself) to be safe in the ways that you are wondering about, and it has excellent performance. 
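For readers who want to act on this recommendation: BLAKE2 has been in Python's standard library since 3.6 as hashlib.blake2b and hashlib.blake2s, including keyed-hash and digest-size parameters. A minimal keyed-hash sketch (key and data are illustrative):

```python
import hashlib

# Keyed BLAKE2b (usable as a MAC) with a truncated 32-byte digest.
h = hashlib.blake2b(key=b"secret key", digest_size=32)
h.update(b"a large volume of data")
tag = h.hexdigest()
```

The constructor also accepts tree-hashing parameters (fanout, depth, leaf_size, etc.) for the parallel modes mentioned here.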
It is also being adopted by quite a few other folks with similar needs (i.e. the combination of crypto and large volumes of data). You could also make a good case for using Keccak or Skein, which also have parallel and/or tree modes, but I do not think you can make a good case for hacking together some construct yourself. Here's a presentation I gave at ACNS on "Why BLAKE2?": https://blake2.net/acns/slides.html Regards, Zooko Disclosure: I'm one of the authors of BLAKE2. The other ones are more skilled cryptographers than I am. :-) CEO, https://LeastAuthority.com "Freedom matters." From benl at google.com Mon Oct 6 23:53:46 2014 From: benl at google.com (Ben Laurie) Date: Tue, 7 Oct 2014 04:53:46 +0100 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On 6 October 2014 23:23, Bill Cox wrote: > On Sat, Oct 4, 2014 at 1:21 PM, Ben Laurie wrote: >> >> However, this is not a good way to go about designing crypto primitives. >> > > I disagree with this point. This thread is an excellent way for people to > *avoid* mistakes like this hash function. People should be *encouraged* to > post their latest dumb idea about hashing here, so it can be reviewed before > harming anyone. Sure thing, but that's not what I meant. What I meant was that starting with a dumb idea, then incrementally fixing things people point out is not likely to lead to something good. From ben at links.org Tue Oct 7 01:44:49 2014 From: ben at links.org (Ben Laurie) Date: Tue, 7 Oct 2014 06:44:49 +0100 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA...
In-Reply-To: <20141006222429.GF9816@yuggoth.org> References: <543045BA.1090400@iang.org> <20141006215735.GF5358@netmeister.org> <20141006222429.GF9816@yuggoth.org> Message-ID: On 6 October 2014 23:24, Jeremy Stanley wrote: > On 2014-10-06 17:57:35 -0400 (-0400), Jan Schaumann wrote: > [...] >> @julianor tweeted: >> "talked with @camicelli :isn't OpenSSL bug, he thought with enough >> hardware he can make a list of every 512b prime.Demo's priv key was >> known" >> https://twitter.com/julianor/status/519230526029570048 > > 2^512/ln(2^512) is still ~2^504 primes which would need to be found > and stored. That's... a _lot_ of "hardware." Oh, come on. It's only the number of atoms in the (observable) universe ... squared. From waywardgeek at gmail.com Tue Oct 7 06:52:26 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 7 Oct 2014 06:52:26 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On Mon, Oct 6, 2014 at 10:05 PM, Zooko Wilcox-OHearn < zooko at leastauthority.com> wrote: > Hello again, Jason Resch of cleversafe. > > I'd like to emphasize what Philipp Jovanovic said, in case he didn't > express it strongly enough: just use BLAKE2! > Tom's idea being discussed here is a constant time updateable hash function of very many records/messages/blocks, which Blake2 does not do. A good solution to this problem has many real-world use cases. Rsync, for example, uses a rolling hash function which is insecure. Maybe we could use: y = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p This looks secure to me, based on the difficulty of the discrete log problem. If you can give me an algorithm that takes random numbers and finds combinations of them that multiply out to a specific digest y mod p, then I can use your algorithm to find the discrete log base g of y.
I simply give your algorithm g^rand() rather than rand(), and the algorithm finds y = g^r1 * g^r2 * ... * g^rn mod p. I am somewhat surprised Wagner did not see this in his paper on the generalized birthday problem, since significant effort was put into framing attacks using multiplicative groups. I just didn't find that part very convincing :-) Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Tue Oct 7 07:02:56 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 7 Oct 2014 07:02:56 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On Mon, Oct 6, 2014 at 11:53 PM, Ben Laurie wrote: > On 6 October 2014 23:23, Bill Cox wrote: > > On Sat, Oct 4, 2014 at 1:21 PM, Ben Laurie wrote: > >> > >> However, this is not a good way to go about designing crypto primitives. > >> > > > > I disagree with this point. This thread is an excellent way for people > to > > *avoid* mistakes like this hash function. People should be *encouraged* > to > > post their latest dumb idea about hashing here, so it can be reviewed > before > > harming anyone. > > Sure thing, but that's not what I meant. What I meant was that > starting with a dumb idea, then incrementally fixing things people > point out is not likely to lead to something good. > Actually, this is one of my favorite processes for producing good ideas. Continuing with this process, what's wrong with: Digest = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p I think I've shown this is secure based on the difficulty of the discrete log problem. If true, isn't this exactly what you say is unlikely to happen? Bill -------------- next part -------------- An HTML attachment was scrubbed...
URL: From leichter at lrw.com Tue Oct 7 07:14:47 2014 From: leichter at lrw.com (Jerry Leichter) Date: Tue, 7 Oct 2014 07:14:47 -0400 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... In-Reply-To: References: <543045BA.1090400@iang.org> <20141006215735.GF5358@netmeister.org> <20141006222429.GF9816@yuggoth.org> Message-ID: <92E81F1B-BB17-4D09-9905-1FC7DEBF465A@lrw.com> On Oct 7, 2014, at 1:44 AM, Ben Laurie wrote: @julianor tweeted: >>> "talked with @camicelli :isn't OpenSSL bug, he thought with enough >>> hardware he can make a list of every 512b prime.Demo's priv key was >>> known" >>> https://twitter.com/julianor/status/519230526029570048 >> >> 2^512/ln(2^512) is still ~2^504 primes which would need to be found >> and stored. That's... a _lot_ of "hardware." > > Oh, come on. Its only the number of atoms in the (observable) universe > ... squared. Well, sure, but N/ln N is just an approximation. For all you know, the true number of primes could be only a millionth of that! -- Jerry :-) From agr at me.com Tue Oct 7 10:21:54 2014 From: agr at me.com (Arnold Reinhold) Date: Tue, 07 Oct 2014 10:21:54 -0400 Subject: [Cryptography] Best internet crypto clock Message-ID: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> On 4 Oct 2014 15:50 Jerry Leichter wrote: ... > The question of replicating the "picture of the kidnapped person" scenario, however, seems impossible. Consider what it claims to deliver: Anyone looking at the photo, at any time after it was made, can be sure that the person in the photo was actually alive when the photo was taken, and the photo could not have been taken earlier than the date on the newspaper. Well, maybe that was more or less true back in the days of black-and-white photography; but there would not be the slightest difficulty in faking such a photograph today using Photoshop or similar software. You then are reduced to the battle of the photo experts - the ones who produce better and better fakes vs. 
the ones doing better and better detection of fakes. > > The fundamental thing you're trying to prove is that some *event* - the taking of the photograph - took place after some time T. This isn't the kind of thing we deal with in cryptography, where the usual starting point is "some string of bits" B. Proving that "some string of bits" could not have been produced before T seems difficult. In fact, if you pose the problem as "combine B with some other string of bits S(T), such that the result proves that B was not known before T", the problem is clearly insoluble. > > (Before you go, oh, but you can commit a hash of B to the blockchain at time T - that solves the *inverse* problem: It proves that you knew B *no later than* T.) > > If you instead go back to trying to solve the original problem, you can pose it a different way: I want to "apply" my victim to S(T) to produce an output that (a) only the victim could have produced; (b) could only be produced with the knowledge of S(T). For example, suppose that voice-printing were an infallible way of identifying a speaker. Then we could use a recording of the victim reading S(T) aloud. (Of course, "infallible" has to include the ability to detect splices and other ways of modifying or combining recordings made earlier to produce the "proof of life".) Having him write it out with pen and paper would work about as well. > > If there were a way to produce a (digital) signature based on "something you are" - assuming that this becomes unavailable after death - then the victim's signature of S(T) would serve this purpose. Some of the work on biometrics might eventually get us there, though it seems doubtful. > > I'm not even sure how to pose a general version of this problem. There are some special cases that work and might be useful. Extending the signature example, suppose we have a tamper-proof signing box. Using it to sign S(T) is proof of possession of the box at some time after T. 
Perhaps this could provide some kind of proof of receipt. This conundrum suggests a need for a camera that cryptographically signs its images. It could be packaged and certified as a FIPS-140 level 4 HSM. The camera would have a built-in asymmetric key pair with the public key available from the manufacturer by camera serial number. It might also accept additional keys via Bluetooth or USB and sign images using those keys as well. As with any HSM, secret keys would be erased upon detection of tampering. The camera could communicate via Bluetooth or USB or an optical link and be controlled by a cell phone app, perhaps clipping onto the cell phone or phone case. It might use inductive charging to minimize electrical connections. I would envision including a good quality internal clock, set at time of manufacture and non alterable. (When the clock battery dies, the camera is toast.) The camera would periodically or on command output a signed certificate containing the current reading of its internal clock and maybe an external nonce like the NIST beacon, which might then be sent to a time stamping service, creating a record of internal clock drift over time. The camera might store a correction factor, so it could output a UTC time, but the internal clock would be included in any certificate as well. It would seem that a camera like this would be useful in a variety of applications (besides kidnapping) to create legally provable documents. Assuming it had a video mode, it could be used as a notary, recording a person's spoken acceptance of contract, or witnessing his handwritten signature on a document. Of course one would still have to trust the manufacturer. A signing camera isn't a new idea, a quick Google search came up with this 1992 paper http://www.friedmanarchives.com/Writings/Trustworthy_Digital_Camera_Technical_Paper.pdf , but camera technology developed for cell phones makes something like this much more affordable. Has anyone attempted this?
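The certificate such a camera might emit can be sketched as follows. All names here are hypothetical, and HMAC from the Python standard library stands in for the asymmetric signature the HSM would actually produce (the real design calls for a device key pair with the public half published by serial number):

```python
import hashlib, hmac, json

# Hypothetical device secret, standing in for the HSM's private signing key.
DEVICE_KEY = b"\x00" * 32
SERIAL = "CAM-0001"  # hypothetical serial number

def signed_capture(image_bytes, internal_clock, nonce=""):
    """Certificate over the image hash, the internal clock reading, and a nonce."""
    cert = {
        "serial": SERIAL,
        "internal_clock": internal_clock,   # e.g. seconds since manufacture
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "nonce": nonce,                     # e.g. a NIST beacon value
    }
    payload = json.dumps(cert, sort_keys=True).encode()
    cert["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return cert

def verify(cert):
    """Recompute the tag over everything except the signature field."""
    body = {k: v for k, v in cert.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["sig"], expected)
```

Binding the clock reading and an external beacon nonce into the same signed blob is what lets a later verifier bracket the capture time from both sides.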
How close could we get with an iPhone 6, given Apple's improved security scheme? Arnold Reinhold From zooko at leastauthority.com Tue Oct 7 11:21:09 2014 From: zooko at leastauthority.com (Zooko Wilcox-OHearn) Date: Tue, 7 Oct 2014 15:21:09 +0000 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: On Tue, Oct 7, 2014 at 10:52 AM, Bill Cox wrote: > > Tom's idea being discussed here is a constant time updateable hash function > of very many records/messages/blocks, which Blake2 does not do. BLAKE2 (and Skein, and Keccak, etc.) do logarithmic-time updates using tree hashing, which is efficient enough for all uses that I have looked at, and which can be secure in the traditional senses of collision-resistance, etc. I agree that a constant-time variant would be potentially interesting, and I'm not saying (as someone else on this thread was thought to have said) that we shouldn't discuss such a thing. What I'm saying is that Cleversafe, as a commercial concern working on actual products, should not be planning to use such a novel construction when BLAKE2 (et al.) would work fine. Regards, Zooko Wilcox-O'Hearn Founder, CEO, and Customer Support Rep https://LeastAuthority.com Freedom matters. From zooko at leastauthority.com Tue Oct 7 11:29:48 2014 From: zooko at leastauthority.com (Zooko Wilcox-OHearn) Date: Tue, 7 Oct 2014 15:29:48 +0000 Subject: [Cryptography] SPHINCS: practical hash-based digital signatures Message-ID: Dear Crypto Folks: I'd like to draw your attention to a new digital signature scheme, SPHINCS: http://sphincs.cr.yp.to/ (Disclosure and disclaimer: Like with the recently-mentioned BLAKE2, I'm a co-author, and like with BLAKE2, my co-authors did more of the heavy lifting intellectually than I did.)
But anyway, here's my pitch for why you might care about SPHINCS: Every digital signature algorithm that you can think of could be broken by an attacker who could exploit a flaw in its secure hash algorithm. *Or* the attacker could exploit a flaw in the *other* part: the signature scheme. That's because every digital signature algorithm (e.g. RSA-PSS, Ed25519, ECDSA, etc.) uses a secure hash function to generate a short fixed-length message representative, and then uses the signature scheme to sign the message representative. So there are two ways that an attacker can break any of the digital signature algorithms mentioned above (RSA, Ed25519, etc. etc.): by breaking the hash function or by cracking the other part. But there is only one way that an attacker can break SPHINCS: by breaking the hash function. I think that's pretty awesome. Regards, Zooko Wilcox-O'Hearn Founder, CEO, and Customer Support Rep https://LeastAuthority.com Freedom matters. "Eliminate the state!" "Use more hash!" From kyle.creyts at gmail.com Tue Oct 7 11:31:36 2014 From: kyle.creyts at gmail.com (Kyle Creyts) Date: Tue, 7 Oct 2014 08:31:36 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: however, it does wind up distributing knowledge. On Mon, Oct 6, 2014 at 8:53 PM, Ben Laurie wrote: > On 6 October 2014 23:23, Bill Cox wrote: >> On Sat, Oct 4, 2014 at 1:21 PM, Ben Laurie wrote: >>> >>> However, this is not a good way to go about designing crypto primitives. >>> >> >> I disagree with this point. This thread is an excellent way for people to >> *avoid* mistakes like this hash function. People should be *encouraged* to >> post their latest dumb idea about hashing here, so it can be reviewed before >> harming anyone. > > Sure thing, but that's not what I meant.
What I meant was that > starting with a dumb idea, then incrementally fixing things people > point out is not likely to lead to something good. > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography -- Kyle Creyts Information Assurance Professional Founder BSidesDetroit From natanael.l at gmail.com Tue Oct 7 14:50:56 2014 From: natanael.l at gmail.com (Natanael) Date: Tue, 7 Oct 2014 20:50:56 +0200 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> References: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> Message-ID: On 7 Oct 2014 20:41, "Arnold Reinhold" wrote: > > This conundrum suggests a need for a camera that cryptographically signs its images. It could be packaged and certified as a FIPS-140 level 4 HSM. The camera would have a built-in asymmetric key pair with the public key available from the manufacturer by camera serial number. It might also accept additional keys via Bluetooth or USB and sign images using those keys as well. As with any HSM, secret keys would be erased upon detection of tampering. The camera could communicate via Bluetooth or USB or an optical link and be controlled by a cell phone app, perhaps clipping onto the cell phone or phone case. It might use inductive charging to minimize electrical connections. > > I would envision including a good quality internal clock, set at time of manufacture and non alterable. (When the clock battery dies, the camera is toast.) The camera would periodically or on command output a signed certificate containing the current reading of its internal clock and maybe an external nonce like the NIST beacon, which might then be sent to a time stamping service, creating a record of internal clock drift over time.
The camera might store a correction factor, so it could output a UTC time, but the internal clock would be included in any certificate as well. This approach is still limited. The camera can only attest to what color values for each pixel it captured, not what really happened. A very precisely color calibrated HDR display setup can likely fool most cameras, or why not just a proper stage set up by theater / movie prop designers, with heads of wax and all. Harder to fake video, but something convincing can probably be made with face masks. Light field cameras like Lytro could be harder to fool due to the vastly greater amount of information captured, but they still can't reveal a good stage being fake. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Tue Oct 7 14:58:49 2014 From: leichter at lrw.com (Jerry Leichter) Date: Tue, 7 Oct 2014 14:58:49 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> Message-ID: <55E71AF5-0426-4C55-81FF-42F96C8BCE02@lrw.com> On Oct 7, 2014, at 7:02 AM, Bill Cox wrote: > Actually, this is one of my favorite processes for producing good ideas. Continuing with this process, what's wrong with: > > Digest = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p This falls immediately to a prefix attack: If I know Digest(M) and length(M) (assume for simplicity that length(M) is a multiple of the block size) then Digest(M || Bn+1) = Digest(M) * H(n + 1 || Bn+1) mod p - taking the remainder mod p twice produces the same result as doing it only once. > I think I've shown this is secure based on the difficulty of the discrete log problem. If true, isn't this exactly what you say is unlikely to happen? You've tossed around a powerful result without tying it to the security of what you wanted to secure!
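The extension property described here is easy to check numerically. A toy sketch, with SHA-256 standing in for H and a small well-known prime standing in for the proposed ~2048-bit p (illustrative parameters only):

```python
import hashlib

P = 2**255 - 19  # small stand-in for the proposed ~2048-bit prime

def h(i, block):
    """H(i || Bi) as an integer mod p."""
    d = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big") % P

def digest(blocks):
    """The bare product H(1 || B1) * ... * H(n || Bn) mod p, no outer hash."""
    acc = 1
    for i, b in enumerate(blocks, start=1):
        acc = (acc * h(i, b)) % P
    return acc

msg = [b"block one", b"block two"]
d = digest(msg)
# Knowing only d and len(msg), anyone can extend the message in one step:
extended = (d * h(3, b"block three")) % P
assert extended == digest(msg + [b"block three"])
```

Whether this counts as a break or as the intended update behavior is exactly the disagreement in the following messages.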
-- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From l at odewijk.nl Tue Oct 7 15:02:46 2014 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Tue, 7 Oct 2014 21:02:46 +0200 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> References: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> Message-ID: 2014-10-07 16:21 GMT+02:00 Arnold Reinhold : > I would envision including a good quality internal clock, set at time of > manufacture and non alterable. (When the clock battery dies, the camera is > toast.) The camera would periodically or on command output a signed > certificate containing the current reading of its internal clock and maybe > an external nonce like the NIST beacon, which might then be sent to a time > stamping service, creating a record of internal clock drift over time. The > camera might store a correction factor, so it could output a UTC time, but > the internal clock would be included in any certificate as well. > > It would seem that a camera like this would be useful in a variety of > applications (besides kidnapping) to create legally provable documents. > Assuming it had a video mode, it could be used as a notary, recording a > person's spoken acceptance of contract, or witnessing his handwritten > signature on a document. Of course one would still have to trust the > manufacturer. > Just put it in the SD card, not the camera. Lot cheaper to replace. Of course, that doesn't mean you trust the contents, just that the image existed at some point in time (which is pretty good!). Then you could have additional certificates from the camera, if the HSM self-destructs there'll be no more certificates. If I can show such a file + certificate, then it means a valid camera shot that specific image, just not at what time specifically.
The failure is that there's more than the eye can see.. An unedited video of a fake moonlanding is still a fake moonlanding, even if it looked real. For that matter, something faked at the same time as when something was supposed to happen still looks real anyhow! So, I guess the in-SD-card option seems better. It'd be cool to have for other files too, claiming prior art over an idea that someone's patenting because of a sketch or a dribble in your (digital) notebook? Pretty cool! Preserves privacy and solves conflicts in a really positive way. It would also really kill overly broad patents, because everyone has probably thought about it at some point :). You can even sell your notebook's page-with-similar-enough-note to a party that wants to invalidate a patent! That'd really turn things upside down and around again. > A signing camera isn?t a new idea, a quick Google search came up with this > 1992 paper > http://www.friedmanarchives.com/Writings/Trustworthy_Digital_Camera_Technical_Paper.pdf > , but camera technology developed for cell phones makes something like this > much more affordable. Has anyone attempted this? How close could we get > with an iPhone 6, given Apple's improved security scheme? > Secure coprocessors can do just about anything. The wait is on for the NSA chips to begin rolling out as "Secure coprocessors" :] Of course, it's also cheating. You're just putting a central entity in charge of authenticity, and duplicating the whole entity a whole lot. I suspect that on the iPhone 6 there's no added value, given the amount of other possible exploits. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd at av8n.com Tue Oct 7 15:21:29 2014 From: jsd at av8n.com (John Denker) Date: Tue, 07 Oct 2014 12:21:29 -0700 Subject: [Cryptography] 1023 nails in the coffin of 1024 RSA... 
In-Reply-To: <92E81F1B-BB17-4D09-9905-1FC7DEBF465A@lrw.com> References: <543045BA.1090400@iang.org> <20141006215735.GF5358@netmeister.org> <20141006222429.GF9816@yuggoth.org> <92E81F1B-BB17-4D09-9905-1FC7DEBF465A@lrw.com> Message-ID: <54343D39.5080909@av8n.com> On 10/07/2014 04:14 AM, Jerry Leichter wrote: > N/ln N is just an approximation. For all you know, > the true number of primes could be only a millionth of that! I wouldn't have said that. Actually π(N) > N / (2 + ln N) [1] is a hard lower bound for all N ≥ 2, i.e. for all nontrivial N. Even tighter bounds exist, but [1] is more than good enough for present purposes. Reference: http://en.wikipedia.org/wiki/Prime_number_theorem#Bounds_on_the_prime-counting_function From hettinga at gmail.com Tue Oct 7 17:18:58 2014 From: hettinga at gmail.com (Robert Hettinga) Date: Tue, 7 Oct 2014 17:18:58 -0400 Subject: [Cryptography] Facebook's reportedly working on a mobile app for anonymity Message-ID: Holography is destiny, boys and girls. And behavior, particularly on Facebook, is holography. Pull the other leg, Zuck. It has bells on it. Cheers, RAH -------- http://www.engadget.com/2014/10/07/facebook-anonymity-app/ Facebook's reportedly working on a mobile app for anonymity Social media giant Facebook drew ire from some users recently due to its strict real name policy. Some even fled for Ello -- madness! Now, it looks like Facebook's responding to complaints with a mobile app "that allows users to interact inside of it without having to use their real names." That's according to two people speaking with The New York Times, anyway; the sources also said the app is set to launch "in the coming weeks." Facebook has long held policies requiring verified email addresses, originally requiring college-specific email address logins per its collegiate origins. Those policies clashed recently with drag queens, some of whom had their profiles outright removed from Facebook (by algorithms).
It's not clear how this app would affect web-based Facebook use, if at all, and Facebook's not saying a peep thus far. A Facebook rep told Engadget, "We don't comment on rumor or speculation." Facebook is also widely criticized for its use of user information. When Facebook made its Messenger app standalone and required more access to users' information, many swore off the service as a result. Which is to say nothing of criticisms about user privacy; see our privacy how-to from way back in 2010 for evidence of how long this has been a user concern. Issues have repeatedly arisen around Facebook changing its Terms of Service. CEO Mark Zuckerberg even had to face Federal courts regarding user concerns. At the time, he admitted mistakes were made in Facebook's past. Zuckerberg and co. certainly aren't against addressing user concerns, and they've been meeting with members of the LGBTQ community to address the recent issues with real names. We'll have to wait and see if this is a reaction to that, or something else altogether. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hanno at hboeck.de Tue Oct 7 17:30:41 2014 From: hanno at hboeck.de (Hanno Böck) Date: Tue, 7 Oct 2014 23:30:41 +0200 Subject: [Cryptography] SPHINCS: practical hash-based digital signatures In-Reply-To: References: Message-ID: <20141007233041.50e21497@pc> I like it that the whole area of post-quantum crypto is getting more attention lately. However, what immediately caught my attention: the webpage says "Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB". The signature size is a problem. It makes the claim that it's a "drop-in replacement" for current signature schemes somewhat questionable. 41 KB may not seem much, but consider a normal TLS handshake.
It usually already contains three signatures (2 for the certificate chain and one for the handshake itself). That already makes about 123 KB. It may not seem that much, but it definitely is an obstacle because this would significantly impact your loading time. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From waywardgeek at gmail.com Tue Oct 7 17:34:01 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 7 Oct 2014 17:34:01 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <55E71AF5-0426-4C55-81FF-42F96C8BCE02@lrw.com> References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> <55E71AF5-0426-4C55-81FF-42F96C8BCE02@lrw.com> Message-ID: On Tue, Oct 7, 2014 at 2:58 PM, Jerry Leichter wrote: > On Oct 7, 2014, at 7:02 AM, Bill Cox wrote: > > Actually, this is one of my favorite processes for producing good > ideas. Continuing with this process, what's wrong with: > > > > Digest = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p > This falls immediately to a prefix attack: If I know Digest(M) and > length(M) (assume for simplicity that length(M) is a multiple of the block > size) then > > Digest(M || Bn+1) = Digest(M) * H(n + 1 || Bn+1) mod p > > - taking the remainder mod p twice produces the same result as doing it > only once. > > > I think I've shown this is secure based on the difficulty of the > discrete log problem. If true, isn't this exactly what you say is unlikely > to happen? > You've tossed around a powerful result without tying it to the security of > what you wanted to secure! > -- Jerry Hi, Jerry. Thanks for taking a look at this. However, I am confused. Why do I care that an attacker can in constant time compute the correct hash of a different set of data?
By multiplying in H(n+1 || Bn+1), the data is different, and so is the hash. That seems to be the way we want this function to work. In constant time any new block can be added or subtracted from the hash. That is the goal, I believe. It seems to me that the security model is that if we have D = H(B1, B2, ... , Bn), then an attacker should not be able to find D = H(C1, C2, ... , Cm), unless n == m and Bi == Ci for all i in 1 .. n. By appending H(n+1 || Bn+1), you've changed the message, and derived the correct hash of it, in constant time. Isn't that useful? You can also prepend, or even replace any Bi you want in constant time. I agree that this should certainly not be used for anything for now, but I would like to talk about it more. Do you agree with my assertion that it is secure for the purpose of verifying data integrity of a large data set, while allowing constant time update, including appending new data blocks or replacing old ones? Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From agr at me.com Tue Oct 7 19:41:37 2014 From: agr at me.com (Arnold Reinhold) Date: Tue, 07 Oct 2014 19:41:37 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> Message-ID: <80E0A79D-0349-4E01-9190-4BC175BB6DF2@me.com> On Oct 7, 2014, at 3:02 PM, Lodewijk andré de la porte wrote: > 2014-10-07 16:21 GMT+02:00 Arnold Reinhold : > I would envision including a good quality internal clock, set at time of manufacture and non alterable. (When the clock battery dies, the camera is toast.) The camera would periodically or on command output a signed certificate containing the current reading of its internal clock and maybe an external nonce like the NIST beacon, which might then be sent to a time stamping service, creating a record of internal clock drift over time.
The camera might store a correction factor, so it could output a UTC time, but the internal clock would be included in any certificate as well. > > It would seem that a camera like this would be useful in a variety of applications (besides kidnapping) to create legally provable documents. Assuming it had a video mode, it could be used as a notary, recording a person's spoken acceptance of contract, or witnessing his handwritten signature on a document. Of course one would still have to trust the manufacturer. > > Just put it in the SD card, not the camera. Lot cheaper to replace. Of course, that doesn't mean you trust the contents, just that the image existed at some point in time (which is pretty good!). Then you could have additional certificates from the camera, if the HSM self-destructs there'll be no more certificates. If I can show such a file + certificate, then it means a valid camera shot that specific image, just not at what time specifically. Having the camera and a clock inside the module cuts out all video editing techniques. The camera can attest when the entire optical image was captured. I'd go further and include a gyro/accelerometer package so a panorama could be captured with attestation that the camera was actually turned, rather than presented with a moving image. > > The failure is that there's more than the eye can see.. An unedited video of a fake moonlanding is still a fake moonlanding, even if it looked real. For that matter, something faked at the same time as when something was supposed to happen still looks real anyhow! So, I guess the in-SD-card option seems better. It'd be cool to have for other files too, claiming prior art over an idea that someone's patenting because of a sketch or a dribble in your (digital) notebook? Pretty cool! Preserves privacy and solves conflicts in a really positive way. It would also really kill overly broad patents, because everyone has probably thought about it at some point :).
You can even sell your notebook's page-with-similar-enough-note to a party that wants to invalidate a patent! That'd really turn things upside down and around again. IANAL, but my understanding is that the US is going to a first-to-file patent priority system, as has much of the rest of the world already, so what ideas you had when only matters if you publish them. An online, timestamped, well indexed patent disclosure journal where people could post all their clever ideas as fast as they get them would be easy enough to implement. A mechanism for others to add comments that clarified and extended your ideas (much like these threads) would help prevent trolls patenting the gaps in your ideas. A small fee could keep down spam. Existing crypto and time-stamping services should suffice for authentication. Arnold Reinhold -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Tue Oct 7 20:17:11 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Tue, 7 Oct 2014 17:17:11 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function Message-ID: Hash trees are provably secure, and fastest on typical processors when parallelized. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Tue Oct 7 20:30:24 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Tue, 7 Oct 2014 17:30:24 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? Message-ID: Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? Such a weakness would be crackable. It could probably be achieved if weak states were discovered. But I don't think it would be a result of the current attack discovered in 2013 by Bernstein and Co. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From waywardgeek at gmail.com Tue Oct 7 21:31:58 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 7 Oct 2014 21:31:58 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: Message-ID: On Tue, Oct 7, 2014 at 8:17 PM, Ryan Carboni wrote: > Hash trees are provably secure, and fastest on typical processors when > parallelized. > Constant time update is asymptotically infinitely faster than log-time hash trees for updates, and I think also provably secure. Besides that, there are plenty of real-world applications where constant time updates are acceptable, but log(n) are not. Is anyone here going to address my defence against Wagner's generalized birthday attack? By the way, Wagner is one of my heroes. Defending against even one of his attacks would be quite validating. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Tue Oct 7 21:59:28 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 7 Oct 2014 21:59:28 -0400 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: <542BB301.5030603@ladisch.de> References: <542A3800.8010007@iang.org> <542BB301.5030603@ladisch.de> Message-ID: On Wed, Oct 1, 2014 at 3:53 AM, Clemens Ladisch wrote: > Bill Cox wrote: > > On Tue, Sep 30, 2014 at 7:03 AM, Natanael wrote: > > > On 30 Sep 2014 09:55, "Philipp Gühring" wrote: > > > > So from a marketing point of view you should put a whitener on the > > > > part. > > > > > > Yes! > > > > Thanks for that suggestion. I'll whiten with some of the leftover gates. > > How to do a decent job sounds like a fun problem. > > You need custom drivers for this device anyway, so it might be a better > idea to let the software do a decent job. (You might want to add to the > USB packets a header with the current settings and the actual amount of > entropy; in that case there is less danger that anybody thinks this data > is a perfectly random bit stream.)
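A software whitener of the sort Clemens suggests above could be as simple as a von Neumann debiasing pass. This is only a sketch of the idea, not the actual driver design; it assumes the raw bits are independent but possibly biased, which a real extractor for a correlated source would not.

```python
def von_neumann(bits):
    """Debias an independent-but-biased bit stream: the pair 01 emits 0,
    10 emits 1, and 00/11 are discarded. Output rate is at most 1/4 of
    the input rate, and worse for heavily biased input."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# 01 -> 0, 10 -> 1, 11 discarded, 00 discarded, 01 -> 0
assert von_neumann([0, 1, 1, 0, 1, 1, 0, 0, 0, 1]) == [0, 1, 0]
```

In practice one would feed the debiased output into a cryptographic extractor (or the OS entropy pool) rather than hand it to applications directly.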
> > And why are you calling it a whitener instead of a randomness extractor? > The former name could imply that the output looks random, but has less > than 100% entropy. > > > Regards, > Clemens > I've reduced the BOM for the parts (not board/assembly/test yet) from about $7.00 to $2.60. Unfortunately, my bandwidth dropped from 1MiB/s to maybe 25KiB/s. Also, I was convinced by the argument above that I had to write a driver anyway, so why not put the whitener there? I call it a whitener because that's the accepted term... frankly I think that term sucks, but I have gripes about a lot of common terms like this. I removed the FPGA from the design and now only have a USB-to-FIFO chip acting in bit-bang mode to control the infinite noise multiplier, which is much slower. I think you guys were right to have me focus on cost. More people will copy my $1.10 in parts (without the USB controller) even if it generates only 25KiB/s, than ever would copy my $5.50 1MiB/s TRNG. I've got my $1.60 USB interface chip to configure Lattice ICE40 FPGAs, which only cost about $1.50 (both in quantities 1,000). It seems like that would be a fun proto-board by itself. A $5 FPGA USB hacker board might be fun... The Lattice tools to configure it run a free copy of Synplify Pro, which looks almost exactly like it did when I stopped working on this tool in 1998. The schematic generator seems to be about the same as I left it, though there was a really good guy making amazing improvements for a while after I left. Time seems to have degraded it back to my version. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Tue Oct 7 23:45:08 2014 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 7 Oct 2014 20:45:08 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less?
In-Reply-To: References: Message-ID: Is there some legacy reason you really need to support RC4 in this use case? If not, why not use a modern cipher like ChaCha20 instead? RC4 is clearly not a good cipher, especially compared to modern alternatives. Do we really need to take a straw poll of how horribly it will be broken in the future? -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From agr at me.com Wed Oct 8 08:09:21 2014 From: agr at me.com (Arnold Reinhold) Date: Wed, 08 Oct 2014 08:09:21 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: <1FB924BA-C18E-41A3-A084-D8E2E74C4826@me.com> On Oct 8, 2014, at 12:55 AM, Peter Gutmann wrote: > Arnold Reinhold writes: > >> This conundrum suggests a need for a camera that cryptographically signs its images. > > These already exist, and have been in use for many years: > > http://www.canberra.com/products/safeguards_surveillance_seals/pdf/DCM-14-SS-C29203.pdf > http://www.iaea.org/safeguards/symposium/2010/Documents/PapersRepository/2605737238931922303602.pdf > > Peter. Those are interesting examples, but intended for fixed mounting in nuclear surveillance situations and presumably very costly. I'm thinking of a much more portable device based on cell phone technology, with video, audio and additional sensors such as motion, compass, GPS (ok, that one's a deal breaker for the kidnapping market). And as a separate question, what can be done with, say, the newer iPhones, given their stronger security model. For example, two adversarial parties could video and sign the same event with separate phones and then sign each others videos after inspecting them and concluding that each captured the intended information. One could bound the time of the videos by starting with a Nist Beacon value and ending with a time stamp from a service. 
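The time-bounding idea above can be sketched in a few lines: baking a beacon value into the signed record proves the recording cannot predate the beacon output, and submitting the signed record to a time-stamping service proves it cannot postdate the receipt. This is purely illustrative; `DEVICE_KEY` and the HMAC stand in for a real per-camera signing key in a secure element, which would use a public-key signature so verifiers need not hold the secret.

```python
import hashlib
import hmac

DEVICE_KEY = b"per-camera secret"  # hypothetical stand-in for a signing key

def attest(video_bytes, beacon_value):
    """Bind a recording to a time interval: the beacon value lower-bounds
    its creation time, and time-stamping the returned tag upper-bounds it."""
    payload = beacon_value + hashlib.sha256(video_bytes).digest()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return payload, tag

payload, tag = attest(b"frame data...", b"nist-beacon-output")
# A verifier with the camera's key recomputes and compares the tag:
assert hmac.compare_digest(tag, hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest())
```

The two adversarial cameras Arnold describes would each run this, then countersign each other's `payload` after inspection.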
Arnold Reinhold From bascule at gmail.com Wed Oct 8 00:41:08 2014 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 7 Oct 2014 21:41:08 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? In-Reply-To: References: Message-ID: On Tue, Oct 7, 2014 at 9:02 PM, Ryan Carboni wrote: > There is an ongoing debate on an ietf mailing list that I'd like to be > brought here. > RC4 should probably be abandoned sooner than later -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Wed Oct 8 11:57:27 2014 From: bear at sonic.net (Bear) Date: Wed, 08 Oct 2014 08:57:27 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: References: Message-ID: <1412783847.28022.1.camel@sonic.net> On Fri, 2014-10-03 at 06:42 -0700, Henry Baker wrote: > At 11:29 PM 10/2/2014, Ryan Carboni wrote: > >"Each such value is sequence-numbered, time-stamped and signed, and includes the hash of the previous value to chain the sequence of values together and prevent even the source to retroactively change an output package without being detected." > >They are both block chains. > > > >And they both include the time. > > So you can easily convert from cryptotime to GMT time. > > What is the authenticated algorithm to convert GMT time to cryptotime > for a) Bitcoin blockchain; b) NIST whatever-its-called ? > > And unlike Bitcoin, where millions of processors are working very hard > to make sure that it can't be hacked, where are those millions of > processors to make sure that the NIST chain can't be hacked? I have to point out here that there is absolutely nothing in the Bitcoin protocol that prevents servers who solve a block from misreporting the time. In fact, it has been a strategy in the past for parallelizing the hashing. 
When the portion of the nonce that people could adjust seeking a winning hash was exceeded by hardware capacity, the time field was incremented even though the mentioned time had not yet arrived, and the search started over. Because that was faster than rearranging the transactions to form a different base for hashing, I guess. Anyway, the time reported in bitcoin blocks is approximate, and historically not even monotonically increasing. It is testimony that the Bitcoin network accepted the server's claim of what time it was, so probably, usually, within ten minutes of the real time. But it does not establish any precise notion of time. Bear From bear at sonic.net Wed Oct 8 12:20:24 2014 From: bear at sonic.net (Bear) Date: Wed, 08 Oct 2014 09:20:24 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <003a01cfe030$e9047830$bb0d6890$@huitema.net> References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> <003a01cfe030$e9047830$bb0d6890$@huitema.net> Message-ID: <1412785224.28022.3.camel@sonic.net> On Sat, 2014-10-04 at 17:11 -0700, Christian Huitema wrote: > > To a programmer a good hash table is not the same as a good crypto hash. > > A programmer simply wants a fast lookup with a minimum miss, collision. > > Most programmers do not care if a collision is moderately easy to fabricate > > because they want to get close enough not exactly and will walk their way to > > the desired data (short walk). > > Actually, it is a bit more complex than that. In many applications, you > have to be concerned about denial of service attacks. If an outsider > can manufacture hash collisions, then you can end up with a serious > issue, the hash resolution moving for example from O(1) to O(N). Think > for example of a hash table going from TCP headers to TCP context, and > a SYN attack amplifying the damage by picking combinations of address > and ports that result in hash collisions. 
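The degradation Christian describes above is easy to demonstrate with a toy table and a deliberately weak hash (purely illustrative; real hash-flooding attacks target the specific hash a TCP stack or language runtime uses):

```python
# A toy hash table whose hash is just the low byte of the key. An attacker
# who controls the keys can force every insert into one bucket, turning
# O(1) lookups into O(N) scans.
NBUCKETS = 256
table = [[] for _ in range(NBUCKETS)]

def weak_hash(key: int) -> int:
    return key & 0xFF  # trivially invertible: any multiple of 256 maps to 0

for k in range(0, 256 * 100, 256):  # 100 attacker-chosen colliding keys
    table[weak_hash(k)].append(k)

assert len(table[0]) == 100                 # one bucket holds everything
assert all(len(b) == 0 for b in table[1:])  # the rest sit empty
```

Keyed hashes such as SipHash were introduced precisely so that outsiders cannot precompute colliding keys like this.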
True, but he has a point. In most programming applications a hash drives a hash table in a context where we aren't at all worried about an attacker. For example, in a program that's keeping track of the actors in a simulation, where each actor is assigned a GUID number when it's created. Then as the simulation continues, old actors are removed and new actors are created, and the program keeps track of them with a hash table. This app doesn't even communicate over any network and (if it's, eg, a single-player game) probably doesn't compute anything that any attacker cares about. All the programmer cares about for the hash function on that table is that a) you want aliasing (accidental collisions) to be minimized. b) you want the hash to be fast to compute. c) you want "random" distribution of GUID's in the hash table (ie, no single part of your hash table should be *more* full than the other parts, or at least not by any margin distinguishable from statistical noise on random numbers). I have in fact solved this problem in the past by using a counter viewed through a linear congruential transformation to assign the GUID's, and then using the GUID modulo the size of the hash table as a hash function. And that's considered to be an elegant and completely standard solution. Table hashes, usually, are not cryptographic hashes. Bear From benl at google.com Wed Oct 8 11:03:17 2014 From: benl at google.com (Ben Laurie) Date: Wed, 08 Oct 2014 15:03:17 +0000 Subject: [Cryptography] SPHINCS: practical hash-based digital signatures References: <20141007233041.50e21497@pc> Message-ID: On Tue Oct 07 2014 at 10:51:21 PM Hanno Böck wrote: > I like it that the whole area of post-quantum-crypto is getting more > attention lately. > > However what immediately caught my attention: The webpage says > "Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB" > > The signature size is a problem.
It makes the claim that it's a > "drop-in replacement" for current signature schemes somewhat > questionable. > > 41 kb may not seem much, but consider a normal TLS handshake. It > usually already contains three signatures (2 for the certificate chain > and one for the handshake itself). That already makes 120 kb. > > It may not seem that much, but it definitely is an obstacle because this > would significantly impact your loading time. > Definitely a deal breaker for HTTPS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Wed Oct 8 12:16:42 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 08 Oct 2014 09:16:42 -0700 Subject: [Cryptography] Best Internet crypto clock ? In-Reply-To: <1412783847.28022.1.camel@sonic.net> References: <1412783847.28022.1.camel@sonic.net> Message-ID: At 08:57 AM 10/8/2014, Bear wrote: >I have to point out here that there is absolutely nothing in the Bitcoin >protocol that prevents servers who solve a block from misreporting >the time. > >In fact, it has been a strategy in the past for parallelizing the >hashing. When the portion of the nonce that people could adjust >seeking a winning hash was exceeded by hardware capacity, the >time field was incremented even though the mentioned time had >not yet arrived, and the search started over. Because that was >faster than rearranging the transactions to form a different >base for hashing, I guess. > >Anyway, the time reported in bitcoin blocks is approximate, and >historically not even monotonically increasing. It is testimony >that the Bitcoin network accepted the server's claim of what >time it was, so probably, usually, within ten minutes of the >real time. But it does not establish any precise notion of time. For most purposes, the time priority relative to some other events _not under any one organization's control_ is more important than some nominal time value. 
After all, almost all digital devices these days include some form of a clock; the problem is, we can't believe these clock values unless they are "laced up" with external events in such a way that the clock values can't have been spoofed by very much. So no, the nominal clock value in the Bitcoin blockchain is no more trustworthy than my computer's own clock. That having been said, a proper Internet crypto clock _would_ make a much better attempt to correspond with real GMT, and _would_ attempt to produce a strict monotonically increasing series of nominal clock values, in addition to its crypto hash "ticks". From jya at pipeline.com Wed Oct 8 07:59:36 2014 From: jya at pipeline.com (John Young) Date: Wed, 08 Oct 2014 07:59:36 -0400 Subject: [Cryptography] State Hash Message-ID: http://sphincs.cr.yp.to/ Special note to law-enforcement agents: The word "state" is a technical term in cryptography. Typical hash-based signature schemes need to record information, called "state", after every signature. Google's Adam Langley refers to this as a "huge foot-cannon" from a security perspective. By saying "eliminate the state" we are advocating a security improvement, namely adopting signature schemes that do not need to record information after every signature. We are not talking about eliminating other types of states. We love most states, especially yours! Also, "hash" is another technical term and has nothing to do with cannabis. From leichter at lrw.com Wed Oct 8 10:33:17 2014 From: leichter at lrw.com (Jerry Leichter) Date: Wed, 8 Oct 2014 10:33:17 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> <55E71AF5-0426-4C55-81FF-42F96C8BCE02@lrw.com> Message-ID: <85AF0E33-E35C-41BA-84AE-9923AB33F864@lrw.com> On Oct 7, 2014, at 5:34 PM, Bill Cox wrote: > > Digest = H(1 || B1) * H(2 || B2) * ... 
* H(n || Bn) mod p > This falls immediately to a prefix attack... > > Hi, Jerry. Thanks for taking a look at this. However, I am confused. Why do I care that an attacker can in constant time compute the correct hash of a different set of data? Urk, I kind of applied the wrong tool. In fact, the iterative structure of the commonly used hash functions all have this prefix property - which is what makes the obvious techniques of pre- or post-fixing a key to make a keyed MAC from a hash fail. However, this has always been seen as a weakness. You'd like a hash function to "look like a random function". Anything that makes it look less like a random function leaves potential traps for the unwary - many, many people have made the mistake of using H(key || X) as a keyed MAC. It's good to have robust primitives that are hard to break. Every algebraic property of a hash is also a potential trap for the unwary. In fact, the newer generation of hash functions, I believe, do *not* have this prefix property - nor do they have any other algebraic properties anyone is aware of. In the case you propose, there's another issue: Digest(X) == 0 if and only if one of the constituent hashes is 0. That breaks one of the basic requirements for a secure hash function: Given H(X), it's difficult to find a Y != X such that H(Y) == H(X). When H(X) == 0, it's trivial to find as many examples as you like. > By multiplying in H(n+1 || Bn+1), the data is different, and so is the hash. That seems to be the way we want this function to work. In constant time any new block can be added or subtracted from the hash. That is the goal, I believe. I'm not sure what the goal is. This is not a property of a random function, so is not a property expected of a secure hash function. You're proposing a new primitive, with its own set of security properties (which you haven't fully written down), and which may or may not be useful. > It seems to me that the security model is that if we have D = H(B1, B2, ...
, Bn), then an attacker should not be able to find D = H(C1, C2, ... , Cm), unless n == m and Bi == Ci for all i in 1 .. n. See above; this is false when D == 0. > By appending H(n+1 || Bn+1), you've changed the message, and derived the correct hash of it, in constant time. Isn't that useful? I have no idea. You would need to propose a use, define the security properties needed for that use, then show (under appropriate assumptions) that you've attained them. Note that the cost of computing your digest is at least double that of simply doing a hash over the original data (as you run the hash on twice as much data - not to mention the cost of all those modular multiplications). You'd need to justify that cost. > You can also prepend, or even replace any Bi you want in constant time. > I agree that this should certainly not be used for anything for now, but I would like to talk about it more. Do you agree with my assertion that it is secure for the purpose of verifying data integrity of a large data set, while allowing constant time update, including appending new data blocks or replacing old ones? I have no clue. I also see no obvious advantage to your scheme over simply adding the constituent hashes together (effectively mod 2^n) - a much, much cheaper operation which doesn't have problems with 0. (You can make it even cheaper by considering each H() value as a vector of 32- or 64-bit values and adding them as vectors.) Can you describe an attack on the cheaper approach that fails for yours? -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
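The multiplicative digest debated in this thread, along with the constant-time append and replace operations Bill claims for it, can be sketched as a toy (not an endorsement of its security; the tiny 127-bit modulus and the crude dodge of the H == 0 trap Jerry points out are assumptions purely for illustration):

```python
import hashlib

P = (1 << 127) - 1  # a Mersenne prime; a real scheme would pick p carefully

def block_hash(i, block):
    """H(i || block) reduced mod P, forced nonzero to sidestep the
    Digest == 0 degenerate case noted in the thread."""
    h = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(h, "big") % P or 1

def digest(blocks):
    """Digest = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod P."""
    d = 1
    for i, b in enumerate(blocks, 1):
        d = (d * block_hash(i, b)) % P
    return d

def replace_block(d, i, old, new):
    """Constant-time update: divide out the old factor (modular inverse),
    multiply in the new one."""
    d = (d * pow(block_hash(i, old), -1, P)) % P
    return (d * block_hash(i, new)) % P

blocks = [b"alpha", b"beta", b"gamma"]
d = digest(blocks)
# Replacing block 2 incrementally matches recomputing from scratch:
assert replace_block(d, 2, b"beta", b"BETA") == digest([b"alpha", b"BETA", b"gamma"])
# Appending a block is a single multiplication:
assert (d * block_hash(4, b"delta")) % P == digest(blocks + [b"delta"])
```

Jerry's cheaper additive alternative would replace the multiplications with componentwise addition of the hash words, trading the modular inverses for simple subtraction.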
Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From pgut001 at cs.auckland.ac.nz Wed Oct 8 00:55:41 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Wed, 08 Oct 2014 17:55:41 +1300 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> Message-ID: Arnold Reinhold writes: >This conundrum suggests a need for a camera that cryptographically signs its images. These already exist, and have been in use for many years: http://www.canberra.com/products/safeguards_surveillance_seals/pdf/DCM-14-SS-C29203.pdf http://www.iaea.org/safeguards/symposium/2010/Documents/PapersRepository/2605737238931922303602.pdf Peter. From ryacko at gmail.com Wed Oct 8 03:15:08 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Wed, 8 Oct 2014 00:15:08 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? In-Reply-To: References: Message-ID: I doubt it will ever be broken though. Attacks don't improve linearly, they improve logarithmically as knowledge reaches the maximum attainable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Wed Oct 8 15:56:17 2014 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 8 Oct 2014 12:56:17 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? In-Reply-To: References: Message-ID: On Wednesday, October 8, 2014, Ryan Carboni wrote: > I doubt it will ever be broken though. > It's already broken. ChaCha20 is both faster and more secure. Stop defending bad crypto. -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agr at me.com Wed Oct 8 16:57:43 2014 From: agr at me.com (Arnold Reinhold) Date: Wed, 08 Oct 2014 16:57:43 -0400 Subject: [Cryptography] The world's most secure TRNG Message-ID: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> On Tue, 7 Oct 2014 21:59 Bill Cox wrote: > I've reduced the BOM for the parts (not board/assembly/test yet) from about > $7.00 to $2.60. Unfortunately, my bandwidth dropped from 1MiB/s to maybe > 25KiB/s. Also, I was convinced by the argument above that I had to write a > driver anyway, so why not put the whitener there? ... Good call. > > I removed the FPGA from the design and now only have a USB-to-FIFO chip > acting in bit-bang mode to control the infinite noise multiplier, which is > much slower. I think you guys were right to have me focus on cost. More > people will copy my $1.10 in parts (without the USB controller) even if it > generates only 25KiB/s, than ever would copy my $5.50 1MiB/s TRNG. The Lattice ICE40 FPGA product page http://www.latticesemi.com/Products/FPGAandCPLD/iCE40.aspx says it has a hard I2C core. Is that only for configuring the FPGA or can it be used to output your random bits? If not, how hard would it be to make a second I2C controller from the excess FPGA logic? It sounds like the ICE40 costs about the same as the USB-to-FIFO, so an alternative FPGA-only I2C model should hit the same price point. I2C would open a different market -- embedded -- that might be much larger and really needs a good cheap random source. > > I've got my $1.60 USB interface chip to configure Lattice ICE40 FPGAs, > which only cost about $1.50 (both in quantities 1,000). It seems like that > would be a fun proto-board by itself. A $5 FPGA USB hacker board might be > fun... The Lattice tools to configure it runs a free copy of Synplify Pro, > which looks almost exactly like it did when I stopped working on this tool > in 1998. 
The schematic generator seems to be about the same as I left it, > though there was a really good guy making amazing improvements for a while > after I left. Time seems to have degraded it back to my version. There might well be a market for hobbyists, e.g. via AdaFruit. FPGA dev boards I've seen start in the $100 range, admittedly much more powerful arrays but how many hobbyists can begin to use that power? You'll need to package up the design software with some documentation and a few simple examples. Arnold Reinhold From bascule at gmail.com Wed Oct 8 18:17:01 2014 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 8 Oct 2014 15:17:01 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? In-Reply-To: References: Message-ID: On Wed, Oct 8, 2014 at 3:10 PM, Eric Mill wrote: > Tony - what's ChaCha20 client support like? > libsodium provides a ChaCha20+Poly1305 AEAD construction across a variety of platforms: https://github.com/jedisct1/libsodium See: http://doc.libsodium.org/advanced/chacha20.html -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Wed Oct 8 18:26:02 2014 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 8 Oct 2014 15:26:02 -0700 Subject: [Cryptography] SPHINCS: practical hash-based digital signatures In-Reply-To: References: <20141007233041.50e21497@pc> Message-ID: On Wed, Oct 8, 2014 at 8:03 AM, Ben Laurie wrote: > Definitely a deal breaker for HTTPS. > This is much more interesting for data-at-rest use cases, especially for what Zooko calls "hundred year cryptography". Digital signatures are the only public key component of the "data-at-rest" format of the Tahoe-LAFS distributed filesystem, so if Tahoe changed to hash-based signatures, it could theoretically survive the advent of quantum computers. For transport encryption? 
Probably not ;) -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Wed Oct 8 19:00:33 2014 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 9 Oct 2014 10:00:33 +1100 (EST) Subject: [Cryptography] The world's most secure TRNG In-Reply-To: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> References: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> Message-ID: It's possible that I may have missed this (the list seems to have spiked lately), but how would the device present itself to the host? A serial stream of random bits (like a terminal or a keyboard), or some sort of a structure with command and control etc? -- Dave From mitch at niftyegg.com Wed Oct 8 19:37:36 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Wed, 8 Oct 2014 16:37:36 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <80E0A79D-0349-4E01-9190-4BC175BB6DF2@me.com> References: <5329A60E-B029-4947-AD3C-3D72BA1D8915@me.com> <80E0A79D-0349-4E01-9190-4BC175BB6DF2@me.com> Message-ID: On Tue, Oct 7, 2014 at 4:41 PM, Arnold Reinhold wrote: > > On Oct 7, 2014, at 3:02 PM, Lodewijk andré de la porte > wrote: > > 2014-10-07 16:21 GMT+02:00 Arnold Reinhold : > >> I would envision including a good quality internal clock, set at time of >> manufacture and non alterable. (When the clock battery dies, the camera is >> toast.) The camera would periodically or on command output a signed >> certificate containing the current reading of its internal clock and maybe >> an external nonce like the NIST beacon, which might then be sent to a time >> stamping service, creating a record of internal clock drift over time. The >> camera might store a correction factor, so it could output a UTC time, but >> the internal clock would be included in any certificate as well. >> > ...... > Having the camera and a clock inside the module cuts out all video editing > techniques.
> The camera can attest when the entire optical image was captured. I'd go further and include a gyro/accelerometer package so a panorama could be captured with attestation that the camera was actually turned, rather than presented with a moving image.

A clock that is sufficiently stable for long periods of time and temperature deltas is nearly impossible to design. As soon as one states UTC or something common the answer approaches impossible. However a free running ticker and counter with sufficient bits to never overflow makes sense. The Timex on my wrist has a ten year battery... but not enough bits and no electrical access. Trivia for digital wrist watches: the transistors of the clock logic operate in an analog mode to allow extreme low power (low clock rate). In designing clocks one fact is that when broken most are exactly correct twice a day and the average of errors will approach zero. A statistician might convince a politician that a broken clock is a perfect clock.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From coruus at gmail.com Wed Oct 8 20:09:41 2014 From: coruus at gmail.com (David Leon Gil) Date: Wed, 8 Oct 2014 20:09:41 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <1412785224.28022.3.camel@sonic.net> References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> <003a01cfe030$e9047830$bb0d6890$@huitema.net> <1412785224.28022.3.camel@sonic.net> Message-ID:

On Wed, Oct 8, 2014 at 12:20 PM, Bear wrote:
> a) you want aliasing (accidental collisions) to be minimized.
> b) you want the hash to be fast to compute.
> c) you want "random" distribution of GUID's in the hash table (ie, no single part of your hash table should be *more* full than the other parts, or at least not by any margin distinguishable from statistical noise on random numbers).
As an aside, what you're describing is a function with low discrepancy on the inputs. This is (interestingly enough) different from a function that generates values indistinguishable from uniform random numbers in the interval. (The values "anti-bunch", in some sense, compared to random numbers.) See, e.g., http://en.wikipedia.org/wiki/Low-discrepancy_sequence for the fascinating details of low-discrepancy sequences and quasi-random numbers. (I used to use (t,m,s)-nets and Niederreiter sequences for statistical work. And folks here might know Harald Niederreiter's name from his linear-code-based public key cryptosystem.) From waywardgeek at gmail.com Wed Oct 8 20:59:02 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 8 Oct 2014 20:59:02 -0400 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: References: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> Message-ID: On Wed, Oct 8, 2014 at 7:00 PM, Dave Horsfall wrote: > It's possible that I may have missed this (the list seems to have spiked > lately), but how would the device present itself to the host? A serial > stream of random bits (like a terminal or a keyboard), or some sort of a > structure with command and control etc? > > -- Dave > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > No command/control. In fact, I feel a lot better not having a microcontroller on there that could transmit nasty malware when being plugged into a new system, or which could be reprogrammed to emit non-random data. It's just a simple USB -> 8-bit fifo chip controlling the TRNG. The USB controller is a FT240X, which has some reconfigurability, but not even enough to create a 2-bit state machine. The host just sets Ph1 high and Ph2 low and vice versa through the bit-bang mode on the FT240X, and receives the resulting bytes one per clock. 
Only one bit of each byte is output from the TRNG, so you clock it 8 times and then send a byte to the whitener. I'm working on the Eagle schematic and board layout now. It's a lot of fun. I know I should put an EMI shield on the device to keep it from leaking data to attackers, but I am leaning towards shipping naked cheap little USB boards, similar to a DigiSpark. How important is the proper USB connector vs a raw connector with no housing like the DigiSpark? Do we really feel we need to wrap this thing in metal to keep it from radiating secret bits? I figure if we feed it into a whitener, an attacker would have to know *every* bit to know the state of the whitener. That seems like a tall order for an attacker trying to read bits from EMI. Bill

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jsd at av8n.com Wed Oct 8 21:14:41 2014 From: jsd at av8n.com (John Denker) Date: Wed, 08 Oct 2014 18:14:41 -0700 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: References: Message-ID: <5435E181.4060108@av8n.com>

On 09/28/2014 04:27 AM, Bill Cox wrote:
> I have a quick question for you guys. For a USB stick TRNG, would you rather pay ~$15 for a 100K-byte/second source of true entropy, or ~$30 for a 1M-byte/second source?

How about $1.08 for the whole thing, for a finished product (not just the bill of materials), including labor and including shipping? http://www.ebay.com/itm/USB2-0-Audio-Headset-Headphone-Earphone-Mic-Microphone-Jack-Converter-Adapter-XS-/390945793533?pt=US_Sound_Cards_Internal Or pay $0.00 if your machine comes with a built-in audio subsystem. It's an audio device, so it comes with a documented standard interface. Also the fact that it has an output comes in handy for calibration and for life-long quality-assurance checks. It's as secure as anything you could build yourself. The entropy delivery rate is high enough for all ordinary purposes.
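Clemens's earlier point applies to the whitener discussed above: it is really a randomness extractor, and it can live in host software. A minimal sketch of the host side, assuming a hypothetical raw byte stream from the device; the one-useful-bit-per-byte framing is carried over from Bill's description, and SHA-256 stands in for a proper conditioning function:

```python
import hashlib

def low_bits_to_bytes(raw):
    """Keep only bit 0 of each device byte (the TRNG bit), pack 8 bits per byte."""
    bits = [b & 1 for b in raw]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def extract(raw_blocks):
    """Condition raw, possibly biased TRNG bytes into 32 output bytes.

    SHA-256 is a stand-in extractor: feed it far more raw entropy than
    the 256 bits it emits, so bias in the raw stream washes out.
    """
    h = hashlib.sha256()
    for block in raw_blocks:
        h.update(low_bits_to_bytes(block))
    return h.digest()
```

Feeding the extractor many more raw bits than the 256 it emits is what makes biased raw output tolerable; nothing here should be taken as the actual FT240X protocol.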
From dj at deadhat.com Wed Oct 8 21:17:57 2014 From: dj at deadhat.com (David Johnston) Date: Wed, 08 Oct 2014 18:17:57 -0700 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: References: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> Message-ID: <5435E245.8070600@deadhat.com> On 10/8/2014 4:00 PM, Dave Horsfall wrote: > It's possible that I may have missed this (the list seems to have spiked > lately), but how would the device present itself to the host? A serial > stream of random bits (like a terminal or a keyboard), or some sort of a > structure with command and control etc? The USB serial profile isn't a bad one. The drivers will be present in any OS and you can communicate the necessary protocol on top of the serial device. It certainly beats writing a device driver for every OS. Since the device would be external to the computer (i.e. on the other end of a usb connection) it would be good if the owner of the device could provision the device with a secret key or a keypair which then sends the random data in signed lumps with some monotonic counter. So if something evil got in between the device and the consumer (application or OS kernel or VM or whatever) the consumer could check the data is what came from the device and isn't a replay or spoofed data. It's not perfect, but it addresses a number of attack scenarios. I think the primary problem with writing software that uses random data is establishing that you have it. Most environments are indistinguishable in that sense. A low entropy platform with lots of interrupts (E.G. a synchronously clocked embedded controller with no IO until after it booted) will still provide data from /dev/random. It's easy to build a platform that has an entropy supply. It's hard to know how to tell that you're on such a platform if you're writing software to run on many platforms. 
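DJ's provisioning idea above can be sketched concretely. This is a hypothetical framing, not any real device's protocol: an owner-provisioned key, an 8-byte big-endian counter prepended to each lump of random bytes, and an HMAC-SHA256 tag over the pair; the consumer rejects bad tags and non-advancing counters.

```python
import hashlib
import hmac
import struct

TAG_LEN = 32  # HMAC-SHA256 tag length

def sign_lump(key, counter, payload):
    """Device side: bind a lump of random bytes to a monotonic counter."""
    body = struct.pack(">Q", counter) + payload
    return body + hmac.new(key, body, hashlib.sha256).digest()

def verify_lump(key, lump, last_counter):
    """Consumer side: reject spoofed lumps and replays, whatever sits in between."""
    body, tag = lump[:-TAG_LEN], lump[-TAG_LEN:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("bad tag: lump did not come from the device")
    counter = struct.unpack(">Q", body[:8])[0]
    if counter <= last_counter:
        raise ValueError("counter did not advance: possible replay")
    return counter, body[8:]
```

As DJ notes, this only covers the path between device and consumer; it says nothing about the quality of the entropy inside the lumps.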
An external USB source is a good solution if you have an application that can securely identify data sourced from the device, regardless of what the platform in between is. Stick the device in the usb port, run the software and you've bypassed the risks of a low entropy platform that isn't otherwise acting against your best interests. If it's just a noise source, it'll still work, but I wouldn't call it the most defensive design you could create. FWIW, I've analyzed the raw entropy from hardware entropy sources on several products from several manufacturers and an alarming proportion of them either don't meet their min-entropy criteria or never defined them in the first place. Get your ducks in a row on the min-entropy you guarantee, the design margin, the online testing to ensure it's working and the extraction process and you will be in the upper quartile of RNG design quality. -DJ From waywardgeek at gmail.com Wed Oct 8 21:24:36 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 8 Oct 2014 21:24:36 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update Message-ID: This was Creating a Parallelizeable Cryptographic Hash Function, started by Jason Resch. His XORing hashes together is not secure, but doing the same thing with multiplication modulo a prime appears to be. The hash function is: Digest y = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p The prime p should be large, such as 2048 bits, because if an attacker can compute the discrete log of the H values mod p, he can easily find collisions. My security proof is simple. Assume an attacker has found an algorithm that takes essentially random numbers ri as inputs (the H values for each i), and finds a way to multiply some of them together to equal the previous digest. All we do is change his algorithm so that instead of picking various ri = H(i) to multiply together, compute instead si = g^ri mod p for each i used by the algorithm. 
If g is a group generator, then si is just as random as ri, but we know something about si (its discrete log). Using the attacker's algorithm, we now find the discrete log of y = g^x. Just find s1 ... sn such that s1 * s2 * ... * sn = y, and then the discrete log of y is trivially found as r1 + r2 + ... + rn mod p-1. I've had a few days to noodle on this proof, and I now believe it is sound. If the world needs a constant-time updateable parallel hash function, this should do the job. When we add new messages to the end, we can compute the new digest in constant time. We can also replace any existing message Bi with Bi', and compute the new hash in constant time. We can also use this for a rolling-window hash, similar to what rsync uses, but more secure. Jason, is this what you were looking for? I would love to know what use case you have in mind. Bill

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mitch at niftyegg.com Wed Oct 8 21:56:12 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Wed, 8 Oct 2014 18:56:12 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <1412785224.28022.3.camel@sonic.net> References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> <003a01cfe030$e9047830$bb0d6890$@huitema.net> <1412785224.28022.3.camel@sonic.net> Message-ID:

On Wed, Oct 8, 2014 at 9:20 AM, Bear wrote:
> On Sat, 2014-10-04 at 17:11 -0700, Christian Huitema wrote:
> > > To a programmer a good hash table is not the same as a good crypto hash. A programmer simply wants a fast lookup with a minimum miss, collision. Most programmers do not care if a collision is moderately easy to fabricate because they want to get close enough not exactly and will walk their way to the desired data (short walk).
> > Actually, it is a bit more complex than that. In many applications, you .... and ports that result in hash collisions.
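Bill's multiplicative construction from his message above (Digest y = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p) can be sketched as follows; this is only an illustration, with SHA-256 standing in for H and a toy 127-bit Mersenne prime in place of the 2048-bit prime he calls for:

```python
import hashlib

P = 2**127 - 1  # toy Mersenne prime; Bill recommends a 2048-bit prime in practice

def leaf(i, block):
    """H(i || Bi) as an integer, with SHA-256 standing in for H."""
    h = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(h, "big") % P

def digest(blocks):
    """Digest y = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p."""
    y = 1
    for i, block in enumerate(blocks, start=1):
        y = (y * leaf(i, block)) % P
    return y

def update(y, i, old_block, new_block):
    """Constant-time update: divide out the old leaf, multiply in the new one."""
    inv = pow(leaf(i, old_block), -1, P)  # modular inverse (Python 3.8+)
    return (y * inv * leaf(i, new_block)) % P
```

Replacing block i costs two hashes and one modular inverse, independent of n, which is the constant-time update property Bill describes; binding the index i into each leaf is what keeps the commutative product position-sensitive.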
> > True, but he has a point. In most programming applications a hash > drives a hash table in a context where we aren't at all worried about > an attacker. > ...... > For example, > Table hashes, usually, are not cryptographic hashes. > > Thank you.. Exactly my point. As people research ways to speed up hash functions knowing exactly what the goal is is key. More importantly when something changes and an internal hash lookup becomes external because it is a handy handle a risk surfaces. Anytime specifications change it is necessary to flip the full chain of dominoes to see what gets touched and what might get exposed. Incautious use of a single word in a specification can insert a problem inadvertently. T.hash C.hash and others? -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Wed Oct 8 23:01:26 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Wed, 8 Oct 2014 20:01:26 -0700 Subject: [Cryptography] Do you think RC4 will become insecure for 2^16 encryptions of the same plaintext or less? In-Reply-To: References: Message-ID: On Wed, Oct 8, 2014 at 12:15 AM, Ryan Carboni wrote: > I doubt it will ever be broken though. Attacks don't improve linearly, > they improve logarithmically as knowledge reaches the maximum attainable. > Attacks do have catastrophic breakthroughs. Both on the implementation side as well as the raw mathematics and analysis. It is simplistic to think that attacks behave in a linear or log.... way. Some are analysis by humans, some automated analysis comes to play but insight comes in a flash and can change the rules abruptly. The growth and size of the internet combined with the politics and finances of illegal hacking do imply a model. Such models make sense for some budgets but not for those with confidential data where the value of the data and penalties dominate the analysis. 
-- T o m M i t c h e l l

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pgut001 at cs.auckland.ac.nz Thu Oct 9 01:37:55 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Thu, 09 Oct 2014 18:37:55 +1300 Subject: [Cryptography] Best internet crypto clock In-Reply-To: Message-ID:

Tom Mitchell writes:
>A clock that is sufficiently stable for long periods of time and temperature deltas is nearly impossible to design.

You don't need a perfectly accurate clock, you just need to track the drift at the receiving end. In fact if you can bound the transmission latency (if it's an online protocol rather than store-and-forward) you don't need a clock at all on the sending device. Peter.

From mitch at niftyegg.com Thu Oct 9 01:52:29 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Wed, 8 Oct 2014 22:52:29 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID:

On Wed, Oct 8, 2014 at 10:37 PM, Peter Gutmann wrote:
> Tom Mitchell writes:
> >A clock that is sufficiently stable for long periods of time and temperature deltas is nearly impossible to design.
> You don't need a perfectly accurate clock, you just need to track the drift at the receiving end. In fact if you can bound the transmission latency (if it's an online protocol rather than store-and-forward) you don't need a clock at all on the sending device.

Thanks... I saw UTC and that is a standard that folk get serious about. A free running tick counter that never overflows is a good thing. Freedom from time-of-day issues, leap seconds and more makes it easy. The frequency choice is open and precision and accuracy is open. An external map of ticks to historic real world time (and temperature) is interesting in the right context. Might be interesting salt and pepper for hashing something for example.

-- T o m M i t c h e l l

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

-------------- next part -------------- A non-text attachment was scrubbed... Name: 330.gif Type: image/gif Size: 96 bytes Desc: not available URL:

From iang at iang.org Thu Oct 9 02:12:32 2014 From: iang at iang.org (ianG) Date: Thu, 09 Oct 2014 07:12:32 +0100 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: References: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> Message-ID: <54362750.8010809@iang.org>

On 9/10/2014 01:59 am, Bill Cox wrote:
> On Wed, Oct 8, 2014 at 7:00 PM, Dave Horsfall wrote:
> It's possible that I may have missed this (the list seems to have spiked lately), but how would the device present itself to the host? A serial stream of random bits (like a terminal or a keyboard), or some sort of a structure with command and control etc?
> -- Dave
> _______________________________________________
> The cryptography mailing list
> cryptography at metzdowd.com
> http://www.metzdowd.com/mailman/listinfo/cryptography
>
> No command/control. In fact, I feel a lot better not having a microcontroller on there that could transmit nasty malware when being plugged into a new system, or which could be reprogrammed to emit non-random data.

My guess is that if you don't have an easily defined interface (file? tty) then it won't work in the marketplace. In terms of the nasty malware, what would be nice would be a firewall. A device that has male & female and sits there and watches for naughty traffic. If this came with a good RN source as well, I'd reckon it would be a hit. ...

> How important is the proper USB connector vs a raw connector with no housing like the DigiSpark? Do we really feel we need to wrap this thing in metal to keep it from radiating secret bits?

Yes, otherwise it will be noisy :) You don't want it interfering with random gear. You could probably get away without it in a prototype device and encourage someone to do some testing...
> I figure if we feed it into a whitener, an attacker would have to know *every* bit to know the state of the whitener. That seems like a tall order for an attacker trying to read bits from EMI.

Oh, no :) In the crypto world we deal with bit-rated paranoia. Even one bit leaked to an attacker will earn the device the BROKEN award. iang

From mitch at niftyegg.com Thu Oct 9 07:50:55 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Thu, 9 Oct 2014 04:50:55 -0700 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: References: <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> <20141006123054.GC12902@yeono.kjorling.se> Message-ID:

On Mon, Oct 6, 2014 at 1:00 PM, Dave Horsfall wrote:
> On Mon, 6 Oct 2014, Michael Kjørling wrote:
> > ....
> you must, but:
> Security through obscurity.
> A strong lock on a paper-tissue door.
> Locking the front door and keeping the key under the mat.

And would a modern GPS map device running at the time the pilot navigated the ill-mapped channels be equivalent to shoulder surfing and reading the keys typed or dialed to unlock something? A couple portable GPS devices sub $200 would do the trick and only require a modest number of trips to validate good map data. So add key management to the list.

-- T o m M i t c h e l l

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From waywardgeek at gmail.com Thu Oct 9 09:43:34 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Thu, 9 Oct 2014 09:43:34 -0400 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: <85AF0E33-E35C-41BA-84AE-9923AB33F864@lrw.com> References: <542DD770.2050007@cleversafe.com> <542EDA54.4010904@cleversafe.com> <542EE80B.3090406@cleversafe.com> <55E71AF5-0426-4C55-81FF-42F96C8BCE02@lrw.com> <85AF0E33-E35C-41BA-84AE-9923AB33F864@lrw.com> Message-ID: On Wed, Oct 8, 2014 at 10:33 AM, Jerry Leichter wrote: > In the case you propose, there's another issue: Digest(X) == 0 if and > only if one of the constituent hashes is 0. That breaks one of the basic > requirements for a secure hash function: Given H(X), it's difficult to > find a Y != X such that H(Y) == H(X). When H(X) == 0, it's trivial to find > as many examples as you like. > > I challenge you to find x such that H(x) == 0. Assuming we have a good hash primitive, you can't. So, this is not really a problem. However, we can't use just any hash function. H has to be secure. > Note that the cost of computing your digest is at least double that of > simply doing a hash over the original data (as you run the hash on twice as > much data - not to mention the cost of all those modular multiplications). > You'd need to justify that cost. > Not true for block sizes of B where B is large. Anything over 1MiB should be fine. However, if the block size is just a KiB or so, then the modular multiplications will dominate. There needs to be a significant need for constant time update in this case. > > I also see no obvious advantage to your scheme over simply adding the > constituent hashes together (effectively mod 2^n) - a much, much cheaper > operation which doesn't have problems with 0. (You can make it even > cheaper by considering each H() value as a vector of 32- or 64-bit values > and adding them as vectors.) 
Can you describe an attack on the cheaper > approach that fails for yours? > -- Jerry > > That is much cheaper, but David Wagner broke such systems in his paper on Generalized Birthday attacks: www.di.ens.fr/~fouque/ens-rennes/gbp.eps He explores security using various operators and goes into some depth about possible attacks on multiplication modulo primes. However, his explored directions also directly apply to attacks on discrete logs. I'm afraid I know of no secure shortcut that avoids 2048-bit arithmetic. That this can be done securely at all is new information, SFAIK. Thanks for taking a crack at it. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Thu Oct 9 12:04:11 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Thu, 9 Oct 2014 09:04:11 -0700 Subject: [Cryptography] Best Internet crypto clock ? Message-ID: I think it is ludicrous to demand time be more precise then ten minutes, I mean, if you're committing a crime and you're demanding bitcoins for payment, I think it a printed out screenshot of the current block held up in a photo would be the most reasonable. A cryptoclock would be more precise than a newspaper in any case, which has a granularity of one day. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Thu Oct 9 17:01:34 2014 From: bear at sonic.net (Bear) Date: Thu, 09 Oct 2014 14:01:34 -0700 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? Message-ID: <1412888494.25670.1.camel@sonic.net> Here is an amusing/infuriating example of an otherwise pretty good ISP getting security exactly wrong: https://forums.sonic.net/viewtopic.php?f=10&t=1866 Sonic implemented and deployed DNSSEC - and put it on their shiny new servers along with an 'RBZ service' that censors supposed malware and phishing sites. 
And while they told their customers about DNSSEC, they didn't mention the 'RBZ service.'

They didn't get prior informed consent from their customers. In fact they didn't inform their customers, beyond quietly putting up a few mentions on webpages their customers normally have no reason to look at.

They didn't provide a click-through link enabling customers to get the content anyway.

And they diverted traffic to a page that does not mention who is doing the diversion, how, or why, or how to opt out.

And they aren't providing DNSSEC in any form that doesn't have this 'service' (coughATTACKcough) imposed.

Black hats immediately found a way to get sites they dislike onto the list of supposed malware and phishing sites.

Among the blocked sites: Local democratic party campaigners (first post). Financial services and markets - at a crucial time. (page 4). Software development sites (apparently some devs use the same utility network libraries used by malware devs, so the unknown-because-today's-compilation executables have code in common with known malware and aren't on the whitelist...)

I had occasionally been annoyed by the 'mousetrap page' on software dev sites, but never annoyed enough to finally eliminate all other suspects and track it down -- too much trouble, right?

But after personally taking a hit on the 'financial services' thing, I tracked this down to sonic.net -- I'd been assuming that it was some overeager plugin that had defaulted to 'ON' and I just hadn't figured out which one and how to turn it OFF. But it kept happening even with all plugins uninstalled.

It turned out to be the very same attack that I had switched to DNSSEC specifically to avoid. And it was performed by the very same ISP that I'd been relying on to protect me from it.

I have rarely been so angry.
As I understand the law, "common carriers" are protected from prosecution when crimes are committed using their services because they aren't in the business of determining what traffic moves via those services. But Sonic.net, by failing to conform to the standards of care for filtering services (no prior consent, no clickthrough link, no identification of blocking agency, no basic notification, no provision of DNSSEC service without the blockage) appears to me to have no claim to common carrier status for DNSSEC. They DID make the decision, based on content, what traffic they would carry on DNSSEC. As a result, didn't they become liable for damages from crimes committed by the abuse of that service? Bear

From bear at sonic.net Thu Oct 9 17:24:19 2014 From: bear at sonic.net (Bear) Date: Thu, 09 Oct 2014 14:24:19 -0700 Subject: [Cryptography] Creating a Parallelizeable Cryptographic Hash Function In-Reply-To: References: <542DD770.2050007@cleversafe.com> <4A7A3B6E-71F8-4A15-BF10-7529B5587EA2@lrw.com> <003a01cfe030$e9047830$bb0d6890$@huitema.net> <1412785224.28022.3.camel@sonic.net> Message-ID: <1412889859.25670.3.camel@sonic.net>

On Wed, 2014-10-08 at 20:09 -0400, David Leon Gil wrote:
> On Wed, Oct 8, 2014 at 12:20 PM, Bear wrote:
> > a) you want aliasing (accidental collisions) to be minimized.
> > b) you want the hash to be fast to compute.
> > c) you want "random" distribution of GUID's in the hash table (ie, no single part of your hash table should be *more* full than the other parts, or at least not by any margin distinguishable from statistical noise on random numbers).
> As an aside, what you're describing is a function with low discrepancy on the inputs. This is (interestingly enough) different from a function that generates values indistinguishable from uniform random numbers in the interval. (The values "anti-bunch", in some sense, compared to random numbers.)
See, e.g., > http://en.wikipedia.org/wiki/Low-discrepancy_sequence for the > fascinating details of low-discrepancy sequences and quasi-random > numbers. > Yes, that is exactly the distinction. A counter viewed through a linear congruential transformation is a perfect example of a low-discrepancy or quasirandom function. It will generate *every* possible value, *once*, before the counter rolls over and it starts repeating. As a reductio ad absurdum, if it's a sixteen bit counter and you've seen the last 65535 outputs, you know exactly what the next output is going to be -- it's going to be whatever value you haven't seen yet, and then the sequence you've already seen will repeat. If it were a good cryptographic hash, you would have, as always, absolutely no idea what the next output will be just by knowing previous outputs. Bear From alfiej at fastmail.fm Thu Oct 9 19:36:29 2014 From: alfiej at fastmail.fm (Alfie John) Date: Fri, 10 Oct 2014 00:36:29 +0100 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment Message-ID: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> After the Apple encryption announcement, we had the usual pundits bring up the Four Horsemen of the Infocalypse [1]: "Attorney General Eric Holder, the US top law enforcement official, said it is "worrisome" that tech companies are providing default encryption on consumer electronics. Locking the authorities out of being able to physically access the contents of devices puts children at risk, he said. ... Holder said he wants a backdoor to defeat encryption. He urged the tech sector "to work with us to ensure that law enforcement retains the ability, with court-authorization, to lawfully obtain information in the course of an investigation, such as catching kidnappers and sexual predators." 
After reading Keybase cofounder Chris Coyne's response to the backdoor nonsense, it got me thinking about cryptography and the Second Amendment: "A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed." As the US State Department classifies cryptography as a munition, shouldn't the use of cryptography be protected under the 2nd Amendment? If so, as the NSA continues its concerted effort to cripple encryption by providers [3] [4], shouldn't this be seen as the equivalent of the Department of Justice colluding with Smith & Wesson to manufacture guns that don't shoot straight and bullets that don't fire? Alfie [1] http://arstechnica.com/tech-policy/2014/10/us-top-cop-decries-encryption-demands-backdoors/ [2] https://keybase.io/blog/2014-10-08/the-horror-of-a-secure-golden-key [3] http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security [4] http://www.mail-archive.com/cryptography at metzdowd.com/msg12325.html -- Alfie John alfiej at fastmail.fm From demonfighter at gmail.com Thu Oct 9 20:24:44 2014 From: demonfighter at gmail.com (Steve Furlong) Date: Thu, 9 Oct 2014 20:24:44 -0400 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: On Thu, Oct 9, 2014 at 7:36 PM, Alfie John wrote: > As the US State Department classifies cryptography as a munition, > shouldn't the use of cryptography be protected under the 2nd Amendment? You're expecting consistency, logic, or even honesty from a government? Your naivete is so /cute/! -- Neca eos omnes. Deus suos agnoscet. -- Arnaud-Amaury, 1209 -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From alfiej at fastmail.fm Thu Oct 9 22:07:38 2014 From: alfiej at fastmail.fm (Alfie John) Date: Fri, 10 Oct 2014 03:07:38 +0100 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: <1412906858.1238659.177292229.6E67D70E@webmail.messagingengine.com>

On Fri, Oct 10, 2014, at 02:49 AM, Sampo Syreeni wrote:
> So, then, as it's basically a valid argument, how about taking its contraposition? "As we then already know crypto is right, and it's used by precisely the right, righteous people all round, should it not be the case those who make a claim against are simply wrong."
> Should it not in fact be, that making a case against free crypto should be taken as a prima facie case of the speaker being a fascist, against democracy, a luddite, and an all-round bad guy? Out to get immortalized as the next Hitler?

Yes, that was my entire point. Alfie -- Alfie John alfiej at fastmail.fm

From decoy at iki.fi Thu Oct 9 21:49:13 2014 From: decoy at iki.fi (Sampo Syreeni) Date: Fri, 10 Oct 2014 04:49:13 +0300 (EEST) Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID:

On 2014-10-09, Steve Furlong wrote:
>> As the US State Department classifies cryptography as a munition, shouldn't the use of cryptography be protected under the 2nd Amendment?
> You're expecting consistency, logic, or even honesty from a government? Your naivete is so /cute/!

So is yours: obviously you can *have* and *use* it, it's just that you can't *export* it to the *terrorists* and the rest of the bad people who aren't you. Perfectly consistent. Of course perfectly fucked up from the viewpoint of a foreign libertarian like me as well.
But it really is fully consistent, and it was so from the very start, right down to the basic classical liberal ideology I as well share: "there is only one correct law, it is universal, if you don't share it then you haven't Been Enlightened yet, and thus we for very good reason don't Mind you too much". "Till you join our movement of universal rationality..." So, then, as it's basically a valid argument, how about taking its contraposition? "As we then already know crypto is right, and it's used by precisely the right, righteous people all round, should it not be the case those who make a claim against are simply wrong." Should it not in fact be, that making a case against free crypto should be taken as a prima facie case of the speaker being a fascist, against democracy, a luddite, and an all-round bad guy? Out to get immortalized as the next Hitler? -- Sampo Syreeni, aka decoy - decoy at iki.fi, http://decoy.iki.fi/front +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2 From ji at tla.org Fri Oct 10 11:06:08 2014 From: ji at tla.org (John Ioannidis) Date: Fri, 10 Oct 2014 11:06:08 -0400 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: <1412888494.25670.1.camel@sonic.net> References: <1412888494.25670.1.camel@sonic.net> Message-ID: On Thu, Oct 9, 2014 at 5:01 PM, Bear wrote: > Here is an amusing/infuriating example of an otherwise pretty good > ISP getting security exactly wrong: > > https://forums.sonic.net/viewtopic.php?f=10&t=1866 > > Sonic implemented and deployed DNSSEC - and put it on their shiny > new servers along with an 'RBZ service' that censors supposed malware > and phishing sites. And while they told their customers about > DNSSEC, they didn't mention the 'RBZ service.' > > They didn't get prior informed consent from their customers.
In fact > they didn't inform their customers, beyond quietly putting up a few > mentions on webpages their customers normally have no reason to look > at. > > They didn't provide a click-through link enabling customers to get the > content anyway. > > And they diverted traffic to a page that does not mention who is doing > the diversion, how, or why, or how to opt out. > > And they aren't providing DNSSEC in any form that doesn't have this > 'service' (coughATTACKcough) imposed. > > Black hats immediately found a way to get sites they dislike onto > the list of supposed malware and phishing sites. > > Among the blocked sites: > Local democratic party campaigners (first post). > > Financial services and markets - at a crucial time. (page 4). > > Software development sites (apparently some devs use the same > utility network libraries used by malware devs, so the > unknown-because-todays-compilation executables have code > in common with known malware and aren't on the whitelist...) > > I had occasionally been annoyed by the 'mousetrap page' on software > dev sites, but never annoyed enough to finally eliminate all other > suspects and track it down -- too much trouble, right? > > But after personally taking a hit on the 'financial services' thing, > I tracked this down to sonic.net -- I'd been assuming that it was > some overeager plugin that had defaulted to 'ON' and I just hadn't > figured out which one and how to turn it OFF. But it kept happening > even with all plugins uninstalled. > > It turned out to be the very same attack that I had switched to > DNSSEC specifically to avoid. And it was performed by the very > same ISP that I'd been relying on to protect me from it. > > I have rarely been so angry. > > As I understand the law, "common carriers" are protected from > prosecution when crimes are committed using their services because > they aren't in the business of determining what traffic moves via > those services.
> ISPs are most certainly not "common carriers" in the USA, and they don't want to be, so that they can do preferential treatment of traffic. > But Sonic.net, by failing to conform to the standards of care for > filtering services (no prior consent, no clickthrough link, no > identification of blocking agency, no basic notification, no > provision of DNSSEC service without the blockage) appears to me > to have no claim to common carrier status for DNSSEC. They DID > make the decision, based on content, what traffic they would > carry on DNSSEC. As a result, didn't they become liable for > damages from crimes committed by the abuse of that service? > > > Bear IANAL, but it would be interesting to see if this violates the CFAA, and whether they can be sued under that. /ji From johnl at iecc.com Fri Oct 10 11:44:50 2014 From: johnl at iecc.com (John Levine) Date: 10 Oct 2014 15:44:50 -0000 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: <1412888494.25670.1.camel@sonic.net> Message-ID: <20141010154450.2235.qmail@ary.lan> > They DID >make the decision, based on content, what traffic they would >carry on DNSSEC. As a result, didn't they become liable for >damages from crimes committed by the abuse of that service? Nerds playing junior lawyer rarely turn out well, but what the heck. A) Like all Internet providers, they have a service agreement for their users, and you waived your remedies against them by being a customer: https://wiki.sonic.net/wiki/Category:Policies B) This is clearly a screwup on Sonic's part, not malicious. It's very hard to persuade a court that a mistake was so egregious that it's tantamount to malice. For the specific example of the CFAA, that outlaws unauthorized access to computers. Sonic is obviously authorized to access their own equipment. Sending you data you don't like is not access.
R's, John From drc at virtualized.org Fri Oct 10 13:51:19 2014 From: drc at virtualized.org (David Conrad) Date: Fri, 10 Oct 2014 10:51:19 -0700 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: <1412888494.25670.1.camel@sonic.net> References: <1412888494.25670.1.camel@sonic.net> Message-ID: <65FEF9D3-AA0C-47BC-9456-72AB44653FA8@virtualized.org> Hi, On Oct 9, 2014, at 2:01 PM, Bear wrote: > Sonic implemented and deployed DNSSEC - and put it on their shiny > new servers along with an 'RBZ service' that censors supposed malware > and phishing sites. And while they told their customers about > DNSSEC, they didn't mention the 'RBZ service.' > > They didn't get prior informed consent from their customers. In fact > they didn't inform their customers, beyond quietly putting up a few > mentions on webpages their customers normally have no reason to look > at. I'm not clear what this has to do with DNSSEC, other than it was implemented at the same time as Sonic's 'RBZ' service (by which I suspect you mean RPZ, which is BIND's "Response Policy Zone" -- a technology ISC implemented that facilitates the rewriting of responses according to (recursive operator's) policy). > It turned out to be the very same attack that I had switched to > DNSSEC specifically to avoid. And it was performed by the very > same ISP that I'd been relying on to protect me from it. If you are using your ISP's resolver, you are explicitly granting them a vast amount of trust: they (or whoever might influence them) can collect vast amounts of meta data and can have essentially complete control over any connection you might try to make. I sometimes get the impression that people don't fully understand the level of trust we're talking about here. If you need a refresher, see http://www.slideshare.net/dakami/dmk-bo2-k8, starting at slide 45. It really isn't that hard to run your own DNSSEC-validating resolver. 
BIND or Unbound (http://unbound.net) aren't that hard to set up. > But Sonic.net ... have no claim to common carrier status for DNSSEC. I don't believe ISPs in general have common carrier status (at least yet, see discussions about net neutrality). Regards, -drc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hbaker1 at pipeline.com Fri Oct 10 15:22:40 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 10 Oct 2014 12:22:40 -0700 Subject: [Cryptography] Spam one-time pads Message-ID: I've been paying careful attention to my spam email for a number of months, and I've noticed the following pattern with one particular type of spam. The spam emails always arrive in pairs, during European office hours (i.e., only M-F), both with the *same reply name*, but different domain names, and most often ending in ".co". The reply name is something innocuous, such as "admin", "reply", "donotreply", etc. -- e.g., "admin at bestvaluesintown.co" & "admin at marketinggenius.co" (these are names I just made up, but fit the pattern). The content of the email looks like it might have been 100% copied from some more-or-less legitimate advertising email, but the paired items are always completely different ("hair club for men", "diet" something-or-other, etc.). The domain names are never used again (I'm keeping track), which leads me to believe that they're used for one day only, and then sold on to someone else. The domain names sound semi-legitimate, except for using .co instead of .com. I suspect that the spammer in this case is checking for email continuity, rather than trying to sell anything, since the content of the email seems to have nothing to do with the sender. It's entirely possible that the name registrar is Airbnb'ing these domains to spammers to pick up a few extra bucks. 
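The paired-sender pattern described above is mechanical enough to detect automatically: group arriving spam by reply-name local-part and day, and flag any group that spans two or more distinct domains. A minimal Python sketch of that heuristic follows; it is purely illustrative, using the made-up example addresses from the post (the dict field names are my own assumptions, not any real mail-parsing API):

```python
from collections import defaultdict

def find_paired_spam(messages):
    """Group messages by (reply local-part, arrival date) and flag groups
    that reuse the same local-part across two or more distinct domains --
    the paired, use-once-domain pattern described above. `messages` is a
    list of dicts with 'sender' (e.g. 'admin@bestvaluesintown.co') and
    'date' keys."""
    groups = defaultdict(set)
    for msg in messages:
        local, _, domain = msg["sender"].partition("@")
        groups[(local, msg["date"])].add(domain)
    # A "pair" is the same innocuous local-part on different throwaway
    # domains arriving the same day.
    return {key: domains for key, domains in groups.items() if len(domains) >= 2}

# Illustrative data built from the made-up addresses in the post.
spam = [
    {"sender": "admin@bestvaluesintown.co", "date": "2014-10-10"},
    {"sender": "admin@marketinggenius.co", "date": "2014-10-10"},
    {"sender": "reply@unrelated.com", "date": "2014-10-10"},
]
print(find_paired_spam(spam))
# The two 'admin@...co' senders are flagged; the lone sender is not.
```

A real filter would also track the flagged domains over following days to confirm the never-reused property described in the post.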
The problem is that if everyone like me is blacklisting all of these domains (including .co itself), then they're going to be useless forever more for any legitimate purpose. Perhaps someone else here has an idea? From dave at horsfall.org Fri Oct 10 16:06:21 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 11 Oct 2014 07:06:21 +1100 (EST) Subject: [Cryptography] Spam one-time pads In-Reply-To: References: Message-ID: On Fri, 10 Oct 2014, Henry Baker wrote: > I've been paying careful attention to my spam email for a number of > months, and I've noticed the following pattern with one particular type > of spam. You might want to ask this over on the anti-spam list SDLU (Spammers Don't Like Us) at https://spammers.dontlike.us/mailman/listinfo/list . The lads and lassies there are extremely knowledgeable and helpful. -- Dave From grarpamp at gmail.com Fri Oct 10 16:53:43 2014 From: grarpamp at gmail.com (grarpamp) Date: Fri, 10 Oct 2014 16:53:43 -0400 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: On Thu, Oct 9, 2014 at 7:36 PM, Alfie John wrote: > After the Apple encryption announcement, we had the usual pundits bring > up the Four Horsemen of the Infocalypse [1]: > > "Attorney General Eric Holder, the US top law enforcement official, > said it is "worrisome" that tech companies are providing default > encryption on consumer electronics. Locking the authorities out of > being able to physically access the contents of devices puts children > at risk, he said. > > ... > > Holder said he wants a backdoor to defeat encryption. He urged the > tech sector "to work with us to ensure that law enforcement retains > the ability, with court-authorization, to lawfully obtain information > in the course of an investigation, such as catching kidnappers and > sexual predators." 
> > After reading Keybase cofounder Chris Coyne's response to the backdoor > nonsense, it got me thinking about cryptography and the Second > Amendment: > > "A well regulated militia being necessary to the security of a free > state, the right of the people to keep and bear arms shall not be > infringed." > > As the US State Department classifies cryptography as a munition, > shouldn't the use of cryptography be protected under the 2nd Amendment? Though it is perhaps helpful for them to make such classification here: a) that's in regards largely to exports, not internal use b) the phrase is 'arms shall not', not 'things on our list shall not', so any such classification list is irrelevant. Ignoring the NBC / large arms debate, crypto is clearly small arms in this context and thus shall not be infringed. Crypto is also clearly necessary to the security of a free people, and thus of/to the state being of the people. And shy of state failure requiring its use in support of revolt, crypto is clearly a defensive arm primarily against encroachment. In current example, mass surveillance, lack of individualized warrant, abuse of process, abuse of implied right to privacy, of the 1st, 4th and 5th, etc. It would certainly be an interesting use/case/argument to explore and test. From william.muriithi at gmail.com Fri Oct 10 17:47:42 2014 From: william.muriithi at gmail.com (William Muriithi) Date: Fri, 10 Oct 2014 17:47:42 -0400 Subject: [Cryptography] Spam one-time pads In-Reply-To: References: Message-ID: <20141010214742.6037648.61137.5499@gmail.com> Dave, > I've been paying careful attention to my spam email for a number of > months, and I've noticed the following pattern with one particular type > of spam. You might want to ask this over on the anti-spam list SDLU (Spammers Don't Like Us) at https://spammers.dontlike.us/mailman/listinfo/list . Nice, looking for a way to improve spamassassin as it's allowing a good number of spam through? I think they are down though?
Or did they fold? You subscribed? Is it active? William The lads and lassies there are extremely knowledgeable and helpful. -- Dave _______________________________________________ The cryptography mailing list cryptography at metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography From dave at horsfall.org Fri Oct 10 18:42:40 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 11 Oct 2014 09:42:40 +1100 (EST) Subject: [Cryptography] Spam one-time pads In-Reply-To: <20141010214742.6037648.61137.5499@gmail.com> References: <20141010214742.6037648.61137.5499@gmail.com> Message-ID: On Fri, 10 Oct 2014, William Muriithi wrote: > I think they are down though? Or did they fold? You subscribed? Is it > active? Very much active; I made a couple of posts there this morning. You might be thinking of its previous incarnations. You have to be a member to post (of course), but it's not moderated. -- Dave From zenadsl6186 at zen.co.uk Fri Oct 10 19:00:22 2014 From: zenadsl6186 at zen.co.uk (Peter Fairbrother) Date: Sat, 11 Oct 2014 00:00:22 +0100 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: <65FEF9D3-AA0C-47BC-9456-72AB44653FA8@virtualized.org> References: <1412888494.25670.1.camel@sonic.net> <65FEF9D3-AA0C-47BC-9456-72AB44653FA8@virtualized.org> Message-ID: <54386506.3080203@zen.co.uk> On 10/10/14 18:51, David Conrad wrote: > Hi, > > On Oct 9, 2014, at 2:01 PM, Bear wrote: [..] >> But Sonic.net ... have no claim to common carrier status for DNSSEC. > > I don't believe ISPs in general have common carrier status (at least yet, see discussions about net neutrality). Being a Brit I know very little about US law, but in UK and EU law common carrier status isn't something that an ISP either does or does not have.
If a person (eg an ISP) is acting, in a particular case, only as a carrier of information for other people's data, then they may have common carrier status in that particular case; which is a defence against many civil actions and criminal charges, ranging from treason to copyright violation to libel or slander. It is like they are saying that they are not responsible for the content of what they transmit, as they just carried it - just like the post office is not responsible for threats against the president or fruits of treason which are carried in the mail. Roughly implicit in that is the idea that the person did not know what the content was, or that it was unlawful - but only roughly, not necessarily. Perhaps more implicit, but again not always necessarily so, is the idea that they must not discriminate, ie they must carry comms from anyone to anyone (as long as they get paid). Persons may have to comply with other legislation in order to retain their common carrier status, and thus their immunity from civil and criminal liability - for instance, YouTube must respond in timely fashion to DMCA takedown requests. In most cases, ISPs do have common carrier status, and they value it highly. In US statutory law common carrier status gives an ISP immunity to liability for copyright violations in third party content (DMCA), and against action for libel or slander in third party content (Communications Decency Act). The other immunities I mentioned are a mix of statutory and common law. In the EU at least ISPs can also, for example, do spam filtering, and that does not affect their common carrier status, if it is done in order to facilitate the transmission of emails - they can reasonably say the email system would get completely clogged up if they didn't. However when they start inspecting or censoring traffic for reasons other than facilitating the transmission of communications they may lose their common carrier status.
This would leave them open to some civil suits and criminal prosecutions. In the UK/EU it would also be illegal interception if they looked at content, but not in the US. Their T+C's are not usually immediately relevant to whether a person who passes on a communication has common carrier status. (Net neutrality is kinda orthogonal to common carrier status - they don't really have that much to do with each other. Even if an ISP does deep packet inspection in order to decide whether to send a packet by the fast or the slow routes, that needn't necessarily affect its common carrier status. As long as the slow stuff gets there without inordinate delay, if the fast stuff gets there quicker then so what? Common carrier status goes back a very long way; eg a shipping agent in India in the 1850's might offer a clipper service which would take ten weeks, or a barque service which would take twenty - but as long as he didn't discriminate based on factors other than price he would be a common carrier. The censoring of communications by ISPs based on eg IP address or other communications metadata, rather than based on immediate inspection of content, is a slightly different, and thorny, issue. For instance if they bar access to hard-pron.com, for eg child-protection reasons, that is not interception of content (which would be illegal in the EU) - but it may cause them to lose common carrier status, not just for those comms, but for all comms. The law on all this is a bit unclear.) As for Sonic.net and DNSSEC, no they do not have common carrier status in that respect. The DNSSEC communications are (presumably) between you and Sonic who run the DNSSEC server, so common carrier status would be impossible, and not relevant to the issue of whether you can sue them. Sadly, as I know very little US law, I have no idea whether you can sue them or not.
-- Peter Fairbrother From hbaker1 at pipeline.com Fri Oct 10 21:01:58 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 10 Oct 2014 18:01:58 -0700 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: At 01:53 PM 10/10/2014, grarpamp wrote: >On Thu, Oct 9, 2014 at 7:36 PM, Alfie John wrote: >> After reading Keybase cofounder Chris Coyne's response to the backdoor >> nonsense, it got me thinking about cryptography and the Second Amendment: >> >> "A well regulated militia being necessary to the security of a free >> state, the right of the people to keep and bear arms shall not be infringed." >> >> As the US State Department classifies cryptography as a munition, >> shouldn't the use of cryptography be protected under the 2nd Amendment? Traditionally, armor is considered "arms", so purely defensive arms are obviously covered by the Second Amendment. I've never heard of anyone dying of good armor (except perhaps dying of heat stroke), but millions have died from bad armor. I've never heard of anyone dying from good crypto, but millions have died from bad crypto. BTW, Jacob Appelbaum has referenced law professor Glenn Harlan Reynolds's idea that the stationing of spyware within citizens' computers & routers is a violation of the *Third* Amendment regarding troop quartering: http://www.usatoday.com/story/opinion/2013/07/22/third-amendment-nsa-spying-column/2573225/ Should 3rd Amendment prevent government spying? Glenn Harlan Reynolds 11:24 a.m. EDT July 22, 2013 Technological advancements could call for an update to the amendment that protects us in our homes. * Troop quartering also violated the notion of the home as a castle. * Now we have electronic troops in the form of software, gadgets, and sensors. * It seems clear that our government feels entirely comfortable violating people's right of privacy.
So a couple of weeks ago, I wrote about a Third Amendment case from Nevada in which a family's home was literally seized and occupied by police seeking a vantage point over their neighbor's home. That case falls pretty much within the literal language of the Constitution's Third Amendment, which provides: "No soldier shall, in time of peace be quartered in any house, without the consent of the owner, nor in time of war, but in a manner to be prescribed by law." But that led to some further thoughts. When the Framers drafted the Third Amendment, they had a specific evil in mind: The quartering of troops "upon" a population by the English crown. As the term suggests, this wasn't just about getting a cheap place for soldiers to stay. Forcing citizens to put up troops in their homes was expensive and the troops -- then drawn from the jails and gutters, for the most part -- were likely to rob, rape and assault members of the household at the least provocation. Troop quartering was a way to punish a restive region that had been resisting the government. But beyond that, troop quartering also violated one of the classic "rights of Englishmen," the notion of the home as a castle. As the U.S. Court of Appeals for the Second Circuit said in one of the few Third Amendment cases ever to be heard, the Amendment was designed to assure a fundamental right of privacy. If you think of it that way, what things does the government do that violate that privacy right today? If the government places a surveillance device in your home, is that sufficiently like quartering troops there to trigger Third Amendment scrutiny? What if it installs spyware on your computer or your cable modem? What if it requires "smart meters" that allow moment-to-moment monitoring of your thermostat settings or toilet flushes? The famous birth-control case of Griswold v. 
Connecticut invoked the Third Amendment, along with several others, with the Court asking, "Would we allow the police to search the sacred precincts of marital bedrooms for telltale signs of the use of contraceptives? The very idea is repulsive to the notions of privacy surrounding the marriage relationship." If physically searching the bedroom is "repulsive," what about activating the camera on someone's laptop for remote viewing? Or monitoring the "Skype sex" sessions of spouses who are apart? How could that possibly be less repulsive? These specific concerns weren't what the Framers had in mind. In their day, to spy on a family in its own home, you'd have to put a soldier there. But now we have electronic troops in the form of software, gadgets and sensors. Maybe the law needs to take account of this. We have updated our interpretations of the First Amendment to go beyond hand-operated letterpresses, the Second Amendment to go beyond flintlocks, and the rest of the Bill of Rights to account for technological change of all sorts. Why not the Third Amendment, too? In the wake of the various government-spying scandals that have broken this summer, it seems clear that our government feels entirely comfortable violating people's fundamental right of privacy, whether in their homes or out of it. Big Brother wants the whole haystack of your data, in case it should later decide to look for a needle in there somewhere. Should we invoke the Third Amendment to ensure that your home, at least, is safe? Glenn Harlan Reynolds is professor of law at the University of Tennessee. He blogs at InstaPundit.com. 
From dan at geer.org Fri Oct 10 22:11:57 2014 From: dan at geer.org (dan at geer.org) Date: Fri, 10 Oct 2014 22:11:57 -0400 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate Message-ID: <20141011021157.DD59C2281E9@palinka.tinho.net> [ public case study now in progress ] HP accidentally signs malware, will revoke certificate (Ars:) http://arstechnica.com/security/2014/10/hp-accidentally-signed-malware-will-revoke-certificate/ Regardless of the cause, the revocation of the affected certificate will require HP to re-issue a large number of software packages with a new digital signature. While the certificate drop may not affect systems with the software already installed, users will be alerted to a bad certificate if they attempt to re-install software from original media. The full impact of the certificate revocation won't be known until after Verisign revokes the certificate on October 21, Wahlin said. From Dane at sonic.net Fri Oct 10 23:34:27 2014 From: Dane at sonic.net (Dane Jasper) Date: Sat, 11 Oct 2014 03:34:27 +0000 (UTC) Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? References: <1412888494.25670.1.camel@sonic.net> Message-ID: I have posted a reply on this topic, here: https://forums.sonic.net/viewtopic.php?f=10&t=1866&p=14566#p14563 -Dane Jasper Sonic.net From pgut001 at cs.auckland.ac.nz Sat Oct 11 04:24:54 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Sat, 11 Oct 2014 21:24:54 +1300 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: <1412888494.25670.1.camel@sonic.net> Message-ID: Bear writes: >Sonic implemented and deployed DNSSEC - and put it on their shiny new servers >along with an 'RBZ service' that censors supposed malware and phishing sites. >And while they told their customers about DNSSEC, they didn't mention the >'RBZ service.'
So just to make sure I'm getting this right, Sonic are sending out DNSSEC-authenticated but invalid/spoofed/however you want to label them DNS responses? As you say, the very thing that DNSSEC was designed to prevent? Peter. From l at odewijk.nl Sat Oct 11 07:05:18 2014 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Sat, 11 Oct 2014 13:05:18 +0200 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: It sure should be seen as a second amendment thing. Although, so should drones and heavy weapons. A revolution is impossible for the US citizens, so there's not much point. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Sat Oct 11 10:15:02 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Sat, 11 Oct 2014 07:15:02 -0700 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: <20141011021157.DD59C2281E9@palinka.tinho.net> References: <20141011021157.DD59C2281E9@palinka.tinho.net> Message-ID: At 07:11 PM 10/10/2014, dan at geer.org wrote: >[ public case study now in progress ] > >HP accidentally signs malware, will revoke certificate > >(Ars:) http://arstechnica.com/security/2014/10/hp-accidentally-signed-malware-will-revoke-certificate/ And we know this HP malware-signing incident is an "accident", because... ??? https://firstlook.org/theintercept/2014/10/10/core-secrets/ 'But the briefing document suggests *another category of employees* -- *ones who are secretly working for the NSA* without anyone else being aware.
This kind of double game, in which the NSA works with and against its corporate partners, already characterizes some of the agency's work, in which information or concessions that it desires are surreptitiously acquired if corporations will not voluntarily comply. The reference to "under cover" agents jumped out at two security experts who reviewed the NSA documents for The Intercept.' ' "That one bullet point, it's really strange," said Matthew Green, a cryptographer at Johns Hopkins University. "I don't know how to interpret it." He added that the cryptography community in America would be surprised and upset if it were the case that *"people are inside [an American] company covertly communicating with NSA and they are not known to the company or to their fellow employees."* ' 'The ACLU's Soghoian said technology executives are already deeply concerned about the prospect of clandestine agents on the payroll to gain access to highly sensitive data, including encryption keys, that could make the NSA's work "a lot easier." ' ' "As more and more communications become encrypted, the attraction for intelligence agencies of stealing an encryption key becomes irresistible," he said. "It's such a juicy target." ' [Or simply sign malware??] From drc at virtualized.org Sat Oct 11 10:37:49 2014 From: drc at virtualized.org (David Conrad) Date: Sat, 11 Oct 2014 07:37:49 -0700 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: References: Message-ID: <052ED42D-B2B9-4A17-919D-762741A3D5BB@virtualized.org> Peter, On Oct 11, 2014, at 1:24 AM, Peter Gutmann wrote: > So just to make sure I'm getting this right, Sonic are sending out DNSSEC- > authenticated but invalid/spoofed/however you want to label them DNS > responses? Not DNSSEC-authenticated. > As you say, the very thing that DNSSEC was designed to prevent? Not really. Data between the resolver and the client application is not protected by DNSSEC.
And, of course, the resolver can do anything it wants to the data it returns to the client application. DNSSEC can best be seen as protecting the integrity of the data that is entered into the resolver's cache. The best (IMHO) way to protect that data is to run your own validating resolver locally (on the same machine as the client application). Regards, -drc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From phill at hallambaker.com Sat Oct 11 10:43:12 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 11 Oct 2014 10:43:12 -0400 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: References: <1412888494.25670.1.camel@sonic.net> Message-ID: On Sat, Oct 11, 2014 at 4:24 AM, Peter Gutmann wrote: > Bear writes: > >>Sonic implemented and deployed DNSSEC - and put it on their shiny new servers >>along with an 'RBZ service' that censors supposed malware and phishing sites. >>And while they told their customers about DNSSEC, they didn't mention the >>'RBZ service.' > > So just to make sure I'm getting this right, Sonic are sending out DNSSEC- > authenticated but invalid/spoofed/however you want to label them DNS > responses? As you say, the very thing that DNSSEC was designed to prevent? It isn't clear but what they appear to be doing is turning on DNSSEC validation in the resolver, then editing the results. This is something I predicted long ago and the problem is that the DNS architecture as received is stupid. DNS is a trusted service so you should only use a trustworthy DNS service. DNSSEC only ensures that you do not receive bogus responses. It does not ensure that you receive a response. The original idea of DNSSEC was to use the DNS as a distribution point for keys for use in IPSEC and SSL like protocols. 
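Conrad's point that the resolver-to-client hop is unprotected is visible in the DNS wire format itself: the AD ("Authenticated Data") bit is just one flag in the response header, set by whichever resolver answered, with nothing on that last hop authenticating the claim. A minimal stdlib Python sketch of reading that bit follows (synthetic header bytes for illustration; header bit positions as in RFC 1035/4035, not any real captured traffic):

```python
import struct

AD_BIT = 0x0020  # bit 5 of the 16-bit DNS header flags word

def ad_flag_set(response: bytes) -> bool:
    """Return True if the AD (Authenticated Data) bit is set in a raw
    DNS response. This is only the resolver's unauthenticated *claim*
    that it validated the answer; a spoofing or rewriting party on the
    resolver-to-client hop can set or clear it freely."""
    if len(response) < 12:
        raise ValueError("truncated DNS header")
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & AD_BIT)

# Two synthetic 12-byte headers: QR|RD|RA set, one with AD, one without.
validated = struct.pack("!HHHHHH", 0x1234, 0x8000 | 0x0100 | 0x0080 | AD_BIT, 1, 1, 0, 0)
unvalidated = struct.pack("!HHHHHH", 0x1234, 0x8000 | 0x0100 | 0x0080, 1, 1, 0, 0)
print(ad_flag_set(validated), ad_flag_set(unvalidated))  # True False
```

This is why running the validator locally matters: only then is the party setting the AD bit the same party the application already has to trust.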
Then it was co-opted as a mechanism for making the resolver untrusted, which was stupid. It's the tin-foil hat version of crypto-autarky, where you use crypto to eliminate reliance on any party at all, except of course for the ones you don't notice, like ICANN (a US QUANGO) or the resolver. Yes, the DNS resolver can MITM you. Which is why the communications between the resolver and the client must be encrypted and authenticated, so that you can be sure that you get the DNS service from the service you chose and not a service that your ISP chose. I have this machine set up to connect to the Google public DNS. But a few weeks ago I was seeing Verizon sitefinder inserts. The bastards had MITMed the NXDOMAIN responses. So I wrote this spec and have some running code https://datatracker.ietf.org/doc/draft-hallambaker-privatedns/ The first step is to choose your DNS service or set up one of your own. The client binds to the service using a TLS secured key exchange that spits out a Kerberos-ticket-type object. And then DNS transactions with the service are encrypted and authenticated in both directions using the ticket. The design is stateless on the server side and should not impact performance at all for modern machines. The crypto overhead is negligible. Now for the mind-bending part: you probably don't want the authoritative DNS responses unless they are DANE records or otherwise contain a key. If it is an A record or a AAAA record you might well want the resolver to have the ability to modify it. One reason is to block access to known bots. Just because the Russian Business Network has bought a domain does not mean that I want my machine to resolve it. And I have lists of a million bots. I don't want my machines connecting to them either. If the policy is chosen by the end user then this is anti-virus for the Internet. If it is imposed by the carrier or government it is censorship. One particularly fun approach is using it for IPv6 to IPv4 gateways.
This allows a machine that is pure IPv6 with no IPv4 whatsoever to survive on the transitional Internet without a performance drag. When the IPv6 client attempts to connect to a resource that is IPv4 only, the resolver returns an IPv6 address at the best available IPv6-to-IPv4 gateway. This might be a local gateway or a gateway closer to the desired end point. From cryptography at dukhovni.org Sat Oct 11 12:26:55 2014 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Sat, 11 Oct 2014 16:26:55 +0000 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable? In-Reply-To: References: <1412888494.25670.1.camel@sonic.net> Message-ID: <20141011162655.GL13254@mournblade.imrryr.org> On Sat, Oct 11, 2014 at 09:24:54PM +1300, Peter Gutmann wrote: > Bear writes: > > >Sonic implemented and deployed DNSSEC - and put it on their shiny new servers > >along with an 'RBZ service' that censors supposed malware and phishing sites. > >And while they told their customers about DNSSEC, they didn't mention the > >'RBZ service.' > > So just to make sure I'm getting this right, Sonic are sending out DNSSEC- > authenticated but invalid/spoofed/however you want to label them DNS > responses? As you say, the very thing that DNSSEC was designed to prevent? No. Their recursive resolver validates data from upstream sources. It then serves some synthetic data of its own, which is seen as bogus by downstream validating resolvers. A similar thing was done briefly to me by Time Warner Cable. They "quarantined" my cable modem by making it serve the same bogus A record for all domains with a 5s TTL. The reason was apparently that they wanted me to "request" a cable modem upgrade. I did not notice for some time, because my OpenWrt router runs its own validating resolver and does not use the ISP's recursive caches. When I power-cycled the router a few days back, I was in for a protracted troubleshooting session.
The router could not sync its clock, because none of the openwrt NTP pool servers would resolve. With an incorrect clock the signature on the root zone looked wrong, so the local DNS resolver failed to work. Eventually I figured out what happened, and "volunteered" for the upgrade. -- Viktor. From iang at iang.org Sat Oct 11 11:18:45 2014 From: iang at iang.org (ianG) Date: Sat, 11 Oct 2014 16:18:45 +0100 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: <20141011021157.DD59C2281E9@palinka.tinho.net> References: <20141011021157.DD59C2281E9@palinka.tinho.net> Message-ID: <54394A55.90601@iang.org> On 11/10/2014 03:11 am, dan at geer.org wrote: > [ public case study now in progress ] indeed, and we need more of them ;-) > HP accidentally signs malware, will revoke certificate > > (Ars:) http://arstechnica.com/security/2014/10/hp-accidentally-signed-malware-will-revoke-certificate/ > > Regardless of the cause, the revocation of the affected certificate > will require HP to re-issue a large number of software packages with a > new digital signature. While the certificate drop may not affect > systems with the software already installed, users will be alerted to > a bad certificate if they attempt to re-install software from original > media. The full impact of the certificate revocation won't be known > until after Verisign revokes the certificate on October 21, Wahlin > said. That's um amazing. So a 4 year old expired cert is still a critical piece of infrastructure, and they are still going to revoke it. Rather finishes the argument of whether revocation means anything different than expiry... More on Krebs. http://krebsonsecurity.com/2014/10/signed-malware-is-expensive-oops-for-hp/ Revocation as a system only works if it is reasonable to roll out a new cert, and this works as long as the scale is small. It looks like code-signing can escape that assumption, making one userland cert as powerful as .. a root cert!
Revocation was always a safety blanket, cute for users but not for serious applications, so this must be causing some headaches in the risk department. iang ps; HP's comment that they weren't breached is laughable. From brk7bx at virginia.edu Sat Oct 11 12:40:24 2014 From: brk7bx at virginia.edu (Benjamin Kreuter) Date: Sat, 11 Oct 2014 12:40:24 -0400 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: <1413045624.7378.97.camel@demonking> On Fri, 2014-10-10 at 00:36 +0100, Alfie John wrote: > As the US State Department classifies cryptography as a munition, > shouldn't the use of cryptography be protected under the 2nd Amendment? 1. The second amendment is not without limits. You cannot possess a machine gun without a license, for example. The second amendment is not a free pass to possess or distribute arms. 2. The classification is only relevant for exporting a product from the USA. Nothing stops you from possessing or distributing cryptography within the US. Really though, that classification is an anachronism that predates PCs and the Internet. Instead of invoking it (which is a kind of endorsement), we should be trying to get rid of it entirely. We need to make the case that cryptography is not some kind of military device, but a necessity in a computerized society as a low-cost safeguard against various abuses and crimes. Calling cryptography "munitions" is as absurd as calling combination locks "munitions," and that point needs to be driven home. > If so, as the NSA continues its concerted effort to cripple encryption > by providers [3] [4], shouldn't this be seen as the equivalent of the > Department of Justice colluding with Smith & Wesson to manufacture guns > that don't shoot straight and bullets that don't fire? 
What makes you think that laws matter when it comes to the NSA? There have been no consequences for the NSA's violations of the law. They openly ignored a court order, and nothing happened. Their leadership lied to Congress, and nothing happened. They have conspired with federal, state, and even local police forces and prosecutors to break the law, and nothing happened. Lawsuits are shut down in the name of secrecy. We are past the point of legal arguments. We should think of the NSA as we would think of the Chinese government: big, scary, actively working to subvert computer security, and beyond the reach of the law. -- Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From leichter at lrw.com Sat Oct 11 15:57:54 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sat, 11 Oct 2014 15:57:54 -0400 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: <54394A55.90601@iang.org> References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> Message-ID: On Oct 11, 2014, at 11:18 AM, ianG wrote: > That's um amazing. So a 4 year old expired cert is still a critical > piece of infrastructure, and they are still going to revoke it. Rather > finishes the argument of whether revocation means anything different > than expiry... More on Krebs. > > http://krebsonsecurity.com/2014/10/signed-malware-is-expensive-oops-for-hp/ > > Revocation as a system only works if it is reasonable to roll out a new > cert, and this works as long as the scale is small. It looks like > code-signing can escape that assumption, making one userland cert as > powerful as .. a root cert! Yes ... but. Part of the problem here is that revocation was never meant for, and is clearly inappropriate for, this application.
If I sign a software update using an "unleaked" key, that signature, on that update, is logically valid forever. The fact that the key is later revoked changes nothing. And, in fact, *allowing* "past facts" to change produces high costs. Patches - or, really, distributions of software - can have an extremely long lifetime. Going back and re-creating them with a new key is a big undertaking. Worse, *the big advantage of a signature is that it eliminates dependence on the distribution channel*. If things work as they are supposed to, I can safely download a properly signed copy of an update from Malware-R-US and be quite sure that the bits I got really came from the signer. But let's consider the failure modes. Suppose a signing key is leaked. You obviously can no longer trust anything created after the leak - but stuff signed *before* the leak is still fine. But you need to determine *when* the signature was done - which brings us back to the recent thread on secure timestamps. If (a) a software distribution is signed; (b) that same software distribution is time-stamped as "created before time T" in a way that can be checked independently of the signature (i.e., the fact that it says "Made in 2005" inside the signed envelope means nothing); (c) a revocation indicates the last known "secure" date; then you get the most usable possible system: Artifacts signed before the key was leaked are still verifiably safe; artifacts signed after the key was leaked will be rejected. (You can argue about how someone can definitely give a "good up to" date, but then any use of an unrevoked key is implicitly an assertion that it hasn't yet leaked....) In HP's case, we have a completely different issue: The key, according to HP, was never leaked. (Their assertion that they were never hacked *is* meaningful, because this is what it's asserting.) So in fact *all signatures with that key are valid, and will remain valid indefinitely*.
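The (a)-(c) decision rule above can be sketched in a few lines. This is an editor's sketch with hypothetical names; a real system would use actual signature verification and independent RFC 3161-style timestamps rather than the stand-in booleans and dates used here:

```python
from dataclasses import dataclass

@dataclass
class Revocation:
    key_id: str
    last_secure_date: int  # (c): last date the key is known to have been unleaked

def still_trustworthy(key_id: str, signature_ok: bool, timestamped_before: int,
                      revocations: dict[str, Revocation]) -> bool:
    """Accept an artifact if its signature verifies (a) and an independent
    timestamp (b) proves it was made before the key's last known secure
    date (c). Per the thread, using an unrevoked key is an implicit
    assertion that it has not yet leaked."""
    if not signature_ok:
        return False
    rev = revocations.get(key_id)
    if rev is None:
        return True  # key never revoked: trusted by default
    return timestamped_before <= rev.last_secure_date

# Hypothetical example: key revoked with a last-secure date of 2010-01-01.
revs = {"hp-2010": Revocation("hp-2010", last_secure_date=20100101)}
assert still_trustworthy("hp-2010", True, 20091201, revs)      # signed pre-leak: safe
assert not still_trustworthy("hp-2010", True, 20100301, revs)  # signed post-leak: rejected
```

The key property is the last branch: revocation no longer destroys every past signature, only those the timestamp cannot place before the leak.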
The problem isn't with the signing; it's with *one particular piece of software that was signed*. Revocation is *not* the "right" solution here, as there's nothing at all wrong with the key or any signatures. The *right* solution is something like what Microsoft does to blacklist particular pieces of software. If HP had something like this, they could blacklist the malware without changing any keys or re-issuing any patches. The problem of the validity of signed material has been discussed for years, and my comment about the need for timestamping is not new. (It probably appeared in the papers discussing uses for digital timestamps!) The only attack against a signing system I've ever seen mentioned is signing key leakage, and as a result, the only solution on offer is revocation. What we have here is an entirely different attack, which directly contradicts the usual assumptions about signing: Yes, my signing provides a perfectly correct proof of provenance; but "what I said wasn't what I meant". In the typical toy examples of digital signatures that get discussed, what's signed is always an assertion or a commitment, and the whole point is to bind it to the signer forever. Here we're signing something that, if interpreted as an assertion or commitment, loses its whole point: It's something that actually affects the real world, and it's potentially harmful. A different attack requiring a different defense. -- Jerry From rsalz at akamai.com Sat Oct 11 16:36:00 2014 From: rsalz at akamai.com (Salz, Rich) Date: Sat, 11 Oct 2014 16:36:00 -0400 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> Message-ID: <2A0EFB9C05D0164E98F19BB0AF3708C71D39ECE017@USMBX1.msg.corp.akamai.com> Several folks* are interested in, and talking about what it would take, to use Certificate Transparency for software. I wonder how well it would address these problems?
[* We're one of the interested parties; feel free to get in touch. ] -- Principal Security Engineer, Akamai Technologies IM: rsalz at jabber.me Twitter: RichSalz From agr at me.com Sun Oct 12 10:18:21 2014 From: agr at me.com (Arnold Reinhold) Date: Sun, 12 Oct 2014 10:18:21 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: Sent from my iPhone > On Oct 9, 2014, at 1:52 AM, Tom Mitchell wrote: > ... > A free running tick counter that never overflows is a good thing. Freedom > from time of day issues leap seconds and more make it easy. The frequency > choice is open and precision and accuracy is open. An external map of ticks to > historic real world time (and temperature) is interesting in the right context. A simple counter with no overflow would work, of course, but inexpensive real-time clock chips, like the DS-1307 family, provide a 99-year range with one-second resolution and have all the circuitry for dual-supply (5 VDC and battery) operation with very low power draw (500 nA) on battery. Another possible advantage over a straight counter: yy-mm-dd-hh-mm-ss in a time stamp is a lot easier to explain to a judge and jury than a long hexadecimal constant. Here's a data point. I installed a cheap digital video recorder for a surveillance system just over four years ago. It's not connected to the Internet and I never adjusted the clock since installing it. I had to pull a clip off of it last week and the clock was 44 minutes fast. That's about a minute a month. So if the device grabbed the current NIST beacon, signed it along with its internal clock value, and had the resulting certificate time-stamped by an external authority once a month, that should be enough to establish minute accuracy.
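The drift arithmetic above checks out as a rough back-of-envelope, assuming exactly four years of operation:

```python
# 44 minutes of drift over ~4 years of unattended operation
drift_minutes = 44
months = 4 * 12
per_month = drift_minutes / months
print(round(per_month, 2))  # roughly 0.92 minutes/month, i.e. "about a minute a month"
```

So a monthly beacon-plus-timestamp checkpoint bounds the clock error to about one minute between checkpoints, as claimed.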
Arnold Reinhold From brk7bx at virginia.edu Sun Oct 12 12:04:15 2014 From: brk7bx at virginia.edu (Benjamin Kreuter) Date: Sun, 12 Oct 2014 12:04:15 -0400 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> <1413045624.7378.97.camel@demonking> Message-ID: <1413129855.7378.226.camel@demonking> On Sun, 2014-10-12 at 04:28 +0200, Lodewijk andré de la porte wrote: > On Oct 11, 2014 7:55 PM, "Benjamin Kreuter" wrote: > > > > On Fri, 2014-10-10 at 00:36 +0100, Alfie John wrote: > > > > > As the US State Department classifies cryptography as a munition, > > > shouldn't the use of cryptography be protected under the 2nd Amendment? > > > > 1. The second amendment is not without limits. You cannot possess a > > machine gun without a license, for example. The second amendment is not > > a free pass to possess or distribute arms. > > I never understood this though! Doesn't it significantly weaken the second > amendment? What could outweigh constitutional values, and who is > authorized to judge? Please don't say politicians... Interpretation is an important component of any law, including the constitution. Laws are not software, courts are not computers, and nobody would want to live in a society where the law is completely inflexible. Laws tend to be written imprecisely, and even the Bill of Rights is not so precise as to require no interpretation at all. As for the authority to judge, the answer is that "judges" have that authority. Courts exist to settle disputes about the meaning of the law and whether or not it is being followed. I would say that some kind of court system is necessary for the rule of law. > So, should we treat them as a theoretical adversary and move on? Advocate > against them at every opportunity, but just, move on? Unfortunately there is not much else that can be done.
In theory Congress could pull the plug, but that does not look terribly likely right now. Obviously we should advocate against this kind of behavior whenever possible, as long as it remains legal to do so. Beyond that, the public cryptography community needs to design systems with the understanding that this kind of adversary exists. Yes, the NSA is actively sabotaging our work. Now we need to design systems that are harder to sabotage, easier to check, etc. It is not easy and I am not going to claim that I have a magic formula, nor am I claiming that there is a magic formula. What I will say is that we should be trying to reach such a state, and that when we have a chance to move closer to that goal we should do so. -- Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From hasan.diwan at gmail.com Sat Oct 11 19:43:25 2014 From: hasan.diwan at gmail.com (Hasan Diwan) Date: Sat, 11 Oct 2014 16:43:25 -0700 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: <54394A55.90601@iang.org> References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> Message-ID: On 11 October 2014 08:18, ianG wrote: > So a 4 year old expired cert is still a critical > piece of infrastructure, and they are still going to revoke it > Why aren't certificates revoked automatically on expiration? All using a revoked/expired certificate should do is warn me that "the cert you are using has expired/been revoked, please get a new one from foo.com". What is the other use case I'm missing? -- H -- OpenPGP: https://hasan.d8u.us/gpg.key Sent from my mobile device Envoyé de mon portable -------------- next part -------------- An HTML attachment was scrubbed...
URL: From l at odewijk.nl Sat Oct 11 22:28:43 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Sun, 12 Oct 2014 04:28:43 +0200 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1413045624.7378.97.camel@demonking> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> <1413045624.7378.97.camel@demonking> Message-ID: On Oct 11, 2014 7:55 PM, "Benjamin Kreuter" wrote: > > On Fri, 2014-10-10 at 00:36 +0100, Alfie John wrote: > > > As the US State Department classifies cryptography as a munition, > > shouldn't the use of cryptography be protected under the 2nd Amendment? > > 1. The second amendment is not without limits. You cannot possess a > machine gun without a license, for example. The second amendment is not > a free pass to possess or distribute arms. I never understood this though! Doesn't it significantly weaken the second amendment? What could outweigh constitutional values, and who is authorized to judge? Please don't say politicians... > 2. The classification is only relevant for exporting a product from the > USA. Nothing stops you from possessing or distributing cryptography > within the US. Which is probably the only reason for the classification anyway (that and how useful it is!). > Really though, that classification is an anachronism that predates PCs > and the Internet. Instead of invoking it (which is a kind of > endorsement), we should be trying to get rid of it entirely. We need to > make the case that cryptography is not some kind of military device, but > a necessity in a computerized society as a low-cost safeguard against > various abuses and crimes. Calling cryptography "munitions" is as > absurd as calling combination locks "munitions," and that point needs to > be driven home. It makes little difference. This is about current law. I'm sure the world would be a better place if we left it to the right people, but who could the right people be?
> > If so, as the NSA continues its concerted effort to cripple encryption > > by providers [3] [4], shouldn't this be seen as the equivalent of the > > Department of Justice colluding with Smith & Wesson to manufacture guns > > that don't shoot straight and bullets that don't fire? > > What makes you think that laws matter when it comes to the NSA? So, is there an accuracy difference in military and non-military S&W weapons? > They openly ignored a court order, and nothing happened. Their leadership > lied to Congress, and nothing happened. They have conspired with > federal, state, and even local police forces and prosecutors to break > the law, and nothing happened. Lawsuits are shut down in the name of > secrecy. It's probable they had a very serious internal discussion about these things. I also suspect many others wanted to effect change but that they found things became very hard for them from that point forward. (Please, someone get this reference!) > We are past the point of legal arguments. We should think of the NSA as > we would think of the Chinese government: big, scary, actively working > to subvert computer security, and beyond the reach of the law. For me it's both foreign superpowers with nukes and a lot of people believing in very self-justified governments. Although, that's the US and China, not the NSA separately. Thing is, the NSA is just another program on USGOV payroll. Theoretically democratic parts of the government can indeed shut it down. I just think the NSA is too influential and self-serving to let that happen. Which sort of means the NSA runs the nation. But, of course, that's a carefully curated image that may be a total fiction. So, should we treat them as a theoretical adversary and move on? Advocate against them at every opportunity, but just, move on? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From l at odewijk.nl Sun Oct 12 12:31:57 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Sun, 12 Oct 2014 18:31:57 +0200 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: <1413129855.7378.226.camel@demonking> References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> <1413045624.7378.97.camel@demonking> <1413129855.7378.226.camel@demonking> Message-ID: On Oct 12, 2014 6:04 PM, "Benjamin Kreuter" wrote: > > On Sun, 2014-10-12 at 04:28 +0200, Lodewijk andré de la porte wrote: > > > > I never understood this though! Doesn't it significantly weaken the second > > amendment? What could outweigh constitutional values, and who is > > authorized to judge? Please don't say politicians... > > Interpretation is an important component of any law, including the > constitution. Laws are not software, courts are not computers, and > nobody would want to live in a society where the law is completely > inflexible. Laws tend to be written imprecisely, and even the bill of > rights is not so precise as to require no interpretation at all. Yes and no; precision of language can be greater or smaller. It's much overlooked. I find that computer-related law especially lacks precision. I also find that precise law is essential, lest it not be law at all. We also see that the "intention of the lawmaker" is an important factor. Finally, we even see laws that depend upon interpretations yet to be given by judges, interpretations that may directly oppose one another. Often we see "fundamental law", e.g. human rights, opposing more readily changed law. And the readily changed law wins easily if politicians interpret laws in ways that make it simple. > As for the authority to judge, the answer is that "judges" have that > authority. Courts exist to settle disputes about the meaning of the law > and whether or not it is being followed. I would say that some kind of > court system is necessary for the rule of law.
Ah, but that wasn't the question. A judge must always give precedence to constitutional laws. How could a judge explain the second amendment such that machine guns could be illegal? I suspect that ruling would be exceedingly controversial and dangerous. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Sat Oct 11 20:28:07 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sat, 11 Oct 2014 20:28:07 -0400 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: <20141011230552.GB6262@thunk.org> References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> <20141011230552.GB6262@thunk.org> Message-ID: On Oct 11, 2014, at 7:05 PM, Theodore Ts'o wrote: > It seems the real problem is that while we have Certificate Revocation > Lists when a CA wants to revoke its signature on a certificate, there > isn't the same concept of a Signed Software Revocation List where a > code signer can revoke a signature on a piece of code that it has > signed. Of course, this presumes that all code that verifies code > also attempts to pull down and check the latest SSRL, just as > certificate verification code must pull down and verify against the > latest CRL. Microsoft has had such a mechanism - known as a killbit http://en.wikipedia.org/wiki/Killbit - for many years. It applies only to Active-X controls - it's not clear why they never extended the idea to arbitrary code. However, they could probably get essentially the same effect with their malware scanner. OS X has a similar mechanism with its simple-minded malware blacklisting mechanism, which has a special-purpose extension to do such things as blacklisting outdated versions of Java and Flash. iOS apparently includes a "kill application" mechanism which would allow Apple to quickly prevent a malicious app from running. (Apple has never used this, saying it's there for emergencies.)
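The Signed Software Revocation List quoted above is straightforward to sketch. This is an editor's illustration with made-up names and formats; a real SSRL would be signed with a public-key scheme (e.g. Ed25519) so anyone can verify it, not with the shared-secret MAC used here for brevity:

```python
import hashlib
import hmac
import json

VENDOR_KEY = b"demo-vendor-key"  # stand-in for the vendor's real signing key

def sign_ssrl(bad_digests: list[str]) -> dict:
    """Vendor side: publish a list of revoked-software digests plus a MAC
    over the list, so the list itself cannot be tampered with in transit."""
    body = json.dumps(sorted(bad_digests)).encode()
    return {"body": body.decode(),
            "mac": hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest()}

def install_allowed(artifact: bytes, ssrl: dict) -> bool:
    """Client side: verify the SSRL's MAC, then refuse any artifact whose
    digest appears on the list. Signatures on the artifact stay valid;
    only this one piece of software is blocked."""
    body = ssrl["body"].encode()
    expected = hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ssrl["mac"]):
        raise ValueError("SSRL failed verification")
    return hashlib.sha256(artifact).hexdigest() not in json.loads(ssrl["body"])

malware = b"totally-legit-installer.exe"
ssrl = sign_ssrl([hashlib.sha256(malware).hexdigest()])
assert not install_allowed(malware, ssrl)       # revoked artifact: blocked
assert install_allowed(b"clean-package", ssrl)  # not on the list: allowed
```

Note how this matches the thread's point: the blacklist names one artifact by digest, so nothing about the signing key or any other signed package has to change.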
I don't think Android has an equivalent mechanism, and it certainly wouldn't work for stuff installed from alternative stores. Generally, these are integrated with patch mechanisms (though I think the iOS one is an "instant push"): You don't poll the "CRL", you update it on a schedule like anything else. While instant revocation might be useful in extreme situations, in practice even something polled every week would be a huge improvement over the (almost nothing) we have today. I'm not aware of any similar mechanism in the OSS world (which doesn't mean it isn't out there). The extreme version of all this is whitelisting of software - pioneered in the Windows world by bit9, now also available (though I don't know any details) in Windows itself. > Given that we don't have this, can we really blame HP for deciding to > ask their CA to revoke their code signing certificate, as the least > bad option? Actually, they could have done something else: Send out a patch that specifically looks for this malware and kills it, and also updates the patch mechanism to filter out any subsequent attempts to install the malware. Microsoft has done this in the past enough times that they have the mechanism fully developed. HP would have had to start from scratch. Frankly, it's not clear that what they did makes a whole load of sense. Since signatures are only checked during installation, they haven't done anything at all to protect customers who already installed the malware - and it's been out there for quite some time. When all you have is a hammer and you want to look like you're *doing* something ... go bang on whatever it is. >> The problem of the validity of signed material has been discussed >> for years, and my comment about the need for timestamping is not >> new. (It probably appeared in the papers discussing uses for >> digital timestamps!) > > I don't think we need to have timestamping here. 
What we need instead > is to have the same concept of a CRL, but applied to signed software. I was deliberately distinguishing between two problems: The bad software with a proper signature, and the leaked signature. The timestamp is useful only for the latter case, where it's really an optimization of a CRL. > ...I would argue that the whole point of having signed code is not to > bind it to the signer forever, but it's the signer saying, "this is > good code". It may be the case that the signer had legitimately > signed some piece of code as being "good stuff", but then later on, > the signer discovers that said signed code included bash with the > Shellshock bug, or openssl with the Heartbleed bug. Again, two distinct problems: The signer declaring "don't trust this signature (if made after time T)" vs. "don't trust this piece of code, I no longer believe it's safe". > So one could imagine, especially in a world where legislation is > passed per Dan Geer's proposal eliminating the "we warrant that this > code contains ones and zeros, and even if your business loses > gazillions of dollars, we'll refund the cost of your software... on a > prorated basis" is no longer legally operative, that the software > signer might want to not only release a new version of their software > without the Heartbleed or Shellshock bug, but also put the older > version of their software on the SSRL list, to limit their liability > exposure. That would be a fine idea. As I pointed out above, the closed-source world does this kind of thing. I suspect it hasn't made much headway in the OSS world because many people - especially the developers - use OSS exactly because they want the freedom to run whatever they want.
That doesn't mean such a mechanism couldn't be built for those who want to use it, of course. -- Jerry From phill at hallambaker.com Sat Oct 11 23:20:41 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sat, 11 Oct 2014 23:20:41 -0400 Subject: [Cryptography] Cryptography, backdoors and the Second Amendment In-Reply-To: References: <1412897789.3376891.177247101.2F9CFC02@webmail.messagingengine.com> Message-ID: On Thu, Oct 9, 2014 at 8:24 PM, Steve Furlong wrote: > On Thu, Oct 9, 2014 at 7:36 PM, Alfie John wrote: > >> As the US State Department classifies cryptography as a munition, >> shouldn't the use of cryptography be protected under the 2nd Amendment? > > You're expecting consistency, logic, or even honesty from a government? Your > naivete is so /cute/! The Supreme Court only recognized an individual right to bear arms in 2008. And three of the judges in the 5-4 majority voted to stop counting the votes in Florida in 2000. So it's a pretty thin reed you are depending on there. Scalia and Thomas make no pretense about consistency, they just make stuff up to suit their prejudices. The best you could hope for by persuading the judges that crypto and firearms are the same thing would be to see them return to the pre-2008 approach to firearms. In the wake of Sandy Hook, the only politician to deliberately raise gun control this cycle was a Republican who wanted to downgrade his NRA A rating to an F by backing background checks. As it happens, it does not matter very much because Holder isn't the administration any more than Louis Freeh was. He only came out with this talk after he decided to hand in his resignation. Which pretty much tells us how much support he has for the position. Since the democratic nominee is virtually certain to be Clinton, it is rather more likely that the state department view will win out over FBI, Justice and the military.
Not least because Clinton is not at all amused that due to the incompetence of the NSA, Chelsea Manning was able to leak those embarrassing cables. The Republicans usually give the nomination to the guy who came second last time round. And Ricky Santorum looks as good as any of the alternatives likely to run. I don't know what his position on crypto is but I doubt it is a liberal one. From tytso at mit.edu Sat Oct 11 20:38:55 2014 From: tytso at mit.edu (Theodore Ts'o) Date: Sat, 11 Oct 2014 20:38:55 -0400 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> <20141011230552.GB6262@thunk.org> Message-ID: <20141012003855.GA14342@thunk.org> On Sat, Oct 11, 2014 at 08:28:07PM -0400, Jerry Leichter wrote: > ... HP would have had to start > from scratch. Frankly, it's not clear that what they did makes a > whole load of sense. Since signatures are only checked during > installation, they haven't done anything at all to protect customers > who already installed the malware - and it's been out there for > quite some time. Ah, I didn't realize that they didn't catch that they had signed the malware for a long period of time. You're right though, having an SSRL only stops new installations of the malware. I was thinking of the sort of "packagekit" situation where you might send someone a file with a particular MIME type that automatically triggers an offer to download software to handle that MIME type, where you might really want to block new installations of said malware. > That would be a fine idea. As I pointed out above, the > closed-source world does this kind of thing. I suspect it hasn't > made much headway in the OSS world because many people - especially > the developers - use OSS exactly because they want the freedom to > run whatever they want.
The notion that *someone else* - even the > author of the software - could shut down their ability to do what > they want on their own box would be anathema to many in the OSS > community. This is all in the definition of what it means for a piece of software to be on the SSRL list. Does it mean, "you're not allowed to run it" (a DRM mechanism), or does it mean, "I strongly recommend you stop using this code; upgrade NOW"? One could imagine that if the software was already installed, then on the periodic check, the checker would display a pop-up box that printed some explanatory string that was included in the SSRL, and then asked the user if they wanted to continue using the software. This would be much like the "Danger Will Robinson" warning which Chrome pops up when you visit a web site that is on the "is known to try to download malware" list. You can still continue on if you __really__ want to visit that site, but there is a clear explanation of why it is not a good idea, and what to do if you are the owner of said web site. I could imagine a similar dialog box which would explain how to upgrade the software component, or perhaps offers to automatically upgrade the software after the user gives permission. So whether or not this makes headway is all in the UX design, I would think. Cheers, - Ted From outer at interlog.com Sun Oct 12 14:53:50 2014 From: outer at interlog.com (Richard Outerbridge) Date: Sun, 12 Oct 2014 14:53:50 -0400 Subject: [Cryptography] The first published LUCIFER triples In-Reply-To: References: Message-ID: Please, anyone, tell me that I'm wrong.
Key:    01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Input:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Cipher: a2 01 fc 18 d6 2c 85 ef 59 65 a5 82 95 bb f6 09

Key:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Input:  01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Cipher: 9d 14 fe 43 77 aa 87 dd 07 cc 8a 14 52 2c 21 ed

Key:    01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Input:  ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Cipher: 97 f1 c1 04 b0 f1 20 d1 94 c0 70 24 f1 48 15 ed

Key:    ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Input:  01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Cipher: d4 42 a3 4d d7 0e 2b 41 56 eb 0f 2a 8a de d1 a7

Key:    01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Input:  01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Cipher: cf 46 62 2f a9 85 46 bb 9a 5b c0 02 39 eb 0c 92

Key:    fe dc ba 98 76 54 32 10 01 23 45 67 89 ab cd ef
Input:  01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Cipher: 7f af 65 bf c5 45 8f d2 dc 9c c2 26 60 12 ef 44

__outer -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 447 bytes Desc: Message signed with OpenPGP using GPGMail URL: From iang at iang.org Mon Oct 13 06:00:50 2014 From: iang at iang.org (ianG) Date: Mon, 13 Oct 2014 11:00:50 +0100 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> Message-ID: <543BA2D2.6010107@iang.org> On 12/10/2014 00:43 am, Hasan Diwan wrote: > > On 11 October 2014 08:18, ianG > wrote: > > So a 4 year old expired cert is still a critical > piece of infrastructure, and they are still going to revoke it > > > Why aren't certificates revoked automatically on expiration? All that using a > revoked/expired certificate should do is warn me that "the cert you > are using has expired/been revoked, please get a new one from foo.com > ". What is the other use case I'm missing?
-- H Expiry and revocation were supposed to "mean" the same thing, the cert could no longer be used. It was either/or, not both, and the revocation lists typically had scaling problems so had to be kept brief; 'and' was not good. (Indeed some CAs did revoke on expiry...) But, there are differences. It might "mean" the same thing but it can't mean the same thing, if you get my drift. Expiry is "can't use" coz you need to feed the gas meter to stay warm. Contractual issue? Whereas revocation is "must not use" because there's a gas leak and the house is about to blow. Safety issue? They are completely different in meaning... but not "meaning." Expiry of course is an optional concept. If software realised there was nothing wrong with an expired cert then the game was up. And, some software does realise this. And, expiry can be tricked by changing the date, so for example the compromised cert (if it is indeed compromised) can be used to still sign a 3 year old package... And, it gets very complicated trying to manage all the corner cases. So, because of risk analysis not being able to answer the real size of the problem, HP decided evidently to cover all bases and revoke as well. The answer to all this is that certs, expiries and especially revocation simply do not work as advertised. In short, the only thing that works is liveness and capabilities, which is the favoured choice for just about every other system. But you cannot fix a system like PKI without staring the architectural myths in the face, and backing off and finding some honest work to do. So we're stuck. HP gets tricked into thinking they were compromised 4 years ago, and now they have to compromise all their customers today. Oops. iang From randy at psg.com Mon Oct 13 06:44:25 2014 From: randy at psg.com (Randy Bush) Date: Mon, 13 Oct 2014 03:44:25 -0700 Subject: [Cryptography] Sonic.net implements DNSSEC, performs MITM against customers. Are they legally liable?
In-Reply-To: <1412888494.25670.1.camel@sonic.net> References: <1412888494.25670.1.camel@sonic.net> Message-ID: as it has nothing to do with crypto, could this discussion be moved to nanog? thanks. randy From leichter at lrw.com Mon Oct 13 15:20:13 2014 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 13 Oct 2014 15:20:13 -0400 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: References: <1BAD4834-6832-41C4-8936-407C358BD762@lrw.com> Message-ID: <4EF58F01-688F-4467-9AFC-C1CFBE1795B3@lrw.com> On Oct 13, 2014, at 2:53 PM, Chris Palmer wrote: >> Yup, and that's been proposed in the past (late 1990s) as a way of getting >> away from X.509's 1970s origins in offline systems. Instead of asking a >> source for a certified copy from some self-appointed authority (certificate >> from a CA) and then groping around for further information to check whether >> the certified copy you've just fetched is actually valid (CRL), you just ask >> the authority directly, "give me the currently-valid, known-good key for X" >> (pin from Google). This short-circuits all of PKI. >> >> For some reason it hasn't proven too popular with CAs and browser vendors. > > Well, we have static pins in Chrome, Mozilla is starting to do it, and > HPKP finally made it to Proposed Standard status and is partially > implemented by Chrome and Firefox now. So I'm not sure why you say > we're not into it. :) How many years did it take to get here? Still, I'm glad we're here. > We can't ship a giant blob of public keys for all sites, and have that > be the end of the story, for a variety of raisins. > > * Web sites change even faster than browsers do. Browsers are already moving to automatic updates. Updating the list of keys could be done on a faster schedule. Today's internet isn't yesterday's. > * We can't even ship a complete list of revoked keys in our CRLSets, > for size reasons -- forget about pins for all sites. Why?
I did the calculation in my original posting. You can cover the top 100,000 sites in 30MB. That's the size of a couple of image files used to make the browser demos look nice. Plus ... the *changes* to the list are very simple: Just insertions and deletions, nothing fancy. So distributing deltas is simple and very cheap. > * Do site operators really want to let us be the sole managers of > their cryptographic identities? As opposed to ... what? That they rely on the CAs to be the *official* managers, while the browser makers - who after all are the ones with their hands on the code that actually *uses* all those certs - just stay in the background, with no one needing to examine where they fit in the trust framework? Besides, if others want to distribute key lists - why not? > * A big criticism of DNSSEC is that it *exacerbates* the trusted third > party introducer problem: multiply-centralized --> singly centralized. > This is that. Multiply-centralized is good if you can trust the aggregate as long as you can trust any single member (ideal case), or the majority (a common case), or really any fraction of the members of the trust set. But what we have now requires trusting *everyone*. That's *significantly worse* than the singly-centralized approach. (Besides, again, anyone can offer to maintain and distribute a key collection.) > * Probably other reasons I'm forgetting. > > TTP + CT, TTP + PKP, and TTP + PKP + CT are all pretty darn good. Not > perfect, but workable. You really need to look at the details of what they provide to decide if they help at all. (See the recent long-running thread here complaining that CT doesn't actually help anything. I've stayed out of that entirely because I'm not sure who's right, so I want to see both sides present their best arguments.)
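The 30MB figure works out to 300 bytes per site, which is roughly the raw modulus of a 2048-bit RSA key plus a hostname. A back-of-the-envelope check (the field sizes below are illustrative assumptions, not a proposed wire format):

```python
# Rough size check for shipping pinned keys for the top 100,000 sites.
SITES = 100_000
RSA_2048_KEY = 256   # raw modulus bytes of a 2048-bit RSA public key
HOSTNAME = 40        # generous average hostname length, in bytes
OVERHEAD = 4         # length prefixes, flags, etc.

entry_bytes = RSA_2048_KEY + HOSTNAME + OVERHEAD   # 300 bytes per site
total_bytes = SITES * entry_bytes                  # 30,000,000 bytes = 30 MB
print(total_bytes)
```

Pinning a 32-byte hash of each key instead of the full key would shrink the list by almost an order of magnitude, which only strengthens the argument.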
But the point of my posting was not to attack attempts to improve what we have, but that we should *also* consider "clean sheet" solutions (even if some of them are retreads of ideas from 20+ years ago). -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From pgut001 at cs.auckland.ac.nz Mon Oct 13 07:48:20 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Tue, 14 Oct 2014 00:48:20 +1300 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: <1BAD4834-6832-41C4-8936-407C358BD762@lrw.com> Message-ID: Jerry Leichter writes: >The logical outcome of pinning is to get rid of the certs entirely. Your >browser vendor provides you with a bucket of public keys for well-known sites, >and you just use them. Yup, and that's been proposed in the past (late 1990s) as a way of getting away from X.509's 1970s origins in offline systems. Instead of asking a source for a certified copy from some self-appointed authority (certificate from a CA) and then groping around for further information to check whether the certified copy you've just fetched is actually valid (CRL), you just ask the authority directly, "give me the currently-valid, known-good key for X" (pin from Google). This short-circuits all of PKI. For some reason it hasn't proven too popular with CAs and browser vendors. >Pinning is a hack to buttress a PKI system that we know is failing. I >appreciate the importance of having something that improves existing systems >as transparently as possible - it's so difficult to deploy anything entirely >new. As a transition - that's fine. But it shouldn't block us from thinking >about a better replacement. It's just a very roundabout way of implementing the "give me a known-good key for X" described above without disintermediating the CAs. Peter.
From waywardgeek at gmail.com Mon Oct 13 16:53:42 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 13 Oct 2014 16:53:42 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update In-Reply-To: References: Message-ID: On Wed, Oct 8, 2014 at 9:24 PM, Bill Cox wrote: > This was Creating a Parallelizeable Cryptographic Hash Function, started > by Jason Resch. His approach of XORing hashes together is not secure, but doing the > same thing with multiplication modulo a prime appears to be. > > The hash function is: > > Digest y = H(1 || B1) * H(2 || B2) * ... * H(n || Bn) mod p > > The prime p should be large, such as 2048 bits, because if an attacker can > compute the discrete log of the H values mod p, he can easily find > collisions. > > My security proof is simple. Assume an attacker has found an algorithm > that takes essentially random numbers ri as inputs (the H values for each > i), and finds a way to multiply some of them together to equal the previous > digest. All we do is change his algorithm so that instead of picking > various ri = H(i) to multiply together, compute instead si = g^ri mod p for > each i used by the algorithm. If g is a group generator, then si is just > as random as ri, but we know something about si (its discrete log). Using > the attacker's algorithm, we now find the discrete log of y = g^x. Just > find s1 ... sm such that s1 * s2 * ... * sm = y, and then x, the discrete log > of y, is trivially found as r1 + r2 + ... + rm mod p-1. > > I've had a few days to noodle on this proof, and I now believe it is > sound. If the world needs a constant-time updateable parallel hash > function, this should do the job. When we add new messages to the end, we > can compute the new digest in constant time. We can also replace any > existing message Bi with Bi', and compute the new hash in constant time. > We can also use this for a rolling-window hash, similar to what rsync uses, > but more secure.
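The construction quoted above can be sketched in a few lines of Python. This is a reading of the proposal, not reference code: the 127-bit modulus is a toy chosen for readability (the post calls for a ~2048-bit prime), and mapping H into [1, p-1] to rule out a zero factor is my own tweak, not part of the original description.

```python
import hashlib

P = (1 << 127) - 1  # toy Mersenne prime; a real instance would use ~2048 bits

def block_factor(i, block):
    """H(i || B_i) as an integer in [1, P-1], so no factor can zero the product."""
    h = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(h, "big") % (P - 1) + 1

def mul_hash(blocks):
    """Digest = H(1||B1) * H(2||B2) * ... * H(n||Bn) mod P."""
    digest = 1
    for i, block in enumerate(blocks, start=1):
        digest = digest * block_factor(i, block) % P
    return digest
```

Appending a block B_{n+1} just multiplies the existing digest by block_factor(n+1, B_{n+1}), which is the constant-time append the post claims; because the index is hashed in with each block, reordering blocks changes the digest.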
> > Jason, is this what you were looking for? I would love to know what use > case you have in mind. > > Bill > Can I take it as a good sign that no one offered any attacks or found any weaknesses so far? :-) While I am often wrong, I claim it is secure based on the difficulty of the discrete log problem. What would be the next natural step for this algorithm? It seems the usual way is to write a paper... Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Mon Oct 13 17:08:09 2014 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 13 Oct 2014 17:08:09 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update In-Reply-To: References: Message-ID: <7FAB095F-DFE0-4AC6-BFAF-3143F1800DC5@lrw.com> On Oct 13, 2014, at 4:53 PM, Bill Cox wrote: > Can I take it as a good sign that no one offered any attacks or found any weaknesses so far? :-) You can take it as a sign that people aren't very interested. I lost interest after an exchange we had that went: "It's secure because of the discrete log problem. But it violates its own basic security requirements when it produces a 0 result! Oh, that's so unlikely - why worry about it?" At that point, we left the realm of mathematics and proofs for someplace else where I, for one, prefer not to go. > While I am often wrong, I claim it is secure based on the difficulty of the discrete log problem. What would be the next natural step for this algorithm? It seems the usual way is to write a paper... Best of luck. This is my last message on this subject. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From waywardgeek at gmail.com Mon Oct 13 17:49:51 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 13 Oct 2014 17:49:51 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update In-Reply-To: <7FAB095F-DFE0-4AC6-BFAF-3143F1800DC5@lrw.com> References: <7FAB095F-DFE0-4AC6-BFAF-3143F1800DC5@lrw.com> Message-ID: On Mon, Oct 13, 2014 at 5:08 PM, Jerry Leichter wrote: > On Oct 13, 2014, at 4:53 PM, Bill Cox wrote: > > Can I take it as a good sign that no one offered any attacks or found > any weaknesses so far? :-) > You can take it as a sign that people aren't very interested. > > I lost interest after an exchange we had that went: "It's secure because > of the discrete log problem. But it violates its own basic security > requirements when it produces a 0 result! Oh, that's so unlikely - why > worry about it?" At that point, we left the realm of mathematics and > proofs for someplace else where I, for one, prefer not to go. > Hi, Jerry. Thanks for making the attempt, but your attack fails. To mount it, an attacker must have an algorithm for finding x such that H(x) = 0. If he has such an algorithm, he has broken H completely, which contradicts the assumption that H is a secure cryptographic hash. A more interesting question is whether the algorithm has a practical use. Is there a need for a hashing algorithm that defends well against the generalized birthday attack? It probably is worth noting that there is such an algorithm in any case, and we cannot assume that this attack breaks all hashes of this form. Bill -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jkatz at cs.umd.edu Mon Oct 13 18:11:32 2014 From: jkatz at cs.umd.edu (Jonathan Katz) Date: Mon, 13 Oct 2014 18:11:32 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update In-Reply-To: References: Message-ID: It is well known (going back to the '90s) that the following function is collision resistant if the discrete log problem is hard: public constants: g_1, g_2, ..., g_n, all elements mod p input: x_1 | x_2 | x_3 | ... | x_n output: \prod_i g_i^{x_i} mod p A proof for the case of n=2 is in my cryptography textbook. Note that this function is also updateable in constant time. If we model H as a random oracle, we can modify the above to: input: x_1 | ... | x_n output: \prod_i H(i)^{x_i} mod p This has also been suggested in the literature. You are suggesting instead to look at the construction input: x_1 | x_2 | ... | x_n output: \prod_i H(i | x_i) mod p This has the advantage of being more efficient than the above constructions. This, too, is secure -- but was already proposed in the paper "A New Paradigm for collision-free hashing: Incrementality at reduced cost" available here: http://cseweb.ucsd.edu/~mihir/papers/incremental.html I'm not sure if that's good or bad news vis-a-vis your constructions. =) On Mon, Oct 13, 2014 at 5:57 PM, Bill Cox wrote: > > On Mon, Oct 13, 2014 at 5:01 PM, Jonathan Katz wrote: >> >> What is your definition of security? > > > In this context, I define it as collision resistance of the digest. If an attacker found two sets of message blocks that hash to the same digest, then this hash would be broken. > > Finding two different sets of messages resulting in the same hash y is as difficult as finding the discrete log of y. > > Bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jkatz at cs.umd.edu Mon Oct 13 19:23:24 2014 From: jkatz at cs.umd.edu (Jonathan Katz) Date: Mon, 13 Oct 2014 19:23:24 -0400 Subject: [Cryptography] factoring small(ish) numbers Message-ID: I'm curious if anyone can point me to references that would indicate values of n for which n-bit numbers can be factored "easily." One can debate what "easily" means, but for my purposes I am thinking of something where (1) the factoring is done on a single, standard PC, (2) in less than a month, using (3) code that is either readily available or could be written by a talented undergraduate CS student. I am aware of the RSA factoring challenges, but those are solved by large, distributed efforts run by academics using special-purpose setups and taking much more than 1 month. Thanks in advance for any pointers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Mon Oct 13 19:34:47 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 13 Oct 2014 19:34:47 -0400 Subject: [Cryptography] Secure parallel hash, with constant time update In-Reply-To: References: Message-ID: On Mon, Oct 13, 2014 at 6:11 PM, Jonathan Katz wrote: > It is well known (going back to the '90s) that the following function is > collision resistant if the discrete log problem is hard: > public constants: g_1, g_2, ..., g_n, all elements mod p > input: x_1 | x_2 | x_3 | ... | x_n > output: \prod_i g_i^{x_i} mod p > A proof for the case of n=2 is in my cryptography textbook. Note that this > function is also updateable in constant time. > Very cool... > If we model H as a random oracle, we can modify the above to: > input: x_1 | ... | x_n > output: \prod_i H(i)^{x_i} mod p > This has also been suggested in the literature. > > You are suggesting instead to look at the construction > input: x_1 | x_2 | ... | x_n > output: \prod_i H(i | x_i) mod p > This has the advantage of being more efficient than the above > constructions. 
This, too, is secure -- but was already proposed in the > paper "A New Paradigm for collision-free hashing: Incrementality at reduced > cost" available here: > http://cseweb.ucsd.edu/~mihir/papers/incremental.html > > I'm not sure if that's good or bad news vis-a-vis your constructions. =) > > Thanks for point out the paper! I take it as good news. It means I'm less of a dork than I thought, although I wish I had better google-fu. I was going for "parallel" when "incremental" would have gotten me these results quickly. It's fun to read these papers knowing about the generalized birthday attack. My construction is identical to MuHASH, which he proved secure in the same way. However, he then argues that we can securely use AdHASH, can't prove it, and instead states similarities between finding collisions in hashes added together modulo a big number to the modular knapsack problem, known to be hard. It's a good example of how hand-waving in crypto can be a bad idea. Wagner and friends clearly broke AdHASH with his generalized birthday attack, but I think MuHASH still stands. For example, the 4-element generalized birthday problem works so long as we throw out any hash values > M/4, where we're doing modulo M addition. I am a bit surprised that Wagner's paper spends so much time trying to find weaknesses with multiplication, but I guess it makes sense if we're talking about weaker multiplicative groups than integers modulo large primes. I assume he knew about AdHASH and MuHASH. In general, the important property of the combining operator seems to be that it mixes bits cryptographically well. Addition didn't quite get there. With multiplication, even 2-way, picking two random numbers between 0 and p-1 that multiply to anything other than astronomically larger than p is on the easily less likely than 1/sqrt(p) - how's that for had waving :-) Therefore, we can't throw out all those too-large random numbers and expect to run faster than sqrt(p) calls to H. 
Thanks again for the pointer. Very cool. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Mon Oct 13 19:37:32 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 13 Oct 2014 16:37:32 -0700 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: References: <20141011021157.DD59C2281E9@palinka.tinho.net> <54394A55.90601@iang.org> <20141011230552.GB6262@thunk.org> Message-ID: On Sat, Oct 11, 2014 at 5:28 PM, Jerry Leichter wrote: > On Oct 11, 2014, at 7:05 PM, Theodore Ts'o wrote: > > It seems the real problem is that while we have Certificate Revocation > > Lists when a CA wants to revoke its signature on a certificate, there > > isn't the same concept of a Signed Software Revocation List where a > > code signer can revoke a signature on a piece of code ....... > Microsoft has had such a mechanism - known as a killbit > http://en.wikipedia.org/wiki/Killbit - for many years. It applies only > to Active-X controls - it's not clear why they never extended the idea to > arbitrary code. However, they could probably get essentially the same > effect with their malware scanner. Revocation of software seems like a double- or triple-edged solution. In a nutshell one could think of it as a global DRM takedown. It could be built into any system package management tool or virus scanner. Apple, Adobe, Microsoft and many more have a daemon process that checks for and installs the latest version of itself and of the application collection under its purview. It seems to me that any of these could become a problem and should be the research topic of MAC and other policy management tools; e.g., an Adobe tool should be fenced in and able to only check and modify Adobe products. One of the strengths of WinNT was a decent policy framework, but because it got in the way of too many things it was sidetracked and fell into disuse.
MS failed to establish a policy that others could work with. The lack of physical install media removes one anchor to bootstrap a correct environment. Install media for the most part does little to repair and tends to risk data. For example, I have ancient email collections that I cannot open because one of 10,000 messages triggers virus scan tools that "do the right thing" but 9,999 messages are also impacted. The apparent abuses of DRM takedown processes make the entire topic interesting. The impact of a TLA suborning such tools to further social or political gains is facilitated because some policy designs are not transparent. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From coruus at gmail.com Tue Oct 14 00:01:49 2014 From: coruus at gmail.com (David Leon Gil) Date: Tue, 14 Oct 2014 00:01:49 -0400 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: Message-ID: On Monday, October 13, 2014, Jonathan Katz wrote: > I'm curious if anyone can point me to references that would indicate > values of n for which n-bit numbers can be factored "easily." > cado-nfs + undergraduate + 1 month == answer Or take a look at: www.wired.com/2012/10/dkim-vulnerability-widespread/all/ It has the numbers you're after, as of two years ago. See also cado-nfs.gforge.inria.fr One can debate what "easily" means, but for my purposes I am thinking of > something where (1) the factoring is done on a single, standard PC, (2) in > less than a month, using (3) code that is either readily available or could > be written by a talented undergraduate CS student. > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hanno at hboeck.de Tue Oct 14 03:35:05 2014 From: hanno at hboeck.de (Hanno Böck) Date: Tue, 14 Oct 2014 09:35:05 +0200 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: Message-ID: <20141014093505.53108f1d@pc> On Mon, 13 Oct 2014 19:23:24 -0400, Jonathan Katz wrote: > I'm curious if anyone can point me to references that would indicate > values of n for which n-bit numbers can be factored "easily." > > One can debate what "easily" means, but for my purposes I am thinking > of something where (1) the factoring is done on a single, standard > PC, (2) in less than a month, using (3) code that is either readily > available or could be written by a talented undergraduate CS student. The best algorithm for factoring is the general number field sieve. There's some code available [1]; however, it's not "ready-to-use" and requires some manual steps. I think factoring 512-bit numbers is what's "doable". People have been doing this on their home PCs for quite a while [2]. 768-bit is still challenging and probably not something you do at home. [1] http://www.math.ttu.edu/~cmonico/software/ggnfs/ [2] https://en.wikipedia.org/wiki/TI-84_Plus_series -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From codesinchaos at gmail.com Tue Oct 14 04:34:21 2014 From: codesinchaos at gmail.com (CodesInChaos) Date: Tue, 14 Oct 2014 10:34:21 +0200 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: Message-ID: 512 bits is certainly in the breakable-by-a-hobbyist range using existing applications: Tom Ritter wrote a how-to: https://github.com/tomrittervg/cloud-and-control/tree/master/gnfs-info It's not using a single computer, but rents cloud resources for about $100.
From tom at ritter.vg Mon Oct 13 22:21:20 2014 From: tom at ritter.vg (Tom Ritter) Date: Mon, 13 Oct 2014 21:21:20 -0500 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: Message-ID: > (1) the factoring is done on a single, standard PC, (2) in less than a month, using (3) code that is either readily available or could be written by a talented undergraduate CS student. 512-bit numbers are just on the cusp of 'doable in a month' depending on how 'standard' your 'standard' PC is. (3) is satisfied. https://github.com/tomrittervg/cloud-and-control/blob/master/gnfs-info/factoring-howto.txt -tom From cryptography at dukhovni.org Tue Oct 14 13:09:44 2014 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Tue, 14 Oct 2014 17:09:44 +0000 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: Message-ID: <20141014170944.GP13254@mournblade.imrryr.org> On Mon, Oct 13, 2014 at 07:23:24PM -0400, Jonathan Katz wrote: > I'm curious if anyone can point me to references that would indicate values > of n for which n-bit numbers can be factored "easily." Are you looking at factoring RSA moduli (product of two primes with ~n/2 bits) or general n-bit numbers? Many numbers have small factors, which are easily found by ECM (the elliptic curve method) with run-time dependent only on the size of the factors, not the number itself. The remaining large factors can be subjected to primality tests, and either found composite or, if desired, proved prime. It is only at that point that any large composites with no small factors found by ECM should be subjected to GNFS. Of course with properly generated RSA moduli you'll start with GNFS immediately. -- Viktor.
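Viktor's point that small factors fall out quickly regardless of the size of the number is easy to demonstrate. The sketch below uses Pollard's rho rather than ECM (rho fits in a few lines of stdlib Python, while ECM and GNFS are what the tools named in this thread actually implement), so treat it as an illustration of the principle, not a substitute for those tools:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n.

    Expected work grows with the square root of the *smallest* prime factor,
    so a small factor of a huge number falls out almost instantly -- the same
    asymmetry Viktor describes for ECM.
    """
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        c = random.randrange(1, n)
        y, d = x, 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step of x -> x^2 + c
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps per iteration
            d = math.gcd(abs(x - y), n)  # a cycle mod a factor of n reveals it
        if d != n:                       # rare failure: retry with a new c
            return d
```

Against a properly generated RSA modulus (two primes of similar size) rho is hopeless, which is exactly why those moduli go straight to GNFS.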
From mitch at niftyegg.com Tue Oct 14 14:53:48 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Tue, 14 Oct 2014 11:53:48 -0700 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: <20141014170944.GP13254@mournblade.imrryr.org> References: <20141014170944.GP13254@mournblade.imrryr.org> Message-ID: On Tue, Oct 14, 2014 at 10:09 AM, Viktor Dukhovni wrote: > On Mon, Oct 13, 2014 at 07:23:24PM -0400, Jonathan Katz wrote: > > > I'm curious if anyone can point me to references that would indicate > values > > of n for which n-bit numbers can be factored "easily." > ......good stuff snipped. > It is only at that point that any large composites with no small > factors > This may open door number two. Many key pairs depend on pairs of large primes. However, discovering large proven primes is problematic, so large probable primes get used. This does open up a family of attacks, because finding large primes that have not been found by others seems less likely, and probable primes can present a false view of the number of valuable underlying bits. For a laptop or desktop to generate sufficiently interesting prime-number-dependent key pairs seems difficult, perhaps as difficult as factoring the resulting moduli. Perhaps I misjudge the difficulty; however, we cannot misjudge the importance of the generation and test process. The methods and chains of tests used to select probable primes in place of proven primes can be long or short, and if understood well enough could prove to be a fruitful attack path. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed...
URL: From agr at me.com Tue Oct 14 19:24:25 2014 From: agr at me.com (Arnold Reinhold) Date: Tue, 14 Oct 2014 19:24:25 -0400 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency Message-ID: <328D961E-7923-4585-9EB8-AF85204DBBB4@me.com> On Mon, 13 Oct 2014 15:20 Jerry Leichter wrote: >> * We can't even ship a complete list of revoked keys in our CRLSets, >> for size reasons - forget about pins for all sites. > Why? I did the calculation in my original posting. You can cover the top 100,000 sites in 30MB. That's the size of a couple of image files used to make the browser demos look nice. > > Plus ... the *changes* to the list are very simple: Just insertions and deletions, nothing fancy. So distributing deltas is simple and very cheap. I agree that the pin distribution problem seems quite solvable. But how do browser manufacturers get valid pin data for 100,000 sites, not to mention regular updates? If they want to get the information independently, they will have to set up the kind of rigorous verification infrastructure that we would want CAs to employ. (The fact that most CAs fall short does not suggest the problem is an easy one.) And if I trust my browser manufacturer's signature on the browser software distribution that includes the initial pin list, as well as on subsequent pin updates, why not also trust the same signature key to sign individual web site credentials and use the existing TLS infrastructure, with the browser manufacturer serving as a super-CA for those 100,000 sites? If the browser manufacturers choose instead to subcontract getting the pin data to one or a few high quality CAs, expect those CAs to charge a very steep price since it undermines their business model. The other CAs will no doubt raise a ruckus, perhaps invoking local antitrust laws. And if the browser manufacturers accept most CA data, what is the point? I'd still like QR codes in my bank's lobby.
Arnold Reinhold From iang at iang.org Tue Oct 14 20:03:02 2014 From: iang at iang.org (ianG) Date: Wed, 15 Oct 2014 01:03:02 +0100 Subject: [Cryptography] SSL bug: This POODLE Bites: Exploiting The SSL 3.0 Fallback Message-ID: <543DB9B6.8040808@iang.org> https://www.openssl.org/~bodo/ssl-poodle.pdf SSL 3.0 [RFC6101] is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0 [RFC2246], TLS 1.1 [RFC4346], and TLS 1.2 [RFC5246], many TLS implementations remain backwards-compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used. However, even if a client and server both support a version of TLS, the security level offered by SSL 3.0 is still relevant since many clients implement a protocol downgrade dance to work around server-side interoperability bugs. In this Security Advisory, we discuss how attackers can exploit the downgrade dance and break the cryptographic security of SSL 3.0. Our POODLE attack (Padding Oracle On Downgraded Legacy Encryption) will allow them, for example, to steal "secure" HTTP cookies (or other bearer tokens such as HTTP Authorization header contents). We then give recommendations for both clients and servers on how to counter the attack: if disabling SSL 3.0 entirely is not acceptable out of interoperability concerns, TLS implementations should make use of TLS_FALLBACK_SCSV. CVE-2014-3566 has been allocated for this protocol vulnerability. http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From phill at hallambaker.com Wed Oct 15 10:51:20 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Wed, 15 Oct 2014 10:51:20 -0400 Subject: [Cryptography] Compressed CRLS Message-ID: Rob Stradling and I have been looking at the revocation problem again. Using traditional techniques some CRLs have expanded to 3 MB. Using a novel compression technique we can create a CRLSet that allows encoding densities of about 6 bits per revoked certificate. In practice this means that we can give the status of every unexpired WebPKI SSL certificate from every public CA in about 170KB. [Rob scraped the Web and collected up 2.5 Million certs of which 250 thousand were revoked]. Delta CRLsets are also possible; a daily update would be about 5KB. The asymptotic space requirement (in bits) is |B| (log2 (|A|+|B|)), where A is the set of valid unexpired certs and B is the set of revoked unexpired certs. The compression technique is described here: http://tools.ietf.org/html/draft-hallambaker-compressedcrlset-00 Note that IPR claims do apply, but it is understood that any application to the Certificate Revocation problem for the Web would have to be open source compatible. The key to efficiency here is that the CRLSet only allows us to distinguish valid unexpired certs from revoked unexpired certs. While it gives exact answers for these cases, it gives a random answer for certs that are not in A or B. The paper describes the simplest approach we came up with. Rob's improvements double the efficiency. I will be in Hawaii if anyone wants to talk to me there.
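[Editor's sketch] The sizes quoted above can be sanity-checked directly from the asymptotic formula. A rough back-of-envelope sketch: the function name is mine, and the 2.5M/250k split just restates Rob's census.

```python
import math

def crlset_size_bytes(n_valid, n_revoked):
    """Asymptotic CRLSet size from the draft: |B| * log2(|A| + |B|) bits,
    with A the valid unexpired certs and B the revoked unexpired certs."""
    return n_revoked * math.log2(n_valid + n_revoked) / 8

# Rob's census: 2.5 million unexpired certs, 250 thousand of them revoked.
simple = crlset_size_bytes(2_250_000, 250_000)  # simplest-scheme bound
dense = 250_000 * 6 / 8                         # ~6 bits/revoked cert, as quoted
```

The asymptotic bound works out to roughly 660 kB, i.e. about 21 bits per revoked certificate; the quoted ~170 kB figure corresponds instead to the improved ~6 bits-per-certificate density (250,000 x 6 bits is about 190 kB), and a daily delta covering a few thousand fresh revocations at that density lands in the quoted ~5 kB range.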
From frantz at pwpconsult.com Tue Oct 14 20:46:28 2014 From: frantz at pwpconsult.com (Bill Frantz) Date: Tue, 14 Oct 2014 17:46:28 -0700 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: <4EF58F01-688F-4467-9AFC-C1CFBE1795B3@lrw.com> Message-ID: On 10/13/14 at 12:20 PM, leichter at lrw.com (Jerry Leichter) wrote: >>* We can't even ship a complete list of revoked keys in our CRLSets, >>for size reasons - forget about pins for all sites. >Why? I did the calculation in my original posting. You can >cover the top 100,000 sites in 30MB. That's the size of a >couple of image files used to make the browser demos look nice. Here I'm sitting, using my phone for Internet with a 2 gig limit before the charges start coming in. I avoid 30MB downloads like the plague. At home, with "unlimited" (i.e. how much bandwidth does DSL have anyway), I would feel differently. I also have friends with only dialup, and they will indeed feel very differently from me. Cheers - Bill ----------------------------------------------------------------------- Bill Frantz | Truth and love must prevail | Periwinkle (408)356-8506 | over lies and hate. | 16345 Englewood Ave www.pwpconsult.com | - Vaclav Havel | Los Gatos, CA 95032 From carimachet at gmail.com Wed Oct 15 14:21:20 2014 From: carimachet at gmail.com (Cari Machet) Date: Wed, 15 Oct 2014 20:21:20 +0200 Subject: [Cryptography] need a place to put some docs Message-ID: best place ? -- Cari Machet NYC 646-436-7795 carimachet at gmail.com AIM carismachet Syria +963-099 277 3243 Amman +962 077 636 9407 Berlin +49 152 11779219 Reykjavik +354 894 8650 Twitter: @carimachet 7035 690E 5E47 41D4 B0E5 B3D1 AF90 49D6 BE09 2187 Ruh-roh, this is now necessary: This email is intended only for the addressee(s) and may contain confidential information.
If you are not the intended recipient, you are hereby notified that any use of this information, dissemination, distribution, or copying of this email without permission is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cryptography at dukhovni.org Wed Oct 15 15:24:25 2014 From: cryptography at dukhovni.org (Viktor Dukhovni) Date: Wed, 15 Oct 2014 19:24:25 +0000 Subject: [Cryptography] factoring small(ish) numbers In-Reply-To: References: <20141014170944.GP13254@mournblade.imrryr.org> Message-ID: <20141015192425.GI13254@mournblade.imrryr.org> On Tue, Oct 14, 2014 at 11:53:48AM -0700, Tom Mitchell wrote: > This may open door number two. > Many key pairs depend on pairs of large primes. > However discovering large pairs is problematic so > large pseudo primes get used. Modern pseudo-primality tests are very good. > This does open an attack family because finding > large primes that have not been found by others > seems less likely and pseudo primes present a false > view of the number of valuable underlying bits. Primes are very common. Read about the prime-number theorem. For 512 bit numbers roughly one in every 384 is prime. > For a laptop or desktop to generate sufficiently interesting > prime number dependent key pairs seems difficult perhaps as difficult > as factoring large pseudo primes. This is wrong. > Perhaps I misjudge difficulty Grossly. -- Viktor. From phill at hallambaker.com Wed Oct 15 15:48:43 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Wed, 15 Oct 2014 15:48:43 -0400 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: References: Message-ID: On Sun, Sep 28, 2014 at 1:47 AM, Peter Gutmann wrote: > Chris Palmer writes: > >>On Saturday, September 27, 2014, Peter Gutmann wrote: >>> That's always puzzled me about CT, who is going to monitor these logs, and why >>> would they bother? 
This seems to be built from the same fallacy as "open- >>> source code is more secure because lots of people will be auditing the code >>> for security bugs". >> >>It's a simple matter of a shell script to scan logs for misissuance for names >>you care about. Google certainly cares, EFF and other activist organizations, >>PayPal, Facebook, ... > > So in other words it'll help the organisations who are already more or less > covered by certificate pinning (except that CT does it in a really roundabout, > complex manner rather than directly at the source as pinning does). > > Looking at what CT gives you, there seem to be three scenarios to cover: > > 1. Cert issued for Google or Paypal. > 2. Cert issued for First Bank of Podunk. > 3. Cert issued for www.verify-chase-credit-card.com. CT gives me an infrastructure of notaries that I can use as the starting point to hang useful stuff on. Think of it like the Olympics. No city wants the Olympics; they are a pain. But if you want to have a subway line built out from Boston to (say) Woburn, it will take 50 years through incremental expansion of the Green Line. Stick the Olympic stadium in Woburn and construction has to be done inside a five-year window. The big problem in PKI has always been revocation. The Diginotar event would not have been half as bad if we could revoke the certs. The idea of CT is to provide earlier notice that a CA has been breached and not noticed, an event that has so far happened once in 20 years. But without revocation it doesn't do much. Fortunately, compressed CRLs solve the revocation problem. But to apply our scheme you have to have a complete list of issued certs. So CT is convenient. From gomesbascoy at gmail.com Wed Oct 15 16:57:37 2014 From: gomesbascoy at gmail.com (Brian Gomes Bascoy) Date: Wed, 15 Oct 2014 16:57:37 -0400 Subject: [Cryptography] DSAuth: public key authentication for the web Message-ID: <543EDFC1.70808@gmail.com> Hi!
I am working on a small protocol for password-less authentication using ECDSA, for the world wide web. I already have a functional prototype which you can play with here: https://github.com/pera/dsauth/ In some way it's similar to SSH's public key authentication method. I'm using SJCL inside a user-friendly Chrome extension that manages the key pairs and does all the client-side part, and I also coded a Node.js web service for testing. I'm looking for any kind of comments, or even contributors :) Thank you very much! From coruus at gmail.com Wed Oct 15 17:46:19 2014 From: coruus at gmail.com (David Leon Gil) Date: Wed, 15 Oct 2014 17:46:19 -0400 Subject: [Cryptography] RFC: Generating RSA moduli / semiprimes with predetermined bits Message-ID: Request for citations! Does anyone happen to know early references for the generation of semiprimes with prescribed bit-patterns? In particular, I know of the following articles: Vanstone, Scott A., and Robert J. Zuccherato. "Short RSA keys and their generation." Journal of Cryptology 8, no. 2 (1995): 101-114. Lenstra, Arjen K. "Generating RSA moduli with a predetermined portion." In Advances in Cryptology - Asiacrypt '98, pp. 1-10. Springer Berlin Heidelberg, 1998. Young, Adam, and Moti Yung. "The Dark Side of 'Black-Box' Cryptography or: Should We Trust Capstone?." In Advances in Cryptology - CRYPTO '96, pp. 89-103. Springer Berlin Heidelberg, 1996. Desmedt, Yvo. "Abuses in cryptography and how to fight them." In Proceedings on Advances in Cryptology, pp. 375-389. Springer-Verlag New York, Inc., 1990. Young and Yung cite Yvo Desmedt as having introduced the idea for RSA moduli in particular. (I don't have this conference proceeding to verify the citation; can anyone verify this?) There are also some works by GJ Simmons (e.g., "The subliminal channel and digital signatures") from 1984-85 that seem apropos; does anyone know if this is discussed there?
Also, it seems like this would have made an interesting exercise in a number theory book; is it possible that the observation that you can choose half the bits in a semiprime by one of the methods in the papers above is described anywhere in that literature? - David From leichter at lrw.com Wed Oct 15 18:32:21 2014 From: leichter at lrw.com (Jerry Leichter) Date: Wed, 15 Oct 2014 18:32:21 -0400 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: References: Message-ID: <83898C95-56E1-40D6-8494-E0BFD9D5292A@lrw.com> On Oct 14, 2014, at 8:46 PM, Bill Frantz wrote: >>> * We can't even ship a complete list of revoked keys in our CRLSets, >>> for size reasons - forget about pins for all sites. >> Why? I did the calculation in my original posting. You can cover the top 100,000 sites in 30MB. That's the size of a couple of image files used to make the browser demos look nice. > > Here I'm sitting, using my phone for Internet with a 2 gig limit before the charges start coming in. I avoid 30MB downloads like the plague. a) Deltas will be tiny. How often does a site need to change its keys? b) You never connect to WiFi? Just how up-to-the-minute do you need your list of keys to be? > At home, with "unlimited" (i.e. how much bandwidth does DSL have anyway), I would feel differently. I also have friends with only dialup, and they will indeed feel very differently from me. What modern web sites are they looking at over dialup? At some point, one has to move on and stop supporting IE6 :-). Should we also worry about people still using 2400 baud modems? -- Jerry -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From benl at google.com Thu Oct 16 09:13:21 2014 From: benl at google.com (Ben Laurie) Date: Thu, 16 Oct 2014 13:13:21 +0000 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency References: Message-ID: On Sun Sep 28 2014 at 8:52:47 AM Peter Gutmann wrote: > Chris Palmer writes: > > >On Saturday, September 27, 2014, Peter Gutmann > wrote: > >> That's always puzzled me about CT, who is going to monitor these logs, > and why > >> would they bother? This seems to be built from the same fallacy as > "open- > >> source code is more secure because lots of people will be auditing the > code > >> for security bugs". > > > >It's a simple matter of a shell script to scan logs for misissuance for > names > >you care about. Google certainly cares, EFF and other activist > organizations, > >PayPal, Facebook, ... > > So in other words it'll help the organisations who are already more or less > covered by certificate pinning (except that CT does it in a really > roundabout, > complex manner rather than directly at the source as pinning does). > > Looking at what CT gives you, there seem to be three scenarios to cover: > > 1. Cert issued for Google or Paypal. > 2. Cert issued for First Bank of Podunk. > 3. Cert issued for www.verify-chase-credit-card.com. > > Case #1 is already handled by pinning, and cases #2 and #3 won't be helped > through CT. Why not? I'm not sure what your threat model is for 2, so hard to respond to it, but for 3, CT will allow you to see that this cert has been issued and object to it. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frantz at pwpconsult.com Wed Oct 15 20:18:18 2014 From: frantz at pwpconsult.com (Bill Frantz) Date: Wed, 15 Oct 2014 17:18:18 -0700 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: <83898C95-56E1-40D6-8494-E0BFD9D5292A@lrw.com> Message-ID: On 10/15/14 at 3:32 PM, leichter at lrw.com (Jerry Leichter) wrote: >On Oct 14, 2014, at 8:46 PM, Bill Frantz wrote: >>>> * We can't even ship a complete list of revoked keys in our CRLSets, >>>> for size reasons - forget about pins for all sites. >>> Why? I did the calculation in my original posting. You can cover the top 100,000 sites in 30MB. >That's the size of a couple of image files used to make the browser demos look nice. >> >>Here I'm sitting, using my phone for Internet with a 2 gig limit before the charges start coming >in. I avoid 30MB downloads like the plague. >a) Deltas will be tiny. How often does a site need to change its keys? >b) You never connect to WiFi? Just how up-to-the-minute do you need your list of keys to be? If deltas are tiny, then there is much less of a problem. If a site can continue to use its old key while phasing in a new one, then if I can select the old key I can wait until I get to an area of high bandwidth/cheap bandwidth before updating the keys. If, on the other hand, it all happens "automagically" in the background, I may get a big bill without knowing it. I have this problem with auto-download of replacement phone software. That download takes about an hour on DSL. I don't know what it will do to my cell phone bill, which is why I have the system set to download only on command. >>At home, with "unlimited" (i.e. how much bandwidth does DSL have anyway), I would feel differently. >I also have friends with only dialup, and they will indeed feel very differently from me. >What modern web sites are they looking at over dialup? I have no idea, but I could see them doing banking.
>At some point, one has to move on and stop supporting IE6 :-). >Should we also worry about people still using 2400 baud modems? Well, I think there will always be people with poor connectivity. I think we should make it possible for them to enjoy as much of the online world as possible. We probably can't show them movies, but email and text messaging are low bandwidth. Some level of web browsing is also possible, limited by their patience. How do you think we should treat them? Cheers - Bill ----------------------------------------------------------------------- Bill Frantz | Concurrency is hard. 12 out | Periwinkle (408)356-8506 | of 10 programmers get it wrong. | 16345 Englewood Ave www.pwpconsult.com | - Jeff Frantz | Los Gatos, CA 95032 From leichter at lrw.com Thu Oct 16 07:43:14 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 16 Oct 2014 07:43:14 -0400 Subject: [Cryptography] Secure transfer of subsequences Message-ID: <8335DC8C-9B3C-4EF1-95FF-304035872309@lrw.com> Suppose Alice has a sequence of bits S and wishes to securely transfer them to Bob, in the sense that Bob can prove that the bits S' he receives are actually the same as S. Depending on what we mean by "prove", either a keyed MAC or a signature will do the trick. But suppose S is very large, and Bob is only interested in some subsequence T of S. (That is, there is a sequence i0 < i1 < ... < ik such that T == S[i0] || S[i1] || ... || S[ik].) Again, Bob wishes to be able to prove that T is indeed of this form. Obviously, Bob can accomplish this by transferring all of S and then constructing T, but if |T| << |S|, this is very wasteful. Bob would like to do this by transferring a number of bits "not much larger" than |T|. (If the subsequence really is arbitrary, the cost of sending the i's might dominate. That's not the interesting part of the problem, and can be ignored.) Does anyone know of solutions to problems of this sort?
For the case that the subsequences consist of a small number of long contiguous runs of bits, using a tree hash on substrings of contiguous bits of S might work. I'm guessing that *some* constraints on S are necessary, but who knows. In general form, the closest thing I've seen to this is "proofs of storage". In terms of this problem, Bob sent S to Alice, who promises to keep it stored; Bob will later ask for something small - T might be an example, but is not enough - that Alice will be unable to produce unless she has, indeed, retained all of S. There are some clever ideas here that might be adapted, but I haven't kept up with the literature. -- Jerry From lists at peter.de.com Thu Oct 16 03:07:37 2014 From: lists at peter.de.com (Oliver Peter) Date: Thu, 16 Oct 2014 09:07:37 +0200 Subject: [Cryptography] need a place to put some docs In-Reply-To: References: Message-ID: <20141016070737.GA61818@mail.opdns.de> On Wed, Oct 15, 2014 at 08:21:20PM +0200, Cari Machet wrote: > best place ? 127.0.0.1 -- Oliver PETER oliver at gfuzz.de 0x456D688F -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From pgut001 at cs.auckland.ac.nz Thu Oct 16 07:21:07 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Fri, 17 Oct 2014 00:21:07 +1300 Subject: [Cryptography] HP accidentally signs malware, will revoke certificate In-Reply-To: Message-ID: Jerry Leichter writes: >Microsoft has had such a mechanism - known as a killbit >http://en.wikipedia.org/wiki/Killbit - for many years. It applies only to >Active-X controls - it's not clear why they never extended the idea to >arbitrary code. However, they could probably get essentially the same effect >with their malware scanner. 
> >OS X has a similar mechanism with its simple-minded malware blacklisting >mechanism, which has a special-purpose extension to do such things as >blacklisting outdated versions of Java and Flash. > >iOS apparently includes a "kill application" mechanism which would allow Apple >to quickly prevent a malicious app from running. (Apple has never used this, >saying it's there for emergencies.) I don't think Android has an equivalent >mechanism, and it certainly wouldn't work for stuff installed from alternative >stores. X.509 doesn't handle this situation by design. More than a decade ago a neverValid revocation flag to handle this type of situation was proposed and rejected because That's Not How PKI Is Supposed To Work: we cannot allow a status which implies 'please unwind all transactions using this certificate, the purchaser must return the goods and a refund will be issued' as this removes all the certainty which PKIX is trying to provide. PKI provides absolute certainty, dammit, and to have any kind of facility that even hints otherwise is heresy. Peter. From sneves at dei.uc.pt Wed Oct 15 20:50:42 2014 From: sneves at dei.uc.pt (Samuel Neves) Date: Thu, 16 Oct 2014 01:50:42 +0100 Subject: [Cryptography] RFC: Generating RSA moduli / semiprimes with predetermined bits In-Reply-To: References: Message-ID: <543F1662.5040305@dei.uc.pt> On 10/15/2014 10:46 PM, David Leon Gil wrote: > Request for citations! > > Does anyone happen to know early references for the generation of > semiprimes with prescribed bit-patterns? > > In particular, I know of the following articles: > > Vanstone, Scott A., and Robert J. Zuccherato. "Short RSA keys and > their generation." Journal of Cryptology 8, no. 2 (1995): 101-114. [1] points out that [2, Section 2.1] predates Vanstone and Zuccherato, invalidating their 1994 patent on the technique. [1] http://cr.yp.to/papers/sigs.pdf [2] http://link.springer.com/chapter/10.1007%2F3-540-46877-3_42 > > Lenstra, Arjen K.
"Generating RSA moduli with a predetermined > portion." In Advances in Cryptology - Asiacrypt '98, pp. 1-10. Springer > Berlin Heidelberg, 1998. > > Young, Adam, and Moti Yung. "The Dark Side of 'Black-Box' > Cryptography or: Should We Trust Capstone?." In Advances in > Cryptology - CRYPTO '96, pp. 89-103. Springer Berlin Heidelberg, 1996. > > Desmedt, Yvo. "Abuses in cryptography and how to fight them." In > Proceedings on Advances in Cryptology, pp. 375-389. Springer-Verlag > New York, Inc., 1990. > > Young and Yung cite Yvo Desmedt as having introduced the idea for RSA > moduli in particular. (I don't have this conference proceeding to > verify the citation; can anyone verify this?) The article is freely available here, as far as I can tell: http://link.springer.com/chapter/10.1007%2F0-387-34799-2_29#page-1 Section 3.1 does mention that "Another method for leaking information is to choose p and q such that the least significant bits of n have a special form not required by the specifications". > > There are also some works by GJ Simmons (e.g., "The subliminal channel > and digital signatures") from 1984-85 that seem apropos; does anyone > know if this is discussed there? This one is also available here: http://link.springer.com/chapter/10.1007%2F3-540-39757-4_25. As far as I can tell, there is no mention of generating special semiprimes; the subliminal channel is the signatures themselves, not the modulus. From natanael.l at gmail.com Thu Oct 16 14:07:37 2014 From: natanael.l at gmail.com (Natanael) Date: Thu, 16 Oct 2014 20:07:37 +0200 Subject: [Cryptography] Secure transfer of subsequences In-Reply-To: <8335DC8C-9B3C-4EF1-95FF-304035872309@lrw.com> References: <8335DC8C-9B3C-4EF1-95FF-304035872309@lrw.com> Message-ID: On 16 Oct 2014 19:36, "Jerry Leichter" wrote: > > Suppose Alice has a sequence of bits S and wishes to securely transfer them to Bob, in the sense that Bob can prove that the bits S' he receives are actually the same as S.
Depending on what we mean by "prove", either a keyed MAC or a signature will do the trick. > > But suppose S is very large, and Bob is only interested in some subsequence T of S. (That is, there is a sequence i0 < i1 < ... < ik such that > T == S[i0] || S[i1] || ... || S[ik].) Again, Bob wishes to be able to prove that T is indeed of this form. Obviously, Bob can accomplish this by transferring all of S and then constructing T, but if |T| << |S|, this is very wasteful. Bob would like to do this by transferring a number of bits "not much larger" than |T|. (If the subsequence really is arbitrary, the cost of sending the i's might dominate. That's not the interesting part of the problem, and can be ignored.) > > Does anyone know of solutions to problems of this sort? For the case that the subsequences consist of a small number of long contiguous runs of bits, using a tree hash on substrings of contiguous bits of S might work. I'm guessing that *some* constraints on S are necessary, but who knows. > > In general form, the closest thing I've seen to this is "proofs of storage". In terms of this problem, Bob sent S to Alice, who promises to keep it stored; Bob will later ask for something small - T might be an example, but is not enough - that Alice will be unable to produce unless she has, indeed, retained all of S. There are some clever ideas here that might be adapted, but I haven't kept up with the literature. The only thing I can think of that you haven't mentioned already is zero-knowledge proofs, showing that what you provided really is exactly the same as the bits in range X to Y within the file with the hash Z (a less formal way to describe what you explained in your second paragraph). These are still very inefficient to generate, OTOH the proofs can be constant size and quick to verify. I would think tree hashes are the simplest practical solution. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stephen.farrell at cs.tcd.ie Thu Oct 16 14:32:58 2014 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Thu, 16 Oct 2014 19:32:58 +0100 Subject: [Cryptography] DSAuth: public key authentication for the web In-Reply-To: <543EDFC1.70808@gmail.com> References: <543EDFC1.70808@gmail.com> Message-ID: <54400F5A.1040709@cs.tcd.ie> On 15/10/14 21:57, Brian Gomes Bascoy wrote: > Hi! I am working on a small protocol for password-less authentication > using ECDSA, for the world wide web. I already have a functional > prototype which you can play with here: > https://github.com/pera/dsauth/ > > In some way it's similar to SSH's public key authentication method. > I'm using SJCL inside a user-friendly Chrome extension that manages the > key pairs and does all the client-side part, and I also coded a Node.js web > service for testing. > > I'm looking for any kind of comments, or even contributors :) Be interested in how that compares to [1] which is (cross fingers) almost done processing in the IETF. See also [2] for an implementation. S. [1] https://tools.ietf.org/html/draft-ietf-httpauth-hoba [2] https://hoba.ie/ > > Thank you very much! > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > > From jsd at av8n.com Thu Oct 16 23:53:37 2014 From: jsd at av8n.com (John Denker) Date: Thu, 16 Oct 2014 20:53:37 -0700 Subject: [Cryptography] Whisper app tracks 'anonymous' users Message-ID: <544092C1.1080502@av8n.com> In case you missed it: The Whisper app is a product of WhisperText LLC. On the GooglePlay store it claims: > Whisper is an anonymous social network that allows people to express > themselves, connect with like-minded individuals, and discover the > unseen world around us.
With Whisper, you can anonymously share your > thoughts and emotions with the world, and form lasting and meaningful > relationships in a community built around trust and honesty. If you > have ever had something too intimate to share on traditional social > networks, simply share it on Whisper. Now there is a long, detailed article: Paul Lewis and Dominic Rushe "Revealed: how Whisper app tracks 'anonymous' users" The Guardian, 16 October 2014 11.35 EDT http://www.theguardian.com/world/2014/oct/16/-sp-revealed-whisper-app-tracking-users Apparently Whisper's notion of "anonymous" means they don't overtly ask for your name. They do however capture and store all of your messages. By default they also capture and store your GPS location. If you opt out of "location tracking" they track your location anyway, using IP location and perhaps other means; I don't know if they use cell tower triangulation. If you post something sufficiently provocative, they have staffers who undertake to track you "for the rest of your life." Also, the firm has cozy relationships with FBI, DoD, and news organizations. You can read the story: http://www.theguardian.com/world/2014/oct/16/-sp-revealed-whisper-app-tracking-users My thoughts: Evidently "anonymity" is not the same as privacy. AFAICT using Whisper is like running around in public all day every day, naked except for a tag that says "Hello, my name is Anonymous". I wonder how many of the folks who signed up for the service were aware of this. I wonder how many other "free" apps play by the same rules. Does anybody have any suggestions for how to prevent this sort of thing?
From ekaggata at gmail.com Thu Oct 16 18:02:36 2014 From: ekaggata at gmail.com (Adam Gibson) Date: Fri, 17 Oct 2014 01:02:36 +0300 Subject: [Cryptography] TLSNotary In-Reply-To: <5414368A.6030208@gmail.com> References: <201409121621.s8CGLuQN002099@home.unipay.nl> <541435C3.8080903@gmail.com> <5414368A.6030208@gmail.com> Message-ID: <5440407C.6040809@gmail.com> If anyone would be kind enough to take a look at this (it was posted here last month), and let us know if you see any problems, we'd be grateful. (Note to moderator: if this kind of bump, which I don't intend to do again, is not appropriate, no worries :) ) Regards, Adam Gibson On 09/13/2014 03:20 PM, Adam Gibson wrote: > Paper: > https://github.com/tlsnotary/tlsnotary/blob/master/data/documentation/TLSNotary.pdf?raw=true > > TLSNotary is intended to allow an auditor to audit the contents of an > https server response (e.g. html page), without the auditor having > control of or access to the live TLS session between client and server. > It is restricted to TLS 1.0/1.1, at least for now. > > To boil it down to the simplest terms: > > * Have the auditor and client separately generate two 'halves' of the > premaster secret. > * Use the TLS 1.0/1.1 PRF** to have the auditor hold the server mac write > secret, while the client (called 'auditee' in the paper) holds the other > secrets in the expanded keys. > * Client Key Exchange message can still work without the client having > the full premaster secret, using the RSA homomorphism*** (hence - > restricted to RSA cipher suites). > * Client sends an initial request on the new connection as normal, > receives the response, but is not able to authenticate it, as it doesn't have the > server mac secret. > * Client makes commitment of server response to auditor. Auditor "hands > over" server mac secret, client can now safely authenticate. > * Finally, if the client is happy with the material to be audited (i.e.
> html page usually), he can pass this over to the auditor as a 'reveal' > of the earlier commitment. > > ** This part is the main innovation and is described in the paper, > Section 2.1. > *** This is described in detail in Section 2.2. It (at the moment) > necessitates a considerable reduction in the entropy of the premaster > secret; crudely, one could say that the entropy protecting from an > external attacker is reduced from 46 bytes to about 21 bytes, and the > protection the auditee has from the auditor is only 12 bytes (but such > an attack would have to be carried out in an unfeasibly short time). > > Motivation: we were motivated by the problem of the difficulty of doing > decentralised exchange of bitcoin for things like bank wires - the > question being, is there any way to cryptographically prove a bank wire > has taken place (rather than relying on less sound proof methods). > Obviously, using such a system for something as sensitive as bank > statements raises the stakes considerably. However, our main motivation > in describing the algorithm here is to ask if anyone can see holes in it > either cryptographically or in more general computer security terms. > > An example of the non-cryptographic concern would be: what happens if > the html page sent to the auditor contains a session cookie? We believe > that it would be sufficient to log out of the session in advance. > We don't see exposure of login credentials as a concern, because the > client/auditee chooses a page within the logged-in site to audit (the > audit, notice, only covers one single server response), and the auditee > can sanity check before sending any material to the auditor anyway. > Perhaps these arguments aren't convincing - opinions welcome. > > There is a working implementation at > https://github.com/tlsnotary/tlsnotary - needs Firefox and Python 2. 
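The commit-then-reveal step described above is essentially a standard hash commitment: the auditee binds itself to the server response before the auditor hands over the server mac secret. A minimal sketch in Python (illustrative only; the function names are made up here, and TLSNotary's actual message formats are defined in the paper):

```python
import hashlib
import hmac
import os

def commit(data):
    """Auditee: commit to the server response before receiving the mac
    secret. Only the commitment is sent to the auditor; the nonce is kept."""
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + data).digest(), nonce

def reveal_ok(commitment, nonce, data):
    """Auditor: check that the revealed data matches the earlier commitment."""
    return hmac.compare_digest(commitment, hashlib.sha256(nonce + data).digest())

# The auditee commits to the still-unauthenticated server response; only
# after this does the auditor hand over the server mac secret.
response = b"HTTP/1.1 200 OK\r\n...body..."
c, n = commit(response)
assert reveal_ok(c, n, response)              # honest reveal accepted
assert not reveal_ok(c, n, b"tampered page")  # altered response rejected
```

The ordering is the point: because the commitment is fixed before the auditee can forge mac tags, the later reveal cannot be swapped for a doctored page.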
> There is a specific 'self-test' mode to allow the curious to try it out > on their own - here you act as both auditor and auditee at the same time > on your own machine. > > Many thanks for any feedback, > Adam Gibson (see contacts for me and the other two developers on the > README). > From ji at tla.org Fri Oct 17 10:36:25 2014 From: ji at tla.org (John Ioannidis) Date: Fri, 17 Oct 2014 10:36:25 -0400 Subject: [Cryptography] =?utf-8?q?Whisper_app_tracks_=E2=80=98anonymous?= =?utf-8?b?4oCZIHVzZXJz?= In-Reply-To: <544092C1.1080502@av8n.com> References: <544092C1.1080502@av8n.com> Message-ID: On Thu, Oct 16, 2014 at 11:53 PM, John Denker wrote: > In case you missed it: > > The Whisper app is a product of WhisperText LLC. > On the GooglePlay store it claims: > > > Whisper is an anonymous social network that allows people to express > > themselves, connect with like-minded individuals, and discover the > > unseen world around us. With Whisper, you can anonymously share your > > thoughts and emotions with the world, and form lasting and meaningful > > relationships in a community built around trust and honesty. If you > > have ever had something too intimate to share on traditional social > > networks, simply share it on Whisper > > Now there is a long, detailed article: > Paul Lewis and Dominic Rushe > "Revealed: how Whisper app tracks 'anonymous' users" > The Guardian, 16 October 2014 11.35 EDT > > http://www.theguardian.com/world/2014/oct/16/-sp-revealed-whisper-app-tracking-users > > Apparently Whisper's notion of "anonymous" means they don't > overtly ask for your name. They do however capture and > store all of your messages. By default they also capture > and store your GPS location. If you opt out of "location > tracking" they track your location anyway, using IP > location and perhaps other means; I don't know if they > use cell tower triangulation. 
> > If you post something sufficiently provocative, they have > staffers who undertake to track you "for the rest of your > life." > > Also, the firm has cozy relationships with FBI, DoD, and > news organizations. > > You can read the story: > > http://www.theguardian.com/world/2014/oct/16/-sp-revealed-whisper-app-tracking-users > > My thoughts: > > Evidently "anonymity" is not the same as privacy. AFAICT > using Whisper is like running around in public all day every > day, naked except for a tag that says "Hello, my name is > Anonymous". > > I wonder how many of the folks who signed up for the service > were aware of this. > > I wonder how many other "free" apps play by the same rules. > > Does anybody have any suggestions for how to prevent this > sort of thing? > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography The standard answer to all of these questions is that if you get something for free, you are not the customer, you are the product. Do the people who install whisper really think that the company that runs it does it out of the goodness of their heart? /ji -------------- next part -------------- An HTML attachment was scrubbed... URL: From ji at tla.org Fri Oct 17 11:34:04 2014 From: ji at tla.org (John Ioannidis) Date: Fri, 17 Oct 2014 11:34:04 -0400 Subject: [Cryptography] =?utf-8?q?Whisper_app_tracks_=E2=80=98anonymous?= =?utf-8?b?4oCZIHVzZXJz?= In-Reply-To: References: <544092C1.1080502@av8n.com> Message-ID: On Fri, Oct 17, 2014 at 10:54 AM, Lodewijk André de la porte wrote: > 2014-10-17 16:36 GMT+02:00 John Ioannidis : > >> The standard answer to all of these questions is that if you get >> something for free, you are not the customer, you are the product. Do the >> people who install whisper really think that the company that runs it does >> it out of the goodness of their heart? > > > I have a good (enough) heart. 
If it were me running the service it would > be as good as I managed to make it. I'm slowly learning not to be so > perfectionistic that I just drop the product, only to find alternatives are > SO MUCH WORSE. > > Maybe I'm just doing it wrong? > Someone has to pay for your datacenter and your network connectivity even if you are willing to write and maintain the software for free. /ji > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Fri Oct 17 07:32:43 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 17 Oct 2014 07:32:43 -0400 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: References: Message-ID: <18F802A5-AE2C-4445-9BB1-654BF3AC4EE3@lrw.com> On Oct 15, 2014, at 8:18 PM, Bill Frantz wrote: >>> Here I'm sitting, using my phone for Internet with a 2 gig limit before the charges start coming >> in. I avoid 30MB downloads like the plague. >> a) Deltas will be tiny. How often does a site need to change its keys? >> b) You never connect to WiFi? Just how up-to-the-minute do you need your list of keys to be? > > If deltas are tiny, then there is much less of a problem. Since we're talking specifically about size, keep in mind that my 30 meg estimate was deliberately on the high side, assuming 100,000 RSA keys and uncompressed site names. If you use ECC, compressed site names, and let those for whom size is a major issue use a shorter list, the file could be dramatically smaller. > If a site can continue to use its old key while phasing in a new one, then if I can select the old key I can wait until I get to an area of high bandwidth/cheap bandwidth before updating the keys. Keep in mind that for this to work there needs to be a proper key rollover procedure. Presumably the site will send you its new key signed with its old one, which you'll normally accept if your copy of the old key is recent enough. 
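The rollover rule just described -- accept a new key only if it is signed by the old one and your cached copy of the old key is fresh enough -- can be sketched as follows. A sketch under stated assumptions: a real deployment would use an actual public-key signature (RSA/ECDSA) where the HMAC placeholder stands, and the 90-day freshness window is an arbitrary choice, not anything proposed in this thread.

```python
import hashlib
import hmac
import time

MAX_AGE = 90 * 24 * 3600  # arbitrary freshness window: 90 days

def sign(key, msg):
    # Placeholder for a real public-key signature over the new key;
    # HMAC is used here only to keep the sketch self-contained.
    return hmac.new(key, msg, hashlib.sha256).digest()

def accept_rollover(cached_key, cached_at, new_key, signature, now):
    """Accept new_key only if it is signed by the cached (old) key
    and the cached copy is recent enough to still be trusted."""
    fresh = (now - cached_at) <= MAX_AGE
    valid = hmac.compare_digest(signature, sign(cached_key, new_key))
    return fresh and valid

old_key, new_key = b"old-site-key", b"new-site-key"
sig = sign(old_key, new_key)
now = time.time()
assert accept_rollover(old_key, now - 3600, new_key, sig, now)                 # fresh cache, valid sig
assert not accept_rollover(old_key, now - 200 * 24 * 3600, new_key, sig, now)  # cache too stale
```

The freshness check is what makes the delta-update scheme workable offline: a client that waited too long to update simply falls back to fetching a full list rather than trusting an arbitrarily old chain of rollovers.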
That doesn't handle revocation in the case of key compromise - but that's impossible without real-time validation of keys, which (a) is an independent issue that's the same whether we are talking certs or keys; (b) comes with its own bag of problems. > If, on the other hand, it all happens "automagically" in the background, I may run up a big bill without knowing it. I have this problem with auto-download of replacement phone software. That download takes about an hour on DSL. I don't know what it will do to my cell phone bill, which is why I have the system set to download only on command. A decent implementation would need to allow you to control this. It's not hard. (In fact, in iOS the ability to use cellular data is directly controllable on an app by app basis through the OS, exactly to deal with this kind of thing. I don't know if recent versions of Android provide the same capability.) >>> At home, with "unlimited" (i.e. how much bandwidth does DSL have anyway), I would feel differently. >> I also have friends with only dialup, and they will indeed feel very differently from me. >> What modern web sites are they looking at over dialup? > > I have no idea, but I could see them doing banking. I stopped thinking of dialup as a significant constraint when my wife's parents finally, after complaining for years about how "they take away the stuff you use and make you pay more", dropped their AOL dialup account and got their Internet through the cable provider they were paying for anyway. One bases one's views on personal experience, I guess. The most recent statistic for the US that I was able to find in a quick search was a Pew survey done in May 2013, at which time 3% of users used dialup for Internet access from home - a percentage that hadn't changed since an August 2011 survey, even as broadband usage grew from 62% to 70%. 
Having seen - even a couple of years back - just how limited a view of the Internet dialup provides today, I wouldn't worry very much about how to provide crypto updates to that remaining population. What they need much more is better access. >> At some point, one has to move on and stop supporting IE6 :-). Should we also worry about people still using 2400 baud modems? > > Well, I think there will always be people with poor connectivity. I think we should make it possible for them to enjoy as much of the online world as possible. We probably can't show them movies, but email and text messaging are low bandwidth. Some level of web browsing is also possible, limited by their patience. > > How do you think we should treat them? I don't see any particular responsibility on anyone's part to adjust things to an obsolete least-common-denominator. Back in the 1970's, I designed protocols that had to run in Europe over X.25. The European PTT's at the time had a complete monopoly on any communications infrastructure that crossed public space. If you had two buildings on opposite sides of a road, you couldn't run a wire between them - you had to use the PTT and X.25. X.25 was charged by the packet - I think 128 bytes. Send a one-byte ACK - pay for a packet. And that packet did not come cheap. Designing for this environment was crippling. Imagine if the designers of IP had been told that it had to work - at "reasonable cost" - over X.25. Times change. Technology changes. Yes, people get left behind unless they are in a position to upgrade. The US continues to have massive fraud problems with credit cards because we refused for so long to move off of magnetic stripe technology: Just think of the cost of replacing all those POS systems! Even now, we're doing a half-way move to chip-and-signature, which is only a small step toward chip-and-pin - but does a better job of keeping all those "legacy" players going. 
From l at odewijk.nl Fri Oct 17 10:54:46 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Fri, 17 Oct 2014 16:54:46 +0200 Subject: [Cryptography] =?utf-8?q?Whisper_app_tracks_=E2=80=98anonymous?= =?utf-8?b?4oCZIHVzZXJz?= In-Reply-To: References: <544092C1.1080502@av8n.com> Message-ID: 2014-10-17 16:36 GMT+02:00 John Ioannidis : > The standard answer to all of these questions is that if you get something > for free, you are not the customer, you are the product. Do the people who > install whisper really think that the company that runs it does it out of > the goodness of their heart? I have a good (enough) heart. If it were me running the service it would be as good as I managed to make it. I'm slowly learning not to be so perfectionistic that I just drop the product, only to find alternatives are SO MUCH WORSE. Maybe I'm just doing it wrong? Also: why are people at all challenged in making text communication apps secure?? It's... Text, encryption, network-traffic-fuzzing, done. And how could people think snapchat was made for anything other than causing you to send naked pictures to the guys who made it? It's so ridiculous.. If you can view it on screen, someone can save it. They defended /reasonably well/ against screenshots, but then you find out they used AES with *hardcoded key?!* I spent last night producing a trustable random source, so that I can generate IVs, because just having a random password didn't seem enough when you use CBC... And then this multi-million-dollar company doesn't even use asymmetric crypto to generate keypairs??? WTF???? (actually, this is pretty important if they'd want to look at everyone's pictures) It's *NOT HARD *and I *DON'T* believe it's just me who thinks so. I'm *not that smart*! In fact, just saying *it's not that hard* makes people upset with my arrogance, wtf? 
Could someone please give me some reasonable measure of how hard it is, so that this completely paradoxical part of reality can just go "poof" and disappear? I honestly don't get it. (Which is, incidentally, further evidence of not being that smart) -------------- next part -------------- An HTML attachment was scrubbed... URL: From l at odewijk.nl Fri Oct 17 11:36:18 2014 From: l at odewijk.nl (=?UTF-8?Q?Lodewijk_andr=C3=A9_de_la_porte?=) Date: Fri, 17 Oct 2014 17:36:18 +0200 Subject: [Cryptography] =?utf-8?q?Whisper_app_tracks_=E2=80=98anonymous?= =?utf-8?b?4oCZIHVzZXJz?= In-Reply-To: References: <544092C1.1080502@av8n.com> Message-ID: On Oct 17, 2014 5:34 PM, "John Ioannidis" wrote: > > On Fri, Oct 17, 2014 at 10:54 AM, Lodewijk André de la porte wrote: >> >> Maybe I'm just doing it wrong? > > > Someone has to pay for your datacenter and your network connectivity even if you are willing to write and maintain the software for free. Who said anything about centralized hosting? Why does a product have to be bad to make money? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Fri Oct 17 19:29:52 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 17 Oct 2014 16:29:52 -0700 Subject: [Cryptography] =?utf-8?q?Whisper_app_tracks_=E2=80=98anonymous?= =?utf-8?b?4oCZIHVzZXJz?= In-Reply-To: References: <544092C1.1080502@av8n.com> Message-ID: On Fri, Oct 17, 2014 at 8:36 AM, Lodewijk André de la porte wrote: > On Oct 17, 2014 5:34 PM, "John Ioannidis" wrote: > > On Fri, Oct 17, 2014 at 10:54 AM, Lodewijk André de la porte < > l at odewijk.nl> wrote: > >> Maybe I'm just doing it wrong? > > > > Someone has to pay for your datacenter and your network connectivity > even if you are willing to write and maintain the software for free. > > Who said anything about centralized hosting? Why does a product have to be > bad to make money? > Meta data.... it is harmless. 
-- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Fri Oct 17 20:17:58 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 17 Oct 2014 17:17:58 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: On Sun, Oct 12, 2014 at 7:18 AM, Arnold Reinhold wrote: > > > On Oct 9, 2014, at 1:52 AM, Tom Mitchell wrote: > > ... > > A free running tick counter that never overflows is a good thing. > Freedom > > from time of day issues leap seconds and more make it easy. The > frequency > > choice is open and precision and accuracy is open. An external map of > ticks to > > historic real world time (and temperature) is interesting in the right > context. > > A simple counter with no overflow would work, of course, but Inexpensive > cpu clock chips, like the DS-1307 family, provide a 99 year range with one > second resolution and have all the circuitry for dual supply (5 VDC and > battery) with very low power (500 na) operation on battery. Another > possible advantage over a straight counter: yy-mm-dd-hh-ss in a time stamp > is a lot easier to explain to a judge and jury than a long hexadecimal > constant. > > Here's a data point. I installed a cheap digital video recorder for a > surveillance system just over four years ago. It's not connected to the > Internet and I never adjusted the clock since installing it. I had to pull > a clip off of it last week and the clock was 44 minutes fast. That's about > a minute a month. > > So if the device grabbed the current NIST beacon signed it with its > internal clock and had the resulting certificate time stamped by an > external authority once a month, that should be enough to establish minute > accuracy. I am with you except for the "grab NIST beacon" part. This implies that the clock can be set and reset. This muddies the accuracy and precision stuff further. 
If it can be reset based on an external reference then the jury can be told that the reference is unreliable even if the device is understood... http://pdfserv.maximintegrated.com/en/an/AN504.pdf Some of this has been addressed with key generation devices where management can connect the device and set and reset the device while matching it to a user or user group account. For the specific case of validating a photo equivalent without a preexisting trust anchor the problem is hard. One special case involving security camera images where a cell phone image (any camera) and security camera data of the same public location can be compared and the relative location of people in motion can be matched. Now the date time has value in finding the corresponding video images captured and archived from multiple angles and from multiple authorities. The low cost and low power of a DS-1307 does make it interesting. It also moves a power requirement away from programmable logic or processing that do not need to be on all the time. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd at av8n.com Fri Oct 17 21:32:48 2014 From: jsd at av8n.com (John Denker) Date: Fri, 17 Oct 2014 18:32:48 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: <5441C340.6020709@av8n.com> On 10/17/2014 05:17 PM, Tom Mitchell wrote: > I am with you so far so good .... > except for the "grab NIST beacon" part. This implies that > the clock can be set and reset. ? Resetting the local clock hardware is not necessary, not desirable, and not implied by anything that was said. When you grab the official time from NIST or wherever, you should use that to write a calibration certificate, which you keep in a file along with all the previous calibration certificates. The local clock hardware continues to be free-running and imperturbable. 
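The calibration-certificate idea above can be sketched as follows. Assumptions for illustration only: each certificate is reduced to a (local_tick, official_time) pair, and the calibration function is taken to be a least-squares line over all of them (the real function need only be one-to-one and near unit slope, as described in this thread).

```python
def fit_calibration(certs):
    """Least-squares line official = a + b * local over all certificates.
    Each certificate is a (local_tick, official_time) pair; the local
    counter itself is never adjusted."""
    n = float(len(certs))
    sx = sum(t for t, _ in certs)
    sy = sum(o for _, o in certs)
    sxx = sum(t * t for t, _ in certs)
    sxy = sum(t * o for t, o in certs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Simulated certificates from a counter running 21 ppm fast (roughly the
# "minute a month" drift reported elsewhere in this thread).
certs = [(t * (1 + 21e-6), float(t)) for t in range(0, 10_000_000, 1_000_000)]
a, b = fit_calibration(certs)
official = a + b * 5_500_000  # calibrated official time for a local reading
assert abs(b - 1.0) < 1e-4    # very nearly unit slope
```

Because every new certificate adds a constraint without touching the counter, dropping any one certificate (as in jackknife resampling) barely moves the fitted line, which is what lets you bound the calibration error.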
Using the calibration certificates, you can define a /calibration function/ that gives the official time as a function of the local clock hardware reading. This function is a) one-to-one, b) continuous, c) differentiable [except on a set of measure zero, at worst], d) very nearly unit slope, and e) highly overconstrained. > This muddies the accuracy and precision > stuff further. > If it can be reset based on an external reference then the jury can be told > that > the reference is unreliable even if the device is understood... Because it is overconstrained, you can perform jackknife resampling and claim that the calibrated time was never off by more than XYZ milliseconds during the times of interest. There is strong evidence in support of this claim and no evidence against it. Courts have seen this sort of calibration a gazillion times, e.g. for the speedometers and radars in police cars. If you do it right, the evidence is so overwhelming that the adversary will not seriously consider challenging it. A challenge would look like a crackpot move, and would just be an admission of weakness and desperation. From agr at me.com Sat Oct 18 22:40:48 2014 From: agr at me.com (Arnold Reinhold) Date: Sat, 18 Oct 2014 22:40:48 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: On Oct 17, 2014, at 8:17 PM, Tom Mitchell wrote: > On Sun, Oct 12, 2014 at 7:18 AM, Arnold Reinhold wrote: > > On Oct 9, 2014, at 1:52 AM, Tom Mitchell wrote: > > ... > > A free running tick counter that never overflows is a good thing. Freedom > > from time of day issues leap seconds and more make it easy. The frequency > > choice is open and precision and accuracy is open. An external map of ticks to > > historic real world time (and temperature) is interesting in the right context. 
> > A simple counter with no overflow would work, of course, but Inexpensive cpu clock chips, like the DS-1307 family, provide a 99 year range with one second resolution and have all the circuitry for dual supply (5 VDC and battery) with very low power (500 na) operation on battery. Another possible advantage over a straight counter: yy-mm-dd-hh-ss in a time stamp is a lot easier to explain to a judge and jury than a long hexadecimal constant. > > Here's a data point. I installed a cheap digital video recorder for a surveillance system just over four years ago. It's not connected to the Internet and I never adjusted the clock since installing it. I had to pull a clip off of it last week and the clock was 44 minutes fast. That's about a minute a month. > > So if the device grabbed the current NIST beacon signed it with its internal clock and had the resulting certificate time stamped by an external authority once a month, that should be enough to establish minute accuracy. > > I am with you except for the "grab NIST beacon" part. This implies that > the clock can be set and reset. This muddies the accuracy and precision stuff further. > If it can be reset based on an external reference then the jury can be told that > the reference is unreliable even if the device is understood... > http://pdfserv.maximintegrated.com/en/an/AN504.pdf No, you misunderstood me. My conception is that the clock can never be reset or adjusted once the device is FIPS-140 sealed during the manufacturing process. For example, a module that contains the clock chip, crystal and battery, as described in the Maxim application note above, might first be plugged into a station that starts up the clock and set its time. If the NVRAM on the clock chip were used, it would be initialized as well. The clock module would then be plugged into the camera board. The camera board would include a hardware interface that did not permit writing to the clock module. 
A parallel interface might wire the write line off. An I2C serial interface might use a special state machine that never asserts the write bit. My "grab NIST beacon" step is part of creating an electronic document that might be called a Clock Calibration Certificate (CCC). The camera gets the latest NIST beacon, appends its current clock reading, signs it with its secret key, and then sends the resulting document to a time stamping authority. The timestamped document is our CCC (or maybe we have the camera add a second internal clock time stamp and sig). Each CCC bounds the actual time that corresponds to the internal clock reading. It is no earlier than the time of the NIST beacon value and no later than the time stamp authority's time stamp. CCCs can be generated periodically or at the start of a picture-taking session. The CCCs can be stored on the camera, as they are small compared to even a single photo, and/or backed up to the cloud, a server or a registrar. Each image file might have the latest CCC attached and perhaps enough older CCCs to allow the camera clock's drift rate to be calculated, allowing more precise time measurements. There are a variety of ways of using the CCC, but the point is that each one is an irrefutable comparison of internal clock time with time traceable to national standards. > > Some of this has been addressed with key generation devices where > management can connect the device and set and reset the device > while matching it to a user or user group account. That is a possibility, but I like the simplicity of a device that cannot have its clock and security parameters altered. > > For the specific case of validating a photo equivalent without a preexisting trust anchor > the problem is hard. > > One special case involving security camera images where a cell phone > image (any camera) and security camera data of the same public > location can be compared and the relative location of people in motion > can be matched. 
Now the date time has value in finding the corresponding > video images captured and archived from multiple angles and from multiple > authorities. If the camera has Internet access, it might be able to use the same approach of getting a NIST beacon, appending it to an image or set of images, then hashing everything. Or maybe use the NIST beacon number as the key for a keyed hash. Then time stamp the hash > > The low cost and low power of a DS-1307 does make it interesting. It also > moves a power requirement away from programmable logic or processing > that do not need to be on all the time. > One fewer wheel to re-invent. Arnold Reinhold -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesd at echeque.com Sat Oct 18 22:52:00 2014 From: jamesd at echeque.com (James A. Donald) Date: Sun, 19 Oct 2014 12:52:00 +1000 Subject: [Cryptography] [messaging] Gossip doesn't save Certificate Transparency In-Reply-To: <328D961E-7923-4585-9EB8-AF85204DBBB4@me.com> References: <328D961E-7923-4585-9EB8-AF85204DBBB4@me.com> Message-ID: <54432750.4090000@echeque.com> -- On 2014-10-15 09:24, Arnold Reinhold wrote: > I agree that the pin distribution problem seems quite solvable. But > how do browser manufacturers get valid pin data for 100,000 sites, > not to mention regular updates? If they want to get the information > independently, they will have to set up the kind of rigorous > verification infrastructure that we would want CAs to employ. (The > fact that most CAs fall short does not suggest the problem is an > easy one.) And if I trust my browser manufacturer?s signature on the > browser software distribution that includes the initial pin list, as > well as on subsequent pin updates, why not also trust the same > signature key to sign individual web site credentials and use the > existing TLS infrastructure, with the browser manufacturer serving > as a super-CA for those 100,000 sites? 
> If the browser manufacturers choose instead to subcontract getting > the pin data to one or a few high quality CAs, expect those CAs to > charge a very steep price since it undermines their business model. > The other CAs will no doubt raise a ruckus, perhaps invoking local > antitrust laws. And if the browser manufacturers accept most CA > data, what is the point? Active attacks by powerful adversaries are rare, because an active attack leaks information, and people are interested in information about powerful adversaries. If active attacks were common, we would be hosed, since the standard password recovery system is to send it in the clear in email. So, everyone self signs their own certificate, and we then have the system make sure that everyone sees the same self signed certificate as everyone else. From leichter at lrw.com Sun Oct 19 09:01:28 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 19 Oct 2014 09:01:28 -0400 Subject: [Cryptography] NSA versus DES etc.... In-Reply-To: <20141006123054.GC12902@yeono.kjorling.se> References: <9E0708DC-ED74-4879-AEE7-D1F41B2252F5@lrw.com> <5421E7DB.2040704@av8n.com> <2422FDD0-1570-4BB7-B099-9719A933ED82@interlog.com> <5422600D.2060104@av8n.com> <2DB4E01B-CA6E-4F8E-827A-36A2022E5485@interlog.com> <21544.64868.836051.664605@desk.crynwr.com> <542C3AE3.4090207@iang.org> <201410020932.s929W3pE009125@new.toad.com> <20141006123054.GC12902@yeono.kjorling.se> Message-ID: <3C03A146-1ECA-4F3D-B637-73F904749897@lrw.com> On Oct 6, 2014, at 8:30 AM, Michael Kjörling wrote: >> And didn't the Swedes find a Russian sub in their waters some years back? > > While this has precious little to do with cryptography, yes... > The Swedish Wikipedia article states that "between 1981 and 1994 > approximately 4700 observations" of submarine-like objects were made, > presumably within Swedish territorial waters (I don't really feel like > digging out the Swedish government report cited as the source for > that).... 
It turns out this isn't just "the dead past". See "Sweden searches for suspected Russian submarine off Stockholm. Helicopters, minesweepers and 200 service personnel mobilised in search after tipoff about 'foreign underwater activity'". http://www.theguardian.com/world/2014/oct/19/sweden-search-russian-submarine-stockholm -- Jerry From mitch at niftyegg.com Mon Oct 20 18:34:05 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 20 Oct 2014 15:34:05 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <5441C340.6020709@av8n.com> References: <5441C340.6020709@av8n.com> Message-ID: On Fri, Oct 17, 2014 at 6:32 PM, John Denker wrote: > On 10/17/2014 05:17 PM, Tom Mitchell wrote: > > I am with you > > so far so good .... > > > except for the "grab NIST beacon" part. This implies that > > the clock can be set and reset. ? > > Resetting the local clock hardware is not necessary, not > desirable, and not implied by anything that was said. > Implied only by the choice of a DS-1307 part. On an I2C device there is no Read/Write pin that can be cut to force the device to be read only in the future. Only audited software and software security covers that base. I am slightly cautious about this because I have had to sift through system logs when time was changing in bad ways. In my case an international company complained that the time of day on our system was moving by hours once in a while. They told me that the network was isolated; I showed them that it was not... It turns out that a dual boot PC which kept TOD in local time for WindowZ but should have kept it as UTC for the *nix environment was the problem. The data link was just a single wire to a room to a satellite dish to another little room to a big building full of machines that should have been firewalled from any production site. Yes, NTP is a better tool than the old timed tool.... 
They wanted subsecond or better accuracy and precision but hardware clock oscillators were not good enough so they allowed a network tool... BTW they were on an island and "they thought" all was local. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryacko at gmail.com Mon Oct 20 20:35:49 2014 From: ryacko at gmail.com (Ryan Carboni) Date: Mon, 20 Oct 2014 17:35:49 -0700 Subject: [Cryptography] Better Version of Triple-DES Message-ID: Might be wrong, but I think this would be a better version of Triple-DES: have the first encryption be DES in CTR mode, the second encrypt the data directly, and the third be CTR mode again. Since CTR mode is a stream cipher, it would be DES-X style key-whitening, except the full 168-bit key would be usable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Tue Oct 21 01:05:21 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 21 Oct 2014 01:05:21 -0400 Subject: [Cryptography] The world's most secure TRNG In-Reply-To: <54362750.8010809@iang.org> References: <48153DBF-756B-47A6-9B53-EE8E5CFFB730@me.com> <54362750.8010809@iang.org> Message-ID: Top posting just the new news, with responses to your comments below. The breadboard works! The estimated entropy coming out of the Infinite Noise Multiplier is very close to the predicted amount. I measure it by recording outcomes given the previous 16 bits many times until I have a reasonable guess for the probability of the next bit being a 1 or 0. I use that to estimate the probability of a long string of bits occurring. The entropy is estimated as log2(1/P(S)), where P(S) is the probability estimate of the string of bits S occurring. This estimated entropy closely matches the expected log2(K), where K is the gain in the op-amp circuit. I tested this for 3 different gains, and they all matched within 5% of the theoretical value. 
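The estimator just described can be sketched as follows. This is a sketch, not the actual infnoise health-check code; add-half smoothing is an assumption made here to estimate the conditional bit probabilities from the observed counts.

```python
import math
import random
from collections import defaultdict

def estimate_entropy(bits, context_len=16):
    """Estimate the entropy of a bit string by predicting each bit from the
    previous context_len bits, then summing -log2 of the predicted
    probability of the bit that actually occurred (add-half smoothing)."""
    counts = defaultdict(lambda: [0, 0])  # context -> [zeros seen, ones seen]
    ctx, mask, total = 0, (1 << context_len) - 1, 0.0
    for i, b in enumerate(bits):
        if i >= context_len:              # score only once the context is full
            c0, c1 = counts[ctx]
            p = (counts[ctx][b] + 0.5) / (c0 + c1 + 1.0)
            total += -math.log2(p)
            counts[ctx][b] += 1
        ctx = ((ctx << 1) | b) & mask
    return total                          # estimated entropy of bits[context_len:]

random.seed(1)
coin = [random.getrandbits(1) for _ in range(50_000)]
per_bit = estimate_entropy(coin) / (50_000 - 16)
# Near 1 bit/bit for a fair coin; slightly above 1 is possible because the
# estimator pays a small penalty on rarely-seen contexts.
assert 0.9 < per_bit < 1.2
assert estimate_entropy([1] * 50_000) / (50_000 - 16) < 0.01  # constant stream: ~0
```

A fully biased or cycling source scores near zero, while a source with gain K should score near log2(K) bits per output bit, which is the comparison made above.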
I added a picture of the breadboard here: https://github.com/waywardgeek/infnoise I also wrote some code to find how soon we see a repeated string N bits long. The data from the INM has repeated strings of size N consistent with the estimated entropy. This proves that there is no scary cycling of the same outputs over and over, at least with a period less than the expected length before seeing an N-bit repeated string (20,000+ in my tests for 34 bits). On Thu, Oct 9, 2014 at 2:12 AM, ianG wrote: > On 9/10/2014 01:59 am, Bill Cox wrote: > > On Wed, Oct 8, 2014 at 7:00 PM, Dave Horsfall > > wrote: > > > > It's possible that I may have missed this (the list seems to have > spiked > > lately), but how would the device present itself to the host? A > serial > > stream of random bits (like a terminal or a keyboard), or some sort > of a > > structure with command and control etc? > > > > -- Dave > > _______________________________________________ > > The cryptography mailing list > > cryptography at metzdowd.com > > http://www.metzdowd.com/mailman/listinfo/cryptography > > > > > > No command/control. In fact, I feel a lot better not having a > > microcontroller on there that could transmit nasty malware when being > > plugged into a new system, or which could be reprogrammed to emit > > non-random data. > > > My guess is that if you don't have an easy defined interface (file? tty) > then it won't work in the marketplace. > For now, I've got an application that reads from the USB using the existing serial interface driver that comes with the FT240X USB interface chip I'm using. It normally whitens by reading 2X the amount of entropy requested and filtering it through the 1600 bit version of the Keccak (SHA3) sponge. It just writes the binary data to stdout for now, but it's simple to make that a file socket or whatever. There's a --raw flag which dumps raw data from the noise source without whitening. I have been doing some fun health checking stuff with that. 
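[Editor's sketch] The 2x-read-then-sponge whitening step described above can be modelled as follows. This is a simplified stand-in: the real tool drives its own Keccak-1600 sponge incrementally, whereas this one-shot sketch uses SHAKE-256, which is built on the same Keccak-1600 permutation:

```python
import hashlib

def whitened_read(read_raw, n_bytes):
    # Gather twice as many raw bytes as requested (so that ~0.5 bits of
    # entropy per raw bit still yields full-entropy output), then compress
    # them through a Keccak-based sponge. SHAKE-256's extendable output
    # lets us squeeze exactly the requested number of bytes.
    raw = read_raw(2 * n_bytes)
    return hashlib.shake_256(raw).digest(n_bytes)
```

Usage would be something like `whitened_read(device.read, 32)` for 32 whitened bytes, where `device.read(n)` is a hypothetical raw-byte source.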
A --debug flag causes it to print estimated entropy, gain in the op-amp, and a couple of other stats. > In terms of the nasty malware, what would be nice would be a firewall. > A device that has male & female and sits there and watches for naughty > traffic. If this came with a good RN source as well, I'd reckon it > would be a hit. > Some sort of automated Internet traffic cop might be a hit. If it needs a source of random data, it's about $1 in extra components to an embedded system. > ... > > How important is the proper USB connector vs a raw connector with no > > housing like the DigiSpark? Do we really feel we need to wrap this > > thing in metal to keep it from radiating secret bits? > > > Yes, otherwise it will be noisy :) You don't want it interfering with > random gear. > > You could probably get away without in a prototype device and encourage > someone to do some testing... > I added a real USB connector, and have nickel EMI paint I can use on the inside of the USB key housing. Hopefully that will keep it quiet. > > I figure if we > > feed it into a whitener, an attacker would have to know *every* bit to > > know the state of the whitener. That seems like a tall order for an > > attacker trying to read bits from EMI. > > > Oh, no :) In the crypto world we deal with bit-rated paranoia. Even > one bit leaked to an attacker will earn the device the BROKEN award. > True enough. I'm shielding it with conductive paint on the inside of the plastic housing. I am tempted to leave the housing un-glued so that users can take it apart if they like and poke at the internals. I saw at least one TRNG company that encases their electronics in potting material. That's no better than Intel asking us to just trust that their TRNG circuit is secure. If we can't open it up and see for ourselves, why should we trust the manufacturer? Bill -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hbaker1 at pipeline.com Tue Oct 21 10:07:43 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Tue, 21 Oct 2014 07:07:43 -0700 Subject: [Cryptography] Chinese MITM Attack on iCloud Message-ID: Instead of whining & whinging like FBI's Comey, http://www.fbi.gov/news/speeches/going-dark-are-technology-privacy-and-public-safety-on-a-collision-course the Chinese appear to be getting on with iPhone spying business as usual: http://www.netresec.com/?page=Blog&month=2014-10&post=Chinese-MITM-Attack-on-iCloud The timing seems far too convenient; Apple's rollout in China appears to have been delayed until the MITM machinery was ready. Of course, Apple seems to also have left a few back doors open in OSX Yosemite -- perhaps on purpose. One can only wonder if the same type of back doors were also left open in iOS8... "It would seem that no matter how you configure Yosemite, Apple is listening. Keeping in mind that this is only what's been discovered so far, and given what's known to be going on, it's not unthinkable that more is as well." http://apple.slashdot.org/story/14/10/20/003257/if-youre-connected-apple-collects-your-data This is the project that is producing software to find out what data Apple is busy collecting: https://github.com/fix-macosx/yosemite-phone-home/ Choosing a non-Apple Safari search engine raises eyebrows: "The logs show that *** a copy of your Safari searches are still sent to Apple, even when selecting DuckDuckGo as your search provider, *** and 'Spotlight Suggestions' are disabled in System Preferences > Spotlight." as does a non-Apple email account: "When setting up a new Mail.app account for the address admin at fix-macosx.com, which is hosted locally, searching the logs for "fix-macosx.com" shows that *** Mail quietly sends the domain entered by the user to Apple, too. ***" --- Methinks Mr. Comey doth protest too much... 
From leichter at lrw.com Tue Oct 21 11:47:25 2014 From: leichter at lrw.com (Jerry Leichter) Date: Tue, 21 Oct 2014 11:47:25 -0400 Subject: [Cryptography] Better Version of Triple-DES In-Reply-To: References: Message-ID: On Oct 20, 2014, at 8:35 PM, Ryan Carboni wrote: > Might be wrong, but I think this would be a better version of TripleDES, Have the first encryption be DES in CTR mode, the second encryption being the data, and the third encryption being CTR mode again. Since CTR mode is a stream cipher, it would be DES-X style key-whitening, except the full 168-bit key would be usable. Please define "better". (You should probably also define what "the second encrypting being the data" is supposed to mean.) -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From leichter at lrw.com Tue Oct 21 14:40:33 2014 From: leichter at lrw.com (Jerry Leichter) Date: Tue, 21 Oct 2014 14:40:33 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <1413910953.21882.1.camel@sonic.net> References: <1413910953.21882.1.camel@sonic.net> Message-ID: On Oct 21, 2014, at 1:02 PM, Bear wrote: > IIRC a lot has been done to verify video and audio as having come > from a certain moment in time or general location based on recovering > the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if > you're Australian) hum of the surrounding electrical system. While > it's fairly precise, it's not exact, and over very widespread areas > the exact frequency and interference patterns recovered from a video > or audio record have been used to determine exactly when (and to some > extent where) the record was made. > > Relevant law enforcement and Intel agencies are, yes, known to monitor > and record the variances specifically for purposes of dating recordings > that later may become evidence. That's a cool technique. Do you have any references? 
-- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From bear at sonic.net Tue Oct 21 13:02:33 2014 From: bear at sonic.net (Bear) Date: Tue, 21 Oct 2014 10:02:33 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: Message-ID: <1413910953.21882.1.camel@sonic.net> IIRC a lot has been done to verify video and audio as having come from a certain moment in time or general location based on recovering the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if you're Australian) hum of the surrounding electrical system. While it's fairly precise, it's not exact, and over very widespread areas the exact frequency and interference patterns recovered from a video or audio record have been used to determine exactly when (and to some extent where) the record was made. Relevant law enforcement and Intel agencies are, yes, known to monitor and record the variances specifically for purposes of dating recordings that later may become evidence. Bear From jthorn at astro.indiana.edu Tue Oct 21 17:48:26 2014 From: jthorn at astro.indiana.edu (Jonathan Thornburg) Date: Tue, 21 Oct 2014 17:48:26 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <1413910953.21882.1.camel@sonic.net> Message-ID: <20141021214825.GA8146@copper.astro.indiana.edu> On Oct 21, 2014, at 1:02 PM, Bear wrote: > IIRC a lot has been done to verify video and audio as having come > from a certain moment in time or general location based on recovering > the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if > you're Australian) hum of the surrounding electrical system. [[...]] While > > Relevant law enforcement and Intel agencies are, yes, known to monitor > and record the variances specifically for purposes of dating recordings > that later may become evidence. 
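[Editor's sketch] Bear's description above amounts to tracking the electrical network frequency (ENF) over time. A toy version of the extraction step, illustrative only — real forensic tools interpolate the spectral peak, exploit harmonics of the hum, and match the track against logged grid data:

```python
import numpy as np

def enf_track(samples, rate, mains=60.0, win_s=2.0):
    # For each analysis window, find the spectral peak within +-1 Hz of
    # the nominal mains frequency; the sequence of peaks is the ENF track
    # that can be matched against a reference recording of the grid.
    win = int(win_s * rate)
    track = []
    for start in range(0, len(samples) - win + 1, win):
        seg = samples[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win, 1.0 / rate)
        band = (freqs > mains - 1.0) & (freqs < mains + 1.0)
        track.append(freqs[band][np.argmax(spec[band])])
    return np.array(track)
```

With 2-second windows the raw resolution is only 0.5 Hz, which is why practical implementations interpolate between FFT bins.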
On Tue, Oct 21, 2014 at 02:40:33PM -0400, Jerry Leichter wrote: > That's a cool technique. Do you have any references? http://www.bbc.co.uk/news/science-environment-20629671 describes the technique, and says that the UK police have recorded this since 2005. -- -- Jonathan Thornburg Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA "There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time." -- George Orwell, "1984" From dj at deadhat.com Tue Oct 21 18:16:13 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Tue, 21 Oct 2014 22:16:13 -0000 Subject: [Cryptography] Simon, Speck and ISO Message-ID: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> Today the NSA proposed that Simon and Speck be added to the ISO JTC1/SC27 approved ciphers spec. A study period was approved. But no other non-NSA lightweight algorithms have been proposed to ISO, other than Chaskey from Hitachi. If you have opinions on alternatives for lightweight block ciphers, MACs, hashes etc, please let me know so I can try and set the ball rolling with ISO. Simon and Speck look OK. But the source is not a little bit tainted. From william.muriithi at gmail.com Tue Oct 21 19:23:20 2014 From: william.muriithi at gmail.com (William Muriithi) Date: Tue, 21 Oct 2014 19:23:20 -0400 Subject: [Cryptography] (no subject) Message-ID: <20141021232320.6037649.18509.6284@gmail.com> Evening, I believe some of the people here may have taken the GIAC Web Application Penetration Tester exam. Came across a link on it today at work and felt like it may be something worth looking at. Have just checked amazon and nothing related with that exam shows up. The books from SANS seem a tad expensive. Want to check through the material and see if they are deep enough for such investment.
How did you guys go about that evaluation? Was the course nourishing enough intellectually for the amount of money they are asking? Regards, William From waywardgeek at gmail.com Tue Oct 21 22:17:25 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Tue, 21 Oct 2014 22:17:25 -0400 Subject: [Cryptography] The world's easiest TRNG not to F-up Message-ID: IanG has some awesome things to say about the difficulty of TRNGs and random number generation in general: http://iang.org/ssl/hard_truths_hard_random_numbers.html I also discovered the excellent work of OneRNG, and Paul has been awesome with advice: http://onerng.info/ After learning about these efforts, I no longer feel "the world's most secure TRNG" is appropriate for my Infinite Noise Multiplier. Instead, I'm backing off to "least likely to F-up". The design's noise immunity makes it comparatively simple to get right. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From iang at iang.org Wed Oct 22 04:57:05 2014 From: iang at iang.org (ianG) Date: Wed, 22 Oct 2014 09:57:05 +0100 Subject: [Cryptography] CFP by 24 Nov - Usable Security - San Diego 8th Feb Message-ID: <54477161.4020608@iang.org> The Workshop on Usable Security (USEC) will be held in conjunction with NDSS on February 8, 2015. The deadline for USEC Workshop submissions is November 24, 2014. In previous years, USEC has also been collocated with FC; for example in Okinawa, Bonaire, and Trinidad and Tobago. Additional information and paper submission instructions: http://www.internetsociety.org/events/ndss-symposium-2015/usec-workshop-call-papers ****************** The Workshop on Usable Security invites submissions on all aspects of human factors and usability in the context of security and privacy.
USEC 2015 aims to bring together researchers already engaged in this interdisciplinary effort with other computer science researchers in areas such as visualization, artificial intelligence and theoretical computer science as well as researchers from other domains such as economics or psychology. We particularly encourage collaborative research from authors in multiple fields. Topics include, but are not limited to: * Evaluation of usability issues of existing security and privacy models or technology * Design and evaluation of new security and privacy models or technology * Impact of organizational policy or procurement decisions * Lessons learned from designing, deploying, managing or evaluating security and privacy technologies * Foundations of usable security and privacy * Methodology for usable security and privacy research * Ethical, psychological, sociological and economic aspects of security and privacy technologies USEC solicits short and full research papers. ***** Program Committee Jens Grossklags (The Pennsylvania State University) - Chair Rebecca Balebako (Carnegie Mellon University) Zinaida Benenson (University of Erlangen-Nuremberg) Sonia Chiasson (Carleton University) Emiliano DeCristofaro (University College London) Tamara Denning (University of Utah) Alain Forget (Carnegie Mellon University) Julien Freudiger (PARC) Vaibhav Garg (VISA) Cormac Herley (Microsoft Research) Mike Just (Glasgow Caledonian University) Bart Knijnenburg (University of California, Irvine) Janne Lindqvist (Rutgers University) Heather Lipford (University of North Carolina at Charlotte) Debin Liu (Paypal) Xinru Page (University of California, Irvine) Adrienne Porter Felt (Google) Franziska Roesner (University of Washington) Pamela Wisniewski (The Pennsylvania State University) Kami Vaniea (Indiana University) With best regards, Jens Grossklags Chair ? 
USEC 2015 From iang at iang.org Wed Oct 22 05:20:32 2014 From: iang at iang.org (ianG) Date: Wed, 22 Oct 2014 10:20:32 +0100 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> Message-ID: <544776E0.1020209@iang.org> On 21/10/2014 23:16 pm, dj at deadhat.com wrote: > Today the NSA proposed that Simon and Speck be added to the ISO JTC1/SC27 > approved ciphers spec. What pray tell is the ISO JTC1/SC27 and who cares? Or to put it cynically, who is the NSA trying to ease into this time? (Seriously, we can only comment on the threat of Simon & Speck if we know what business model the security model needs to defend.) iang From hanno at hboeck.de Wed Oct 22 05:28:40 2014 From: hanno at hboeck.de (Hanno Böck) Date: Wed, 22 Oct 2014 11:28:40 +0200 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> Message-ID: <20141022112840.20efb992@pc> Am Tue, 21 Oct 2014 22:16:13 -0000 schrieb dj at deadhat.com: > Today the NSA proposed that Simon and Speck be added to the ISO > JTC1/SC27 approved ciphers spec. That sounds interesting, can you give some more background on this? I'm probably not the only one who has never heard of JTC1/SC27 before. Wikipedia tells me this is located at the DIN in Germany. What's the role of these approved ciphers? Is anyone bound to support / use them? -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From stephen.farrell at cs.tcd.ie Wed Oct 22 09:28:15 2014 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Wed, 22 Oct 2014 14:28:15 +0100 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <20141022112840.20efb992@pc> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> Message-ID: <5447B0EF.1030606@cs.tcd.ie> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 22/10/14 10:28, Hanno Böck wrote: > Am Tue, 21 Oct 2014 22:16:13 -0000 schrieb dj at deadhat.com: > >> Today the NSA proposed that Simon and Speck be added to the ISO >> JTC1/SC27 approved ciphers spec. > > That sounds interesting, can you give some more background on > this? > > I'm probably not the only one who has never heard of JTC1/SC27 > before. Wikipedia tells me this is located at the DIN in Germany. > > What's the role of these approved ciphers? Is anyone bound to > support / use them? Not that I know of. Sometimes people do come to e.g. the IETF and say "but ISO standardised our alg, why won't you?" That isn't treated as very meaningful though, as anything relatively credible can afaik get through the ISO process with not that much effort, if backed by a nation-state that participates in SC27. (That said, I think SC27 has some capable folks involved, but a pretty small number of 'em probably.) It may well be the case that some other national, or nation-state oriented, standards bodies prefer algorithms that SC27 have ok'd, or even that some of those standards might not be voluntary in some places for some things. That'd be a crappy idea really but could happen I guess. The crappiness there though would be as much to do with mandatory vs. voluntary standards as it is with potential lack of broad review of crypto.
Personally, I'd weigh the "was it published at crypto more than 5 years ago with a history of papers since" smell-test as being a more important factor than that something was standardised by SC27. (But with neither by itself being sufficient.) S. > > > > > _______________________________________________ The cryptography > mailing list cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJUR7DvAAoJEC88hzaAX42i2AgH/ihOV6rUe/NR3M6w1wkdjmUi anaM5mT1yohk2lIMXjTe/PMO2MPVzy4HDF+o2wXi5edmMTx7kY7v/tc2auD8OIPg nArtQMIOAeYNtdvRYJVGbgwDlVB1J0tJciGauSzMPsdGTKum/YFS04K/yUJRLUAE ptf4ojP9InEaBUHWtb6w+GIfR7yPr3K4rf4ZTboq5CKgL5OUTy+MT93Zzc/QyEoh J4BqpZLJsoCoBtVORUUZ/X2doYss8pQmZONNcKwbbPF8fl6s+YNCijtEBlUiKK38 wjRxZgAnaRsDRdXyYiqWCN7G0ANng1oap/ALQfcnEoJGlX5LJCcBztF9KWZiLLg= =pe6L -----END PGP SIGNATURE----- From aoz.syn at gmail.com Wed Oct 22 09:41:20 2014 From: aoz.syn at gmail.com (RB) Date: Wed, 22 Oct 2014 07:41:20 -0600 Subject: [Cryptography] (no subject) In-Reply-To: <20141021232320.6037649.18509.6284@gmail.com> References: <20141021232320.6037649.18509.6284@gmail.com> Message-ID: On Tue, Oct 21, 2014 at 5:23 PM, William Muriithi wrote: > I believe some of the people here may have taken ?GIAC Web Application Penetration Tester exam. Came across a link on it today at work and felt like it may be something worth looking at. Although I'm not certain this is on-topic for the cryptography list, I'm also mildly surprised that there's anything here I can answer. I don't hold the GWAPT (but know someone who just renewed), and do hold several other SANS certs myself. > Have just checked amazon and nothing related with that exam show up. The books from SANS seem a tad expensive. Want to check through the material and see if they are deep enough for such investment. 
SANS keeps a very, very tight fist on rights to their materials; you won't find anything secondhand or third-party unless it violates their [extensive] licensing agreements. > How did you guys go about that evaluation? Was the course nourishing enough intellectually for the amount of money they are asking My evaluation was that I have never spent my own money on SANS courses and certifications; it has always been employers'. It is nourishing enough if you're pretty much new to the particular discipline the course covers, but if you have any prior experience in the field it won't get you very far. A personal example: GPEN - having already participated in some red-team and CTF exercises, I learned no new concepts (only a few specific applications of tools) from the coursework. I feel that as all certs become more popular and peopled, their utility ultimately wanes, and SANS is little different. The recent move to not publish test scores, for example, I feel reduces the value of doing well - "they still call the C medical student 'doctor'." However, this week I did hear that they'd made the GWAPT test significantly harder and shorter, which may be a glimmer that they're trying to fight the dilution effect a little. From waywardgeek at gmail.com Wed Oct 22 10:01:59 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 22 Oct 2014 10:01:59 -0400 Subject: [Cryptography] A review per day of TRNGs: OneRNG Message-ID: I had a ton of fun reviewing PHC candidates, and I learned a lot in the process. If people think it would be fun to review TRNGs in a similar manner on this thread, then I'll do a review once per day-ish, until I can't find any more TRNGs to review. I'll start with my favorite, to begin on a positive note: OneRNG. http://onerng.info/ OneRNG is free-hardware and free-software, as in freedom. It's typically called open-hardware and open-source. This is *very* important for a TRNG.
*Any* TRNG that has either unavailable software source or unavailable hardware design is going to get a poor rating by me, since security-through-obscurity has been shown over and over again to fail, particularly with TRNGs. AFAIK, OneRNG is the *only* open-hardware/software device that has been built which is suitable for cryptography (mine has only been breadboarded so far). TrueRNG claims to be, but I have yet to see a schematic, let alone a board layout or software source. However, maybe I'll find it when I do that review :-) Rob Seward gets an honorable mention: http://robseward.com/misc/RNG2/ This is free hardware and software in the best form. However, as he states, there are some security issues with this design that make it more suitable for white noise generation than cryptography. To support secure crypto, OneRNG takes the unpredictability of its resulting data *very* seriously. Rather than rely on either radio noise or zener noise, they put *both* on their board, and mix the streams together. They continuously monitor the health of both, and shut down if either is not functioning properly. They also disabled programming over USB, so nasty malware cannot subvert the device. This is a limitation of Rob Seward's design that he wisely states in his documentation. However, it is possible to intercept a OneRNG in the mail, and reprogram it in nasty ways. Users who are particularly concerned about this possibility are encouraged to re-flash the device themselves. This brings up threat models. No hardware can be considered secure if sent through the mail, unless we assume the mail service is trustworthy. This is true for laptops as well as TRNGs. More than any other TRNG, OneRNG has considered this a real threat and done something about it. To verify you have a genuine OneRNG, the metal shielding is removable. You are encouraged to inspect the board yourself and compare it to the picture online.
The microcontroller label can be inspected, though it's hard to prove it is not an impostor. However, it is *very* difficult to make an impostor of a microcontroller that functions properly with a programmer and debugging interface. It most likely has to be built by the original manufacturer, in this case TI. So, there is an assumption that a complex $4 chip has no back door, but I find that far more palatable than the assumption that Intel's RDRAND instruction has no back door. The radio entropy source can be influenced remotely by a radio transmitter, so the OneRNG randomly skips around in the frequency being sampled, and makes that decision, I assume, using output that includes the zener noise. While I am not sure I would want to rely on radio alone, when combined with the zener, it seems secure enough to me. The zener noise is, I believe, generated by a typical reverse base-emitter breakdown, because the fabs don't bother to make this mode of using a transistor low-noise. Real zeners are far less noisy. This circuit is cheap, but has some problems. It drifts over time, and the noise level can vary a great deal from part to part, making it hard to build a reliable, dependable entropy source. However, they do monitor its health, and shut down if it fails. Because of the saturating amplification of the zener noise, an attacker can influence the output with a very small injected signal. To counter this threat, OneRNG encases all of the analog circuits in a solid metal box. The back side of the board under this box is a solid ground plane, with several vias connecting the box to this ground plane. Paul seems to know what he's doing here, and I think he has likely succeeded in an excellent shield against external interference. As for downsides, OneRNG is not as simple as some TRNGs. This makes it tougher to ensure it is secure.
Also, the possibility of having it reprogrammed by an attacker who intercepts it in the mail remains an issue, since most users will not likely re-flash their device. I am not sure if the flash can be dumped securely over USB, or if an attacker can mod the program to deliver the original firmware, hiding the malware. The biggest current downside to OneRNG is that you cannot buy one yet. They are in the beta stage. Paul has his own pick-and-place machine, and hopefully will ramp production soon. I plan to buy one when he does. In summary, I give this TRNG my highest rating: secure for all cryptographic purposes, IMO. All threat models I can conceive have been considered. I would encourage users concerned about mail interception to compare their firmware to that on the website, and then re-flash it anyway. Also, Paul has been very helpful to me on my own TRNG project, which goes beyond the call of duty. He really does seem to want the world to be more secure, and is willing to help other TRNG developers towards that goal. I do hope that in a future version, Paul might consider dropping the zener and using a more noise-injection-resistant and more consistently manufacturable Infinite Noise Multiplier, but he should ship what he has for now. Upgrading to an INM might be splitting hairs for the security of this device. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnu at toad.com Wed Oct 22 10:12:37 2014 From: gnu at toad.com (John Gilmore) Date: Wed, 22 Oct 2014 07:12:37 -0700 Subject: [Cryptography] Best internet crypto clock: hmmmmm...
In-Reply-To: <20141021214825.GA8146@copper.astro.indiana.edu> References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: <201410221412.s9MECbCl014892@new.toad.com> >> IIRC a lot has been done to verify video and audio as having come >> from a certain moment in time or general location based on recovering >> the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if >> you're Australian) hum of the surrounding electrical system. > > http://www.bbc.co.uk/news/science-environment-20629671 That's fascinating! Turning an annoyance of audio engineers and home stereos into a forensic tool. But once the technique is known, it can be forged, by pasting a recording of the "hum" from one time or place, into a recording made or edited at another time or place. It would be amusing to file a FOIA request for the FBI's recordings of US power networks' hum, and watch them squirm trying to find a reason why you couldn't have it. Also, in theory, even if nobody was recording the hum continuously, the hum could be extracted from two or more existing recordings and compared to determine whether they happened at the same time (or copied to another recording). For example, two concerts that were recorded at the same time should show the same hum if they were done within the same power grid. Isn't there also some research showing that over time, you can tell what time zone a remote computer is in, by pinging it for timestamps, and noticing when its oscillators run minutely faster during the heat of the day, and slower during the cool of the night? 
John From hbaker1 at pipeline.com Wed Oct 22 10:35:37 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 22 Oct 2014 07:35:37 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <20141021214825.GA8146@copper.astro.indiana.edu> References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: I like this AC hum idea for a crypto clock, except that: 1. It is highly local, so you need recordings from your local power provider to provide a time base. 2. All of these recordings have to be done by multiple, independent parties, so that collusion among the parties can be ruled out. Perhaps a better source would be something that couldn't possibly be hacked -- e.g., variations in solar flux of neutrinos or other solar variations. There are lots of laboratories around the world recording solar phenomena, so perhaps some combination of these records could become a non-hackable clock. At 02:48 PM 10/21/2014, Jonathan Thornburg wrote: >On Oct 21, 2014, at 1:02 PM, Bear wrote: >> IIRC a lot has been done to verify video and audio as having come >> from a certain moment in time or general location based on recovering >> the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if >> you're Australian) hum of the surrounding electrical system. >[[...]] While >> >> Relevant law enforcement and Intel agencies are, yes, known to monitor >> and record the variances specifically for purposes of dating recordings >> that later may become evidence. > >On Tue, Oct 21, 2014 at 02:40:33PM -0400, Jerry Leichter wrote: >> That's a cool technique. Do you have any references? > >http://www.bbc.co.uk/news/science-environment-20629671 >describes the technique, and says that the UK police have recorded >this since 2005. > >-- >-- Jonathan Thornburg > Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA > "There was of course no way of knowing whether you were being watched > at any given moment. 
How often, or on what system, the Thought Police > plugged in on any individual wire was guesswork. It was even conceivable > that they watched everybody all the time." -- George Orwell, "1984" From dennis.hamilton at acm.org Wed Oct 22 12:16:27 2014 From: dennis.hamilton at acm.org (Dennis E. Hamilton) Date: Wed, 22 Oct 2014 09:16:27 -0700 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <20141022112840.20efb992@pc> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> Message-ID: <006601cfee13$89286930$9b793b90$@acm.org> below, -----Original Message----- From: cryptography [mailto:cryptography-bounces+dennis.hamilton=acm.org at metzdowd.com] On Behalf Of Hanno B?ck Sent: Wednesday, October 22, 2014 02:29 To: dj at deadhat.com Cc: cryptography at metzdowd.com Subject: Re: [Cryptography] Simon, Speck and ISO Am Tue, 21 Oct 2014 22:16:13 -0000 schrieb dj at deadhat.com: > Today the NSA proposed that Simon and Speck be added the the ISO > JTC1/SC27 approved ciphers spec. That sounds interesting, can you give some more background on this? >From , ISO/IEC JTC1/SC27 "IT Security Techniques" (meeting this week in Mexico City), WG1: Information Security Management Systems WG2: Cryptography and security mechanisms WG3: Security evaluation, testing and specification WG4: Security controls and services WG5: Identity management and privacy techniques It is commonplace for "National Bodies" (e.g., DIN, BSA, ANSI, ...) to have "mirror" technical committees that correspond with JTC1 subcommittees and working groups. DIN also holds the Secretariat for SC27, but any DIN mirror committee is different, even with overlapping participants. Here are the member countries whose National Bodies participate in SC27 . In the US, ANSI designates INCITS as the Technical Activity Group that administers US participation in SC27. The "mirror" responsibility and voice of US participation is INCITS/CS1 for Cyber Security. 
I'm probably not the only one who has never heard of JTC1/SC27 before. Wikipedia tells me this is located at the DIN in Germany. What's the role of these approved ciphers? Is anyone bound to support / use them? These are voluntary standards. Requirements concerning their use, specification in procurements, etc., may show up in member countries (sort of how FIPS transposes voluntary standards for governmental use) along with recommendations for other use within a national (or regional, in the case of the EU) jurisdiction. In the US, the practice for INCITS is to automatically adopt the relevant ISO/IEC JTC1 standards as ANSI standards. I imagine something similar happens in the case of DIN. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 From leichter at lrw.com Wed Oct 22 12:46:17 2014 From: leichter at lrw.com (Jerry Leichter) Date: Wed, 22 Oct 2014 12:46:17 -0400 Subject: [Cryptography] A review per day of TRNGs: OneRNG In-Reply-To: References: Message-ID: <795DC9FD-27FD-4122-B2CC-0E54AAE34ECB@lrw.com> On Oct 22, 2014, at 10:01 AM, Bill Cox wrote: > As for downsides.... Also, the possibility of having it reprogrammed by an attacker who intercepts it in the mail remains an issue, since most users will not likely re-flash their device. I am not sure if the flash can be dumped securely over USB, or if an attacker can mod the program to deliver the original firmware, hiding the malware. Sounds like a great application for "sparkly nail polish" security. Paint over the access points - the outside screws, the chips and on to the board, over a piece of tape sealing the USB - with one of those nail polishes with sparkly bits in it. Take photos of each spot and deliver separately from the device itself, preferably through multiple channels (e.g., send in a separate envelope, and put signed copies on line). The exact speckle pattern is random and as far as I know impossible to duplicate. It's also easy to check "by eye".
-- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From dave at horsfall.org Wed Oct 22 13:43:05 2014 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 23 Oct 2014 04:43:05 +1100 (EST) Subject: [Cryptography] In search of random numbers Message-ID: A wild idea just occurred to me (I get them all the time). Just grep the mail logs for rejected spammers, and hash that info etc i.e. use the system against itself; two birds, one hammer. -- Dave Horsfall (VK2KFU) http://www.horsfall.org/spam.html (and check the home page) From agr at me.com Wed Oct 22 17:36:16 2014 From: agr at me.com (Arnold Reinhold) Date: Wed, 22 Oct 2014 17:36:16 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <1413910953.21882.1.camel@sonic.net> References: <1413910953.21882.1.camel@sonic.net> Message-ID: <6088DD91-3186-40CF-B9BF-2F211B8F26E4@me.com> > On Oct 21, 2014, at 1:02 PM, Bear wrote: > > IIRC a lot has been done to verify video and audio as having come > from a certain moment in time or general location based on recovering > the precise 'drift' of the omnipresent 60-cycle (or 50-cycle if > you're Australian) hum of the surrounding electrical system. While > it's fairly precise, it's not exact, and over very widespread areas > the exact frequency and interference patterns recovered from a video > or audio record have been used to determine exactly when (and to some > extent where) the record was made. > > Relevant law enforcement and Intel agencies are, yes, known to monitor > and record the variances specifically for purposes of dating recordings > that later may become evidence. > > Bear Presumably this would mainly apply to video or audio files. I see no reason a still image would contain a significant hum signal, as long as the exposure time is << 1/60 sec. 
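To make the hum-tracking concrete, here is a rough sketch (not from the thread; all values are synthetic and illustrative). It estimates a per-second mains-frequency track from an audio signal by counting zero crossings; real ENF forensics instead band-pass filters around the nominal 50/60 Hz and interpolates spectral peaks, but the core idea is the same.

```python
# Illustrative sketch (not from the thread): recover a rough mains-hum
# frequency track from audio by counting zero crossings per one-second
# window. Real ENF forensics band-pass filters around 50/60 Hz and uses
# spectral interpolation; the input here is a synthetic tone.
import math

def enf_track(samples, rate):
    """Return one frequency estimate (Hz) per second of audio."""
    win = rate  # one-second windows
    track = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        # each full cycle of the hum crosses zero twice
        crossings = sum(1 for a, b in zip(chunk, chunk[1:])
                        if (a < 0) != (b < 0))
        track.append(crossings / 2.0)
    return track

# Synthetic check: a 59.98 Hz "hum" sampled at 8 kHz for 3 seconds
rate = 8000
tone = [math.sin(2 * math.pi * 59.98 * n / rate) for n in range(3 * rate)]
print(enf_track(tone, rate))  # each entry close to 60 Hz
```

Matching a recording against an archive of grid frequency would then be a correlation search of this track against the archived track.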
I wonder how hard it would be to forge one of these power line hum time signals? Per previous posts, the only case of interest is "no earlier than", i.e. pretending a file was recorded at a notional date and time which is later than when it actually was recorded. (Time stamp authorities solve the "no later than" case.) The simplest situation would be if the file is created in conditions where hum is minimal, say in a well shielded room or way out in the country, far from power lines. The power line hum could then be recorded at the notional time and place and added into the file. A more interesting case would be a file that did have a hum signal. It seems to me that with some clever signal processing, the hum signal could be modified to match the desired hum. One would first adjust the notional time of the image a few milliseconds so that the recorded hum and the notional hum were close to being in phase for as long as possible (this might place a limit on how long the video file can be). One would then compute a difference signal to be added to the recording that would make it match the hum at the notional time. It might also be possible to observe the hum on the notional day for long enough to let the forger select a time interval during the day when the frequency is close to what was recorded. Doing this would minimize the amount of alteration needed and/or maximize the length of the recording. Given limited applicability to still images, venues where archived hum might be absent (wilderness, airplanes, cruise ships, war zones, etc.) and the possibility of forgery, I don't think the hum method obviates the need for a camera with a secure clock. Arnold Reinhold -------------- next part -------------- An HTML attachment was scrubbed...
URL: From coruus at gmail.com Wed Oct 22 23:00:20 2014 From: coruus at gmail.com (David Leon Gil) Date: Wed, 22 Oct 2014 23:00:20 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: On Wed, Oct 22, 2014 at 10:35 AM, Henry Baker wrote: > Perhaps a better source would be something that couldn't possibly be hacked -- e.g., variations in solar flux of neutrinos or other solar variations. How about using the earth's rotation? :) http://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html;jsessionid=5F896E89B594D8D5B7A9E38DE4AD6BF0.live1 Only Superman can hack that... From dj at deadhat.com Wed Oct 22 16:17:59 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Wed, 22 Oct 2014 20:17:59 -0000 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <006601cfee13$89286930$9b793b90$@acm.org> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> <006601cfee13$89286930$9b793b90$@acm.org> Message-ID: <07266404ca2a96bafbd54a2f17752174.squirrel@www.deadhat.com> [snip] The entirely non-cryptographic issue goes like this: If country X doesn't like country Y's backdoored RNG standard, they can write their own backdoored RNG spec and refuse import of devices not complying with the local national standard. The WTO will be fine with this. However if there's an international standard (e.g. an ISO standard), approved by the national bodies, then when they try to ban imports, the WTO will not be fine with it. It so happens that one well known backdoored RNG spec is copy-and-pasted into ISO/IEC 18031. So this spec is being opened up again. So if you're in the business of selling chips around the world that contain hopefully non-backdoored parts of said specs, you want to be able to keep selling your products, so you're interested in fixing the steaming pile that is currently in ISO/IEC 18031.
That's why I'm here in Mexico City. In passing, the NSA turned up and proposed adding Simon and Speck as the only lightweight block ciphers in ISO. It's not ideal that the only internationally standardized lightweight block ciphers come directly from the organization that gave us the dual-ec-drbg. Since I expect to be at the next meeting, I'd be happy to propose some alternatives with better provenance, and I don't know a better place to find a pithy put-down of dodgy standards than right here on this list. From hbaker1 at pipeline.com Thu Oct 23 00:20:46 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Wed, 22 Oct 2014 21:20:46 -0700 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: At 08:00 PM 10/22/2014, David Leon Gil wrote: >On Wed, Oct 22, 2014 at 10:35 AM, Henry Baker wrote: > >> Perhaps a better source would be something that couldn't possibly be hacked -- e.g., variations in solar flux of neutrinos or other solar variations. > >How about using the earth's rotation? :) > >http://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html;jsessionid=5F896E89B594D8D5B7A9E38DE4AD6BF0.live1 > >Only Superman can hack that... Agreed, but how cheap/easy are variations in the Earth's rotation to check? Remember, the whole point of the exercise is to minimize the asymmetry ratio of checking-costs to manipulating-costs, and to make checking-costs cheap enough that many, many checkers will be checking. The Bitcoin blockchain is cheap to check, and there are a lot of folks who have much more to lose than a few seconds if the blockchain has been tampered with. The Bitcoin blockchain is still a little too easy to manipulate; future digital currencies with orders of magnitude more transactions will be far harder to manipulate.
E.g., many still feel that the Fed is manipulating even the S&P500 these days with "plunge protection": http://nypost.com/2014/10/20/plunge-protection-behind-markets-sudden-recovery/ From l at odewijk.nl Thu Oct 23 06:20:47 2014 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Thu, 23 Oct 2014 12:20:47 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: Message-ID: There's typically a low bandwidth and there's precious little entropy in it (only non-engineered spam can count as entropy). Using hashes you can just throw more and more information at it, some of it is bound to stick. The question is, why not just hash the entire memory? Then just stream all the inputs into arrays (% arraylength; ring arrays), and voila, absolute maximum entropy collected. Hashing the entire memory can be extremely parallel. Even just a kernel modification that marks un-rehashed blocks, and partially rebuilds a hash-tree dynamically (based on oldest changed memory block) would be guaranteed to collect all that good stuff. The block size and rehash rate(s) could be adjusted for better performance. To ensure/enlarge entropy carry-over you can include the previous memory-block-hash into the next memory-block-hash. You can even dynamically decide to rehash more marked blocks when more entropy is being consumed from /dev/random or /dev/urandom. Someone would have to do some modelling and testing to see how many bits of entropy are actually squeezed out of the system, but it's totally possible. And (in instant-rehash mode) guaranteed to be the most entropy one can squeeze out of a system. It's not the most efficient, but it might very likely beat the mail-log tactic in terms of bits-of-entropy-obtained/computing-resources-eaten.
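The chained block-hash idea can be sketched in user space (a toy only: a real version would be a kernel change over physical pages, and the block size and zero initial digest here are arbitrary illustrative choices):

```python
# Toy user-space version of the chained memory-block hashing described
# above. A real implementation would be a kernel change over physical
# pages; here "memory" is just a bytearray and BLOCK is an arbitrary size.
import hashlib

BLOCK = 4096

def rehash_block(memory, index, prev_digest):
    """Hash one block, chaining in the previous digest so entropy
    carries over from block to block."""
    start = index * BLOCK
    h = hashlib.sha256()
    h.update(prev_digest)                  # carry-over from last block
    h.update(memory[start:start + BLOCK])  # the block itself
    return h.digest()

def pool_digest(memory):
    """Fold every block into one digest (the 'instant-rehash' mode)."""
    d = b"\x00" * 32
    for i in range(len(memory) // BLOCK):
        d = rehash_block(memory, i, d)
    return d

mem = bytearray(16 * BLOCK)
d1 = pool_digest(mem)
mem[5 * BLOCK + 123] ^= 0xFF  # flip one byte anywhere in "memory"
d2 = pool_digest(mem)
assert d1 != d2               # the pool digest changes
```

The "marks un-rehashed blocks" optimization would just re-run rehash_block from the oldest dirty block onward instead of over everything.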
Depending on how much quality spam you receive, I guess :) (don't forget to put the #cycles-between-calls-to-random-generator in a ring-array) -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Wed Oct 22 18:29:02 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 22 Oct 2014 18:29:02 -0400 Subject: [Cryptography] A review per day of TRNGs: OneRNG In-Reply-To: <795DC9FD-27FD-4122-B2CC-0E54AAE34ECB@lrw.com> References: <795DC9FD-27FD-4122-B2CC-0E54AAE34ECB@lrw.com> Message-ID: On Wed, Oct 22, 2014 at 12:46 PM, Jerry Leichter wrote: > On Oct 22, 2014, at 10:01 AM, Bill Cox wrote: > > As for downsides.... Also, the possibility of having it reprogrammed by > an attacker who intercepts it in the mail remains an issue, since most > users will not likely re-flash their device. I am not sure if the flash > can be dumped securely over USB, or if an attacker can mod the program to > deliver the original firmware, hiding the malware. > Sounds like a great application for "sparkly nail polish" security. Paint > over the access points - the outside screws, the chips and on to the board, > over a piece of tap sealing the USB - with one of those nail polishes with > sparkly bits in it. Take photos of each spot and deliver separately from > the device itself, preferably through multiple channels (e.g., send in a > separate envelope, and put signed copies on line). The exact speckle > pattern is random and as far as I know impossible to duplicate. It's also > easy to check "by eye". > -- Jerry > > > Nice! I'll have to remember this method. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From waywardgeek at gmail.com Wed Oct 22 19:20:59 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 22 Oct 2014 19:20:59 -0400 Subject: [Cryptography] A review per day of TRNGs: some snake oil Message-ID: Going alphabetically (I did reverse alphabetically on PHC entries), using this awesome site: http://www.cacert.at/cgi-bin/rngresults and selecting hardware RNGs, I see this at the top: - Araneus Alea II The Araneus Alea II USB key can be bought for 199 Euros here: http://www.araneus.fi/products/alea2/en It passes dieharder tests, but the entropy source is zener noise, which is not that white at these speeds, so there must be some whitening going on. There is no description of the circuit, and the software is closed source. I like that they use an A/D rather than just comparing to Vref like most zener noise TRNGs. There is a microcontroller on board, and we have no idea if it can be used to PWN your system. There's really nothing else to review, so until these guys open up and let us see what's inside, I have to rate it: Snake Oil... until proven otherwise. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen.farrell at cs.tcd.ie Thu Oct 23 06:45:52 2014 From: stephen.farrell at cs.tcd.ie (Stephen Farrell) Date: Thu, 23 Oct 2014 11:45:52 +0100 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <07266404ca2a96bafbd54a2f17752174.squirrel@www.deadhat.com> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> <006601cfee13$89286930$9b793b90$@acm.org> <07266404ca2a96bafbd54a2f17752174.squirrel@www.deadhat.com> Message-ID: <5448DC60.2030904@cs.tcd.ie> On 22/10/14 21:17, dj at deadhat.com wrote: > [snip] > > Since I expect to be at the next meeting, I'd be happy to propose some > alternatives with better provenance As of now, it looks like IETF protocols will be adopting chacha20 with poly1305 as per [1].
I believe that is being implemented in a number of prominent code bases. (I didn't go check, but you can already see such ciphersuites popping up in TLS stats even before there's an RFC.) If ISO want to do something, that'd seem like a better plan to me. But please also ask 'em not to futz around and end up with something "almost" interoperable;-) Actually, it'd be better that they did nothing at all if that outcome were likely. FWIW, I've heard of no equivalent implementer interest in the new NSA algs. Not even a squeak. (But the US govt market is probably big enough that they may get fairly widely implemented I suppose.) S. [1] https://tools.ietf.org/html/draft-irtf-cfrg-chacha20-poly1305 From waywardgeek at gmail.com Thu Oct 23 06:51:02 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Thu, 23 Oct 2014 06:51:02 -0400 Subject: [Cryptography] A review per day of TRNGs: OneRNG In-Reply-To: <795DC9FD-27FD-4122-B2CC-0E54AAE34ECB@lrw.com> References: <795DC9FD-27FD-4122-B2CC-0E54AAE34ECB@lrw.com> Message-ID: On Wed, Oct 22, 2014 at 12:46 PM, Jerry Leichter wrote: > On Oct 22, 2014, at 10:01 AM, Bill Cox wrote: > > As for downsides.... Also, the possibility of having it reprogrammed by > an attacker who intercepts it in the mail remains an issue, since most > users will not likely re-flash their device. I am not sure if the flash > can be dumped securely over USB, or if an attacker can mod the program to > deliver the original firmware, hiding the malware. > Sounds like a great application for "sparkly nail polish" security. Paint > over the access points - the outside screws, the chips and on to the board, > over a piece of tap sealing the USB - with one of those nail polishes with > sparkly bits in it. Take photos of each spot and deliver separately from > the device itself, preferably through multiple channels (e.g., send in a > separate envelope, and put signed copies on line). The exact speckle > pattern is random and as far as I know impossible to duplicate. 
It's also > easy to check "by eye". > -- Jerry > > So long as we're considering threats such as interception in the mail, I think we should look at a more detailed threat model. The most obvious use for a TRNG is as an additional entropy source for a Linux server's entropy pool. I would like to assume: - An attacker, Mallory, is logged in as a regular user, but for some unexplainable reason is unable to obtain root access. - The attacker knows the *exact* state of the Linux entropy pool at time == 0 - There are 0 bits of entropy in the pool, and the pool is blocking on a read by gnupg which is trying to create a strong cryptographic key. - The user of gnupg is being careful to not allow his new key to hit disk in any non-encrypted form - Fortunately, this system has a OneRNG key attached. What attacks can Mallory mount? One problem with my review of OneRNG so far is I have not looked at any source code, even though it's open source! Such a review eventually should be done in depth by multiple people, but for now, here are two attacks this code needs to defend against: - attacks that try to guess the random bits being added to the entropy pool - attacks against the OneRNG's functioning properly Do we care about cache-timing attacks? To guess the random bits, Mallory might do a cache-timing attack. To defend against it, first, you must never branch based on the data from the OneRNG. I need to go back and fix this in my infnoise driver. It's easy to get this wrong. This is particularly easy to get wrong if you do any health monitoring, like I do, in the driver. OneRNG does health monitoring on the USB key, which is secure against cache-timing attacks. Even harder is doing no data-based memory addressing. A statistical analysis such as what rngd does will read and/or write data to a lot of places that depend on the TRNG data.
My infnoise driver also does this, reading from a memory location addressed by the last 14 bits of TRNG output, to find the expectation that the next bit will be a 1 or 0. Does the OneRNG driver look for a string of only 0's or 1's coming from the device, and stop the driver if only 1's or 0's are output? If you do, you help defend against faulty OneRNG devices, or devices that Mallory has compromised. However, by incrementing the 0's memory location for every 0, and the 1's memory location for every 1, you give Mallory the possibility of a cache-timing attack. So, it is unclear to me whether it makes sense to worry about this cache-timing attack. The more extensive the server-side health monitoring, the more cache-timing susceptible we are. For now, I'll just point out these cache-timing attacks, but won't worry about whether the improved health monitoring justifies the security hole. If you use rngd, there's no point in worrying anyway, since it will violate cache-timing defence rules far worse than your driver. Can an attacker directly attack the OneRNG? For example, it would be bad for the OneRNG to be accessible to Mallory in user space. I would prefer that it only be accessible by the daemon feeding the entropy pool (rngd?). Adding udev rules for users to use the OneRNG could be dangerous. Can Mallory load the system down and cause the OneRNG daemon to be swapped to disk? If so, Mallory wins if he can gain physical access to the disk later. Therefore, *all* buffers containing TRNG output should be considered as sensitive as passwords, and should be allocated in non-swappable memory using mmap, and a secure zeroing function (such as secure_zero_memory from the BLAKE2 source) should be used to clear all TRNG bits after they are fed to /dev/random (using ioctl). Can Mallory cause any kind of time-out in the USB communication, by causing high system load?
My infnoise driver does the bitbang hack to control the clocking of the circuit on the board from the infnoise driver. If left unclocked for too long, the voltage in the INM drifts to 0, allowing Mallory to guess the next several bits when clocking starts again. To defend against this, I will have to measure the time between packets read from the INM and when it is too long, I'll have to drop the packets. I think the hardware design of OneRNG is excellent for security. A solid code review of the driver probably is in order at some point, however. This stuff is *very* easy to get wrong - my infnoise driver needs a lot of work, and I'll bet most TRNG drivers out there don't even consider the security of the TRNG bits against things like swapping to disk. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Thu Oct 23 07:27:51 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 23 Oct 2014 07:27:51 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: <3FDCE2AF-DF12-4C05-A6FF-E5E1C3789B5F@lrw.com> On Oct 22, 2014, at 10:35 AM, Henry Baker wrote: > I like this AC hum idea for a crypto clock, except that: > > 1. It is highly local, so you need recordings from your local power provider to provide a time base.... Somehow we went from using the AC hum as a forensic mechanism to using it as a clock. The use of the 60 (or 50) Hz baseline power frequency to produce accurate electric clocks goes way back. In fact, this was a usage specifically supported by the power companies: While all the generators in a system need to be synchronized, there's no need for them to maintain long-range stability or remain centered at 60Hz. 
But cheap, accurate, synchronized electric clocks were an early selling point, so the systems were built to actually stay close to the nominal center frequency, and were deliberately manipulated to long-term average stability. To do this, the systems themselves need an accurate time scale to refer to - and most likely they rely on NIST. So you'd be getting the NIST timebase, with noise. On a human scale, over reasonable human periods of time, the errors are nil. On a scale appropriate to today's computers, the story would be very different. Deliberately manipulating the frequency across a whole grid for long enough to matter to humans would be extremely difficult and would get noticed: We now have other, accurate time providers to compare our old electric clocks to. Of course, if you're manipulating the environment, you can plug the device, not into the wall, but into your own frequency generator and make it see whatever you like. The article Jonathan Thornburg linked to (http://www.bbc.co.uk/news/science-environment-20629671) describes the *forensic* mechanism. It's based on looking at the short-time-scale variations in frequency around the nominal center point. These are caused by variations in load, variations in supply (generators coming on and off line), lightning strikes, surges due to solar weather, and the interaction of these effects with the synchronizers that are put into the system exactly to keep those variations under control. They are *not* local, but are constant within a single electric grid - synchronization across a grid is exactly the point! Grids are very large. England is covered by just one grid. The continental US is covered by something like three, if I remember correctly. (It may be a bit more, but we're still talking a handful.) These variations are unpredictable in detail, but easily recorded anywhere on the grid. 
Recording them *deliberately* as an absolute measure of time is an interesting idea - essentially aiding the use of the forensic technique. In principle, one could probably replay past variations in a highly controlled setting to make it look as if a recording was made at some point in the past, but it sounds rather hard to accomplish. How successful one might be in replacing a recorded signal with a different one without leaving detectable artifacts is impossible to say without actual testing. BTW, there's an interesting contrast here between a "tick generator" - which gives you an accurate repeatable way to step a clock from some fixed point - and an "absolute time reference", which lets you map from something (like a record of hum frequency variation) to absolute date and time. We generally think of "clocks" as tick generators that we start off at some external date and time; absolute time references are relatively infrequent in day to day use. (They are omnipresent in analyses of records of the past - e.g., looking at the stratum in which a fossil is found as a way of dating it.) There's a Youtube video out there of a "clock" that measures elapsed time directly by looking at a couple of simple measurements that change as a potato rots.... -- Jerry From hanno at hboeck.de Thu Oct 23 07:30:20 2014 From: hanno at hboeck.de (Hanno Böck) Date: Thu, 23 Oct 2014 13:30:20 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: Message-ID: <20141023133020.28ab656f@pc> Am Thu, 23 Oct 2014 04:43:05 +1100 (EST) schrieb Dave Horsfall : > A wild idea just occurred to me (I get them all the time). > > Just grep the mail logs for rejected spammers, and hash that info etc > i.e. use the system against itself; two birds, one hammer. You don't really have a problem with getting enough entropy once you have a system running with mail and an anti-spam-filter. At that point you already have network timings and disk access.
The tough part is "early-boot-time-entropy" - where do you get your entropy if you don't have any filesystems and network access initialized yet? Please remember: Once you have a single source of reliable entropy for a few bytes you don't really have a problem any more if your PRNG isn't completely crap. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From anzalaya at gmail.com Thu Oct 23 08:00:13 2014 From: anzalaya at gmail.com (Alexandre Anzala-Yamajako) Date: Thu, 23 Oct 2014 14:00:13 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141023133020.28ab656f@pc> References: <20141023133020.28ab656f@pc> Message-ID: Using emails/spams seems like it gives a networked attacker read/write access to your entropy source (I can always send you more spam), which is probably not the best idea. Anyway, Hanno is right: if it doesn't solve the "startup problem" then it probably isn't much better than what we already have. -- Alexandre Anzala-Yamajako From leichter at lrw.com Thu Oct 23 21:06:50 2014 From: leichter at lrw.com (Jerry Leichter) Date: Thu, 23 Oct 2014 21:06:50 -0400 Subject: [Cryptography] Samsung Knox Message-ID: Proving again that (a) most companies have no clue how to do security; (b) most government agencies have no clue how to audit it; we have two related bits of news: 1. Two days ago, Samsung proudly announced that "Samsung Galaxy Devices based on KNOX platform are the First Consumer Mobile Devices NIAP-Validated and Approved for U.S. Government Classified Use" -http://global.samsungtomorrow.com/?p=43522 2. And this was followed by: http://mobilesecurityares.blogspot.co.uk/2014/10/why-samsung-knox-isnt-really-fort-knox.html?m=1 which completely demolishes Knox security.
(The user's password is encrypted using a key derived from a fixed constant and a device serial number available to any app on the device.) I would laugh if I weren't crying.... -- Jerry From linus at nordberg.se Thu Oct 23 04:52:52 2014 From: linus at nordberg.se (Linus Nordberg) Date: Thu, 23 Oct 2014 10:52:52 +0200 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: <201410221412.s9MECbCl014892@new.toad.com> (John Gilmore's message of "Wed, 22 Oct 2014 07:12:37 -0700") References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> <201410221412.s9MECbCl014892@new.toad.com> Message-ID: <87egtz88wr.fsf@nordberg.se> John Gilmore wrote Wed, 22 Oct 2014 07:12:37 -0700: | Isn't there also some research showing that over time, you can tell | what time zone a remote computer is in, by pinging it for timestamps, | and noticing when its oscillators run minutely faster during the heat | of the day, and slower during the cool of the night? Steven J. Murdoch Hot or Not: Revealing Hidden Services by their Clock Skew http://www.cl.cam.ac.uk/~sjm217/papers/ccs06hotornot.pdf From dj at deadhat.com Thu Oct 23 10:44:49 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Thu, 23 Oct 2014 14:44:49 -0000 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <5448DC60.2030904@cs.tcd.ie> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> <006601cfee13$89286930$9b793b90$@acm.org> <07266404ca2a96bafbd54a2f17752174.squirrel@www.deadhat.com> <5448DC60.2030904@cs.tcd.ie> Message-ID: > As of now, it looks like IETF protocols will be adopting chacha20 > with poly1305 as per [1]. I believe that is being implemented in Alas, AEAD schemes and stream ciphers would be a different document at a different time. I would be very happy for this to be in the specs. But since it's my first meeting in ISO I haven't earned my black belt in ISO standards setting yet. 
I have however verified that all the mind-control techniques that work in the IEEE work just fine in ISO. From dj at deadhat.com Thu Oct 23 13:43:49 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Thu, 23 Oct 2014 17:43:49 -0000 Subject: [Cryptography] Best internet crypto clock In-Reply-To: References: <1413910953.21882.1.camel@sonic.net> <20141021214825.GA8146@copper.astro.indiana.edu> Message-ID: > On Wed, Oct 22, 2014 at 10:35 AM, Henry Baker > wrote: > >> Perhaps a better source would be something that couldn't possibly be >> hacked -- e.g., variations in solar flux of neutrinos or other solar >> variations. > > How about using the earth's rotation? :) > > http://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html;jsessionid=5F896E89B594D8D5B7A9E38DE4AD6BF0.live1 > > Only Superman can hack that... > _______________________________________________ Remote sources require external inputs. Transistors have plenty of thermal noise in their gates. It's local, well understood and can be modeled for min-entropy analysis over all environmental conditions and attack scenarios. You just need to know how to get at it in a robust way. We published two different circuits that do this. The one for RdRand that's been repeated in numerous papers (like this: http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf) and the other is this one: http://www.researchgate.net/publication/224170854_2.4GHz_7mW_all-digital_PVT-variation_tolerant_True_Random_Number_Generator_in_45nm_CMOS There are many gigabits/s of data you can get out of a transistor with a high entropy distribution. You can be reasonably confident that the noise in the transistor gate is the aggregate signal from many quantum events in the particles out of which it is constructed. On silicon, the circuits are small. You may be able to do something similar with discrete components, albeit at lower speed.
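For software-side sanity checks, the first-order version of such a min-entropy analysis is easy to sketch. This is just the most-common-value estimate for an assumed-i.i.d. source, in the spirit of NIST SP 800-90B; a real assessment must also justify the i.i.d. assumption and use the full battery of estimators:

```python
# Sketch of the simplest min-entropy estimate for an assumed-i.i.d.
# source: H_min = -log2(p_max), where p_max is the observed frequency
# of the most common sample value (the "most common value" estimator
# from NIST SP 800-90B; real analyses must also test the i.i.d. claim).
import math
from collections import Counter

def min_entropy_per_sample(samples):
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A heavily biased source: the value 0 appears 90% of the time.
biased = [0] * 900 + list(range(1, 101))
print(min_entropy_per_sample(biased))   # about 0.152 bits per sample
```

A uniform byte source would score close to 8 bits per sample; the estimate only upper-bounds the real entropy, which is why hardware modeling over environmental conditions still matters.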
From mitch at niftyegg.com Thu Oct 23 20:09:54 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Thu, 23 Oct 2014 17:09:54 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141023133020.28ab656f@pc> References: <20141023133020.28ab656f@pc> Message-ID: On Thu, Oct 23, 2014 at 4:30 AM, Hanno Böck wrote: > Am Thu, 23 Oct 2014 04:43:05 +1100 (EST) > schrieb Dave Horsfall : > > > A wild idea just occurred to me (I get them all the time). ..... > > The tough part is "early-boot-time-entropy" - where do you get your > entropy if you don't have any filesystems and network access > initialized yet? > What "early" needs are there for entropy? Most devices will have a little or a lot of persistent memory that can be used to save an entropy-rich seed from "last time" the system was live. The internet of things... are a challenge. Refrigerators and TV are expected to be resource starved... but other systems seem to have engineering options. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephan.neuhaus at tik.ee.ethz.ch Fri Oct 24 00:46:03 2014 From: stephan.neuhaus at tik.ee.ethz.ch (Stephan Neuhaus) Date: Fri, 24 Oct 2014 06:46:03 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> Message-ID: <5449D98B.3010004@tik.ee.ethz.ch> On 2014-10-24 02:09, Tom Mitchell wrote: > What "early" needs are there for entropy? Most SSH keys are generated on first-time boot.
Fun, Stephan From rsalz at akamai.com Fri Oct 24 04:12:05 2014 From: rsalz at akamai.com (Salz, Rich) Date: Fri, 24 Oct 2014 04:12:05 -0400 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <20141022112840.20efb992@pc> <006601cfee13$89286930$9b793b90$@acm.org> <07266404ca2a96bafbd54a2f17752174.squirrel@www.deadhat.com> <5448DC60.2030904@cs.tcd.ie> Message-ID: <2A0EFB9C05D0164E98F19BB0AF3708C71D3AF6521E@USMBX1.msg.corp.akamai.com> > Alas, AEAD schemes and stream ciphers would be a different document at a > different time. See if you can get them to un-ask the question. Those ciphers have had no public scrutiny. The WWW has zero interest and even the IoT has no problem doing AES. We don't need unstudied private alternatives from an untrustworthy source. -- Principal Security Engineer, Akamai Technologies IM: rsalz at jabber.me Twitter: RichSalz From hanno at hboeck.de Fri Oct 24 04:49:08 2014 From: hanno at hboeck.de (Hanno =?ISO-8859-1?B?QvZjaw==?=) Date: Fri, 24 Oct 2014 10:49:08 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> Message-ID: <20141024104908.5e491da9@pc> Am Thu, 23 Oct 2014 17:09:54 -0700 schrieb Tom Mitchell : > On Thu, Oct 23, 2014 at 4:30 AM, Hanno Böck wrote: > > > The tough part is "early-boot-time-entropy" - where do you get your > > entropy if you don't have any filesystems and network access > > initialized yet? > > > > What "early" needs are there for entropy? Networking, Stack Canaries of first processes etc. Recently saw a talk at Blackhat EU about it; this seems to be the background paper: https://www.usenix.org/system/files/conference/woot14/woot14-kaplan.pdf Interesting stuff. > Most devices will have a little or a lot of persistent > memory that can be used to save an entropy rich > seed saved from "last time" the system was live. The other issue you'll have is "first time boot".
Then you don't have any entropy from previous boots. See the RSA key issue Nadia Heninger and others found a couple of years ago: https://factorable.net/paper.html > The internet of things... are a challenge. Refrigerators and > TV are expected to be resource starved... but other systems > seem to have engineering options. It's not just IoT. The RSA attack shows that there are very real problems with embedded devices on the market today. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From waywardgeek at gmail.com Fri Oct 24 05:31:51 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 24 Oct 2014 05:31:51 -0400 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture Message-ID: The "right TRNG architecture" looks like this: auditable cheap low speed TRNG -> auditable high speed CPRNG -> happy user Respectable TRNGs like the new Cryptech Tech TRNG are switching to this architecture. If you use *any* secure TRNG to feed /dev/random, regardless of its speed, and then read your cryptographic key data from /dev/urandom, then you are already using this model. However, I happen to be something of a speed freak. Intel's RDRAND instruction is appealing to me. The architecture is the fastest TRNG I have seen. So, why not use it? Here's why: - It is probably back doored - It is not auditable - Critical portions of its design remain secret (such as whitening and how to disable it) - We don't need a fast TRNG, even though it's cool I have no knowledge of any intentional back door in this architecture, but if I were to design a TRNG for a processor, and wanted to insert a hard to detect back door, this is the architecture I would use. I simulated this architecture myself in SPICE in a .35 micron CMOS process. It seems to work, and man it's fast!
It basically powers up a latch and lets it randomly initialize to either a 0 or 1, and uses that as its random output. If you wait 20ns, the latch will have flipped one way or the other, and you can read out the result. If you don't mind reading the latch before it's sure to have settled to a 0 or 1, you can read it as fast as your clock runs, and still get a high average entropy per bit. There is simply no reason not to run this at 3GHz. That said, this TRNG has so many drawbacks that I predict no one other than Intel will ever use it. First, it requires a couple of large-ish on-chip capacitors to hold the control voltages that compensate for factors that cause the latch to prefer to power up one way or the other. Without measuring the 0/1 bias and dynamically compensating for it, this circuit simply does not work. This by itself makes Intel's TRNG both large and complex. Worse, it is *massively* sensitive to nearby signals. It is more sensitive to external signals than any other architecture I know of. No other TRNG relies on amplifying such a small noise signal, and no other architecture can be PWNed with as little injected energy. This is literally the most attacker signal sensitive TRNG ever designed. Its power supply sensitivity is so bad, Intel actually *patented* regulating the supply of a TRNG in order to reduce the impact of a power drain attack on their device, thus making it *illegal* for any of us to build this architecture securely. Simply executing a power hungry loop surely would otherwise cause this TRNG to output a long sequence of either 1's or 0's, until the bias circuit manages to charge the capacitors to different levels to compensate. There is *zero* published evidence that Intel's power supply regulation actually works well enough to defend against this attack. We can't even test for ourselves, because Intel purposely hides the raw TRNG output!
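The dependence on slow bias compensation can be seen in a toy numerical model. Nothing here is Intel's actual circuit (the noise level, feedback gain, and attack offset are all invented for illustration), but it shows why a sudden supply-induced offset would produce a long run of identical bits until the compensation capacitor catches up:

```python
import random

random.seed(1)

def run_latch(cycles, attack_at):
    comp = 0.0        # voltage on the bias-compensation capacitor
    shift = 0.0       # attacker-induced offset (e.g. a power-hungry loop)
    bits = []
    for t in range(cycles):
        if t == attack_at:
            shift = 4.0                    # sudden offset swamps the unit noise
        noise = random.gauss(0.0, 1.0)     # thermal noise at the latch input
        bit = 1 if noise + shift - comp > 0 else 0
        bits.append(bit)
        comp += 0.01 * (bit - 0.5)         # slow feedback toward 50% duty cycle
    return bits

bits = run_latch(2000, attack_at=1000)
before = sum(bits[:1000]) / 1000           # roughly balanced before the attack
after = sum(bits[1000:1100]) / 100         # almost all 1s right after it
```

With these made-up constants the feedback needs hundreds of cycles to recharge, so the 100 bits following the attack are nearly all 1s; that window is the attacker's opportunity.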
The power drain attack is so obvious that I suspect Intel's engineers who aren't owned by a TLA would not allow this on their chip without a separate power regulator. This circuit is also affected by substrate currents, which is my preferred method for back-dooring it in a way that most Intel engineers would not notice. An attacker with special knowledge of surrounding components could easily influence the device through this path, which will not show up in any schematic. It also will not be flagged as a potential problem by any DRC or signal integrity tool. No SPICE simulation will reflect this attack unless the netlist is manually modified to take this into account. Intel's hordes of EDA tool pushers would give this attack a green light, because it would pass every step in their tried-and-true IC verification process. If they did look for substrate current attacks, maybe I could PWN it using light. Certainly this device would change its power-up behaviour by simply shining a flashlight on it. If I could get some circuit on the chip to emit some low levels of infrared, I could probably PWN it that way. Another potential attack is rapidly changing the thermal gradient. If a nearby portion of the die could be made hot very quickly, it might change the power-up preference of the latch faster than the bias correction circuit can compensate. These are just some of the ways this device could be back-doored or PWNed. I bet we could have an entertaining competition to come up with the most creative way. Despite the NSA's likely back door in RDRAND, adding it to your entropy pool will still defeat anyone who lacks knowledge about the back door. Most of the bad guys we worry about lack this knowledge. However, one day the back door may be reverse-engineered and published on the Internet. Relying on RDRAND for security is a bad idea, even if you don't worry about the NSA's snooping.
However, adding it to the entropy pool is a good idea, so long as we don't increase the entropy estimate. Linus was right to bash that guy who wanted RDRAND banned from the Linux entropy pool. So, why do we need true random data at high speed so badly that Intel decided to build in a device requiring large capacitors and its own power regulator? The truth is, we don't need high speed. As many people have argued here, all any single system requires is 256 bits of true random data. That's all they *ever* need, so long as it remains secret (which is hard), and so long as a cryptographically secure PRNG (CPRNG) is used to generate all future cryptographically pseudo-random data (which is comparatively easy). A TRNG simply does not need to be fast. A Lava Lamp generates entropy fast enough for almost any application, so long as we use it to seed a high speed CPRNG firehose. Anyone selling you a high speed TRNG for a lot of money, based on quantum voodoo or whatever, is ripping you off. Due to Intel's inexplicable reluctance to make their device auditable, while relying on what is probably the hardest TRNG architecture to get right, I have to rate RDRAND as snake-oil for use in cryptography. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgut001 at cs.auckland.ac.nz Fri Oct 24 05:53:09 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Fri, 24 Oct 2014 22:53:09 +1300 Subject: [Cryptography] Samsung Knox In-Reply-To: Message-ID: Jerry Leichter writes: >Samsung proudly announced that "Samsung Galaxy Devices based on KNOX platform are the First Consumer Mobile Devices NIAP-Validated and Approved for U.S. Government Classified Use" While this again pretty much confirms my opinion of the value of security certification programs (although the fact that it was CC did surprise me slightly, I would have expected it from FIPS 140 but I thought CC was a bit better than that), I wonder what'll happen to the certification?
Will it be withdrawn, or will the issue just be ignored? Peter. From fedor.brunner at azet.sk Fri Oct 24 07:53:44 2014 From: fedor.brunner at azet.sk (Fedor Brunner) Date: Fri, 24 Oct 2014 13:53:44 +0200 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> Message-ID: <544A3DC8.20100@azet.sk> On 22.10.2014 00:16, dj at deadhat.com wrote: > Today the NSA proposed that Simon and Speck be added the the ISO JTC1/SC27 > approved ciphers spec. > > A study period was approved. > > But no other non NSA lightweight algorithms have been proposed to ISO, > other than chaskey from Hitachi. > > If you have opinions on alternatives for lightweight block vipers, macs, > hashes etc, please let me know so I can try and set the ball rolling with > ISO. > > Simon and speck look OK. But the source is not a little bit tainted. > According to Joachim Strömbergson: https://www.ietf.org/mail-archive/web/tls/current/msg13824.html SPECK and SIMON have been found to be weak against differential cryptanalysis: https://eprint.iacr.org/2013/568.pdf https://eprint.iacr.org/2013/543.pdf > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 931 bytes Desc: OpenPGP digital signature URL: From anzalaya at gmail.com Fri Oct 24 08:18:39 2014 From: anzalaya at gmail.com (Alexandre Anzala-Yamajako) Date: Fri, 24 Oct 2014 14:18:39 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> Message-ID: Recent research has shown that numerous devices (headless servers for example) generate their long lived cryptographic keys upon their first start.
In that case there is no "last time" that can be reliably trusted. Unless I misunderstood your point I don't clearly see the engineering option. Regards Alexandre Anzala-Yamajako From dan at geer.org Fri Oct 24 08:56:51 2014 From: dan at geer.org (dan at geer.org) Date: Fri, 24 Oct 2014 08:56:51 -0400 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: Your message of "Wed, 22 Oct 2014 07:12:37 -0700." <201410221412.s9MECbCl014892@new.toad.com> Message-ID: <20141024125651.AB7CF228167@palinka.tinho.net> > That's fascinating! Turning an annoyance of audio engineers and home > stereos into a forensic tool. On a similar line, see www.pindropsecurity.com which uses telephony background noise for near-real-time spoofing detection, inter alia. --dan From leichter at lrw.com Fri Oct 24 12:22:41 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 24 Oct 2014 12:22:41 -0400 Subject: [Cryptography] Samsung Knox In-Reply-To: References: Message-ID: On Oct 24, 2014, at 5:53 AM, Peter Gutmann wrote: >> Samsung proudly announced that "Samsung Galaxy Devices based on KNOX platform are the First Consumer Mobile Devices NIAP-Validated and Approved for U.S. Government Classified Use" > > While this again much confirms my opinion of the value of security > certification programs (although the fact that it was CC did surprise me > slightly, I would have expected it from FIPS 140 but I thought CC was a bit better than that), I wonder what'll happen to the certification? A big weakness with these certification programs is that you get to define the "box" that gets certified. A reasonable bet is that "secure storage of the password" was simply not within the certification boundaries - it was just assumed to be secure. Or, more likely, nothing in the certification boundaries had anything to do with storage of the password - it was written on the assumption that the password simply appears (presumably the user enters it) and then we go from there. 
BTW, if you read the linked article, you realize how bad things really were. The hole was found because the guy doing the analysis first figured out how to get at the "password hint" - which is automatically computed for you as the first and last characters, and the actual length, of your password! (This alone is *already* a big security issue.) He then wondered whether that was computed on the fly from the actual password - implying that it was somehow available in cleartext. And, indeed - in some deeply buried and obfuscated, but ultimately reversible, code, it was. > Will it be withdrawn, or will the issue just be ignored. If, indeed, my guess about how this slipped by is correct, then the validations are, according to the rules of the process, perfectly valid - which would put the validating agencies in an embarrassing position. It's not as if those of us who've ever dealt with these processes don't already know that they have very, very limited worth - it's just that *most* users have no clue. Defining the boundaries of the validation is important, of course - you don't want to fail a device because someone watching as its user enters the keystrokes can read the key. But when I last had anything to do with these validations, there were basically no rules about where you put the boundaries. Technically, you might only have certification for a low-level crypto module; but it was easy to describe things to imply to virtually everyone that it's the entire device that's been validated, all without violating the letter of any standards or regulations. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From pgut001 at cs.auckland.ac.nz Fri Oct 24 12:27:38 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Sat, 25 Oct 2014 05:27:38 +1300 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <544A3DC8.20100@azet.sk> Message-ID: dj at deadhat.com writes: >Today the NSA proposed that Simon and Speck be added the the ISO JTC1/SC27 >approved ciphers spec. Does anyone know what this is? It isn't the current incarnation of ISO 9979 is it? That was created in the early 1990s when the ISO decided, all by itself, without any external pressure applied at all, not to standardise any crypto algorithms. The register was created as a compromise where the ISO appeared to do something but didn't really do anything, it's just a number-allocation method and nothing more. It's also been, after an initial burst of registrations for algorithms like DES, IDEA, and RC2/4, a graveyard of algorithms that no-one cares about (FWZ1, SPEAM1, CIPHERUNICORN, Triplo, FSAngo), if you have something that you can't get standardised anywhere then you dump it into 9979 (and possibly now the ISO JTC1/SC27 registry). Peter.
We need to fix this problem. We require the RNG system to work correctly at all times, even early in the startup process, even during the very first boot. To make this possible, any device, no matter how large or small, MUST be *provisioned* with some entropy. We must train people to do this, as a basic element of sound engineering practice. The idea of provisioning is discussed at "Security Recommendations for Any Device that Depends on Randomly-Generated Numbers" https://www.av8n.com/computer/htm/secure-random.htm especially https://www.av8n.com/computer/htm/secure-random.htm#advice-provisioned On 10/24/2014 05:18 AM, Alexandre Anzala-Yamajako wrote: >> In that case there is no "last time" that can be reliably trusted. >> Unless I misunderstood your point I don't clearly see the engineering option. Provisioning is an option ... AFAICT the only option. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIVAwUBVEqGQ/O9SFghczXtAQKJ5g/+PkjYeZXTMAgPnqzGdgQ5DBBj4u/tL7HQ 2rwJtQKdwKvUAbnQMFyN4Yd2MeNiC3u/bVl2Mj8nG4EBXg3xNqbTHWcytteHFoDr C+aQ2CfHogm1iaDQxZFSFr544uW0ETavecsYJbJ/Ta7iHl1nu8/znDWSr1aLIsA7 wj0jJbL/XNGSLKicPnrLn5w6QG7MdMc2J7Y0PFzDPIzqvT6irQcfEcSg5lMtuWsc 8ceco7B9qSjVIX01t3ORp/uKC1uNqnjB1eZF5FVhZoJak4HSVx55Yayhui+NvkSH GS0AsYHoYk1TgyL1ekSp61+o2bsz+j9TblAHtytTWnPVqwGSTycJ2fTjb9leBn5E okQl63dXBShgxSCY020fEx+xt35alP55dSg9GrjQL5gcpQMKKA3XmHg8JW7LWpaT SWoCHKBEmtxxEyQm6z9LzcrWsEOjG19t5NrTif0Z3QYVKj0vlN4FlcXNfn5FuJwr Dfh/J1GtwvQJstxrkwRdruay9zjH8wVKvSsKSimsqRBnFBc94YH3P50JO5IEeXdV bQOHs+D15D4SI0pcRITMUm5PlsEt5E4gcX3jAeObOXrnOiibMhkdiaPFdpo/nWUK Hvs/T6cCiNwzYN7adJhqPCiF18+XYAgW3be1VjL7DVdsUW8eIF0uQtDEEbEsoXM2 HBqWhCNYc0w= =Fy6a -----END PGP SIGNATURE----- From hbaker1 at pipeline.com Fri Oct 24 13:09:08 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Fri, 24 Oct 2014 10:09:08 -0700 Subject: [Cryptography] Best internet crypto clock: hmmmmm... 
In-Reply-To: <20141024125651.AB7CF228167@palinka.tinho.net> References: <20141024125651.AB7CF228167@palinka.tinho.net> Message-ID: At 05:56 AM 10/24/2014, dan at geer.org wrote: > > That's fascinating! Turning an annoyance of audio engineers and home stereos into a forensic tool. > >On a similar line, see www.pindropsecurity.com which uses >telephony background noise for near-real-time spoofing >detection, inter alia. Here's Balasubramaniyan's PhD thesis describing the Pindr0p technology: From: https://smartech.gatech.edu/bitstream/handle/1853/44920/balasubramaniyan_vijay_a_201108_phd.pdf Size: 2.3 MB (2,311,948 bytes) From bear at sonic.net Fri Oct 24 13:56:35 2014 From: bear at sonic.net (Bear) Date: Fri, 24 Oct 2014 10:56:35 -0700 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: Message-ID: <1414173395.31285.1.camel@sonic.net> On Fri, 2014-10-24 at 05:31 -0400, Bill Cox wrote: > > So, why do we need true random data at high speed so badly that Intel > decided to build in a device requiring large capacitors and it's own > power regulator? The truth is, we don't need high speed. As many > people have argued here, all any single system requires is 256 bits of > true random data. That's all they *ever* need, so long as it remains > secret (which is hard), and so long as a cryptographically secure PRNG > (CPRNG) is used to generate all future cryptographically pseudo-random > data (which is comparatively easy). I think I'm going to take issue with this. While 256 bits plus a CPRNG is enough to prevent known and practical means of predicting the stream of numbers created, it does not constitute proof that a stream of outputs of that length *CANNOT* be predicted. It restricts the uncorrelated sequence length to be provably no more than 256 bits. Actually, it forces the uncorrelated sequence length to be provably less than 256 bits assuming a CPRNG. 
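The model being debated here, a slow true-random seed expanded by a fast deterministic generator, can be sketched with a stdlib XOF standing in for the CPRNG. SHAKE-256 is an illustrative choice only, not a claim about how /dev/urandom or any real kernel pool is built:

```python
import hashlib
import os

class SketchCPRNG:
    """Toy expand-only CPRNG: one 256-bit true-random seed, unlimited output."""

    def __init__(self, seed32: bytes):
        assert len(seed32) == 32           # exactly 256 bits of entropy in
        self._xof = hashlib.shake_256(seed32)
        self._pos = 0

    def read(self, n: int) -> bytes:
        # SHAKE output is a prefix-consistent stream, so we can re-digest a
        # longer prefix and slice off the bytes not yet handed out.
        out = self._xof.digest(self._pos + n)[self._pos:]
        self._pos += n
        return out

rng = SketchCPRNG(os.urandom(32))          # slow TRNG seeds fast CPRNG
stream = rng.read(64) + rng.read(64)       # deterministic expansion from here on
```

Bear's objection applies exactly here: every byte of `stream` is a function of the 32-byte seed, so no output run longer than 256 bits can carry fresh entropy.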
In order to prove an uncorrelated sequence length, it is necessary to first accept some standard for correlation, and then prove that every sequence of a given length is produced by a set of the possible initial states of the PRNG none of which are larger or smaller than any of the other sets by a ratio larger than your correlation parameter. With a state of size N, there aren't even as many possible initial states as there are output sequences of length N+1, so some output sequences must be impossible - the set of initial states that produces those sequences has cardinality zero, and zero will definitely fall outside any correlation parameter. That is, whether or not there's any *known* way to mathematically predict whether the next bit is a 1 or a 0, there are guaranteed to *be* sequences at minimum 257 bits long which can never be produced, and therefore we cannot prove that there is no *unknown* way to mathematically predict whether the next bit is a 1 or a 0. The distinction may not be important for most applications; but if an attacker knows some important technique for attacking the CPRNG that you don't know, and can eliminate whole classes of bit sequences as being impossible to produce from that CPRNG, the attacker may need to search a key space no larger than the uncorrelated sequence length -- and, for some ciphers such as RSA, a 256-bit search space is for other reasons relatively trivial. Using a CPRNG proven to have an uncorrelated sequence length of N bits means that the key space he has to search even when he knows a key has been produced by it, no matter what he knows about which sequences are possible and impossible (or more-likely and less-likely) given your CPRNG, is still never smaller than N bits. So, I would say that seeding with just 256 bits of state is only reasonable if for some reason you absolutely know that there can be no *POSSIBLE* attack on your CPRNG. Because math that enables new attacks will keep being discovered..... 
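The counting argument above can be checked mechanically on a toy generator. This sketch uses a made-up 3-bit LFSR (polynomial x^3 + x + 1, chosen purely for illustration) and enumerates every possible seed to show that output sequences one bit longer than the state are already partly impossible:

```python
from itertools import product

STATE_BITS = 3

def step(s):
    # 3-bit Galois LFSR, polynomial x^3 + x + 1 (illustrative toy PRNG).
    fb = 0b011 if s & 0b100 else 0
    return ((s << 1) & 0b111) ^ fb

def output_seq(seed, n):
    bits, s = [], seed
    for _ in range(n):
        bits.append(s & 1)
        s = step(s)
    return tuple(bits)

n = STATE_BITS + 1  # sequences one bit longer than the state
produced = {output_seq(seed, n) for seed in range(1 << STATE_BITS)}
missing = set(product((0, 1), repeat=n)) - produced
# 8 seeds can cover at most 8 of the 16 possible 4-bit outputs, so at
# least half of them are impossible -- the pigeonhole count above.
```

Scaled up, the same count says a generator with 256 bits of state must miss at least half of all 257-bit output sequences, which is exactly the gap between "no known attack" and "provably uncorrelated".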
A relatively humble lagged-Fibonacci generator of the sort we would never use for cryptography, provably produces an uncorrelated sequence the size of its state. So does a linear feedback shift register. But, as is true with every generator that *provably* produces an uncorrelated sequence the size of its state, they are trivially predictable thereafter and therefore not a candidate for use as a cryptographically secure PRNG. Any PRNG must have some impossible sequences of outputs which are no longer than its state, and cryptographic PRNGs must have correlated sequences which are shorter. For a good CPRNG, I think it's important to prove a minimum length of uncorrelated sequences. A small state proves a maximum length for uncorrelated sequences. Bear From bear at sonic.net Fri Oct 24 14:02:51 2014 From: bear at sonic.net (Bear) Date: Fri, 24 Oct 2014 11:02:51 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: <5449D98B.3010004@tik.ee.ethz.ch> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> Message-ID: <1414173771.31285.3.camel@sonic.net> On Fri, 2014-10-24 at 06:46 +0200, Stephan Neuhaus wrote: > On 2014-10-24 02:09, Tom Mitchell wrote: > > What "early" needs are there for entropy? > > Most SSH keys are generated on first-time boot. This is dumb. This is bad design. We don't need to be providing early boot-time entropy; we need to be educating people that any design which requires early boot-time entropy is a mistake. Bear From mitch at niftyegg.com Fri Oct 24 14:47:04 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 24 Oct 2014 11:47:04 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> Message-ID: On Fri, Oct 24, 2014 at 5:18 AM, Alexandre Anzala-Yamajako < anzalaya at gmail.com> wrote: > Recent research has shown that numerous devices (headless servers for > example) generate their long lived cryptographic keys upon their first > start. 
> But in this case there is a file system and it is clear that extra effort is needed to generate entropy. Long lived keys would be known to be fragile and in need of replacement if the extra effort is missing. In that case there is no "last time" that can be reliably trusted. > Unless I misunderstood your point I don't clearly see the engineering > option. > The lack of persistent storage is the issue. A refrigerator or other device that generates a fresh set of keys each time it powers up needs hardware engineering. A factory process could be engineered that would insert a trustable entropy seed in such a device as part of testing. Network MAC addresses are factory programmed and if the device containing a MAC address had room such a seed could be used once and combined with site specific local data: time, temperature, arp, broadcast ping, DHCP, bootp ... input so the long lived keys can start life on improved footing. Other hardware could have a one time latch that gates read access to a seed and after reading the seed device once at boot time it would be unavailable. Future entropy seeding could look at it and also for a filesystem cookie combine them and move on. The first start is a challenge but engineering can improve it. Cost reductions can make it more difficult. Site process can connect to new headless servers quickly and insert trusted entropy, regenerate keys and more before allowing a server to connect to anything except a single trusted host. But this requires engineering at the site. Best practice analysis and models can remove a lot of mis-steps. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From ljcamp at indiana.edu Fri Oct 24 14:51:19 2014 From: ljcamp at indiana.edu (L Jean Camp) Date: Fri, 24 Oct 2014 14:51:19 -0400 Subject: [Cryptography] ryptography list: making security usable Message-ID: This might interest some of you ...
The Workshop on Usable Security (USEC) will be held in conjunction with NDSS on February 8, 2015. The deadline for USEC Workshop submissions is November 24, 2014. - In previous years, USEC has also been collocated with FC; for example in Okinawa, Bonaire, and Trinidad and Tobago. Additional information and paper submission instructions: http://www.internetsociety.org/events/ndss-symposium-2015/usec-workshop-call-papers ****************** The Workshop on Usable Security invites submissions on all aspects of human factors and usability in the context of security and privacy. USEC 2015 aims to bring together researchers already engaged in this interdisciplinary effort with other computer science researchers in areas such as visualization, artificial intelligence and theoretical computer science as well as researchers from other domains such as economics or psychology. We particularly encourage collaborative research from authors in multiple fields. Topics include, but are not limited to: * Evaluation of usability issues of existing security and privacy models or technology * Design and evaluation of new security and privacy models or technology * Impact of organizational policy or procurement decisions * Lessons learned from designing, deploying, managing or evaluating security and privacy technologies * Foundations of usable security and privacy * Methodology for usable security and privacy research * Ethical, psychological, sociological and economic aspects of security and privacy technologies USEC solicits short and full research papers. 
***** Program Committee Jens Grossklags (The Pennsylvania State University) - Chair Rebecca Balebako (Carnegie Mellon University) Zinaida Benenson (University of Erlangen-Nuremberg) Sonia Chiasson (Carleton University) Emiliano DeCristofaro (University College London) Tamara Denning (University of Utah) Alain Forget (Carnegie Mellon University) Julien Freudiger (PARC) Vaibhav Garg (VISA) Cormac Herley (Microsoft Research) Mike Just (Glasgow Caledonian University) Bart Knijnenburg (University of California, Irvine) Janne Lindqvist (Rutgers University) Heather Lipford (University of North Carolina at Charlotte) Debin Liu (Paypal) Xinru Page (University of California, Irvine) Adrienne Porter Felt (Google) Franziska Roesner (University of Washington) Pamela Wisniewski (The Pennsylvania State University) Kami Vaniea (Indiana University) Prof. L. Jean Camp http://www.ljean.com Human-Centered Security http://usablesecurity.net/ Economics of Security http://www.infosecon.net/ Congressional Fellow http://www.ieeeusa.org/policy/govfel/congfel.asp -------------- next part -------------- An HTML attachment was scrubbed... URL: From mitch at niftyegg.com Fri Oct 24 16:27:38 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Fri, 24 Oct 2014 13:27:38 -0700 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: On Fri, Oct 24, 2014 at 2:31 AM, Bill Cox wrote: > The "right TRNG architecture" looks like this: ...snip... So, why do we > need true random data at high speed so badly that Intel decided to build in > a device requiring large capacitors and it's own power regulator? > > Interesting.. one value of "very fast" is whitening logic has lots of bits to eliminate any color imposed by external events. Very fast has value, customers with clout often ask for and get special instructions in the instruction set. One largish site install in Utah might pay for the design feature. 
They might want vastly more bits for something. Cell phones do not need 64-bit cores with a 32-bit ABI, but marketing likes it. So fast also has market value.... At the hardware level I am curious how a large structure like this is shared by multiple cores. Keeping many cores synchronized is almost impossible unless some special Tandem lock-step check is turned on. Further sharing of logic for many cores and hyperthreading with speculative execution puts some of the interesting stuff at arm's length... Time will tell if arm's length is enough (even on an ARM core). -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Fri Oct 24 17:01:28 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Fri, 24 Oct 2014 17:01:28 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <1414173395.31285.1.camel@sonic.net> References: <1414173395.31285.1.camel@sonic.net> Message-ID: On Fri, Oct 24, 2014 at 1:56 PM, Bear wrote: > On Fri, 2014-10-24 at 05:31 -0400, Bill Cox wrote: > > > > > So, why do we need true random data at high speed so badly that Intel > > decided to build in a device requiring large capacitors and its own > > power regulator? The truth is, we don't need high speed. As many > > people have argued here, all any single system requires is 256 bits of > > true random data. That's all they *ever* need, so long as it remains > > secret (which is hard), and so long as a cryptographically secure PRNG > > (CPRNG) is used to generate all future cryptographically pseudo-random > > data (which is comparatively easy). > > > I think I'm going to take issue with this. While 256 bits plus a > CPRNG is enough to prevent known and practical means of predicting > the stream of numbers created, it does not constitute proof that > a stream of outputs of that length *CANNOT* be predicted. 
It > restricts the uncorrelated sequence length to be provably no more > than 256 bits. Actually, it forces the uncorrelated sequence > length to be provably less than 256 bits assuming a CPRNG. > I agree with you on that point. Turning off the TRNG leaves the CPRNG vulnerable. In that case, if the CPRNG algorithm is broken, a lot of generated keys could be compromised. Using a CPRNG with the TRNG turned off means your security drops to the level of the stream cipher or sponge chosen, which in theory is lower than a well-designed TRNG. In reality, I'm not so sure a CPRNG built using Keccak or ChaCha will be the weak link... To help defend against this, the TRNG should be left on, and applications needing the highest security should still read from /dev/random, while applications that can live with the assumption that the CPRNG is secure can read from /dev/urandom. When I write data to the Linux entropy pool, I only claim to write half as much as I measure. This should enable its CPRNG to generate any sequence, even if I am slightly off in my entropy estimate, though I would need to look under the hood at the Linux CPRNG to be sure. I suppose there are some use cases for high-speed TRNGs that are auditable and trustworthy. Electronic one-time pads for large organizations like the US military come to mind, though that seems a bit far-fetched. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian.gjosteen at math.ntnu.no Fri Oct 24 17:19:37 2014 From: kristian.gjosteen at math.ntnu.no (Kristian Gjøsteen) Date: Fri, 24 Oct 2014 23:19:37 +0200 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <544A3DC8.20100@azet.sk> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <544A3DC8.20100@azet.sk> Message-ID: <27A78A45-F99B-489A-B32F-C5483D4E3541@math.ntnu.no> 24. okt. 2014 kl. 
13.53 skrev Fedor Brunner : > > According to Joachim Strömbergson: > > https://www.ietf.org/mail-archive/web/tls/current/msg13824.html > > SPECK and SIMON have been found to be weak against differential > cryptanalysis: > > https://eprint.iacr.org/2013/568.pdf > > https://eprint.iacr.org/2013/543.pdf I looked at these papers for two minutes, and as far as I can tell, they report attacks on reduced-round variants. Which is what you would expect. What did I miss? -- Kristian Gjøsteen From hanno at hboeck.de Fri Oct 24 18:40:20 2014 From: hanno at hboeck.de (Hanno Böck) Date: Sat, 25 Oct 2014 00:40:20 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: <1414173771.31285.3.camel@sonic.net> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> Message-ID: <20141025004020.3a024ec9@pc> On Fri, 24 Oct 2014 11:02:51 -0700, Bear wrote: > On Fri, 2014-10-24 at 06:46 +0200, Stephan Neuhaus wrote: > > On 2014-10-24 02:09, Tom Mitchell wrote: > > > What "early" needs are there for entropy? > > > > Most SSH keys are generated on first-time boot. > > This is dumb. > > This is bad design. Do you have a smart alternative? What should these devices do? Pre-load them with a key? (I don't particularly like that idea) Tell users they need to generate a key on their Desktop for their new Internet of Things light switch? > We don't need to be providing early boot-time entropy; > we need to be educating people that any design which > requires early boot-time entropy is a mistake. Basically most exploit-mitigation techniques (aslr, stack canaries) these days require some kind of randomness. Sequence numbers should be random. There are a number of reasons in-kernel and early boot processes need good randomness. -- Hanno Böck http://hboeck.de/ mail/jabber: hanno at hboeck.de GPG: BBB51E42 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From coruus at gmail.com Fri Oct 24 19:32:54 2014 From: coruus at gmail.com (David Leon Gil) Date: Fri, 24 Oct 2014 19:32:54 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: On Fri, Oct 24, 2014 at 5:01 PM, Bill Cox wrote: > On Fri, Oct 24, 2014 at 1:56 PM, Bear wrote: >> On Fri, 2014-10-24 at 05:31 -0400, Bill Cox wrote: >> > So, why do we need true random data at high speed so badly that Intel >> > decided to build in a device requiring large capacitors and its own >> > power regulator? The truth is, we don't need high speed. As many >> > people have argued here, all any single system requires is 256 bits of >> > true random data. That's all they *ever* need, so long as it remains >> > secret (which is hard), and so long as a cryptographically secure PRNG >> > (CPRNG) is used to generate all future cryptographically pseudo-random >> > data (which is comparatively easy). The current provable-security bounds on recovering from state compromise require anywhere from 2 KiB to 20 KiB of input entropy to recover from "state compromise". See section 5.3 of http://www.cs.nyu.edu/~dodis/ps/prematureNext.pdf So, perhaps some applications would want a fairly large amount of entropy. From l at odewijk.nl Fri Oct 24 22:35:01 2014 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Sat, 25 Oct 2014 04:35:01 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> Message-ID: On Oct 24, 2014 4:49 AM, "Tom Mitchell" wrote: > > The internet of things... are a challenge. Refrigerators and > TV are expected to be resource starved... but other systems > seem to have engineering options. As long as they can check a digital signature (wouldn't try the internet without it...) 
they could check the signature of a blob of random data. That would have to be supplied from a not-so-starved source, which should be possible over an internet of things. That said, the TV probably has the most entropy of any household item, given the incoming data quantity. White noise, anyone? And lightbulbs probably the least. Or else doorbells? Regarding just-booted entropy, save some for later on a disk? There's also initial entropy to be scraped: temperatures, RAM state, last digits of the clock (hash of clock values), serial numbers, installed software packages, tempfiles, last edited files (browser history), etc. All hashed together. It works especially well since a real-life hack requires actually knowing many if not (nearly) all those values. Predictable isn't a big deal if it's just too hard to predict. That idea breaks the idea of "bits of entropy equivalent" until you've modeled an adversary and guessed the probabilities of the adversary knowing certain values. Real-time protocols might gain no entropy at all from sampling the clock, whereas for long-term storage (many years) there can definitely be bits of entropy to be found. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leichter at lrw.com Fri Oct 24 23:24:47 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 24 Oct 2014 23:24:47 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <1414173395.31285.1.camel@sonic.net> References: <1414173395.31285.1.camel@sonic.net> Message-ID: On Oct 24, 2014, at 1:56 PM, Bear wrote: >> So, why do we need true random data at high speed so badly that Intel >> decided to build in a device requiring large capacitors and its own >> power regulator? The truth is, we don't need high speed. As many >> people have argued here, all any single system requires is 256 bits of >> true random data. 
That's all they *ever* need, so long as it remains >> secret (which is hard), and so long as a cryptographically secure PRNG >> (CPRNG) is used to generate all future cryptographically pseudo-random >> data (which is comparatively easy). > I think I'm going to take issue with this. While 256 bits plus a > CPRNG is enough to prevent known and practical means of predicting > the stream of numbers created, it does not constitute proof that > a stream of outputs of that length *CANNOT* be predicted. It > restricts the uncorrelated sequence length to be provably no more > than 256 bits. Actually, it forces the uncorrelated sequence > length to be provably less than 256 bits assuming a CPRNG.... A CPRNG that is at least as "hard" as the algorithms with which it's used cannot provide a point of attack. For example, if you rely on AES-256 for your cryptography, and your protocols are secure under the assumption (as is common these days) that AES-256 is indistinguishable from a random sequence, then generating your random numbers using AES-256 in counter mode with a true random key exposes you to no attack that wasn't already present. Now, I'll agree this is not a very "clean" assumption. You'd really like a random number generator that you can use with any cryptosystem of interest. If you want to use ChaCha, then using AES to generate your "random" numbers leaves two points for analytic attacks: AES and ChaCha. What this argument comes down to is that there is no such thing as a generic CPRNG. What's generic is the true random number generator - though it may only have to supply a fixed, fairly small, number of bits. They can be used to initialize a CPRNG that should be part of the crypto suite, chosen so that it's no weaker, under the attacks considered, than any other part of the suite that depends on it. 
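The counter-mode construction described just above is easy to sketch. The code below is an illustrative toy, not anyone's actual implementation; SHA-256 stands in for AES-256 only because Python's standard library ships no AES, but the shape is the same: one 256-bit truly random seed, expanded indefinitely by hashing seed plus counter.

```python
import hashlib

class CounterCPRNG:
    """Toy counter-mode CPRNG: a single 256-bit truly random seed is
    expanded by hashing seed || counter. SHA-256 here is a stand-in
    for AES-256-CTR, since the Python stdlib has no AES."""

    def __init__(self, seed: bytes):
        assert len(seed) == 32  # exactly 256 bits of true randomness
        self._seed = seed
        self._counter = 0

    def next_block(self) -> bytes:
        block = hashlib.sha256(
            self._seed + self._counter.to_bytes(16, "big")).digest()
        self._counter += 1
        return block

# A real seed would come from a TRNG; all-zeros here is for illustration only.
rng = CounterCPRNG(b"\x00" * 32)
stream = b"".join(rng.next_block() for _ in range(4))  # 128 bytes of output
```

The point being illustrated: predicting this generator's output means attacking the underlying primitive itself, which is why the CPRNG should be built from (and is only as strong as) the rest of the crypto suite.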
-- Jerry From leichter at lrw.com Fri Oct 24 23:37:50 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 24 Oct 2014 23:37:50 -0400 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <27A78A45-F99B-489A-B32F-C5483D4E3541@math.ntnu.no> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <544A3DC8.20100@azet.sk> <27A78A45-F99B-489A-B32F-C5483D4E3541@math.ntnu.no> Message-ID: On Oct 24, 2014, at 5:19 PM, Kristian Gjøsteen wrote: >> According to Joachim Strömbergson: >> >> https://www.ietf.org/mail-archive/web/tls/current/msg13824.html >> >> SPECK and SIMON have been found to be weak against differential >> cryptanalysis: >> >> https://eprint.iacr.org/2013/568.pdf >> >> https://eprint.iacr.org/2013/543.pdf > > I looked at these papers for two minutes, and as far as I can tell, they report attacks on reduced-round variants. Which is what you would expect. If these are designed with the same approach as Skipjack, they will have *exactly* enough rounds to block differential cryptanalysis and perhaps some other attacks. NSA seems to believe in designing to the edges of the envelope. (They also appear to have more sensitive techniques than any available to the public for determining exactly where those edges lie.) -- Jerry From waywardgeek at gmail.com Sat Oct 25 05:51:03 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Sat, 25 Oct 2014 05:51:03 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: On Fri, Oct 24, 2014 at 7:32 PM, David Leon Gil wrote: > On Fri, Oct 24, 2014 at 5:01 PM, Bill Cox wrote: > > On Fri, Oct 24, 2014 at 1:56 PM, Bear wrote: > >> On Fri, 2014-10-24 at 05:31 -0400, Bill Cox wrote: > >> > So, why do we need true random data at high speed so badly that Intel > >> > decided to build in a device requiring large capacitors and its own > >> > power regulator? The truth is, we don't need high speed. 
As many > >> > people have argued here, all any single system requires is 256 bits of > >> > true random data. That's all they *ever* need, so long as it remains > >> > secret (which is hard), and so long as a cryptographically secure PRNG > >> > (CPRNG) is used to generate all future cryptographically pseudo-random > >> > data (which is comparatively easy). > > The current provable-security bounds on recovering from state > compromise require anywhere from 2 KiB to 20 KiB of input entropy to > recover from "state compromise". See section 5.3 of > http://www.cs.nyu.edu/~dodis/ps/prematureNext.pdf > > So, perhaps some applications would want a fairly large amount of entropy. > Interesting paper. Here's how I would recover much faster. I write 512 bits containing over 400 bits of entropy in one call, as the minimum, with ioctl. I have to look at the kernel code to see how it works, but assuming: 1) The kernel sucks in all 512 bits at once, blocking all other users of /dev/random and /dev/urandom, and then performs a secure cryptographic one-way hash on its entire entropy pool. 2) The cryptographic hash is ideal in the sense that its output cannot be distinguished from true random, and cannot be reversed short of brute-force guessing all 2^n input possibilities. 3) No attacker can guess a state of the pool when no state has higher than a 1/2^256 probability. Under these assumptions, the pool recovers from a state compromise in one call. The pool is not full, but no state has a probability higher than about 1/2^400, so it does not matter. However, I just went and looked a bit at random.c. I would have to look a *lot* harder to feel confident I am reading it right, but at first glance, its mixing function, _mix_pool_bytes, does not satisfy my assumptions above. It does not appear to be a cryptographically secure hash function. It simply stirs data in the pool weakly, counting on lots of entropy data to make that OK. 
This seems insecure to me, but I suppose there are probably reasons for the Linux kernel to weakly mix the input entropy rather than performing a secure hash. If I were writing that code, I'd turn Blake2b into a sponge (similar to Lyra2), and would only mix in entropy once I'd collected at least 256 bits' worth. That way, the state becomes secure again on every update. Can I use a simple hack to ensure the Linux entropy is secure after every write to /dev/random? I am thinking of force-feeding it 4096 bits from the Keccak sponge, rather than just 512 bits like I do now. Is my reading of random.c's mixing accurate? Will this hack ensure that the entropy pool is securely refreshed? Given that they write the entropy pool to disk on shutdown, instant recovery from state compromise seems like an important goal. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From codesinchaos at gmail.com Sat Oct 25 07:33:41 2014 From: codesinchaos at gmail.com (CodesInChaos) Date: Sat, 25 Oct 2014 13:33:41 +0200 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: <544A3DC8.20100@azet.sk> References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <544A3DC8.20100@azet.sk> Message-ID: On Fri, Oct 24, 2014 at 1:53 PM, Fedor Brunner wrote: > According to Joachim Strömbergson: > > https://www.ietf.org/mail-archive/web/tls/current/msg13824.html > > SPECK and SIMON have been found to be weak against differential > cryptanalysis: I'm not sure if "weak against differential cryptanalysis" is an accurate summary of those papers. These are attacks against round-reduced versions of the ciphers, and every block cipher suffers from such attacks. The important question is how many rounds are broken by these attacks. In the case of SIMON/SPECK roughly half the rounds are broken. This isn't exactly a confidence-inspiring security margin, especially considering that these are the first analysis results. 
On the other hand, it seems hardly surprising that the security margin of lightweight primitives is lower than that of conservative designs like SHA-3. If you want to argue for the exclusion of these ciphers based on these cryptanalytic results, it'd be nice to compare this security margin against the margin of competing lightweight ciphers. The opinion of experienced cryptanalysts as to how likely it is that this analysis can be extended to more rounds would be nice as well, even if this is inherently subjective. From coruus at gmail.com Sat Oct 25 09:02:51 2014 From: coruus at gmail.com (David Leon Gil) Date: Sat, 25 Oct 2014 09:02:51 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: On Sat, Oct 25, 2014 at 5:51 AM, Bill Cox wrote: > On Fri, Oct 24, 2014 at 7:32 PM, David Leon Gil wrote: >> The current provable-security bounds on recovering from state >> compromise require anywhere from 2 KiB to 20 KiB of input entropy to >> recover from "state compromise". See section 5.3 of >> http://www.cs.nyu.edu/~dodis/ps/prematureNext.pdf >> >> So, perhaps some applications would want a fairly large amount of entropy. > However, I just went and looked a bit at random.c. I would have to look a > *lot* harder to feel confident I am reading it right, but at first glance, > its mixing function, _mix_pool_bytes, does not satisfy my assumptions above. > It does not appear to be a cryptographically secure hash function. It > simply stirs data in the pool weakly, counting on lots of entropy data to > make that OK. That's fully accurate. (In fact, an earlier paper by Dodis points out that /dev/random is broken at present: http://www.cs.nyu.edu/~dodis/ps/rng.pdf ) > This seems insecure to me, but I suppose there are probably reasons for the > Linux kernel to weakly mix the input entropy rather than performing a secure > hash. 
No good reasons; OS X (and some BSDs?), for example, uses Ferguson and Schneier's Fortuna RNG, which is cryptographically sensible. > If I were writing that code, I'd turn Blake2b into a sponge (similar > to Lyra2), and would only mix in entropy once I'd collected at least 256 > bits' worth. That way, the state becomes secure again on every update. That would be better than what the Linux kernel is doing... I believe that some folks are working on code to fix /dev/random, but it's a fair amount of work to write / get merged. > Can I use a simple hack to ensure the Linux entropy is secure after every > write to /dev/random? I am thinking of force-feeding it 4096 bits from the > Keccak sponge, rather than just 512 bits like I do now. Is my reading of > random.c's mixing accurate? Will this hack ensure that the entropy pool is > securely refreshed? Not sure. From jim.windle at gmail.com Sat Oct 25 13:58:15 2014 From: jim.windle at gmail.com (Jim Windle) Date: Sat, 25 Oct 2014 13:58:15 -0400 Subject: [Cryptography] Quantum Sneakernet Message-ID: Interesting proposal for a quantum sneakernet as an admittedly slow alternative to submarine cables, which won't support entanglement. Entangled photons are carried in specially modified cargo containers which are then moved via existing transportation infrastructure. The Technology Review piece has a link to the arxiv.org paper. http://www.technologyreview.com/view/532056/why-quantum-clippers-will-distribute-entanglement-across-the-oceans/ -- Jim Windle (646) 470-9657 Oh, and be sure to state your name so my cyborg can tell me who is calling! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgut001 at cs.auckland.ac.nz Sat Oct 25 14:11:52 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Sun, 26 Oct 2014 07:11:52 +1300 Subject: [Cryptography] Arduino Enigma simulator Message-ID: For those who want a workalike Enigma without shelling out a fortune for the real thing: https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ Peter. From ji at tla.org Sat Oct 25 17:18:49 2014 From: ji at tla.org (John Ioannidis) Date: Sat, 25 Oct 2014 17:18:49 -0400 Subject: [Cryptography] Quantum Sneakernet In-Reply-To: References: Message-ID: On Sat, Oct 25, 2014 at 1:58 PM, Jim Windle wrote: > Interesting proposal for a quantum sneakernet as an admittedly slow > alternative to submarine cables which won't support entanglement. > Entangled photons are carried in specially modified cargo containers which > are then moved via existing transportation infrastructure. Technology > Review piece has link to arxiv.org paper. > > > http://www.technologyreview.com/view/532056/why-quantum-clippers-will-distribute-entanglement-across-the-oceans/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+arXivblog+%28the+physics+arXiv+blog%29 > > The only usefulness of this so-called technology is to give another meaning to the phrase "traveling light". /ji -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bear at sonic.net Sat Oct 25 15:16:43 2014 From: bear at sonic.net (Bear) Date: Sat, 25 Oct 2014 12:16:43 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141025004020.3a024ec9@pc> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> Message-ID: <1414264603.10918.1.camel@sonic.net> On Sat, 2014-10-25 at 00:40 +0200, Hanno Böck wrote: > On Fri, 24 Oct 2014 11:02:51 -0700, > Bear wrote: > > On Fri, 2014-10-24 at 06:46 +0200, Stephan Neuhaus wrote: > > > On 2014-10-24 02:09, Tom Mitchell wrote: > > > > What "early" needs are there for entropy? > > > > > > Most SSH keys are generated on first-time boot. > > > > This is dumb. > > > > This is bad design. > > Do you have a smart alternative? What should these devices do? Pre-load > them with a key? (I don't particularly like that idea) Tell users they > need to generate a key on their Desktop for their new Internet of Things > light switch? In response to your immediate question: since four of the most popular current extensions to a light switch are a timer, a light sensor, a motion sensor and a microphone, it can just damn well pause the boot until it reads enough off the sensors to be ready to go. The install manual *is* allowed to say "It will function as a simple light switch until the flashing blue LED turns off after about one minute; after that, the extended functions are ready to use." But more to the point, why does a light switch need a full von Neumann architecture so complex that new code can ever run on it? 
Give it a ROM of executable code that's loaded at the factory and cannot be rewritten under any circumstances, some kilobytes of non-volatile configuration which comes with "reasonable" defaults and is mounted on a dedicated memory bus that can never be the target of an instruction fetch, and some volatile memory also on the dedicated memory bus, and what valuable light-switchy task could it NOT do that a viable attack surface COULD? If the code on the ROM is discovered to be flawed or someone finds an attack surface, it's product recall time. We're not talking about something terribly expensive that people can't replace here, nor about something so large it can't be mailed back and forth cheaply to get repaired or replaced. Seriously, justify this "Internet of Things That Can Be Targets". "Things" are things people don't use as general-purpose computers, so while they need writable configuration memory, that writable memory doesn't need to be executable. "Things" don't need to be reprogrammable as such for any reason relevant to the end user. If we're talking ubiquitous, we're talking simple and replaceable and cheap. If we're talking simple and replaceable and cheap, we can get much better security by making the executable memory completely non-rewritable than we can by any application of cryptography. Bear From bear at sonic.net Sat Oct 25 15:42:15 2014 From: bear at sonic.net (Bear) Date: Sat, 25 Oct 2014 12:42:15 -0700 Subject: [Cryptography] Arduino Enigma simulator In-Reply-To: References: Message-ID: <1414266135.10918.3.camel@sonic.net> On Sun, 2014-10-26 at 07:11 +1300, Peter Gutmann wrote: > For those who want a workalike Enigma without shelling out a fortune for the > real thing: > > https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ > > Peter. Cute. 
If there is sufficient interest from the community in paying some more "reasonable" rates for working mechanical replicas of these antiques, I have a tiny little machine shop adjoining my study and would be amused/interested in constructing a few. They'd still be expensive, you must understand; there are a lot of mechanical bits that I'd have to individually mill or cast. It would entail a lot of very tedious hand work. And I might be forced to use brass in a few places where the originals had steel due to limitations on the hardest metals my desktop CNC mill can handle. They wouldn't be as historically significant as the actual antiques of course; but they wouldn't be as expensive either. You'd be on your own if you wanted to add a parallel port or USB interface. Bear From tytso at mit.edu Sat Oct 25 15:53:56 2014 From: tytso at mit.edu (Theodore Ts'o) Date: Sat, 25 Oct 2014 15:53:56 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: <20141025195356.GA24403@thunk.org> On Sat, Oct 25, 2014 at 05:51:03AM -0400, Bill Cox wrote: > > I write 512 bits containing over 400 bits of entropy in one call, as the > minimum, with ioctl. I have to look at the kernel code to see how it > works, but assuming: If you have that much randomness, why do you need a cryptographic hash to do the mixing? Pretty much any mixing algorithm will do. Note that even if the randomness isn't evenly distributed across the 4096 bits of the input entropy pool, we do use a secure cryptographic hash to generate the output, so if you've added 256 bits worth of uncertainty in the pool, it doesn't really matter whether it is concentrated in one part of the pool or not. 
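The division of labor described just above — a cheap, non-cryptographic stir on the input path and a secure hash only at extraction — can be sketched with a toy pool. Everything below is a made-up illustration (it is not the real random.c; the stirring rule and API are invented), but it shows why it doesn't matter where in the pool the uncertainty sits.

```python
import hashlib

POOL_BYTES = 4096 // 8  # a 4096-bit pool, as in the discussion above

class ToyPool:
    def __init__(self):
        self.pool = bytearray(POOL_BYTES)
        self.idx = 0

    def mix_in(self, data: bytes) -> None:
        # Deliberately weak, non-cryptographic stirring: XOR bytes into
        # the pool at a moving offset. The entropy is preserved even
        # though it may end up concentrated in one corner of the pool.
        for b in data:
            self.pool[self.idx] ^= b
            self.idx = (self.idx + 1) % POOL_BYTES

    def extract(self) -> bytes:
        # The cryptographic step lives here: hash the *entire* pool, so
        # the position of the uncertainty within it is irrelevant.
        out = hashlib.sha256(bytes(self.pool)).digest()
        self.mix_in(out)  # fold the output back so the state moves on
        return out

pool = ToyPool()
pool.mix_in(b"interrupt and disk timings, etc.")
first = pool.extract()
```

Each extraction folds its own output back into the pool, so repeated reads differ even with no fresh input; a real kernel additionally tracks and credits entropy estimates, which this sketch omits entirely.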
Cheers, - Ted From dj at deadhat.com Sat Oct 25 16:46:39 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Sat, 25 Oct 2014 20:46:39 -0000 Subject: [Cryptography] Simon, Speck and ISO In-Reply-To: References: <65806f68b096fbc8d116d8f80613e9f5.squirrel@www.deadhat.com> <544A3DC8.20100@azet.sk> Message-ID: <5651925fc8505e78a1cd49febe753a66.squirrel@www.deadhat.com> > > If you want to argue for the exclusion of these ciphers based on these > cryptanalytic results, it'd be nice to compare this security margin > against the margin of competing lightweight ciphers. The opinion of > experienced cryptanalysts as to how likely it is that this analysis > can be extended to more rounds would be nice as well, even if this is > inherently subjective. All the published analysis makes them look pretty good on the security/compute-complexity tradeoff. I have nothing to show there is anything wrong with the algorithms. Someone at JTC1/SC7 corrected me. The spec to which it is proposed these be added already has Clefia and PRESENT, which are slower and bigger. I'm concerned with how it looks and concerned that no one else is jumping up to offer alternatives with a more transparent background. We certainly can't criticize anyone for submitting a proposal when no one else is. From dj at deadhat.com Sat Oct 25 16:58:29 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Sat, 25 Oct 2014 20:58:29 -0000 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: > > No good reasons; OS X (and some BSDs?), for example, uses Ferguson and Schneier's > Fortuna RNG, which is cryptographically sensible. > Since I'm sitting in an airport, I've got plenty of time to cogitate. We are criticizing Linux for not hashing on the way into the pool. You can hash into the pool, out of the pool or both, or do something completely different. Linux hashes on the way out. 
Why is this better or worse than hashing on the way in? I don't know. In terms of Linux architecture it makes sense. The data on the way in comes in as a function of what the machine provides. Putting a compute-heavy task on this path leads to an uncontrolled amount of effort being spent, regardless of whether or not /dev/[u]random is used. Hashing on the way out makes the effort spent a function of the rate at which things call /dev/[u]random; the effort goes unspent if nothing is reading. This is sensible: only invoke the cost if you invoke the feature. Other things are certainly better, like in the BSDs. I thought everyone who cared had their own version of random.c. I do, but I'm too lazy to patch the kernel each time I bring up a machine. From tytso at mit.edu Sat Oct 25 17:18:52 2014 From: tytso at mit.edu (Theodore Ts'o) Date: Sat, 25 Oct 2014 17:18:52 -0400 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141025004020.3a024ec9@pc> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> Message-ID: <20141025211852.GD24403@thunk.org> On Sat, Oct 25, 2014 at 12:40:20AM +0200, Hanno Böck wrote: > > > > > > Most SSH keys are generated on first-time boot. > > > > This is dumb. > > > > This is bad design. > > Do you have a smart alternative? What should these devices do? Pre-load > them with a key? (I don't particularly like that idea) Tell users they > need to generate a key on their Desktop for their new Internet of Things > light switch? You wait until the first time someone tries to connect to the ssh port, and generate the ssh key in a just-in-time fashion. > Basically most exploit-mitigation techniques (aslr, stack canaries) > these days require some kind of randomness. So the thing about aslr and stack canaries is that if they aren't perfectly random for the first boot, it isn't as catastrophic, especially if you end up rebooting shortly after the initial setup. 
But if you generate a bad SSH or SSL key, that tends to last for a much longer period of time. BTW, mixing in device personalization information (e.g., MAC addresses) is useful for preventing embarrassingly easy demonstrations that your system is insecure (because it prevents using the GCD to find common factors after scanning for all certs from various printers on the internet, for example). But it shouldn't be mistaken for truly fixing the problem. Cheers, - Ted From l at odewijk.nl Sat Oct 25 19:35:25 2014 From: l at odewijk.nl (Lodewijk andré de la porte) Date: Sun, 26 Oct 2014 01:35:25 +0200 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141025211852.GD24403@thunk.org> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <20141025211852.GD24403@thunk.org> Message-ID: 2014-10-25 23:18 GMT+02:00 Theodore Ts'o : > On Sat, Oct 25, 2014 at 12:40:20AM +0200, Hanno Böck wrote: > > > > > > > > Most SSH keys are generated on first-time boot. > > > > > > This is dumb. > > > > > > This is bad design. > > > > Do you have a smart alternative? What should these devices do? Pre-load > > them with a key? (I don't particularly like that idea) Tell users they > > need to generate a key on their Desktop for their new Internet of Things > > light switch? > > You wait until the first time someone tries to connect to the ssh > port, and generate the ssh key in a just-in-time fashion. > How much time is considered not "first time boot"? I mean, init runs, and that's the real first-time-boot thingy. Everything after is already started with delay (and usually sequentially... talk about bad design...). How much delay is required? Why not delay first-time generation by twice that? Doesn't /dev/random block until sufficient entropy is delivered? If not, that's asking for trouble. 
Maybe I'm missing something, but isn't this discussion at once really involved (sshd) and really generic (entropy collection best practice)? I rather like thinking and solving problems like these, but I'm not even sure which is really the matter here. I thought it was about "Lol get randomness here". Boottime resource starvation is inevitable, but not the application layer's fault. So let's just focus on making /dev/*** work unbreakably, it fixes everything. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bear at sonic.net Sat Oct 25 20:08:54 2014 From: bear at sonic.net (Bear) Date: Sat, 25 Oct 2014 17:08:54 -0700 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> Message-ID: <1414282134.14425.1.camel@sonic.net> On Fri, 2014-10-24 at 23:24 -0400, Jerry Leichter wrote: > > While 256 bits plus a > > CPRNG is enough to prevent known and practical means of predicting > > the stream of numbers created, it does not constitute proof that > > a stream of outputs of that length *CANNOT* be predicted. It > > restricts the uncorrelated sequence length to be provably no more > > than 256 bits. Actually, it forces the uncorrelated sequence > > length to be provably less than 256 bits assuming a CPRNG.... > A CPRNG that is at least as "hard" as the algorithms with which it's > used cannot provide a point of attack. For example, if you rely on > AES-256 for your cryptography, and your protocols are secure under the > assumption (as is common these days) that AES-256 is indistinguishable > from a random sequence, then generating your random numbers using > AES-256 in counter mode with a true random key exposes you to no attack > that wasn't already present. You're right that a CPRNG that is "hard" doesn't provide a point of attack. But, like most of our ciphers, we don't have any real mathematical proof that a particular CPRNG is in fact "hard". 
All we really know is that we haven't found the soft spots yet. A provably long uncorrelated sequence length is the same kind of "hard" guarantee as a one time pad -- although, like a one-time pad, it applies only to sequences shorter than that length. I think that PRNGs should be able to prove a minimum uncorrelated sequence length (hence require RNG state) that is longer than sequences (specifically keys) whose unpredictability we rely on for the security of our other components. Bear From bear at sonic.net Sat Oct 25 20:21:03 2014 From: bear at sonic.net (Bear) Date: Sat, 25 Oct 2014 17:21:03 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: <20141025211852.GD24403@thunk.org> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <20141025211852.GD24403@thunk.org> Message-ID: <1414282863.14425.3.camel@sonic.net> On Sat, 2014-10-25 at 17:18 -0400, Theodore Ts'o wrote: > On Sat, Oct 25, 2014 at 12:40:20AM +0200, Hanno B?ck wrote: > > > > > > > > Most SSH keys are generated on first-time boot. > > > > > > This is dumb. > > > > > > This is bad design. > > > > Do you have a smart alternative? What should these devices do? Pre-load > > them with a key? (I don't particularly like that idea) Tell users they > > need to generate a key on their Desktop for their new Internet of Things > > light switch? > > You wait until the first time someone tries to connect to the ssh > port, and generate the ssh key in a just-in-time fashion. > > > Basically most exploit-mitigation techniques (aslr, stack canaries) > > these days require some kind of randomness. > > So the thing about aslr and stack canaries is that if they aren't > perfectly random for the first boot, it isn't as catastrophic, Also, if you don't connect to the network before you're finished booting up, you can't be attacked over the network until you're finished booting up. 
And if you're not under attack yet, such things as stack canaries have a bit less urgency.... There is such a thing as booting an operating system before the network is connected! Bear From dave at horsfall.org Sat Oct 25 21:56:52 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 26 Oct 2014 12:56:52 +1100 (EST) Subject: [Cryptography] Arduino Enigma simulator In-Reply-To: References: Message-ID: On Sun, 26 Oct 2014, Peter Gutmann wrote: > For those who want a workalike Enigma without shelling out a fortune for > the real thing: > > https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ Or try the free emulator for the Mac; it even has the Steckerbrett. http://www.macupdate.com/app/mac/25427/enigma-simulator -- Dave Horsfall (VK2KFU) "Bliss is a MacBook with a FreeBSD server." http://www.horsfall.org/spam.html (and check the home page whilst you're there) From ji at tla.org Sat Oct 25 23:20:23 2014 From: ji at tla.org (John Ioannidis) Date: Sat, 25 Oct 2014 23:20:23 -0400 Subject: [Cryptography] Arduino Enigma simulator In-Reply-To: References: Message-ID: On Sat, Oct 25, 2014 at 2:11 PM, Peter Gutmann wrote: > For those who want a workalike Enigma without shelling out a fortune for > the > real thing: > > > https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ > > Peter. > I bought and built this a while back: http://www.cryptomuseum.com/kits/. It's PIC-based, rather than AVR-based. /ji PS: A certain member of this list for whom I bought one in exchange for his building me a proper wooden box to put it in has yet to deliver on his promise :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phill at hallambaker.com Sun Oct 26 08:36:47 2014 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Sun, 26 Oct 2014 08:36:47 -0400 Subject: [Cryptography] Arduino Enigma simulator In-Reply-To: References: Message-ID: On Sat, Oct 25, 2014 at 2:11 PM, Peter Gutmann wrote: > For those who want a workalike Enigma without shelling out a fortune for > the > real thing: > > > https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ I have one of these; it has an emulation of the plugboard etc. http://www.stgeotronics.com/ I have not got round to building a wood box yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From coruus at gmail.com Sat Oct 25 22:49:53 2014 From: coruus at gmail.com (David Leon Gil) Date: Sat, 25 Oct 2014 22:49:53 -0400 Subject: [Cryptography] Quantum Sneakernet In-Reply-To: References: Message-ID: On Sat, Oct 25, 2014 at 5:18 PM, John Ioannidis wrote: > On Sat, Oct 25, 2014 at 1:58 PM, Jim Windle wrote: >> Interesting proposal for a quantum sneakernet as an admittedly slow >> alternative to submarine cables which won't support entanglement. High-quality transoceanic cables would be expensive (unimaginably expensive today); but there is no physical problem. Just a minor engineering problem. ;) And, contra the paper, there are people working on distributing qubits... > The only usefulness of this so-called technology is to give another meaning > to the phrase "traveling light". From bear at sonic.net Sun Oct 26 02:40:46 2014 From: bear at sonic.net (Bear) Date: Sat, 25 Oct 2014 23:40:46 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <20141025211852.GD24403@thunk.org> Message-ID: <1414305646.19989.1.camel@sonic.net> On Sun, 2014-10-26 at 01:35 +0200, Lodewijk André
de la porte wrote: > Boottime resource starvation is inevitable, but not the application > layer's fault. So let's just focus on making /dev/*** work > unbreakably, it fixes everything. Works for me. If somebody wants entropy, he has to wait until there is some. If that delays bootup, it's because bootup is wrong. Bear From outer at interlog.com Sat Oct 25 23:13:15 2014 From: outer at interlog.com (Richard Outerbridge) Date: Sat, 25 Oct 2014 23:13:15 -0400 Subject: [Cryptography] Arduino Enigma simulator In-Reply-To: References: Message-ID: <1147435F-0609-407F-988D-4E203D027669@interlog.com> On 2014-10-25 (298), at 21:56:52, Dave Horsfall wrote: > > On Sun, 26 Oct 2014, Peter Gutmann wrote: > >> For those who want a workalike Enigma without shelling out a fortune for >> the real thing: >> >> https://www.tindie.com/products/ArduinoEnigma/arduino-enigma-simulator-simulates-enigma-i-m3-and-m4-machines/ > > Or try the free emulator for the Mac; it even has the Steckerbrett. > > http://www.macupdate.com/app/mac/25427/enigma-simulator Doesn't do M4 :( There's an iOS App for that, however: see ( http://www.hispalix.com ) Mninigma. There used to be another one, MREnigma, but the developer told me he got taken down by some spurious trademark litigation threat ("Enigma": go figure) :( __outer From leichter at lrw.com Sat Oct 25 23:32:13 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sat, 25 Oct 2014 23:32:13 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <1414282134.14425.1.camel@sonic.net> References: <1414173395.31285.1.camel@sonic.net> <1414282134.14425.1.camel@sonic.net> Message-ID: On Oct 25, 2014, at 8:08 PM, Bear wrote: >> A CPRNG that is at least as "hard" as the algorithms with which it's >> used cannot provide a point of attack.
For example, if you rely on >> AES-256 for your cryptography, and your protocols are secure under the >> assumption (as is common these days) that AES-256 is indistinguishable >> from a random sequence, then generating your random numbers using >> AES-256 in counter mode with a true random key exposes you to no attack >> that wasn't already present. > You're right that a CPRNG that is "hard" doesn't provide a point of > attack. But, like most of our ciphers, we don't have any real > mathematical proof that a particular CPRNG is in fact "hard". All > we really know is that we haven't found the soft spots yet. It makes no difference whether AES-256 is "hard". What matters is that "it's as hard as itself". If there's an attack against it, it applies equally to the CPRNG and to the encryption. This argument only works when you can show that any attack against the generator also gives you one against the encryptor. For combinatoric algorithms like AES-256, this is unlikely to be something you can argue convincingly unless the same algorithm is used in both places. (For algorithms with more mathematical structure, the story *might* be different. In particular, something *like* Dual EC DRBG - but done in a way that ensured no one had a back door - *might* be provably as hard as some EC cryptographic algorithm.) > A provably long uncorrelated sequence length is the same kind of > "hard" guarantee as a one time pad -- although, like a one-time pad, > it applies only to sequences shorter than that length. I don't know what this means. Any *specific* property - like a long uncorrelated sequence length - is just a special instance of a way of distinguishing the output of some algorithm from a true random sequence.
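The counter-mode construction under discussion can be sketched in a few lines. One substitution to note: Python's standard library has no AES, so SHA-256 in counter mode stands in for AES-256 here; the thread's "as hard as itself" argument applies specifically when the generator reuses the cipher actually deployed, so treat this only as a sketch of the shape of the construction.

```python
import hashlib

class CounterPRNG:
    """Counter-mode CPRNG sketch: expand a 256-bit truly random seed
    by hashing (seed || counter). SHA-256 stands in for AES-256-CTR
    purely because the stdlib has no AES."""

    def __init__(self, seed: bytes):
        assert len(seed) == 32       # 256-bit seed from a real entropy source
        self.seed = seed
        self.counter = 0

    def read(self, nbytes: int) -> bytes:
        out = bytearray()
        while len(out) < nbytes:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(16, "big")).digest()
            out += block
            self.counter += 1        # never reuse a counter value
        return bytes(out[:nbytes])

prng = CounterPRNG(b"\x07" * 32)     # fixed demo seed; use real entropy
a = prng.read(48)
b = prng.read(48)                    # counter advances, so a != b
```

The state-discovery caveat raised elsewhere in the thread applies as written: anyone who learns `seed` and `counter` can reproduce the whole stream.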
-- Jerry From iang at iang.org Sun Oct 26 05:45:50 2014 From: iang at iang.org (ianG) Date: Sun, 26 Oct 2014 09:45:50 +0000 Subject: [Cryptography] In search of random numbers In-Reply-To: <1414264603.10918.1.camel@sonic.net> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <1414264603.10918.1.camel@sonic.net> Message-ID: <544CC2CE.8030102@iang.org> On 25/10/2014 20:16 pm, Bear wrote: > On Sat, 2014-10-25 at 00:40 +0200, Hanno Böck wrote: >> Am Fri, 24 Oct 2014 11:02:51 -0700 >> schrieb Bear : >> >>> On Fri, 2014-10-24 at 06:46 +0200, Stephan Neuhaus wrote: >>>> On 2014-10-24 02:09, Tom Mitchell wrote: >>>>> What "early" needs are there for entropy? >>>> >>>> Most SSH keys are generated on first-time boot. >>> >>> This is dumb. >>> >>> This is bad design. >> >> Do you have a smart alternative? What should these devices do? Pre-load >> them with a key? (I don't particularly like that idea) Tell users they >> need to generate a key on their Desktop for their new Internet of Things >> light switch? > > In response to your immediate question: since four of the > most popular current extensions to a light switch are a timer, > a light sensor, a motion sensor and a microphone, it can just > damn well pause the boot until it reads enough off the sensors > to be ready to go. The install manual *is* allowed to say "It > will function as a simple light switch until the flashing blue > LED turns off after about one minute; after that, the extended > functions are ready to use." Actually this is hard. To do it properly requires the measurement of entropy. This is a mess. It's a hard problem to get right. Sure there are some people who think they've got the handle on it, but they aren't enough for this task. Hence, some who are pragmatic would say, no, never block. Just do the best you can, supply what you've got and take on the knocks.
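As a first-order illustration of the "measurement of entropy" problem called hard above: one standard starting point is to bound min-entropy per sample by the frequency of the most common value, in the spirit of (but far simpler than) the most-common-value estimator in NIST SP 800-90B. The function below is that bound only; real estimators must also account for correlations between samples, which this ignores entirely.

```python
import math
from collections import Counter

def min_entropy_per_sample(samples):
    """Upper-bound-style min-entropy estimate from the most common
    value's observed frequency: H_min = -log2(p_max)."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A 4-bit source that really is uniform scores ~4 bits per sample...
uniform = [i % 16 for i in range(1600)]
# ...while a heavily skewed source scores far less, however "random"
# its transcript may look to the eye.
skewed = [0] * 1200 + [1] * 400
```

Note the asymmetry the thread keeps circling: a low score proves you have a problem, but a high score never proves you don't, which is why "never block, do the best you can" remains a defensible engineering position.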
Sucks if you want your light switch to have an SSH key... What might be easier is a simple 1st order approximation such as counting the number of light switch hits (and using the difference as entropy). In the manual you can have a line that says "SSH key will not be generated until lightswitch hit 10 times..." > But more to the point, why does a light switch need a full > Von Neumann architecture so complex that new code can ever > run on it? > > Give it a ROM of executable code that's loaded at the factory > and cannot be rewritten under any circumstances, some kilobytes > of non-volatile configuration which comes with "reasonable" > defaults and is mounted on a dedicated memory bus that can > never be the target of an instruction fetch, and some volatile > memory also on the dedicated memory bus, and what valuable > light-switchy task could it NOT do that a viable attack surface > COULD? That part I agree with, but the challenge is in getting the IoT people to do it. > If the code on the ROM is discovered to be flawed or someone > finds an attack surface, it's product recall time. You only get product recall when it is likely to kill the user. Bad as the randomness issue appears to us, I'm not sure we're there yet. > We're not > talking about something terribly expensive that people can't > replace here, nor about something so large it can't be mailed > back and forth cheaply to get repaired or replaced. Huh? Light switches require an electrician. Problem part is sub $1, install is o($100). A whitegoods replacement part is a callout fee, even the lightbulbs in a fridge will challenge some people. iang From dan at geer.org Sun Oct 26 09:22:03 2014 From: dan at geer.org (dan at geer.org) Date: Sun, 26 Oct 2014 09:22:03 -0400 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: Your message of "Fri, 24 Oct 2014 10:09:08 -0700." 
Message-ID: <20141026132203.AAE94228342@palinka.tinho.net> > Here's Balasubramaniyan's PhD thesis describing the Pindr0p technology: > > https://smartech.gatech.edu/bitstream/handle/1853/44920/balasubramaniyan_vijay_a_201108_phd.pdf Along similar lines, small noise in image acquisition is now well enough understood and discernible to say "This camera did take that picture" as it does to say "This rifle did fire that bullet," which extends to "These two pictures/bullets came from the same camera/rifle." --dan From pgut001 at cs.auckland.ac.nz Sun Oct 26 16:57:13 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Mon, 27 Oct 2014 09:57:13 +1300 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: <20141026132203.AAE94228342@palinka.tinho.net> Message-ID: dan at geer.org writes: >Along similar lines, small noise in image acquisition is now well enough >understood and discernible to say "This camera did take that picture" as it >does to say "This rifle did fire that bullet," which extends to "These two >pictures/bullets came from the same camera/rifle." There is, however, an ongoing battle between the device manufacturers and people who use these artefacts, since the manufacturers see them as flaws and try and eliminate them once they're pointed out. This makes it really annoying for people who use them for image-source authentication purposes. Peter. From sandyinchina at gmail.com Sun Oct 26 18:32:37 2014 From: sandyinchina at gmail.com (Sandy Harris) Date: Sun, 26 Oct 2014 18:32:37 -0400 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: Bill Cox wrote: > The "right TRNG architecture" looks like this: > > auditable cheap low speed TRNG -> auditable high speed CPRNG -> happy > user That is one good design, but far from the only one. One alternative is a well-designed high-speed TRNG, such as Turbid. 
fast process with provable minimum entropy -> auditable compressor Given some fairly mild assumptions about properties of the hash, this can provably get within epsilon of perfectly random output. Also, it is stateless, so it is completely immune to the state discovery attacks which are a threat to CPRNGs. This solves the problem, short of extremes like failure or saturation of the hardware part (sound card in Turbid). Add auditable checks for those conditions and there you go. It looks to me like Intel or others with on-chip TRNGs could reach the requirements of this model without excessive effort, at least given an assumption that the hardware actually implements its spec. Dealing with the possibility of subversion that makes the chip different from the spec is a separate problem that looks harder. Another architecture that is correct is the type of design used in various random(4) devices. several sources -> pool -> cryptographic hash This requires stronger assumptions about the hash than the Turbid-ish design does and it fails (at least short-term) if the enemy learns pool contents. All operations on the pool should be pool_word ^= input or += input, never pool_word = input, so that bad inputs cannot reduce pool entropy. Given that plus reasonable assumptions about the hash and that at least one source produces entropy unknown to an attacker, it is easy to show that this must recover from any state compromise attack eventually. From leichter at lrw.com Sun Oct 26 18:54:54 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 26 Oct 2014 18:54:54 -0400 Subject: [Cryptography] Best internet crypto clock: hmmmmm... 
In-Reply-To: References: Message-ID: On Oct 26, 2014, at 4:57 PM, Peter Gutmann wrote: >> Along similar lines, small noise in image acquisition is now well enough >> understood and discernible to say "This camera did take that picture" as it >> does to say "This rifle did fire that bullet," which extends to "These two >> pictures/bullets came from the same camera/rifle." > > There is, however, an ongoing battle between the device manufacturers and > people who use these artefacts, since the manufacturers see them as flaws and > try and eliminate them once they're pointed out. This makes it really > annoying for people who use them for image-source authentication purposes. Quite a few years ago, I argued that it should be possible to identify laser printers by small variations in toner placement. The argument the other way was that manufacturing tolerances would make this impossible. Nothing new here. Manufacturing tolerances are reduced down to the point where they produce artifacts relevant for the use at hand. For a laser printer, that means visual effects noticeable to the human eye at the closest distance a paper page is likely to be held in normal usage. For a camera, it means visual effects noticed at the largest print sizes viewed at their appropriate ranges. All of these things have fundamental limits set by human sensory capabilities. Most of our digital technologies are near those limits - most obviously in high-resolution LCD displays (what Apple calls "Retina" displays). Sure, under some circumstances, some well-trained observers can easily spot the remaining variations. But it's getting harder every day, and soon only the "golden ears" (and their analogues in different spheres) will even claim to be able to tell, and they'll consistently fail careful tests. Once you get to that point, there's no reason to go further in controlling manufacturing processes and such. 
(Oh, some will for advertising points, but it's a very expensive business to wring out minor variations, so few will try.) And yet it's easy to *measure* details to orders of magnitude finer than you can *control* them. We may yet be in the period where images are getting more controlled fast enough to annoy the authenticators - but that period will end soon. -- Jerry From sandyinchina at gmail.com Sun Oct 26 20:28:13 2014 From: sandyinchina at gmail.com (Sandy Harris) Date: Sun, 26 Oct 2014 20:28:13 -0400 Subject: [Cryptography] Auditable logs? Message-ID: Various computer-mediated activities may end up in court for a range of reasons and in many cases log files will be used as evidence. However for most log file formats, deleting a few lines or adding a few bogus ones is trivial. Even forging an entire file or large chunk thereof is not impossible. Lawyers for one side or the other seem quite likely to attack the credibility of log files and/or of the sys admin who provides them. In at least some cases, proof "beyond reasonable doubt" is required and that is going to be very difficult if the lawyers trying to create some doubt are good. What sort of crypto mechanisms might help here? I can see various applications of digital signatures and timestamps that might help, but nothing close to a full solution. From leichter at lrw.com Sun Oct 26 20:54:29 2014 From: leichter at lrw.com (Jerry Leichter) Date: Sun, 26 Oct 2014 20:54:29 -0400 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: <05DA5D32-23C5-4087-9128-FED2B3FF4E93@lrw.com> On Oct 26, 2014, at 6:32 PM, Sandy Harris wrote: > Another architecture that is correct is the type of design > used in various random(4) devices. > > several sources -> pool -> cryptographic hash > > This requires stronger assumptions about the hash than the > Turbid-ish design does and it fails (at least short-term) if the > enemy learns pool contents.
> > All operations on the pool should be pool_word ^= input or > += input, never pool_word = input, so that bad inputs cannot > reduce pool entropy. Given that plus reasonable assumptions > about the hash and that at least one source produces entropy > unknown to an attacker, it is easy to show that this must > recover from any state compromise attack eventually. As has been mentioned here recently - and discussed in various papers - this last is false if the generator is forced to produce output while it's trying to recover. If outputs are produced at a rate at least equal to the rate at which new entropy is fed into the pool, and that feed rate is low enough, the generator may never recover. The work-around with the design as given is to block long enough to build up the necessary entropy. This may be difficult if you have only a very conservative estimate of entropy to work from - you may have to block for a while. -- Jerry From hbaker1 at pipeline.com Sun Oct 26 21:21:11 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Sun, 26 Oct 2014 18:21:11 -0700 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: References: Message-ID: At 03:54 PM 10/26/2014, Jerry Leichter wrote: >Quite a few years ago, I argued that it should be possible to identify laser printers by small variations in toner placement. The argument the other way was that manufacturing tolerances would make this impossible. Nothing new here. Didn't the Secret Service already fingerprint all color laser printers so that they couldn't be used to print currency ? From natanael.l at gmail.com Sun Oct 26 21:22:38 2014 From: natanael.l at gmail.com (Natanael) Date: Mon, 27 Oct 2014 02:22:38 +0100 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: Den 27 okt 2014 01:59 skrev "Sandy Harris" : > > Various computer-mediated activities may end up in court for a range > of reasons and in many cases log files will be used as evidence.
> However for most log file formats, deleting a few lines or adding a > few bogus ones is trivial. Even forging an entire file or large chunk > thereof is not impossible. > > Lawyers for one side or the other seem quite likely to attack the > credibility of log files and/or of the sys admin who provides them. In > at least some cases, proof "beyond reasonable doubt" is required and > that is going to be very difficult if the lawyers trying to create > some doubt are good. > > What sort of crypto mechanisms might help here? I can see various > applications of digital signatures and timestamps that might help, but > noting close to a full solution. Look at the conversation about timestamping images. This is essentially the exact same thing but for text. You're capturing a state in time of something that needs to be recorded accurately. Hash the data, timestamp it by publishing it widely and/or hash chaining it (git, Bitcoin blockchain, as well as various online trusted timestamping services etc). To protect the generation of the logs and their authenticity, you'll need a trusted hardware platform, like with a TPM. Something which can enforce what software has access to what. Something which the admin can't override. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbaker1 at pipeline.com Sun Oct 26 21:26:13 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Sun, 26 Oct 2014 18:26:13 -0700 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: At 05:28 PM 10/26/2014, Sandy Harris wrote: >Various computer-mediated activities may end up in court for a range >of reasons and in many cases log files will be used as evidence. >However for most log file formats, deleting a few lines or adding a >few bogus ones is trivial. Even forging an entire file or large chunk >thereof is not impossible.
> >Lawyers for one side or the other seem quite likely to attack the >credibility of log files and/or of the sys admin who provides them. In >at least some cases, proof "beyond reasonable doubt" is required and >that is going to be very difficult if the lawyers trying to create >some doubt are good. > >What sort of crypto mechanisms might help here? I can see various >applications of digital signatures and timestamps that might help, but >noting close to a full solution. Auditability is one of the goals of the "crypto clock" thread. Unfortunately, we can't even guarantee that a log entry was made at the time that the log claims it was made. So far, it would seem that incorporating your log (or a least a hash of it) into the Bitcoin blockchain might seem to be the best bet for a pretty decent guarantee. It might be worth spending the minimum auditable Bitcoin amount to incorporate a log into the Bitcoin block chain. From matt8128 at gmail.com Sun Oct 26 22:01:24 2014 From: matt8128 at gmail.com (Matt Crawford) Date: Sun, 26 Oct 2014 21:01:24 -0500 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> > Den 27 okt 2014 01:59 skrev "Sandy Harris" : > > > > Various computer-mediated activities may end up in court for a range > > of reasons and in many cases log files will be used as evidence. > > [...] On Oct 26, 2014, at 8:22 PM, Natanael wrote: > Hash the data, timestamp it by publishing it widely and/or hash chaining it (git, Bitcoin blockchain, as well as various online trusted timestamping services etc). > > To protect the generation of the logs and this authencity, you'll need a trusted hardware platform, like with a TPM. Something which can enforce what software has access to what. Step right up to be the first prosecutor or expert witness to try to convince a jury. 
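For reference, the hash-chaining scheme being discussed above can be sketched in a few lines: each entry's digest commits to everything before it, so deleting or altering any line breaks every later link. Publishing the latest digest somewhere hard to rewrite (a widely witnessed channel, or the Bitcoin blockchain as suggested) is the separate anchoring step, not shown here.

```python
import hashlib

GENESIS = b"\x00" * 32  # arbitrary fixed starting value for the chain

def chain(entries, genesis=GENESIS):
    """Return [(entry, hex_digest)] where each digest covers the
    previous digest plus the current entry."""
    h = genesis
    chained = []
    for entry in entries:
        h = hashlib.sha256(h + entry.encode()).digest()
        chained.append((entry, h.hex()))
    return chained

def verify(chained, genesis=GENESIS):
    """Recompute the chain and check every recorded digest."""
    h = genesis
    for entry, digest in chained:
        h = hashlib.sha256(h + entry.encode()).digest()
        if h.hex() != digest:
            return False
    return True

log = chain(["alice logged in", "config changed", "alice logged out"])
tampered = list(log)
tampered[1] = ("config unchanged", log[1][1])  # forged middle entry
```

This makes tampering detectable, not impossible: whoever generates the entries can still lie before they enter the chain, which is the gap the TPM suggestion above is meant to address.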
From coruus at gmail.com Sun Oct 26 22:10:53 2014 From: coruus at gmail.com (David Leon Gil) Date: Sun, 26 Oct 2014 22:10:53 -0400 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: On Fri, Oct 24, 2014 at 5:31 AM, Bill Cox wrote: > However, I happen to be something of a speed freak. Intel's RDRAND > instruction is appealing to me. The architecture is the fastest TRNG I have > seen. So, why not use it? Here's why: > > - It is probably back doored > - It is not auditable > - Critical portions of its design remain secret (such as whitening and how > to disable it) I thought that the whitening design had been published? They're using CTR-DRBG instantiated with AES-128. (This, in itself, is probably all the NSA could want; the security strength of that construction is rather low.) Here's the paper: http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf > That said, this TRNG has so many drawbacks that I predict no one other than > Intel will ever use it. First, it requires a couple of large-ish on-chip > capacitors to hold the control voltages that compensate for factors that > cause the latch to prefer to power up one way or the other. Without > measuring the 0/1 bias and dynamically compensating for it, this circuit > simply does not work. This by itself makes Intel's TRNG both large and > complex. Worse, it is *massively* sensitive to nearby signals. It is more > sensitive to external signals than any other architecture I know of. No > other TRNG relies on amplifying such a small noise signal, and no other > architecture can be PWNed with as little injected energy. This is literally > the most attacker signal sensitive TRNG ever designed. Thanks for the terrific summary of potential side-channel attacks. > A TRNG simply does not need to be fast. A Lava Lamp generates entropy fast > enough for almost any application, so long as we use it to seed a high > speed CPRNG firehose.
Anyone selling you a high speed TRNG for a lot of > money, based on quantum voodoo or whatever, is ripping you off. Agreed; the quantum RNGs in particular make me laugh. (I'd wonder who would be so stupid as to buy one, but even Google has bought into the similarly craptastic D-wave nonsense.) > Due to Intel's inexplicable reluctance to make their device auditable, while > relying on what is probably the hardest TRNG architecture to get right, I > have to rate RDRAND as snake-oil for use in cryptography. They need to publish layout information so that third-parties can easily(-ish) verify it. Information on the "eight different operational modes" supported by the RNG would be nice too. -dlg From dave at horsfall.org Sun Oct 26 23:42:39 2014 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 27 Oct 2014 14:42:39 +1100 (EST) Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: References: Message-ID: On Sun, 26 Oct 2014, Henry Baker wrote: > Didn't the Secret Service already fingerprint all color laser printers > so that they couldn't be used to print currency ? The version I'd heard was that should they recognise (American?) currency then they wouldn't print at all, or was that copiers? Of course, colour printers were expensive in those days, and now they're practically given away (and money is made on the consumables). -- Dave Horsfall (VK2KFU) "Bliss is a MacBook with a FreeBSD server." http://www.horsfall.org/spam.html (and check the home page whilst you're there) From smueller at chronox.de Sun Oct 26 23:44:13 2014 From: smueller at chronox.de (Stephan Mueller) Date: Mon, 27 Oct 2014 04:44:13 +0100 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: <10883289.vQvXjr6nWG@tachyon.chronox.de> Am Sonntag, 26. Oktober 2014, 20:28:13 schrieb Sandy Harris: Hi Sandy, > Various computer-mediated activities may end up in court for a range > of reasons and in many cases log files will be used as evidence. 
> However for most log file formats, deleting a few lines or adding a > few bogus ones is trivial. Even forging an entire file or large chunk > thereof is not impossible. > > Lawyers for one side or the other seem quite likely to attack the > credibility of log files and/or of the sys admin who provides them. In > at least some cases, proof "beyond reasonable doubt" is required and > that is going to be very difficult if the lawyers trying to create > some doubt are good. > > What sort of crypto mechanisms might help here? I can see various > applications of digital signatures and timestamps that might help, but > nothing close to a full solution. What about using git as a log backend? Logically it is a chronological tracker based on a good cryptographic hash. -- Ciao Stephan From smueller at chronox.de Sun Oct 26 23:47:42 2014 From: smueller at chronox.de (Stephan Mueller) Date: Mon, 27 Oct 2014 04:47:42 +0100 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: <3395234.a0jWIekD88@tachyon.chronox.de> On Friday, 24 October 2014 at 05:31:51, Bill Cox wrote: Hi Bill, > The "right TRNG architecture" looks like this: > > auditable cheap low speed TRNG -> auditable high speed CPRNG -> happy > user > > Respectable TRNGs like the new Cryptech Tech TRNG are switching to this > architecture. If you use *any* secure TRNG to feed /dev/random, regardless > of its speed, and then read your cryptographic key data from /dev/urandom, > then you are already using this model. > > However, I happen to be something of a speed freak. Intel's RDRAND > instruction is appealing to me. The architecture is the fastest TRNG I > have seen. So, why not use it? Here's why: > > - It is probably back doored Another one: it is designed to cause a VM exit trap. I have a 10-line patch against KVM demonstrating this "nice" feature.
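For readers who want to see the two-stage model quoted above in code, here is a minimal sketch (an editor's illustration, not Intel's CTR-DRBG nor Cryptech's construction): a slow, auditable entropy source seeds a fast deterministic stage, here SHA-256 in counter mode. The class name and all structure are assumptions for illustration only.

```python
import hashlib
import os

class StretchedRNG:
    """Fast CSPRNG periodically reseeded from a slow, auditable entropy source."""

    def __init__(self, slow_trng_read):
        # slow_trng_read: callable taking a byte count, returning raw entropy.
        self.slow_trng_read = slow_trng_read
        self.counter = 0
        self.reseed()

    def reseed(self):
        # Quantized reseeding: mix in a full 256-bit block at once.
        seed = self.slow_trng_read(32)
        self.key = hashlib.sha256(b"reseed" + seed).digest()

    def read(self, n):
        # SHA-256 in counter mode as an illustrative fast stage.
        out = b""
        while len(out) < n:
            block = hashlib.sha256(self.key + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
            out += block
        return out[:n]

rng = StretchedRNG(os.urandom)   # os.urandom stands in for a slow hardware TRNG
seed_material = rng.read(64)     # fast: one SHA-256 call per 32 bytes
```

The slow stage only has to keep up with reseeding, which is the point of the architecture: the TRNG can be cheap and auditable while throughput comes from the deterministic stage.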
-- Ciao Stephan From mitch at niftyegg.com Mon Oct 27 05:45:38 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 27 Oct 2014 02:45:38 -0700 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: On Sun, Oct 26, 2014 at 5:28 PM, Sandy Harris wrote: > Various computer-mediated activities may end up in court for a range > of reasons and in many cases log files will be used as evidence. > However for most log file formats, deleting a few lines or adding a > few bogus ones is trivial. Even forging an entire file or large chunk > thereof is not impossible. > There are risks here, but crypto hashes of the source tree used to build a software product can be published without disclosing the source and any trade secrets it might contain. One risk is the difficulty of preserving the entire build system and the entire process involved. After a couple years the hardware is unlikely to be available. A restored OS snapshot reopens security issues patched between then and now. Some tools, like an EMC cloud resource, are not under the control (active and full audit) of customers. However, make rules could systematically generate strong hashes of each file, collect them, sort and compress the list, and generate a hash of the result... Generated binary bits are harder to reproduce because some content is date and time sensitive... Make and make clean rules can reach back a long way in time when deciding to generate an object file. The object file set might span multiple compiler releases. I have yet to see a makefile that triggers a clean when the compiler changes. I have seen engineers do it... Ada and some milspec processes try to cover this -- Modula 4? tried as well. Time is relentless... large projects take time. Good question. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed...
URL: From leichter at lrw.com Mon Oct 27 06:17:59 2014 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 27 Oct 2014 06:17:59 -0400 Subject: [Cryptography] Auditable logs? In-Reply-To: <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> References: <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> Message-ID: On Oct 26, 2014, at 10:01 PM, Matt Crawford wrote: >> Hash the data, timestamp it by publishing it widely and/or hash chaining it (git, Bitcoin blockchain, as well as various online trusted timestamping services etc). >> >> To protect the generation of the logs and their authenticity, you'll need a trusted hardware platform, like with a TPM. Something which can enforce what software has access to what. > Step right up to be the first prosecutor or expert witness to try to convince a jury. That's what expert witnesses are for. Juries accept DNA evidence, which is technically much more complex than a hash chain. (Of course, juries accept evidence from "trained experts" in what later turns out to be pseudo-science and wishful thinking - like much bite-mark analysis, for example. But that's a different problem.) -- Jerry From dborkman at redhat.com Mon Oct 27 06:42:50 2014 From: dborkman at redhat.com (Daniel Borkmann) Date: Mon, 27 Oct 2014 11:42:50 +0100 Subject: [Cryptography] Auditable logs? In-Reply-To: <10883289.vQvXjr6nWG@tachyon.chronox.de> References: <10883289.vQvXjr6nWG@tachyon.chronox.de> Message-ID: <544E21AA.4070807@redhat.com> On 10/27/2014 04:44 AM, Stephan Mueller wrote: > On Sunday, 26 October 2014 at 20:28:13, Sandy Harris wrote: ... >> Various computer-mediated activities may end up in court for a range >> of reasons and in many cases log files will be used as evidence. >> However for most log file formats, deleting a few lines or adding a >> few bogus ones is trivial. Even forging an entire file or large chunk >> thereof is not impossible.
>> >> Lawyers for one side or the other seem quite likely to attack the >> credibility of log files and/or of the sys admin who provides them. In >> at least some cases, proof "beyond reasonable doubt" is required and >> that is going to be very difficult if the lawyers trying to create >> some doubt are good. >> >> What sort of crypto mechanisms might help here? I can see various >> applications of digital signatures and timestamps that might help, but >> nothing close to a full solution. > > What about using git as a log backend? Logically it is a chronological tracker > based on a good cryptographic hash. Have you looked into journald's FSS [1]? [1] http://lwn.net/Articles/512895/ From leichter at lrw.com Mon Oct 27 07:35:05 2014 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 27 Oct 2014 07:35:05 -0400 Subject: [Cryptography] Paranoia for a Monday Morning Message-ID: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> We've seen increasing evidence that the NSA influenced the choice of cryptographic standards towards designs that were extremely difficult to get right - e.g., Dan Bernstein's claims that the standard elliptic curves have arithmetic whose implementations need special-case paths that make side-channel attacks much easier than they need to be. As I look at the world around me, however, I see few proven attacks against fielded cryptographic implementations - but an ever-flowing stream of attacks against another class of standardized software. I'm talking, of course, about browsers. The complexity of browser standards - and of ancillary software like Flash - has proved way beyond our capability to program without error.
It's easy to blame Adobe or the Microsoft of old for incompetent programming; but even the latest IE, produced under what may be the best "secure software development chain" in the world; and Chrome, a clean-sheet, open-source implementation by a team containing some of the best security guys out there; continue to be found to have gaping holes. At some point, you have to step back and admit that the problem doesn't lie with the developers: They are being set up to fail, handed a set of specifications that are simply too hard to get right. And that, of course, raises the question: Accident, or enemy action? -- Jerry From alexander.kjeldaas at gmail.com Mon Oct 27 07:51:06 2014 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Mon, 27 Oct 2014 12:51:06 +0100 Subject: [Cryptography] SPHINCS: practical hash-based digital signatures In-Reply-To: References: <20141007233041.50e21497@pc> Message-ID: On Wed, Oct 8, 2014 at 5:03 PM, Ben Laurie wrote: > > > On Tue Oct 07 2014 at 10:51:21 PM Hanno Böck wrote: >> I like it that the whole area of post-quantum-crypto is getting more >> attention lately. >> >> However, what immediately caught my attention: The webpage says >> "Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB" >> >> The signature size is a problem. It makes the claim that it's a >> "drop-in replacement" for current signature schemes somewhat >> questionable. >> >> 41 KB may not seem much, but consider a normal TLS handshake. It >> usually already contains three signatures (2 for the certificate chain >> and one for the handshake itself). That already makes 123 KB. >> >> It may not seem that much, but it definitely is an obstacle because this >> would significantly impact your loading time. >> > > Definitely a deal breaker for HTTPS. > > > TLS can re-use sessions, and re-use cached information, so this overhead can be negligible. Large signatures will just adjust the trade-off, favoring session re-use and caching.
Even without session-reuse, 2 of the certificate chain hashes described above can be cached indefinitely on the client, which means the number of DNS requests is an absolute upper bound on this traffic. For example https://tools.ietf.org/html/draft-ietf-tls-cached-info-16 could be used. Btw, the same approach makes sense for the transfer of the SCT in your CT design. Further, for HTTP/2.0 the web will get longer sessions, so handshake overhead should matter less. Re-sending static information in the TLS handshake is inefficient and makes no sense, and designing with that as a fundamental limit to the design space is simply not necessary. Having a signature scheme that is hard to implement incorrectly and that is quantum-computer secure seems like an obvious win. Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan at geer.org Mon Oct 27 08:30:21 2014 From: dan at geer.org (dan at geer.org) Date: Mon, 27 Oct 2014 08:30:21 -0400 Subject: [Cryptography] Auditable logs? In-Reply-To: Your message of "Sun, 26 Oct 2014 21:01:24 -0500." <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> Message-ID: <20141027123021.E27D82284F8@palinka.tinho.net> Where I do think this has relevance is Electronic Health Records (EHRs). I do not see a unitary EHR per person but rather, like today, health records dispersed at least as broadly as the number of providers that the patient has (your internist has some, your cardiologist has some, your urologist...) plus the number of insurance carriers involved plus, perhaps, regulatory overseers of the most nannified sort. Today this is part self-protection on the part of the provider against the event of malpractice claims -- "Here is the information I was furnished from the laboratory and thus my decision was as follows." It is also partly that in most (U.S.) states, the medical record is the property of the provider, not the patient.
(Ownership was legislatively swapped from patient to provider in the middle 1970s in Massachusetts when I was myself working in teaching hospitals on the then-novel idea of an automated health record; such change became a generalized trend nationwide.) In short, there is little to no likelihood of a unitary EHR for the patient and thus it is likely that logs will have increased meaning including in front of juries. On the other hand, there are those in the medical community with whom I am still in touch who believe that in due course the ownership of the medical record will revert to its previous form, i.e., that the patient will own it. Whether patients will want to have the facility of log analysis over how their providers use their EHRs, including how those to whom their providers outsource use their EHRs, remains unknown. People on this list are probably not representative of the general public in these matters. In any case, we have begun a natural experiment on the importance of auditable logs though probably a slow running one. --dan From rsalz at akamai.com Mon Oct 27 09:40:31 2014 From: rsalz at akamai.com (Salz, Rich) Date: Mon, 27 Oct 2014 09:40:31 -0400 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: References: Message-ID: <2A0EFB9C05D0164E98F19BB0AF3708C71D3AF655FC@USMBX1.msg.corp.akamai.com> > Quite a few years ago, I argued that it should be possible to identify laser > printers by small variations in toner placement. And they were doing it for some time. I first heard the story from an HP person probably around 2000. http://hardware.slashdot.org/story/12/02/18/0455217/foia-request-shows-which-printer-companies-cooperated-with-us-government -- Principal Security Engineer, Akamai Technologies IM: rsalz at jabber.me Twitter: RichSalz From linus at nordberg.se Mon Oct 27 10:09:42 2014 From: linus at nordberg.se (Linus Nordberg) Date: Mon, 27 Oct 2014 15:09:42 +0100 Subject: [Cryptography] Auditable logs? 
In-Reply-To: (Sandy Harris's message of "Sun, 26 Oct 2014 20:28:13 -0400") References: Message-ID: <874mupmwnt.fsf@nordberg.se> Sandy Harris wrote Sun, 26 Oct 2014 20:28:13 -0400: | What sort of crypto mechanisms might help here? I can see various | applications of digital signatures and timestamps that might help, but | nothing close to a full solution. Merkle trees can be used as described in "Efficient Data Structures for Tamper-Evident Logging" [0]. This is what Certificate Transparency is using for logging X.509 certificates. [0] https://www.usenix.org/event/sec09/tech/full_papers/crosby.pdf From hbaker1 at pipeline.com Mon Oct 27 10:56:26 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Mon, 27 Oct 2014 07:56:26 -0700 Subject: [Cryptography] Auditable logs? In-Reply-To: <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> References: <76EF12B9-9691-497B-B816-F24044B8A1B8@gmail.com> Message-ID: At 07:01 PM 10/26/2014, Matt Crawford wrote: >On Oct 26, 2014, at 8:22 PM, Natanael wrote: >> Hash the data, timestamp it by publishing it widely and/or hash chaining it (git, Bitcoin blockchain, as well as various online trusted timestamping services etc). >> >> To protect the generation of the logs and their authenticity, you'll need a trusted hardware platform, like with a TPM. Something which can enforce what software has access to what. > >Step right up to be the first prosecutor or expert witness to try to convince a jury. Juries believe those "time stamps" embedded into CCTV images all the time, because they look so official, with an individual serial number on each frame! From teshrim at pdx.edu Mon Oct 27 11:11:36 2014 From: teshrim at pdx.edu (Tom Shrimpton) Date: Mon, 27 Oct 2014 08:11:36 -0700 Subject: [Cryptography] A TRNG review per day: RDRAND and the right TRNG architecture In-Reply-To: References: Message-ID: <544E60A8.1040903@cs.pdx.edu> On 10/26/14 7:10 PM, David Leon Gil wrote: > I thought that the whitening design had been published?
They're using > CTR-DRBG instantiated with AES-128. (This, in itself, is probably all > the NSA could want; the security strength of that construction is > rather low.) > > Here's the paper: > > http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf For additional analysis, along the lines of what was recently done by Dodis et al. for /dev/random and /dev/urandom, you might find "A Provable Security Analysis of Intel's Secure Key RNG" (http://eprint.iacr.org/2014/504) interesting. Cheers, -Tom From waywardgeek at gmail.com Mon Oct 27 13:09:54 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 27 Oct 2014 13:09:54 -0400 Subject: [Cryptography] A TRNG review per day: Turbid Message-ID: Turbid is a FOSS TRNG that generates high quality random data from a system's sound card. It is free, and when calibrated correctly by an expert with a sound card capable of reliably amplifying thermal noise, it generates provable amounts of entropy. Being totally FOSS, it's 100% auditable, making it one of the few decent choices out there for a good TRNG, IMO. The Turbid paper is here: http://www.av8n.com/turbid/paper/turbid.htm It discusses many important concepts in TRNGs, and is an excellent contribution on its own. It is easier to be a critic than an author of good ideas, but my role here is pointing out the bad along with the good. I applaud the authors for excellent work in general, but I do not consider Turbid's approach of using a sound card as an entropy source to be a particularly good idea for these reasons: - A sound card used by Turbid cannot be used for input, meaning most users need a second sound card. - Once a user is buying extra hardware for use as a TRNG, there is no reason to use a sound card, when a TRNG designed for the purpose can do a better job. - Turbid needs to be calibrated for each type of sound card by an expert at Turbid configuration.
Given how few people there are who can do this properly, availability of properly tuned Turbid installations will likely remain low. - ALSA has to be patched to ensure exclusive access to the mic input, so a good sys-admin is also required. Given the difficulty of analyzing a system's sound card for potential for producing entropy, it makes more sense, IMO, to use a dedicated hardware TRNG, where the entropy can be proven once. In this case, using a sound card is just one possible solution among many, and not my preferred solution. A dedicated amplification of thermal noise in a carefully designed and shielded circuit for this purpose would be better, for example, than a random sound card which was not designed for generating cryptographically secure random data. There are cheap A/D based TRNGs out there that do exactly this, though I would go with a OneRNG or possibly an Entropy Key before one of those. There is a *huge* number of threats to consider, like whether a USB key can PWN your system, and sound card USB keys simply aren't designed for security. However, there is one good thing about Turbid vs custom TRNG hardware. As IanG states, using a dedicated hardware TRNG is like having a "Kick me" sign on your back. That device is a prime target for attackers, while buying a sound card at Best Buy would go unnoticed. However, I would prefer to rely on the security measures proposed by the OneRNG team rather than trying to get a Turbid install right. Turbid is not the only system with an entropy lower bound proven by physics. For example, my Infinite Noise Multiplier gives log2(K) bits of entropy per output bit, even if the only noise is the resistors around the op-amp. In contrast, Turbid requires a skilled analyst to determine the lower bound of entropy for any given system. TRNGs using zener noise have trouble, but those amplifying thermal noise, which are also common, generate easily provable entropy.
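As an aside, the log2(K) figure mentioned above is easy to work out. A short sketch, with a loop gain of K = 1.82 assumed purely for illustration (not a value taken from the INM design):

```python
import math

# Entropy per output bit of an Infinite Noise Multiplier is log2(K),
# where K is the loop gain. K = 1.82 here is an assumed illustrative value.
K = 1.82
entropy_per_bit = math.log2(K)                        # about 0.86 bits per output bit
raw_bits_for_seed = math.ceil(256 / entropy_per_bit)  # about 297 raw bits per 256-bit seed
```

The point of the closed-form bound is that a driver can budget raw bits per seed without any per-device statistical estimation.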
Those based on "A/D converter noise" are common thermal entropy sources. The paper states: "It harvests entropy from physical processes, and uses that entropy efficiently. The hash saturation principle is used to distill the data, so that the output has virtually 100% entropy density. This is calculated from the laws of physics, not just statistically estimated, and is provably correct under mild assumptions." I particularly like their coverage of the hash saturation principle. This is used by most TRNGs. This paper quantifies how many extra bits of entropy are needed to saturate the entropy pool, and it is surprisingly few! I use 2X the input entropy as hashed output data, which may be over-kill. Getting a system to work well with Turbid first requires a "good-quality" sound card: "We start with a raw input, typically from a good-quality sound card." I would dispute "good-quality" here. What they need is an A/D converter with enough bits to digitize the thermal noise on the mic input. A 24-bit A/D converter is simply a marketing tool, since the low 8-ish bits will be random. That's not a "good" sound card, IMO, probably just a waste of money, but it is wonderful for use as a TRNG. A sensible 12-bit A/D mic input probably is unusable with Turbid. Here's what they say in Appendix B about their assumptions: "Let C be a machine that endlessly emits symbols from some alphabet Z. We assume the symbols are IID, that is, independent and identically distributed. That means we can calculate things on a symbol-by-symbol basis, without worrying about strings, the length of strings, or any of that. Let P_C(i) denote the probability of the ith symbol in the alphabet. Note: Nothing is ever exactly IID, but real soundcards are expected to be very nearly IID. At the least, we can say that they are expected to have very little memory. What we require for good random generation is a very mild subset of what is required for good audio performance."
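One way to probe the IID assumption empirically (an editor's sketch, not part of Turbid) is to compare a first-order entropy estimate against an estimate conditioned on the previous sample: for truly independent samples the two agree, while correlation pulls the conditional estimate down.

```python
import math
import random
from collections import Counter

def shannon_entropy(samples):
    """First-order Shannon entropy in bits per symbol."""
    counts = Counter(samples)
    total = len(samples)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def conditional_entropy(samples):
    """Entropy of each sample given the previous one, H(X_n | X_{n-1})."""
    pair_counts = Counter(zip(samples, samples[1:]))
    prev_counts = Counter(samples[:-1])
    total = len(samples) - 1
    h = 0.0
    for (prev, _), c in pair_counts.items():
        h -= (c / total) * math.log2(c / prev_counts[prev])
    return h

# Demo: a strongly correlated stream (each sample usually repeats the last)
# shows a large gap between the two estimates; near-IID data would not.
random.seed(1)
correlated = [0]
for _ in range(99999):
    correlated.append(correlated[-1] if random.random() < 0.9 else random.randrange(4))
assert conditional_entropy(correlated) < shannon_entropy(correlated)
```

A large gap between the two numbers is exactly the kind of cheap health check that could flag a miscalibrated installation.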
I had difficulty reading their proofs with this invalid assumption that samples are independent. They are not independent, or even close to independent. However, I read through the paper, and can see how the arguments can be enhanced to deal with correlation between samples easily enough. Their conclusions seem sound to me, but the short-cut of this assumption was cutting corners when they didn't have to. It also set off alarms in my head when I read it. I read this assumption a while back, and stopped reading the paper right there. I didn't return to Turbid until today, and if you had asked me about Turbid yesterday, I would have had some uncomplimentary things to say about the authors making unrealistic assumptions and "proving" things with them, just like a lot of snake-oil TRNG manufacturers do. They also say: "We use no secret internal state and therefore require no seed, no initialization." This is touted as a strength when in fact it is a weakness. Turbid uses SHA-1 to concentrate entropy and whiten its output. If they were to use the init/update/finalize interface, and make a copy of the state before finalize, and use that copy for the next sample, they could carry entropy from one SHA-1 application to the next, which would make their output less predictable. Some inputs they pass to SHA-1 will be far more likely than others, and because of this, the corresponding outputs will also be more likely. They go on to say: "Best performance and maximally-trustworthy results depend on proper calibration of the hardware. This needs to be done only once for each make and model of hardware, but it really ought to be done. Turbid provides extensive calibration features." I feel this is the single most important point about Turbid. So long as someone skilled at the task calibrates Turbid for each revision of each make and model of hardware, assuming it has a suitable sound input that no one wants to use for inputting sound, it can be made secure.
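The copy-before-finalize idea suggested above can be sketched as follows (an editor's illustration, not Turbid's actual code). Python's hashlib exposes exactly the needed state copy; SHA-256 stands in for whatever hash the extractor uses:

```python
import hashlib

class CarryingExtractor:
    """Entropy extractor that carries hash state across outputs, so each
    output depends on all entropy seen so far, not just the latest sample."""

    def __init__(self):
        self.state = hashlib.sha256()  # any hashlib hash works, e.g. blake2b

    def extract(self, raw_sample: bytes) -> bytes:
        self.state.update(raw_sample)
        # Finalize a *copy*; the running state survives for the next sample.
        return self.state.copy().digest()

ext = CarryingExtractor()
out1 = ext.extract(b"noisy sample 1")
out2 = ext.extract(b"noisy sample 2")
# out2 depends on both samples; a stateless extractor would hash
# each sample in isolation.
```

With carried state, two devices that happen to produce the same raw sample at some instant still diverge in their outputs, which is the unpredictability gain being argued for.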
How often do systems have redundant sound inputs? How many skilled technicians do we have seeking out these useless mic inputs for use with Turbid? In section 4, "surprisal" is discussed, but with the assumption that each output symbol from the sound card is independent of the others, apparently regardless of how fast the mic input is sampled, which is far from being true. However, sampling fast will capture more of the available entropy, so there's no harm in doing so. I would feel better about Turbid if they were to estimate the entropy in the input, and compare this estimate to the theoretical result, and show that there is a close match. I do this for my INM, for example, and others do this for their TRNGs. I built three of them yesterday, and all three output measured entropy within 0.5% of the model's prediction. Turbid's theory is solid, but when a Turbid technician goofs, it would be nice to catch the error. Sound outputs will be correlated when sampled at high speed. To help correct for this short-term correlation, Turbid could keep a history of the next sample given several previous samples. This would give a good estimation of surprisal, allowing more accurate entropy estimation. This could then be compared to the predicted entropy. The paper states: "If there is some 60-cycle hum or other interference, even something injected by an adversary, that cannot reduce the variability (except in ultra-extreme cases)." This is the basic concept behind an Infinite Noise Multiplier, where signals added by an attacker cannot reduce the entropy of the output. Many other TRNGs also rely on this principle, and like Turbid, an attacker who can inject a large enough signal can saturate the output, controlling the bits produced. This problem is worse in cheap zener-noise TRNGs, which saturate easily, but with a 24-bit A/D, not much amplification is required to sample thermal noise. They state: "We also need a few specialists who know how to tune a piano.
Similarly, we need a few specialists who understand in detail how turbid works. Security requires attention to detail. It requires double-checking and triple-checking." "Understanding turbid requires some interdisciplinary skills. It requires physics, analog electronics, and cryptography. If you are weak in one of those areas, it could take you a year to catch up." This is the weakest point of Turbid, IMO. Security needs to be easy to be secure. As the paper states with the line-in example on a ThinkPad, if the upstream gain times the expected thermal noise level given the presence of capacitance to GND is less than 1 bit worth of input voltage on the A/D converter, then most entropy will be lost. Other TRNG architectures do not have such problems. Turbid is difficult to get right. Section 8.3 is titled, "Whitener Considered Unhelpful". This is just a matter of semantics, IMO. I would call Turbid's output hash function a whitener, so hearing them claim whiteners are not helpful seems strange to me. Most people working on TRNGs would call Turbid's output hash a whitener, I think. I would prefer Blake2b rather than SHA-1 in Turbid, since it is faster and more secure, and they should keep the internal state for the next snippet of data to randomize the chances of any given output occurring, rather than what they have now where some outputs are more likely than others. The health checks for Turbid sound weak, such as checking for bits stuck at 1 or 0. In my INM driver, as well as drivers for OneRNG, Entropy Key, and others, entropy is statistically estimated, and if any sample fails, it is discarded, and if this continues for long enough, it stops altogether. Here's a part in the paper I found very helpful: "A subtle type of improper reseeding or improper stretching (failure 3) is pointed out in reference 22.
If you have a source of entropy with a small but nonzero rate, you may be tempted to stir the entropy into the internal state of the PRNG as often as you can, whenever a small amount of entropy (ΔS) becomes available. This alas leaves you open to a track-and-hold attack. The problem is that if the adversaries had captured the previous state, they can capture the new state with only 2^ΔS work by brute-force search, which is infinitesimal compared to brute-force capture of a new state from scratch. So you ought to accumulate quite a few bits, and then stir them in all at once ("quantized reseeding"). If the source of entropy is very weak, this may lead to an unacceptable interval between reseedings, which means, once again, that you may be in the market for a HRNG with plenty of throughput, as described in this paper." This is why TRNGs should mix only a cryptographically strong entropy sample at a time into /dev/random. 256 bits at a time should do the trick. They also said: "Therefore in some sense /dev/urandom can be considered a stretched random generator, but it has the nasty property of using up all the available entropy from /dev/random before it starts doing any stretching. Therefore /dev/urandom provides an example of bad side effects (failure 4). Until the pool entropy goes to zero, every byte read from either /dev/random or /dev/urandom takes 8 bits from the pool. That means that programs that want to read modest amounts of high-grade randomness from /dev/random cannot coexist with programs reading large amounts of lesser-grade randomness from /dev/urandom. In contrast, the stretched random generator described in this paper is much better behaved, in that it doesn't gobble up more entropy than it needs." Linux (at least Ubuntu 14.04) lets users read from /dev/random if only 64 bits of entropy exist in the pool, meaning if an attacker knows the state when the pool is at 0, he can guess your keys read from /dev/random in 2^64 guesses.
I guess in real life that's a lot, but I think this makes it harder than it needs to be to reseed the Linux pool when compromised. Force-feeding the entropy pool >= 4096 bits *might* be good enough... not 100% sure. Why isn't the lower limit something stronger, like 160 bits or more? The paper states: "The least-fundamental threats are probably the most important in practice. As an example in this category, consider the possibility that the generator is running on a multiuser machine, and some user might (inadvertently or otherwise) change the mixer gain. To prevent this, we went to a lot of trouble to patch the ALSA system so that we can open the mixer device in "exclusive" mode, so that nobody else can write to it." Instructions are provided for patching ALSA. However, until those patches are mainstream, users of Turbid will also need to be good at system administration. Turbid violates my KISS rule. Security this complex isn't secure. All that said, I think their paper is outstanding, and I benefited substantially from reading it. I just don't expect to set up a Turbid installation any time soon. It *is* excellent work, however, and it advanced the state of the art so far as I know it (which is limited) a ton. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgut001 at cs.auckland.ac.nz Mon Oct 27 14:19:38 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Tue, 28 Oct 2014 07:19:38 +1300 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: Message-ID: Jerry Leichter writes: >All of these things have fundamental limits set by human sensory >capabilities. Most of our digital technologies are near those limits - most >obviously in high-resolution LCD displays (what Apple calls "Retina" >displays). Sure, under some circumstances, some well-trained observers can
But it's getting harder every day, and >soon only the "golden ears" (and their analogues in different spheres) will >even claim to be able to tell, and they'll consistently fail careful tests. > >Once you get to that point, there's no reason to go further in controlling >manufacturing processes and such. That only applies if the final consumer of the image is a human being. In this case it's not human eyeballs that the image is intended for, and the sensor manufacturers aren't optimising for that target. Peter. From pgut001 at cs.auckland.ac.nz Mon Oct 27 14:45:12 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Tue, 28 Oct 2014 07:45:12 +1300 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> Message-ID: Jerry Leichter writes: >As I look at the world around me, however, I see few proven attacks against >fielded cryptographic implementations - but an ever-flowing stream of attacks >against another class of standardized software. Interesting that you should mention this, I was talking today to a PKI practitioner (so not someone who charges you $50,000 to tell you how wonderful your PKI will be when it's working, but someone who actually has to get it working) and they mentioned that while the geeks are worrying about whether they can roll over their SHA-1 certs and whatnot quickly enough and when attackers will start forging certs, what's really hitting them is the fact that it needs constant shepherding and tweaking and maintenance to keep it running. So the problem isn't one of security but one of availability, that once you've tied your infrastructure to the inflexible rigidity of a cryptographically-bound system your concerns will be running your organisational processes within that straightjacket and not any actual attacks that the straightjacket may or may not be preventing. 
(And as you say, the attacks aren't against the crypto anyway, but against all the other stuff, completely ignoring the presence of the crypto. Insert Shamir's Law quote here). Peter. From natanael.l at gmail.com Mon Oct 27 14:49:27 2014 From: natanael.l at gmail.com (Natanael) Date: Mon, 27 Oct 2014 19:49:27 +0100 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> References: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> Message-ID: Den 27 okt 2014 18:00 skrev "Jerry Leichter" : > > We've seen increasing evidence that the NSA influenced the choice of cryptographic standards towards designs that were extremely difficult to get right - e.g., Dan Bernstein's claims that the standard elliptic curves have arithmetic whose implementations need special-case paths that make side-channel attacks much easier than they need to be. > > As I look at the world around me, however, I see few proven attacks against fielded cryptographic implementations - but an ever-flowing stream of attacks against another class of standardized software. I'm talking, of course, about browsers. The complexity of browser standards - and of ancillary software like Flash - has proved way beyond our capability to program without error. It's easy to blame Adobe or the Microsoft of old for incompetent programming; but even the latest IE, produced under what may be the best "secure software development chain" in the world; and Chrome, a clean-sheet, open-source implementation by a team containing some of the best security guys out there; continue to be found to have gaping holes. At some point, you have to step back and admit that the problem doesn't lie with the developers: They are being set up to fail, handed a set of specifications that we simply too hard to get right. > > And that, of course, raises the question: Accident, or enemy action? How about "complexity" and "legacy compatibility"? 
I'm cautiously optimistic about Mozilla's new engine under development, written in the memory-safe language Rust. There's also one browser with what it calls a formally verified kernel (http://goto.ucsd.edu/quark). The web was already bad enough when the browser wars were just starting. There was no need for any intelligence agency to fuel it. There was no chance of defining a common, well-specified target. XHTML was one attempt to create strict rules for structuring code on web pages, but browsers instead moved towards being more lenient in parsing inputs, because web developers kept screwing up and you don't want to see the browser refuse to even render 90% of all pages. Maybe HTML6 will be more focused around capability thinking and well-defined features that can be implemented securely without breaking stuff? We can all hope for it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd at av8n.com Mon Oct 27 16:14:05 2014 From: jsd at av8n.com (John Denker) Date: Mon, 27 Oct 2014 13:14:05 -0700 Subject: [Cryptography] Auditable logs? In-Reply-To: References: Message-ID: <544EA78D.8080108@av8n.com> On 10/26/2014 05:28 PM, Sandy Harris wrote: > What sort of crypto mechanisms might help here? Let me start out ultra-simple and work up from there. Here is a technique that applies to any file, not just a log file. I've used this for decades. When I invent something, I type up a description. I compute an HMAC and send it to my lawyer, with instructions to date-stamp it and put it in the files. This compares very favorably to the usual practice of having a colleague countersign my lab book. Among other things -- It means there can be no suggestion that I altered the lab book after it was signed. -- It means there is no possibility of a leak; the HMAC is a one-way function and cannot be used to reconstruct the meaning of the document. 
-- I expect the timestamped page to be admissible under the "business records exception" http://en.wikipedia.org/wiki/Business_records_exception which might not apply to my colleague since he was not necessarily required to sign my book as a matter of routine. This suffices to prove that something was invented /before/ a certain date. In contrast, proving that something happened /after/ a certain date -- e.g. hostage proof-of-life -- is a whole different ballgame, as discussed in a previous thread. This is a subset of the infinitely-tricky double-agent triple-agent problem. The foregoing is really bare bones, not even involving a digital signature, but it gets the job done at two levels: 1) I trust it. 2) The adversaries seem to trust it. IANAL and my experience with this is limited ... but in a situation where the adversaries were spending millions of dollars to discredit everything and everybody associated with me, they didn't bother to challenge this. Starting from that bare-bones baseline, you can make a number of improvements. One possible embellishment is to publish the HMAC in a newspaper somewhere. There are small-circulation newspapers that specialize in publishing "legal notices" that nobody will ever see, yet meet the legal definition of publication. This is a crude form of date-stamping. A better option is to send the HMAC to a "notary service" who adds a timestamp, digitally signs it, and sends it back. That gives you something you can keep in your own files, without relying on the lawyer's files. For belt-and-suspenders protection, do both. Have it notarized /and/ filed by a third party. The foregoing applies to loose documents. In the case of a log file, you can do something even stronger. Every time you add something important, and also at scheduled intervals (daily, weekly, or whatever), hash the new material /along with the previous hash/. (This is basically how the git commit logs work.) Have the new hash signed and/or filed as above. 
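The chaining step described above (hash each new batch of material together with the previous hash, as in git) can be sketched in a few lines. This is only an illustrative sketch, not the author's actual procedure: the hash function, genesis value, and sample entries are all placeholders, and a real implementation should length-prefix or otherwise frame entries to avoid boundary ambiguity.

```python
import hashlib

def chain_hash(prev_hash: bytes, new_material: bytes) -> bytes:
    """Hash the new log material together with the previous hash."""
    # Note: a real system should frame/length-prefix new_material.
    return hashlib.sha256(prev_hash + new_material).digest()

# Arbitrary starting value for an empty log.
GENESIS = b"\x00" * 32

entries = [b"2014-10-26: entry one",
           b"2014-10-27: entry two",
           b"2014-10-28: entry three"]

h = GENESIS
for e in entries:
    h = chain_hash(h, e)
# h now commits to every entry, in order; altering or reordering any
# earlier entry changes the final hash. This is the value one would
# periodically have timestamped, notarized, or filed with a third party.
```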
This creates a /chain/ that is hard to hack. That should suffice for any application I can imagine at the moment. If there is something else that needs doing, please explain. ---------- PS: Note that much harder problems than this have been solved. In particular, there is an extensive literature on zero-knowledge proofs. This involves some elegant cryptography. From bear at sonic.net Mon Oct 27 16:44:20 2014 From: bear at sonic.net (Bear) Date: Mon, 27 Oct 2014 13:44:20 -0700 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> References: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> Message-ID: <1414442660.8050.1.camel@sonic.net> On Mon, 2014-10-27 at 07:35 -0400, Jerry Leichter wrote: > As I look at the world around me, however, I see few proven attacks > against fielded cryptographic implementations - but an ever-flowing > stream of attacks against another class of standardized software. I'm > talking, of course, about browsers. > > And that, of course, raises the question: Accident, or enemy action? Tempting as it is to look around for someone to blame, I think this is simply a result of the browser wars of the '90s. At that time leading browser manufacturers were deliberately introducing features incompatible with other browsers, implementing features introduced by other browsers in ways that were deliberately incompatible or subtly different ("extended!"), creating HTML authoring tools that deliberately caused other vendors' browsers to stumble over the differences, and scrambling to play catch-up with each other which meant that the differences and incompatibilities multiplied with every new version. This festering swamp is the environment that the current browser "standards" you're talking about grew out of. It is no remarkable thing that they are horrendously complex, inconsistent, and filled with labyrinthine masses of exceptions. 
Bear From clemens at ladisch.de Mon Oct 27 16:47:55 2014 From: clemens at ladisch.de (Clemens Ladisch) Date: Mon, 27 Oct 2014 21:47:55 +0100 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: References: Message-ID: <544EAF7B.1050101@ladisch.de> Bill Cox wrote: > - A sound card used by Turbid cannot be used for input, meaning most users > need a second sound card. Quite a few sound devices have several independent inputs. > - Once a user is buying extra hardware for use as a TRNG, there is no > reason to use a sound card, when a TRNG designed for the purpose can do a > better job. Sound devices are widely available, and cheap, and do not require an additional driver. > Sound outputs will be correlated when sampled at high speed. If the output contains _only_ white noise, there will be the same amount of noise at all frequencies, so the sample rate would not matter. > To help correct for this short-term correlation, Turbid could keep the > history of the next sample given several previous samples. This would > give a good estimation of surprisal, allowing more accurate entropy > estimation. I believe that estimating the entropy (i.e., amount of white noise) of a sound signal cannot be done reliably without computing the FFT. > The paper states: > > "The least-fundamental threats are probably the most important in > practice. As an example in this category, consider the possibility that the > generator is running on a multiuser machine, and some user might > (inadvertently or otherwise) change the mixer gain. To prevent this, we > went to a lot of trouble to patch the ALSA system so that we can open the > mixer device in ?exclusive? mode, so that nobody else can write to it." > > Instructions are provided for patching ALSA. AFAICS the patch is missing from the latest turbid version, and the makefile references an ALSA version that is over ten years old. However, no patch is needed; locking mixer controls has always been possible. 
Apparently, ALSA's documentation is, er, capable of improvement. Regards, Clemens From alfiej at fastmail.fm Mon Oct 27 16:53:18 2014 From: alfiej at fastmail.fm (Alfie John) Date: Tue, 28 Oct 2014 07:53:18 +1100 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> References: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> Message-ID: <1414443198.2784343.183930265.7C4A808F@webmail.messagingengine.com> On Mon, Oct 27, 2014, at 10:35 PM, Jerry Leichter wrote: > It's easy to blame Adobe or the Microsoft of old for incompetent > programming; but even the latest IE, produced under what may be the > best "secure software development chain" in the world; Citation needed. > and Chrome, a clean-sheet, open-source implementation by a team > containing some of the best security guys out there; continue to be > found to have gaping holes. Clean-sheet? No. Chrome and Chromium for a long time used WebKit as the rendering engine. > At some point, you have to step back and admit that the problem > doesn't lie with the developers: They are being set up to fail, > handed a set of specifications that we simply too hard to get right. Have a look at what Mozilla is doing. They developed a new language called Rust which has a focus on safety, and are using it to build a new rendering engine called Servo. It's not that the specifications are too hard, it's more that complexity in general is hard to manage. And with Firefox having over 12 million lines of code with over 3000 contributors, you're now applying Brooks' Law at absurd levels. > And that, of course, raises the question: Accident, or enemy action? I'd put this into the paranoia basket. 
Alfie -- Alfie John alfiej at fastmail.fm From bear at sonic.net Mon Oct 27 16:54:19 2014 From: bear at sonic.net (Bear) Date: Mon, 27 Oct 2014 13:54:19 -0700 Subject: [Cryptography] In search of random numbers In-Reply-To: <544CC2CE.8030102@iang.org> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <1414264603.10918.1.camel@sonic.net> <544CC2CE.8030102@iang.org> Message-ID: <1414443259.8050.3.camel@sonic.net> On Sun, 2014-10-26 at 09:45 +0000, ianG wrote: > You only get product recall when it is likely to kill the user. Bad as > the randomness issue appears to us, I'm not sure we're there yet. The randomness issue doesn't look bad to me. You just boot a non-networked OS and don't load any networking software or generate any keys until something actually needs a network connection or a key. Bear From iang at iang.org Tue Oct 28 09:32:03 2014 From: iang at iang.org (ianG) Date: Tue, 28 Oct 2014 13:32:03 +0000 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> References: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> Message-ID: <544F9AD3.2050303@iang.org> On 27/10/2014 11:35 am, Jerry Leichter wrote: > We've seen increasing evidence that the NSA influenced the choice of cryptographic standards towards designs that were extremely difficult to get right - e.g., Dan Bernstein's claims that the standard elliptic curves have arithmetic whose implementations need special-case paths that make side-channel attacks much easier than they need to be. > > As I look at the world around me, however, I see few proven attacks against fielded cryptographic implementations - but an ever-flowing stream of attacks against another class of standardized software. As some of us have been saying for many a year, the crypto is the least of your problems. 
Concentrate on the software engineering, and when you've got that right, you may have cause to demand better crypto. And stop this sophistry with algorithm agility and other vanity concepts... > I'm talking, of course, about browsers. The complexity of browser standards - and of ancillary software like Flash - has proved way beyond our capability to program without error. It's easy to blame Adobe or the Microsoft of old for incompetent programming; but even the latest IE, produced under what may be the best "secure software development chain" in the world; and Chrome, a clean-sheet, open-source implementation by a team containing some of the best security guys out there; continue to be found to have gaping holes. At some point, you have to step back and admit that the problem doesn't lie with the developers: They are being set up to fail, handed a set of specifications that we simply too hard to get right. Right. The experience I had reduced to this, if I can compress it. Following the Browser wars that destroyed Netscape, Mozilla swore to implement standards, and clawed back the high ground. Others sort of followed suit. But, standards don't change in ways that improve security for end users. Standards have no feedback loop back to endusers because they aren't represented, and standards groups are stuck in 1990s security model thinking (e.g., the ITM). Hence, the secure browsing system might have started out with some notion X of security back in 1994. But absent a correcting feedback loop back to users, it was set up to deviate from X in a negative direction. Spencian mechanics apply, market for silver bullets and all that. > And that, of course, raises the question: Accident, or enemy action? Enemy action is part of it, but what the enemy did was leverage our own accident-proneness. The notion of security from PKI was strongly pushed by the NSA, through many channels, because they knew they could backdoor the CAs. 
This was a big thing at the time, it was written somewhere that they 'bet the farm' on this strategy. In contrast, it's not clear to me that they understood the MITM agenda per se which undermined the entire Internet security. But, once they saw how it raised the complexity barrier, they would have been all for it. "You must defend against the MITM at all costs!" Including security for all, unfortunately, but we've got a great story to tell about MITMs. So I'd expect that as shills in standards groups and browsers are outed over time, their actions will be strongly correlated with MUST-anti-MITM-ness. Once the PKI was bedded in with standards, the security failure over time was a certainty. I suspect the NSA bungled this strategy. They probably rationalised that the USG would be protected by their own strong CAs, but it turned out that the weakness outside that vector was so endemic that everyone suffered, equally. iang From zooko at leastauthority.com Tue Oct 28 00:33:27 2014 From: zooko at leastauthority.com (Zooko Wilcox-OHearn) Date: Tue, 28 Oct 2014 04:33:27 +0000 Subject: [Cryptography] Auditable logs? Message-ID: We have a fairly thorough design for extending the vocabulary of the Tahoe-LAFS storage system for this. The added vocabulary item would be an "add-only set", a set of items that I can authorize you to add things into without authorizing you to remove or overwrite any of the things. This would be straightforward if we would just rely on some third party to run a server which will accept new ciphertexts from you but will refuse to delete or overwrite any of your old ciphertexts. Then the set would have the "add-only" property with respect to you, but not with respect to that server! The server would have the power to rollback to earlier versions of the set. 
We weren't satisfied with this, because all of the current vocabulary items in Tahoe-LAFS are enforced by end-to-end cryptography *without* relying on any single server to enforce the properties and without being vulnerable to any single server being able to violate the properties. (Those vocabulary items are: immutable things vs. mutable things, files vs. directories, and read-only access vs. read-write access.) So, we went pretty far in defining a data-structure/crypto-structure that minimized the power of servers. The resulting design is still vulnerable to rollback attack by a collusion of *all* of the servers, but if the reader connects to at least one server who is not in the collusion, then the add-only property holds. Here's the resulting design: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/795#comment:13 If you want even more detail, but more telegraphic in style, read the rest of the comments after comment 13, and follow the link in comment 16 back to a mailing list post. Regards, Zooko From leichter at lrw.com Mon Oct 27 19:37:57 2014 From: leichter at lrw.com (Jerry Leichter) Date: Mon, 27 Oct 2014 19:37:57 -0400 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: <1414443198.2784343.183930265.7C4A808F@webmail.messagingengine.com> References: <40096164-E092-4521-8BF3-17991522CFDA@lrw.com> <1414443198.2784343.183930265.7C4A808F@webmail.messagingengine.com> Message-ID: On Oct 27, 2014, at 4:53 PM, Alfie John wrote: >> It's easy to blame Adobe or the Microsoft of old for incompetent >> programming; but even the latest IE, produced under what may be the >> best "secure software development chain" in the world; > Citation needed. Widely described that way by people who I trust. Feel free to accept or reject the characterization. >> and Chrome, a clean-sheet, open-source implementation by a team >> containing some of the best security guys out there; continue to be >> found to have gaping holes. > Clean-sheet? No. 
Chrome and Chromium for a long time used WebKit as the > rendering engine. Yes, but they put a huge effort into isolating the rendering engine to contain any problems. And you need only look at the bugs reported and fixed *by the Chrome team* to see how seriously they looked at the code they did take. >> At some point, you have to step back and admit that the problem >> doesn't lie with the developers: They are being set up to fail, >> handed a set of specifications that we simply too hard to get right. > Have a look at what Mozilla is doing. They developed a new language > called Rust which has a focus on safety, and are using it to build a new > rendering engine called Servo. I'm developing a new, completely secure-by-design language in which I'll implement a *really* secure browser. It's not available yet, but trust me, it will *finally* kill off all those pesky browser security bugs. OK, Mozilla and Rust are a real effort with real publications. But let's keep in mind that Mozilla itself started out to be, among other things, a secure browser (not just "more secure than IE6", a rather low bar, but really secure). So did Chrome. Both were major steps forward; both have been attacked successfully. No claim that some new implementation *will be secure* is worth very much. We'll see how their new effort holds up "in the heat of battle". > It's not that the specifications are too hard, it's more that complexity > in general is hard to manage. And with Firefox having over 12 million > lines of code with over 3000 contributors, you're now applying Brooks' > Law at absurd levels. > >> And that, of course, raises the question: Accident, or enemy action? > > I'd put this into the paranoia basket. You know, two years ago I would have similarly classified claims that the NSA was recording every phone call in whole countries, keeping records on every US citizen - and deliberately working to weaken cryptographic standards. 
I agree with you that the NSA probably had little or nothing to do with the state of browsers - but mainly because they really didn't need to do anything, masses of attackable software were given to them for free. -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From mitch at niftyegg.com Mon Oct 27 21:14:32 2014 From: mitch at niftyegg.com (Tom Mitchell) Date: Mon, 27 Oct 2014 18:14:32 -0700 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: <20141026132203.AAE94228342@palinka.tinho.net> References: <20141026132203.AAE94228342@palinka.tinho.net> Message-ID: On Sun, Oct 26, 2014 at 6:22 AM, wrote: > > Here's Balasubramaniyan's PhD thesis describing the Pindr0p technology: > > > > > https://smartech.gatech.edu/bitstream/handle/1853/44920/balasubramaniyan_vijay_a_201108_phd.pdf > > Along similar lines, small noise in image acquisition is now well > enough understood and discernible to say "This camera did take that > picture" as it does to say "This rifle did fire that bullet," which > extends to "These two pictures/bullets came from the same camera/rifle." > > How much of that image survives the conversion to a JPG compressed image. At a RAW level this makes sense but RAW is very uncommon and most of the raw formats are vendor specific. EXIF data can be edited... Extraction of noise from image data would be helped a lot by knowing something about the image source. An opaque cover might be enough but would require local physical activity. Noise in a white image and noise in a black image would likely be very different. Images of clouds (like lava lamps) or leaves on trees would prove difficult to correlate well enough to understand noise. Of interest Nikon went through a recall on their D600 camera where splatters of lubrication were showing up on the sensor over time. Sensors (shutters) were replaced and cleaned. 
Also, the more expensive Nikons permit saving a reference image to help remove noise and also calibrate white balance. The reference dust image changes with time enough that it is not a factory-installed reference. For images I am not convinced. It is just too easy for a bad guy to scuff an image in difficult-to-detect ways. To my knowledge there is nothing akin to the yellow-dot code of copiers and printers in camera sensors (yet). Printers good enough to worry the counterfeit folk were, early on, expensive enough to be engineered, but cell phone image sensors and image software have no budget room to make hiding easy. -- T o m M i t c h e l l -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Mon Oct 27 19:55:57 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 27 Oct 2014 19:55:57 -0400 Subject: [Cryptography] In search of random numbers In-Reply-To: <1414443259.8050.3.camel@sonic.net> References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <1414264603.10918.1.camel@sonic.net> <544CC2CE.8030102@iang.org> <1414443259.8050.3.camel@sonic.net> Message-ID: On Mon, Oct 27, 2014 at 4:54 PM, Bear wrote: > On Sun, 2014-10-26 at 09:45 +0000, ianG wrote: > > > You only get product recall when it is likely to kill the user. Bad as > > the randomness issue appears to us, I'm not sure we're there yet. > > The randomness issue doesn't look bad to me. You just boot a > non-networked OS and don't load any networking software or > generate any keys until something actually needs a network > connection or a key. > > Bear > I'm not sure we can't just have all our IoT devices have their own TRNG. It's hard to trust an unauditable TRNG in someone else's IC, but if it's my custom ASIC I design, or even just an FPGA, it's easy to trust the TRNG design you drop in, so long as it isn't rocket science to get right. 
Ring oscillator noise sounds like a decent candidate, though for even smaller size and higher speed with predictable entropy output, I prefer an infinite noise multiplier. For board level designs, there should be a $0.25 highly auditable TRNG chip you can buy that just spits out 0's and 1's when clocked. If they go into many designs, we can tear down enough of them chosen at random to show that at at most only a small percentage of them are back-doored. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From waywardgeek at gmail.com Mon Oct 27 21:46:19 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 27 Oct 2014 21:46:19 -0400 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: <544EAF7B.1050101@ladisch.de> References: <544EAF7B.1050101@ladisch.de> Message-ID: On Mon, Oct 27, 2014 at 4:47 PM, Clemens Ladisch wrote: > Bill Cox wrote: > > - A sound card used by Turbid cannot be used for input, meaning most > users > > need a second sound card. > > Quite a few sound devices have several independent inputs. > > > - Once a user is buying extra hardware for use as a TRNG, there is no > > reason to use a sound card, when a TRNG designed for the purpose can do a > > better job. > > Sound devices are widely available, and cheap, and do not require > an additional driver. > Some TRNGs also require no additional driver (like the one I'm working on). If you use a well accepted USB interface chip, you don't need one. Turbid ideally uses a 24-bit sound card, though a 16-bit might work. I see a Creative Labs Sound Blaster 24-bit audio card at New Egg. Is this the sort of card recommended? > > Sound outputs will be correlated when sampled at high speed. > > If the output contains _only_ white noise, there will be the same amount > of noise at all frequencies, so the sample rate would not matter. > This is inaccurate. White noise with energy in every frequency would have infinite energy and destroy the universe. 
Thank goodness for quantum mechanics! All white noise rolls off somewhere. In this application, I believe the frequency of interest is the cutoff frequency of the anti-aliasing filter, which is somewhat lower than 1/2 the sample rate (Nyquist frequency). That gives you a decent thermal noise estimate. If you sample at the maximum supported sample frequency, you will do a better job capturing the entropy that is there, but sampling at a rate beyond the anti-aliasing filter cut-off frequency ensures successive samples are highly correlated, violating the Turbid paper's assumptions. However, the results are still valid, if they correct for short-term correlations, which can be done easily. > To help correct for this short-term correlation, Turbid could keep the > > history of the next sample given several previous samples. This would > > give a good estimation of surprisal, allowing more accurate entropy > > estimation. > > I believe that estimating the entropy (i.e., amount of white noise) of > a sound signal cannot be done reliably without computing the FFT. > Let's use Turbid's definition of entropy (which happens to agree with mine - what are the odds?). Entropy is the expected "surprise" (a good word, by the way...) in a snippet of samples. Surprise is log2(1/probability of the snippet occurring). I wrote this exact equation a few days ago in my entropy estimator code, before reading the Turbid paper. The correlation I am concerned about here is the correlation that normally exists between successive samples from the sound card, which can be quite high if you are sampling faster than the anti-aliasing cut-off frequency. However, drop a decade in frequency and the correlation is likely negligible. This can be done easily by sampling 10X less frequently, at the cost of losing most of the entropy available in the sound stream. Alternatively, we can estimate the surprise in each sample based on prior history, and adaptively tune our entropy estimator. 
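Such a history-based surprisal estimator can be sketched roughly as follows. This is a toy illustration only, with an 8-bit history (the design discussed here uses 14 bits) and Laplace smoothing; it is not the actual infnoise code. The idea: keep per-context counts of how often a 1 follows each pattern of recent bits, and charge each new bit its surprisal log2(1/p).

```python
import math
from collections import defaultdict

HISTORY_BITS = 8  # illustrative; the design described uses 14

def estimate_entropy(bits):
    """Estimate total entropy (in bits) of a 0/1 sequence by summing
    the surprisal log2(1/p) of each bit, where p is the adaptively
    estimated probability of that bit given the prior HISTORY_BITS bits."""
    counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed [zeros, ones]
    mask = (1 << HISTORY_BITS) - 1
    context = 0
    total = 0.0
    for i, b in enumerate(bits):
        if i >= HISTORY_BITS:  # score only once the history is full
            zeros, ones = counts[context]
            p = (ones if b else zeros) / (zeros + ones)
            total += math.log2(1.0 / p)
            counts[context][b] += 1  # adapt the model as we go
        context = ((context << 1) | b) & mask
    return total

# A strongly patterned stream scores far below one bit per sample,
# while a fair random stream scores close to one bit per sample.
patterned = [1, 1, 1, 0] * 5000
```

A health monitor in this style would compare the estimate against the predicted entropy and shut down on a significant shortfall.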
A linear-predictive-coding-based surprise estimator would work OK, though it would be optimistic about the level of surprise in each sample. In my TRNG, I have a 1-bit output, which lets me keep a table of probabilities of the next bit being a 1 or 0 based on the prior 14 outputs. This enables me to eliminate all but a small fraction of the nearby sample correlations from my entropy estimate. The entropy estimator matches the expected entropy of log2(K) to within 0.5% on the first three boards I built (which I built yesterday - see github/waywardgeek/infnoise). The health monitor kills the process if estimated entropy drops below predicted entropy by more than 5%. Turbid, from what I read in the paper, does not adaptively estimate entropy, which makes its health monitor fairly weak, IMO. Instead it requires hand testing of the sound system to come up with a good entropy estimate, which is then set for all time. Is this right? Is there any adaptation at all? Even if someone put my TRNG in a cryogenic freezer, its entropy estimator would adapt, and it would kill the process when it dropped more than 5% below expected entropy. However, because I effectively compress so many bits of thermal noise together while it's still an analog voltage (about 15 bits in the first design), the temperature required to get a 2X drop in entropy would be on the order of 293K/2^28. This would drop the noise voltage by 2^14, leaving only about 1 bit of noise in for every bit of entropy out. > > The paper states: > > > > "The least-fundamental threats are probably the most important in > > practice. As an example in this category, consider the possibility that > the > > generator is running on a multiuser machine, and some user might > > (inadvertently or otherwise) change the mixer gain. To prevent this, we > > went to a lot of trouble to patch the ALSA system so that we can open the > > mixer device in "exclusive" mode, so that nobody else can write to it." 
> > > > Instructions are provided for patching ALSA. > > AFAICS the patch is missing from the latest turbid version, and the > makefile references an ALSA version that is over ten years old. > Cool. > However, no patch is needed; locking mixer controls has always been > possible. Apparently, ALSA's documentation is, er, capable of > improvement. > > > Regards, > Clemens > > Yeah... I have trouble with it myself when I get that into the sound system. I'm basically still a noob at ALSA, but whatever I've tried seemed to require shot-gunning many different possible API call sequences to see what combination does what I need. This is another reason I am reluctant to use the ALSA sound system as an entropy source that I count on. How many of us can dabble in ALSA, and understand thermal noise well enough to estimate accurately the noise that must be there? Now, adding it to the pool is a good thing, just like adding entropy from RDRAND, but I think both should be set to add nothing to the pool's entropy level. That way, I don't have to worry about all the mistakes I make all the time (like the ones I'm sure I made above). Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgut001 at cs.auckland.ac.nz Tue Oct 28 14:05:12 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Wed, 29 Oct 2014 07:05:12 +1300 Subject: [Cryptography] EMV as a fraud enabler Message-ID: Brian Krebs has an interesting writeup at: http://krebsonsecurity.com/2014/10/replay-attacks-spoof-chip-card-charges/ on EMV as a fraud enabler. The trick is that instead of spoofing non-EMV on an EMV card, you spoof EMV on a non-EMV card because it's assumed to be secure so there's less checking done by the back-end processing systems. Peter. 
From codesinchaos at gmail.com Tue Oct 28 16:53:40 2014 From: codesinchaos at gmail.com (CodesInChaos) Date: Tue, 28 Oct 2014 21:53:40 +0100 Subject: [Cryptography] EMV as a fraud enabler In-Reply-To: References: Message-ID: If I read that article correctly, the main issue is that certain banks didn't bother to verify signatures and as a secondary issue don't bother checking nonce uniqueness either. http://xkcd.com/1181/ From frantz at pwpconsult.com Tue Oct 28 16:58:46 2014 From: frantz at pwpconsult.com (Bill Frantz) Date: Tue, 28 Oct 2014 13:58:46 -0700 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: Message-ID: On 10/27/14 at 4:37 PM, leichter at lrw.com (Jerry Leichter) wrote: >OK, Mozilla and Rust are a real effort with real publications. Let's not underestimate the value of "secure" languages in improving the security of systems. Indeed they are not a panacea, but they can effectively eliminate common problems such as buffer overrun and use after free. One of the strengths of Firefox is the large body of code that is in Javascript, a secure language. Indeed, as a friend says, "You can write Fortran in any language.", but a secure language makes the secure way the easy way. The immediate question is, why do we still write security sensitive code in C? (Or other insecure languages like C++.) IMHO, some of the reason is in trying to interface with an infrastructure built on C programs and designed for C calling conventions. If any of the secure systems programming languages, like Rust, can get enough of a toe hold in the sea of C conventions, perhaps we can, at least, make the successful attacks more interesting than the same-old same-old of buffer overrun and use after free. Cheers - Bill -------------------------------------------------------------- Bill Frantz | There are now so many exceptions to the 408-356-8506 | Fourth Amendment that it operates only by www.pwpconsult.com | accident. 
- William Hugh Murray

From clemens at ladisch.de Tue Oct 28 17:28:40 2014
From: clemens at ladisch.de (Clemens Ladisch)
Date: Tue, 28 Oct 2014 22:28:40 +0100
Subject: [Cryptography] A TRNG review per day: Turbid
In-Reply-To: References: <544EAF7B.1050101@ladisch.de>
Message-ID: <54500A88.4040607@ladisch.de>

Bill Cox wrote:
> Turbid ideally uses a 24-bit sound card, though a 16-bit might work. I see
> a Creative Labs Sound Blaster 24-bit audio card at New Egg. Is this the
> sort of card recommended?

Creative builds many kinds of cards, good ones and somewhat cheap ones. Turbid needs a sensitive input because its proof of the entropy lower bound requires that the thermal noise of the resistor at the input connector (assuming that such a resistor exists) can be measured. Even a bad sound card can have a high enough sensitivity if it amplifies its input by a large enough factor (a bad sound card will add more noise, but that does not matter for the proof). 24-bit cards do not need much amplification and should be good enough.

> On Mon, Oct 27, 2014 at 4:47 PM, Clemens Ladisch wrote:
>> Bill Cox wrote:
>>> Sound outputs will be correlated when sampled at high speed.
>>
>> If the output contains _only_ white noise, there will be the same amount
>> of noise at all frequencies, so the sample rate would not matter.
>
> This is inaccurate. White noise with energy in every frequency would have
> infinite energy and destroy the universe.

:-) Thermal noise will go high enough for any sampling rate we can use.

> In this application, I believe the frequency of interest is the cutoff
> frequency of the anti-aliasing filter, which is somewhat lower than
> 1/2 the sample rate (Nyquist frequency).

Sound cards do not have a single anti-aliasing filter. A typical ADC chip has a delta-sigma modulator running at about 6 MHz, which requires an external analog filter that reduces noise at that frequency.
The modulator is followed by a digital decimation filter that goes very near the Nyquist frequency of the currently used sample rate. (There also is a high-pass filter to remove any DC offset from the input.) > If you sample at the maximum supported sample frequency, you will do a > better job capturing the entropy that is there, but sampling at a rate > beyond the anti-aliasing filter cut-off frequency ... This cut-off frequency is not independent of the sample rate. > Turbid, from what I read in the paper, does not adaptively estimate > entropy, which makes it's health monitor fairly weak, IMO. Does it monitor anything _at all_? As far as I can see, it blindly stuffs samples into the hash function and trusts the calibration (and that nobody attenuated or muted the input, accidentally or not). Regards, Clemens From hbaker1 at pipeline.com Tue Oct 28 18:17:08 2014 From: hbaker1 at pipeline.com (Henry Baker) Date: Tue, 28 Oct 2014 15:17:08 -0700 Subject: [Cryptography] Paranoia for a Monday Morning In-Reply-To: References: Message-ID: At 01:58 PM 10/28/2014, Bill Frantz wrote: >One of the strengths of Firefox is the large body of code that is in Javascript, a secure language. Huh?!? Just because Javascript has garbage-collection and array-bounds checking doesn't make it a 'secure' language. As a long-time user of Lisp, I agree that GC & bounds-checking can improve security, but these features are only a start. Compile-time type checking can help some more, but _removing_ features can help a lot more -- e.g., removing 'eval'. [As an aside, the earliest Lisp Machines on the ethernet at MIT had an 'eval' server; no Bash shellshock needed! On the other hand, some of the safest email & web servers were written in Lisp (w/o eval!), and some servers ran continuously for more than a year until electrical power failure forced a reboot.] 
Perhaps the place where higher level languages help security the most is that they allow the programmer to not worry about so many low-level details, so that (s)he can allocate more effort towards making the whole program correct. Of course, this depends critically upon having a well-tested implementation that gets the library functions correct. For example, a mathematician utilizing the Macsyma symbolic algebra system found a bug in the Lisp Machine's bignum integer division routine several years after the Lisp Machines were placed into service. [That sort of bug couldn't happen anymore in the 21st century, could it?] The next place where errors creep in is during *optimization* -- usually for speed (gcc, anyone?). Assumptions are made about the range of inputs, and generality is squeezed out of the code in order to gain performance. Then some user provides an unexpected input, and we're off to the races. To optimize without breaking something requires a language that is capable of precision targeting of semantics-preserving transformations. For example, a particular loop can be unrolled by simply tagging it; a function call can be 'inlined' by simply tagging it; an expensive 'functional' function call can be memoized by simply tagging it, etc. These tags don't affect the semantics -- i.e., the value computed -- but only the nature & amount of resources consumed in the computation. Yes, I know that compiler people want all of this stuff to be automatic, but making optimizations like this automatic requires profiling data, which opens up a huge attack surface & side-channels. 
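The "tag it, don't rewrite it" idea above can be illustrated with a minimal Python sketch (not taken from any of the systems discussed in this thread): a decorator plays the role of the tag. Removing the tag changes only the resources consumed, never the value computed, which is exactly the semantics-preserving property Henry describes.

```python
from functools import lru_cache

# The decorator below is the "tag": deleting it leaves fib(n) unchanged
# in value, and only changes the cost (exponential time vs. linear time).
@lru_cache(maxsize=None)
def fib(n):
    """A pure (functional) function, so it is safe to memoize."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # returns instantly with the tag; impractical without it
```

The same pattern applies to the other tags mentioned: inlining and loop unrolling can likewise be expressed as annotations that leave the computed value alone.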
From billstewart at pobox.com Wed Oct 29 03:35:16 2014 From: billstewart at pobox.com (Bill Stewart) Date: Wed, 29 Oct 2014 00:35:16 -0700 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: References: Message-ID: <20141029073523.CB29A107B0@pb-sasl1.pobox.com> At 10:09 AM 10/27/2014, Bill Cox wrote: >- A sound card used by Turbid cannot be used for input, meaning most >users need a second sound card. There are two times that you need good randomness - system initialization - later. You can use pseudo-random generators later, as long as you've seeded them with good initial entropy, and you'll usually have system entropy coming in from a range of devices and events. During your system initialization, it's not a problem if you want to borrow the sound card to crunch up some random entropy, and most users aren't going to need to use the sound card for audio input until everything's up to speed and the user interface has fired up. It's probably fine for that. From waywardgeek at gmail.com Wed Oct 29 07:13:48 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 29 Oct 2014 07:13:48 -0400 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: <54500A88.4040607@ladisch.de> References: <544EAF7B.1050101@ladisch.de> <54500A88.4040607@ladisch.de> Message-ID: On Tue, Oct 28, 2014 at 5:28 PM, Clemens Ladisch wrote: > > On Mon, Oct 27, 2014 at 4:47 PM, Clemens Ladisch > wrote: > >> Bill Cox wrote: > >>> Sound outputs will be correlated when sampled at high speed. > >> > >> If the output contains _only_ white noise, there will be the same amount > >> of noise at all frequencies, so the sample rate would not matter. > > > > This is inaccurate. White noise with energy in every frequency would > have > > infinite energy and destroy the universe. > > :-) > Thermal noise will go high enough for any sampling rate we can use. > Agreed. 
> > In this application, I believe the frequency of interest is the cutoff
> > frequency of the anti-aliasing filter, which is somewhat lower than
> > 1/2 the sample rate (Nyquist frequency).
>
> Sound cards do not have a single anti-aliasing filter.
>
> A typical ADC chip has a delta-sigma modulator running at about 6 MHz,
> which requires an external analog filter that reduces noise at that
> frequency. The modulator is followed by a digital decimation filter
> that goes very near the Nyquist frequency of the currently used sample
> rate. (There also is a high-pass filter to remove any DC offset from
> the input.)

Duh... I was thinking of SAR ADCs, which would never go to 24 bits. Of course they're sigma-deltas. The external filter is what I normally hear called the anti-aliasing filter, even for SD-ADCs, but its cut-off can be much higher than the decimation filter's cut-off, so it is irrelevant for calculation of thermal noise. In that case it's the decimation filter cut-off that counts. That's typically 2X the highest audio frequency of interest, isn't it?

There will still be significant correlation between samples. There is thermal noise in a band from 9X to 10X below the sample rate which will turn into a significant short-term correlation between samples 10 away from each other. It's not a big deal, since the math works anyway, but it's there.

> > If you sample at the maximum supported sample frequency, you will do a
> > better job capturing the entropy that is there, but sampling at a rate
> > beyond the anti-aliasing filter cut-off frequency ...
>
> This cut-off frequency is not independent of the sample rate.
>
> > Turbid, from what I read in the paper, does not adaptively estimate
> > entropy, which makes its health monitor fairly weak, IMO.
>
> Does it monitor anything _at all_? As far as I can see, it blindly
> stuffs samples into the hash function and trusts the calibration (and
> that nobody attenuated or muted the input, accidentally or not).
It should at least do some basic tests...

Bill
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iang at iang.org Wed Oct 29 07:20:17 2014
From: iang at iang.org (ianG)
Date: Wed, 29 Oct 2014 11:20:17 +0000
Subject: [Cryptography] In search of random numbers
In-Reply-To: References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <1414264603.10918.1.camel@sonic.net> <544CC2CE.8030102@iang.org> <1414443259.8050.3.camel@sonic.net>
Message-ID: <5450CD71.6050000@iang.org>

On 27/10/2014 23:55 pm, Bill Cox wrote:
> On Mon, Oct 27, 2014 at 4:54 PM, Bear wrote:
> > On Sun, 2014-10-26 at 09:45 +0000, ianG wrote:
> > > You only get product recall when it is likely to kill the user. Bad as
> > > the randomness issue appears to us, I'm not sure we're there yet.
> >
> > The randomness issue doesn't look bad to me. You just boot a
> > non-networked OS and don't load any networking software or
> > generate any keys until something actually needs a network
> > connection or a key.
> >
> > Bear
>
> I'm not sure we can't just have all our IoT devices have their own
> TRNG. It's hard to trust an unauditable TRNG in someone else's IC, but
> if it's my custom ASIC I design or even just an FPGA, it's easy to trust
> the TRNG design you drop in, so long as it isn't rocket science to get
> right.

Precisely. It needs to be so cheap that you'd drop it in if it solved any problem you had.

The way to approach this problem is strategically:

1. create enough free designs such that there aren't any barriers to deployment. No excuses!
2. create some demand 'pull' from the market such that users, customers, journos are asking questions of the builders.
3. create some supply 'push' where those who claim to use an RNG are rewarded by attention and recommendations.

In order. No point in saying anything until 1. is in place. Muzzle the journos for now.

Go Bill, go Paul!
> Ring oscillator noise sounds like a decent candidate, though for
> even smaller size and higher speed with predictable entropy output, I
> prefer an infinite noise multiplier. For board level designs, there
> should be a $0.25 highly auditable TRNG chip you can buy that just spits
> out 0's and 1's when clocked.

I'm shocked that a bit of silicon costs that much!

> If they go into many designs, we can tear
> down enough of them chosen at random to show that at most a
> small percentage of them are back-doored.

Of course they'll be backdoored! But this is the wrong way to look at things. We need to seed these things throughout the marketplace. Once the standard is established that "you must use an RNG, any RNG", then we can ratchet up the pressure. The grad students will do that for us, with a little nudging. Whole classes of IoTs will be broken. Armageddon, apocalypse, gosh oh my. Then, things will get better as equipment suppliers get sick of their name being dragged through the mud.

The reason this works is that it is impossible to just fix a broken industry. You have to introduce a framework and understanding at all levels first. Seed it from the bottom. Then, when it's pervaded, start the ball rolling for continuous improvement.

Love your arms race.

iang

ps, the precise wrong way to do it is to involve NIST, IETF, national standards bodies.

From clemens at ladisch.de Wed Oct 29 08:55:02 2014
From: clemens at ladisch.de (Clemens Ladisch)
Date: Wed, 29 Oct 2014 13:55:02 +0100
Subject: [Cryptography] A TRNG review per day: Turbid
In-Reply-To: References: <544EAF7B.1050101@ladisch.de> <54500A88.4040607@ladisch.de>
Message-ID: <5450E3A6.4090406@ladisch.de>

Bill Cox wrote:
> On Tue, Oct 28, 2014 at 5:28 PM, Clemens Ladisch wrote:
> > A typical ADC chip has a delta-sigma modulator running at about 6 MHz,
> > which requires an external analog filter that reduces noise at that
> > frequency.
> > The modulator is followed by a digital decimation filter
> > that goes very near the Nyquist frequency of the currently used sample
> > rate. (There also is a high-pass filter to remove any DC offset from
> > the input.)
>
> [...] In that case it's the decimation filter cut-off that counts.
> That's typically 2X the highest audio frequency of interest, isn't it?

Yes, if with "2X" you mean what I'd call "half". For example, the CS5381 datasheet says:

Passband (-0.1 dB): from 0 Fs to 0.47 Fs
Stopband (-95 dB): from 0.58 Fs

This filter and the HPF will attenuate extremely high and low frequencies; what remains is usable white noise over almost the entire frequency range.

> There will still be significant correlation between samples. There is
> thermal noise in a band from 9X to 10X below the sample rate

Why are you singling out this band?

> which will turn into a significant short-term correlation between
> samples 10 away from each other.

The noise in all the other bands will cancel out these correlations.

Regards,
Clemens

From waywardgeek at gmail.com Wed Oct 29 09:22:26 2014
From: waywardgeek at gmail.com (Bill Cox)
Date: Wed, 29 Oct 2014 09:22:26 -0400
Subject: [Cryptography] A TRNG review per day: Turbid
In-Reply-To: <5450E3A6.4090406@ladisch.de>
References: <544EAF7B.1050101@ladisch.de> <54500A88.4040607@ladisch.de> <5450E3A6.4090406@ladisch.de>
Message-ID:

On Wed, Oct 29, 2014 at 8:55 AM, Clemens Ladisch wrote:
> Bill Cox wrote:
> > On Tue, Oct 28, 2014 at 5:28 PM, Clemens Ladisch wrote:
> > > A typical ADC chip has a delta-sigma modulator running at about 6 MHz,
> > > which requires an external analog filter that reduces noise at that
> > > frequency. The modulator is followed by a digital decimation filter
> > > that goes very near the Nyquist frequency of the currently used sample
> > > rate. (There also is a high-pass filter to remove any DC offset from
> > > the input.)
> >
> > [...] In that case it's the decimation filter cut-off that counts.
> > That's typically 2X the highest audio frequency of interest, isn't it?
>
> Yes, if with "2X" you mean what I'd call "half". For example, the
> CS5381 datasheet says:
> Passband (-0.1 dB): from 0 Fs to 0.47 Fs
> Stopband (-95 dB): from 0.58 Fs
>
> This filter and the HPF will attenuate extremely high and low frequencies;
> what remains is usable white noise over almost the entire frequency
> range.
>
> > There will still be significant correlation between samples. There is
> > thermal noise in a band from 9X to 10X below the sample rate
>
> Why are you singling out this band?
>
> > which will turn into a significant short-term correlation between
> > samples 10 away from each other.
>
> The noise in all the other bands will cancel out these correlations.

Not in my experience, but that is somewhat limited. A simple test would be seeing if the zero crossings are correlated between adjacent samples. My guess is they are highly correlated, as in I have a 70% chance of guessing if your next sample is greater or less than zero if you tell me the full value of the previous sample. If you send me some typical Turbid maximum sample rate sound samples, I'd be happy to do that test.

However, this does not invalidate Turbid's entropy estimate in any way. Feeding every sample, even if there is some correlation, into the hash function is the right thing to do to collect all that entropy.

Bill
-------------- next part --------------
An HTML attachment was scrubbed...
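The zero-crossing test proposed above is easy to sketch in Python, with synthetic data standing in for real sound card samples (the filter coefficient and sample counts here are illustrative, not from Turbid): count how often adjacent samples share a sign. For uncorrelated noise the fraction is 0.5; band-limited noise pushes it well above that, which is exactly the short-term correlation being predicted.

```python
import random

def sign_agreement(samples):
    """Fraction of adjacent sample pairs with the same sign (0.5 = uncorrelated)."""
    same = sum(1 for a, b in zip(samples, samples[1:]) if (a >= 0) == (b >= 0))
    return same / (len(samples) - 1)

random.seed(1)
white = [random.gauss(0, 1) for _ in range(100000)]

# Band-limited "sound card" noise: the same process run through a one-pole
# low-pass filter, so adjacent samples are strongly correlated.
lowpass, y = [], 0.0
for _ in range(100000):
    y = 0.9 * y + random.gauss(0, 1)
    lowpass.append(y)

print(sign_agreement(white))    # near 0.5: signs of adjacent samples independent
print(sign_agreement(lowpass))  # well above 0.5: adjacent signs predictable
```

As the thread notes, a high agreement fraction would not invalidate Turbid's entropy lower bound; it would only show that raw samples are not independent bits.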
URL:

From waywardgeek at gmail.com Wed Oct 29 09:34:08 2014
From: waywardgeek at gmail.com (Bill Cox)
Date: Wed, 29 Oct 2014 09:34:08 -0400
Subject: [Cryptography] In search of random numbers
In-Reply-To: <5450CD71.6050000@iang.org>
References: <20141023133020.28ab656f@pc> <5449D98B.3010004@tik.ee.ethz.ch> <1414173771.31285.3.camel@sonic.net> <20141025004020.3a024ec9@pc> <1414264603.10918.1.camel@sonic.net> <544CC2CE.8030102@iang.org> <1414443259.8050.3.camel@sonic.net> <5450CD71.6050000@iang.org>
Message-ID:

On Wed, Oct 29, 2014 at 7:20 AM, ianG wrote:
> On 27/10/2014 23:55 pm, Bill Cox wrote:
>
> Precisely. It needs to be so cheap that you'd drop it in if it solved
> any problem you had.
>
> The way to approach this problem is strategically:
>
> 1. create enough free designs such that there aren't any barriers to
> deployment. No excuses!
> 2. create some demand 'pull' from the market such that users,
> customers, journos are asking questions of the builders.
> 3. create some supply 'push' where those who claim to use an RNG are
> rewarded by attention and recommendations.
>
> In order. No point in saying anything until 1. is in place. Muzzle the
> journos for now.
>
> Go Bill, go Paul!

I completely agree. Make it so cheap and easy that every IoT device just does it.

> > Ring oscillator noise sounds like a decent candidate, though for
> > even smaller size and higher speed with predictable entropy output, I
> > prefer an infinite noise multiplier. For board level designs, there
> > should be a $0.25 highly auditable TRNG chip you can buy that just spits
> > out 0's and 1's when clocked.
>
> I'm shocked that a bit of silicon costs that much!

By the time IoT devices are in every home, we'd see these selling for $0.05 in decent volume.
However, any company building them early on is going to want a decent ROI, and they'll sink maybe $500K into such a project ($200K in NRE, an engineer for a year at $200K (design, test, packaging, reliability - several engineers, but each for only part of a year), marketing, stocking distributors, G&A...). It is hard to get lower than that $0.05 number because the fab wants $0.05 for the silicon, the packaging house wants $0.05 for the package, and the test house wants $0.05 for test. Getting them all to be reasonable and make a small profit at the same time is hard!

Bill
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johnl at iecc.com Wed Oct 29 10:45:24 2014
From: johnl at iecc.com (John Levine)
Date: 29 Oct 2014 14:45:24 -0000
Subject: [Cryptography] EMV as a fraud enabler
In-Reply-To: Message-ID: <20141029144524.56295.qmail@ary.lan>

>If I read that article correctly, the main issue is that certain banks
>didn't bother to verify signatures and as a secondary issue don't
>bother checking nonce uniqueness either. http://xkcd.com/1181/

Assuming you mean crypto signatures and not ink signatures, right. In this case it was just bizarre, since the network was approving chip transactions for an issuer that wasn't yet certified to issue chip cards.

A long-standing problem with chip cards is that the banks don't use the data they have to validate transactions. If they actually use the data, and track the sequence numbers, they're pretty secure. But through a combination of laziness and the duct-tape-and-baling-wire architecture of banking networks, they don't. When I was attending the weekly security seminars at Cambridge a few years ago, this was a frequent topic of discussions, ever more ways that banks got chip+pin wrong.
R's, John

From agr at me.com Wed Oct 29 12:36:02 2014
From: agr at me.com (Arnold Reinhold)
Date: Wed, 29 Oct 2014 12:36:02 -0400
Subject: [Cryptography] Randomness for Cryptography wiki
Message-ID: <11782464-CAD1-4819-B085-568DAABDB3D1@me.com>

We seem to be in the midst of another rearguing of the issues surrounding randomness generation for cryptography applications. The last time this happened, earlier this year, I suggested creating a wiki so these debates can at least be recorded once and for all, if not resolved. I started building one, but the discussion died down and I got busy with other things. I'd like to resurrect my proposal.

The "Randomness for Cryptography" wiki I started is at http://en.diceware.shoutwiki.com/ I attempted to outline the subject, not to provide a definitive resource. I'd need a lot of help to accomplish the latter, of course, and I'm not up for a solo effort. I spent some time researching wiki farms and shoutwiki seemed the best match. The Randomness for Cryptography wiki is ad-supported at the moment, but ads can be removed for ~$50 a year, which I'd spring for if enough interest develops.

Right now this wiki is open for read, but private for write. I'm happy to give editing privileges to anyone here who wants them. You will need to open a (free) shoutwiki.com account first, then email me your user ID and I'll do the necessary hocus-pocus. Note that all contributions must be released under a Creative Commons CC-BY 3.0 license.

Please take a look.

Arnold Reinhold

From jsd at av8n.com Wed Oct 29 15:38:55 2014
From: jsd at av8n.com (John Denker)
Date: Wed, 29 Oct 2014 12:38:55 -0700
Subject: [Cryptography] SSLv3 in the wild
Message-ID: <5451424F.4030409@av8n.com>

As John Oliver might say: SSLv3 -- How is that still a thing?

SSLv3 was deprecated and superseded by TLS 1.0 in 1999: http://tools.ietf.org/html/rfc2246

I was disappointed to find large SSLv3-only servers existing in the wild, 15 years post-TLS, and two weeks post-POODLE.
I was expecting a few small clients, but I'm not sure I was expecting large servers. Here is an example that you may find useful, as a test-target or perhaps a talking point. Canadian tax dollars at work: https://flightplanning.navcanada.ca/ Note that there is no "http" access to the navcanada site. This is relevant because it removes a possible workaround, and violates the dictum that says if you can't encrypt properly you shouldn't encrypt at all. The overall situation is a pain in the neck because it means I can't just eradicate all traces of SSLv3 and forget about it. Firefox says: > Secure Connection Failed > > An error occurred during a connection to flightplanning.navcanada.ca. > Cannot communicate securely with peer: no common encryption > algorithm(s). (Error code: ssl_error_no_cypher_overlap) Nmap seems to have an overoptimistic notion of "strong": nmap --script ssl-enum-ciphers -p 443 flightplanning.navcanada.ca > Starting Nmap 6.40 ( http://nmap.org ) at 2014-10-29 12:07 MST > Nmap scan report for flightplanning.navcanada.ca (207.236.24.143) > Host is up (0.076s latency). 
> rDNS record for 207.236.24.143: www.metcambeta.navcanada.ca > PORT STATE SERVICE > 443/tcp open https > | ssl-enum-ciphers: > | SSLv3: > | ciphers: > | TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA - strong > | TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong > | TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong > | TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong > | TLS_RSA_WITH_AES_128_CBC_SHA - strong > | TLS_RSA_WITH_AES_256_CBC_SHA - strong > | compressors: > | NULL > |_ least strength: strong > > Nmap done: 1 IP address (1 host up) scanned in 2.82 seconds From crypto.jmk at gmail.com Wed Oct 29 16:23:30 2014 From: crypto.jmk at gmail.com (John Kelsey) Date: Wed, 29 Oct 2014 16:23:30 -0400 Subject: [Cryptography] Best internet crypto clock In-Reply-To: <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> References: <542F4158.8080907@kc.rr.com> <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com> Message-ID: You can solve one end of this problem with beacons--nobody could have known this information before this time. You can do the same thing with public information that's unpredictable, like the complete contents of the New York Times front page, or today's sports scores or stock prices. You can use a digital timestamping service to solve the other end--this information had to be available by this time. I don't know about the kidnapping scenario, but consider some program that takes an RNG seed, or some experiment which requires some random inputs. I use the beacon values for today at noon to run the experiment, and as soon as I have the results, I get them digitally timestamped--say, at 1PM today. This binds the experiment in time--it can't have happened before noon today or after 1 PM today. 
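Kelsey's noon-to-1-PM sandwich can be sketched in a few lines of Python, with hash computations standing in for both the beacon and the timestamping service (all names and placeholder values here are invented for illustration):

```python
import hashlib

# S(T): the beacon value published at noon -- unknowable before T.
# (Placeholder bytes; a real deployment would fetch a signed beacon record.)
beacon_noon = b"example beacon output, noon"

def run_experiment(seed: bytes) -> bytes:
    # Stand-in experiment: it derives all of its "random inputs" from the
    # seed, so it provably could not have started before the beacon existed.
    return hashlib.sha256(b"experiment:" + seed).digest()

result = run_experiment(beacon_noon)

# Submitting this digest to a timestamping service at 1 PM proves the result
# existed by then; the two bounds together bracket the experiment in time.
stamped = hashlib.sha256(result).hexdigest()
print(stamped)
```

The essential point is that the beacon value is an *input* (lower bound on time) while the timestamp covers the *output* (upper bound), matching the two "ends of the problem" discussed in this thread.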
--John From waywardgeek at gmail.com Wed Oct 29 17:24:31 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 29 Oct 2014 17:24:31 -0400 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: <20141029073523.CB29A107B0@pb-sasl1.pobox.com> References: <20141029073523.CB29A107B0@pb-sasl1.pobox.com> Message-ID: On Wed, Oct 29, 2014 at 3:35 AM, Bill Stewart wrote: > At 10:09 AM 10/27/2014, Bill Cox wrote: > >> - A sound card used by Turbid cannot be used for input, meaning most >> users need a second sound card. >> > > There are two times that you need good randomness > - system initialization > - later. > You can use pseudo-random generators later, as long as you've seeded them > with good initial entropy, and you'll usually have system entropy coming in > from a range of devices and events. > > During your system initialization, it's not a problem if you want to > borrow the sound card to crunch up some random entropy, and most users > aren't going to need to use the sound card for audio input until > everything's up to speed and the user interface has fired up. It's > probably fine for that. > > > _______________________________________________ > The cryptography mailing list > cryptography at metzdowd.com > http://www.metzdowd.com/mailman/listinfo/cryptography > Good points. I suppose Linux distros should look at borrowing the mic input for a fraction of a second on boot to refresh the entropy pool. Bill -------------- next part -------------- An HTML attachment was scrubbed... 
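The "mix it in but credit nothing" policy mentioned earlier in the thread (for ALSA samples and RDRAND alike) can be sketched in Python; the Pool class and its field names are invented for illustration and do not correspond to any real kernel interface:

```python
import hashlib

class Pool:
    """Toy entropy pool: hash-based mixing with an explicit credit counter."""

    def __init__(self):
        self.state = b"\x00" * 32
        self.credited_bits = 0

    def mix(self, data: bytes, credit_bits: int = 0):
        # Mixing can never hurt the pool; crediting optimistically can.
        self.state = hashlib.sha256(self.state + data).digest()
        self.credited_bits += credit_bits

pool = Pool()
# Opportunistic boot-time source (e.g. borrowed mic samples): mixed in,
# but credited zero bits, so a dead or hostile mic cannot inflate the
# pool's entropy estimate.
pool.mix(b"raw mic samples captured at boot")
print(pool.credited_bits)  # still 0
```

A trusted, health-monitored source would call `mix(data, credit_bits=n)` instead, raising the counter only by its conservative entropy estimate.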
URL:

From leichter at lrw.com Wed Oct 29 17:26:45 2014
From: leichter at lrw.com (Jerry Leichter)
Date: Wed, 29 Oct 2014 17:26:45 -0400
Subject: [Cryptography] Best internet crypto clock
In-Reply-To: References: <542F4158.8080907@kc.rr.com> <4C0050A4-8CD7-4E29-9013-53C22D72BCFF@lrw.com>
Message-ID: <790C1E64-52F8-4F2D-9471-895DAA45D0E4@lrw.com>

On Oct 29, 2014, at 4:23 PM, John Kelsey wrote:
> You can solve one end of this problem with beacons--nobody could have known this information before this time.

I guessed this was a response to a posting of mine - and it is, to one back on October 4th! I had to re-read it to guess what the "ends of the problem" might be.

> You can do the same thing with public information that's unpredictable, like the complete contents of the New York Times front page, or today's sports scores or stock prices.

As I pointed out, in and of itself, this doesn't do anything very interesting. Using my notation - where S(T) is a new, unpredictable value made available to all no earlier than T - yes, if I utter S(T), that proves my utterance occurred no earlier than T. But why would anyone care? To be useful, I somehow need to bind S(T) to something else in such a way that I end up with the proof that the something else "occurred" no earlier than T. For example, that the picture of my kidnap victim, clearly alive, was taken no earlier than T. But given the picture and S(T) as bit strings, it appears to be impossible to do that. There are ways of binding *events* to S(T) to produce proofs that the events occurred no earlier than T - but once you've frozen an event into a bit string, you lose the ability to establish how late it occurred.

> You can use a digital timestamping service to solve the other end--this information had to be available by this time.

Yes, this one is easy.

> I don't know about the kidnapping scenario, but consider some program that takes an RNG seed, or some experiment which requires some random inputs.
> I use the beacon values for today at noon to run the experiment, and as soon as I have the results, I get them digitally timestamped--say, at 1 PM today. This binds the experiment in time--it can't have happened before noon today or after 1 PM today.

Yes, the running of the experiment is an "event", and if you use S(T) as an *input* in an appropriate way, you can prove that the event could not have occurred before T.

There's probably some kind of covariance/contravariance thing hiding here if you can set up the model correctly: You can prove something occurs *after* T if it takes a published S(T) as an input parameter; you can prove something occurs *before* T if you take its output and combine it with some public information (e.g., a public hash chain).

-- Jerry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4813 bytes
Desc: not available
URL:

From lynn at garlic.com Wed Oct 29 17:46:12 2014
From: lynn at garlic.com (Anne & Lynn Wheeler)
Date: Wed, 29 Oct 2014 14:46:12 -0700
Subject: [Cryptography] EMV as a fraud enabler
In-Reply-To: <20141029144524.56295.qmail@ary.lan>
References: <20141029144524.56295.qmail@ary.lan>
Message-ID: <54516024.90401@garlic.com>

On 10/29/14 07:45, John Levine wrote:
>> If I read that article correctly, the main issue is that certain banks
>> didn't bother to verify signatures and as a secondary issue don't
>> bother checking nonce uniqueness either. http://xkcd.com/1181/
>
> Assuming you mean crypto signatures and not ink signatures, right. In
> this case it was just bizarre, since the network was approving chip
> transactions for an issuer that wasn't yet certified to issue chip
> cards.
>
> A long-standing problem with chip cards is that the banks don't use
> the data they have to validate transactions. If they actually use the
> data, and track the sequence numbers, they're pretty secure.
But > through a combination of laziness and the duct-tape-and-baling-wire > architecture of banking networks, they don't. When I was attending > the weekly security seminars at Cambridge a few years ago, this was a > frequent topic of discussions, ever more ways that banks got chip+pin > wrong. There are various quotes about how those that don't learn from history are doomed to repeat the same mistakes. the current payment infrastructure somewhat grew up during the days of trusted value added networks (VANs) during the 70s&80s ... which were mostly obsoleted by the internet over the last 20yrs. about the time the card associations were first drafting the POS/EMV in Europe, they also had a totally different group drafting a payment specification for the Internet. They shared the characteristic that the integrity checking was being done at the perimeter/boundary ... which is then dependent on internal trusted VAN (lots of business interest in preserving that status quo) ... however it creates an enormous attack surface ... both at the boundary/perimeter as well as inside the infrastructure. Not long after the early deployment of their internet payment specification, some of the business people discovered that they were getting transactions that had a flag set that the perimeter had performed the crypto integrity check ... and they could prove no such crypto was ever involved (not at all different from the current attacks nearly 20yrs later). In the internet case, the attempts to preserve the status quo of the existing payment networks (trusted VAN) were masked by selecting crypto technology that bloated the payload size of a payment transaction by two orders of magnitude (100 times) ... resulting in the justification that the crypto integrity checks had to be performed at the boundary and only a single bit sent through (indicating successful integrity check) because the payment networks couldn't stand a factor of 100 times increase in transaction payload size. 
Note that there is little or no evidence of that early standard still in existence. There were other teething problems with the POS version. There was a large early deployment in the US during the "YES CARD" period ... it turns out that it was possible to use the same skimming technology used to create counterfeit magstripe cards to create a counterfeit chip. The fraud was actually worse because a countermeasure to counterfeit magstripe is to deactivate the account number. However for the "YES CARD", business rules had been moved into the chip ... and a (counterfeit) chip could tell the POS terminal that the correct PIN was entered (regardless of what was typed), that the transaction was offline (no online checking for deactivated account) and the transaction was approved. In the wake of the "YES CARD", all evidence of the US deployment disappears and speculation that it would be quite some time before it is tried again (the people involved losing credibility). Reference to the "YES CARD" at the bottom of this Cartes2002 trip report (gone 404 but lives on at the wayback machine) http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html disclaimer: we had been brought in as consultants to a small client/server startup that wanted to do payment transactions on their server, they had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce". In part for having done "electronic commerce", we were asked to participate in the x9a10 financial standard working group (about the same time the card associations were drafting their POS and internet specifications) which had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments. For this financial transaction standard we slightly tweaked the paradigm and defined end-to-end integrity ... 
with fast crypto and minimal payload size so that it could easily travel through the existing payment networks ... but because of end-to-end integrity no longer required trusted VANs or hiding the transaction information. As a result it enormously reduces the attack surface. Now since the major use of SSL in the world today is this early ecommerce work for hiding transaction details ... the standard also eliminates the requirement for SSL for that purpose. It also eliminates the motivation for most of the current financial breaches since crooks can't use the information from previous transactions for fraudulent transactions. -- virtualization experience starting Jan1968, online at home since Mar1970 From clemens at ladisch.de Wed Oct 29 18:16:10 2014 From: clemens at ladisch.de (Clemens Ladisch) Date: Wed, 29 Oct 2014 23:16:10 +0100 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: References: <544EAF7B.1050101@ladisch.de> <54500A88.4040607@ladisch.de> <5450E3A6.4090406@ladisch.de> Message-ID: <5451672A.6020704@ladisch.de> Bill Cox wrote: > On Wed, Oct 29, 2014 at 8:55 AM, Clemens Ladisch wrote: >> Bill Cox wrote: >>> There will still be significant correlation between samples. There is >>> thermal noise in a band from 9X to 10X below the sample rate >>> which will turn into a significant short-term correlation between >>> samples 10 away from each other. >> >> The noise in all the other bands will cancel out these correlations. > > Not in my experience, but that is somewhat limited. A simple test would be > seeing if the zero crossings are correlated between adjacent samples. My > guess is they are highly correlated, as in I have a 70% chance of guessing > if your next sample is greater or less than zero if you tell me the full > value of the previous sample. And as it turns out, there are different decimation filters for different sample rates (typically, higher rates have smaller passbands). 
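The zero-crossing test Bill suggests is only a few lines; here is a minimal sketch (the WAV helper assumes 16-bit little-endian mono PCM, and the file name is hypothetical):

```python
import wave, struct

def sign_agreement(samples):
    """Fraction of adjacent sample pairs lying on the same side of zero."""
    pairs = list(zip(samples, samples[1:]))
    same = sum((a >= 0) == (b >= 0) for a, b in pairs)
    return same / len(pairs)

def wav_samples(path):
    """Read 16-bit little-endian mono PCM samples from a WAV file."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return struct.unpack("<%dh" % (len(raw) // 2), raw)

# Independent zero-mean samples should score ~0.5; band-limited noise
# scores much higher, which is the short-term correlation Bill predicts.
# e.g.: sign_agreement(wav_samples("turbid_capture.wav"))
```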
I made a quick test (using only the sign of adjacent samples), and there are indeed lots of correlations (at any sample rate). In any case, higher rates capture more useful noise. (And Turbid uses the highest rate by default.) > If you send me some typical Turbid maximum sample rate sound samples, I'd > be happy to do that test. Knock yourself out: (The last one is an extreme case where Turbid would refuse to work. And it doesn't have much entropy even under optimistic assumptions.) Regards, Clemens From waywardgeek at gmail.com Wed Oct 29 19:42:19 2014 From: waywardgeek at gmail.com (Bill Cox) Date: Wed, 29 Oct 2014 19:42:19 -0400 Subject: [Cryptography] A TRNG review per day: Turbid In-Reply-To: <5451672A.6020704@ladisch.de> References: <544EAF7B.1050101@ladisch.de> <54500A88.4040607@ladisch.de> <5450E3A6.4090406@ladisch.de> <5451672A.6020704@ladisch.de> Message-ID: On Wed, Oct 29, 2014 at 6:16 PM, Clemens Ladisch wrote: > Bill Cox wrote: > > On Wed, Oct 29, 2014 at 8:55 AM, Clemens Ladisch > wrote: > >> Bill Cox wrote: > >>> There will still be significant correlation between samples. There is > >>> thermal noise in a band from 9X to 10X below the sample rate > >>> which will turn into a significant short-term correlation between > >>> samples 10 away from each other. > >> > >> The noise in all the other bands will cancel out these correlations. > > > > Not in my experience, but that is somewhat limited. A simple test would > be > > seeing if the zero crossings are correlated between adjacent samples. My > > guess is they are highly correlated, as in I have a 70% chance of > guessing > > if your next sample is greater or less than zero if you tell me the full > > value of the previous sample. > > And as it turns out, there are different decimation filters for different > sample rates (typically, higher rates have smaller passbands). 
> > I made a quick test (using only the sign of adjacent samples), and there > are indeed lots of correlations (at any sample rate). > > In any case, higher rates capture more useful noise. > (And Turbid uses the highest rate by default.) > In this case, it sounds like we're in complete agreement. This is what *should* happen. Turbid is doing the right thing. > > If you send me some typical Turbid maximum sample rate sound samples, I'd > > be happy to do that test. > > Knock yourself out: > > > > > > > > > (The last one is an extreme case where Turbid would refuse to work. > And it doesn't have much entropy even under optimistic assumptions.) > > > Regards, > Clemens > I'll dive into this in the morning, though I expect to find what you found. Again, I am a fan of the models the Turbid authors put forward. I don't know if they coined the term "surprise", but I don't know how to measure entropy in a sample without it. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgut001 at cs.auckland.ac.nz Fri Oct 31 06:47:59 2014 From: pgut001 at cs.auckland.ac.nz (Peter Gutmann) Date: Fri, 31 Oct 2014 23:47:59 +1300 Subject: [Cryptography] Vulnerability of RSA vs. DLP to single-bit faults Message-ID: Most, if not all, publications on the topic of fault attacks on RSA and DLP-based algorithms (DSA, ECDSA) use a very abstract model of the fault, assuming merely "a fault" or, for example, that an attacker can: modify any intermediate value by setting it to either a random value (randomizing fault) or zero (zeroing fault), such a fault can be either permanent or transient skip any number of consecutive instructions (skipping fault) or at the individual-bit level: If an adversary has full control over the injected fault, it is possible to manipulate bits at will with the optional ability to inject a fault with accurate timing control, typically in the middle of a signature computation. 
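One well-known data point for the RSA side of this question is the Boneh-DeMillo-Lipton ("Bellcore") observation: if the implementation uses the CRT, a single random bit-flip anywhere in one half of the computation is catastrophic, because the faulty signature reveals a factor of N via a gcd. A toy sketch with deliberately tiny primes (illustrative only, not a realistic implementation; requires Python 3.8+ for `pow(x, -1, m)`):

```python
from math import gcd

p, q = 1009, 1013              # deliberately tiny primes; toy numbers only
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 424242                     # "message" (already hashed/padded in practice)

# CRT signing: compute mod p and mod q, recombine with Garner's formula
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)

def combine(xp, xq):
    return (xq + q * ((q_inv * (xp - xq)) % p)) % n

s = combine(sp, sq)
assert pow(s, e, n) == m       # the unfaulted signature verifies

# A single random bit-flip in the mod-p half of the computation:
s_bad = combine(sp ^ 1, sq)

# The faulty signature is still correct mod q but wrong mod p,
# so gcd(s_bad^e - m, n) exposes the factor q.
factor = gcd(pow(s_bad, e, n) - m, n)
assert factor == q
```

A device that verifies each signature before releasing it detects this particular fault, which is one reason verify-before-release is a standard countermeasure.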
While I haven't been able to track down every publication on the topic, there doesn't seem to be much that specifically addresses the case of random single-bit faults, e.g. due to alpha particles, and of a non-malicious nature, so your in-memory private-key component x becomes x' at some point with the difference being a single bit. Has any work been done on this? Is RSA more robust against random single-bit faults than the DLP-based algorithms? Peter. From hanche at math.ntnu.no Fri Oct 31 08:45:16 2014 From: hanche at math.ntnu.no (Harald Hanche-Olsen) Date: Fri, 31 Oct 2014 13:45:16 +0100 Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: References: Message-ID: <5453845C.1000508@math.ntnu.no> Dave Horsfall wrote: > The version I'd heard was that should they recognise (American?) currency > then they wouldn't print at all, or was that copiers? Are you perhaps thinking of the so-called EURion constellation? https://en.wikipedia.org/wiki/EURion_constellation ? Harald From bear at sonic.net Fri Oct 31 14:31:12 2014 From: bear at sonic.net (Bear) Date: Fri, 31 Oct 2014 11:31:12 -0700 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: References: <1414173395.31285.1.camel@sonic.net> <1414282134.14425.1.camel@sonic.net> Message-ID: <1414780272.25713.3.camel@sonic.net> On Sat, 2014-10-25 at 23:32 -0400, Jerry Leichter wrote: > On Oct 25, 2014, at 8:08 PM, Bear wrote: > > > A provably long uncorrelated sequence length is the same kind of > > "hard" guarantee as a one time pad -- although, like a one-time pad, > > it applies only to sequences shorter than that length. > I don't know what this means. Any *specific* property - like a long > uncorrelated sequence length - is just a special instance of a way of > distinguishing the output of some algorithm from a true random > sequence. I am completely baffled by this comment. 
A provable uncorrelated sequence length of N or greater is a proof that it is NOT even theoretically possible to distinguish any generated sequence having length less than N from a true random sequence. That is the opposite of being a way to distinguish a generated sequence from a truly random sequence. This is "like a one-time pad" in that there are an equal number of possible initial states of the PRNG that could result in any output sequence of the uncorrelated sequence length or less, just as in a one-time pad there are an equal number (in that case, one) of possible pads that could have produced the observed ciphertext regardless of the plaintext. If we want protection from unforeseen mathematical insights into the PRNG, we can obtain it (to some extent) by using PRNGs which have an uncorrelated sequence length strictly longer than the length of any single output we intend to generate. In exactly the same way that a one-time pad is immune to any cryptanalysis no matter how advanced but can protect only messages shorter than itself, a PRNG of long uncorrelated sequence length is immune to any possible way to distinguish a PRNG output sequence from a random sequence, but that immunity is limited to output sequences shorter than that length. If someone is using a PRNG with a 32-bit state to generate his 128-bit keys, my brute force search space for the key is 2^32 possible keys, not 2^128, because there are only 2^32 values which are both valid keys _and_ could have been produced by that generator. If additional constraints on the output sequence eliminate a much larger fraction of the possible sequences (eg, if it has to be a valid RSA key) then we're talking about the intersection of two sets -- the set of valid keys and the set of sequences that length which could have been produced from your generator. 
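Bear's 2^32 point is easy to demonstrate at small scale. A hypothetical sketch, with a 16-bit seed so it runs instantly and Python's non-cryptographic Mersenne Twister standing in for any small-state generator:

```python
import random

def key_from_seed(seed):
    """A '128-bit key' produced by a small-state, seed-determined PRNG."""
    rng = random.Random(seed)     # Mersenne Twister: output fixed by the seed
    return rng.getrandbits(128).to_bytes(16, "big")

secret_seed = 54321               # unknown to the attacker
victim_key = key_from_seed(secret_seed)

# The attacker enumerates the seed space (2^16 here), never the 2^128
# key space: only 2^16 of the 2^128 possible keys can ever be produced.
recovered = None
for s in range(2 ** 16):
    if key_from_seed(s) == victim_key:
        recovered = s
        break

assert recovered == secret_seed
```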
And even if both the number of possible keys and the number of possible sequences are beyond the reach of brute force, the intersection of the two might be within it. An attacker with the right insight into the mathematics would only need to brute-force a set the size of the intersection, rather than a set the size of all possible keys or a set the size of all possible PRNG-produced sequences. An uncorrelated sequence length strictly longer than the RSA key means that even for an attacker with omniscient mathematical insight there is no POSSIBLE attack that can consider less than the full set of valid RSA keys, because with that uncorrelated length, every valid key is provably equally likely to be produced by the generator. We are allowed to disagree about whether this is important, depending on how likely we consider attackers with greater mathematical insight than ourselves to be. But I believe that it is a significant property for PRNGs, and that attackers with greater mathematical insight are a more significant risk, comparatively speaking, than attackers with detailed knowledge of the PRNG's internal state at a chosen instant or attackers who influence our sources of entropy at a chosen instant, particularly when we are talking about the long-term security of data at rest. Bear From gnu at toad.com Fri Oct 31 14:40:07 2014 From: gnu at toad.com (John Gilmore) Date: Fri, 31 Oct 2014 10:40:07 -0800 Subject: [Cryptography] EFF, ACLU to Present Oral Argument in NSA Spying Case on Nov. 4 Message-ID: <201410311840.s9VIe7Cl001584@new.toad.com> Cryptography followers are invited to attend this court hearing in Washington, DC on November 4, opposing NSA's mass collection of telephone records. Observe government lawyers using twisted arguments and new meanings of simple words to justify spooky outrageous behavior! Support civil rights attorneys using principled arguments rooted in constitutional and societal norms to defend YOUR rights! 
Perceive the wheels of justice or just-us grinding the constitution into effect or into irrelevance! Show the judges and the press that the public cares whether NSA gets away with using totalitarian methods! See the constitutional issues around mass surveillance actually be discussed in an open, public court that actually hears from someone other than the government! The good guys won this case at the district court (the judge declared the NSA's actions unconstitutional), and the government had to appeal it to stop the ruling from killing off the program. This case could be very interesting, and these judges could make the final decision if the Supreme Court decides not to review their decision. Please be respectful, wear a costume (banker or politician duds suggested), and arrive without contraband, weapons, penknives, cameras, nor most other tools for resisting official oppression. Bring either a lawyer (who can sign you in) or bring unconstitutionally required identifying documents, or the US Marshals at the door will not admit you to this "public trial". I won't be there (wrong coast), but perhaps a DC local will organize a nearby place to have lunch afterward and discuss the hearing. John Gilmore Electronic Frontier Foundation Media Release For Immediate Release: Friday, October 31, 2014 Contact: Dave Maass Media Relations Coordinator Electronic Frontier Foundation press at eff.org +1 415 436-9333 x177 Media Alert: EFF, ACLU to Present Oral Argument in NSA Spying Case on Nov. 4 Court Should Rule That Mass Telephone Records Collection Is Unconstitutional in Klayman v. Obama Washington, D.C. - The Electronic Frontier Foundation (EFF) will appear before a federal appeals court next week to argue the National Security Agency (NSA) should be barred from its mass collection of telephone records of millions of Americans. The hearing in Klayman v. Obama is set for 9:30 am on Tuesday, Nov. 4 in Washington, D.C. 
Appearing as an amicus, EFF Legal Director Cindy Cohn will present oral argument at the U.S. Court of Appeals for the District of Columbia Circuit on behalf of EFF and the American Civil Liberties Union (ACLU), which submitted a joint brief in the case. Conservative activist and lawyer Larry Klayman filed the suit in the aftermath of the first Edward Snowden disclosure, in which The Guardian revealed how the NSA was collecting telephone records on a massive scale from the telecommunications company Verizon. In December, District Court Judge Richard Leon issued a preliminary injunction in the case, declaring that the mass surveillance program was likely unconstitutional. EFF argues that the call-records collection, which the NSA conducts with claimed authority under Section 215 of the USA PATRIOT Act, violates the Fourth Amendment rights of millions of Americans. Separately, EFF is counsel in two other lawsuits against the program -- Jewel v. NSA and First Unitarian Church of Los Angeles v. NSA -- and is co-counsel with the ACLU in a third, Smith v. Obama. What: Oral Argument in Klayman v. Obama Who: EFF Legal Director Cindy Cohn When: 9:30 am (ET), Nov. 4, 2014 Where: E. Barrett Prettyman U.S. Courthouse and William B. Bryant Annex Courtroom 20 333 Constitution Ave., NW Washington, D.C. 20001 For background and legal documents: https://www.eff.org/cases/klayman-v-obama The audio of the oral arguments is expected to be available on the court's website sometime after the hearing: http://www.cadc.uscourts.gov/recordings/recordings.nsf/ For this release: https://www.eff.org/press/releases/media-alert-eff-aclu-present-oral-argument-nsa-spying-case-nov-4 About EFF The Electronic Frontier Foundation is the leading organization protecting civil liberties in the digital world. 
Founded in 1990, we defend free speech online, fight illegal surveillance, promote the rights of digital innovators, and work to ensure that the rights and freedoms we enjoy are enhanced, rather than eroded, as our use of technology grows. EFF is a member-supported organization. Find out more at https://www.eff.org. From dave at horsfall.org Fri Oct 31 14:57:42 2014 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 1 Nov 2014 05:57:42 +1100 (EST) Subject: [Cryptography] Best internet crypto clock: hmmmmm... In-Reply-To: <5453845C.1000508@math.ntnu.no> References: <5453845C.1000508@math.ntnu.no> Message-ID: On Fri, 31 Oct 2014, Harald Hanche-Olsen wrote: > Are you perhaps thinking of the so-called EURion constellation? > > https://en.wikipedia.org/wiki/EURion_constellation That *could* be it; I'd heard a few different versions of the story that some copiers would not do currency, and naturally the banks themselves are going to be less than forthcoming about their techniques (just like in the crypto world). Personally, I was baffled about copiers refusing to print greenbacks; aren't they all the same colour anyway? I can just imagine the algorithm: if hue_is_greenback then select from (refuse, greyscale, invert) with optional call_SS() fi against which the obvious attack is of course: vary_hue_until_success Sorry for my horrible pseudo-code... -- Dave Horsfall (VK2KFU) "Bliss is a MacBook with a FreeBSD server." 
http://www.horsfall.org/spam.html (and check the home page whilst you're there) From leichter at lrw.com Fri Oct 31 15:08:11 2014 From: leichter at lrw.com (Jerry Leichter) Date: Fri, 31 Oct 2014 15:08:11 -0400 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <1414780272.25713.3.camel@sonic.net> References: <1414173395.31285.1.camel@sonic.net> <1414282134.14425.1.camel@sonic.net> <1414780272.25713.3.camel@sonic.net> Message-ID: <72C89BD6-6FB6-49C4-9E49-1EFEA8EF4137@lrw.com> On Oct 31, 2014, at 2:31 PM, Bear wrote: >>> A provably long uncorrelated sequence length is the same kind of >>> "hard" guarantee as a one time pad -- although, like a one-time pad, >>> it applies only to sequences shorter than that length. > >> I don't know what this means. Any *specific* property - like a long >> uncorrelated sequence length - is just a special instance of a way of >> distinguishing the output of some algorithm from a true random >> sequence. > > I am completely baffled by this comment. > > A provable uncorrelated sequence length of N or greater is a proof > that it is NOT even theoretically possible to distinguish any > generated sequence having length less than N from a true random > sequence. That is the opposite of being a way to distinguish a > generated sequence from a truly random sequence. The *test* "Has an uncorrelated sequence length of N or greater" is a special case of distinguisher from a random sequence. Yes, if you are asking the question "Is this sequence distinguishable from a known random sequence?" you have to invert the output of the "USL > N" test, but that's a triviality. BTW, I've responded based on the assumption that "uncorrelated sequence length" is actually a well-defined concept with a meaning based on the plain English words. I just did what I should have done earlier: A Google search in an attempt to find the technical definition. 
The search finds exactly four instances of this exact phrase - all of them in the present discussion! So I guess on the statement "Uncorrelated sequence length is a thing", the *correct* response is "citation needed". -- Jerry -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4813 bytes Desc: not available URL: From dj at deadhat.com Fri Oct 31 15:09:00 2014 From: dj at deadhat.com (dj at deadhat.com) Date: Fri, 31 Oct 2014 19:09:00 -0000 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <1414780272.25713.3.camel@sonic.net> References: <1414173395.31285.1.camel@sonic.net> <1414282134.14425.1.camel@sonic.net> <1414780272.25713.3.camel@sonic.net> Message-ID: <01e80827c60dac205670ba30f933fc03.squirrel@www.deadhat.com> > If we want protection from unforeseen mathematical insights into > the PRNG, we can obtain it (to some extent) by using PRNGs which > have an uncorrelated sequence length strictly longer than the length > of any single output we intend to generate. In exactly the same way > that a one-time pad is immune to any cryptanalysis no matter how > advanced but can protect only messages shorter than itself, a PRNG > of long uncorrelated sequence length is immune to any possible way > to distinguish a PRNG output sequence from a random sequence, but > that immunity is limited to output sequences shorter than that > length. > I'm confused. Wouldn't any CAZAC code meet this definition without being remotely useful for cryptography? Presumably the term 'uncorrelated' isn't sufficiently precisely defined. An optimal CS-PRNG should produce both correlated and uncorrelated outputs for any definition of which output strings are correlated and which are not. 
From bear at sonic.net Fri Oct 31 16:41:08 2014 From: bear at sonic.net (Bear) Date: Fri, 31 Oct 2014 13:41:08 -0700 Subject: [Cryptography] Uncorrelated sequence length, was: A TRNG review per day In-Reply-To: <01e80827c60dac205670ba30f933fc03.squirrel@www.deadhat.com> References: <1414173395.31285.1.camel@sonic.net> <1414282134.14425.1.camel@sonic.net> <1414780272.25713.3.camel@sonic.net> <01e80827c60dac205670ba30f933fc03.squirrel@www.deadhat.com> Message-ID: <1414788068.25713.5.camel@sonic.net> On Fri, 2014-10-31 at 19:09 +0000, dj at deadhat.com wrote: > I'm confused. Wouldn't any CAZAC code meet this definition without being > remotely useful for cryptography? > > Presumably the term 'uncorrelated' isn't sufficiently precisely defined. An > optimal CS-PRNG should produce both correlated and uncorrelated outputs > for any definition of which output strings are correlated and which are > not. The property is necessary (IMO) but definitely not sufficient. XOR with a repeating pattern has an uncorrelated length the size of the pattern - and is completely useless for cryptography if your message is so long that any part of the pattern is used more than once. But it is also a completely unbreakable one-time pad if the message is not that long. Anyway, this is a property that is relatively easy to add to a system without diminishing the security of any CSPRNG you're using; you can just "whiten" the output of a PRNG having a long uncorrelated sequence length and VERY long repeat period (such as a lagged-Fibonacci generator with several thousand words of state) by XOR with the CSPRNG output. The result will have the provably uncorrelated sequence length of the PRNG, and will certainly be no less unpredictable than the CSPRNG. 
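The construction Bear describes might be sketched like this (illustrative only: the (55, 24) additive lags are the classic Knuth parameters, far smaller state than the several-thousand-word generator Bear suggests, and the `secrets` module stands in for whatever CSPRNG is actually in use):

```python
import secrets

class LaggedFibonacci:
    """Additive lagged-Fibonacci generator: x[n] = x[n-55] + x[n-24] mod 2^64."""
    def __init__(self):
        self.state = [secrets.randbits(64) for _ in range(55)]
        self.i = 0

    def next64(self):
        # state[i] holds x[n-55]; x[n-24] sits 24 slots behind in the ring
        v = (self.state[self.i] + self.state[(self.i - 24) % 55]) % 2 ** 64
        self.state[self.i] = v
        self.i = (self.i + 1) % 55
        return v

lf = LaggedFibonacci()

def whitened64():
    # XOR with fresh CSPRNG output: the combined stream is no easier to
    # predict than the CSPRNG alone, while inheriting the LFG's long period
    return lf.next64() ^ secrets.randbits(64)

sample = [whitened64() for _ in range(4)]
```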
That said, for efficiency's sake I'd prefer to simply use a CSPRNG that has enough bits of state that there are no sequences of any length remotely close to the size of the largest single output I'll ever need which are impossible (or significantly more or less likely) for it to produce. Bear