[Cryptography] defaults, black boxes, APIs, and other engineering thoughts
ianG
iang at iang.org
Mon Jan 6 02:10:59 EST 2014
Hi Kevin,
finally, some thinking aimed at the thought target :)
On 6/01/14 04:19 AM, Kevin W. Wall wrote:
> On Sun, Jan 5, 2014 at 2:56 AM, ianG <iang at iang.org> wrote:
>
> ...<snip>...
> So the notion of putting in extra algorithms up front so we can
> switch from one to the other on signs of trouble doesn't make as
> much sense. We can replace the whole lot in the update cycle. We
> don't need to ask the sysadm to start fiddling with these strings:
>
> SSLCipherSuite
> EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED:!ECDSA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
>
> (which never worked as a security practice anyway). (BTW, this
> comes from BetterCrypto.org project's draft:
> https://bettercrypto.org/static/applied-crypto-hardening.pdf )
>
> Apply SUITE1. We can just work on SUITE2 in the background and when
> the failure occurs, roll it out entirely.
>
> OK, so I hear from here that people are shaking their heads and
> saying, he's crazy, loco, off his rocker. Granted, this is a
> *thought experiment* . Start from the facts we know:
>
> * the world is moving to frequent dynamic updating *
> * the old algorithm agility suite promiscuity idea failed *
> * we will always need an ability to upgrade bits & pieces *
>
> What else can we do?
>
>
> I see a few problems with it. Let's divide this into
> two major crypto use cases... one is using crypto to
> secure data at rest and the other to secure data in
> transit.
(Those are the typical notions. There are others. There are, for
example, authorisation schemes, digital money, chat (which is data at
rest *and* in flight), shared data schemes (goggledox), etc.)
> For the data-at-rest use case, let's suppose that you
> start with a single algorithm, 'AlgA', and then for
> some reason, we find we need to deprecate it and get
> people to start using 'AlgB'.
The problem with this scenario is that you are imagining something, and
then developing a solution without generalising it to the reality.
With any scenario, we can always have this issue:
Let's suppose we have a single method to store data-at-rest, call it
'MethodA' and then for some reason, we find the need to deprecate it and
get people to start using 'MethodB'.
Now, what in the above makes people change their viewpoints just because
Methods A,B are crypto algorithms?
Let's take a real live data example from my work. I have migrated my
data at rest formats several times. Here are the reasons: language
change, software mirroring to alternate disks, adding hash MACs for
integrity, discovery of a corruption bug... Upcoming is insertion of
stream ciphers, live backup to other databases, replication.
In each case I've had to write code that takes in the old formats,
understands them, and then writes out the new formats. The old formats
had to stick around until all old stuff was guaranteed to have gone.
Which date could be measured and planned for, with various inducements
at the business level.
(in 2014 I got rid of formats invented in 1995...)
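That read-old-format, write-new-format loop can be sketched as a chain
of per-version upgraders. A minimal sketch (the field names and version
numbers are invented for illustration, not taken from my actual code):

```python
# Hypothetical sketch: migrate records forward one version at a time.
# Each upgrader understands exactly one old format and emits the next,
# so old formats can be retired once all old data is guaranteed gone.

def upgrade_v1_to_v2(rec):
    # e.g. adding a hash MAC field for integrity (value faked here)
    rec = dict(rec, mac="<computed-mac>")
    rec["version"] = 2
    return rec

def upgrade_v2_to_v3(rec):
    # e.g. a new on-disk layout after fixing a corruption bug
    rec = dict(rec, layout="v3")
    rec["version"] = 3
    return rec

UPGRADERS = {1: upgrade_v1_to_v2, 2: upgrade_v2_to_v3}
CURRENT_VERSION = 3

def migrate(rec):
    """Read in any old format, write out the current one."""
    while rec["version"] < CURRENT_VERSION:
        rec = UPGRADERS[rec["version"]](rec)
    return rec
```

The point of the chain is that each migration is written once, at the
time of the format change, and old readers can be dropped on a planned,
measurable date.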
Point being, the problem is exactly the same - for crypto or for
non-crypto. But we don't imagine that we should have MethodA in
parallel with MethodB, do we? Just in case one fails?
Who runs MySQL alongside Oracle? Yet, both can corrupt the database and
refuse to deliver...
> Because this is a
> data-at-rest scenario, you are stuck with at least
> keeping 'AlgA' around in your software so that data
> previously encrypted (or signed) with it can be
> decrypted (or have its signature verified). You
> should of course prevent new data from being encrypted
> (or signed) using 'AlgA' and force the use of 'AlgB'
> for this, but unless you wish to have people stop
> using your software, that's the best you can do. And
> you have to do the same thing when changing from
> 'AlgB' to 'AlgC' (which you of course hope will never
> happen).
Yep, basic software engineering.
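That decrypt-with-whatever-it-was-written-under, encrypt-only-with-the-
newest rule is the standard registry pattern. A toy sketch (the
"ciphers" below are string stand-ins, not real cryptography, and the
algorithm names come from Kevin's example):

```python
# Sketch: keep deprecated algorithms available for reads only;
# all new writes are forced onto the current algorithm.

CIPHERS = {
    "AlgA": (lambda p: "A:" + p, lambda c: c[2:]),  # deprecated, read-only
    "AlgB": (lambda p: "B:" + p, lambda c: c[2:]),  # current
}
WRITE_ALG = "AlgB"  # only the current algorithm may encrypt new data

def encrypt(plaintext):
    enc, _ = CIPHERS[WRITE_ALG]
    # tag the ciphertext with its algorithm, so decryption never has
    # to guess by trying algorithms newest-to-oldest
    return WRITE_ALG, enc(plaintext)

def decrypt(alg, ciphertext):
    # any registered algorithm may still decrypt old data
    _, dec = CIPHERS[alg]
    return dec(ciphertext)
```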
> And should that case ever happen, you *STILL*
> might not be able to completely drop 'AlgA' because
> you can't be certain of the data retention practices
> that a company policy or regulatory practice dictates.
Indeed. Or you still might not be able to drop using tape reels
(remember those old circular things with the black tape wrapped around
them tightly) because your backup process is locked into trucking them
offsite...
> Of course, if you are going to have to do this, then
> when it comes to (say) decryption, you either have
> to also know what algorithm it was encrypted with
> or you try decrypting them all in some predetermined
> order (e.g., probably newest to oldest).
Hmmm... ever had to handle a transition from cpio to tar to zip? It has
probably meant that you had to keep around the older programs...
The point that seems to be missing in everyone's viewpoint is that this
process has to exist regardless, for every feature / method / layout /
protocol.
Which is to say, the migration problem exists at the wider application
level, not only at the algorithm level. This is the same problem as
migrating DOC95 to whatever today's is. Cpio to tar. XP to OSX.
And, here's the clanger: the wider application level succeeds in better
migrating things, to the extent it does or doesn't. Because it is more
focussed on business.
So, kick the problem up to the higher layer. Lock the suite to that
year's format. You won't do worse than you already do, and you might do
a lot better because more time can be spent on more important things.
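Locking the suite to that year's format can be sketched as a one-knob
envelope: a single version number at the front implies every algorithm,
so there is nothing to fiddle per-algorithm. (The suite contents below
are invented placeholders, purely for illustration.)

```python
import struct

# Sketch: one version id selects the entire suite at once.
# The algorithms named here are placeholders, not recommendations.
SUITES = {
    2013: {"cipher": "CipherX", "mac": "MacY", "hash": "HashZ"},
    2016: {"cipher": "CipherP", "mac": "MacQ", "hash": "HashR"},
}

def wrap(version, payload):
    """Prefix the payload with a 2-byte big-endian suite version."""
    if version not in SUITES:
        raise ValueError("unknown suite version")
    return struct.pack(">H", version) + payload

def unwrap(blob):
    """Read the version; the version implies every algorithm."""
    (version,) = struct.unpack(">H", blob[:2])
    return version, SUITES[version], blob[2:]
```

One byte-string, one knob: the sysadm never touches a cipher-suite
string, and replacing the whole lot is an update-cycle problem.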
> The other use case is securing data in transit.
> Here it sounds as though you may have a bit more
> success with your proposal, but even here, I think
> there are difficulties.
>
> Suppose you are trying to support a TLS library.
> Something like OpenSSL or GnuTLS or NSS. The first
> issue that you are going to have is that you are
> going to have to choose some common cipher suite
> that is supported by the majority of other TLS
> implementations or otherwise there goes
> interoperability. So if you want to have the
> OneTrueAlgorithm, your choices are probably
> going to be limited from the start. Secondly,
> let's assume that your OneTrueAlgorithm becomes
> problematic because of some newly discovered
> cryptographic weakness,
Remember, this is a false fear. It's never really happened in the
terms & conditions that we contracted for in our nightmare. If we do our
work well, and we know how to do that, there is no reason to believe it
is remotely possible.
For example, take the history of TLS. Its algorithm problems are mostly
or all to do with bad/old deprecated algorithms that should have been
dropped a decade ago, but which for some reason the committees thought
it nice to keep around. Even there, the algorithm problems were swamped
by complicated protocol cases. Or by anal policy choices that interfere
with the business of deployment.
Their problem has never been the failure of a good algorithm. Notice
also that universally, we've found that the last round contestants in
the competitions have all survived well. Indeed this is why some
competitions don't even pick a winner, they go with the best 5.
> so you decide to go
> with your second choice of cipher suite,
> RemainingBestAlgorithm when you upgrade. As before,
> you will be limited in your choices because of
> interoperability with other TLS implementations.
And indeed that's what happened. TLS had *protocol* problems so
everyone was advised to switch to .... RC4!
ROTFLMAO....
> I will even give you the benefit of the doubt
> and assume that either every other implementation
> supports your desired choice or maybe, if
Which becomes much easier when there is only one :) Either you support
version 2013 or you don't.
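With one suite per version, "negotiation" collapses to picking the
highest version both ends speak. A sketch (version numbers are
illustrative):

```python
def negotiate(ours, theirs):
    """Pick the highest suite version both sides support, or fail.

    ours/theirs are sets of supported version numbers; there is no
    cipher-by-cipher haggling, just 'you support 2013 or you don't'.
    """
    common = set(ours) & set(theirs)
    if not common:
        raise ConnectionError("no common suite version")
    return max(common)
```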
> you can convince all the other implementations to
> follow the same strategy, they all agree to upgrade
> at the same time. The problem is that even though
> the changes to all the TLS libraries may be
> interoperable and simultaneously released (and
> drop support for the old version), the decision
> of when to upgrade the libraries that are used
> rests either with the OS distro or (if statically
> compiled) with the application itself that uses
> the TLS libraries. And getting that to happen
> lock-step is something that I just don't see
> happening. For example, you likely will find LOBs who
> insist that they can't upgrade from something like
> Red Hat Enhanced Linux 3.0 or even older because
> they have some old 3rd party COTS software running
> on it that is "too expensive" to upgrade. So that
> LOB generally signs some sort of risk acceptance
> letter and it gets filed. But you can't do anything
> to "break" their interoperability with other systems
> because theirs is a "mission critical" application
> and there would be hell to pay if you did. That is
> the sad truth about the state of operations support.
> It's also one reason why the 2013 OWASP Top 10 list
> (https://www.owasp.org/index.php/Top_10_2013-Top_10)
> now includes for the first time:
> A9 - Using Components with Known Vulnerabilities
Good. So we know that the upgrade problem is the devil, right? We know
that the upgrade problem is where all the problems lie? And we know
that history-wise, TLS has carried the cross of upgrade forever, with
all of the problems it has faced not being solvable without a major
point upgrade. SSL v2 -> v3 -> TLS v1.0 -> 1.1 -> 1.2.
This is a major design flaw; a design failure even, in engineering
terms. What TLS never anticipated was how to upgrade those major
points. They seem to think that major point upgrades are unfortunate
and will hopefully go away soon. They don't understand that major
point upgrades are part of real, live business life.
Anyone in the long term business knows that upgrade of software is
inevitable.
Sure, we all know there are the XP laggards. But they get what they ask
for: longevity of process, with security that is no longer reliable.
They also face dramatic business risks because if their reliance on old
stuff is too tight, then their business can be overtaken by upstarts
with new gen software.
> It turns out, in many F500 mission critical business
> applications, outages (or undue concern about outages)
> trump potential security breaches. That is probably
> not something that most CISOs would admit (and
> certainly not something they generally support),
(Right, so there is a sense that privacy and security 'authorities',
which is a contradiction in terms, but never mind, will turn up and fine
you for breaches. This changes the payoffs. It works, to the extent
that the so-called authorities have a clue...)
(There's a thought... if this concept worked, we should agitate for an
'authority' to run around and fine laggards for use of RC4 and MD5, or
for shipping dodgy RNGs. Perhaps a new direction for NIST :-p )
> but the truth is that it's the revenue-generating
> LOBs that rule the company, not the security
> organizations. (But I'm sure you already knew that;
> I'm just trying to stimulate you from working through
> the difficult steps of operational support that you
> eventually will have to face if you wish to go
> forth with your 'one true cipher' strategy.)
Absolutely. The point is that this is business. Business is the business.
The decision is made at a business level. Either you can handle it or
not. There is therefore no point in trying to handle it at the
microcosm of cryptography algorithms because any attempt will be (a)
swamped at the business level and (b) will add complexity that generates
problems and never ever sees a return on investment.
The philosophy point is: integrate into the business more.
iang