[Cryptography] Are dynamic libs compatible with security? was: Apple and OpenSSL

ianG iang at iang.org
Sun Apr 20 07:30:46 EDT 2014


On 20/04/2014 00:50 am, Bear wrote:
> 
> 
> I have to say the OpenSSL guys really and truly had a point when they 
> pointed out that dynamic linking is not fully or easily compatible 
> with the goal of security libraries.  


Yes, it's a point.  But Apple are in the real world, serving real
people with real product.  OpenSSL are not; they're in the
geek/crypto/IETF/PKIX/x509/1990s world serving ... geeks,
cryptoplumbers, other WGs, PKI committees, vendors, and some 1980s
master's thesis which nobody can remember the name of.


> Inconvenient as it was for Apple (and inconvenient as it is for others) 
> we really shouldn't be relying on dynamic linking to load security 
> modules.  Remember, in security we're in the business of guaranteeing 
> that things we haven't quite planned for do *NOT* happen.  Dynamic
> linking is a great way to get things to happen even though you haven't
> quite planned for them.  In fact that's its main selling point; dynamic
> linking means you automatically link to versions of the libraries other
> than the ones you had available when you shipped, even if they contain 
> code you don't quite know about and haven't quite planned for.  We 
> assume this new code is better in the usual case; but in trying to 
> secure systems against attacks, we should be regarding that new code 
> as an attack vector and assuming that it is written by Shaitan.


Indeed.  A good point.  But static linking shouldn't be measured
against dynamic linking in isolation; it has to be measured within the
context of the overall delivery of security to a userbase.  In Apple's
context, it's pretty easy to see that dynamic linking was the better
security delivery, because it enabled them to control more of the
overall process, which got more security delivered to more users at
less maintenance cost.
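
To make that concrete, a minimal C sketch, assuming the pre-1.1.0
OpenSSL API where SSLeay() reports the version of the libcrypto that
actually got loaded and OPENSSL_VERSION_NUMBER is baked in at compile
time (build with -lcrypto):

    /* Report version skew between the OpenSSL headers this program was
     * compiled against and the shared library the loader picked up at
     * run time. */
    #include <stdio.h>
    #include <openssl/opensslv.h>  /* OPENSSL_VERSION_NUMBER: compile time */
    #include <openssl/crypto.h>    /* SSLeay(): whatever libcrypto loaded  */

    int main(void)
    {
        long built_against = OPENSSL_VERSION_NUMBER;
        long running_with  = SSLeay();

        printf("compiled against 0x%08lx, running with 0x%08lx\n",
               built_against, running_with);

        if (built_against != running_with)
            printf("the loader substituted code you never shipped with\n");

        return 0;
    }

Whatever that second number turns out to be, it is code that wasn't in
front of you when you shipped.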


> The other mindset, where we're busily trying to get good things 
> we haven't quite planned for to happen, values the hell out of 
> dynamically linking to new bug fixes.  But in security we have to look 
> at it from the POV of professionals in the art of being certain that 
> bad things (like dynamically linking to new bugs!) do NOT happen, 
> and I simply do not see that as being compatible with dynamically 
> linking to, and therefore trusting, newly updated code sight unseen.


Yeah, I do that myself.  I replace everything.  I've seen every line of
code in my stuff.

And somewhere I write that you should write your own crypto and
protocol, do it all, and then shoulder that burden.  But the reason I
say that isn't (only) because it is more secure.  It is because the
overall effect is better, which includes the ability to control and
maintain the code.

I personally don't have the resources to manage the 1000-library suck
that the Linux people love.  I've had times where I've spent 2 entire
weeks discovering I cannot get old stuff up and going because it
depends on some arcane library, and then spent another 2 weeks
replacing it.  And never looked back, free!

But Apple are different.  They know *their context* and they have the
resources to deliver a library context efficiently.  If they say it is
easier for them to deliver, and deliver securely, with dynamic ABIs,
then I believe them.  Coz they know their customers.

If OpenSSL say that Apple should be using static libraries, then this
just suggests to me that, like most open source library projects, they
are too far from the real world of customers for their words to carry
as much weight.


> The other thing about dynamic linking is that from the POV of the 
> people who are updating the code that you're linking to, YOUR software
> is at least possibly an unknown quantity.  They probably haven't tested 
> with it unless yours is one of the four or five best-known applications
> that uses their stuff.  Even if the update isn't part of an attack and 
> isn't written by Shaitan, it's a bit much to trust that your usage 
> pattern which worked with the previous release absolutely is guaranteed
> not to expose a flaw in the current release.  Hard linking guarantees 
> that *somebody* tests the library together with the product before the 
> new behavior created by their combination is unleashed on the public.


This is all true.  But again, we're missing the context, which is that
Apple take the whole cradle-to-grave responsibility for their users in
a way that others (library projects) won't understand.  So if they say
they are going to take on that responsibility, that again overrides
theoretical concerns raised in a static, no-user context.
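
To make that untested-combination point concrete, a minimal sketch,
assuming GNU/Linux, glibc's dladdr(), the pre-1.1.0 SSLeay() symbol and
a link line of -lcrypto -ldl: the application finds out only at run
time which library file the loader actually chose for it, and nothing
in that mechanism promises the combination was ever tested together.

    /* Ask the dynamic loader which file actually supplied the libcrypto
     * code this process is running. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <dlfcn.h>
    #include <openssl/crypto.h>    /* SSLeay() lives in libcrypto */

    int main(void)
    {
        Dl_info info;

        /* dladdr() maps a code address back to the shared object that
         * provided it; the function-pointer cast is a common POSIX
         * idiom, if not strictly ISO C. */
        if (dladdr((void *)SSLeay, &info) && info.dli_fname)
            printf("libcrypto code resolved from: %s\n", info.dli_fname);
        else
            printf("could not resolve where SSLeay() came from\n");

        return 0;
    }

Static linking closes that gap at build time; Apple's answer is to own
the whole delivery chain instead.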

(OTOH, they seem to have chosen to replace OpenSSL entirely.  That I
approve of.  Hopefully they're also replacing SSL.)


> Is this something that Apple can explain to its users?  I don't know.
> Apple has a history of cultivating a user culture that puts a negative 
> value on complex technical explanations, and makes a point of not 
> requiring them to sit through those, so it really was sort of between 
> a rock and a hard place there.  To some extent though, everybody with 
> a modern user community that expects everything to work the same way 
> faces the same problem.


I like Apple.  They take hard decisions.  What I don't like about them
is their secretiveness; like Google and Microsoft, it erodes trust, and
when they are apparently self-serving and make mistakes, trust has
nowhere to go but down.  But overall, they seem to make the best
decisions, have the best focus, the best mix and the best process.

Which is to say, what you point at above comes to the same conclusion:
there are hard choices and sometimes you have to go against the herd.
That's hard for us counterculturalists and pioneers, but in the end,
public opinion comes second to results.



iang


