[Cryptography] HP accidentally signs malware, will revoke certificate
leichter at lrw.com
Sat Oct 11 20:28:07 EDT 2014
On Oct 11, 2014, at 7:05 PM, Theodore Ts'o <tytso at mit.edu> wrote:
> It seems the real problem is that while we have Certificate Revocation
> Lists when a CA wants to revoke its signature on a certificate, there
> isn't the same concept of a Signed Software Revocation List where a
> code signer can revoke a signature on a piece of code that it has
> signed. Of course, this presumes that all code that verifies code
> also attempts to pull down and check the latest SSRL, just as
> certification verification code must pull down and verify against the
> latest CRL.
Microsoft has had such a mechanism - known as a killbit (http://en.wikipedia.org/wiki/Killbit) - for many years. It applies only to ActiveX controls; it's not clear why they never extended the idea to arbitrary code. However, they could probably get essentially the same effect with their malware scanner.
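(Concretely, a killbit is nothing more than a registry flag: Windows refuses to instantiate an ActiveX control whose CLSID has bit 0x400 set in its "Compatibility Flags" value. A sketch - the CLSID here is a placeholder, not a real control:)

```reg
Windows Registry Editor Version 5.00

; Placeholder CLSID; a real killbit names the specific ActiveX control.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{00000000-0000-0000-0000-000000000000}]
"Compatibility Flags"=dword:00000400
```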
OS X has a similar capability in its simple-minded malware blacklisting mechanism, which includes a special-purpose extension for such things as blacklisting outdated versions of Java and Flash.
iOS apparently includes a "kill application" mechanism which would allow Apple to quickly prevent a malicious app from running. (Apple has never used this, saying it's there for emergencies.) I don't think Android has an equivalent mechanism, and it certainly wouldn't work for stuff installed from alternative stores.
Generally, these are integrated with patch mechanisms (though I think the iOS one is an "instant push"): you don't poll the "CRL"; you update it on a schedule like everything else. While instant revocation might be useful in extreme situations, in practice even something polled every week would be a huge improvement over the (almost) nothing we have today.
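(To make the polled-list idea concrete, here's a minimal sketch. The URL, the JSON shape, and the function names are all invented for illustration; a real deployment would also verify a signature over the list itself before trusting it.)

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint serving a JSON array of revoked SHA-256 digests.
SSRL_URL = "https://example.com/ssrl.json"

def fetch_ssrl(url=SSRL_URL):
    """Pull down the latest signed-software revocation list.

    A real implementation would verify the list's own signature
    before using it; that step is elided here.
    """
    with urllib.request.urlopen(url) as resp:
        return set(json.load(resp))  # set of revoked hex digests

def is_revoked(code_bytes, revoked_hashes):
    """True if this binary's hash appears on the revocation list."""
    return hashlib.sha256(code_bytes).hexdigest() in revoked_hashes
```

The point is that `fetch_ssrl` runs on whatever schedule the patch mechanism already uses; only the local `is_revoked` check needs to happen at install (or load) time.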
I'm not aware of any similar mechanism in the OSS world (which doesn't mean it isn't out there).
The extreme version of all this is whitelisting of software - pioneered in the Windows world by Bit9, and now also available (though I don't know the details) in Windows itself.
> Given that we don't have this, can we really blame HP for deciding to
> ask their CA to revoke their code signing certificate, as the least
> bad option?
Actually, they could have done something else: send out a patch that specifically looks for this malware and kills it, and that also updates the patch mechanism to filter out any subsequent attempt to install the malware. Microsoft has done this enough times in the past that it has the mechanism fully developed; HP would have had to start from scratch. Frankly, it's not clear that what they did makes a whole lot of sense. Since signatures are only checked at installation time, it does nothing at all to protect customers who have already installed the malware - and it's been out there for quite some time.
When all you have is a hammer and you want to look like you're *doing* something ... go bang on whatever it is.
>> The problem of the validity of signed material has been discussed
>> for years, and my comment about the need for timestamping is not
>> new. (It probably appeared in the papers discussing uses for
>> digital timestamps!)
> I don't think we need to have timestamping here. What we need instead
> is to have the same concept of a CRL, but applied to signed software.
I was deliberately distinguishing between two problems: The bad software with a proper signature, and the leaked signature. The timestamp is useful only for the latter case, where it's really an optimization of a CRL.
> ...I would argue that the whole point of having signed code is not to
> bind it to the signer forever, but it's the signer saying, "this is
> good code". It may be the case that the signer had legitimately
> signed some piece of code as being "good stuff", but then later on,
> the signer discovers that said signed code included bash with the
> Shellshock bug, or openssl with the Heartbleed bug.
Again, two distinct problems: The signer declaring "don't trust this signature (if made after time T)" vs. "don't trust this piece of code, I no longer believe it's safe".
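(The distinction can be made concrete. Here's an illustrative model - the names are mine, not any real API - of the two revocation statements a signer might publish, and how a verifier would combine them with a signature timestamp:)

```python
import hashlib
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class SignerRevocation:
    # "Don't trust my signatures made after time T" - the leaked-key case.
    key_compromised_after: Optional[float] = None
    # "Don't trust these specific binaries" - the bad-code case.
    revoked_digests: Set[str] = field(default_factory=set)

def still_trusted(code_bytes: bytes, signed_at: float,
                  rev: SignerRevocation) -> bool:
    """Decide whether a timestamped signature should still be honored."""
    if (rev.key_compromised_after is not None
            and signed_at > rev.key_compromised_after):
        return False  # signature postdates the claimed key compromise
    if hashlib.sha256(code_bytes).hexdigest() in rev.revoked_digests:
        return False  # signer explicitly withdrew this artifact
    return True
```

Note that only the first check needs the timestamp; the second is a pure artifact blacklist and works even without timestamping.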
> So one could imagine, especially in a world where legislation passed
> per Dan Geer's proposal means that the "we warrant that this code
> contains ones and zeros, and even if your business loses gazillions
> of dollars, we'll refund the cost of your software... on a prorated
> basis" disclaimer is no longer legally operative, that the software
> signer might want to not only release a new version of their software
> without the Heartbleed or Shellshock bug, but also put the older
> version of their software on the SSRL, to limit their liability.
That would be a fine idea. As I pointed out above, the closed-source world does this kind of thing. I suspect it hasn't made much headway in the OSS world because many people - especially the developers - use OSS exactly because they want the freedom to run whatever they want. The notion that *someone else* - even the author of the software - could shut down their ability to do what they want on their own box would be anathema to many in the OSS community.
That doesn't mean such a mechanism couldn't be built for those who want to use it, of course.