[Cryptography] Ada vs Rust vs safer C
huitema at huitema.net
Tue Sep 20 13:15:10 EDT 2016
On Tuesday, September 20, 2016 12:08 AM, John Gilmore wrote:
> Refactoring a long, complex function is very likely to introduce new
Yes, that's definitely a possibility.
> So what makes refactoring the "correct" option? Suppose there is
> *not* a complete test suite that gets 100% test coverage of the code
> in question? (That's extremely common.)
In the cases that I am familiar with, we actually had extensive test code. But of course, one almost never gets to 100%, especially in a long, complex function. One may get close to 100% coverage of code blocks, ensuring that each line of code is executed at least once, but one can almost never test all combinations of code paths.
> ... If so, then you don't just
> have to rewrite the function; you also have to write test cases from
> scratch, and validate them on the old code before trying them on the
> new code. Just doing that is a major project, and when you're done
> you haven't started the refactoring yet.
That depends a lot on how the refactoring is done. For example, one simple case is to extract the inner block of a loop into its own function. It makes analysis simpler, by reducing analysis of the loop to analysis of the new function's signature, and it minimizes the risks.
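A minimal C sketch of that kind of extraction, with hypothetical names and a made-up loop body, just to show the shape of the transformation:

```c
#include <stddef.h>

/* After extraction, an analyzer (or a reviewer) only has to reason
 * about this signature: the function reads one record and returns one
 * value, and it cannot touch the loop's other locals. */
static int process_record(const unsigned char *rec, size_t len)
{
    int sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += rec[i];
    return sum;
}

/* The loop itself is now trivial: it iterates and delegates, so
 * analysis of the loop reduces to the contract of process_record(). */
static int process_all(const unsigned char *buf, size_t nrec, size_t reclen)
{
    int total = 0;
    for (size_t r = 0; r < nrec; r++)
        total += process_record(buf + r * reclen, reclen);
    return total;
}
```

Because the extracted function's effects are confined to its parameters and return value, the behavior of the loop is unchanged while the surface that each analysis pass must cover shrinks.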
> Or you could just refactor, introduce bugs, and not really try very
> hard to detect them before shipping the buggy code. But I thought the
> point of the exercise was to REDUCE vulnerabilities in the code, not
> increase them.
In practice, developers use code reviews, tests, automated analyses, beta deployments and user feedback. The experience shows that reviews are fallible, and that tests never provide complete coverage of corner cases. The point of automated analyses, or for that matter language restrictions, is to provide some guarantees of safety.
But yes, there is an obvious tradeoff. If I rewrite a piece of code so that it can be analyzed more effectively, I increase the efficiency of analysis, which is likely to reduce vulnerabilities -- detecting the likes of Heartbleed or "goto fail". But I am also likely to change some behavior, and that changed behavior is not validated by the previous beta deployments and user feedback.
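For concreteness, the "goto fail" class of defect is exactly the kind of control-flow mistake that automated analysis catches and human review tends to miss. The sketch below is illustrative, not the actual Apple code; the names and checks are invented, but the duplicated goto reproduces the essential pattern:

```c
/* Illustrative sketch of the "goto fail" pattern. The duplicated
 * goto is always taken, so the final check becomes dead code and the
 * function reports success without ever performing it. A dead-code
 * or unreachable-statement analysis flags this immediately. */
static int check_signature(int hash_ok, int sig_ok)
{
    int err = 0;

    if (!hash_ok) {
        err = -1;
        goto fail;
    }
        goto fail;   /* duplicated line: unconditionally skips the next check */
    if (!sig_ok) {   /* dead code: never reached */
        err = -1;
        goto fail;
    }
fail:
    return err;      /* err is still 0 even when sig_ok is 0 */
}
```

Here a bad signature (sig_ok == 0) is accepted whenever the hash check passes, because err is never set before the duplicated goto transfers control to the exit label.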
Prefast does not exactly belong to the category of "non-production-quality tools for analyzing programs" built by an "academic or researcher, whose main objective is to write a paper or get a degree". But even production-quality tools have their limits. The more tool developers push those limits, the less code falls into the "too complex" category, and the easier the tradeoffs become. But it is still a tradeoff.
On the other hand, if code has grown so complex that automated analyzers get confused, chances are that code reviewers are also confused, and that test developers struggle to get adequate coverage. So there is a tradeoff there too.
-- Christian Huitema