[Cryptography] What do we mean by Secure?

Bill Frantz frantz at pwpconsult.com
Mon Feb 9 01:56:00 EST 2015


On 2/8/15 at 9:16 PM, amccullagh at live.com (Adrian McCullagh) wrote:

>Obviously, Yourdan was making a point that a really secure system is unrealistic and uncommercial.

I certainly hope Yourdan was wrong. If he was right, we need to 
keep our computers as game machines only and go back to paper 
for anything where there is real value to be stolen. But I know 
we can do a lot better than we have.


>I also remember Professor Bill Caelli once telling me that the 
>US Navy had developed an A1 (TCSEC) operating system in the 
>1970s, but unfortunately it was entirely unusable and was 
>later abandoned.  Compromises have to be made, and it really 
>depends on each situation.  The real problem is that most 
>organisations do not undertake the basic risk assessment to 
>understand what their respective risk appetite is.

These TCSEC systems never addressed the problems of data 
sharing. They were designed to follow US military policies, 
which were unsuitable for the needs of commercial enterprises.

They were probably unsuitable for military needs as well. 
Consider how you share information with allies: detailed 
weather forecasts are a real-world example.
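
The policy at the heart of those systems was the Bell-LaPadula 
"no read up, no write down" rule over a lattice of classification 
labels. Here is a minimal sketch in Python (my own illustration, 
not code from any evaluated system; the labels, names, and the 
NOFORN compartment are made up for the example) of why that 
policy blocks exactly this kind of sharing:

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(subject, obj):
    # A label is (level, set of compartments); subject dominates obj when
    # its level is at least as high and it holds every compartment obj has.
    s_level, s_comps = subject
    o_level, o_comps = obj
    return LEVELS[s_level] >= LEVELS[o_level] and o_comps <= s_comps

def may_read(subject, obj):    # simple security property: no read up
    return dominates(subject, obj)

def may_write(subject, obj):   # *-property: no write down
    return dominates(obj, subject)

# Hypothetical labels: a detailed forecast marked SECRET plus a US-only
# compartment, and an allied officer cleared SECRET without that compartment.
forecast = ("SECRET", {"NOFORN"})
ally     = ("SECRET", set())

print(may_read(ally, forecast))   # False: the policy blocks the sharing
                                  # the mission actually needs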

My favorite A1 story comes from an A1 system which was 
discovered to have a big covert channel. However, it still met 
all the Orange Book criteria. One of the designers of the system 
was heard to say, "I don't want to call it secure, I just want 
to call it A1." The moral of this story is that it is really 
hard to enumerate all the attacks that haven't been invented yet.
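
For anyone who hasn't run into one: a covert channel leaks 
information through a side effect the security policy doesn't 
model, such as timing or resource contention. Here is a toy 
sketch in Python (my own illustration, nothing to do with that 
A1 system; SLOT, MESSAGE, sender, and receiver are made-up 
names) of a CPU-contention channel between two threads:

import threading, time

SLOT = 0.2                          # seconds per leaked bit
MESSAGE = [1, 0, 1, 1, 0, 0, 1, 0]  # bits the sender wants to leak

def sender(bits, start):
    # Encode each bit by either burning CPU (1) or sleeping (0) for one slot.
    time.sleep(max(0.0, start - time.monotonic()))
    for i, bit in enumerate(bits):
        slot_end = start + (i + 1) * SLOT
        if bit:
            while time.monotonic() < slot_end:
                pass
        else:
            time.sleep(max(0.0, slot_end - time.monotonic()))

def receiver(nbits, start):
    # Infer each bit by counting how many fixed work units fit in the slot;
    # fewer units means the sender was hogging the CPU, i.e. a 1 bit.
    time.sleep(max(0.0, start - time.monotonic()))
    counts = []
    for i in range(nbits):
        slot_end = start + (i + 1) * SLOT
        n = 0
        while time.monotonic() < slot_end:
            sum(range(1000))
            n += 1
        counts.append(n)
    threshold = (max(counts) + min(counts)) / 2
    return [1 if n < threshold else 0 for n in counts]

start = time.monotonic() + 0.1
rx = threading.Thread(target=lambda: print("received:", receiver(len(MESSAGE), start)))
tx = threading.Thread(target=sender, args=(MESSAGE, start))
rx.start(); tx.start()
tx.join(); rx.join()

No line of this violates any read or write rule, yet a bit per 
slot still leaks. That is the kind of attack the criteria didn't 
enumerate.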

>In not understanding their respective risk appetite, they are not in a position to ever know whether the so-called secure system fits within their respective position.

This thought seems more oriented toward how much assurance you 
have that the system actually implements the policy under 
attack, rather than my main interest in this thread: whether 
the system can implement reasonable policies, and what 
reasonable policies are anyway.

If this thought means how much deviation from its ideal policy, 
due to implementation constraints, an organization is willing 
to tolerate, then this statement addresses my topic.


On 2/8/15 at 6:42 PM, andreas.junius at gmail.com (Andreas Junius) wrote:

>I think software is secure if it does what it is supposed to 
>do, nothing else, nothing unintentional. Most attacks probably 
>try to trigger some unintended behaviour using malicious (i.e. 
>unexpected) input. Considering all sources of input for a 
>program, it'll be a broad subject and it's obviously hard to achieve.

Well, the Microsoft autorun feature for floppy disks and flash 
drives did exactly what it was supposed to do, and it was one of 
the biggest security problems of a system that was rife with problems.

Cheers - Bill

-----------------------------------------------------------------------
Bill Frantz        | Truth and love must prevail  | Periwinkle
(408)356-8506      | over lies and hate.          | 16345 Englewood Ave
www.pwpconsult.com |               - Vaclav Havel | Los Gatos, CA 95032


