Microsoft marries RSA Security to Windows

Ed Gerck egerck at nma.com
Tue Oct 15 17:58:58 EDT 2002


[I'm reducing the reply level to 2, for context please see former msg]

"Arnold G. Reinhold" wrote:

> At 8:40 AM -0700 10/11/02, Ed Gerck wrote:
> >Cloning the cell phone has no effect unless you also have the credentials
> >to initiate the transaction. The cell phone cannot initiate the authentication
> >event. Of course, if you put a gun to the user's head you can get it all but
> >that is not the threat model.
>
> If we're looking at high security applications, an analysis of a
> two-factor system has to assume that one factor is compromised (as
> you point out at the end of your response). I concede that there are
> large classes of low security applications where using a cell phone
> may be good enough, particularly where the user may not be
> cooperative. This includes situations where users have an economic
> incentive to share their login/password, e.g. subscriptions, and in
> privacy applications ("Our logs show you accessed Mr. Celebrity's
> medical records, yet he was never your patient." "Someone must have
> guessed my password." "How did they get your cell phone too?")

I like the medical record dialogue. But please note that what you wrote is
much stronger than asking "How did they get your hardware token too?"
because you could justifiably go for days without noticing that the hardware
token is missing, whereas you (especially if you are an MD) would almost
immediately notice that your cell phone is missing. Traffic logs and the
calling parties for received and dialed calls could also be used to prove
that you indeed used your cell phone both before and after the improper
access. Also, if you lose your cell phone you are in a lot more trouble
than if you lose a hardware token.

The point made here is that the aggregate value associated with the cell
phone used for receiving an SMS one-time code is always higher than the
value associated with the hardware token (the phone is the token plus
everything else the phone is used for), hence its usefulness in the
security scheme. Denying possession of the cell phone would be harder to
do -- and easier to disprove -- than denying possession of the hardware
token.

> Here the issue is preventing the user from cloning his account or denying
> its unauthorized use, not authentication.

The main objective of two-channel, two-factor authentication (as we
are discussing) is to prevent unauthorized access EVEN if the user's
credentials are compromised. This includes what you mentioned (preventing
the user from cloning his account, and allowing enterprises to deny the
unauthorized use of users' accounts), in addition to assuring
authentication.

Now, why should the second channel be provided ONLY by a hardware
token?  There is no such need, or security benefit.

The second channel can be provided by a hardware token, by an SMS-
enabled cell phone, by a pager or by ANY other means that creates a
second communication channel that is at least partially independent from
the first one. There is no requirement for the channels to be 100%
independent. Even though 100% independence is clearly desirable and can
be provided in some systems, it is hard to accomplish for a number of
reasons (indexing being one of them). In RSA SecurID, for example, the
user's PIN (which is a shared secret) is used both in the first channel
(authenticating the user) and in the second channel (authenticating the
passcode). Note also that in SecurID systems without a PIN pad, the PIN
is simply prefixed in plain text to the random code and both are sent in
the passcode.

The second channel could even be provided, for example, by an HTTPS (no
MITM) response in the same browser session (where the purported user
entered the correct credentials), if the response can be processed by an
independent means that is inaccessible to anyone except the authorized
user (for example, a code book, an SMS query-response, a crypto
calculator, etc.) and the result fed back into the browser (i.e., as a
challenge response).
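
In outline, a two-channel login of that kind could look like the sketch
below. All of the helper names (check_credentials, send_sms,
phone_number_on_file, the session object, etc.) are placeholders of my
own, not any vendor's API; the sketch only shows the shape of the flow.

    import secrets

    CODE_TTL = 90  # seconds the one-time code is allowed to live

    def begin_login(userid, password, session):
        # First channel: credentials entered on a page served under
        # server-authenticated SSL (no MITM).
        if not check_credentials(userid, password):
            return deny(session)
        code = "%06d" % secrets.randbelow(1000000)
        session.store(userid, code, expires_in=CODE_TTL)
        # Second, at least partially independent channel.
        send_sms(phone_number_on_file(userid), code)
        return ask_for_code(session)  # same browser page asks for the code

    def finish_login(userid, entered_code, session):
        # The code must come back through the same SSL session that asked
        # for it, so intercepting the SMS alone (without the credentials)
        # gains nothing.
        expected = session.lookup(userid)  # None if expired
        if expected is not None and secrets.compare_digest(entered_code, expected):
            return grant_access(session)
        return deny(session)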

>
> >
> >A local solution on the PDA side is possible too, and may be helpful where
> >the mobile service may not work. However, it has less potential for wide
> >use. Today, 95% of all cell phones used in the US are SMS enabled.
>
> What percentage are enabled for downloadable games? A security
> program would be simpler than most games.  It might be feasible to
> upload a new "game" periodically for added security.

Nothing is downloaded to the cell phone. Mobile RSA SecurID and
NMA ZSentryID are zero-footprint applications.

BTW, requiring the download of a game or code opens another can of worms
-- whether the code is trusted by both sender and receiver (being trusted by
just one of them is not enough).

> >> 2. Even if the phone is tamperproof, SMS messages can be intercepted.
> >> I can imagine a man-in-the-middle attack where the attacker cuts the
> >> user off after getting the SMS message, before the user has a chance
> >> to enter their code.
> >
> >Has no effect if the system is well-designed. It's possible to make
> >it mandatory
> >(under strong crypto assurances) to enter the one-time code using the *same*
> >browser page provided in response to the authentication request -- which
> >page is supplied under server-authenticated SSL (no MITM).
>
> You may be right here, though assuming SSL lets one solve a lot of
> security problems associated with traditional password login.

SSL does NOT solve what these systems solve (the main objective I list
above), namely, preventing unauthorized access EVEN if the user's
credentials are compromised.

> >No solution works everywhere. Cell phones are no exception. But it is
> >possible to design the system in such a way that the user can use
> >a different
> >access class (with less privileges, for example) if the cell phone does
> >not work. After all, the user is authenticated before the message is sent to
> >the cell phone.
> >
> >That said, cell phone coverage is becoming ubiquitous and the solution also
> >works with pagers (while they still exist), email accounts (blackberrys) and
> >other means of communication -- including voice.
>
> Security tokens work everywhere I can think of.  I'm not sure the
> cell companies are spending much to push into rural areas given the
> current economy.  Might be a new market for Iridium, but that doesn't
> work well inside buildings.

SMS messages work even where cell phone conversations are not so clear.
And a pager works almost anywhere (that's why some people still carry
both a pager and a cell phone).

On another slant, if the hardware token stops working for some reason,
you have no recourse. Your cell phone, however, would still be working,
if only you had decided to use a system with a cell phone channel!

> >Let's be careful with generalizations. During the tragic events of 9/11, cell
> >phones emerged as the solution for communication  under a
> >distributed terrorist
> >attack.
>
> The WTC collapse took out a major portion of lower Manhattan's
> landline capacity. Cell phones were better than nothing, but many
> people experienced difficulty placing calls.  It is simply too
> expensive to design a switched system to handle all the calls people
> want to make in a major crisis. Military systems include priority
> tags to deal with this.
>
> This does raise an interesting possibility: giving SMS messages
> priority over voice could be very useful in an emergency. SMS
> messages take much less bandwidth than voice and the entry mechanism
> on most cell phones is very slow. So existing cell infrastructure
> might be able to handle all the SMS traffic generated in a crisis.
> Anyone know if cell phone companies are doing this?

This might be the case -- note also that SMS is store-and-forward and
so much more reliable than voice connections. You do not require
immediate connectivity; the system may find a route in the next 5 or 10
seconds that it did not find before. And if the system does not find a
route within the 60 or 90 seconds that the one-time code is allowed to
live, you just try again and get a new code sent to you -- a dictionary
attack using the "try again" option is not relevant.
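
A small sketch of that expiry-and-retry behavior (the class name and the
attempt limit are mine; the 90-second lifetime is just the figure
mentioned above):

    import secrets
    import time

    CODE_LIFETIME = 90  # seconds, per the discussion above
    MAX_ATTEMPTS = 3    # cut off guessing long before it could pay off

    class OneTimeCode:
        def __init__(self):
            self.code = "%06d" % secrets.randbelow(1000000)
            self.issued = time.time()
            self.attempts = 0

        def expired(self):
            return time.time() - self.issued > CODE_LIFETIME

        def check(self, guess):
            self.attempts += 1
            if self.expired() or self.attempts > MAX_ATTEMPTS:
                return False
            return secrets.compare_digest(guess, self.code)

    # "Try again" simply replaces the old object with a fresh OneTimeCode(),
    # so a lost SMS costs nothing and failed guesses never accumulate
    # against any single long-lived secret.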

> >
> >Second, as I hint somewhere above, the important point here is not to rely on
> >something that will never fail (which is, clearly, impossible) but to offer
> >the user a recourse -- alternative ways to get the job done, even
> >under reduced
> >privileges.
>
> Reduced privileges won't do in a crisis. That's when people need
> maximum privileges: to authorize emergency expenditures, reroute
> circuits, reboot servers or reprogram firewalls to defeat cyber-attacks.
> The Internet is based on technology designed to survive nuclear war.
> Making its most valuable applications dependent for security on
> other, less robust communication systems seems a bad idea.

If you start with the unproved assumption that "X" is less robust,
then you may derive all sorts of wrong conclusions. Cell phone
communications have proven themselves in critical war-like situations.
And cell phone technology is just starting -- I'm sure that a lot of
the business of cell phone providers rides on their reliability. SMS
messages are becoming very common too, so service providers have an
increasing business incentive to provide reliable SMS service.

> >I see your argument backwards -- you are more likely to notice that
> >you lost or forgot your cell phone than a hardware token that you
> >seldom use, or even notice.  Cell phones also prevent a silent compromise
> >more effectively (as in theft, which involves no violence) because you
> >have a tendency to notice its absence -- the cell phone is useful for many
> >other purposes!  And, you don't have to carry additional devices that you
> >may lose or forget.
>
> I'd like to see a personal token that is used for multiple
> applications. Particularly if it were on your key chain you'd notice
> its loss soon enough.  It would be handy if it told the time of day
> and also included an LED flashlight (on a separate battery).

The question here is not so much what you and I would like to see but
what the market would like to see. Cell phones are becoming more popular
and they are adding multiple uses -- location services, SMS, directory
services, news, Internet browsing, email, etc.  They are becoming more
and more valuable to the users. This aggregate value is important here.

> >
> >Additionally, just having my cell phone does not get you anywhere. You
> >also need my credentials to begin the authentication process and they are
> >NOT in my cell phone.
>
> Maybe not in YOUR cell phone, but I'll bet a lot of people store
> passwords in their cell phone address books. With the combined
> cellphone/PDAs of the future it will be common to do so. And for a
> high value attack, you have to assume the password is already
> compromised.

That is a good point. But you still need the URL to use that password,
and the userid. And, even if you know all that, it will still be useless
after I see that my cell phone is missing and notify the service to block
access. So, just getting my cell phone does not get you the keys to the
kingdom. And, since I am bound to notice quickly that my cell phone is
not with me, there is not much time to carry out an attack.
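
For illustration, the "notify the service to block access" step can be as
simple as the sketch below (all names are my own placeholders):

    blocked_phones = set()  # filled in when a user reports a missing phone

    def report_phone_missing(userid):
        blocked_phones.add(phone_number_on_file(userid))  # placeholder lookup

    def can_send_code(userid):
        # No one-time code is sent to a phone reported missing, so the
        # stolen phone stops being a second factor as soon as I call in.
        return phone_number_on_file(userid) not in blocked_phones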

> The price of an SMS message is actually twice that -- both the sender
> and receiver pay.

That's not so. You can send SMS messages for free, and you can also
receive SMS messages at no charge with your phone account.

> >[authentication tokens] It's costly, makes you carry an additional thing and --
> > most important
> >of all -- needs that pesky interface at the other end.
>
> That "pesky interface at the other end" seems less of an issue in the
> context of a deal between RSA and Microsoft. This is an opportunity
> to do the right thing and build it into the operating system once and
> for all.  Support for USB and Bluetooth tokens that reveal a public
> key and can sign a nonce should also be included in the OS. There is
> no need for a PKI here. Token binding would be done by the
> application owner when access is initially arranged.

While you wait for that to happen (and I'm holding my breath) you
can use the cell phone today.

> >There is always room for evolution, and that's why we shan't run out of
> >work ;-)
> >
> >However, not everyone wants to have an implant or carry a ring on their
> >finger -- which can be scanned and the subject targeted for a more serious
> >threat. My general remark on biometrics applies here -- when you are the
> >key (eg, your live fingerprint), key compromise has the potential to be
> >much more serious and harmful to you.
>
> If personal (as opposed to special-purpose, for
> bank-vice-presidents-only) tokens become more common, their
> possession won't mean that much.  In a cellphone-based security
> world, traffic analysis of SMS messages could reveal a lot about who
> does important work.

No, in fact the argument is in reverse, since only the one-time codes are
sent -- not what/where/how/who they are authorizing. The more SMS
traffic you have, the better "safety in numbers" works. If everyone and
their cat gets one-time codes by SMS, then no one getting them is special.

>
> >BTW, what is the main benefit of two-channel (as opposed to just two-factor)
> >authentication? The main benefit is that security can be assured
> >even if the user's
> >credentials are compromised -- for example, by writing their
> >passwords on stick-it
> >notes on their screen, or under their keyboards, or by using weak
> >passwords, or
> >even having their passwords silently sniffed by malicious software/hardware,
> >problems that are very thorny today and really have no solution but to add
> >another, independent, communication channel. Trust on authentication
> >effectiveness
> >depends on using more than one channel, which is a general
> >characteristic of trust
> >( http://nma.com/papers/it-trust-part1.pdf  )
>
> A second channel is very helpful in establishing trust, but for most
> applications, that trust can be stored in something the user carries.
> Often it is more important that the second channel be truly secure
> than that it operate contemporaneously with access.  I would hope
> that any Microsoft/RSA standard solution allows for both models.

;-) If you indeed have a second channel that is "truly secure" then why
do you need a first channel?

There is a paradigm shift here, contradicting what I call the "Fort Knox
Syndrome". There is no Fort Knox. The way to reach higher levels of
security is not to demand the impossible ("This will never fail" or
"This channel is truly secure") but to work with the possible ("Even if
this and that were to fail, the system still works as designed"). This
approach requires the use of trust channels, that is, channels designed
to allow for at least a partially independent confirmation of the
information received.

Cheers,
Ed Gerck



---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com