[Cryptography] phishing attack again - $300m in losses?

Jerry Leichter leichter at lrw.com
Tue Feb 17 15:52:45 EST 2015


On Feb 17, 2015, at 8:41 AM, Phillip Hallam-Baker <phill at hallambaker.com> wrote:
> I think it would be very easy to set up a scheme for program installation where all code has to be signed to run.
MacOS does this.  You can run your Mac in one of three modes:  Allow only stuff signed by Apple (this is stuff from the Apple App store); allow only stuff signed by Apple or a developer approved by Apple (i.e., by a developer who has a key that was signed by Apple); allow anything.  Signature checks are done every time the application is run; an admin can override the checks once or forever.  (There's a separate, older check that goes off the first time you run any code that was downloaded - it tells you the Web site it came from and the date.)

Of course, iOS has only the first mode - a source of endless whining by some.
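The three modes above amount to a simple launch policy keyed on who signed the code.  A toy sketch in Python - the enum and function names are mine, not Apple's, and the admin override is modeled as a single flag:

```python
from enum import Enum

class Policy(Enum):
    APPLE_ONLY = 1        # only code signed by Apple (App Store)
    IDENTIFIED_DEVS = 2   # Apple, or a developer key signed by Apple
    ANYWHERE = 3          # anything runs

def may_run(policy, signer, admin_override=False):
    """Return True if an app with the given signer may launch.

    signer is "apple", "approved-dev", or None (unsigned);
    admin_override models the per-app "open anyway" escape hatch.
    """
    if admin_override or policy is Policy.ANYWHERE:
        return True
    if policy is Policy.APPLE_ONLY:
        return signer == "apple"
    # IDENTIFIED_DEVS
    return signer in ("apple", "approved-dev")
```

Note that the check runs at every launch, so a revoked or invalid signature is caught later too, not just at install time.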

> The permissions are granted to the signer. The first time code runs, the user is asked what set of permissions it should run with. 'Game' would be a standard minimum priv setting that causes the program to run in a sandbox.
MacOS has a sandbox (based on the BSD sandbox facility), but they haven't figured out a way to expose this to end users - programming API or command line only.  Some Apple-provided stuff runs in a sandbox, and stuff from the Apple App store is also sandboxed.  The former works well, because of course Apple can tune its own code, its own sandbox implementation, and the sandbox configuration it uses to make *sure* it all works well.  The latter has been problematic, as the sandbox isn't tunable and as configured prevents sandboxed apps from doing many useful things.

It's a tough tradeoff.  No one I know has produced a reasonable way for users to answer questions about how to configure a sandbox.  Android has a pretty fine-grained privilege system, but hardly anyone really looks at the privileges that apps ask for - it's too hard to understand what they *should* reasonably be asking for, and what the implications are.  At best, cautious people will refuse to install apps that seem to ask for inappropriate privileges.  (Even cautious people can get trapped:  A new version of an app may ask for additional privileges.  Android will ask you to approve the change, but it's easy to miss the implications.)  Unfortunately, this is a binary decision:  You can't say "install this app but don't give it access to the location information it asked for".
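The all-or-nothing nature of that install decision is easy to see in a toy model.  This is a sketch, not Android's actual installer logic; the manifest is reduced to a list of permission strings and the user's answer to a single function:

```python
def new_permissions(old_manifest, new_manifest):
    """Permissions the update requests that the installed version didn't."""
    return sorted(set(new_manifest) - set(old_manifest))

def install_decision(old_manifest, new_manifest, user_accepts):
    # Binary: the user grants every requested permission or the
    # update doesn't install at all - no per-permission opt-out.
    added = new_permissions(old_manifest, new_manifest)
    if added and not user_accepts(added):
        return "blocked"
    return "installed with %d permissions" % len(new_manifest)
```

There is no path through install_decision that installs the app while withholding one of the requested privileges - which is exactly the complaint.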

On iOS, certain APIs are controllable - e.g., whether an app can gain access to location information.  iOS handles this in an interesting way:  It asks on the application's first attempt to use the privilege, and then for some privileges asks for confirmation - once - after you've been using the app for a while.  For example, it'll pop up a query saying "App such-and-so has been using your location while in the background for the last week, would you like it to continue?"  The basic idea is that people have trouble answering such questions in the abstract, but when the question is tied to the moment they are actually using the application, and it's trying to do something for them, they may be able to give a reasonable answer.

The limitation is that this kind of model can't scale.  It works fine for a handful of easy-to-understand privileges, but if there were a dozen, you'd be annoyed by questions all the time - and probably many of those privileges would no longer be easy to understand, even in context.

> Any attempt at privilege escalation is reported and goes to the signer's reputation.
Why?  If the system works, you just block the escalation.  I'm not sure why anyone would care about this kind of reputation.  If the app works fine but keeps trying something that the sandbox consistently blocks - so what?
                                                        -- Jerry


