[Cryptography] On New York's new "Cybersecurity Requirements for Financial Services Companies"

Perry E. Metzger perry at piermont.com
Wed Mar 1 12:38:32 EST 2017


New York State's Department of Financial Services recently published
its brand new regulation for banks, insurers and other similar
companies entitled "Cybersecurity Requirements for Financial Services
Companies":

http://www.dfs.ny.gov/legal/regulations/adoptions/rf23-nycrr-500_cybersecurity.pdf

The regulation will likely apply to a large swath of the world's
commercial and investment banks and insurers because they do business
in New York. It got a bunch of notice in the financial press as a
result.

The document is short, and I suggest that anyone interested in the
debate about what sort of involvement government can usefully have in
computer security should read it.

Often, people propose that the problem in computer security is that we
don't have enough regulation. If only the government were involved, we
would be doing a better job. I've heard this quite a lot, especially
from politicians and other people who are not security professionals.

However, no one can ever quite articulate what the government would
ask companies to do differently. Would it be to tell them to rotate
passwords frequently, or to adopt similar worst practices that
actually reduce security? Would it be to tell them to make sure that
their software has no bugs?

Or would they choose the more benign but nonetheless not very useful
approach of telling everyone to be careful, mandating that care by
making firms file lots of paperwork saying they are being careful,
and punishing them if they fail to file the paperwork on time?

New York State has taken this latter approach.

I will not claim that the people charged with writing this regulation
did badly given what they were told to do. They did what was
reasonable given an impossible demand made on them by the governor
and other politicians: they were mandated to write a regulation, and
they produced one that, at least, asks nothing impossible and does
not try to set bad technological decisions in stone.

If I were the regulator, I might have written a very similar
document. Possibly I would have added some sort of requirements about
patching policy, but really, under the circumstances they did what one
could have reasonably expected of them.

However, the demand that they create such a regulation wasn't
particularly useful, and the output also isn't particularly useful,
probably because it inherently couldn't ever be particularly useful.

Most of the useful things it calls for, like having people who are
responsible for security, and having policies about auditing and periodic
testing, are already in place at essentially 100% of financial
institutions. After all, financial services firms spend a fortune
trying to keep themselves secure, and have for many years. However, in
spite of the fact that all the newly mandated regulatory requirements
are already in place at essentially every single firm, security
breaches happen quite regularly.

In addition to what is done today, the regulation imposes a lot of
paperwork requirements, especially for filings with the state and for
the presence of loads of specific written policies which you will be
penalized for failing to have and to file. I suspect that when
breaches happen, if there are public calls for blood, regulators
will find minor paperwork violations and punish firms for them, and
will thus be seen to have done something. Note that even if these
minor violations had never happened, it is doubtful that the outcome
would have been different. The problem in security is not, after all,
failing to file notice within 30 days that your new subsidiary
company's security was covered by the parent firm's security
department.

The real issues in security are, of course, elsewhere. One of the
biggest issues is people making bad decisions about security, which
is unfortunately not really something you can quantify or regulate for
the most part, since there is no objective measure for "bad
decision". If bad decisions were truly obvious to everyone,
then people wouldn't make them. Even if you mandate particular
technologies, particular training regimes, particular licensing, and
a host of other burdensome requirements, people will still make
mistakes, and likely at an undiminished rate.

So what has happened has been, in essence, security theater. New York
State's politicians (including the governor) wanted to be seen as
having "done something" about computer security. They mandated that
the regulators would make rules, without really knowing what the rules
might say since they don't know anything themselves about computer
security.

The rules were drafted, to the best of the ability of the regulators,
and they tell financial services firms to do what more or less 100% of
them already do, plus to file paperwork saying that they are doing
those things. When breaches happen, as they will, since following
essentially these policies hasn't stopped breaches so far, everyone
can console themselves by knowing that there were written policies in
place that failed to have any real effect.

I think there is an obvious lesson here, which is that government
regulation isn't a magic force capable of solving problems that
dedicated professionals in a field have not been able to solve on
their own.


Perry
-- 
Perry E. Metzger		perry at piermont.com
