[Cryptography] Reproducible results, scientific method, fraud enforcement, etc.

John Denker jsd at av8n.com
Thu Oct 8 09:32:39 EDT 2015


In the thread about "blockchain and trustworthy computing",
on 10/06/2015 05:36 AM, Lodewijk andré de la porte wrote:
 
> We are supposed to be doing this with academic papers. Reproduce
> the result at another university that's as different as possible
> (different sponsors, affiliations, religions, researchers, nations,
> etc), and the trustworthiness increases. Thing is - the VW cars
> fooled the procedure. Replication does not make a difference. The
> procedure was faulty.

Replication /does/ make a difference.  It makes all the difference
in the world.  The fact that the performance claimed by VW could 
not be reproduced under real-world conditions is exactly what led 
to the discovery of the fraud.

  Also note that there is a big difference between inexactitude
  and fraud.  Making a mistake is not a crime.  Deliberate,
  systematic falsification of test results is a crime.
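  To see why replication in a second lab did *not* catch VW, while
  real-world driving did, consider what a defeat device actually does.
  Below is a minimal illustrative sketch; the condition names and
  thresholds are hypothetical, not VW's actual code:

```python
# Hypothetical sketch of a "defeat device".  The sensor names and
# thresholds are invented for illustration; they are not VW's logic.

def looks_like_emissions_test(steering_angle_deg, road_wheel_speed_kmh,
                              drive_wheel_speed_kmh):
    # On a dynamometer the drive wheels turn, but the car does not
    # steer and the non-driven wheels stay still.
    stationary_steering = abs(steering_angle_deg) < 1.0
    non_driven_wheels_still = road_wheel_speed_kmh < 0.5
    drive_wheels_turning = drive_wheel_speed_kmh > 5.0
    return (stationary_steering and non_driven_wheels_still
            and drive_wheels_turning)

def choose_calibration(steering_angle_deg, road_wheel_speed_kmh,
                       drive_wheel_speed_kmh):
    if looks_like_emissions_test(steering_angle_deg, road_wheel_speed_kmh,
                                 drive_wheel_speed_kmh):
        return "clean_mode"   # full NOx controls engaged; passes the lab test
    return "road_mode"        # controls relaxed; emissions far above the limit
```

  Every lab that replicates the *same* stationary test procedure
  triggers the same "clean_mode" and reproduces the same passing
  result.  Only measurement under real driving conditions, as in
  the on-road tests at West Virginia University, breaks the
  trigger condition and exposes the fraud.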


On 10/07/2015 10:17 AM, Robert L. Wilson wrote:

>> I am interested in whether that is actually being carried out as a
>> standard procedure in any, or many, disciplines. I certainly don't
>> object to it as a goal and hence as "supposed to be doing".  But
>> *Science* magazine, one of the major publishers of research
>> articles in many scientific subjects, recently carried the
>> suggestion that this ought to be done, as a reaction to the number
>> of scientific papers that have been retracted (worldwide, not just
>> in *Science*) recently. As a mathematician, where results don't
>> usually depend on lab equipment and procedures, the whole discovery
>> and publication activity is a little different from many of the
>> subjects they publish. But, from what I read in *Science*, I got
>> the impression this was not usually a condition for journal
>> acceptance in any subject. Is that so?

Prior replication has never been a criterion for publication in
scientific journals.  Anybody who has ever done scientific 
research would object to making it one.

The actual "method" of doing research is a lot less methodical
than most people imagine it to be.  It does not even remotely
resemble the five-step "scientific method" that people are
taught in grade school.

For example, consider the Bohr model of the atom, i.e. electrons
going around the nucleus in "orbits", like a miniature solar
system.  It was known at the time that it did *not* fit the
spectroscopic data for any element other than hydrogen.  It
was known at the time that it was inconsistent with the Maxwell
equations.  Now, apparently the suggestion is that Bohr should
have been prevented from publishing his model.  Maybe you don't
object to that suggestion, but you should.

In the experimental realm, consider the other Bob Wilson.  Suppose
you and Penzias have just discovered the microwave background
radiation.  Should you be prevented from announcing this, until
somebody else builds a fancy antenna and a super-sensitive
receiver and replicates the result?  Maybe you don't object
to that suggestion, but you should.

The fact is, *all* theories are imperfect and *all* experimental
data is imperfect.  Also, the first result in any area has to be
announced before it has been verified ... with the rarest of
exceptions, e.g. at CERN, where they intentionally did two
more-or-less independent Higgs searches in parallel.

Imperfect data and imperfect theories are stepping stones along
the road to better data and better theories.  This is how it
has always been, and how it will always be.

In science, reviewers are flatly forbidden from holding up
publication of a manuscript while they try to reproduce the
result.  Among other things, the manuscript is considered
confidential information, and using it as a guide to what
experiment to do would be highly unethical.

  Mathematics is an exception.  There is a weird process 
  whereby a novel result (especially if it is important)
  circulates informally for a time, while people look for
  errors in the proof, so that by the time the result is
  "officially" published it has a high degree of polish.

  This is in effect a two-tier publication system.  Everybody
  else uses a single tier, where the preliminary results are
  openly published.  Truly novel results are prized, not
  blocked.

=======

There is an important difference between the /data/ and the
/interpretation/ that is placed upon the data.  Enrico
Fermi got the Nobel prize for some interesting experiments.
The experiments were OK, but essentially every word of his 
interpretation was wrong, and about half of the prize citation
was wrong.  At first he (and everybody else) thought he had 
discovered new transuranic elements, when in fact it was 
just fission, yielding isotopes of old elements.

There are also experimental mistakes, such as a loose cable 
connector leading to the erroneous observation of faster-than-
light propagation.

Again it must be emphasized that there is a huge difference
between:
  1) Statistical fluctuations in the data, which are normal,
     not even a mistake.
  2a) Misinterpreting the data, which is a mistake.
  2b) Systematic error in the data, which is a mistake.
  3) Fraud, which is a crime.
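The difference between (1) and (2b) can be made concrete: averaging
more measurements shrinks statistical fluctuations, but does nothing
for a systematic offset.  A sketch with made-up numbers:

```python
import random

random.seed(42)
TRUE_VALUE = 10.0   # invented "true" quantity, for illustration only

def measure(n, bias=0.0, noise=1.0):
    """Mean of n measurements with Gaussian noise plus a fixed offset."""
    return sum(TRUE_VALUE + bias + random.gauss(0, noise)
               for _ in range(n)) / n

# Statistical fluctuation (case 1): the error of the mean shrinks
# roughly as 1/sqrt(n), so more data converges toward 10.0.
few = measure(10)          # noisy
many = measure(10_000)     # close to 10.0

# Systematic error (case 2b): no amount of averaging removes the
# bias; the mean converges to the wrong value, about 10.5.
biased = measure(10_000, bias=0.5)
```

Replication at an independent lab helps precisely because the second
lab is unlikely to share the first lab's systematic bias, even though
both labs see statistical fluctuations.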

This is important, because the mechanisms to detect and compensate
for mistakes are very different from the mechanisms to defend
against fraud.  Fraud, almost by definition, is /designed/ to
defeat the first line of defense.  The penalty for making an 
honest mistake is near zero.  Nobody thought Fermi should return
his Nobel prize.  Indeed, when they wanted to build the first 
nuclear reactor, Fermi was the obvious guy to rely on to build 
it.  In contrast, if you get caught painting the mice, it's the 
end of your career.

In the VW case, several guys have already lost their jobs, and 
the company is on the hook for billions of dollars in fines.
Not to mention possible criminal sanctions.  Fraud, like arson 
and murder, is virtually impossible to /prevent/ completely.
Efforts to suppress fraud involve a large measure of /deterrence/.
That is, there are retrospective penalties.

James Randi, a respected expert in the art of fooling people,
famously said "scientists are easier to fool than children".
That's because they are not accustomed to being lied to.
Experiments on the natural world can be subtle and confusing, 
but the atoms are not trying to attack you.  It is not an
adversarial situation.  OTOH scientists are not stupid, and
when they detect fraud they respond with harsh penalties.

  This sets natural science apart from (say) law and politics,
  where everybody lies all the time and it's considered no 
  big deal.

  This also sets natural science apart from crypto in particular
  and security more generally, where we have to assume that the
  system is under attack from all directions.

The original network protocols were designed by scientists for
scientists.  This explains the enormous security holes.  They
assumed that good guys would voluntarily comply with ethernet
and TCP protocols, and that anybody who intentionally cheated
on (say) the backoff algorithms would get caught eventually 
... and then fired.  This approach -- relying on 
retrospective enforcement -- is not necessarily a bad idea.
It can be made to work, and sometimes it is the only thing 
that works.
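The Ethernet case is concrete.  Binary exponential backoff asks each
station, after its k-th collision, to wait a random number of slots
in [0, 2^k - 1].  Nothing in the protocol prevents a station from
always choosing 0; such a cheater wins nearly every contention.  A
toy simulation (not real CSMA/CD, just the backoff draw):

```python
import random

random.seed(1)

def backoff_slots(collisions, cheat=False):
    """Binary exponential backoff: draw a random slot in [0, 2^k - 1].
    A cheater always picks 0 and therefore retransmits first."""
    if cheat:
        return 0
    k = min(collisions, 10)      # standard cap at 2^10 - 1 slots
    return random.randrange(2 ** k)

def contend(rounds=1000):
    """Two stations collide repeatedly; the smaller draw transmits first.
    Returns how many rounds the cheater wins outright."""
    cheater_wins = 0
    for _ in range(rounds):
        honest = backoff_slots(collisions=3)               # uniform in [0, 7]
        cheater = backoff_slots(collisions=3, cheat=True)  # always 0
        if cheater < honest:
            cheater_wins += 1
    return cheater_wins

# The cheater wins every round in which the honest station draws > 0,
# i.e. about 7/8 of the time.
```

The protocol itself offers no defense; the designers were counting on
the cheater eventually being identified and punished -- exactly the
retrospective enforcement described above.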

For example, in the late 1960s and early 1970s there was a
rash of people hijacking airliners to Cuba.  Airlines tried 
all sorts of defensive measures, but the thing that really 
worked was a US/Cuba extradition agreement.

  The same approach could be used in cyberspace.  For example,
  one could say to Nigeria:  We will cut you off from the
  internet backbone unless you catch and extradite the 419
  lads.

I do not understand why people get so worked up over the VW
situation.  I do not see it as a breakdown of The System.
It's a crime -- a rather banal crime.  Existing punishments
fit the crime:  We catch the guys, throw them in jail, levy 
a fine on the order of 10^10 dollars, and life goes on.  The 
next guy who wants to try something similar will think twice.

  OTOH if VW manages to weasel out of the sanctions, then
  *at that point* we will have a breakdown of The System, 
  and at that point we should get seriously up in arms.

In the meantime ... it's just a crime.  It's notable for
its magnitude ... but it's otherwise quite banal.  It's 
nothing we can't handle.

We should be infinitely more worried about attacks where
the perps are beyond the reach of the law.
