<div class="moz-cite-prefix">On 2014-02-04 07:13, Tom Mitchell
wrote:<br>
</div>
<blockquote
cite="mid:CAAMy4USTOKeRdsb80=xkNsb26hVvCJMxNtRBB69uT8djb=2z7w@mail.gmail.com"
type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote"><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Go
back to the paper that proposed using turbulence and
repeat some of their tests in a virtual environment. Let
us know what you *actually observe*.<br>
</blockquote>
</div>
</div>
</div>
</blockquote>

They made everything artificially simple, because otherwise there are
so many sources of timing randomness that you could not distinguish
the turbulence-induced timing randomness from everything else.

If you look at timing in a complex system, it looks random.
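
To make that concrete, here is a quick sketch (mine, not from the
paper; assumes Python 3.7+ for time.perf_counter_ns): time a trivial
operation and look at the low bits of the deltas. The bits bounce
around, but nothing in the measurement tells you which cause produced
any given bit.

import time
from collections import Counter

def sample_deltas(n=100000):
    # Deltas between consecutive reads of a nominally uniform-cost clock.
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        deltas.append(now - prev)
        prev = now
    return deltas

deltas = sample_deltas()
# The low-order bit of each delta often comes out near 50/50 ...
print(Counter(d & 1 for d in deltas))
# ... even though the jitter is produced by cache misses, interrupts,
# preemption, frequency scaling, and so on, which this measurement
# cannot tell apart.
print(Counter(deltas).most_common(5))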

To conclude that something that looks random truly is random, you
have to understand and measure the underlying causes of randomness.
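
The converse is easy to demonstrate: a completely deterministic source
sails through the same kind of statistical check. A sketch (my
example, not the paper's):

import random

# A seeded PRNG is completely deterministic: anyone who knows the
# mechanism and the seed can reproduce every bit.
rng = random.Random(42)
bits = [rng.getrandbits(1) for _ in range(100000)]

# Yet a simple monobit count looks exactly as "random" as timing jitter:
print(sum(bits), "ones out of", len(bits))   # close to 50000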

Isolating and identifying *one* such source of randomness required
them to artificially constrain the system in a way that was neither
realistic nor intended to be realistic.

So their argument, in essence, was that even when they took all these
extremely drastic measures to make the timing of events predictable,
the timing of events was *still* not predictable, due to underlying
physical processes.

From which we may confidently conclude that in more complex
situations, timing will be less predictable, not more predictable:
we have more sources of randomness, many of them poorly characterized,
interacting with other sources of randomness, only *one* of which is
well characterized.
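
There is a simple mechanical justification for that: XOR-ing one
genuinely unpredictable bit with any number of *independent* bits,
however biased or poorly characterized, cannot make the result more
predictable. A sketch of that, again mine rather than the paper's:

import random

good = random.SystemRandom()   # stand-in for the one well-characterized,
                               # genuinely unpredictable source
junk = random.Random(1234)     # a poorly characterized source:
                               # deterministic and heavily biased

def biased_bit():
    # Comes up 1 only about 5% of the time.
    return 1 if junk.random() < 0.05 else 0

def mixed_bit():
    # XOR of one unpredictable bit with independent junk bits.
    return good.getrandbits(1) ^ biased_bit() ^ biased_bit()

ones = sum(mixed_bit() for _ in range(100000))
print(ones, "ones out of 100000")   # still close to 50000: the junk
                                    # cannot make the mix more predictable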