[Cryptography] Amazing possibilities for steganography

mok-kong shen mok-kong.shen at t-online.de
Tue Sep 26 05:49:34 EDT 2017


On 25.09.2017 at 22:55, Henry Baker wrote:
> At 01:43 PM 9/25/2017, Jerry Leichter wrote:
> [snip]
>> There's a whole bunch of work like this recently.  Initially it showed that you could alter the inputs at the bit level and fool the recognizers.  By now the state of the art is showing *physically realizable* changes that will fool the recognizers.  For example, one paper showed how to print "artistic" eyeglass frames that either caused the image recognizers to fail to recognize anything - or in some cases cause them to recognize a chosen different face.  Another paper showed how to print covers you could paste over street signs that would look like someone's idea of abstract art, or sometimes would be unnoticeable to humans - but which could cause, for example, a recognizer to see a speed limit sign where there was really a stop sign.  Really scary for AI's that will drive cars, actually.
>>
>> We really should have known better.  For years now, we've found that systems *specifically designed to be safe against attacks* fall to intelligent attackers.  And yet we assumed that deep learning systems - which no one really understands - will magically be safe when directly attacked.  It turns out that all you need to do is turn their own algorithms loose looking for ways in and - bang, the whole facade falls apart.
>>
>> BTW, some of the results show that, yes, if you feed some of the attack images in as negative examples, the systems will get better at avoiding them - but you can still generate attacks that get through.  The space of attack images is huge and there are plenty of ways to sample from it.
> Oh, but we're going to put all of our faith in neural-net AI to find & recognize malware?  How many $$$billions are going to go down that rathole...  "I see England, I see France; I see Putin's underpants!" ;-)
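
As an aside, the "turn their own algorithms loose looking for ways in"
remark quoted above can be made concrete in a few lines. The sketch below
is only an illustration of the well-known fast-gradient-sign idea, not
code from any of the cited papers; the PyTorch classifier `model` and the
labelled input batch are my own placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps=0.03):
        # Ask the model itself which direction in pixel space raises its
        # loss, then step that way: its own gradients point the way in.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        # Keep the result a valid image (pixel values in [0, 1]).
        return x_adv.clamp(0.0, 1.0).detach()

Feeding such perturbed images back into training with their correct
labels is the "negative examples" defence mentioned above; as noted, it
raises the bar but does not exhaust the space of attacks.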
I still fail to see why nation-states would sensibly invest any money or
effort in steganographic channels of whatever kind to protect their own
essential communications, for they are entirely free to apply whatever
strong encryption they wish. Only for the common people in certain
non-democratic regions of the world, where the regimes officially or
unofficially erode communication privacy, forbid the use of encryption
and in some circumstances even imprison those who use it, could
steganography in my view be a viable and valuable means of communicating
with one another (a minimal low-tech sketch is appended below). But such
people are, by and large, in no position to employ anything even remotely
approaching the technical expenditure and complexity involved in deep
learning and the like. So my personal conjecture is that the techniques
being discussed are of significance neither to nation-states nor to the
common people in non-democratic regions, though further R&D on these
techniques may certainly bear highly valuable fruit in science and
commerce.
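
To make the contrast concrete, here is a minimal sketch of a low-tech
steganographic channel that needs nothing beyond an ordinary computer:
hiding a short message in the least-significant bits of a lossless cover
image. The library (Pillow) and the file names are merely my own choices
for illustration, and the scheme is of course trivially detectable by a
determined censor; the point is only the modest technical effort, in
contrast to deep-learning-based embedding.

    from PIL import Image

    def embed(cover_path, message, out_path):
        # Write the message bits (NUL-terminated) into the lowest bit of
        # each colour channel; save to a lossless format such as PNG.
        img = Image.open(cover_path).convert("RGB")
        bits = "".join(f"{b:08b}" for b in message.encode()) + "00000000"
        flat = [c for px in img.getdata() for c in px]
        if len(bits) > len(flat):
            raise ValueError("cover image too small for this message")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | int(bit)
        img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
        img.save(out_path)

    def extract(stego_path):
        # Read the low bits back out, eight at a time, until the NUL byte.
        flat = [c for px in Image.open(stego_path).convert("RGB").getdata()
                for c in px]
        out = bytearray()
        for i in range(0, len(flat) - 7, 8):
            byte = int("".join(str(c & 1) for c in flat[i:i + 8]), 2)
            if byte == 0:
                break
            out.append(byte)
        return out.decode(errors="replace")

embed("cover.png", "meet at noon", "stego.png") followed by
extract("stego.png") round-trips the message, and the stego image is
indistinguishable from the cover by eye.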

M. K. Shen

