[Cryptography] Amazing possibilities for steganography

Henry Baker hbaker1 at pipeline.com
Mon Sep 25 16:55:09 EDT 2017


At 01:43 PM 9/25/2017, Jerry Leichter wrote:
>> On Sep 25, 2017, at 9:06 AM, Henry Baker <hbaker1 at pipeline.com> wrote:
>> 
>> You've GOT to watch this 5-minute video about AI image recognition systems to gain a better appreciation of what they can and can't do.
>> 
>> You can easily appreciate that someone could invert this process to produce high-quality steganography for both images and sounds (see the sketch after the links below).
>> 
>> If this video interests you, the rest of these links tell more.
>> 
>> https://www.youtube.com/watch?v=M2IebCN9Ht4
>> 
>> Deep Neural Networks are Easily Fooled
>> 
>> Evolving AI Lab
>> 
>> Published on Dec 16, 2014
>> 
>> A video summary of the paper: Nguyen A, Yosinski J, Clune J.  Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.  In Computer Vision and Pattern Recognition (CVPR '15), IEEE, 2015.
>> 
>> http://www.evolvingai.org/fooling
>> 
>> https://arxiv.org/pdf/1412.1897.pdf
>> 
>> http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf
>> 
>> http://www.evolvingai.org/files/DNNsEasilyFooled.zip
>> 
>> http://www.evolvingai.org/share/fooling_images_5000_cppn.tar.gz
>> 
>> http://www.evolvingai.org/files/70_images_entry_v2_web.jpg
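
To make the "invert this process" remark concrete: below is a minimal
sketch of one way it might work, assuming PyTorch and a pretrained
classifier whose weights the sender and receiver both hold.  The names
(encode_symbol, decode_symbol) and the constants are mine for
illustration, not from the paper.  The idea: nudge a cover image with
a tiny, bounded perturbation until the shared network assigns a chosen
class; the class index is the hidden symbol.

import torch
import torch.nn.functional as F

def encode_symbol(model, image, target_class, eps=0.03, steps=200, lr=0.01):
    # Optimize a bounded perturbation so the shared model reports target_class.
    model.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((image + delta).unsqueeze(0))   # add batch dimension
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the change imperceptible
    return (image + delta).detach().clamp(0, 1)

def decode_symbol(model, stego_image):
    # The receiver runs the identical model; the top class is the symbol.
    model.eval()
    with torch.no_grad():
        return model(stego_image.unsqueeze(0)).argmax(dim=1).item()

With a 1000-class model that's only ~10 bits per cover image, but the
same optimization applies to audio classifiers, which is why the "and
sounds" part is plausible too.
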
>There's been a whole bunch of work like this recently.  Initially the work showed that you could alter the inputs at the bit level and fool the recognizers.  By now the state of the art is showing *physically realizable* changes that will fool the recognizers.  For example, one paper showed how to print "artistic" eyeglass frames that either caused the image recognizers to fail to recognize anything - or in some cases caused them to recognize a chosen different face.  Another paper showed how to print covers you could paste over street signs that would look like someone's idea of abstract art, or sometimes would be unnoticeable to humans - but which could cause, for example, a recognizer to see a speed limit sign where there was really a stop sign.  Really scary for AIs that will drive cars, actually.
>
>We really should have known better.  For years now, we've found that systems *specifically designed to be safe against attacks* fall to intelligent attackers.  And yet we assumed that deep learning systems - which no one really understands - would magically be safe when directly attacked.  It turns out that all you need to do is turn their own algorithms loose looking for ways in and - bang, the whole facade falls apart.
>
>BTW, some of the results show that, yes, if you feed some of the attack images in as negative examples, the systems will get better at avoiding them - but you can still generate attacks that get through.  The space of attack images is huge and there are plenty of ways to sample from it.
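
For anyone who hasn't seen how cheap the bit-level version of this
attack is, here's a minimal sketch of the fast gradient sign method
(Goodfellow et al., 2014), assuming PyTorch; the helper name and
epsilon are mine.  One gradient step on the *input* rather than the
weights is often enough to flip the label:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, eps=0.007):
    # Take one signed-gradient step on the input, not the weights.
    model.eval()
    x = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([true_label]))
    loss.backward()
    # The model's own gradient points straight at its blind spot.
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)

This is also why retraining on attack images only helps so much: each
hardened model defines a new loss surface, and the same one-liner finds
a new direction through it.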

Oh, but we're going to put all of our faith in neural-net AI to find & recognize malware?  How many $$$billions are going to go down that rathole...  "I see England, I see France; I see Putin's underpants!" ;-)


