Deep neural networks are easily fooled! PhD student Jason Yosinski, together with coauthors from the University of Wyoming, is making news with work showing that "it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion)".
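For readers curious how such images arise, here is a minimal sketch of one way to produce them: gradient ascent on a single class's score, starting from pure noise. The paper itself relies mainly on evolutionary algorithms, and the model choice (torchvision's ResNet-18) and hyperparameters below are illustrative assumptions, not the authors' setup.

```python
# Sketch: push a random-noise image toward high confidence for one class.
# Assumes PyTorch and torchvision are installed; this is an illustration
# of the general fooling-image idea, not the paper's exact method.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

target_class = 291  # ImageNet index for "lion"
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Maximize the target class's log-probability (minimize its negative).
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"lion confidence: {confidence:.4f}")  # can approach 1.0 after enough steps
```

Because the optimization only cares about the network's score, nothing constrains the result to look like a lion to a human eye, which is exactly the mismatch the paper highlights.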
The result hit number one on Hacker News and has been written up by New Scientist, Gizmodo, phys.org, and ExtremeTech; an article by Wired is anticipated.