The answer appears to be yes!

Evidently, “the algorithms that computers use to determine what objects are – a cat, a dog, or a toaster, for instance – have a vulnerability.” That vulnerability is exploited by what is called an adversarial example, and according to this post it has the potential to trick computers into thinking a banana is a toaster. That may sound silly, but it is also potentially dangerous, and the post explains why.
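For the curious, the basic idea can be sketched in a few lines. This is only an illustrative toy (a made-up linear "classifier" with invented weights, not the actual method or model from the post): a small, deliberately chosen nudge to every input value shifts the classifier's score just enough to flip its decision, even though the input barely changes.

```python
import numpy as np

# Toy linear classifier: score = w @ x.
# Negative score -> "banana", non-negative -> "toaster".
# (Weights and input are invented for illustration.)
w = np.array([0.5, -0.3, 0.8, -0.1])
x = np.array([1.0, 2.0, -1.0, 0.5])   # a flattened "banana" image

label = lambda s: "banana" if s < 0 else "toaster"
print(label(w @ x))                    # classified as "banana"

# Adversarial nudge: step each value slightly in the direction
# that most increases the score (the sign of the gradient, which
# for a linear model is just the sign of w).
eps = 0.6
x_adv = x + eps * np.sign(w)
print(label(w @ x_adv))                # now classified as "toaster"
```

The perturbed input differs from the original by at most 0.6 per value, yet the classifier's answer changes completely; real attacks on image classifiers work on the same principle with far smaller, nearly invisible perturbations.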

(Thanks to John Riley of Gabriel Books for providing ATG Quirkies.)