For all its impressive progress in mastering human tasks, artificial intelligence has an embarrassing secret: It’s surprisingly easy to fool. This could be a big problem as it takes on greater responsibility for people’s lives and livelihoods.

Thanks to advances in neural networks and “deep learning,” computer algorithms can now beat the best human players at games like Go and recognize animals and objects in photos. In the foreseeable future, they’re likely to take over all sorts of mundane tasks, from driving people to work to managing investments. Being less error-prone than humans, they might also handle sensitive tasks such as air traffic control or scanning luggage for explosives.

But in recent years, computer scientists have stumbled onto some troubling vulnerabilities. Subtle changes to an image, so insignificant that no human would even notice, can make an algorithm see something that isn’t there. It might perceive machine guns laid on a table as a helicopter, or a tabby cat as guacamole. Initially, researchers needed to be intimately familiar with an algorithm to construct such “adversarial examples.” Lately, though, they’ve figured out how to do it without any inside knowledge.
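To give a sense of how little it takes, here is a minimal sketch of one common recipe for building adversarial examples, the “fast gradient sign method”: nudge every pixel a tiny amount in whichever direction most increases the classifier’s error. The tiny network and random “image” below are stand-ins for illustration only; real attacks target trained systems.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting adversarial images.
# The model and "image" here are toy placeholders, not a real vision system.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Shift each pixel by +/- epsilon in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # The change is capped at epsilon per pixel -- far too small for a person
    # to notice, yet aimed squarely at the model's weak spots.
    return (image + epsilon * image.grad.sign()).detach()

# Toy demonstration with a made-up classifier and a random 32x32 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adversarial = fgsm_perturb(model, image, label)
print("Max pixel change:", (adversarial - image).abs().max().item())  # roughly epsilon
```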

Speech recognition algorithms are similarly vulnerable. On his website, computer scientist Nicholas Carlini offers some alarming examples: A tiny distortion of a four-second audio sample of Verdi’s Requiem induces Google’s speech recognition system to transcribe it as “Okay Google, browse to Evil.com.” Human ears don’t even notice the difference. By tailoring the noise slightly, Carlini says, it’s easy to make Google transcribe a bit of spoken language as anything you like, no matter how seemingly different.
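The targeted version of the trick works along similar lines: the attacker searches for a barely audible perturbation that pushes the system toward a transcription of their choosing. The sketch below illustrates the idea with a toy “transcriber” and a random waveform; it is not Carlini’s actual attack on a real speech system, just the general shape of one.

```python
# Conceptual sketch of a targeted adversarial attack on audio: iteratively tune
# a tiny perturbation so a toy model outputs the attacker's chosen label, while
# keeping the change below a hearing-threshold-like bound. Illustration only.
import torch
import torch.nn as nn

def targeted_attack(model, audio, target_label, max_delta=0.001, steps=200, lr=1e-4):
    delta = torch.zeros_like(audio, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Push the model's output toward the attacker's desired transcription.
        loss = nn.functional.cross_entropy(model(audio + delta), target_label)
        loss.backward()
        optimizer.step()
        # Clamp the perturbation so a listener would not notice it.
        with torch.no_grad():
            delta.clamp_(-max_delta, max_delta)
    return (audio + delta).detach()

# Toy stand-ins: a random one-second "waveform" and a made-up classifier.
model = nn.Sequential(nn.Linear(16000, 50))
audio = torch.rand(1, 16000)
target = torch.tensor([7])   # the output the attacker wants
adversarial_audio = targeted_attack(model, audio, target)
```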

It’s not hard to imagine how such tricks could be put to nefarious ends. Surveillance cameras could be fooled into identifying the wrong person – indeed, any desired person – as a criminal. Imperceptible changes to a “Stop” sign could make the computers in a self-driving car read it as “Speed Limit 80.” Innocuous-sounding music could carry hidden commands to nearby phones, instructing them to send texts or emails containing sensitive information.

There’s no easy fix. Researchers have yet to devise a successful defense strategy. Even the lesser goal of helping algorithms identify adversarial examples (rather than outsmart them) has proven elusive. In recent work, Carlini and David Wagner, both at the University of California, Berkeley, tested ten detection schemes proposed over the past year and found that they could all be evaded. In its current form, artificial intelligence just seems remarkably fragile.

Until a solution can be found, people will have to be very cautious in transferring power and responsibility to smart machines. In an interview, Carlini suggested that further research could help us understand where, when and how algorithms can be deployed safely, and what non-AI safeguards may be needed alongside them. Self-driving cars, for example, might need restrictions enforced by separate sensors that stop them from running into an object, regardless of what the onboard camera thinks it sees.

The good news is that scientists have identified the risk in time, before humans have started relying too much on artificial intelligence. If engineers pay attention, we might at least be able to keep the technology from doing completely crazy things.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.