Google is currently in a bit of hot water with some of the world’s most powerful companies, which are peeved that their ads have been appearing next to racist, anti-Semitic, and terrorist videos on YouTube. Recent reports brought the issue to light, and in response, brands have been pulling ad campaigns while Google pours more AI resources into vetting videos’ content. But the problem is, the search giant’s current algorithms might just not be up to the task.
A recent research paper, published by the University of Washington and spotted by Quartz, makes the problem clear. It tests Google’s Cloud Video Intelligence API, which automatically classifies the content of videos using object recognition. (The system is currently in private beta, but has been “applied on large-scale media platforms like YouTube,” says Google.) The API, which is powered by deep neural networks, performs well on ordinary videos, but the researchers found it was easily tricked by a determined adversary.
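For a sense of how such a system is used, here is a minimal sketch of a label-detection request with the Python client for the Cloud Video Intelligence API. The bucket URI is a placeholder, and the client library shown postdates the private beta described here, so treat this as illustrative rather than the researchers’ actual setup:

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Ask the API to detect labels (objects, scenes) for a video in Cloud Storage.
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://your-bucket/Animals.mp4",  # placeholder path
    }
)
result = operation.result(timeout=300)

# Print each video-level label with the API's confidence score.
for label in result.annotation_results[0].segment_label_annotations:
    confidence = label.segments[0].confidence
    print(f"{label.entity.description}: {confidence:.2f}")
```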
In the paper, the University of Washington researchers describe how a test video (provided by Google and named Animals.MP4) is given the tags “Animal,” “Wildlife,” “Zoo,” “Nature,” and “Tourism” by the company’s API. However, when the researchers inserted pictures of a car into the video, the API said, with 98 percent certainty, that the video should be given the tag “Audi.” These inserted frames, called “adversarial images” in this context, appeared roughly once every two seconds.
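Notably, the attack requires no access to the model’s internals, just the ability to edit the video file. The paper doesn’t specify the researchers’ tooling, but the frame-substitution step could look something like this hypothetical OpenCV sketch:

```python
import cv2

def insert_adversarial_frames(src_path, adv_image_path, dst_path, interval_s=2.0):
    """Overwrite one frame roughly every `interval_s` seconds with a still image."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(
        dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )

    # Resize the adversarial image to match the video's frame dimensions.
    adv_frame = cv2.resize(cv2.imread(adv_image_path), (width, height))
    step = max(1, int(round(fps * interval_s)))

    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Every `step` frames, substitute the adversarial image for the real frame.
        writer.write(adv_frame if index % step == 0 else frame)
        index += 1

    cap.release()
    writer.release()
```

Because only a tiny fraction of frames change, the video looks essentially unaltered to a human viewer, while a classifier that samples frames can end up dominated by the inserted image.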
“Such vulnerability seriously undermines the applicability of the API in adversarial environments,” write the researchers. “For example […] an adversary can bypass a video filtering system by inserting a benign image into a video with illegal contents.”
This work underscores a clear trend in the tech world. As companies like Google, Facebook, and Twitter deal with unsavory content on their platforms, they’re increasingly turning to artificial intelligence to help sort and classify data. However, AI systems are never perfect, and they often make mistakes or can be tricked. This has already been demonstrated with Google’s anti-troll filters, which are designed to classify insults but can be fooled by slang, rogue punctuation, and typos. It seems it still takes a human to reliably tell us what humans are really up to.