Artificial Intelligence Is Learning How To Outsmart Hackers

Adversarial attacks are subtly altered images, objects, or sounds that fool AI even though they look perfectly ordinary to humans. For example, a 3D printed turtle was classified as a rifle by the AI, and a 3D printed baseball was mistaken for an espresso.
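To make the idea concrete, here is a minimal sketch of one common attack, the fast gradient sign method (FGSM). The model, the random stand-in image, and the epsilon value are illustrative assumptions for this sketch, not details from the article:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative model choice; any image classifier would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is too small for a human to notice,
    # but it can flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)       # stand-in for a real photo
y = model(x).argmax(dim=1)           # the model's original prediction
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))    # may now disagree with the original
```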

These attacks could become more common unless AI can be made to outsmart them. In one real-world example, strategically placed stickers caused a self-driving car's vision system to misread a stop sign as a speed limit sign, a mistake that could cause a crash.

Researchers presenting at the International Conference on Learning Representations (ICLR) have found a new way to defend against adversarial attacks.

Some AI systems pick up on patterns in images that humans can't perceive, and those invisible patterns are exactly what attackers exploit. The researchers realized that the learning algorithm itself is working as designed; it's the training data that needs to be adjusted, so that the AI learns labels from robust features and can correctly identify objects even when an attacker tampers with them. It will take time to build strong defenses against these attacks, but researchers are moving in the right direction to make AI safer.
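One widely used way to bake this idea into training is adversarial training, where the model learns from attacked versions of its own training images. The sketch below is a generic example of that technique, not the researchers' actual method; `model`, `optimizer`, `images`, and `labels` are assumed placeholders:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on adversarially perturbed images (FGSM, as above)."""
    # First, generate adversarial versions of this batch.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Then train on the attacked images, so the model stops relying on
    # the fragile, human-invisible patterns an attacker can exploit.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```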

For more details, read the original article from ScienceMag here.

To learn more about adversarial attacks, go here.

Andre Moncayo
Marketing Associate