Adversarial attacks are subtly altered images, objects, or sounds that fool AI while looking or sounding completely normal to humans. For example, an image-recognition AI identified a 3D printed turtle as a rifle, and saw a 3D printed baseball as an espresso. As artificial intelligence moves into the real world, systems will have to handle inputs like these in real time.
Attacks like these are likely to become more common unless AI can be made robust against them. In one real-world demonstration, strategically placed stickers caused a vision system to misinterpret a stop sign as a speed limit sign, a mistake that could make a self-driving car crash.
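To make the idea concrete, here is a minimal sketch of one common way adversarial examples are crafted, the Fast Gradient Sign Method (FGSM). The classifier, input, and epsilon value below are illustrative assumptions, not details from the article:

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM).
# The model, input, and epsilon are placeholders for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in image classifier
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(image, label, epsilon=0.03):
    """Nudge each pixel by at most epsilon, so a human sees no change
    but the model's prediction can flip."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step in the direction that most increases the model's loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage: a random tensor stands in for a photo (e.g., a stop sign).
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())  # the change is tiny: at most epsilon
```

The perturbation is bounded per pixel, which is why attacked images look unchanged to people even though the model's answer can swing wildly.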
Researchers presenting at the International Conference on Learning Representations (ICLR) have proposed a new way to defend against adversarial attacks.
AI systems often latch onto patterns in images that humans cannot perceive, and those invisible patterns are exactly what attackers exploit. A first step is retraining the AI on labels that reflect what humans actually see, so it identifies objects for the right reasons. The programming itself works as intended; it is the training data that needs adjusting. Building strong defenses against attacks will take time, but researchers are moving in the right direction to make the future of artificial intelligence a bit safer. A rough sketch of this kind of retraining appears below.
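Assuming the defense works along the lines of adversarial training, a widely used approach of retraining on attacked inputs, here is a hedged sketch that reuses the fgsm_attack function from the earlier example; the optimizer and batch are again placeholders:

```python
# A rough sketch of adversarial training: augment each batch with attacked
# versions of the inputs so the model stops relying on fragile,
# human-invisible patterns. Reuses model, loss_fn, and fgsm_attack from above.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def adversarial_training_step(images, labels, epsilon=0.03):
    adv_images = fgsm_attack(images, labels, epsilon)  # craft attacks on the fly
    model.train()
    optimizer.zero_grad()  # also clears gradients accrued while crafting the attack
    # Learn labels that hold up on both clean and perturbed inputs.
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with one fake batch of four images:
xb = torch.rand(4, 3, 224, 224)
yb = torch.randint(0, 1000, (4,))
print(adversarial_training_step(xb, yb))
```

Training against its own attacks pushes the model toward features a human would also use, which is the spirit of the data adjustment described above.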
For more details, read the original article from ScienceMag here.
To learn more about adversarial attacks, go here.
For more information, check out these posts:
- Enhance Customer Experience with Artificial Intelligence and Robotics
- Quarantine Cyber Attackers With Artificial Intelligence
- In The News: The Global State Of Enterprise Analytics 2018 – How Cloud, Big Data And AI Are Key To The Future
- In the News: Attainable AI, from science fiction to science fact: The reality of today’s AI
- Squirrel AI: Adaptive Learning with AI and Big Data
- The Changing Landscape Of Disruptive Technologies: IoT, AI and Robotics