Protecting image classification in artificial intelligence
Machine learning, a subset of artificial intelligence, has been utilized in an incredible number of applications. One popular application is image classification. From facial recognition on social media sites to computer vision for self-driving cars, image classification is a versatile and useful tool. With its success and widespread use come concerns about its safety from attacks.
To address vulnerability concerns, a new subfield of machine learning has emerged called adversarial machine learning, which focuses on the security of machine learning algorithms. Thomas Hogan, a doctoral student of mathematics at the University of California (UC) Davis, spent his summer investigating this new area of research during the National Science Foundation’s (NSF) Mathematical Sciences Graduate Internship (MSGI) Program.
The NSF MSGI program offers research opportunities for mathematical sciences doctoral students to participate in internships at national laboratories, industries and other facilities. NSF MSGI seeks to provide hands-on experience for the use of mathematics in a nonacademic setting.
During his internship with NSF MSGI, Hogan was stationed in the Computational Engineering Division at Lawrence Livermore National Laboratory (LLNL), Livermore, California. Under the mentorship of Bhavya Kailkhura, Ph.D., and Ryan Goldhahn, Ph.D., Hogan worked to find universal adversarial perturbations that could fool image classifiers.
Universal adversarial perturbations are near-imperceptible amounts of noise that can be added to an image to cause it to be misclassified. For instance, a universal perturbation could be added to a picture of a cat, causing the picture to be misclassified as a dog. The perturbations are called universal because the same perturbation works on any image, regardless of the image's content.
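The core idea can be illustrated with a minimal sketch (hypothetical NumPy code, not Hogan's actual algorithms): a single shared noise array is added to any input image, with its per-pixel magnitude bounded so the change stays near-imperceptible.

```python
import numpy as np

def apply_universal_perturbation(image, perturbation, epsilon=0.05):
    """Add one shared perturbation to an arbitrary image.

    The same `perturbation` array is reused for every input, which is
    what makes the attack "universal". `epsilon` is an illustrative
    L-infinity budget that keeps the added noise small.
    """
    # Bound the perturbation's per-pixel magnitude to the budget.
    bounded = np.clip(perturbation, -epsilon, epsilon)
    # Add the noise and keep pixel values in the valid [0, 1] range.
    return np.clip(image + bounded, 0.0, 1.0)

# One perturbation, applied unchanged to two unrelated "images".
rng = np.random.default_rng(0)
perturbation = rng.uniform(-1.0, 1.0, size=(32, 32, 3))
cat_image = rng.uniform(0.0, 1.0, size=(32, 32, 3))
dog_image = rng.uniform(0.0, 1.0, size=(32, 32, 3))

perturbed_cat = apply_universal_perturbation(cat_image, perturbation)
perturbed_dog = apply_universal_perturbation(dog_image, perturbation)
```

In a real attack, the perturbation would be optimized against a trained classifier rather than drawn at random; the sketch only shows the mechanics of applying one shared, bounded perturbation to many images.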
In certain situations, these perturbations could be seriously dangerous. “Recent research shows that an attacker can put a sticker on a stop sign, causing an image classifier to classify the stop sign as a ‘Speed Limit 70’ sign instead,” Hogan explained. “Obviously, this is a great concern for the safety of self-driving cars.”
Hogan spent much of his time reviewing the existing literature on universal perturbations for inspiration for new algorithms to test. He ran several experiments on a high-performance computer and reviewed the results to refine his code and algorithms. Hogan's efforts contribute to the general understanding of how these attacks function so they can be defended against in the future.
During his experience, Hogan strengthened his skills in programming, expanded his professional network, and most importantly to him, confirmed his decision to pursue a career as a data scientist.
“This internship has shown me many exciting aspects of data science, not only in the research I conducted, but in other problems I was exposed to in seminars and projects I discussed in group meetings,” Hogan said. “I would highly recommend this program. I think I will look back on it as an extremely important step in my career development,” he continued.
Hogan returns to UC Davis to complete his doctoral degree in mathematics. As a result of his research with NSF MSGI, Hogan is working on a manuscript intended for publication in a mathematical journal. After graduation, Hogan would like to pursue a career at a national laboratory or in industry as a data scientist.
The NSF MSGI Program is funded by NSF and administered through the U.S. Department of Energy’s (DOE) Oak Ridge Institute for Science and Education (ORISE). ORISE is managed for DOE by ORAU.