Scientists Prove That Current Deepfake Detectors Can Still Be Fooled
Multiple companies, including Microsoft and Facebook, as well as researchers from the University of Southern California, have developed technologies to combat deepfakes and prevent the spread of false media and misinformation. A group of scientists has still managed to fool them, however.
A group of computer scientists from UC San Diego has warned that it is still possible to fool current deepfake detection systems by inserting inputs called “adversarial examples” into every video frame. The scientists presented their findings at the WACV 2021 computer vision conference, which took place online last month.
Adversarial examples are slightly manipulated inputs that cause artificial intelligence systems, such as machine learning models, to make mistakes. In the video above, the scientists show that XceptionNet, a deepfake detector, labels the adversarial video they created as “real.” The team also showed that the attack continues to work even after the videos are compressed.
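The researchers' exact attack is described in their paper, but the core idea can be sketched in a few lines. Below is a minimal, hypothetical example using the fast gradient sign method (FGSM), one common way of crafting adversarial examples; the `model`, label, and `epsilon` value are illustrative assumptions, not the method from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_frame(model, frame, true_label, epsilon=0.003):
    """Perturb one video frame so a detector misclassifies it.

    Assumes `model` is a binary deepfake classifier mapping a
    (1, 3, H, W) float tensor to logits for [real, fake], and that
    `true_label` holds the detector's correct answer ("fake").
    `epsilon` bounds the per-pixel change, keeping it imperceptible.
    """
    frame = frame.clone().detach().requires_grad_(True)

    logits = model(frame)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()

    # Nudge every pixel in the direction that most increases the
    # detector's loss, i.e. pushes it toward the wrong answer.
    adversarial = frame + epsilon * frame.grad.sign()

    # Keep pixel values in the valid image range.
    return adversarial.clamp(0.0, 1.0).detach()
```

Applied to every frame of a deepfake video, a perturbation like this can flip the detector's verdict while leaving the footage looking unchanged to a human viewer.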
As Engadget explains, most of these detectors work by tracking faces in a video and sending cropped face data to a neural network, which then looks for elements that deepfakes typically fail to reproduce well, such as blinking.
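As a rough illustration of that crop-and-classify pipeline (not any detector's actual code), the sketch below uses OpenCV's Haar cascade face detector as a stand-in for the face tracker and assumes a hypothetical `model`, such as an XceptionNet-style classifier, that outputs [real, fake] logits; the 299×299 input size and majority-vote threshold are also assumptions.

```python
import cv2
import torch

def detect_deepfake(video_path, model, input_size=299):
    """Run a crop-and-classify deepfake check over a video.

    `model` stands in for a detector like XceptionNet and is assumed
    to map a (1, 3, H, W) float tensor to logits for [real, fake].
    """
    face_finder = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    fake_votes, total = 0, 0

    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break

        # Step 1: find the face in the frame.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_finder.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            continue

        # Step 2: crop the face region and prepare it for the network.
        x, y, w, h = faces[0]
        face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        crop = cv2.resize(face, (input_size, input_size))
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0

        # Step 3: let the network look for telltale deepfake artifacts.
        with torch.no_grad():
            logits = model(tensor)
        fake_votes += int(logits.argmax(dim=1).item() == 1)
        total += 1

    capture.release()
    # Flag the video if most of the analyzed frames look synthetic.
    return total > 0 and fake_votes / total > 0.5
```

Because the final verdict rests entirely on what the network sees in those cropped faces, an adversarial perturbation baked into each frame attacks the pipeline at its most vulnerable point.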
By inserting adversarial examples into each video frame, the scientists found that they could fool those deepfake detectors into believing the videos were the real deal. As stated in their paper:
To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.
Ever since deepfakes started to become more popular and easier to produce (Adobe has even built deepfake-like filters into its programs as a slider), the fear that they could be used maliciously has become very real. Facebook has been struggling with them since 2019. As these scientists have proven, the automated technologies being developed to combat misinformation may not yet be up to the task.
(via Engadget)