[Research] Professor Simon Woo's Research Lab Publishes a Paper at ACM Multimedia 2021
- College of Software and Engineering
- 2021-07-16
Research by Professor Woo and the students of his Data-driven AI Security HCI (DASH Lab) research group, Minha Kim and Shahroz Tariq, was accepted at the top-tier multimedia computer science conference, ACM Multimedia (ACMMM 2021) (BK IF = 4). The work will be presented in October 2021 in Chengdu, China.
In this work, the authors propose a method to detect fake media such as deepfakes and synthetic face images, which have recently emerged as a significant social issue. The method draws on deep learning techniques such as continual learning, knowledge distillation, and representation learning to efficiently detect not only earlier generations of deepfakes but also current ones.
While existing methods can detect deepfake videos and GAN-generated images with high accuracy, new deepfake and GAN generation techniques continue to diversify. This calls for countermeasures that detect new manipulation techniques while remaining effective on old ones. However, typical detectors require a large amount of training data for each generation technique, which is difficult to obtain in practice and makes model training time-consuming.
To address these problems, the authors proposed a domain-adaptive transfer learning method that retains only a small subset of the data used for old tasks (the source dataset) to prevent the model from forgetting. In practice, however, long-term preservation of source data is challenging, and retraining on it may raise privacy concerns. They therefore developed the CoReD algorithm, a continual learning approach based on representation learning and knowledge distillation. Compared with other baselines, CoReD improved detection performance on target domains while maintaining detection accuracy on the source domain.
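To illustrate the knowledge-distillation idea at the heart of such continual learning approaches, the sketch below shows a generic distillation loss: the new (student) model is trained on target-domain labels while a KL term keeps its predictions close to the frozen old (teacher) model, discouraging forgetting of the source task. The temperature, weighting, and function names here are illustrative assumptions, not the paper's actual CoReD implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy plus soft-label KL to the teacher.

    NOTE: generic distillation sketch, not the CoReD loss itself.
    `alpha` trades off learning the new task against matching the
    frozen teacher (i.e., remembering the old task).
    """
    # Hard-label cross-entropy on the new (target-domain) sample.
    student_probs = softmax(student_logits)
    ce = -math.log(student_probs[label])
    # KL divergence between softened teacher and student distributions.
    t_probs = softmax(teacher_logits, temperature)
    s_probs = softmax(student_logits, temperature)
    kl = sum(p * math.log(p / q) for p, q in zip(t_probs, s_probs))
    # Temperature^2 rescales gradients, as in standard distillation.
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```

When the student and teacher produce identical logits, the KL term vanishes and only the classification loss remains, so the student is free to fit new data exactly where it already agrees with the teacher.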