[Research] Professor Woo Simon’s Research Lab Publishes a Paper in The Web Conference 2021
- College of Software and Engineering
- 2021-01-25
A paper by Professor Woo and Ph.D. students Shahroz Tariq and Sangyup Lee of his DASH (Data-driven AI Security HCI, http://dash.skku.edu/) research lab has been accepted to The Web Conference (WWW) 2021 (BK IF=4), one of the top conferences in Web/Data Mining. The work will be presented in April. In it, they propose a deep learning-based algorithm, built on Convolutional LSTM, transfer learning, and domain adaptation techniques, to detect deepfakes efficiently. Deepfake generation methods are open to abuse; many people with malicious intent have used them to create fake videos of female celebrities. However, detecting such forged images and videos is challenging due to the lack of training data, and a generalized classifier that performs well across different types of deepfakes is urgently needed. To prevent such abuse, this work introduces a Convolutional LSTM-based Residual Network (CLRNet), which applies a unique model training procedure and exploits both spatial and temporal information in deepfake videos. The proposed model outperforms state-of-the-art deepfake detection models, and its performance was further validated on a Deepfake-in-the-Wild dataset.
[Title] “One Detector to Rule Them All: Towards a General Deepfake Attack Detection Framework”, The Web Conference 2021 (WWW 2021)
Deep learning-based video manipulation methods have become widely accessible to the masses. With little to no effort, people can quickly learn how to generate deepfake (DF) videos. In particular, women have often been victims of deepfakes, which are widely spread on the Web. While deep learning-based detection methods have been proposed to identify specific types of DFs, their performance suffers on other types of deepfake methods, including real-world deepfakes, on which they are not sufficiently trained. In other words, most of the proposed deep learning-based detection methods lack transferability and generalizability. Beyond detecting a single type of DF from benchmark deepfake datasets, we focus on developing a generalized approach to detect multiple types of DFs, including deepfakes from unknown generation methods such as DeepFake-in-the-Wild (DFW) videos. To better cope with unknown and unseen deepfakes, we introduce a Convolutional LSTM-based Residual Network (CLRNet), which adopts a unique model training strategy and explores spatial as well as temporal information in deepfakes. Through extensive experiments, we show that existing defense methods are not ready for real-world deployment, whereas our defense method (CLRNet) achieves far better generalization when detecting various benchmark deepfake methods (97.57% on average). Furthermore, we evaluate our approach on a high-quality DeepFake-in-the-Wild dataset collected from the Internet, containing numerous videos and more than 150,000 frames. Our CLRNet model generalizes well to high-quality DFW videos, achieving 93.86% detection accuracy and outperforming existing state-of-the-art defense methods by a considerable margin.
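The key idea behind using a Convolutional LSTM is that deepfake artifacts show up not only within single frames (spatial) but also across consecutive frames (temporal). The toy sketch below illustrates the ConvLSTM gating mechanism that carries memory across a frame sequence; it is not the authors' CLRNet implementation. For brevity it uses 1x1 (per-pixel) channel mixing in place of the spatial convolution kernels a real ConvLSTM would use, and all class/parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyConvLSTMCell:
    """Illustrative ConvLSTM cell with 1x1 convolutions (per-pixel
    channel mixing). A real ConvLSTM (and CLRNet) uses spatial kernels
    and residual connections; this only shows the gating that lets the
    model accumulate temporal information across video frames."""

    def __init__(self, in_ch, hid_ch, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix produces all four gates (i, f, o, g) at once.
        self.W = rng.standard_normal((4 * hid_ch, in_ch + hid_ch)) * 0.1
        self.b = np.zeros(4 * hid_ch)
        self.hid_ch = hid_ch

    def step(self, x, h, c):
        # x: (in_ch, H, W); h, c: (hid_ch, H, W)
        z = np.concatenate([x, h], axis=0)            # stack input + hidden channels
        gates = np.einsum('oc,chw->ohw', self.W, z) + self.b[:, None, None]
        i, f, o, g = np.split(gates, 4, axis=0)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_next = f * c + i * g                        # update cell memory
        h_next = o * np.tanh(c_next)                  # emit new hidden state
        return h_next, c_next

# Run a short sequence of 5 RGB "frames" through the cell.
cell = ToyConvLSTMCell(in_ch=3, hid_ch=8)
h = np.zeros((8, 16, 16))
c = np.zeros((8, 16, 16))
frames = np.random.default_rng(1).standard_normal((5, 3, 16, 16))
for frame in frames:
    h, c = cell.step(frame, h, c)
print(h.shape)  # (8, 16, 16): a spatio-temporal feature map
```

After the last frame, `h` is a feature map that depends on the whole sequence; a detector would pool such features and feed them to a classification head to label the clip real or fake.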