


Motivation
Many critical monitoring and decision systems make decisions based on image, video, audio, voice-command, and other sensor inputs. These decisions are nowadays often made by Artificial Intelligence (AI) systems built on complex algorithms such as deep neural networks (DNNs), which have achieved state-of-the-art performance in computer vision, speech recognition, and other areas. However, AI-based algorithms such as DNNs have been shown to be easily fooled by adversarial perturbations, which slightly modify a legitimate input (such as an image or a voice sample) in a specific direction while remaining perceptually indistinguishable from the original. This presents a significant security risk for the aforementioned systems. This project tackles the problem of detecting such "forgeries" by constructing detectors based on fundamental theories of statistical inference, which are well suited to detecting weak signal perturbations. The general approach is closely related to steganalysis (the detection of secret patterns embedded in cover signals).
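As a rough illustration of this detection viewpoint, the sketch below casts adversarial detection as a hypothesis test on a simple high-pass residual statistic, in the spirit of steganalysis. The kernel, the choice of statistic, and the threshold calibration are illustrative assumptions, not the project's actual detector.

```python
import numpy as np
from scipy.ndimage import convolve

# High-pass (Laplacian-like) kernel; the residual of a clean natural image
# through this filter tends to carry less energy than that of an image
# contaminated by a weak additive perturbation.
HIGH_PASS = np.array([[0., -1., 0.],
                      [-1., 4., -1.],
                      [0., -1., 0.]])

def residual_statistic(image: np.ndarray) -> float:
    """Test statistic: energy of the high-pass residual of a 2-D image."""
    residual = convolve(image.astype(np.float64), HIGH_PASS, mode="reflect")
    return float(np.mean(residual ** 2))

def is_suspicious(image: np.ndarray, threshold: float) -> bool:
    """Neyman-Pearson-style decision: flag the input when the statistic
    exceeds a threshold calibrated on clean (unperturbed) images."""
    return residual_statistic(image) > threshold
```

In practice the threshold would be set from the empirical distribution of the statistic on clean data, e.g. its 99th percentile for a 1% false-alarm rate.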
Achievements
AI-based algorithms such as DNNs have been shown to be easily fooled by adversarial perturbations, which slightly modify a legitimate input (such as an image or a voice sample) in a specific direction while remaining perceptually indistinguishable from the original. This project constructs detectors of such forgeries based on fundamental theories of statistical inference, which are well suited to detecting weak signal perturbations.
Achieved milestones:
- Developed a novel AI-based approach to actively detect adversarial attacks against deep learning systems, without any fine-tuning of the networks themselves.
- Developed a novel pre-processing method that purifies signals which may carry adversarial perturbations, so that the signal can be safely processed and analysed by the intended AI system without being fooled (a generic purification sketch is given after this list).
- Developed a novel AI-based method to disentangle variations in face image data that may cause errors in face-analysis AI modules (e.g., verification, recognition).
- Developed a GAN-based method to support authentication of the AR code developed by i-Sprint (in collaboration with our industry partner).
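For concreteness, the sketch below shows what a generic input-purification pre-processing step can look like, using two well-known operations (bit-depth reduction and local median smoothing) as stand-ins; it is an illustrative assumption, not the project's own purification method.

```python
import numpy as np
from scipy.ndimage import median_filter

def purify(image: np.ndarray, bits: int = 4, window: int = 3) -> np.ndarray:
    """Suppress weak adversarial perturbations before the image reaches the
    downstream AI model. `image` is assumed to be a float array in [0, 1]."""
    # 1. Bit-depth reduction: quantising to 2**bits levels removes
    #    small-amplitude, high-precision perturbations.
    levels = 2 ** bits - 1
    squeezed = np.round(np.clip(image, 0.0, 1.0) * levels) / levels
    # 2. Local median smoothing: removes isolated pixel-level noise while
    #    preserving edges and coarse structure.
    return median_filter(squeezed, size=window)
```

The purified image, rather than the raw input, would then be passed to the verification, recognition, or other downstream AI module.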