Vision-Language Models (VLMs) are becoming increasingly vulnerable to adversarial attacks as novel attack strategies continue to be proposed against these models. While existing defenses excel in unimodal contexts, they currently fall short in safeguarding VLMs against adversarial threats. To mitigate this vulnerability, we propose a novel yet elegantly simple approach for detecting adversarial samples in VLMs. Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by the target VLM. We then compute the similarity between the embeddings of the input image and the generated image in feature space to identify adversarial samples. Empirical evaluations on multiple datasets validate the efficacy of our approach, which outperforms baseline methods adapted from the image classification domain. Furthermore, we extend our methodology to classification tasks, showcasing its adaptability and model-agnostic nature. Theoretical analyses and empirical findings also demonstrate the resilience of our approach against adaptive attacks, positioning it as an effective defense mechanism for real-world deployment against adversarial threats.
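The pipeline described above can be illustrated with a minimal sketch: caption the input image with the target VLM, regenerate an image from that caption with a T2I model, embed both images with an image encoder, and flag the input if the embedding similarity is low. The specific models (BLIP, Stable Diffusion, CLIP) and the threshold value below are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of the caption -> regenerate -> compare idea.
# Model choices and the threshold are assumptions for illustration only.
import torch
from PIL import Image
from transformers import (BlipProcessor, BlipForConditionalGeneration,
                          CLIPProcessor, CLIPModel)
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Target VLM used to caption the (possibly adversarial) input image.
blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

# 2. T2I model used to "mirror" the caption back into an image.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)

# 3. Image encoder used to compare the two images in feature space.
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)


def mirror_check(image: Image.Image, threshold: float = 0.7) -> bool:
    """Return True if the input image is flagged as adversarial."""
    # Caption the input with the target VLM.
    inputs = blip_proc(image, return_tensors="pt").to(device)
    caption = blip_proc.decode(blip.generate(**inputs)[0],
                               skip_special_tokens=True)

    # Regenerate an image from the caption with the T2I model.
    generated = t2i(caption).images[0]

    # Embed both images and compute their cosine similarity.
    pair = clip_proc(images=[image, generated], return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**pair)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    similarity = (feats[0] @ feats[1]).item()

    # Low similarity suggests the caption no longer matches the input,
    # i.e. the input is likely adversarial.
    return similarity < threshold
```

In practice, the detection threshold would be calibrated on clean data (e.g., to a fixed false-positive rate) rather than hard-coded.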
@article{fares2024mirrorcheck,
  title   = {MirrorCheck: Efficient Adversarial Defense for Vision-Language Models},
  author  = {Fares, Samar and Ziu, Klea and Aremu, Toluwani and Durasov, Nikita and Takáč, Martin and Fua, Pascal and Nandakumar, Karthik and Laptev, Ivan},
  journal = {arXiv preprint arXiv:2406.09250},
  year    = {2024}
}