
Cheatsheet of AI security papers from CVPR ’22

In my AI security @ CVPR ’22 blog post series, I cover all AI security papers presented at the conference. This post is a cheatsheet with links: the papers are grouped by their contribution, using the same categories as the blog posts, and each clickable heading takes you to the post covering that category.

Adversarial attacks

Classic adversarial attacks

  1. Byun et al.: Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input
  2. Cai et al.: Zero-Query Transfer Attacks on Context-Aware Object Detectors
  3. Dhar et al.: EyePAD++: A Distillation-based approach for joint Eye Authentication and Presentation Attack Detection using Periocular Images
  4. Feng et al.: Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution
  5. He et al.: Transferable Sparse Adversarial Attack
  6. Jia et al.: LAS-AT: Adversarial Training with Learnable Attack Strategy
  7. Jin et al.: Enhancing Adversarial Training with Second-Order Statistics of Weights
  8. Lee et al.: Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network
  9. Li et al.: Subspace Adversarial Training
  10. Liu et al.: Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack
  11. Lovisotto et al.: Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness
  12. Luo et al.: Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity
  13. Pang et al.: Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart
  14. Sun et al.: Exploring Effective Data for Surrogate Training Towards Black-box Attack
  15. Tsiligkaridis & Roberts: Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training
  16. Vellaichamy et al.: DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors
  17. Wang et al.: DST: Dynamic Substitute Training for Data-free Black-box Attack
  18. Xiong et al.: Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
  19. Xu et al.: Bounded Adversarial Attack on Deep Content Features
  20. Yu et al.: Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond
  21. C. Zhang et al.: Investigating Top-k White-Box and Transferable Black-box Attack
  22. Jianping Zhang et al.: Improving Adversarial Transferability via Neuron Attribution-Based Attacks
  23. Jie Zhang et al.: Towards Efficient Data Free Black-box Adversarial Attack
  24. Zhou et al.: Adversarial Eigen Attack on Black-Box Models

Certifiable defenses

  1. Chen et al.: Towards Practical Certifiable Patch Defense with Vision Transformer
  2. Salman et al.: Certified Patch Robustness via Smoothed Vision Transformers

Non-classic adversarial attacks

  1. Berger et al.: Stereoscopic Universal Perturbations across Different Architectures and Datasets
  2. Chen et al.: NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models
  3. Dong et al.: Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations
  4. Gao et al.: Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection
  5. Huang et al.: Shape-invariant 3D Adversarial Point Clouds
  6. Li et al.: Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients
  7. Özdenizci et al.: Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching
  8. Pérez et al.: 3DeformRS: Certifying Spatial Deformations on Point Clouds
  9. Ren et al.: Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond
  10. Schrodi et al.: Towards Understanding Adversarial Robustness of Optical Flow Networks
  11. Thapar et al.: Merry Go Round: Rotate a Frame and Fool a DNN
  12. Wang et al.: Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees
  13. Wei et al.: Cross-Modal Transferable Adversarial Attacks from Images to Videos
  14. Zhang et al.: 360-Attack: Distortion-Aware Perturbations From Perspective-Views
  15. Zhou & Patel: Enhancing Adversarial Robustness for Deep Metric Learning

Physical adversarial attacks

  1. Hu et al.: Adversarial Texture for Fooling Person Detectors in the Physical World
  2. Liu et al.: Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
  3. Suryanto et al.: DTA: Physical Camouflage Attacks using Differentiable Transformation Network
  4. Zhang et al.: On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
  5. Zhong et al.: Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon
  6. Zhu et al.: Infrared Invisible Clothing: Hiding from Infrared Detectors at Multiple Angles in Real World

Backdoor/Trojan attacks

  1. Chen et al.: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
  2. Feng et al.: FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis
  3. Guan et al.: Few-shot Backdoor Defense Using Shapley Estimation
  4. Liu et al.: Complex Backdoor Detection by Symmetric Feature Differencing
  5. Qi et al.: Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
  6. Saha et al.: Backdoor Attacks on Self-Supervised Learning
  7. Tao et al.: Better Trigger Inversion Optimization in Backdoor Scanning
  8. Walmer et al.: Dual-Key Multimodal Backdoors for Visual Question Answering
  9. Wang et al.: BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
  10. Zhao et al.: DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints

Model inversion attacks

  1. Del Grosso et al.: Leveraging Adversarial Examples to Quantify Membership Information Leakage
  2. Hatamizadeh et al.: GradViT: Gradient Inversion of Vision Transformers
  3. Kahla et al.: Label-Only Model Inversion Attacks via Boundary Repulsion
  4. Kim: Robust Combination of Distributed Gradients Under Adversarial Perturbations
  5. J. Li et al.: ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
  6. Z. Li et al.: Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
  7. Lu et al.: APRIL: Finding the Achilles’ Heel on Privacy for Vision Transformers
  8. Ng et al.: NinjaDesc: Content-Concealing Visual Descriptors via Adversarial Learning
  9. Peng et al.: Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
  10. Sanyal et al.: Towards Data-Free Model Stealing in a Hard Label Setting

Image manipulation & deepfake detection

  1. Asnani et al.: Proactive Image Manipulation Detection
  2. Chen et al.: Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection
  3. Hu et al.: Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer
  4. Jia et al.: Exploring Frequency Adversarial Attacks for Face Forgery Detection
  5. Shiohara & Yamasaki: Detecting Deepfakes with Self-Blended Images
  6. Wu et al.: Robust Image Forgery Detection Over Online Social Network Shared Images
