AI security @ CVPR ’22: Backdoor/Trojan attacks research

A backdoor or Trojan attack compromises a model so that it produces attacker-chosen outputs whenever the input is manipulated in a specific way. Backdoor attacks give attackers a permanent, unauthorized security pass, which poses a serious AI security risk. In this post, I cover backdoor attack research at CVPR ’22.
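To make the idea concrete, here is a minimal sketch of the data-poisoning step behind a BadNets-style backdoor: a small trigger patch is stamped into a training image and its label is flipped to the attacker's target class. The function name `poison_sample` and the patch placement are illustrative assumptions, not taken from any paper covered in the series.

```python
import numpy as np

def poison_sample(image, target_label, trigger_size=3):
    """Illustrative backdoor poisoning: stamp a small white-square
    trigger in the bottom-right corner and relabel with the
    attacker's target class (hypothetical helper, BadNets-style)."""
    poisoned = image.copy()
    poisoned[-trigger_size:, -trigger_size:] = 1.0  # white patch trigger
    return poisoned, target_label

# A clean 8x8 grayscale image whose true label is 0
clean = np.zeros((8, 8), dtype=np.float32)
poisoned, label = poison_sample(clean, target_label=7)
```

A model trained on enough such samples learns to associate the patch with class 7; at test time, any input carrying the patch is misclassified, while clean inputs behave normally.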

From AI security to the real world: Physical adversarial attacks research @ CVPR ’22

This post is a part of the AI security at CVPR ’22 series. Physical adversarial attacks At first glance, attacks on computer vision models might not sound so serious. Sure, somebody’s visual data processing pipeline might be compromised, and if that’s a criminal matter, let the courts decide. Big deal, it doesn’t really impact me that…

Are there guarantees in AI security? Certifiable defense research @ CVPR ’22

This post is a part of the AI security at CVPR ’22 series. The issue of CV and AI security can feel quite scary. Stronger and more sophisticated attacks keep coming. Defense efforts are a race that must be run, but cannot be definitively won. We patch the holes in our model, then better attacks…

AI & security @ CVPR ’22: Classic adversarial attacks research

This blog post is a part of the AI & security at CVPR ’22 series. Here I cover adversarial attack terminology and research on classic adversarial attacks. Terminology and state of the art The cornerstone of AI & security research, and indeed the classic CV attack, is the adversarial attack, first presented in the…