From AI security to the real world: Physical adversarial attacks research @ CVPR ’22

This post is a part of the AI security at CVPR ’22 series.

Physical adversarial attacks

At a glance, attacks on computer vision models might not sound like such a big deal. Sure, somebody’s visual data processing pipeline might be compromised, and if that’s a criminal matter, let the courts decide. Big deal, it doesn’t really…

Are there guarantees in AI security? Certifiable defense research @ CVPR ’22

This post is a part of the AI security at CVPR ’22 series. The issue of CV and AI security can feel quite scary. Stronger and more sophisticated attacks keep coming. Defense efforts are a race that must be run, but cannot be definitively won. We patch the holes in our model, then better attacks…
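To give a concrete flavor of what a certifiable defense can look like, here is a minimal sketch of randomized smoothing, one well-known certified-defense idea; it is an illustration under assumptions, not code from the post, and `base_classifier`, `x`, and the parameter values are hypothetical placeholders.

```python
# Minimal randomized-smoothing sketch (assumes PyTorch).
# `base_classifier` and `x` are hypothetical placeholders;
# assumes a single input x of shape (1, C, H, W).
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, num_classes=10):
    """Majority vote of the base classifier under Gaussian input noise."""
    counts = torch.zeros(num_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        counts[base_classifier(noisy).argmax(dim=-1)] += 1
    return counts.argmax().item()
```

The smoothed classifier's vote margin is what certified-defense results turn into a provable robustness radius around `x`.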

AI & security @ CVPR ’22: Classic adversarial attacks research

This blog post is a part of the AI & security at CVPR ’22 series. Here I cover adversarial attack terminology and research on classic adversarial attacks.

Terminology and state of the art

The cornerstone of AI & security research, and indeed the classic CV attack, is the adversarial attack, first presented in the…
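As a rough illustration of what a classic adversarial attack looks like in code, here is a minimal FGSM-style sketch; it is not taken from the post, and `model`, `image`, and `label` are hypothetical placeholders.

```python
# Minimal FGSM-style adversarial attack sketch (assumes PyTorch).
# `model`, `image`, and `label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Perturb `image` by epsilon in the direction of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The resulting image is visually near-identical to the original yet can flip the model's prediction, which is exactly the failure mode the classic attack literature studies.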

How secure is computer vision? AI & security at CVPR ’22

Computer vision (CV) is one of the vanguards of AI, and its importance is rapidly surging. For example, CV models perform personal identity verification, assist physicians in diagnosis, and enable self-driving vehicles. There has been a remarkable increase in the performance of CV models, dispelling much of the doubt about their effectiveness. With their increased involvement…