CVPR ’23

AI security @ CVPR ’23: Honza’s highlights & conclusion
CVPR '23 | Science

By Jan Zahálka | 1 November 2023

This post presents “Honza’s highlights”—CVPR ’23 AI security papers worthy of your attention that did not receive the official highlight status—and overall conclusions from the conference.

Reality can be lying: Deepfakes and image manipulation @ CVPR ’23
CVPR '23 | Science

By Jan Zahálka | 18 October 2023

Deepfakes & image manipulation are increasingly used for spreading fake news or falsely incriminating people, presenting a security and privacy threat. This post summarizes CVPR ’23 work on the topic.

Privacy attacks @ CVPR ’23: How to steal models and data
CVPR '23 | Science

By Jan Zahálka | 4 October 2023

This post summarizes CVPR ’23 work on privacy attacks that threaten to steal an AI model (model stealing) or its training data (model inversion).

Backdoor attacks & defense @ CVPR ’23: How to build and burn Trojan horses
CVPR '23 | Science

Backdoor attacks & defense @ CVPR ’23: How to build and burn Trojan horses

By Jan Zahálka | 20 September 2023

Backdoor (or Trojan) attacks poison an AI model during training, essentially giving attackers the keys to the model. This post summarizes CVPR ’23 research on backdoor attacks and defenses.

From “maybe” to “absolutely sure”: Certifiable security at CVPR ’23
CVPR '23 | Science

By Jan Zahálka | 13 September 2023

Certifiable security (CS) gives security guarantees to AI models, which is highly desirable for practical AI applications. Learn about CS work at CVPR ’23 in this post.

How to see properly: Adversarial defense by data inspection
CVPR '23 | Science

By Jan Zahálka | 6 September 2023

Data inspection is a promising adversarial defense technique. Inspecting the data properly can reveal and even remove adversarial attacks. This post summarizes data inspection work from CVPR ’23.

Adversarial training: a security workout for AI models
CVPR '23 | Science

By Jan Zahálka | 30 August 2023

Adversarial training (AT) amends the training data of an AI model to make it more robust. How does AT fare against modern attacks? This post covers AT work presented at CVPR ’23.

Better model architecture, better adversarial defense
CVPR '23 | Science

By Jan Zahálka | 16 August 2023

Adversarial defense is a crucial topic: many attacks exist, and their numbers are surging. This post covers CVPR ’23 work on bolstering model architectures.

Which model architecture is the best in adversarial defense?
CVPR '23 | Science

By Jan Zahálka | 9 August 2023

A mini-tool for comparing the adversarial robustness of various computer vision model architectures, based on the CVPR ’23 work by A. Liu et al.

Real-world AI security: Physical adversarial attacks research from CVPR ’23
CVPR '23 | Science

By Jan Zahálka | 2 August 2023

Physical adversarial attacks fool AI models with physical object modifications, harming real-world AI security. This post covers CVPR ’23 work on the topic.

Page navigation

1 2 Next

Recent Posts

  • What’s more powerful than one adversarial attack?
  • Can ChatGPT read who you are?
  • Elves explain how to understand adversarial attacks
  • A cyberattacker’s little helper: Jailbreaking LLM security
  • Judging LLM security: How to make sure large language models are helping us?

Archives

  • February 2024
  • January 2024
  • December 2023
  • November 2023
  • October 2023
  • September 2023
  • August 2023
  • July 2023
  • June 2023
  • May 2023

Categories

  • AI
  • CVPR '22
  • CVPR '23
  • Science
  • Security

© 2025 Jan Zahálka | Privacy policy
