Jan Zahálka
AI & Security


Month: August 2023

Adversarial training: a security workout for AI models
CVPR '23 | Science

By Jan Zahálka | 30 August 2023

Adversarial training (AT) amends the training data of an AI model to make it more robust. How does AT fare against modern attacks? This post covers AT work presented at CVPR ’23.


Better model architecture, better adversarial defense
CVPR '23 | Science

By Jan Zahálka | 16 August 2023

Adversarial defense is a crucial topic: many attacks exist, and their numbers are surging. This post covers CVPR ’23 work on bolstering model architectures.


Which model architecture is the best in adversarial defense?
CVPR '23 | Science

By Jan Zahálka | 9 August 2023

A mini-tool for comparing the adversarial defense performance of various computer vision model architectures, based on the CVPR ’23 work by A. Liu et al.


Real-world AI security: Physical adversarial attacks research from CVPR ’23
CVPR '23 | Science

By Jan Zahálka | 2 August 2023

Physical adversarial attacks fool AI models by modifying physical objects, threatening real-world AI security. This post covers CVPR ’23 work on the topic.


Recent Posts

  • What’s more powerful than one adversarial attack?
  • Can ChatGPT read who you are?
  • Elves explain how to understand adversarial attacks
  • A cyberattacker’s little helper: Jailbreaking LLM security
  • Judging LLM security: How to make sure large language models are helping us?

Archives

  • February 2024
  • January 2024
  • December 2023
  • November 2023
  • October 2023
  • September 2023
  • August 2023
  • July 2023
  • June 2023
  • May 2023

Categories

  • AI
  • CVPR '22
  • CVPR '23
  • Science
  • Security

© 2025 Jan Zahálka | Privacy policy
