Jan Zahálka

AI & Security


Adversarial training: a security workout for AI models
CVPR '23 | Science
By Jan Zahálka | 30 August 2023

Adversarial training (AT) amends the training data of an AI model to make it more robust. How does AT fare against modern attacks? This post covers AT work presented at CVPR ’23.


Better model architecture, better adversarial defense
CVPR '23 | Science
By Jan Zahálka | 16 August 2023

Adversarial defense is a crucial topic: many attacks exist, and their numbers are surging. This post covers CVPR ’23 work on bolstering model architectures.


Which model architecture is the best in adversarial defense?
CVPR '23 | Science
By Jan Zahálka | 9 August 2023

A mini-tool for comparing the adversarial defense of various computer vision model architectures, based on the CVPR ’23 work by A. Liu et al.


Real-world AI security: Physical adversarial attacks research from CVPR ’23
CVPR '23 | Science
By Jan Zahálka | 2 August 2023

Physical adversarial attacks fool AI models with physical object modifications, harming real-world AI security. This post covers CVPR ’23 work on the topic.


From one model to another: Transferable attacks research @ CVPR ’23
CVPR '23 | Science
By Jan Zahálka | 19 July 2023

This post summarizes the CVPR ’23 work on transferable attacks, which are optimized on a surrogate model controlled by the attacker so that they also work on black-box targets.


New adversarial attacks on computer vision from CVPR ’23
CVPR '23 | Science
By Jan Zahálka | 12 July 2023

Adversarial attacks are a core discipline of AI security. This post summarizes pioneering adversarial attacks on computer vision models seen at CVPR ’23 that focus on underexplored tasks of computer vision or bring a new view on attack methodology.


The best AI security papers from CVPR ’23: Official highlights
CVPR '23 | Science
By Jan Zahálka | 5 July 2023 (updated 12 July 2023)

The AI security papers from CVPR ’23 that rank among the top papers by reviewer score.


AI security @ CVPR ’23
CVPR '23 | Science
By Jan Zahálka | 28 June 2023 (updated 1 November 2023)

CVPR ’23 brought a large number of new, exciting AI security papers. This post kicks off a blog series covering the work with an introduction, paper stats, and an overview of the topical structure.


How to intuitively understand adversarial attacks on AI models
Science
By Jan Zahálka | 20 June 2023

How is it possible that we can make anything look like something completely different in the eyes of an AI model? This post offers a real-world-inspired intuition for adversarial attacks on AI models.


Cheatsheet of AI security papers from CVPR ’22
CVPR '22 | Science
By Jan Zahálka | 16 June 2023 (updated 28 September 2023)

All AI security papers from CVPR ’22, with paper links, categorized by attack type.



© 2025 Jan Zahálka | Privacy policy
