Jan Zahálka

AI & Security

Month: June 2023

AI security @ CVPR ’23

CVPR '23 | Science

By Jan Zahálka | 28 June 2023 (updated 1 November 2023)

CVPR ’23 has brought a large number of new, exciting AI security papers. This post kicks off a blog post series covering the work with an introduction, paper stats, and overall topical structure.


How to intuitively understand adversarial attacks on AI models

Science

By Jan Zahálka | 20 June 2023

How is it possible that we can make anything look like something completely different in the eyes of an AI model? This post offers a real-world-inspired intuition for adversarial attacks on AI models.


Cheatsheet of AI security papers from CVPR ’22

CVPR '22 | Science

By Jan Zahálka | 16 June 2023 (updated 28 September 2023)

All AI security papers from CVPR ’22 with paper links, categorized by attack type.


AI security @ CVPR ’22: Image manipulation & deepfake detection research

CVPR '22 | Science

By Jan Zahálka | 13 June 2023 (updated 12 July 2023)

Image manipulation is an attack that alters images to change their meaning, create false narratives, or forge evidence. This post summarizes AI security work on this topic presented at CVPR ’22.


AI security @ CVPR ’22: Model inversion attacks research

CVPR '22 | Science

By Jan Zahálka | 6 June 2023 (updated 30 August 2023)

One of the key AI security tasks is protecting data privacy. A model inversion attack can extract training data directly from a trained model, and therefore needs to be prevented. This post covers CVPR ’22 work on model inversion attacks.

