Welcome!
Hi, I am Jan, and I work on AI & security. The performance, impact, and importance of AI are rapidly increasing, and my mission is to make it more secure, trustworthy, respectful of privacy, and robust. Presently, I am a researcher at the Czech Technical University in Prague (CTU) and the founder & AI specialist at BohemAI, an AI development & consultancy company. I hold a doctorate in Computer Science from the University of Amsterdam and a master’s degree with honours in Artificial Intelligence from CTU.
Latest blog posts
- **What’s more powerful than one adversarial attack?** Using a single attack won’t do, unless you are in a Hollywood film. This post covers AutoAttack, the pioneering ensemble adversarial attack, and shows how to test the adversarial robustness of AI models more rigorously.
- **Can ChatGPT read who you are?** ChatGPT is excellent at extracting structured information from text. Can it evaluate our personality traits? This post describes our work on LLM personality assessment, accepted to the CAIHu workshop @ AAAI ’24.
- **Elves explain how to understand adversarial attacks** An intuitive understanding of adversarial attacks is central to understanding AI security. This post aims to explain adversarial attacks with… Elves (instead of technical terminology).
- **A cyberattacker’s little helper: Jailbreaking LLM security** Attacks, lies, and deceit to bypass the security of (an older version of) ChatGPT. Jailbreaking is an open LLM security challenge, as LLM services should not assist in malicious activity.
- **Judging LLM security: How to make sure large language models are helping us?** Large language models (LLMs) have taken the world by storm, but LLM security is still in its infancy. Read about our contribution: a comprehensive, practical LLM security taxonomy.