About Me
I am currently a third-year Ph.D. student at the CISPA Helmholtz Center for Information Security, co-supervised by Prof. Michael Backes and Prof. Yang Zhang. Before that, I received my master’s degree from Shanghai Jiao Tong University and my bachelor’s degree from Shandong University.
Research Interests
- Machine Learning Privacy and Security
- Safety of Foundation Models
- Online Hate and Hateful Memes
Publications
FAKEPCD: Fake Point Cloud Detection via Source Attribution
Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang; ACM ASIACCS 2024
[arXiv] [Code]
Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Yiting Qu, Michael Backes, Yang Zhang; USENIX Security 2024
[arXiv]
UnsafeBench: Benchmarking Image Safety Classifiers on Real-World and AI-Generated Images
Yiting Qu, Xinyue Shen, Yixin Wu, Michael Backes, Savvas Zannettou, Yang Zhang
[arXiv] [Website] [Code]
Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models
Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, Yang Zhang; ACM CCS 2023
[arXiv] [Code] [Media Coverage]
On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning
Yiting Qu, Xinlei He, Shannon Pierson, Michael Backes, Yang Zhang, Savvas Zannettou; IEEE S&P 2023
[PDF] [arXiv] [Code]
Automatic Permission Optimization Framework for Privacy Enhancement of Mobile Applications
Yiting Qu, Suguo Du, Shaofeng Li, Yan Meng, Le Zhang, Haojin Zhu; IoTJ 2020
[PDF]
What’s New
- [2024.02] Our paper “Prompt Stealing Attacks Against Text-to-Image Generation Models” was accepted at USENIX Security 2024. See you in Philadelphia!
- [2024.01] Our paper “FAKEPCD: Fake Point Cloud Detection via Source Attribution” was accepted at ACM ASIACCS 2024. See you in Singapore!
- [2023.05] Our paper “Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models” was accepted at ACM CCS 2023. See you in Copenhagen!
- [2023.01] Our paper “Prompt Stealing Attacks Against Text-to-Image Generation Models” is now online. This is the first study of prompt stealing attacks. You can read it here!
- [2022.12] Our paper “On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning” was accepted at IEEE S&P (Oakland) 2023!
- [2022.11] I passed my Ph.D. qualifying exam!
- [2021.11] I joined CISPA to start my Ph.D.!