STAR Group

@ ICT, CAS


The Safe & Trustworthy AI Research (STAR) group studies AI safety and trustworthiness. We are part of the State Key Laboratory of AI Safety.

At STAR Group, we believe interpretability is a key to building safe and trustworthy AI. Our research focuses on the knowledge mechanisms of AI models—how they learn, memorize, recall, update/edit, and forget knowledge. We also explore the security and privacy impacts of deploying AI in real-world applications, with a special focus on LLMs and recommender systems.


Open Positions

We are looking for self-motivated interns and postdocs to do research with us. Feel free to contact us if you are interested. For more information, please visit the Join us page.

Latest News

Jan 25, 2026 Four papers are accepted by ICLR 2026 about Model Editing, Agent Planning, RecSys, and RAG. Congrats to Wanli, Yilin, Kaike, and Chenyu.
Jan 14, 2026 Two papers are accepted by WebConf 2026 about RecSys and Membership Inference Attacks. Congrats to Danyang and Hanqi.
Nov 09, 2025 We will hold The 2nd Workshop on Human-Centered Recommender Systems at WWW 2026. Contributions are welcome!
Oct 31, 2025 One paper about algorithm auditing is accepted by the AAAI 2026 Demo track. Congrats to Zhenxing!
Aug 21, 2025 Three papers are accepted by EMNLP 2025 about Hallucination/Uncertainty Estimation, Backdoor Attacks, and Jailbreaking.
Aug 01, 2025 Congratulations to Wanli on receiving the Best Paper Award at the KnowFM @ ACL 2025 Workshop, a well-deserved recognition of his excellent research!
May 15, 2025 Two papers are accepted by ACL 2025 about model editing and watermarking. Congrats to Wanli and Beining!
Dec 22, 2024 We will hold The 1st Workshop on Human-Centered Recommender Systems at WWW 2025. Contributions are welcome!
Sep 15, 2024 Our paper The Fall of ROME is accepted by EMNLP 2024 Findings. Congrats to Wanli!
May 16, 2024 Three papers are accepted by ACL 2024 about model editing, bias in knowledge conflict, and confidence alignment. Congrats to Hexiang, Wanli, and Shuchang!
View All →