STAR Group

@ ICT, CAS


The Safe & Trustworthy AI Research (STAR) group studies AI safety and trustworthiness. We are part of the CAS Key Laboratory of AI Safety.

At STAR Group, we believe interpretability is a key to building safe and trustworthy AI. Our research focuses on the knowledge mechanisms of AI models—how they learn, memorize, recall, update/edit, and forget knowledge. We also explore the security and privacy impacts of deploying AI in real-world applications, with a special focus on LLMs and recommender systems.


Open Positions

We are looking for self-motivated interns and postdocs to do research with us. Feel free to contact us if you are interested. For more information, please visit the Join Us page.

Latest News

Dec 22, 2024 We will hold The 1st Workshop on Human-Centered Recommender Systems at WWW 2025. Contributions are welcome!
Sep 15, 2024 Our paper The Fall of ROME is accepted to EMNLP 2024 Findings. Congrats to Wanli!
May 16, 2024 Three papers are accepted to ACL 2024, on model editing, bias in knowledge conflict, and confidence alignment. Congrats to Hexiang, Wanli, and Shuchang!
Mar 22, 2024 Our paper Unlink to Unlearn: Simplifying Edge Unlearning in GNNs is accepted to WWW 2024. Congrats to Jiajun!
View All →