AI-generated code can introduce security vulnerabilities if it is not properly vetted.
This research area focuses on methods and tools for ensuring that code produced by AI systems is secure,
robust, and free from common vulnerabilities.
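None of the publications below prescribes a particular checker, but as a minimal illustrative sketch of the idea, one could statically audit generated code before accepting it. All names in this example (`audit_generated_code`, the set of flagged calls) are hypothetical:

```python
import ast

# Hypothetical example: calls that are commonly considered risky in
# untrusted, machine-generated Python code.
DANGEROUS_CALLS = {"eval", "exec", "compile", "__import__"}

def audit_generated_code(source: str) -> list[str]:
    """Return warnings for obviously unsafe patterns in `source`."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"syntax error: {err}"]
    warnings = []
    for node in ast.walk(tree):
        # Flag direct calls to the risky built-ins listed above.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                warnings.append(f"line {node.lineno}: call to {node.func.id}()")
    return warnings

print(audit_generated_code("x = eval(input())"))
```

Real-world tooling (e.g. security linters run in CI, or execution-based filtering as studied in DOCE below) goes well beyond this, but the pattern of gating generated code on an automated check is the same.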
Publications
Jan. 2025
ObscuraCoder: Powering Efficient Code LM Pre-Training Via
Obfuscation Grounding
Indraneil Paul, Haoyi Yang, Goran Glavaš, Kristian Kersting, Iryna Gurevych ICLR 2025
Paper: Link
Code: GitHub
Aug. 2024
Problem Solving Through Human-AI Preference-Based Cooperation
Subhabrata Dutta, Timo Kaufmann, Goran Glavaš, Ivan Habernal, Kristian Kersting, Frauke Kreuter,
Mira Mezini, Iryna Gurevych, Eyke Hüllermeier, Hinrich Schuetze Preprint under review.
Paper: Link
Aug. 2024
DOCE: Finding the Sweet Spot for Execution-Based Code Generation
Haau-Sing Li, Patrick Fernandes, Iryna Gurevych, André F.T. Martins Preprint under review.
Paper: Link
Code: GitHub
Aug. 2024
IRCoder: Intermediate Representations Make Language Models Robust Multilingual Code
Generators
Indraneil Paul, Goran Glavaš, Iryna Gurevych ACL 2024
July 2023
Python Code Generation by Asking Clarification Questions
Haau-Sing Li, Mohsen Mesgar, André F. T. Martins, Iryna Gurevych ACL 2023
Paper: Link