NextGenAISafety 2024 @ ICML Vienna

ICML 2024 Workshop on the Next Generation of AI Safety

Friday, July 26, 2024

Messe Wien Exhibition Congress Center, Hall A1


Accepted papers can be found on OpenReview. Poster instructions are available from ICML.

Contact info: next-gen-ai-safety [at] googlegroups [dot] com

What does the next frontier in AI safety look like? How do we evaluate it? And how can we develop strong safeguards for tomorrow's AI systems?


Combating the novel challenges of next-generation AI systems necessitates new safety techniques, spanning areas such as synthetic data generation and utilization, content moderation, and model training methodologies. The proliferation of open-source and personalized models tailored to various applications widens the scope of deployments and amplifies the already-urgent need for robust safety tools. Moreover, this diverse range of potential deployments entails complex trade-offs between safety objectives and operational efficiency. Taken together, these trends create a broad set of urgent and unique research challenges and opportunities for ensuring the safety of tomorrow's AI systems.

In this workshop, we take a proactive approach to safety: we focus on five emerging trends in AI and explore the challenges associated with deploying these technologies safely.

We will bring together researchers from academia and industry who are working to improve the safety and alignment of state-of-the-art AI systems as they are deployed. We aim for the event to facilitate the sharing of challenges, best practices, new research ideas, data, and evaluations that both practically aid development and spur progress in this area.

Call for Papers

Camera-ready versions are due July 8 (AoE)! Read the full Call for Papers here.

Submission Instructions

Important Dates


May 22: Submission portal opens

May 30 (AoE): Deadline for papers

June 18: Decision notifications

July 8 (AoE): Camera-ready deadline (authors may add one extra page to the main body in the final camera-ready version of the paper)

July 26: Workshop!

Schedule

The workshop will be held on Friday, July 26, 2024, at the Messe Wien Exhibition Congress Center, Hall A1. All schedule times are in Central European Summer Time (Vienna local time).

Conference Schedule
9:00 AM - 9:45 AM
Invited Talk: Kamalika Chaudhuri
Title: Privacy in Representation Learning: Measurement and Mitigation
Bio: Kamalika Chaudhuri is a Professor in the Department of Computer Science and Engineering at the University of California San Diego, and a Research Scientist on the FAIR team at Meta AI. Her research interests are in the foundations of trustworthy machine learning, which include problems such as learning from sensitive data while preserving privacy, learning under sampling bias, and learning in the presence of an adversary. She is particularly interested in privacy-preserving machine learning, which addresses how to learn good models and predictors from sensitive data while preserving the privacy of individuals.
9:45 AM - 10:30 AM
Invited Talk: Inioluwa Deborah Raji
Title: Safety by Any Other Name
Bio: Deborah Raji is a Senior Trustworthy AI Fellow at the Mozilla Foundation and a CS PhD student at the University of California, Berkeley, interested in questions of AI auditing and evaluation. She works closely with civil society, investigative journalists, policymakers, and corporations on various projects to investigate, assess, and better understand AI deployments. Recently, she was named to Forbes 30 Under 30, MIT Technology Review's 35 Innovators Under 35, and the TIME100 Most Influential People in AI.
10:30 AM - 11:00 AM
Oral Session #1
Papers:
  • 10:30 AM - 10:40 AM: Privacy Auditing of Large Language Models (Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal)
  • 10:40 AM - 10:50 AM: Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing (Yihan Wang, Yiwei Lu, Guojun Zhang, Franziska Boenisch, Adam Dziedzic, Yaoliang Yu, Xiao-Shan Gao)
  • 10:50 AM - 11:00 AM: BELLS: A Framework Towards Future Proof Benchmarks for the Evaluation of LLM Safeguards (Diego Dorn, Alexandre Variengien, Charbel-Raphael Segerie, Vincent Corruble)
11:00 AM - 11:45 AM
Invited Talk: Joelle Pineau
Title: A few (modest) lessons on open-sourcing AI systems
Bio: Joelle Pineau is a Professor and William Dawson Scholar at the School of Computer Science at McGill University, where she co-directs the Reasoning and Learning Lab. She is a core academic member of Mila and a Canada CIFAR AI Chair. She is also VP of AI Research at Meta (formerly Facebook), where she leads the Fundamental AI Research (FAIR) team. She holds a BASc in Systems Design Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex, partially observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games, and conversational agents. She serves on the editorial board of the Journal of Machine Learning Research and is past President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR), a member of the College of New Scholars, Artists and Scientists of the Royal Society of Canada, and a 2019 recipient of the Governor General's Innovation Awards.
11:45 AM - 12:30 PM
Panel
12:30 PM - 2:00 PM
Lunch
2:00 PM - 3:00 PM
Poster Session #1 (odd-numbered accepted posters)
3:00 PM - 3:30 PM
Oral Session #2
Papers:
  • 3:00 PM - 3:10 PM: Exploiting LLM Quantization (Kazuki Egashira, Mark Vero, Robin Staab, Jingxuan He, Martin Vechev)
  • 3:10 PM - 3:20 PM: Accuracy on the wrong line: On the pitfalls of noisy data for OOD generalisation (Amartya Sanyal, Yaxi Hu, Yaodong Yu, Yian Ma, Yixin Wang, Bernhard Schölkopf)
  • 3:20 PM - 3:30 PM: Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion (Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum)
3:30 PM - 4:30 PM
Poster Session #2 (even-numbered accepted posters), coffee break
4:30 PM - 5:00 PM
Invited Talk: Lilian Weng
Title: Towards Safe AGI
Bio: Lilian Weng is the Head of Safety Systems at OpenAI, where she leads a group of engineers and researchers working to enable the safe deployment of OpenAI's models and products. Previously, she led Applied Research efforts to leverage LLMs for real-world applications. In her early days at OpenAI, Lilian contributed to the Robotics team, tackling complex robotic manipulation tasks such as solving a Rubik's Cube with a robot hand. With a wide range of research interests, she shares her insights on diverse topics in deep learning through her popular ML blog, https://lilianweng.github.io/.

Organizers