ICLR 2022 Workshop on Socially Responsible Machine Learning (SRML)
Date: April 29, 2022 (Friday)
Location: Virtual Only (co-located with ICLR 2022)
Abstract:
Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., face recognition, financial analytics, and autonomous driving). Recently, foundation models (e.g., BERT, GPT-3), models trained on large-scale data that work surprisingly well across a wide range of downstream tasks, have received significant attention in the ML community. While foundation models offer many opportunities, spanning capabilities (e.g., language, vision, robotics, reasoning, human interaction), applications (e.g., law, healthcare, education, transportation), and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations), there is growing concern that such models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:
- Inherit pre-existing social biases and exhibit discrimination against already-disadvantaged or marginalized social groups, such as BIPOC and LGBTQ+ communities (see the sketch after this list).
- Be vulnerable to security and/or privacy attacks that deceive the models or leak sensitive information from the training data, such as medical records and personally identifiable information (PII).
- Make hard-to-justify predictions that lack transparency and explanation.
- Be unreliable and unpredictable under domain shift or other input variations in real-life scenarios.
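As a toy illustration of the first point above, the minimal sketch below measures a demographic parity gap, i.e., the difference in positive-prediction rates across a sensitive attribute. The predictions, group labels, and rates here are hypothetical assumptions made up for illustration only; they are not from any system or dataset discussed at the workshop.

```python
import numpy as np

# Hypothetical, randomly generated predictions and sensitive-group labels,
# used only to illustrate a simple bias audit; no real data is involved.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                      # sensitive attribute (0 or 1)
y_pred = rng.binomial(1, np.where(group == 1, 0.7, 0.5))   # group 1 receives favorable outcomes more often

# Demographic parity difference: gap in positive-prediction rates between groups.
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
print(f"positive rate, group 0: {rate_g0:.2f}")
print(f"positive rate, group 1: {rate_g1:.2f}")
print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.2f}")
```

A large gap of this kind is one simple signal of the disparate treatment described above; related group metrics (e.g., equalized odds) are audited in a similar way.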
This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.), with a focus on recent research and future directions for socially responsible machine learning problems in real-world systems.
Accepted Papers
Our workshop accepted 28 high-quality papers; see the list of accepted papers. Our Best Paper Award winners are:
- ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption
Jiachen Sun (University of Michigan); Qingzhao Zhang (University of Michigan, Ann Arbor); Bhavya Kailkhura (Lawrence Livermore National Laboratory); Zhiding Yu (NVIDIA); Zhuoqing Morley Mao (University of Michigan)
- Provably Fair Federated Learning via Bounded Group Loss
Shengyuan Hu (Carnegie Mellon University); Steven Wu (Carnegie Mellon University); Virginia Smith (Carnegie Mellon University)
Schedule
Workshop day: 04/29/2022
Time Zone: US Eastern Time (UTC-04:00)
| Time | Session |
|---|---|
| 09:20-09:40 | Opening Remarks (Prof. Bo Li) |
| 09:40-10:20 | Invited talk by Prof. Ziwei Liu: Rethinking Generalization in Vision Models: Architectures, Modalities, and Beyond |
| 10:20-11:00 | Invited talk by Prof. Aleksander Mądry: Data Matters |
| 11:00-11:10 | Coffee Break |
| 11:10-11:50 | Invited talk by Prof. Anqi Liu: Uncertainty Calibration for Robust Cross Domain Transfer in Vision and Language Tasks |
| 11:50-12:30 | Invited talk by Prof. Judy Hoffman: Understanding and Mitigating Bias in Vision Models |
| 12:30-13:30 | Lunch |
| 13:30-13:50 | Contributed Talk #1: ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption (Best Paper Award Winner) |
| 13:50-14:30 | Invited talk by Neil Gong: Secure Self-supervised Learning |
| 14:30-15:10 | Invited talk by Virginia Smith: Privacy Meets Heterogeneity |
| 15:10-15:20 | Break |
| 15:20-16:00 | Invited talk by Prof. Marco Pavone and Dr. Apoorva Sharma: Runtime Introspection of Pre-trained DNNs for Out-of-distribution Detection |
| 16:00-16:40 | Invited talk by Prof. Diyi Yang: Dialect Inclusive and Robust Natural Language Understanding |
| 16:40-17:00 | Contributed Talk #2: Provably Fair Federated Learning via Bounded Group Loss (Best Paper Award Winner) |
| 17:00-17:20 | Contributed Talk #3: Debiasing Neural Networks Using Differentiable Classification Parity Proxies |
| 17:20-17:40 | Contributed Talk #4: Can Non-Lipschitz Networks Be Robust? The Power of Abstention and Data-Driven Decision Making for Robust Non-Lipschitz Networks |
| 17:40-18:00 | Contributed Talk #5: Differential Privacy Amplification in Quantum and Quantum-Inspired Algorithms |
| 18:00-18:20 | Contributed Talk #6: Incentive Mechanisms in Strategic Learning |
| 19:00-20:00 | Gathertown Poster Session (link: https://gather.town/oWwNGOinBf9nm2gR/iclr2022srml) |
Invited Speakers
Ziwei Liu
(Nanyang Technological University)
Aleksander Mądry
(MIT)
Anqi (Angie) Liu
(Johns Hopkins University)
Judy Hoffman
(Georgia Tech)
Organizing Committee
Chaowei Xiao
(NVIDIA)
Huan Zhang
(CMU)
Xueru Zhang
(Ohio State University)
Hongyang Zhang
(Waterloo)
Cihang Xie
(UCSC)
Beidi Chen
(Stanford)
Yuke Zhu
(UT Austin/NVIDIA)
Bo Li
(UIUC)
Zico Kolter
(CMU)
Dawn Song
(UC Berkeley)
Anima Anandkumar
(Caltech/NVIDIA)
Program Committee
Important Dates
- Workshop paper submission deadline: 02/25/2022 (AoE)
- Notification to authors: 03/25/2022
- Camera ready deadline: 04/10/2022
- Workshop day: 04/29/2022
Call For Papers
Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that are mathematically tractable but hard to adapt to real-world systems, or focus on mitigating risks in real-world applications without providing theoretical justification. Moreover, most work studies fairness, privacy, transparency, and interpretability separately, while the connections among them are less explored. This workshop aims to bring together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.). We invite submissions on any aspect of the social responsibility and trustworthiness of machine learning, including but not limited to:
- The intersection of various aspects of socially responsible and trustworthy ML: fairness, transparency, interpretability, privacy, robustness.
- The state-of-the-art research of socially responsible and trustworthy ML and their usage in applications.
- The possibility of using the most recent theoretical advancements to inform practice guidelines for deploying socially responsible and trustworthy ML systems.
- Providing insights about how we can automatically detect, verify, explain, and mitigate potential biases, privacy or other societal problems in existing models.
- Understanding the trade-offs or costs of achieving different goals in reality.
- Studying the social impacts of machine learning models that may inherently have bias, exhibit discrimination, or cause other undesired harms.
Reviewing will be double-blind. Review criteria include (a) relevance, (b) quality of the methodology and experiments, (c) novelty, and (d) societal impact.
Submission deadline: Feb 25, 2022, Anywhere on Earth (AoE)
Notification to authors: Mar 25, 2022, Anywhere on Earth (AoE)
Camera-ready deadline: April 10, 2022, Anywhere on Earth (AoE)
Poster deadline: April 17, 2022, Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/ICLRSRML2022/
Submission Format: We welcome submissions of up to 4 pages in ICLR Proceedings format (double-blind), excluding references and appendix. Style files are available. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless otherwise indicated by the authors, we will provide PDFs of all accepted papers on https://iclrsrml.github.io/. There will be no archival proceedings. We are using CMT3 to manage submissions.
Contact: Please email Chaowei Xiao (xiaocw [at] umich [dot] edu) and Huan Zhang (huan [at] huan-zhang [dot] com) for submission issues.