ICLR 2022 Workshop on Socially Responsible Machine Learning (SRML)

Date: April 29, 2022 (Friday)

Location: Virtual Only (co-located with ICLR 2022)

Abstract:

Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., face recognition, financial analytics, and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community; it refers to models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well on a wide range of downstream tasks. Foundation models offer many opportunities, spanning capabilities (e.g., language, vision, robotics, reasoning, human interaction), applications (e.g., law, healthcare, education, transportation), and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Yet they also raise serious concerns: these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can, for example, exhibit unfairness and bias, leak sensitive information, and be vulnerable to adversarial attacks.

This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.), with a focus on recent research and future directions for socially responsible machine learning problems in real-world systems.

Accepted Papers

Our workshop accepted 28 high-quality papers (see the list of accepted papers). The best paper award winners are "ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption" and "Provably Fair Federated Learning via Bounded Group Loss".

Schedule

Workshop day: 04/29/2022

Time Zone: US Eastern Time (EDT, UTC-04:00)

09:20-09:40 Opening Remarks (Prof. Bo Li)
09:40-10:20 Invited talk from Prof. Ziwei Liu: Rethinking Generalization in Vision Models: Architectures, Modalities, and Beyond
10:20-11:00 Invited talk from Prof. Aleksander Mądry: Data Matters
Coffee Break
11:10-11:50 Invited talk from Prof. Anqi Liu: Uncertainty Calibration for Robust Cross-Domain Transfer in Vision and Language Tasks
11:50-12:30 Invited talk from Prof. Judy Hoffman: Understanding and Mitigating Bias in Vision Models
Lunch
13:30-13:50 Contributed Talk #1: ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption (Best Paper Award Winner)
13:50-14:30 Invited talk from Prof. Neil Gong: Secure Self-supervised Learning
14:30-15:10 Invited talk from Prof. Virginia Smith: Privacy Meets Heterogeneity
Break
15:20-16:00 Invited talk from Prof. Marco Pavone and Dr. Apoorva Sharma: Runtime Introspection of Pre-trained DNNs for Out-of-distribution Detection
16:00-16:40 Invited talk from Prof. Diyi Yang: Dialect-Inclusive and Robust Natural Language Understanding
16:40-17:00 Contributed Talk #2: Provably Fair Federated Learning via Bounded Group Loss (Best Paper Award Winner)
17:00-17:20 Contributed Talk #3: Debiasing Neural Networks Using Differentiable Classification Parity Proxies
17:20-17:40 Contributed Talk #4: Can Non-Lipschitz Networks Be Robust? The Power of Abstention and Data-Driven Decision Making for Robust Non-Lipschitz Networks
17:40-18:00 Contributed Talk #5: Differential Privacy Amplification in Quantum and Quantum-inspired Algorithms
18:00-18:20 Contributed Talk #6: Incentive Mechanisms in Strategic Learning
19:00-20:00 Gather.Town Poster Session (link: https://gather.town/oWwNGOinBf9nm2gR/iclr2022srml)

Invited Speakers

Ziwei Liu
(Nanyang Technological University)

Aleksander Mądry
(MIT)

Anqi (Angie) Liu
(Johns Hopkins University)

Judy Hoffman
(Georgia Tech)

Neil Gong
(Duke University)

Virginia Smith
(CMU)

Marco Pavone
(Stanford)

Apoorva Sharma
(Stanford)

Diyi Yang
(Georgia Tech)

Organizing Committee

Chaowei Xiao
(NVIDIA)

Huan Zhang
(CMU)

Xueru Zhang
(Ohio State University)

Hongyang Zhang
(Waterloo)

Cihang Xie
(UCSC)

Beidi Chen
(Stanford)

Yuke Zhu
(UT Austin/NVIDIA)

Bo Li
(UIUC)

Zico Kolter
(CMU)

Dawn Song
(UC Berkeley)

Anima Anandkumar
(Caltech/NVIDIA)

Program Committee

  • Zelun Luo
  • Chirag Agarwal (Adobe)
  • Yulong Cao (University of Michigan, Ann Arbor)
  • Junheng Hao (UCLA)
  • Lifeng Huang (Sun Yat-sen University)
  • Kun Jin (University of Michigan, Ann Arbor)
  • Mohammad Mahdi Khalili (University of Delaware)
  • Adam Kortylewski (Max Planck Institute for Informatics)
  • Jia Liu (The Ohio State University)
  • Xingjun Ma (Deakin University)
  • Parinaz Naghizadeh (Ohio State University)
  • Xinlei Pan (UC Berkeley)
  • Swetasudha Panda (Oracle Labs)
  • Won Park (University of Michigan)
  • Maura Pintor (University of Cagliari)
  • Nataniel Ruiz (Boston University)
  • Aniruddha Saha (University of Maryland Baltimore County)
  • Gaurang Sriramanan (University of Maryland, College Park)
  • Akshayvarun Subramanya (UMBC)
  • Jiachen Sun (University of Michigan)
  • Anshuman Suri (University of Virginia)
  • Rajkumar Theagarajan (University of California, Riverside)
  • Chang Xiao (Columbia University)
  • Chulin Xie (University of Illinois at Urbana-Champaign)
  • Xinchen Yan (Waymo)
  • Hongyang Zhang (University of Waterloo)
  • Xinwei Zhao (Drexel University)
Important Dates

  • Submission deadline: Feb 25, 2022, Anywhere on Earth (AoE)
  • Notification sent to authors: Mar 25, 2022, Anywhere on Earth (AoE)
  • Camera-ready deadline: April 10, 2022, Anywhere on Earth (AoE)
  • Poster deadline: April 17, 2022, Anywhere on Earth (AoE)

Call For Papers

Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that are mathematically tractable but hard to adapt to real-world systems, or mitigate risks in real-world applications without providing theoretical justification. Moreover, most work studies fairness, privacy, transparency, and interpretability separately, while the connections among them are less explored. This workshop aims to bring together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.). We invite submissions on any aspect of the social responsibility and trustworthiness of machine learning, including but not limited to fairness, ethics, security, privacy, transparency, and interpretability.

Reviewing will be double-blind. Review criteria include (a) relevance, (b) quality of the methodology and experiments, (c) novelty, and (d) societal impact.

Submission server: https://cmt3.research.microsoft.com/ICLRSRML2022/

Submission format: We welcome submissions of up to 4 pages in ICLR Proceedings format (double-blind), excluding references and appendix. Style files are available. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless authors indicate otherwise, we will post PDFs of all accepted papers on https://iclrsrml.github.io/. There will be no archival proceedings. We are using CMT3 to manage submissions.

Contact: Please email Chaowei Xiao (xiaocw [at] umich [dot] edu) and Huan Zhang (huan [at] huan-zhang [dot] com) for submission issues.