ICLR 2022 Socially Responsible Machine Learning
Accepted Papers
- ModelNet40-C: a robustness benchmark for 3D point cloud recognition under corruption
Jiachen Sun (University of Michigan); Qingzhao Zhang (University of Michigan, Ann Arbor); Bhavya Kailkhura (Lawrence Livermore National Laboratory); Zhiding Yu (NVIDIA); Zhuoqing Morley Mao (University of Michigan)
- Debiasing neural networks using differentiable classification parity proxies
Ričards Marcinkevičs (ETH Zurich); Ece Ozkan (ETH Zurich); Julia Vogt (ETH Zurich)
- Provably fair federated learning via bounded group loss
Shengyuan Hu (Carnegie Mellon University); Steven Wu (Carnegie Mellon University); Virginia Smith (Carnegie Mellon University)
- Can non-Lipschitz networks be robust? The power of abstention and data-driven decision making for robust non-Lipschitz networks
Maria-Florina Balcan (Carnegie Mellon University); Avrim Blum (Toyota Technological Institute of Chicago); Dravyansh Sharma (Carnegie Mellon University); Hongyang Zhang (University of Waterloo)
- Incentive mechanisms in strategic learning
Kun Jin (University of Michigan, Ann Arbor); Xueru Zhang (Ohio State University); Mohammad Mahdi Khalili (University of Delaware); Parinaz Naghizadeh (Ohio State University); Mingyan Liu (University of Michigan, Ann Arbor)
- Lost in translation: generating adversarial examples robust to round-trip translation
Neel Bhandari (RV College of Engineering); Pin-Yu Chen (IBM Research AI)
- Feder: communication-efficient Byzantine-robust federated learning
Yukun Jiang (Sichuan University); Xiaoyu Cao (Duke University); Hao Chen (UC Davis); Neil Zhenqiang Gong (Duke University)
- Evaluating the adversarial robustness for Fourier neural operators
Abolaji D Adesoji (IBM); Pin-Yu Chen (IBM Research)
- Towards differentially private query release for hierarchical data
Terrance Liu (Carnegie Mellon University); Steven Wu (Carnegie Mellon University)
- The impacts of labeling biases on fairness criteria
Yiqiao Liao (The Ohio State University); Parinaz Naghizadeh (Ohio State University)
- Fair machine learning under limited demographically labeled data
Mustafa S Ozdayi (University of Texas at Dallas); Murat Kantarcioglu (University of Texas at Dallas); Rishabh Iyer (University of Texas at Dallas)
- Improving cooperative game theory-based data valuation via data utility learning
Tianhao Wang (Princeton University); Yu Yang (Tsinghua University); Ruoxi Jia (Virginia Tech)
- Secure aggregation for privacy-aware federated learning with limited resources
Irem Ergun (University of California, Riverside); Hasin Us Sami (University of California, Riverside); Basak Guler (University of California, Riverside)
- UNIREX: a unified learning framework for language model rationale extraction
Aaron Chan (University of Southern California); Maziar Sanjabi (Facebook AI); Lambert Mathias (Facebook); Liang Tan (Facebook); Shaoliang Nie (Facebook); Xiaochang Peng; Xiang Ren (University of Southern California); Hamed Firooz (Facebook)
- Dynamic positive reinforcement for long-term fairness
Bhagyashree Puranik (University of California Santa Barbara); Upamanyu Madhow (University of California, Santa Barbara); Ramtin Pedarsani (University of California, Santa Barbara)
- Differential privacy amplification in quantum and quantum-inspired algorithms
Armando Angrisani (LIP6, Sorbonne Université); Mina Doosti (University of Edinburgh); Elham Kashefi (LIP6, CNRS, Sorbonne Université, University of Edinburgh, Quantum Algorithms Institute)
- Robust and accurate - compositional architectures for randomized smoothing
Miklos Z. Horvath (ETH Zurich); Mark Niklas Müller (ETH Zurich); Marc Fischer (ETH Zurich); Martin Vechev (ETH Zurich)
- Learning stabilizing policies in stochastic control systems
Đorđe Žikelić (IST Austria); Mathias Lechner (IST Austria); Krishnendu Chatterjee (IST Austria); Thomas A Henzinger (IST Austria)
- Disentangling algorithmic recourse
Martin Pawelczyk (University of Tuebingen); Lea Tiyavorabun (University of Amsterdam); Gjergji Kasneci (University of Tuebingen)
- Transfer fairness under distribution shift
Bang An (University of Maryland, College Park); Zora Che (Boston University); Mucong Ding (University of Maryland); Furong Huang (University of Maryland)
- Towards learning to explain with concept bottleneck models: mitigating information leakage
Joshua Lockhart (J.P. Morgan AI Research); Nicolas Marchesotti (J.P. Morgan AI Research); Daniele Magazzeni (J.P. Morgan AI Research); Manuela Veloso (J.P. Morgan)
- Few-shot unlearning
Youngsik Yoon (POSTECH); Jinhwan Nam (POSTECH); Dongwoo Kim (POSTECH); Jungseul Ok (POSTECH)
- Towards data-free model stealing in a hard label setting
Sunandini Sanyal (Indian Institute of Science, Bengaluru); Sravanti Addepalli (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
- Algorithmic recourse in the face of noisy human responses
Martin Pawelczyk (University of Tuebingen); Teresa Datta (Harvard University); Johannes van-den-Heuvel (University of Tuebingen); Gjergji Kasneci (University of Tuebingen); Himabindu Lakkaraju (Harvard)
- Perfectly fair and differentially private selection using the Laplace mechanism
Mina Samizadeh (University of Delaware); Mohammad Mahdi Khalili (University of Delaware)
- Data augmentation via Wasserstein geodesic perturbation for robust electrocardiogram prediction
Jiacheng Zhu (Carnegie Mellon University); Jielin Qiu (Carnegie Mellon University); Zhuolin Yang (UIUC); Michael Rosenberg (University of Colorado Anschutz Medical Campus); Emerson Liu (Allegheny General Hospital); Bo Li (UIUC); Ding Zhao (Carnegie Mellon University)
- Maximizing predictive entropy as regularization for supervised classification
Amrith Setlur (UC Berkeley); Benjamin Eysenbach (CMU); Sergey Levine (UC Berkeley)
- Rationale-inspired natural language explanations with commonsense
Bodhisattwa Prasad Majumder (University of California San Diego); Oana Camburu (Oxford University); Thomas Lukasiewicz (University of Oxford); Julian McAuley (UCSD)