Thursday Series

Events are held every other week on Thursdays, 12-1pm ET (convert to other time zones). Occasionally, events may be weekly.

To receive announcements of upcoming events, join our mailing list or subscribe to our YouTube channel. To participate via Zoom, use the link below (posted a few days before each event). If you are interested in contributing to an event, contact trustworthyml@gmail.com.

Format and Instructions

Events take place in Zoom; some may be recorded and live-streamed to YouTube. In our first year, these events consisted of seminars. Recorded seminars can be accessed here.

In 2022, we have two types of events:

  • Mentorship events, such as panels and fireside chats, to help students learn from the experiences of experts.

  • Reading group events: a space that encourages research discussion and collaboration, with a special focus on newcomers.

Joining Zoom: Zoom is limited to 100 participants, first-come, first-served. We recommend downloading and installing Zoom in advance.

Live-stream and recording: If Zoom reaches capacity, please watch the YouTube live-stream and check Zoom again in case spots open up.

By participating in the event, you agree to abide by the Code of Conduct. Please report any issues to trustworthyml@gmail.com.

Recent Events

Sep 29, 2022: Reading group: The Curious Case of Hallucinations in Neural Machine Translation paper

For this week, we will read and discuss the paper The Curious Case of Hallucinations in Neural Machine Translation by Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt, and the associated blog post Towards Reliable Neural Machine Translation.

The lead author Vikas will join us for the discussion, and here is a quick summary of the paper from Vikas:

  • Machine Translation (MT) represents one of the more mature applications in the domain of Artificial Intelligence. State-of-the-art Neural MT (NMT) systems have reached a level of quality where MT outputs in the average case are often indistinguishable from those of humans. Yet, reliability problems plague state-of-the-art NMT systems, with errors ranging from incorrect translations of salient content to generations untethered from the input (hallucinations). In this paper, we try to obtain a mechanistic understanding of one of the extreme failure modes in NMT, namely hallucinations. We show how the mechanisms behind hallucinations create an inherent reliability problem not only for NMT, but also for related applications which rely heavily on (noisy) web-crawled data for training high-capacity neural networks.
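
For those who want to poke at the phenomenon before the discussion, here is a minimal sketch of translating a sentence and a lightly perturbed copy of it with an off-the-shelf NMT model, in the spirit of the perturbation settings the paper studies. The model (Helsinki-NLP/opus-mt-en-de from Hugging Face), the example sentence, and the perturbation token are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch (not the paper's protocol): translate a sentence and a
# perturbed copy of it, and compare the outputs for signs of degradation.
# Model name, example sentence, and perturbation token are illustrative choices.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"   # public English->German model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

source = "The agreement was signed in Geneva last week."
perturbed = "qz " + source                  # prepend a nonsense token

for text in (source, perturbed):
    batch = tokenizer([text], return_tensors="pt")
    output = model.generate(**batch, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```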

Zoom link: https://illinois.zoom.us/j/83896357171?pwd=dEV0bnIwMFdNS1U1QjZxRmdQWjFtQT09

YouTube live-stream and recording: This event will not be streamed or recorded.

Sep 15, 2022: Reading group: How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective paper

For this week, we will read and discuss the paper How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective by Yimeng Zhang and Yuguang Yao.

The lead author Yimeng will join us for the discussion, and here is a quick summary of the paper from Yimeng:

  • Given the prevalence of adversarial attacks, methods to robustify ML models are now a major focus in research. Nearly all existing works assume that the defender operates on white-box ML models (i.e., with non-confidential model architectures and parameters). However, the white-box assumption may restrict the defense's application in practice. As a result, we propose a novel black-box defense approach, Zeroth-Order AutoEncoder-based Denoised Smoothing (ZO-AE-DS), which is able to tackle the challenge of ZO optimization in high dimensions and convert a pre-trained non-robust ML model into a certifiably robust model using only function queries. We verify the efficacy of our method on the task of image classification and even image reconstruction.
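
As background for the discussion, the sketch below shows the primitive that "using only function queries" refers to: a randomized two-point zeroth-order gradient estimate of a black-box function. It illustrates generic ZO optimization rather than the paper's ZO-AE-DS pipeline, and the test function, smoothing radius, and query budget are arbitrary choices for the example.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_queries=100, rng=None):
    """Two-point randomized gradient estimate of a black-box function f,
    using only function-value queries (the core primitive of ZO optimization)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    grad = np.zeros(d)
    for _ in range(num_queries):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)              # random direction on the unit sphere
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return d * grad / num_queries           # dimension scaling for the sphere estimator

# Sanity check on a function with a known gradient: f(x) = ||x||^2, gradient = 2x.
f = lambda x: float(np.dot(x, x))
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x, num_queries=5000), "vs true gradient", 2 * x)
```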

Zoom link: https://illinois.zoom.us/j/83896357171?pwd=dEV0bnIwMFdNS1U1QjZxRmdQWjFtQT09

YouTube live-stream and recording: This event will not be streamed or recorded.

Sep 1, 2022: Reading group: Formalizing Trust in Artificial Intelligence paper

For this week, we will read and discuss the paper Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI by Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg.

The lead author Alon will join us for the discussion, and here is a quick summary of the paper from Alon:

  • This work aims to discuss the nature of trust in AI: the necessary conditions for this trust, what causes it, for what goal it manifests, and under what conditions this goal is achieved. We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people). This model rests on two key properties: the vulnerability of the user and the ability to anticipate the impact of the AI model's decisions. In the paper we utilize a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (which detaches from the notion of trustworthiness in sociology), and with it the concepts of 'warranted' and 'unwarranted' trust. We then present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.

Zoom link: https://illinois.zoom.us/j/83896357171?pwd=dEV0bnIwMFdNS1U1QjZxRmdQWjFtQT09

YouTube live-stream and recording: This event will not be streamed or recorded.

May 5, 2022: Reading group: Causal Neural Connection paper

For this week, we will read and discuss the paper The Causal-Neural Connection: Expressiveness, Learnability, and Inference by Kevin Xia (Columbia University), Kai-Zhan Lee (Columbia University), Yoshua Bengio (U Montreal / Mila), Elias Bareinboim (Columbia University).

The lead author Kevin will join us for the discussion, and here is a quick summary of the paper from Kevin:

  • The Causal-Neural Connection discusses the tension between expressivity and learnability of neural models. Neural networks are universal function approximators and have strong results in practice, so it may be tempting to believe that they can solve any problem in the form of functions, possibly even making causal claims. Tasks in causal inference typically model reality with structural causal models (SCMs), which represent a collection of mechanisms and exogenous sources of random variation of the system under investigation. Given the expressiveness of neural networks, one may be tempted to surmise that a collection of neural nets can learn any SCM by training on data generated by that SCM. However, this is not the case due to the Neural Causal Hierarchy Theorem, which describes the limitations of what can be learned from data. For example, even with infinite size and perfect optimization, a neural network will never be able to predict the effects of interventions given observational data alone (a toy illustration of this point appears after this summary).

  • Given these limitations, the paper introduces a new type of SCM parameterized with neural networks called the neural causal model (NCM), which, importantly, is fitted with a graphical inductive bias. This means that NCMs are forced to take a specific structure represented by a graph to encode constraints necessary for performing causal inferences. The NCM is used to solve two canonical tasks found in the literature, known as causal effect identification and estimation. Causal identification decides whether a causal query can be computed from the observational data once the graphical constraints are applied, and causal estimation involves subsequently computing this query given finite samples. Leveraging the neural toolbox, an algorithm is developed that is both necessary and sufficient for deciding identifiability and that estimates the causal effect if it is identifiable. This algorithm is implemented in practice using a maximum likelihood estimation approach, and simulations demonstrate its effectiveness on both problems.
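
To make the hierarchy-theorem point above concrete, here is a minimal, self-contained illustration (a standard toy construction, not code from the paper): two SCMs that induce exactly the same observational distribution over (X, Y) yet disagree on the interventional quantity P(Y = 1 | do(X = 1)), so no model, neural or otherwise, can distinguish them from observational data alone.

```python
import numpy as np

def simulate(scm, n=100_000, do_x=None, seed=0):
    """Sample from one of two hand-written SCMs over binary (X, Y).
    SCM "A": X := U, Y := U  (U confounds both; Y ignores X).
    SCM "B": X := U, Y := X  (Y is genuinely caused by X)."""
    rng = np.random.default_rng(seed)
    u = rng.integers(0, 2, size=n)                     # exogenous noise U ~ Bernoulli(0.5)
    x = u.copy() if do_x is None else np.full(n, do_x)
    y = u.copy() if scm == "A" else x.copy()
    return x, y

for scm in ("A", "B"):
    x_obs, y_obs = simulate(scm)                       # observational regime
    _, y_do = simulate(scm, do_x=1)                    # interventional regime do(X=1)
    print(scm,
          "P(X=Y) =", (x_obs == y_obs).mean(),         # 1.0 for both: identical observational law
          "P(Y=1 | do(X=1)) =", y_do.mean())           # ~0.5 for A, 1.0 for B
```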

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Apr 21, 2022: Reading group: Generalized Out-of-Distribution Detection paper

For this week, we will read and discuss the paper Generalized Out-of-Distribution Detection by Jingkang Yang (NTU Singapore), Kaiyang Zhou (NTU Singapore), Yixuan Li (U Wisconsin Madison), Ziwei Liu (NTU Singapore).

The lead author Jingkang will join us for the discussion, and here is a quick summary of the paper from Jingkang:

  • This survey comprehensively reviews the closely related topics of outlier detection (OD), anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and out-of-distribution (OOD) detection, extensively compares their commonalities and differences, and eventually unifies them under the big umbrella of a "generalized OOD detection" framework.

  • We hope that this survey can help readers and participants better understand the open-world field centered on OOD detection. At the same time, it urges future work to learn, compare, and develop ideas and methods from the broader scope of generalized OOD detection, with clear problem definitions and proper benchmarking.
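
For newcomers, the snippet below sketches the maximum softmax probability (MSP) baseline that is commonly used as a reference point across these sub-fields: score each input by the classifier's top softmax probability and flag low-scoring inputs as out-of-distribution. The logits and threshold here are made up for illustration; the survey covers far stronger detectors.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability; higher means 'looks more in-distribution'."""
    z = logits - logits.max(axis=1, keepdims=True)     # stabilize the softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_ood(logits, threshold=0.7):
    """Flag inputs whose MSP falls below a (validation-tuned) threshold as OOD."""
    return msp_score(logits) < threshold

# Made-up logits: one confident in-distribution prediction, one flat/uncertain one.
logits = np.array([[9.0, 0.1, 0.2],
                   [0.4, 0.5, 0.3]])
print(msp_score(logits))   # e.g. ~[0.999, 0.37]
print(flag_ood(logits))    # [False, True]
```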

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Apr 7, 2022: Reading group: Two papers on Causal Inference and Double Machine Learning

For this week, we will read and discuss two papers by Yonghan Jung (Purdue University), Jin Tian (Iowa State University), and Elias Bareinboim (Columbia University):

The lead author Yonghan will join us for the discussion, and here is a quick summary of these two papers from Yonghan:

  • Inferring causal effects from observational data is a fundamental task throughout the empirical sciences. In practice, however, there are still challenges to estimating identifiable causal functionals from finite samples. We aim to fill this gap between causal identification and causal estimation.

  • I will discuss the double/debiased machine learning (DML)-based causal effect estimation problem. Specifically, I will present two versions of the problem. First, I will introduce a DML-based estimation strategy that works for any identifiable causal functional. The formal result is in [paper 1]. Second, I will extend the DML-based causal effect estimators to the scenario where the causal graph is unknown [paper 2]. (A toy sketch of the standard DML recipe for the simplest, back-door case follows this summary.)

  • In this reading group session, I hope to have a discussion on how causal effect identification & estimation can be leveraged in trustworthy AI.
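
Before the session, it may help to see the basic DML recipe in miniature. The sketch below is a generic, cross-fitted AIPW estimator of the average treatment effect for the simplest identifiable case (back-door adjustment with a binary treatment); it is background for the discussion, not the estimators developed in the two papers, and the nuisance models and toy data are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def dml_ate(X, t, y, n_splits=2, seed=0):
    """Cross-fitted AIPW / DML estimate of E[Y | do(T=1)] - E[Y | do(T=0)]
    under back-door adjustment on covariates X, with binary treatment t."""
    scores = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance models are fitted on the training fold only (cross-fitting).
        prop = GradientBoostingClassifier().fit(X[train], t[train])
        out1 = GradientBoostingRegressor().fit(X[train][t[train] == 1], y[train][t[train] == 1])
        out0 = GradientBoostingRegressor().fit(X[train][t[train] == 0], y[train][t[train] == 0])
        e = np.clip(prop.predict_proba(X[test])[:, 1], 0.01, 0.99)   # propensity scores
        m1, m0 = out1.predict(X[test]), out0.predict(X[test])        # outcome regressions
        # Doubly robust (AIPW) score, evaluated on the held-out fold.
        scores[test] = (m1 - m0
                        + t[test] * (y[test] - m1) / e
                        - (1 - t[test]) * (y[test] - m0) / (1 - e))
    return scores.mean()

# Toy simulation where the true effect is 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 2.0 * t + X[:, 0] + rng.normal(size=5000)
print(dml_ate(X, t, y))   # should be close to 2.0
```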

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Mar 24, 2022: Panel on "Tenure Track in Trustworthy ML" with Kai-Wei Chang, Maria De-Arteaga, Gautam Kamath, Olga Russakovsky, Marinka Zitnik

A conversation about research, challenges, opportunities, and work-life balance in tenure-track careers with Kai-Wei Chang (UCLA), Maria De-Arteaga (UT Austin), Gautam Kamath (U Waterloo), Olga Russakovsky (Princeton U), Marinka Zitnik (Harvard Medical School), moderated by Hima Lakkaraju (Harvard U).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_PDsfR3_aRXGSJRGyRsEbMA

YouTube live-stream and recording: https://youtu.be/WOzMGaoCXY4

Dec 16, 2021: Fireside Chat with Cynthia Rudin

A conversation with Prof. Cynthia Rudin (Duke University, winner of the 2021 Squirrel AI Award, often described as the "Nobel Prize of AI") about her life, research journey, and perspectives on academia and industry. Moderated by Hima Lakkaraju (Harvard U).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_ON1ur4NqRBi-BMHzGjaLJQ

YouTube live-stream and recording: https://youtu.be/-Dq3pcYSAmg

Dec 9, 2021: Reading group: Desiderata for Representation Learning

For this week, we will read and discuss Desiderata for Representation Learning: A Causal Perspective by Yixin Wang (UC Berkeley) and Michael I. Jordan (UC Berkeley). Yixin has generously provided a TL;DR summary for people who may not have time to read the paper. Also, one may find the associated slides useful.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Dec 2, 2021: Reading group: Towards Causal Representation Learning

For this week, we will read and discuss Towards Causal Representation Learning by Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Nov 18, 2021: Reading group: On Pearl's Hierarchy and Foundations of Causal Inference

For this week, we will read and discuss On Pearl’s Hierarchy and the Foundations of Causal Inference by Elias Bareinboim, Juan D. Correa, Duligur Ibeling, and Thomas Icard.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.