Thursday Series

Events are held every other Thursday from 12 to 1pm ET (convert to other time zones). Occasionally, events may be weekly.

To receive announcements of upcoming events, join our mailing list or subscribe to our YouTube channel. To participate via Zoom, use the link below (posted a few days before each event). If you are interested in contributing to an event, contact trustworthyml@gmail.com.

Format and Instructions

Events take place in Zoom; some may be recorded and live-streamed to YouTube. In our first year, these events consisted of seminars. Recorded seminars can be accessed here.

In 2022, we have two types of events:

  • Mentorship events, such as panels and fireside chats, that help students learn from the experiences of experts

  • Reading group events, which provide a space for research discussion and collaboration, with a special focus on newcomers

Joining Zoom: Zoom is limited to 100 participants, first-come, first-served. We recommend downloading and installing Zoom in advance.

Live-stream and recording: If Zoom reaches capacity, please watch the YouTube live-stream and check Zoom again in case spots open up.

By participating in the event, you agree to abide by the Code of Conduct. Please report any issues to trustworthyml@gmail.com.

Recent Events

May 5, 2022: Reading group: Causal Neural Connection paper

For this week, we will read and discuss the paper The Causal-Neural Connection: Expressiveness, Learnability, and Inference by Kevin Xia (Columbia University), Kai-Zhan Lee (Columbia University), Yoshua Bengio (U Montreal / Mila), and Elias Bareinboim (Columbia University).

Lead author Kevin will join us for the discussion; here is his quick summary of the paper:

  • The Causal-Neural Connection discusses the tension between the expressivity and learnability of neural models. Neural networks are universal function approximators with strong results in practice, so it may be tempting to believe that they can solve any problem that can be posed as function approximation, possibly even making causal claims. Tasks in causal inference typically model reality with structural causal models (SCMs), which represent a collection of mechanisms and exogenous sources of random variation of the system under investigation. Given the expressiveness of neural networks, one may be tempted to surmise that a collection of neural nets can learn any SCM by training on data generated by that SCM. However, this is not the case due to the Neural Causal Hierarchy Theorem, which describes the limitations of what can be learned from data. For example, even with infinite capacity and perfect optimization, a neural network will never be able to predict the effects of interventions from observational data alone.

  • Given these limitations, the paper introduces a new type of SCM parameterized with neural networks called the neural causal model (NCM), which, importantly, is fitted with a graphical inductive bias. This means that NCMs are forced to take a specific structure represented by a graph to encode constraints necessary for performing causal inferences. The NCM is used to solve two canonical tasks found in the literature, known as causal effect identification and estimation. Causal identification decides whether a causal query can be computed from the observational data once the graphical constraints are applied, and causal estimation involves subsequently computing this query given finite samples. Leveraging the neural toolbox, an algorithm is developed that is both sufficient and necessary to decide identifiability and estimates the causal effect if identifiable. This algorithm is implemented in practice using a maximum likelihood estimation approach, and simulations demonstrate its effectiveness on both problems.
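To make the hierarchy-theorem point concrete, here is a small illustrative sketch (written for this page, not taken from the paper): two hand-coded SCMs that induce the same observational distribution over binary X and Y, yet disagree about the effect of the intervention do(X = 1). A learner trained only on observational samples cannot tell them apart.

```python
import random

# Two toy SCMs over binary (X, Y) with exogenous noise U ~ Bernoulli(0.5).
# Observationally they generate identical data, but they disagree under do(X=1).

def scm_a(do_x=None):
    u = int(random.random() < 0.5)
    x = u if do_x is None else do_x
    y = x                      # mechanism: X causes Y
    return x, y

def scm_b(do_x=None):
    u = int(random.random() < 0.5)
    x = u if do_x is None else do_x
    y = u                      # mechanism: the confounder U causes Y; X is irrelevant
    return x, y

def p_y1(scm, do_x=None, n=50_000):
    # Monte Carlo estimate of P(Y = 1), optionally under an intervention on X
    return sum(scm(do_x)[1] for _ in range(n)) / n
```

Observationally both models produce X = Y = U, so p_y1(scm_a) and p_y1(scm_b) are both about 0.5. Under do(X = 1), however, scm_a gives P(Y = 1) = 1 while scm_b still gives 0.5.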

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Apr 21, 2022: Reading group: Generalized Out-of-Distribution Detection paper

For this week, we will read and discuss the paper Generalized Out-of-Distribution Detection by Jingkang Yang (NTU Singapore), Kaiyang Zhou (NTU Singapore), Yixuan Li (U Wisconsin Madison), and Ziwei Liu (NTU Singapore).

Lead author Jingkang will join us for the discussion; here is his quick summary of the paper:

  • This survey comprehensively reviews the closely related topics of outlier detection (OD), anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and out-of-distribution (OOD) detection, extensively compares their commonalities and differences, and ultimately unifies them under the umbrella of a "generalized OOD detection" framework.

  • We hope that this survey can help readers and participants better understand the open-world field centered on OOD detection. At the same time, it urges future work to learn from, compare against, and develop ideas and methods within the broader scope of generalized OOD detection, with clear problem definitions and proper benchmarking.
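As a concrete (and deliberately simple) reference point for the methods the survey covers, the classic maximum-softmax-probability (MSP) baseline flags an input as OOD when the classifier's most confident softmax score falls below a threshold. A minimal sketch, written for illustration and not taken from the paper:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw class scores
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits):
    # Maximum softmax probability: low values suggest the input is OOD
    return max(softmax(logits))

def is_ood(logits, threshold=0.5):
    # Flag an input as out-of-distribution when the model is not confident
    return msp_score(logits) < threshold
```

A confident prediction like logits [10.0, 0.0, 0.0] yields an MSP near 1 and is kept as in-distribution, while near-uniform logits fall below the (hypothetical) threshold and are flagged.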

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Apr 7, 2022: Reading group: Two papers on Causal Inference and Double Machine Learning

For this week, we will read and discuss two papers by Yonghan Jung (Purdue University), Jin Tian (Iowa State University), and Elias Bareinboim (Columbia University):

Lead author Yonghan will join us for the discussion; here is his quick summary of the two papers:

  • Inferring causal effects from observational data is a fundamental task throughout the empirical sciences. In practice, however, there are still challenges to estimating identifiable causal functionals from finite samples. We aim to fill this gap between causal identification and causal estimation.

  • I will discuss double/debiased machine learning (DML)-based causal effect estimation. Specifically, I will present two versions of the problem. First, I will introduce a DML-based estimation strategy that works for any identifiable functional; the formal result is in [paper 1]. Second, I will extend the DML-based causal effect estimators to the scenario where the causal graph is unknown [paper 2].

  • In this reading group session, I hope to have a discussion on how causal effect identification & estimation can be leveraged in trustworthy AI.
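To give a flavor of the residual-on-residual idea behind DML, here is an illustrative sketch of its simplest special case (linear nuisance fits, no cross-fitting) on simulated data. This is written for this page, not code from the papers; full DML replaces the linear fits with flexible ML models and adds cross-fitting.

```python
import random

def fit_slope(xs, ys):
    # Ordinary least squares slope through mean-centred data
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
n = 5000
w = [random.gauss(0, 1) for _ in range(n)]                        # confounder
x = [2 * wi + random.gauss(0, 1) for wi in w]                     # treatment depends on W
y = [1.5 * xi + 3 * wi + random.gauss(0, 1)                       # true effect of X is 1.5
     for xi, wi in zip(x, w)]

# Naive regression of Y on X is biased upward by the confounder W
naive = fit_slope(x, y)

# Residual-on-residual: partial W out of both X and Y, then regress the residuals
bx = fit_slope(w, x)
rx = [xi - bx * wi for xi, wi in zip(x, w)]
by = fit_slope(w, y)
ry = [yi - by * wi for yi, wi in zip(y, w)]
theta = fit_slope(rx, ry)                                         # close to 1.5
```

Here the naive slope lands near 2.7 because W drives both X and Y, while the residualized estimate theta recovers the true effect of 1.5.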

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Mar 24, 2022: Panel on "Tenure Track in Trustworthy ML" with Kai-Wei Chang, Maria De-Arteaga, Gautam Kamath, Olga Russakovsky, Marinka Zitnik

A conversation about research, challenges, opportunities, and work-life balance in tenure-track careers with Kai-Wei Chang (UCLA), Maria De-Arteaga (UT Austin), Gautam Kamath (U Waterloo), Olga Russakovsky (Princeton U), Marinka Zitnik (Harvard Medical School), moderated by Hima Lakkaraju (Harvard U).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_PDsfR3_aRXGSJRGyRsEbMA

YouTube live-stream and recording: https://youtu.be/WOzMGaoCXY4

Dec 16, 2021: Fireside Chat with Cynthia Rudin

A conversation with Prof. Cynthia Rudin (Duke University, winner of the 2021 "AI Nobel") about her life, research journey, and perspectives on academia and industry. Moderated by Hima Lakkaraju (Harvard U).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_ON1ur4NqRBi-BMHzGjaLJQ

YouTube live-stream and recording: https://youtu.be/-Dq3pcYSAmg

Dec 9, 2021: Reading group: Desiderata for Representation Learning

For this week, we will read and discuss Desiderata for Representation Learning: A Causal Perspective by Yixin Wang (UC Berkeley) and Michael I. Jordan (UC Berkeley). Yixin has generously provided a TL;DR summary for people who may not have time to read the paper. One may also find the associated slides useful.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Dec 2, 2021: Reading group: Towards Causal Representation Learning

For this week, we will read and discuss Towards Causal Representation Learning by Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.

Nov 18, 2021: Reading group: On Pearl's Hierarchy and Foundations of Causal Inference

For this week, we will read and discuss On Pearl’s Hierarchy and the Foundations of Causal Inference by Elias Bareinboim, Juan D. Correa, Duligur Ibeling, and Thomas Icard.

Zoom link: https://us02web.zoom.us/j/83664690773?pwd=WlJOQzJDY0lHVm0rVjNsaEJWazhDdz09

YouTube live-stream and recording: This event will not be streamed or recorded.