Past seminars & Recordings

Abstracts and speaker bios for each seminar are included below, along with links for replaying recorded seminars.

Oct 29, 2020: Percy Liang, Stanford University

Surprises in the Quest for Robust Machine Learning

Percy's Abstract: Standard machine learning produces models that are accurate on average but degrade dramatically when the test distribution of interest deviates from the training distribution. We consider three settings where this happens: when test inputs are subject to adversarial attacks, when we are concerned with performance on minority subpopulations, and when the world simply changes (classic domain shift). Our aim is to produce methods that are provably robust to such deviations. In this talk, I will (attempt to) summarize all the work my group has done on this topic over the last three years. We have found many surprises in our quest for robustness: for example, that the "more data" and "bigger models" strategy that works so well for average accuracy sometimes fails out-of-domain. On the other hand, we have found that certain tools such as analysis of linear regression and use of unlabeled data (e.g., robust self-training) have reliably delivered promising results across a number of different settings.

Percy's Bio: Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_-M5Y9REHTMS2tf7e1X1-4w

YouTube live-stream and recording: https://www.youtube.com/watch?v=jCEo8PRJ9NA

Nov 12, 2020 Rising Star Spotlights: Irene Chen and Arpita Biswas

Irene Chen, MIT. Ethical Machine Learning for Healthcare

Arpita Biswas, Indian Institute of Science. Two-Sided Fairness Guarantees for Recommendation Systems

Irene's Abstract: Machine learning (ML) has demonstrated the potential to fundamentally improve healthcare because of its ability to find latent patterns in large observational datasets and scale insights rapidly. However, the use of ML in healthcare also raises numerous ethical concerns, especially as models can amplify existing health inequities. In this talk, I briefly outline two approaches to characterize inequality in ML and adapt models for patients without reliable access to healthcare. First, I decompose cost-based metrics of discrimination in supervised learning into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Second, I describe a deep generative model for disease subtyping while correcting for patient misalignment in disease onset time. I conclude with a pipeline for ethical machine learning in healthcare, ranging from problem selection to post-deployment considerations, and recommendations for future research.

Irene's Bio: Irene Chen is a computer science PhD student at MIT, advised by David Sontag. Her research focuses on machine learning methods to improve clinical care and deepen our understanding of human health, with applications in areas such as heart failure and intimate partner violence. Her work has been published in both machine learning conferences (NeurIPS) and medical journals (Nature Medicine, AMA Journal of Ethics), and covered by media outlets including MIT Tech Review, NPR/WGBH, and Stat News. Prior to her PhD, Irene received her AB in applied math and SM in computational engineering from Harvard University.

Arpita's Abstract: Major B2C eCommerce websites (such as Amazon, Spotify, etc.) are two-sided platforms, with customers on one side and producers on the other. Traditionally, recommendation protocols of these platforms are customer-centric---focusing on maximizing customer satisfaction by tailoring the recommendation according to the personalized preferences of individual customers. However, this may lead to unfair distribution of exposure among the producers and adversely impact their well-being. As more and more people depend on such platforms to earn a living, it is important to strike a balance between fairness among the producers and customer satisfaction. The problem of two-sided fairness in recommendation can be formulated as a hierarchically constrained fair allocation problem. This problem naturally captures a number of other resource-allocation applications, including budgeted course allocation and allocation of cloud computing resources. Our main contribution is to develop a polynomial time algorithm for the problem. In this talk, I’ll discuss the constrained fair allocation problem, and show how the solution can be applied to ensure a two-sided fair recommendation.

Arpita's Bio: Arpita Biswas completed her Ph.D. at the Department of Computer Science and Automation, Indian Institute of Science (IISc). During her Ph.D., she was a recipient of the Google Ph.D. Fellowship. Her Ph.D. dissertation provides algorithms and provable guarantees for fair decision making in resource allocation, recommendation, and classification domains. After completing her Ph.D., she joined Google Research as a Visiting Researcher, where she worked closely with a non-profit organization that aims to improve maternal health among low-income households in India by carrying out a free call-based program for spreading maternal care information. She is joining Harvard University as a Postdoctoral Research Fellow, starting from November 2020. Her primary areas of interest include Algorithmic Game Theory, Optimization, and Machine Learning---in particular, multi-agent learning, incentive mechanisms, market algorithms, scheduling, etc. Thus far, she has worked on problems arising from real-world scenarios like online crowd-sourcing, resource allocation, healthcare, dynamic pricing in transportation, ride-sharing, etc.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_mOjYGQYcS2y8y9qYiPpNIQ

YouTube live-stream and recording: https://youtu.be/KM2vwajbasU

Nov 19, 2020: Ayanna Howard, Georgia Tech

Making the World Better with AI

Ayanna's Abstract: At 27, Dr. Ayanna Howard was hired by NASA to lead a team designing a robot for future Mars exploration missions that could “think like a human and adapt to change.” Her accomplishments since then include being named one of 2015’s most powerful women engineers in the world and one of Forbes’ 2018 U.S. Top 50 Women in Tech. From creating robots to studying the impact of global warming on the Antarctic ice shelves to founding a company that develops STEM education and therapy products for children and those with varying needs, Professor Howard focuses on our role in being responsible global citizens. In this talk, Professor Howard will delve into the implications of recent advances in robotics and AI and explain the critical importance of ensuring diversity and inclusion at all stages to reduce the risk of unconscious bias and of ensuring that robots are designed to be accessible to all. Throughout the talk, Professor Howard will weave in her own experience developing new AI technologies through her technical leadership roles at NASA, Georgia Tech, and in technology startups.

Ayanna's Bio: Dr. Ayanna Howard is Chair of the School of Interactive Computing at the Georgia Institute of Technology. She also serves on the Board of Directors for Autodesk and the Partnership on AI. Prior to Georgia Tech, Dr. Howard was at NASA's Jet Propulsion Laboratory where she functioned as Deputy Manager in the Office of the Chief Scientist. To date, Dr. Howard’s unique accomplishments have been highlighted through a number of awards and articles, including being recognized as one of the 23 most powerful women engineers in the world by Business Insider and one of the Top 50 U.S. Women in Tech by Forbes. She regularly advises on issues concerning robotics, AI, and workforce development. Howard also serves on the board of CRA-WP, a nonprofit dedicated to broadening participation in computing research and education, as well as AAAS COOS, a board-appointed committee with the mandate to advise the Association on matters related to diversity in science, engineering, and related fields.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_hlQKlJ52S0qCNMT_mXnDCg

YouTube live-stream and recording: This seminar will not be live-streamed or recorded.

Dec 3, 2020: Jenn Wortman Vaughan, Microsoft Research

Intelligibility Throughout the Machine Learning Life Cycle

Jenn's Abstract: People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and the people affected by these systems—have at least a basic understanding of how they work. Yet what makes a system “intelligible” is difficult to pin down. Intelligibility is a fundamentally human-centered concept that lacks a one-size-fits-all solution. I will explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, ways of empirically testing whether intelligibility techniques achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.

Jenn's Bio: Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as a member of MSR's FATE group and co-chair of Microsoft’s Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_tFiL3Oc0S4qUEqd6_g2hng

YouTube live-stream and recording: https://youtu.be/bogHfN-RkaA

Dec 17, 2020: Pin-Yu Chen, IBM Research

Practical Backdoor Attacks and Defenses in Machine Learning Systems

Pin-Yu's Abstract: Backdoor attacks are a practical adversarial threat to modern machine learning systems, especially deep neural networks. A backdoor attack is a training-time adversarial attack that embeds Trojan patterns into a well-trained model in order to manipulate the model's decision-making at test time. In this talk, I will start by providing a comprehensive overview of adversarial robustness in the lifecycle of machine learning systems. Then, I will delve into recent backdoor attacks and practical defenses in different scenarios, including standard training and federated learning. The defenses include methods to detect and repair backdoored models. I will also cover a novel application of transfer learning with access-limited models based on the lessons learned from backdoor attacks.
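
As a rough illustration of the threat model described in this abstract, the sketch below poisons a small fraction of a training set by stamping a fixed trigger patch onto images and relabeling them to an attacker-chosen target class. The shapes, patch location, and poisoning rate are assumptions made for illustration, not details from the talk.

```python
# Toy illustration of a training-time backdoor (Trojan) attack: a small fraction of
# training images is stamped with a fixed trigger patch and relabeled to the attacker's
# target class. Shapes, patch location, and poison rate are illustrative assumptions.
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, rng=None):
    """images: (n, H, W) array in [0, 1]; stamps a 3x3 bright patch on poisoned images."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    poisoned = rng.choice(n, size=int(poison_frac * n), replace=False)
    images[poisoned, -3:, -3:] = 1.0   # the Trojan trigger: a small bright corner patch
    labels[poisoned] = target_label    # flip the label to the attacker's target class
    return images, labels, poisoned

x = np.random.default_rng(1).random((1000, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=1000)
x_poisoned, y_poisoned, idx = poison_dataset(x, y, target_label=7)
print(f"{len(idx)} of {len(x)} training images now carry the backdoor trigger")
```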

Pin-Yu's Bio: Dr. Pin-Yu Chen is a research staff member at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen’s recent research focuses on adversarial machine learning and robustness of neural networks. His long-term research vision is building trustworthy machine learning systems. He has published more than 30 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at CVPR’20, ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops for adversarial machine learning. He received a NeurIPS 2017 Best Reviewer Award, and was also the recipient of the IEEE GLOBECOM 2010 GOLD Best Paper Award.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_3wXz3lgJTcSvGtC0re7NKA

YouTube live-stream and recording: https://youtu.be/RY8j_2zIvPY

Jan 7, 2021 Rising Star Spotlights: Lizzie Kumar and Amirata Ghorbani

Lizzie Kumar, University of Utah. Epistemic values in feature importance methods: Lessons from feminist epistemology

Amirata Ghorbani, Stanford University. Equitable Valuation of Data

Lizzie's Abstract: As the public seeks greater accountability and transparency from machine learning algorithms, the research literature on methods to explain algorithms and their outputs has rapidly expanded. Feature importance, the practice of assigning quantitative importance values to the input features of a machine learning model, forms a popular class of such methods. Much of the research on feature importance rests on formalizations that attempt to capture universally desirable properties. We investigate the ways in which epistemic values are implicitly embedded in these methods and analyze the ways in which they conflict with ideas from feminist philosophy. We offer some suggestions on how to conduct research on explanations that respects feminist epistemic values: taking into account the importance of social context and the epistemic privileges of subjugated knowers, and adopting more interactional ways of knowing.

Lizzie's Bio: Lizzie Kumar is a second-year Computing Ph.D. student advised by Suresh Venkatasubramanian at the University of Utah where her work has previously been supported by the ARCS Foundation. She is interested in the practice of analyzing the social impact of machine learning systems and developing responsible AI law and policy. Previously, she developed risk models on the Data Science team at MassMutual while completing her M.S. in Computer Science at the University of Massachusetts, and also holds a B.A. in Mathematics from Scripps College.

Amirata's Abstract: As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions. For example, in healthcare and consumer markets, it has been suggested that individuals should be compensated for the data that they generate, but it is not clear what an equitable valuation for individual data would be. In this talk, we discuss a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on a number of data points to produce a predictor, we propose data Shapley as a metric to quantify the value of each training datum to the predictor's performance. The data Shapley value uniquely satisfies several natural properties of equitable data valuation. We introduce Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. We then briefly discuss the notion of distributional Shapley, where the value of a point is defined in the context of the underlying data distribution.
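
To make the Monte Carlo estimator concrete, here is a minimal, self-contained sketch: each point's value is its average marginal contribution to validation accuracy over random permutations of the training set. The scikit-learn model, the accuracy metric, and the single-class fallback are assumptions for illustration; this is not the authors' released implementation.

```python
# Minimal Monte Carlo estimate of data Shapley values: a point's value is its average
# marginal contribution to validation accuracy over random permutations of the training
# set. The scikit-learn model, accuracy metric, and single-class fallback are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def coalition_score(X, y, X_val, y_val):
    """Validation accuracy of a model trained on the coalition (X, y)."""
    if len(np.unique(y)) < 2:            # cannot fit a classifier on a single class:
        return np.mean(y_val == y[0])    # fall back to predicting that class everywhere
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.score(X_val, y_val)

def monte_carlo_data_shapley(X_tr, y_tr, X_val, y_val, num_permutations=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_tr)
    values = np.zeros(n)
    for _ in range(num_permutations):
        perm = rng.permutation(n)
        prev = np.mean(y_val == np.bincount(y_tr).argmax())  # empty-coalition baseline
        for k in range(1, n + 1):
            score = coalition_score(X_tr[perm[:k]], y_tr[perm[:k]], X_val, y_val)
            values[perm[k - 1]] += score - prev               # marginal contribution
            prev = score
    return values / num_permutations

X, y = make_classification(n_samples=120, n_features=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
shapley = monte_carlo_data_shapley(X_tr[:30], y_tr[:30], X_val, y_val)
print("lowest-value training points:", np.argsort(shapley)[:5])
```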

Amirata's Bio: Amirata Ghorbani is a fifth-year Ph.D. student at Stanford University working with James Zou. His research is focused on problems in machine learning, including equitable methods for data valuation, algorithms to interpret machine learning models, ways to make existing ML predictors more interpretable and fair, and ML systems for healthcare applications such as cardiology and dermatology. He has also worked as a research intern at Google Brain, Google Brain Medical, and Salesforce Research. Before joining Stanford, he received his bachelor's degree in Electrical Engineering from Sharif University of Technology, after doing work in Signal Processing and Game Theory.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_mwddSBuHRROcDmuqDl-q7A

YouTube live-stream and recording: https://youtu.be/_vL1Gy_6m-A

Jan 21, 2021 Zachary Lipton, Carnegie Mellon University

Prediction and Data-Driven Decision-Making in Real World Environments

Zack's Abstract: Most machine learning methodology is developed to address prediction problems under restrictive assumptions and applied to drive decisions in environments where those assumptions are violated. This disconnect between our methodological frameworks and their application has caused confusion both among researchers (who often lack the right formalism to tackle these problems coherently) and practitioners (who have developed a folk tradition of ad hoc practices for deploying and monitoring systems). In this talk, I'll discuss some of the critical disconnects plaguing the application of machine learning and our fledgling efforts to bridge some of these gaps.

Zack's Bio: Zachary Chase Lipton is the BP Junior Chair Assistant Professor of Operations Research and Machine Learning at Carnegie Mellon University and a Visiting Scientist at Amazon AI. His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking and the messy high-dimensional data that characterizes modern deep learning applications. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and a co-author of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks. Find him on Twitter (@zacharylipton) or GitHub (@zackchase).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_HCip_6VzQSOucLtL97Crng

YouTube live-stream and recording: https://youtu.be/fvL6MSzsQ6Q

Feb 4, 2021 Steven Wu, Carnegie Mellon University

Involving Stakeholders in Building Fair ML Systems

Steven's Abstract: Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack a comprehensive understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders' nuanced viewpoints in real-world contexts. This talk will cover our recent work that aims to address this gap. We will first discuss an algorithmic framework that enforces the individual fairness criterion through interactions with a human auditor, who can identify fairness violations without enunciating a fairness (similarity) measure. We then discuss an empirical study on how to elicit stakeholders' fairness notions in the context of a child maltreatment predictive system.

Steven's Bio: Steven Wu is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. His research focuses on (1) how to make machine learning better aligned with societal values, especially privacy and fairness, and (2) how to make machine learning more reliable and robust when algorithms interact with social and economic dynamics. In 2017, he received his Ph.D. in computer science at the University of Pennsylvania, where his doctoral dissertation received Penn’s Morris and Dorothy Rubinoff Award for best thesis. After spending one year as a post-doc researcher at Microsoft Research-New York City, he was an Assistant Professor at the University of Minnesota from 2018 to 2020. His research is supported by an Amazon Research Award, a Facebook Research Award, a Mozilla research grant, a Google Faculty Research Award, a J.P. Morgan Research Faculty Award, and the National Science Foundation.

Zoom registration: https://us02web.zoom.us/webinar/register/WN__GqfnrkaRJOaGiSP_vOAfA

YouTube live-stream and recording: https://youtu.be/gV8f9ZEQxb8

Feb 18, 2021 Celia Cintas, IBM Research Africa

A tale of adversarial attacks & out-of-distribution detection stories

Celia's Abstract: Most deep learning models assume ideal conditions and rely on the assumption that test/production data comes from the same distribution as the training data. However, this assumption is not satisfied in most real-world applications. Test data can differ from the training data due to adversarial perturbations, new classes, noise, or other distribution changes. These shifts in the input data can lead to unknown classes (ones that do not appear during training) being classified as known with high confidence. On the other hand, adversarial perturbations in the input data can cause a sample to be incorrectly classified. We will discuss approaches based on group-based and individual subset scanning methods from the anomalous pattern detection domain and how they can be applied over off-the-shelf DL models.
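
As a point of reference for this problem setting, below is a generic out-of-distribution detection baseline (thresholding the maximum softmax probability). It is included only to make "classifying unknowns as known with high confidence" concrete; it is not the subset-scanning approach presented in the talk, and the threshold and toy logits are assumptions.

```python
# Generic out-of-distribution detection baseline (maximum softmax probability), shown
# only to make the problem concrete; this is NOT the subset-scanning approach from the
# talk. The threshold and toy logits are assumptions.
import numpy as np

def max_softmax_score(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)             # low scores suggest the input may be OOD

rng = np.random.default_rng(0)
in_dist = rng.normal(0, 1, (5, 10)) + 6 * np.eye(10)[rng.integers(0, 10, 5)]  # confident
ood = rng.normal(0, 1, (5, 10))                                               # diffuse
scores = max_softmax_score(np.vstack([in_dist, ood]))
print(scores < 0.5)   # the diffuse (OOD-like) rows tend to be flagged
```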

Celia's Bio: Celia Cintas is a Research Scientist at IBM Research Africa - Nairobi, Kenya. She is a member of the AI Science team at the Kenya Lab. Her current research focuses on the improvement of ML techniques to address challenges in Global Health in developing countries and on exploring subset scanning for anomaly detection under generative models. Previously, she was a grantee of the National Scientific and Technical Research Council (CONICET), working on Deep Learning and Geometric Morphometrics for population studies at LCI-UNS and IPCSH-CONICET (Argentina) as part of the Consortium for the Analysis of the Diversity and Evolution of Latin America (CANDELA). During her PhD, she was a visiting student at University College London (UK). She was also a postdoctoral visiting researcher at the University of Jaén (Spain), applying ML to heritage and archaeological studies. She holds a Ph.D. in Computer Science from Universidad Nacional del Sur (Argentina). She has co-chaired several SciPy Latin America conferences and is a happy member of LinuxChix Argentina. She served as Financial Aid Co-Chair for the SciPy (USA) Committee (2016-2019) and Diversity Co-Chair for SciPy 2020.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_s9rGThypSBuHRuRT3wHLoA

YouTube live-stream and recording: https://youtu.be/XWaaWWvPwDA

Mar 4, 2021 Rising Star Spotlights

Shibani Santurkar, MIT. How Do ML Models Make Decisions?

Victor Farias, Universidade Federal do Ceará, Brazil. Differential Privacy for Non-numeric Queries via Local Sensitivity

Shibani's Abstract: Machine learning models today attain impressive accuracy on many benchmark tasks. Yet, these models remain remarkably brittle---small perturbations of natural inputs can completely degrade their performance. Why is this the case? In this talk, we take a closer look at this brittleness, and examine how it can, in part, be attributed to the fact that our models often make decisions very differently to humans. Viewing neural networks as feature extractors, we study how these extracted features may diverge from those used by humans. We then take a closer look at the building blocks of the ML pipeline to identify potential sources of this divergence and discuss how we can make progress towards mitigating it.

Shibani's Bio: Shibani Santurkar is a PhD student in the MIT EECS Department, advised by Aleksander Mądry and Nir Shavit. Her research revolves around two broad themes: developing a precise understanding of widely-used deep learning techniques, and identifying avenues to make machine learning robust and reliable. Prior to joining MIT, she received a bachelor's degree in electrical engineering from IIT Bombay, India. She is a recipient of the Google Fellowship.

Victor's Abstract: Differential privacy is the state-of-the-art formal definition for data release under strong privacy guarantees. A variety of mechanisms have been proposed in the literature for privately releasing the output of non-numeric queries (i.e., queries that produce discrete outputs) by perturbing the output of the query. Those mechanisms use the notion of global sensitivity to calibrate the amount of noise one should inject to mask any individual's contribution. A related notion, local sensitivity, has been used for many numeric queries (i.e., queries that produce numeric outputs) to reduce the injected noise; however, it has not been used for non-numeric queries. In this talk, we discuss how to adapt the notion of local sensitivity to non-numeric queries and present a generic approach for applying it. We illustrate the effectiveness of this approach by applying it to two diverse problems: influential node analysis and decision tree induction.
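
For context, the standard baseline for private selection over discrete outputs is the exponential mechanism calibrated with global sensitivity; a generic sketch is below. The local-sensitivity adaptation described in the talk is not reproduced here, and the toy utility function is an assumption.

```python
# Generic exponential mechanism for private selection over discrete outputs, calibrated
# with *global* sensitivity (the baseline the talk improves on); the local-sensitivity
# variant from the talk is not reproduced here. The toy utility is an assumption.
import numpy as np

def exponential_mechanism(candidates, utility, epsilon, global_sensitivity, seed=0):
    """Privately select one candidate; utility maps a candidate to a score on the data."""
    rng = np.random.default_rng(seed)
    scores = np.array([utility(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * global_sensitivity)
    logits -= logits.max()                              # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Example: privately pick the most frequent item; a count has global sensitivity 1.
data = ["a", "b", "b", "b", "c", "c"]
items = ["a", "b", "c"]
print(exponential_mechanism(items, lambda x: data.count(x), epsilon=1.0, global_sensitivity=1.0))
```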

Victor's Bio: Victor is a fifth-year PhD student at Universidade Federal do Ceará, Brazil, advised by Prof. Javam Machado. His research interests include Differential Privacy, Machine Learning, and Databases. His thesis applies local sensitivity to differentially private selection, with applications to graph analysis and tree induction algorithms. This work has been carried out in collaboration with Divesh Srivastava at AT&T Labs Research. Victor completed a Master's in Computer Science at the Department of Computer Science of Universidade Federal do Ceará, Brazil, where he worked on elasticity for distributed databases using machine learning, including a research visit at Télécom SudParis, France.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_JjYVljUhRqOB9Oj59uA4ZA

YouTube live-stream and recording: https://youtu.be/_s-Li0I18vU

Apr 1, 2021 Gautam Kamath, University of Waterloo

CoinPress: Practical Private Estimation

Gautam's Abstract: We introduce a simple framework for differentially private estimation. As a case study, we will focus on mean estimation for sub-Gaussian data. In this setting, our algorithm is highly effective both theoretically and practically, matching state-of-the-art theoretical bounds, and concretely outperforming all previous methods. Specifically, previous estimators either have weak empirical accuracy at small sample sizes, perform poorly for multivariate data, or require the user to provide strong a priori estimates for the parameters. No knowledge of differential privacy will be assumed. Based on joint work with Sourav Biswas, Yihe Dong, and Jonathan Ullman.
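
As background, here is a simplified sketch of a single step of private mean estimation under zero-concentrated differential privacy: clip points to a ball, average them, and add Gaussian noise calibrated to the clipped mean's sensitivity. CoinPress iterates a step like this while shrinking the ball; the radius, privacy budget, and data below are illustrative assumptions, not the paper's code.

```python
# Simplified single step of private mean estimation under zero-concentrated DP: clip
# points to a ball, average, add Gaussian noise scaled to the clipped mean's sensitivity.
# CoinPress iterates a step like this while shrinking the ball; the radius, privacy
# budget rho, and data below are illustrative assumptions, not the paper's code.
import numpy as np

def private_mean_step(x, center, radius, rho, seed=0):
    """rho-zCDP estimate of the mean of points clipped to a ball around `center`."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    diffs = x - center
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    clipped = center + diffs * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * radius / n                # L2 sensitivity of the clipped mean
    sigma = sensitivity / np.sqrt(2.0 * rho)      # Gaussian noise scale for rho-zCDP
    return clipped.mean(axis=0) + rng.normal(scale=sigma, size=d)

x = np.random.default_rng(1).normal(loc=3.0, scale=1.0, size=(2000, 2))
print(private_mean_step(x, center=np.zeros(2), radius=10.0, rho=0.5))
```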

Gautam's Bio: Dr. Gautam Kamath is an Assistant Professor at the University of Waterloo’s Cheriton School of Computer Science, and a faculty affiliate at the Vector Institute. He is mostly interested in principled methods for statistics and machine learning, with a focus on settings which are common in modern data analysis (high-dimensions, robustness, and privacy). He was a Microsoft Research Fellow at the Simons Institute for the Theory of Computing for the Fall 2018 semester program on Foundations of Data Science and the Spring 2019 semester program on Data Privacy: Foundations and Applications. Before that, he completed his Ph.D. at MIT, affiliated with the Theory of Computing group in CSAIL.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_X9QB0ynaSTWwDoa77i7zTQ

YouTube live-stream and recording: https://youtu.be/OqYjRT4Z8M8

Apr 15, 2021 Suresh Venkatasubramanian, University of Utah

The limits of Shapley values as a method for explaining the predictions of an ML system

Suresh's Abstract: One of the more pressing concerns around the deployment of ML systems is explainability: can we understand why an ML system made the decision that it did? This question can be unpacked in a variety of ways, and one approach that has become popular is the idea of feature influence: assigning a score to each feature that represents its (relative) influence on an outcome (either locally for a particular input, or globally). One of the most influential such approaches is based on cooperative game theory, where features are modeled as “players” and feature influence is captured as “player contribution” via the Shapley value of a game. The argument is that the axiomatic framework provided by Shapley values is well-aligned with the needs of an explanation system. But is it? I’ll talk about two pieces of work that nail down mathematical deficiencies of Shapley values as a way of estimating feature influence and quantify the limits of Shapley values via a fascinating geometric interpretation that comes with interesting algorithmic challenges.
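
For readers unfamiliar with the setup being critiqued, here is a brute-force sketch of Shapley-value feature attribution for a single prediction, enumerating every feature coalition (only feasible for a handful of features). "Absent" features are filled in from a background point, which is exactly the kind of modeling choice whose consequences the talk examines; the toy model and background are assumptions.

```python
# Brute-force Shapley-value attribution for one feature of one prediction, enumerating
# every coalition (feasible only for a handful of features). "Absent" features are filled
# from a background point, one of the modeling choices whose consequences the talk
# examines. The toy linear model and background are assumptions.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_feature_attribution(model_fn, x, background, feature_idx):
    d = len(x)
    others = [j for j in range(d) if j != feature_idx]

    def value(S):
        z = background.copy()
        z[list(S)] = x[list(S)]          # present features take their true values
        return model_fn(z)

    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            total += weight * (value(S + (feature_idx,)) - value(S))
    return total

w = np.array([2.0, -1.0, 0.5])                    # toy linear model
model_fn = lambda z: float(w @ z)
x, background = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(shapley_feature_attribution(model_fn, x, background, feature_idx=0))  # ~2.0 = w0*(x0-b0)
```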

Suresh's Bio: Suresh Venkatasubramanian is a professor at the University of Utah. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness, and more generally the impact of automated decision-making systems in society. Suresh was the John and Marva Warnock Assistant Professor at the U, and has received a CAREER award from the NSF for his work in the geometry of probability, as well as a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as in other media outlets. He is a member of the Computing Community Consortium Council of the CRA, a member of the board of the ACLU in Utah, and a member of New York City’s Failure to Appear Tool (FTA) Research Advisory Council, as well as the Research Advisory Council for the First Judicial District of Pennsylvania.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_FJBqBy7qSS2B_PJbDJEr_A

YouTube live-stream and recording: https://youtu.be/5izWQN3SKQs

May 13, 2021 Alexander D'Amour, Google Brain

Underspecification Presents Challenges for Credibility in Modern Machine Learning

Alexander's Abstract: ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain.

Alexander's Bio: Alexander D'Amour is a Senior Research Scientist at Google Brain in Cambridge, MA. Alex’s work is primarily focused on the interface between machine learning and causality; both on how machine learning techniques can be used to estimate causal effects more effectively, and on how core concepts from causal inference can be used to improve real-world outcomes when machine learning is deployed. In research and in consulting, he has worked on applications in fairness, social network analysis, sports, healthcare, education, marketing, finance, microfinance, and entertainment. Formerly, he was a Neyman Visiting Assistant Professor in the Department of Statistics at UC Berkeley. Alex earned his PhD in the Department of Statistics at Harvard University.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_8SP2pFsOQ8aIrVe8pveF9g

YouTube live-stream and recording: https://youtu.be/lGeUtmKCmKs

June 10, 2021 Katherine Heller and Jessica Schrouff, Google

Integrating predictions from Electronic Health Records in the clinical setting

Abstract: Predicting patient deterioration has been considered a high-yield goal by clinical systems: can we detect when a patient's health starts deteriorating so that we can avoid the worst-case scenario? To answer this question, many research works have developed machine learning models based on Electronic Health Records (EHRs, the time series of the patient’s interactions with the clinical system). In this talk, we will highlight two models predicting patient deterioration: one predicting sepsis (Futoma et al., 2017), and another predicting Acute Kidney Injury (AKI, Tomasev et al., 2020). We will then discuss how integrating these models into clinical practice is not a straightforward operation (Elish et al., 2021), with consequences for the clinical staff’s workflow. We will also discuss how providing explanations from these EHR models can be challenging (Mincu et al., 2021), with many technical and human factors still to be investigated.

Katherine's Bio: Katherine is a research scientist in Google Brain and a member of Brain Health Research. She works at the boundary of Machine Learning (ML) and Healthcare, particularly focusing on fairness and ethics in the ML+Health space, and the development of inclusive mobile health technology. Prior to joining Google, she was Statistical Science faculty at Duke University, where she developed a sepsis detection system now in use at Duke University Hospital, and a nationally released iOS app which tries to complete the picture of peoples' Multiple Sclerosis course between clinic visits. Katherine received a BS in CS and Applied Math from SUNY Stony Brook, an MS in CS from Columbia University, and a PhD in Machine Learning from the Gatsby Computational Neuroscience Unit at UCL. She was then a postdoc on an EPSRC fellowship in Engineering at the University of Cambridge, and an NSF postdoc fellow in Brain and Cognitive Sciences at MIT.

Jessica's Bio: Jessica is a research scientist at Google Research working on machine learning for healthcare. Before joining Google in 2019, she was a Marie Curie post-doctoral fellow at University College London (UK) and Stanford University (USA), developing machine learning techniques for neuroscience discovery and clinical predictions. Throughout her career, Jessica's interests have lain not only in the technical advancement of machine learning methods, but also in critical aspects of their deployment, such as their credibility, fairness, robustness, and interpretability. She has published papers in both science and machine learning venues, and released open-source software now used by 300+ neuroscience teams across the world. She is also involved in DEI initiatives, such as Women in Machine Learning (WiML), and founded the Women in Neuroscience Repository (www.winrepo.org).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_9KGcGA51Q1uQMo1swzN9JA

YouTube live-stream and recording: https://youtu.be/gH0yQN4Yprs

June 24, 2021 Industry Spotlights: Rumman Chowdhury and Jiahao Chen

Rumman Chowdhury, Twitter

Jiahao Chen, JP Morgan

Rumman and Jiahao will talk about the unique practical challenges of deploying trustworthy ML in industry.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_pscCi8cjQQCC_--ss2js1A

YouTube live-stream and recording: This seminar will not be live-streamed or recorded.

July 8, 2021 Cynthia Rudin, Duke University

Do Simpler Models Exist?

Abstract: Many data science problems admit accurate models that are surprisingly simple. We have a preliminary hypothesis for why this occurs, which is that many datasets admit a lot of almost-equally-accurate models, and among these are simple models. I will present some interesting experiments illustrating that large “Rashomon” sets of almost-equally-accurate models correlate with good performance across different algorithms. This experiment leads to an easy calculation to check for the possibility of a simpler-yet-equally-accurate model before finding one. I will briefly discuss work on “variable importance diagrams” that help us visualize the “Rashomon set” of approximately-equally-good models.
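	
A rough way to see the "Rashomon set" idea in code: train many models from a simple hypothesis family and measure what fraction land within epsilon of the best test accuracy. The family, epsilon, and dataset below are assumptions chosen for illustration; they are not the experiments from the talk.

```python
# Rough illustration of a "Rashomon set": train many models from a simple hypothesis
# family and measure what fraction land within epsilon of the best test accuracy.
# The family, epsilon, and dataset are assumptions chosen for illustration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accs = []
for depth in range(1, 8):
    for seed in range(30):   # vary random feature subsets to sample the family
        clf = DecisionTreeClassifier(max_depth=depth, max_features=0.5, random_state=seed)
        accs.append(clf.fit(X_tr, y_tr).score(X_te, y_te))

accs = np.array(accs)
epsilon = 0.01
rashomon_fraction = (accs >= accs.max() - epsilon).mean()
print(f"fraction of models within {epsilon} of the best accuracy: {rashomon_fraction:.2f}")
```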

Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award and a fellow of the American Statistical Association and the Institute of Mathematical Statistics.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_Ler3QWPTRxqpeGO_AftjGw

YouTube live-stream and recording: https://youtu.be/Jy_AgSVSJMA

July 22, 2021 Marinka Zitnik, Harvard Medical School

Graph Representation Learning for Biomedical Discovery

Abstract: Graphs are pervasive in science and medicine, from maps of interactions between molecules in a cell to dependencies between diseases in a person, all the way to relationships between individuals in a population. First, I will describe our efforts in learning deep graph representations that are actionable and allow users to receive robust predictions that can be interpreted meaningfully. These methods optimize graph transformation functions to represent graphs in compact vector spaces such that performing algebraic operations in the space reflects the graph topology. Second, I will describe applications in drug development and precision medicine. Our methods enabled repurposing of drugs for an emerging disease where predictions were experimentally confirmed in human cells, and explanations gave insights leading to the discovery of a new class of drugs. The methods also enabled discovering dozens of drug combinations safer for patients than today's treatments. Last, I will highlight Therapeutics Data Commons (https://tdcommons.ai), a resource with ML-ready datasets and tasks that we are developing to facilitate algorithmic innovation in the broad area of drug discovery and development.
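
To ground the idea of optimizing graph transformation functions that embed nodes in compact vector spaces, here is a bare-bones sketch of one round of neighborhood aggregation (message passing), the basic operation underlying most graph representation learning methods; the tiny graph, dimensions, and random weights are purely illustrative assumptions.

```python
# Bare-bones sketch of one round of neighborhood aggregation ("message passing"), the
# basic operation behind most graph representation learning methods. The tiny graph,
# feature dimensions, and random weights are purely illustrative assumptions.
import numpy as np

def message_passing_layer(A, H, W):
    """A: (n, n) adjacency with self-loops; H: (n, d) node features; W: (d, d_out) weights."""
    deg = A.sum(axis=1, keepdims=True)
    H_agg = (A @ H) / deg                         # average each node's neighborhood
    return np.maximum(H_agg @ W, 0.0)             # linear transform + ReLU

rng = np.random.default_rng(0)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)         # a 4-node path graph with self-loops
H = rng.normal(size=(4, 8))                       # initial node features
W = rng.normal(size=(8, 4))
embeddings = message_passing_layer(A, H, W)
print(embeddings.shape)                           # (4, 4): one embedding per node
```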

Bio: Marinka Zitnik is an Assistant Professor at Harvard University, with appointments in the Department of Biomedical Informatics and the Broad Institute of MIT and Harvard. Dr. Zitnik investigates machine learning, focusing on challenges brought forward by interconnected data in science, medicine, and health. Her research recently won best paper and research awards from the International Society for Computational Biology, the Bayer Early Excellence in Science Award, an Amazon Faculty Research Award, a Rising Star Award in EECS, and a Next Generation Recognition in Biomedicine, making her the only young scientist to receive such recognition in both EECS and Biomedicine.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_jJ69jgJVS6OrLfsfzCsgAg

YouTube live-stream and recording: https://youtu.be/CoOCcVPk2gs

August 5, 2021 Hoda Heidari, Carnegie Mellon University

A Discussion of Mathematical Formulations of Fairness through the Lens of Equality of Opportunity

Abstract: I begin by presenting a simple mapping between existing mathematical notions of fairness for Machine Learning and models of Equality of opportunity (EOP)—an extensively studied ideal of fairness in political philosophy. Through this conceptual mapping, I will argue that many existing definitions of fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, the EOP interpretation serves as a unifying framework for understanding the normative assumptions underlying existing notions of fairness. Additionally, the EOP view provides a systematic approach for defining new, context-aware mathematical formulations of fairness. I will conclude with a discussion of limitations and directions for future work.
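
To make two of the notions mentioned above concrete, the sketch below computes equalized-odds gaps (TPR/FPR differences across groups) and a predictive-value-parity gap (PPV difference) from binary predictions. The toy data and variable names are assumptions for illustration only.

```python
# Computing equalized-odds gaps (TPR/FPR differences across groups) and a predictive-
# value-parity gap (PPV difference) from binary predictions. The toy data and variable
# names are assumptions for illustration only.
import numpy as np

def group_rates(y_true, y_pred, group, g):
    m = group == g
    tp = np.sum((y_pred == 1) & (y_true == 1) & m)
    fp = np.sum((y_pred == 1) & (y_true == 0) & m)
    fn = np.sum((y_pred == 0) & (y_true == 1) & m)
    tn = np.sum((y_pred == 0) & (y_true == 0) & m)
    tpr = tp / max(tp + fn, 1)   # true positive rate
    fpr = fp / max(fp + tn, 1)   # false positive rate
    ppv = tp / max(tp + fp, 1)   # positive predictive value
    return tpr, fpr, ppv

rng = np.random.default_rng(0)
y_true, y_pred, group = (rng.integers(0, 2, 1000) for _ in range(3))
tpr0, fpr0, ppv0 = group_rates(y_true, y_pred, group, 0)
tpr1, fpr1, ppv1 = group_rates(y_true, y_pred, group, 1)
print("equalized-odds gaps:", abs(tpr0 - tpr1), abs(fpr0 - fpr1))
print("predictive-value-parity gap:", abs(ppv0 - ppv1))
```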

Bio: Hoda Heidari is currently an Assistant Professor in Machine Learning and Societal Computing at the School of Computer Science, Carnegie Mellon University. Her research is broadly concerned with the societal and economic aspects of Artificial Intelligence, and in particular, with unfairness and opacity arising through Machine Learning. Her work has won a best-paper award at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) and an exemplary track award at the ACM Conference on Economics and Computation (EC). She has organized several academic events around responsible and trustworthy AI, including a tutorial at the Web Conference (WWW) and workshops at the Neural Information Processing Systems (NeurIPS) conference and the International Conference on Learning Representations (ICLR).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_ThLzR0KfQCiwFYWi1o6nbA

YouTube live-stream and recording: This seminar will not be live-streamed or recorded

August 19, 2021 Rising Star Spotlights

Shiori Sagawa, Stanford. Improving Robustness to Distribution Shifts: Methods and Benchmarks

Vihari Piratla, IIT Bombay. Machine Learning as a Service: The next million users

Shiori's Abstract: Machine learning models deployed in the real world constantly face distribution shifts, yet current models are not robust to these shifts; they can perform well when the train and test distributions are identical, but still have their performance plummet when evaluated on a different test distribution. In this talk, I will discuss methods and benchmarks for improving robustness to distribution shifts. First, we consider the problem of spurious correlations and show how to mitigate it with a combination of distributionally robust optimization (DRO) and controlling model complexity---e.g., through strong L2 regularization, early stopping, or underparameterization. Second, we present WILDS, a curated and diverse collection of 10 datasets with real-world distribution shifts, that aims to address the under-representation of real-world shifts in the datasets widely used in the ML community today. We observe that existing methods fail to mitigate performance drops due to these distribution shifts, underscoring the need for new training methods that produce models which are more robust to the types of distribution shifts that arise in practice.
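
A minimal sketch of the distributionally robust optimization idea mentioned above, in the group-DRO flavor: instead of minimizing the average training loss, minimize the worst per-group average loss. It is written in plain PyTorch as an illustration of the objective, not the authors' released implementation; the toy losses and group labels are assumptions.

```python
# Minimal group-DRO style objective: instead of minimizing the average training loss,
# minimize the worst per-group average loss. Plain PyTorch, illustrative only; not the
# authors' released implementation. Toy losses and group labels are assumptions.
import torch

def worst_group_loss(losses, groups, num_groups):
    """losses: per-example losses (n,); groups: integer group id per example (n,)."""
    group_means = []
    for g in range(num_groups):
        mask = groups == g
        if mask.any():
            group_means.append(losses[mask].mean())
    return torch.stack(group_means).max()

per_example = torch.tensor([0.2, 0.9, 0.1, 0.4])   # e.g., criterion(..., reduction="none")
groups = torch.tensor([0, 0, 1, 1])
loss = worst_group_loss(per_example, groups, num_groups=2)
print(loss)   # tensor(0.5500): the worse of the two group averages
```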

Shiori's Bio: Shiori Sagawa is a third-year PhD student at Stanford University, advised by Percy Liang. She studies robustness to distribution shifts, and to this end, she has developed methods based on distributionally robust optimization, analyzed these algorithms in the context of deep learning models, and recently built a benchmark on distribution shifts in the wild. She is a recipient of the Apple Scholars in AI/ML PhD Fellowship and the Herbert Kunzel Stanford Graduate Fellowship.

Vihari's Abstract: The increasing compute and memory requirements of ML models have led to the broad adoption of Machine Learning (ML) as a Service APIs. However, the one-size-fits-all paradigm of ML Services is naive when catering to millions of users. We will look at two specific challenges.

First, ML Services should report performance as a surface over the combination of environment-characterizing attributes instead of a single aggregate. Surface mapping is non-trivial since labelling data spanning combinatorially large attribute combinations is practically impossible. We will look at an efficient estimation technique that addresses this challenge.

Second, ML Services magnify the prevailing performance gaps (in ML systems) between domains, such as demographic groups. We will discuss a potential cause of such performance gaps and how to fix them.

Vihari's Bio: Vihari Piratla is a PhD student at IIT Bombay, advised by Prof. Sunita Sarawagi and Prof. Soumen Chakrabarti. His research focuses on problems that could enable wider adoption of ML systems, such as their evaluation, generalization, and adaptation to new deployment environments. He is a recipient of the Google PhD Fellowship.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_zUsV30_rTtGP9J3U7ITQRA

YouTube live-stream and recording: https://www.youtube.com/watch?v=PZvE-DEudsA

September 2, 2021 Sara Magliacane, University of Amsterdam

Causality-inspired ML: what can causality do for ML? The domain adaptation case

Abstract: Applying machine learning to real-world cases often requires methods that are trustworthy and robust with respect to heterogeneity, data that is missing not at random or corrupted, selection bias, non-i.i.d. data, etc., and that can generalize across different domains. Moreover, many tasks are inherently trying to answer causal questions and gather actionable insights, a task for which correlations are usually not enough. Several of these issues are addressed in the rich causal inference literature. On the other hand, classical causal inference methods often require either complete knowledge of a causal graph or enough experimental data (interventions) to estimate it accurately.

Recently, a new line of research has focused on causality-inspired machine learning, i.e., on applying ideas from causal inference to machine learning methods without necessarily knowing, or even trying to estimate, the complete causal graph. In this talk, I will present an example of this line of research in the unsupervised domain adaptation case, in which we have labelled data in a set of source domains and unlabelled data in a target domain ("zero-shot"), for which we want to predict the labels. In particular, given certain assumptions, our approach is able to select a set of provably "stable" features (a separating set), for which the generalization error can be bounded, even in the case of arbitrarily large distribution shifts. As opposed to other works, it also exploits the information in the unlabelled target data, allowing for some unseen shifts with respect to the source domains. While using ideas from causal inference, our method never aims at reconstructing the causal graph or even the Markov equivalence class, showing that causal inference ideas can help machine learning even in this more relaxed setting.

Bio: Sara Magliacane is an assistant professor in the Informatics Institute at the University of Amsterdam and a Research Scientist at the MIT-IBM Watson AI Lab. She received her PhD at the VU Amsterdam on logics for causal inference under uncertainty in 2017, focusing on learning causal relations jointly from different experimental settings, especially in the case of latent confounders and small samples. After a year in IBM Research NY as a postdoc, she joined the MIT-IBM Watson AI Lab in 2019 as a Research Scientist, where she has been working on methods to design experiments that would allow one to learn causal relations in a sample-efficient and intervention-efficient way. Her current focus is on causality-inspired machine learning, i.e. applications of causal inference to machine learning and especially transfer learning, and formally safe reinforcement learning.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_2YwbW4O-QH6Zp1tTEFokOA

YouTube live-stream and recording: https://youtu.be/X8foHlW-Dsw

September 16, 2021 Sherri Rose, Stanford University

Fair Machine Learning for Continuous Outcomes in Risk Adjustment

Abstract: It is well-known in health policy that financing changes can lead to improved health outcomes and gains in access to care. More than 50 million people in the U.S. are enrolled in an insurance product that risk adjusts payments, and this has huge financial implications—hundreds of billions of dollars. Unfortunately, current risk adjustment formulas are known to undercompensate payments to health insurers for certain marginalized groups of enrollees (by underpredicting their spending). This incentivizes insurers to discriminate against these groups by designing their plans such that individuals in undercompensated groups will be less likely to enroll, impacting access to health care for these groups. We will discuss new fair statistical machine learning methods for continuous outcomes designed to improve risk adjustment formulas for undercompensated groups. Then, we combine these tools with other approaches (e.g., leveraging variable selection to reduce health condition upcoding) for simplifying and improving the performance of risk adjustment systems, while centering fairness. Lastly, we discuss the paucity of methods for identifying marginalized groups in risk adjustment, and more broadly in the algorithmic fairness literature, including groups defined by multiple intersectional attributes. Extending the concept of variable importance, we construct a new measure of "group importance" to identify groups defined by multiple attributes. This work provides policy makers with a tool to uncover incentives for selection in insurance markets and a path towards more equitable health coverage. (Joint work with Anna Zink, Harvard & Tom McGuire, Harvard.)
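
To illustrate the undercompensation issue in the abstract, the toy sketch below compares each group's mean predicted spending (a proxy for payment) against its mean actual spending; a negative net value means the group is underpredicted and hence undercompensated. The data frame and column names are assumptions for illustration.

```python
# Toy undercompensation diagnostic: compare each group's mean predicted spending (the
# payment proxy) with its mean actual spending. A negative net value means the group is
# underpredicted and hence undercompensated. Data and column names are assumptions.
import pandas as pd

df = pd.DataFrame({
    "actual":    [1000, 5200, 800, 7400, 300, 6100],
    "predicted": [1100, 4300, 900, 6000, 350, 5200],
    "group":     ["A", "B", "A", "B", "A", "B"],
})
net_compensation = (
    df.assign(residual=df["predicted"] - df["actual"])
      .groupby("group")["residual"].mean()
)
print(net_compensation)   # group B is undercompensated in this toy example
```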

Bio: Sherri Rose, Ph.D. is an Associate Professor and Co-Director of the Health Policy Data Science Lab at Stanford University. Her methodological research focuses on machine learning for prediction and causal inference. Within health policy, Dr. Rose works on risk adjustment, ethical algorithms in health care, comparative effectiveness research, and health program evaluation. In 2011, Dr. Rose coauthored the first book on machine learning for causal inference, with a sequel text released in 2018. She is a fellow of the American Statistical Association and her other honors include the Bernie J. O’Brien New Investigator Award, an NIH Director’s New Innovator Award, and the Mortimer Spiegelman Award, which recognizes the statistician under age 40 who has made the most significant contributions to public health statistics. Dr. Rose comes from a low-income background and is committed to increasing justice, equity, diversity and inclusion in the mathematical and health sciences.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_x8nHs8OTQlODVbecZ5mWow

YouTube live-stream and recording: https://youtu.be/mTqQbsvgQSw

September 30, 2021 Sanmi Koyejo, University of Illinois at Urbana-Champaign

Towards Algorithms for Measuring and Mitigating ML Unfairness

Abstract: It is increasingly evident that widely-deployed machine learning (ML) models can lead to discriminatory outcomes and exacerbate group disparities. The renewed interest in measuring and mitigating (un)fairness has led to various metrics and mitigation strategies. Nevertheless, the measurement problem remains challenging, as existing metrics may not capture tradeoffs relevant to the context at hand, and different fairness definitions can lead to incompatible outcomes. To this end, I will outline metric elicitation as a framework for addressing this metric selection problem -- by efficiently estimating implicit preferences from stakeholders via interactive feedback. Towards mitigation, I will briefly outline some new approaches for overlapping groups, unknown sensitive attributes, and other scenarios beyond the most widely studied settings.

Bio: Sanmi (Oluwasanmi) Koyejo is an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Koyejo also spends time at Google as a part of the Google Brain team. Koyejo's research interests are in developing the principles and practice of trustworthy machine learning. Additionally, Koyejo focuses on applications to neuroscience and healthcare. Koyejo completed his Ph.D. in Electrical Engineering at the University of Texas at Austin, advised by Joydeep Ghosh, and completed postdoctoral research at Stanford University. His postdoctoral research was primarily with Russell A. Poldrack and Pradeep Ravikumar. Koyejo has been the recipient of several awards, including a best paper award from the Conference on Uncertainty in Artificial Intelligence (UAI), a Skip Ellis Early Career Award, a Sloan Fellowship, an NSF CAREER award, a Kavli Fellowship, an IJCAI early career spotlight, and a trainee award from the Organization for Human Brain Mapping (OHBM). Koyejo serves as the president of the Black in AI organization.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_jaEkcA8PQom3qqupt-AAcw

YouTube live-stream and recording: https://youtu.be/QgeZyuo6eD4

November 11, 2021 Rising Star Spotlights

Biwei Huang, Carnegie Mellon University. Learning and Using Causal Knowledge: Opportunities and Challenges

Aahlad Puli, New York University. Predictive Modeling in the Presence of Nuisance-Induced Spurious Correlations

Biwei's Abstract: Understanding causal relationships is a fundamental problem in scientific research, and recently, causal analysis has also attracted much interest in computer science and statistics. One focus of this talk is how to find causal relationships from observational data, which is known as causal discovery. It serves as an appropriate alternative to interventions and randomized experiments in practice and is able to identify causal structure and quantitative models. Specifically, I will introduce recent methodological developments in causal discovery in complex environments with distribution shifts and unobserved confounders. In addition, I will also discuss the challenges we face towards more reliable causal discovery. Besides learning causality, another problem of interest is how causality can help understand and advance machine learning and artificial intelligence. Specifically, I will show what we can leverage from a causal understanding, and how, to facilitate efficient, effective, and interpretable generalization in transfer-learning tasks.

Biwei's Bio: Biwei Huang is a final-year Ph.D. candidate at Carnegie Mellon University. Her research interests are mainly in three aspects: (1) automated causal discovery in complex environments with theoretical guarantees, (2) advancing machine learning from the causal perspective, and (3) scientific applications of causal discovery approaches. Her research contributions have been published in JMLR, ICML, NeurIPS, KDD, AAAI, IJCAI, and UAI. She successfully led a NeurIPS’20 workshop on causal discovery and causality-inspired machine learning and co-organizes the first Conference on Causal Learning and Reasoning (CLeaR 2022). She is a recipient of the Presidential Fellowship at CMU and is an Apple Scholar in AI/ML.

Aahlad's Abstract: In many prediction problems, spurious correlations are induced by a changing relationship between the label and a nuisance variable that is also correlated with the covariates. For example, in classifying animals in natural images, the background, which is the nuisance, can predict the type of animal. This nuisance-label relationship does not always hold, and the performance of a model trained under one such relationship may be poor on data with a different nuisance-label relationship. In this talk, I will describe an algorithm, Nuisance-Randomized Distillation (NuRD), for building predictive models that perform well regardless of the nuisance-label relationship. NuRD constructs representations that distill out the influence of the nuisances while maximizing the information they share with the label. We evaluate NuRD on several tasks, including chest X-ray classification where, using non-lung patches as the nuisance, NuRD produces models that predict pneumonia under strong spurious correlations.

Aahlad's Bio: Aahlad is a fourth-year Ph.D. student in Computer Science at NYU, advised by Prof. Rajesh Ranganath. He currently works on developing algorithms for predictive modeling in the presence of spurious correlations. His prior work has tackled causal effect estimation and survival analysis. He has a master’s degree in Computer Science from NYU and bachelor's and master’s degrees in Electrical Engineering from IIT Madras.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_PLbzQyxTSyWHDL-s_GY4iQ

YouTube live-stream and recording: https://youtu.be/YUIBmOlzG7M

November 11, 2021 Nicholas Carlini, Google Brain

Extracting training data from neural networks

Abstract: Trustworthy machine learning models must respect the privacy of their training datasets, especially when training on sensitive or personal data. Unfortunately, we have found that current models are not private. Given access to a pre-trained language model, we show that it is possible to extract individual examples from the dataset that was used to train the model. We then investigate various heuristic approaches to improve privacy, and show that many defenses can be easily attacked. We conclude with potential next steps that might allow us to better understand and control the ways in which models memorize their training data.
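
A condensed sketch of the style of extraction attack discussed here: sample freely from a public language model, then flag generations the model finds unusually "easy" (low perplexity) relative to how compressible they are. The Hugging Face GPT-2 model, the sampling settings, and the scoring ratio are illustrative assumptions, not the exact pipeline from the talk.

```python
# Condensed sketch of the extraction recipe: sample freely from a public language model,
# then flag generations the model finds unusually "easy" (low perplexity) relative to
# how compressible they are. Hugging Face GPT-2, the sampling settings, and the scoring
# ratio are illustrative assumptions, not the exact pipeline from the talk.
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss          # mean token cross-entropy
    return float(torch.exp(loss))

prompt = tokenizer(tokenizer.eos_token, return_tensors="pt").input_ids   # unconditional
samples = model.generate(prompt, do_sample=True, top_k=40, max_length=64,
                         num_return_sequences=5, pad_token_id=tokenizer.eos_token_id)

for s in samples:
    text = tokenizer.decode(s, skip_special_tokens=True)
    if not text.strip():
        continue
    score = perplexity(text) / len(zlib.compress(text.encode("utf-8")))
    print(f"{score:8.4f}  {text[:60]!r}")           # low scores flag candidate memorized text
```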

Bio: Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, and for this has received best paper awards at ICML, USENIX Security and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_XGRuyFUWRmeZ2kzhOJpqvQ

YouTube live-stream and recording: https://www.youtube.com/watch?v=2Xl2B2R7_1M