Past Seminars & Recordings

Recorded seminars can be replayed below.

Oct 29, 2020: Percy Liang, Stanford University

Surprises in the Quest for Robust Machine Learning

Percy's Abstract: Standard machine learning produces models that are accurate on average but degrade dramatically when the test distribution of interest deviates from the training distribution. We consider three settings where this happens: when test inputs are subject to adversarial attacks, when we are concerned with performance on minority subpopulations, and when the world simply changes (classic domain shift). Our aim is to produce methods that are provably robust to such deviations. In this talk, I will (attempt to) summarize all the work my group has done on this topic over the last three years. We have found many surprises in our quest for robustness: for example, that the "more data" and "bigger models" strategy that works so well for average accuracy sometimes fails out-of-domain. On the other hand, we have found that certain tools such as analysis of linear regression and use of unlabeled data (e.g., robust self-training) have reliably delivered promising results across a number of different settings.

Percy's Bio: Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_-M5Y9REHTMS2tf7e1X1-4w

YouTube live-stream and recording: https://www.youtube.com/watch?v=jCEo8PRJ9NA

Twitter thread to continue the conversation: https://twitter.com/trustworthy_ml/status/1321863535144529926?s=20

Nov 12, 2020: Rising Star Spotlights: Irene Chen and Arpita Biswas

Irene Chen, MIT. Ethical Machine Learning for Healthcare

Arpita Biswas, Indian Institute of Science. Two-Sided Fairness Guarantees for Recommendation Systems

Irene's Abstract: Machine learning (ML) has demonstrated the potential to fundamentally improve healthcare because of its ability to find latent patterns in large observational datasets and scale insights rapidly. However, the use of ML in healthcare also raises numerous ethical concerns, especially as models can amplify existing health inequities. In this talk, I briefly outline two approaches to characterize inequality in ML and adapt models for patients without reliable access to healthcare. First, I decompose cost-based metrics of discrimination in supervised learning into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Second, I describe a deep generative model for disease subtyping while correcting for patient misalignment in disease onset time. I conclude with a pipeline for ethical machine learning in healthcare, ranging from problem selection to post-deployment considerations, and recommendations for future research.
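
As a rough illustration of the first idea (characterizing a per-group error gap and its variance component), here is a minimal sketch using bootstrap retraining on synthetic data. It is not the decomposition from the talk; the logistic-regression learner, the toy data, and the group variable are all illustrative assumptions.

```python
# Minimal sketch (not the talk's method): per-group error and a bootstrap-based
# variance proxy for a binary classifier. All data and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def group_error(model, X, y, g, group):
    mask = (g == group)
    return np.mean(model.predict(X[mask]) != y[mask])

def variance_component(models, X, g, group):
    # Disagreement of bootstrap models with their majority vote, per group:
    # an empirical proxy for the variance term of the zero-one loss.
    mask = (g == group)
    preds = np.stack([m.predict(X[mask]) for m in models])  # shape (B, n_group)
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    return np.mean(preds != majority)

# Synthetic data with a binary group attribute g.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
g = rng.integers(0, 2, size=2000)
y = (X[:, 0] + 0.5 * g * X[:, 1] + rng.normal(size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)

# Bootstrap-retrain B models to separate the variance term from the rest of the gap.
B = 20
models = [
    LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    for idx in (rng.integers(0, len(X_tr), size=len(X_tr)) for _ in range(B))
]

for grp in (0, 1):
    err = np.mean([group_error(m, X_te, y_te, g_te, grp) for m in models])
    var = variance_component(models, X_te, g_te, grp)
    print(f"group {grp}: expected error={err:.3f}, variance component={var:.3f}")
```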

Irene's Bio: Irene Chen is a computer science PhD student at MIT, advised by David Sontag. Her research focuses on machine learning methods to improve clinical care and deepen our understanding of human health, with applications in areas such as heart failure and intimate partner violence. Her work has been published in both machine learning conferences (NeurIPS) and medical journals (Nature Medicine, AMA Journal of Ethics), and covered by media outlets including MIT Tech Review, NPR/WGBH, and Stat News. Prior to her PhD, Irene received her AB in applied math and SM in computational engineering from Harvard University.

Arpita's Abstract: Major B2C eCommerce websites (such as Amazon, Spotify, etc.) are two-sided platforms, with customers on one side and producers on the other. Traditionally, recommendation protocols of these platforms are customer-centric---focusing on maximizing customer satisfaction by tailoring recommendations to the personalized preferences of individual customers. However, this may lead to an unfair distribution of exposure among the producers and adversely impact their well-being. As more and more people depend on such platforms to earn a living, it is important to strike a balance between fairness among the producers and customer satisfaction. The problem of two-sided fairness in recommendation can be formulated as a hierarchically constrained fair allocation problem. This problem naturally captures a number of other resource-allocation applications, including budgeted course allocation and allocation of cloud computing resources. Our main contribution is a polynomial-time algorithm for this problem. In this talk, I’ll discuss the constrained fair allocation problem and show how the solution can be applied to ensure two-sided fair recommendation.

Arpita's Bio: Arpita Biswas completed her Ph.D. at the Department of Computer Science and Automation, Indian Institute of Science (IISc). During her Ph.D., she was a recipient of the Google Ph.D. Fellowship. Her Ph.D. dissertation provides algorithms and provable guarantees for fair decision making in resource allocation, recommendation, and classification domains. After completing her Ph.D., she joined Google Research as a Visiting Researcher, where she worked closely with a non-profit organization that aims to improve maternal health among low-income households in India through a free call-based program for spreading maternal care information. She is joining Harvard University as a Postdoctoral Research Fellow starting in November 2020. Her primary areas of interest include Algorithmic Game Theory, Optimization, and Machine Learning---in particular, multi-agent learning, incentive mechanisms, market algorithms, scheduling, etc. Thus far, she has worked on problems arising from real-world scenarios such as online crowd-sourcing, resource allocation, healthcare, dynamic pricing in transportation, ride-sharing, etc.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_mOjYGQYcS2y8y9qYiPpNIQ

YouTube live-stream and recording: https://youtu.be/KM2vwajbasU

Twitter thread to continue the conversation: https://twitter.com/trustworthy_ml/status/1326944468122132480?s=20

Nov 19, 2020: Ayanna Howard, Georgia Tech

Making the World Better with AI

Ayanna's Abstract: At 27, Dr. Ayanna Howard was hired by NASA to lead a team designing a robot for future Mars exploration missions that could “think like a human and adapt to change.” Her accomplishments since then include being named one of 2015’s most powerful women engineers in the world and one of Forbes’ 2018 U.S. Top 50 Women in Tech. From creating robots to studying the impact of global warming on the Antarctic ice shelves to founding a company that develops STEM education and therapy products for children and those with varying needs, Professor Howard focuses on our role in being responsible global citizens. In this talk, Professor Howard will delve into the implications of recent advances in robotics and AI and explain the critical importance of ensuring diversity and inclusion at all stages to reduce the risk of unconscious bias and of ensuring that robots are designed to be accessible to all. Throughout the talk, Professor Howard will weave in her own experience developing new AI technologies through her technical leadership roles at NASA, Georgia Tech, and in technology startups.

Ayanna's Bio: Dr. Ayanna Howard is Chair of the School of Interactive Computing at the Georgia Institute of Technology. She also serves on the Board of Directors for Autodesk and the Partnership on AI. Prior to Georgia Tech, Dr. Howard was at NASA's Jet Propulsion Laboratory, where she served as Deputy Manager in the Office of the Chief Scientist. To date, Dr. Howard’s unique accomplishments have been highlighted through a number of awards and articles, including being recognized as one of the 23 most powerful women engineers in the world by Business Insider and one of the Top 50 U.S. Women in Tech by Forbes. She regularly advises on issues concerning robotics, AI, and workforce development. Howard also serves on the board of CRA-WP, a nonprofit dedicated to broadening participation in computing research and education, as well as AAAS COOS, a board-appointed committee with the mandate to advise the Association on matters related to diversity in science, engineering, and related fields.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_hlQKlJ52S0qCNMT_mXnDCg

YouTube live-stream and recording: This seminar will not be live-streamed or recorded.

Twitter thread to continue the conversation: https://twitter.com/trustworthy_ml/status/1329481200340123648?s=20

Dec 3, 2020: Jenn Wortman Vaughan, Microsoft Research

Intelligibility Throughout the Machine Learning Life Cycle

Jenn's Abstract: People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and the people affected by these systems—have at least a basic understanding of how they work. Yet what makes a system “intelligible” is difficult to pin down. Intelligibility is a fundamentally human-centered concept that lacks a one-size-fits-all solution. I will explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, ways of empirically testing whether intelligibility techniques achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.

Jenn's Bio: Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as a member of MSR's FATE group and co-chair of Microsoft’s Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_tFiL3Oc0S4qUEqd6_g2hng

YouTube live-stream and recording: https://youtu.be/bogHfN-RkaA

Twitter thread to continue the conversation: https://twitter.com/trustworthy_ml/status/1334554412673544194?s=20

Dec 17, 2020: Pin-Yu Chen, IBM Research

Practical Backdoor Attacks and Defenses in Machine Learning Systems

Pin-Yu's Abstract: A backdoor attack is a practical adversarial threat to modern machine learning systems, especially for deep neural networks. It is a training-time adversarial attack that embeds Trojan patterns into a well-trained model in order to manipulate machine decision-making at test time. In this talk, I will start by providing a comprehensive overview of adversarial robustness in the lifecycle of machine learning systems. Then, I will delve into recent backdoor attacks and practical defenses in different scenarios, including standard training and federated learning. The defenses include methods to detect and repair backdoored models. I will also cover a novel application of transfer learning with access-limited models based on the lessons learned from backdoor attacks.
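
For readers unfamiliar with how such Trojan patterns are planted, the following minimal sketch (an illustration under simple assumptions, not the speaker's code) poisons a small fraction of a toy image dataset with a fixed trigger patch and an attacker-chosen target label.

```python
# A minimal sketch of how a training-time backdoor (Trojan) is typically planted:
# stamp a small trigger pattern on a fraction of training images and relabel them
# to an attacker-chosen target class. Illustrative only; not the talk's exact setup.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_frac=0.05, seed=0):
    """images: (N, H, W) float array in [0, 1]; labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white patch in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_class          # attacker-chosen label
    return images, labels, idx

# Toy usage: a model trained on (poisoned_x, poisoned_y) will tend to predict
# `target_class` whenever the trigger patch is present at test time.
x = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y, poisoned_idx = poison_dataset(x, y, target_class=7)
print(f"poisoned {len(poisoned_idx)} of {len(x)} training examples")
```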

Pin-Yu's Bio: Dr. Pin-Yu Chen is a research staff member at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen’s recent research focuses on adversarial machine learning and robustness of neural networks. His long-term research vision is building trustworthy machine learning systems. He has published more than 30 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at CVPR’20, ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He received a NeurIPS 2017 Best Reviewer Award and the IEEE GLOBECOM 2010 GOLD Best Paper Award.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_3wXz3lgJTcSvGtC0re7NKA

YouTube live-stream and recording: https://youtu.be/RY8j_2zIvPY

Twitter thread to continue the conversation: Check back again after the seminar.

Jan 7, 2021: Rising Star Spotlights: Lizzie Kumar and Amirata Ghorbani

Lizzie Kumar, University of Utah. Epistemic values in feature importance methods: Lessons from feminist epistemology

Amirata Ghorbani, Stanford University. Equitable Valuation of Data

Lizzie's Abstract: As the public seeks greater accountability and transparency from machine learning algorithms, the research literature on methods to explain algorithms and their outputs has rapidly expanded. Feature importance, the practice of assigning quantitative importance values to the input features of a machine learning model, forms a popular class of such methods. Much of the research on feature importance rests on formalizations that attempt to capture universally desirable properties. We investigate the ways in which epistemic values are implicitly embedded in these methods and analyze the ways in which they conflict with ideas from feminist philosophy. We offer some suggestions on how to conduct research on explanations that respects feminist epistemic values: taking into account the importance of social context and the epistemic privileges of subjugated knowers, and adopting more interactional ways of knowing.

Lizzie's Bio: Lizzie Kumar is a second-year Computing Ph.D. student advised by Suresh Venkatasubramanian at the University of Utah where her work has previously been supported by the ARCS Foundation. She is interested in the practice of analyzing the social impact of machine learning systems and developing responsible AI law and policy. Previously, she developed risk models on the Data Science team at MassMutual while completing her M.S. in Computer Science at the University of Massachusetts, and also holds a B.A. in Mathematics from Scripps College.

Amirata's Abstract: As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions. For example, in healthcare and consumer markets, it has been suggested that individuals should be compensated for the data that they generate, but it is not clear what an equitable valuation of individual data would be. In this talk, we discuss a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on a number of data points to produce a predictor, we propose data Shapley as a metric to quantify the value of each training datum to the predictor's performance. The data Shapley value uniquely satisfies several natural properties of equitable data valuation. We introduce Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. We then briefly discuss the notion of distributional Shapley, where the value of a point is defined in the context of the underlying data distribution.
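
As a concrete illustration of the Monte Carlo idea mentioned above, here is a minimal permutation-sampling sketch of data Shapley estimation; the 1-nearest-neighbor learner, the toy data, and the number of permutations are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of Monte Carlo (permutation-sampling) estimation of data
# Shapley values. The utility is validation accuracy of a model retrained on
# each prefix of a random permutation of the training set. Illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def utility(train_idx, X_tr, y_tr, X_val, y_val):
    """Validation accuracy of a model trained on the given subset of points."""
    if len(train_idx) < 2 or len(set(y_tr[train_idx])) < 2:
        return 0.0                       # not enough data/classes to fit
    model = KNeighborsClassifier(n_neighbors=1).fit(X_tr[train_idx], y_tr[train_idx])
    return model.score(X_val, y_val)

def data_shapley(X_tr, y_tr, X_val, y_val, n_perms=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_tr)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev_u = utility([], X_tr, y_tr, X_val, y_val)
        for k, i in enumerate(perm):
            u = utility(perm[: k + 1], X_tr, y_tr, X_val, y_val)
            values[i] += u - prev_u      # marginal contribution of point i
            prev_u = u
    return values / n_perms

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X_tr = rng.normal(size=(30, 2)); y_tr = (X_tr[:, 0] > 0).astype(int)
X_val = rng.normal(size=(100, 2)); y_val = (X_val[:, 0] > 0).astype(int)
print(np.round(data_shapley(X_tr, y_tr, X_val, y_val, n_perms=50), 3))
```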

Amirata's Bio: Amirata Ghorbani is a fifth-year Ph.D. student at Stanford University working with James Zou. His research is focused on problems in machine learning, including equitable methods for data valuation, algorithms to interpret machine learning models, ways to make existing ML predictors more interpretable and fair, and ML systems for healthcare applications such as cardiology and dermatology. He has also worked as a research intern at Google Brain, Google Brain Medical, and Salesforce Research. Before joining Stanford, he received his bachelor's degree in Electrical Engineering from Sharif University of Technology, after doing some work in signal processing and game theory.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_mwddSBuHRROcDmuqDl-q7A

YouTube live-stream and recording: https://youtu.be/_vL1Gy_6m-A

Twitter thread to continue the conversation: Check back again after the seminar.

Jan 21, 2021: Zachary Lipton, Carnegie Mellon University

Prediction and Data-Driven Decision-Making in Real World Environments

Zack's Abstract: Most machine learning methodology is developed to address prediction problems under restrictive assumptions and applied to drive decisions in environments where those assumptions are violated. This disconnect between our methodological frameworks and their application has caused confusion both among researchers (who often lack the right formalism to tackle these problems coherently) and practitioners (who have developed a folk tradition of ad hoc practices for deploying and monitoring systems). In this talk, I'll discuss some of the critical disconnects plaguing the application of machine learning and our fledgling efforts to bridge some of these gaps.

Zack's Bio: Zachary Chase Lipton is the BP Junior Chair Assistant Professor of Operations Research and Machine Learning at Carnegie Mellon University and a Visiting Scientist at Amazon AI. His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking and the messy high-dimensional data that characterizes modern deep learning applications. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and a co-author of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks. Find him on Twitter (@zacharylipton) or GitHub (@zackchase).

Zoom registration: https://us02web.zoom.us/webinar/register/WN_HCip_6VzQSOucLtL97Crng

YouTube live-stream and recording: https://youtu.be/fvL6MSzsQ6Q

Twitter thread to continue the conversation: https://twitter.com/trustworthy_ml/status/1352311719465537537?s=20

Feb 4, 2021: Steven Wu, Carnegie Mellon University

Involving Stakeholders in Building Fair ML Systems

Steven's Abstract: Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack a comprehensive understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders' nuanced viewpoints in real-world contexts. This talk will cover our recent work that aims to address this gap. We will first discuss an algorithmic framework that enforces the individual fairness criterion through interactions with a human auditor, who can identify fairness violations without enunciating a fairness (similarity) measure. We then discuss an empirical study on how to elicit stakeholders' fairness notions in the context of a child maltreatment predictive system.

Steven's Bio: Steven Wu is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. His research focuses on (1) how to make machine learning better aligned with societal values, especially privacy and fairness, and (2) how to make machine learning more reliable and robust when algorithms interact with social and economic dynamics. In 2017, he received his Ph.D. in computer science at the University of Pennsylvania, where his doctoral dissertation received Penn’s Morris and Dorothy Rubinoff Award for best thesis. After spending one year as a post-doc researcher at Microsoft Research-New York City, he was an Assistant Professor at the University of Minnesota from 2018 to 2020. His research is supported by an Amazon Research Award, a Facebook Research Award, a Mozilla research grant, a Google Faculty Research Award, a J.P. Morgan Research Faculty Award, and the National Science Foundation.

Zoom registration: https://us02web.zoom.us/webinar/register/WN__GqfnrkaRJOaGiSP_vOAfA

YouTube live-stream and recording: https://youtu.be/gV8f9ZEQxb8

Twitter thread to continue the conversation: Check back again after the seminar.

Feb 18, 2021: Celia Cintas, IBM Research Africa

A tale of adversarial attacks & out-of-distribution detection stories

Celia's Abstract: Most deep learning models assume ideal conditions and rely on the assumption that test/production data comes from the same distribution as the training data. However, this assumption is not satisfied in most real-world applications. Test data can differ from the training data due to adversarial perturbations, new classes, noise, or other distribution changes. These shifts in the input data can lead models to classify unknown types (classes that do not appear during training) as known classes with high confidence. Adversarial perturbations in the input data can likewise cause a sample to be incorrectly classified. We will discuss group-based and individual subset scanning methods from the anomalous pattern detection domain and how they can be applied to off-the-shelf DL models.

Celia's Bio: Celia Cintas is a Research Scientist at IBM Research Africa - Nairobi, Kenya. She is a member of the AI Science team at the Kenya Lab. Her current research focuses on improving ML techniques to address challenges in global health in developing countries and on exploring subset scanning for anomaly detection under generative models. Previously, she was a grantee of the National Scientific and Technical Research Council (CONICET), working on deep learning and geometric morphometrics for population studies at LCI-UNS and IPCSH-CONICET (Argentina) as part of the Consortium for Analysis of the Diversity and Evolution of Latin America (CANDELA). During her PhD, she was a visiting student at University College London (UK). She was also a postdoctoral visiting researcher at Jaén University (Spain), applying ML to heritage and archaeological studies. She holds a Ph.D. in Computer Science from Universidad del Sur (Argentina). She has co-chaired several SciPy Latin America conferences and is a happy member of LinuxChix Argentina. She served as Financial Aid Co-Chair for the SciPy (USA) Committee (2016-2019) and Diversity Co-Chair for SciPy 2020.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_s9rGThypSBuHRuRT3wHLoA

YouTube live-stream and recording: https://youtu.be/XWaaWWvPwDA

Twitter thread to continue the conversation: Check back again after the seminar.

Mar 4, 2021: Rising Star Spotlights: Shibani Santurkar and Victor Farias

Shibani Santurkar, MIT. How Do ML Models Make Decisions?

Victor Farias, Universidade Federal do Ceará, Brazil. Differential Privacy for Non-numeric Queries via Local Sensitivity

Shibani's Abstract: Machine learning models today attain impressive accuracy on many benchmark tasks. Yet, these models remain remarkably brittle---small perturbations of natural inputs can completely degrade their performance. Why is this the case? In this talk, we take a closer look at this brittleness and examine how it can, in part, be attributed to the fact that our models often make decisions very differently from humans. Viewing neural networks as feature extractors, we study how these extracted features may diverge from those used by humans. We then take a closer look at the building blocks of the ML pipeline to identify potential sources of this divergence and discuss how we can make progress towards mitigating it.

Shibani's Bio: Shibani Santurkar is a PhD student in the MIT EECS Department, advised by Aleksander Mądry and Nir Shavit. Her research revolves around two broad themes: developing a precise understanding of widely-used deep learning techniques, and identifying avenues to make machine learning robust and reliable. Prior to joining MIT, she received a bachelor's degree in electrical engineering from IIT Bombay, India. She is a recipient of the Google Fellowship.

Victor's Abstract: Differential privacy is the state-of-the-art formal definition for data release under strong privacy guarantees. A variety of mechanisms have been proposed in the literature for privately releasing the output of non-numeric queries (i.e., queries that produce discrete outputs) by perturbing the output of the query. These mechanisms use the notion of global sensitivity to calibrate the amount of noise one must inject to protect individuals’ identities. A related notion, local sensitivity, has been used for many numeric queries (i.e., queries that produce numeric outputs) to reduce the injected noise; however, it has not been used for non-numeric queries. In this talk, we discuss how to adapt the notion of local sensitivity to non-numeric queries and present a generic approach to apply it. We illustrate the effectiveness of this approach by applying it to two diverse problems: influential node analysis and decision tree induction.
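
For context, the standard global-sensitivity approach for non-numeric queries that the talk contrasts with is the exponential mechanism; a minimal sketch is below. The candidate set, score function, and toy data are illustrative assumptions, and this is the baseline, not the local-sensitivity method from the talk.

```python
# A minimal sketch of the exponential mechanism for non-numeric (discrete-output)
# queries, which calibrates its randomness to the *global* sensitivity of the
# score function. Illustrative only.
import numpy as np

def exponential_mechanism(candidates, score_fn, data, epsilon, global_sensitivity, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.array([score_fn(data, c) for c in candidates], dtype=float)
    # Standard epsilon-DP sampling weights: exp(eps * score / (2 * sensitivity)).
    logits = epsilon * scores / (2.0 * global_sensitivity)
    probs = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy usage: privately select the most common item; the count query has
# global sensitivity 1 (one person changes any count by at most 1).
data = ["a", "b", "b", "c", "b", "a"]
items = ["a", "b", "c"]
count = lambda d, item: sum(x == item for x in d)
print(exponential_mechanism(items, count, data, epsilon=1.0, global_sensitivity=1.0))
```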

Victor's Bio: Victor is a fifth-year PhD student at Universidade Federal do Ceará, Brazil, advised by Prof. Javam Machado. His research interests include differential privacy, machine learning, and databases. His thesis is on applying local sensitivity to differentially private selection, with applications to graph analysis and tree-induction algorithms. This work has been carried out in collaboration with Divesh Srivastava at AT&T Labs Research. Victor completed a Master's in Computer Science at Universidade Federal do Ceará, Brazil, where he worked on elasticity for distributed databases using machine learning, with a research visit at Télécom SudParis, France.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_JjYVljUhRqOB9Oj59uA4ZA

YouTube live-stream and recording: https://youtu.be/_s-Li0I18vU

Twitter thread to continue the conversation: Check back again after the seminar.

Apr 1, 2021: Gautam Kamath, University of Waterloo

CoinPress: Practical Private Estimation

Gautam's Abstract: We introduce a simple framework for differentially private estimation. As a case study, we will focus on mean estimation for sub-Gaussian data. In this setting, our algorithm is highly effective both theoretically and practically, matching state-of-the-art theoretical bounds, and concretely outperforming all previous methods. Specifically, previous estimators either have weak empirical accuracy at small sample sizes, perform poorly for multivariate data, or require the user to provide strong a priori estimates for the parameters. No knowledge of differential privacy will be assumed. Based on joint work with Sourav Biswas, Yihe Dong, and Jonathan Ullman.
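
To see what "a priori estimates for the parameters" means in practice, here is a minimal sketch of the naive clip-and-add-Gaussian-noise baseline for private mean estimation under zero-concentrated DP. It is not CoinPress itself (which iteratively shrinks the a priori bound); the radius R, privacy budget, and toy data are illustrative assumptions.

```python
# A minimal sketch of the naive "clip then add Gaussian noise" baseline for
# differentially private mean estimation. Illustrative only.
import numpy as np

def naive_private_mean(x, R, rho):
    """x: (n, d) data; R: a priori bound on the norm of the data/mean;
    rho: zCDP privacy budget. Returns a rho-zCDP estimate of the mean."""
    n, d = x.shape
    # Clip each point into the ball of radius R (crude; a tight R helps a lot).
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    clipped = x * np.minimum(1.0, R / np.maximum(norms, 1e-12))
    # L2 sensitivity of the clipped empirical mean is 2R/n; the Gaussian
    # mechanism with sigma = sensitivity / sqrt(2*rho) satisfies rho-zCDP.
    sigma = (2.0 * R / n) / np.sqrt(2.0 * rho)
    return clipped.mean(axis=0) + np.random.default_rng(0).normal(scale=sigma, size=d)

# Toy usage: 10-dimensional Gaussian data with unknown mean; a loose R = 100
# inflates the noise, which is exactly the weakness CoinPress targets.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=(5000, 10))
print(np.round(naive_private_mean(x, R=100.0, rho=0.5), 2))
```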

Gautam's Bio: Dr. Gautam Kamath is an Assistant Professor at the University of Waterloo’s Cheriton School of Computer Science, and a faculty affiliate at the Vector Institute. He is mostly interested in principled methods for statistics and machine learning, with a focus on settings which are common in modern data analysis (high-dimensions, robustness, and privacy). He was a Microsoft Research Fellow at the Simons Institute for the Theory of Computing for the Fall 2018 semester program on Foundations of Data Science and the Spring 2019 semester program on Data Privacy: Foundations and Applications. Before that, he completed his Ph.D. at MIT, affiliated with the Theory of Computing group in CSAIL.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_X9QB0ynaSTWwDoa77i7zTQ

YouTube live-stream and recording: https://youtu.be/OqYjRT4Z8M8

Twitter thread to continue the conversation: Check back again after the seminar.

Apr 15, 2021: Suresh Venkatasubramanian, University of Utah

The limits of Shapley values as a method for explaining the predictions of an ML system

Suresh's Abstract: One of the more pressing concerns around the deployment of ML systems is explainability: can we understand why an ML system made the decision that it did? This question can be unpacked in a variety of ways, and one approach that has become popular is the idea of feature influence: that we can assign a score to features that represents their (relative) influence on an outcome (either locally, for a particular input, or globally). One of the most influential such approaches is based on cooperative game theory, where features are modeled as “players” and feature influence is captured as “player contribution” via the Shapley value of a game. The argument is that the axiomatic framework provided by Shapley values is well aligned with the needs of an explanation system. But is it? I’ll talk about two pieces of work that nail down mathematical deficiencies of Shapley values as a way of estimating feature influence and quantify the limits of Shapley values via a fascinating geometric interpretation that comes with interesting algorithmic challenges.
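
As background for the talk, the following minimal sketch computes exact Shapley-value feature attributions for a single prediction by enumerating feature subsets and imputing "absent" features with a baseline; the toy linear model, the baseline, and the input are illustrative assumptions, not the specific formulations analyzed in the talk.

```python
# A minimal sketch of exact Shapley-value feature attribution for one prediction.
# "Absent" features are replaced by a baseline value; all choices are illustrative.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_feature_influence(predict, x, baseline):
    d = len(x)
    def value(subset):
        z = baseline.copy()
        z[list(subset)] = x[list(subset)]   # features in the subset are "present"
        return predict(z)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):                  # |S| ranges over 0..d-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy usage with a linear "model": attributions recover w_i * (x_i - baseline_i).
w = np.array([1.0, -2.0, 0.5])
predict = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
print(np.round(shapley_feature_influence(predict, x, baseline), 3))  # [1.0, -2.0, 0.5]
```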

Suresh's Bio: Suresh Venkatasubramanian is a professor at the University of Utah. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness and, more generally, the impact of automated decision-making systems on society. Suresh was the John and Marva Warnock Assistant Professor at the University of Utah, and has received a CAREER award from the NSF for his work in the geometry of probability, as well as a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as other media outlets. He is a member of the Computing Community Consortium Council of the CRA, a member of the board of the ACLU of Utah, and a member of New York City’s Failure to Appear Tool (FTA) Research Advisory Council, as well as the Research Advisory Council for the First Judicial District of Pennsylvania.

Zoom registration: https://us02web.zoom.us/webinar/register/WN_FJBqBy7qSS2B_PJbDJEr_A

YouTube live-stream and recording: https://youtu.be/5izWQN3SKQs

Twitter thread to continue the conversation: Check back again after the seminar.