Bi-Weekly Seminar Series
The seminar series will start on October 29. Seminars are bi-weekly, on Thursdays at 12pm ET (9am PT / 4pm GMT / 7pm Addis Ababa / more time zones). The seminars will take place via Zoom Webinar; some may be recorded and live-streamed to YouTube.
To receive announcements of upcoming seminars, join our mailing list or subscribe to our YouTube channel. To participate in Zoom, locate the registration link below. If you are interested in giving a talk, contact firstname.lastname@example.org.
Each seminar consists of a 40-min talk by the speaker, followed by a 20-min moderated chat about the speaker's journey in ML and their research process. After a 5-min break, we will reconvene for discussions with fellow participants. Our hope is that this participant-driven discussion in the second hour allows participants to meet new people working in similar research areas.
Click here for instructions to join Zoom, watch the live-stream/recording, or ask questions.
Joining Zoom: There is a limit of 100 participants in Zoom, first-come, first-served; registering does not guarantee you a spot. You can join Zoom by clicking the link in your registration confirmation email shortly before the seminar starts. We recommend downloading and installing Zoom in advance.
Live-stream and recording: If Zoom reaches capacity, please watch the YouTube live-stream and check Zoom again in case spots open up. In particular, we anticipate spots to open up during the 5-min break at 1pm ET. If you do not wish to appear in the live-stream and recording, please only join Zoom in the second hour.
Asking questions: In Zoom, you will be muted in the first hour, but you can ask questions using Zoom's Q&A tool. You can also upvote and leave comments on questions. The moderator will select questions to convey to the speaker. In the second hour, you will be able to participate in more free-form discussions with fellow participants. We will also have a Twitter thread to continue the conversation after the seminar.
Click talk title to see abstract, bio, registration link (posted a few days before seminar)
Oct 29, 2020: Percy Liang, Stanford University
Surprises in the Quest for Robust Machine Learning
(Registration now open; click for link)
Abstract: Standard machine learning produces models that are accurate on average but degrade dramatically when the test distribution of interest deviates from the training distribution. We consider three settings where this happens: when test inputs are subject to adversarial attacks, when we are concerned with performance on minority subpopulations, and when the world simply changes (classic domain shift). Our aim is to produce methods that are provably robust to such deviations. In this talk, I will (attempt to) summarize all the work my group has done on this topic over the last three years. We have found many surprises in our quest for robustness: for example, that the "more data" and "bigger models" strategy that works so well for average accuracy sometimes fails out-of-domain. On the other hand, we have found that certain tools such as analysis of linear regression and use of unlabeled data (e.g., robust self-training) have reliably delivered promising results across a number of different settings.
Bio: Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
Zoom registration: https://us02web.zoom.us/webinar/register/WN_-M5Y9REHTMS2tf7e1X1-4w
YouTube live-stream: https://www.youtube.com/watch?v=jCEo8PRJ9NA
Dec 3, 2020: Jenn Wortman Vaughan, Microsoft Research
Intelligibility Throughout the Machine Learning Life Cycle
Abstract: People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and the people affected by these systems—have at least a basic understanding of how they work. Yet what makes a system “intelligible” is difficult to pin down. Intelligibility is a fundamentally human-centered concept that lacks a one-size-fits-all solution. I will explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, ways of empirically testing whether intelligibility techniques achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.
Bio: Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as a member of MSR's FATE group and as co-chair of Microsoft's Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.