Symposium

The Trustworthy ML 2nd Anniversary Symposium

Thursday, Oct 27, 2022 9.45am to 2pm ET

Registration link: https://us02web.zoom.us/webinar/register/WN_a5anHmDaRIWY6xYupiD0mg
The event takes place in a Zoom webinar (register above to receive the Zoom link). The 9.45am to 1.35pm portion will be streamed on YouTube (https://youtu.be/vDMTKlYl2Lw). The social starting at 1.35pm will not be streamed. If you do not wish to appear in the live stream and recording, please join the Zoom only from 1.35pm ET onward.

Participation Instructions: You are encouraged to leave comments/questions in the Zoom Q&A tool at any time during the event. Session moderators may unmute specific participants and invite them to voice their comments/questions. If unmuted, you may still keep your video off if you wish.


Agenda (Eastern Time):

9.45am -- Opening remarks. Speaker: Hima Lakkaraju (Harvard University)

10am -- Panel on "Trustworthy ML in a Time of Large Pre-trained Models".

Panelists: Sasha Luccioni (Hugging Face), Pang Wei Koh (Google Brain), Florian Tramer (ETH Zurich), Dhruv Mahajan (Meta)

Moderator: Sara Hooker (Cohere For AI)

11am -- Break

11.15am -- Research Brainstorm: "The Future of Trustworthy ML". Discussants:

Chandan Singh (Microsoft Research), "Direct data explanations: a new frontier for interpretability"

Degan Hao (University of Pittsburgh), "Trustworthy ML resisting adversarial attacks"

Subho Majumdar (Splunk), "Are bug bounties the future of Trustworthy ML?"

Moderator: Haohan Wang (University of Illinois Urbana-Champaign)

12.15pm -- Break

12.30pm -- Celebrating Young Researchers: 10-min Lightning Talks. Speakers:

Tessa Han (Harvard University), "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations"

Chhavi Yadav (UCSD), "A Learning-Theoretic Framework for Certified Auditing with Explanations"

Avijit Ghosh (Northeastern University), “Subverting Fair Image Search with Generative Adversarial Perturbations”

Marta Lemanczyk (Hasso Plattner Institute), "Influence of Genomic Motif Interactions on Post-hoc Attribution Methods"

Harsh Raj (Delhi Technological University), "Measuring Reliability of Large Language Models through Semantic Consistency"

Moderator: Subho Majumdar (Splunk)

1.30pm -- Sneak Peek: Community Resources Board. Speaker: Marta Lemanczyk (Hasso Plattner Institute)

1.35pm -- Social. Moderator: Chirag Agarwal (Adobe Research)

Call for Proposals: Research Brainstorm on “The Future of Trustworthy ML” at Trustworthy ML Initiative Symposium on 10/27

In this research brainstorming session on the future of trustworthy ML, we’ll discuss future directions for designing new trustworthy ML methods and for operationalizing them, so that such research has real impact in making ML systems more trustworthy. The session will be facilitated by Haohan Wang (Assistant Professor, University of Illinois Urbana-Champaign).

We invite the community to participate by submitting proposals for discussion at this session. Please email haohanw@illinois.edu a single slide that you would present at the session if accepted, and be prepared to lead a 15-min discussion using that slide.

Up to 4 proposals will be selected for discussion at the session, and all submitted proposals will be posted for comments beforehand on our Discord channel: https://t.co/bcgDfvB8Qt.

Proposal submission deadline: Rolling deadline, no later than Monday, Oct 24.

Session date and time: Thursday, Oct 27, 11.15am to 12.15pm ET