Trustworthy ML Initiative

Our Mission

As machine learning (ML) systems are increasingly deployed in real-world applications, it is critical to ensure that these systems behave responsibly and are trustworthy. To this end, there has been growing interest among researchers and practitioners in developing and deploying ML models and algorithms that are not only accurate but also explainable, fair, privacy-preserving, causal, and robust. This broad area of research is commonly referred to as trustworthy ML.

While it is incredibly exciting that researchers from diverse domains, ranging from machine learning to health policy and law, are working on trustworthy ML, this growth has also given rise to critical challenges such as information overload and a lack of visibility for the work of early-career researchers. Furthermore, the barriers to entry into this field are rising day by day: researchers entering the field face an overwhelming amount of prior work without a clear roadmap of where to start and how to navigate the field.

To address these challenges, we are launching the Trustworthy ML Initiative (TrustML), with the goals outlined under Our Efforts below.

We envision our initiative as complementary to other existing conferences and forums on topics related to trustworthy ML, such as FAccT, AIES, and FORC.

Our Efforts

Forum to discuss cutting-edge research and applications

Educational resources to lower barriers to entry

Platform to disseminate the latest news and research

Gathering to foster collaboration and networking

Organizers

Hima Lakkaraju

Harvard University

Jaydeep Borkar 

Northeastern University

Sara Hooker

Cohere For AI

Chhavi Yadav

UC San Diego

Chirag Agarwal

Adobe Research

Haohan Wang

University of Illinois Urbana-Champaign

Marta Lemanczyk

Hasso Plattner Institute

Advisory Committee

Kamalika Chaudhuri

UC San Diego

Kush Varshney

IBM Research

Tom Dietterich

Oregon State University