Project: Towards End-User-Centered Explainable Artificial Intelligence
Team: Weina Jin
Advisors: Ghassan Hamarneh (SFU), Xiaoxiao Li (UBC)
Description: The ability to explain its decisions to end-users in understandable ways is a prerequisite for deploying artificial intelligence (AI)-backed decision support systems in risk-sensitive domains such as healthcare. Yet existing explainable AI (XAI) techniques are designed for technical users and largely overlook end-users' strong demand for AI explainability. To democratize AI and make AI explanations unbiased and accessible to end-users, I collaborated with end-users (including laypersons and physicians) through a participatory design process to discover their requirements for XAI. Grounded in users' insights, I then developed the Clinical XAI Guidelines and the End-User-Centered Explainable AI Framework (EUCA), revealed problematic XAI evaluation practices in the community, and proposed new end-user-centered XAI techniques. These efforts inform AI researchers of end-users' perspectives, and facilitate the technical specification and development of end-user-centered XAI techniques that respect end-users' reasoning and decision processes, follow human norms of communicating explanations, and align with end-users' values and utility.