CS Diversity Project Presentations 2024

Please join us on March 27, 2024, from 3:00 to 5:00 pm at the Big Data Hub Presentation Studio to learn about the amazing projects our students are doing to advance Diversity, Inclusion, Equity, and Justice in Computing Science research and professional practice.

The event will start with a brief introduction to the CSDC Diversity Awards and continue with the project presentations.

The four projects shortlisted for the 2024 CS Diversity Award are:

Project: Constructing a Different Imagination Beyond “AI Outperforming Humans”

Team: Weina Jin

Advisors: Ghassan Hamarneh, Xiaoxiao Li

Abstract: As artificial intelligence (AI) becomes increasingly impactful to society, particularly in high-stakes domains, it is important to develop ethical AI techniques that ensure AI is equitable and just. To this end, our project first proposed two principles: the use of AI should not devalue workers’ work, and the development of AI should prioritize essential human and social values. Following these principles, we conducted a clinical user study and found that, when AI collaborates with humans, the utility of AI is to help users achieve complementary task performance that outperforms either the user or the AI alone (Jin and Fatehi et al., 2024). Explainable AI (XAI) techniques are a necessary condition for supporting humans in achieving complementary performance. We developed clinical XAI guidelines that guide the responsible evaluation of XAI algorithms in clinical settings (Jin et al., 2023). The guidelines emphasize how well XAI can collaborate with humans. Our project shows that when we implement ethical AI principles in technical development and evaluation, the objective of AI is to collaborate with, rather than to outcompete, humans.

[1] Weina Jin, Mostafa Fatehi, Ru Guo, and Ghassan Hamarneh. Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task. Artificial Intelligence in Medicine, 2024.

[2] Weina Jin, Xiaoxiao Li, Mostafa Fatehi, and Ghassan Hamarneh. Guidelines and evaluation for clinical explainable AI in medical image analysis. Medical Image Analysis (MedIA), 102684:1-30, 2023.

View Project Presentation Slides

Audience’s Choice - 2024 CS Diversity Award

Project: Global Mental Health Initiatives and Social Media Analysis for Anti-Stigma Awareness Campaigns

Team: Maureen Herbert, Naveen Nandakumar, Fayad Chowdhury

Advisor: Steven Bergner (PhD CS, SFU)

Abstract: Our project provides a data-driven digital solution for mental health anti-stigma and awareness campaigns. We begin by analyzing global mental health research initiatives outlined in the WHO's Mental Health ATLAS Reports 2020 across 150 countries. Through descriptive statistics, we assess each country's preparedness regarding resources allocated to mental health research, availability of medical personnel and facilities, and existing education programs, focusing on high-, middle-, and low-income regions as defined by the WHO. We define a preparedness metric and identify gaps among the regions. Utilizing data science, natural language processing, and machine learning techniques such as topic modelling on Reddit data, we discover underlying themes and topics of discussion within the community. Drawing on these themes, we suggest various campaign strategies. Our findings are presented on a website to promote public discussion and community awareness, encouraging help-seeking behaviour and reducing mental health stigma globally.

View Project Presentation Slides

Project: MosaicMate

Team: Kian Hosseinkhani, Peiman Zhiani, Enoch Muwanguzi, Alison Lu, Kasey Le and SFU Blueprint 

Advisor: Tyler D'silva (SFU Blueprint Exec)

Abstract: MosaicMate is a chatbot that serves a diverse audience, including newcomers, immigrants, refugees, and individuals from varied backgrounds, aiming to facilitate their settlement and employment in Canada. By tailoring our design to accommodate users with different levels of language proficiency and educational backgrounds, we centred our commitment to cultural sensitivity and to debiasing AI. By providing easier access to essential settlement and employment services, we help users find the services and programs that assist their settlement and employment in Canada, so that the chatbot contributes directly to digital inclusion and justice.

View Project Presentation Slides

Winner of the 2024 CS Diversity Award

Project: A Duo is All You Need for Fairness in Distilled Models

Team: Hoi Fai Lam, Hong Dung Nguyen, JangHyeon Lee

Abstract: With the advent of large language models (LLMs), model compression techniques like knowledge distillation (KD) have become crucial. These methods are key to enhancing AI's efficiency, making advanced technologies accessible to a wider audience, including those with limited computing resources. However, challenges arise as smaller models often show greater biases, particularly in critical applications like hate speech detection, potentially leading to unfair treatment of marginalized groups. Addressing this issue, our project aims to develop compact models that not only match the performance of their larger counterparts but also emphasize fairness and equity, in alignment with Equity, Diversity, Inclusion, and Justice (EDIJ) principles. By refining the KD process and conducting thorough evaluations, we seek to minimize these biases, contributing to the creation of more inclusive and fair digital spaces for all.
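For readers unfamiliar with KD, here is a minimal sketch of the standard distillation loss (temperature-softened KL divergence between teacher and student outputs); this is the generic textbook formulation, not the team's refined method, and the logit values are made up for illustration.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)  # soft targets from the large teacher
    q = softmax(student_logits, T)  # predictions from the compact student
    return T * T * float(np.sum(p * (np.log(p) - np.log(q))))

# Matching the teacher drives the loss to zero; diverging raises it.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]))  # > 0
```

Fairness-aware refinements of KD typically add terms to, or reweight, this objective so the student does not amplify the teacher's biases on sensitive subgroups.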

View Project Presentation Slides