Ellen Tsaprailis, June 15, 2022
New Frontiers Funding for Research Using AI to Provide Better Supports for Students with Disabilities
Carleton’s Boris Vukovic and Majid Komeili have been given $250,000 by the federal government’s New Frontiers in Research Fund to explore whether artificial intelligence (AI) can support the work of disability service professionals in assessing student support needs to benefit both the service providers and the students with disabilities.
Vukovic is the project PI and Director of the READ Initiative (Research, Education, Accessibility and Design) as well as the National Director of the Canadian Accessibility Network. Komeili is the project’s Co-PI and an Assistant Professor at the School of Computer Science.
As participation and success in post-secondary education are key factors in eliminating gaps in rates of employment for persons with disabilities in Canada, Vukovic is hopeful this project will lead to better assessment of students' functional limitations.
“We hope to develop a machine learning prototype that will be able to analyze assessment data, ask follow-up questions, and interpret student responses, to ultimately suggest recommendations for supports,” says Vukovic.
The need for expertise and more efficient procedures is driving this project. Vukovic says that while there are critical ethical and technical challenges, the potential to use AI to increase access and participation for persons with disabilities in higher education is worth investigating.
“The human relationship in disability services is and will remain the core factor to supporting student needs. AI could augment this relationship by processing some of the assessment steps to support the work of disability services. It may also provide structure in the assessment process for novice practitioners and generate suggestions for consideration and discussion between students and service providers,” says Vukovic.
According to Komeili, the majority of powerful AI models are black-box models, meaning their inner workings are too complex for humans to understand. The use of such black-box models has contributed to problems of bias and fairness, which have often hindered the adoption of AI in the real world due to a lack of trust. Komeili says one of the important aspects of this project is its emphasis on explainability and interpretability—using AI interpretability tools to provide a rationale for each individual recommendation the AI makes.
“This can be achieved, for example, by highlighting the parts of a student’s responses that have had the biggest impact on the AI’s decision,” says Komeili.
“If a service provider feels the offered explanation is inconsistent with the AI recommendation, it is a sign that the AI recommendation should be ignored. However, if AI has a valid rationale in favour of a recommendation that does not appear to be beneficial at first, the service provider may reconsider what AI has to say. Interpretability is an essential part of our design process. We believe it will facilitate adoption of the resulting tool by service providers.”
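The highlighting approach Komeili describes can be illustrated with a toy sketch. In a simple linear model, each word's contribution to a score is just its learned weight times how often it appears, so the most influential parts of a response can be surfaced directly. Everything below — the weights, the example response, and the `attribute` helper — is invented for illustration; the project's actual models and data are not described in this article.

```python
# Toy sketch of word-level attribution for a linear scoring model.
# All weights are hypothetical, invented purely for illustration.
from collections import Counter

# Hypothetical learned weights for a "recommend extra exam time" score:
# positive weights push toward the recommendation, negative ones against it.
weights = {
    "concentration": 1.4,
    "fatigue": 1.1,
    "deadlines": 0.6,
    "enjoy": -0.3,
    "group": -0.2,
}

def attribute(response: str, top_k: int = 3):
    """Return the top_k words ranked by |weight * count| with their contributions."""
    counts = Counter(response.lower().split())
    contributions = {w: weights[w] * c for w, c in counts.items() if w in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_k]

response = "I struggle with concentration and fatigue when deadlines pile up"
print(attribute(response))
# → [('concentration', 1.4), ('fatigue', 1.1), ('deadlines', 0.6)]
```

A service provider shown this ranking can judge whether the highlighted words actually justify the recommendation — the consistency check Komeili describes. Real systems use richer attribution methods over more complex models, but the principle is the same.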
The goal of this project is to develop a prototype system that would be widely shared.
“If successful, we will welcome further collaborations with universities and disability services in Canada to continue rigorous and ethical testing and development—always prioritizing the participation of persons with disabilities,” says Vukovic.
“Disability services in higher education are facing an unprecedented increase in the number of students self-identifying with disabilities. We hope our future-facing research can help contribute to collective efforts to address current trends and continue effective supports for post-secondary students with disabilities.”