Within our contexts of use – including education, social care and rehabilitation – AI can be understood as (predominantly) a set of pattern-spotting and generative tools that we use within and alongside virtual and mixed-reality experiences, in real time, to transform assessment, treatment, education, care and opportunity for those who need it most. Our basic premise is that an algorithm does not have to make the same choice a human would, but a human should at least be able to understand the process by which the decision is made. Understanding that process provides transparency and clarity about how decisions are reached, which builds trust, protects rights, and allows outcomes that significantly affect lives to be challenged and better understood. We place co-designers (people directly affected by our research) and co-researchers with disabilities more formally and centrally within human-centric approaches to designing these technologies. When we reframe our community partners as co-researchers, we are far more likely to develop AI-enhanced enabling systems that are trustworthy, unbiased and privacy-preserving, and that go on to be used and to create impact.
AI is not just a tool; it is a responsibility.