How can we ensure Generative AI supports disabled people in the workplace, rather than causing harm?

In this blog, award-winning workplace accessibility specialist Rachael Mole shares her insights on how we can ensure Generative AI becomes a tool for supporting disabled people in the workplace, rather than causing harm. Disabled since the age of 12, Rachael is a passionate advocate for using inclusion to drive meaningful cultural change.

By Rachael Mole

Award-winning workplace accessibility specialist

4th Dec 2024

Generative AI (those tools that can write text, generate images, and even make decisions) is becoming a bigger part of our work lives. It's not surprising that 75% of people already use AI at work in some capacity. For disabled people, this technology could be a game-changer, making workplaces more accessible and levelling the playing field. But without the right care, AI also has the potential to do more harm than good.

With new laws on the horizon, like the UK Employment Rights Bill and AI-related regulations, we're facing a moment of change that will define how the UK tackles employment, disability, and AI, and how all of these topics are inextricably linked. 24% of the working population here in the UK are disabled. And disability is often temporary: year on year, one in three people who self-identify as disabled no longer identify that way the following year, meaning acquired disabilities, such as broken bones and short-term health conditions, make up a huge proportion of the UK's disabled population every year.

We can no longer talk about just one of these topics without the others. So, how can we make sure AI supports disabled employees rather than creating more challenges? Let's explore a few key ways.

1. Using AI to Improve Accessibility

AI has the potential to open doors for disabled employees. Think about tools like voice-to-text, predictive text, or AI that can help with scheduling and task management—these can allow people with different disabilities to work more independently and effectively. AI can help reduce barriers, making workplaces more inclusive for everyone.

But here’s the catch: AI tools need to be designed with all users in mind. Developers need to involve disabled people in the development process to ensure the technology actually helps. There’s also a lot of choice out there, so checking in with disabled employees to make sure the software you use works with the tools they already have is also key. AI can be powerful, but if it’s not built for the people who need it most, it’s just another roadblock.

2. Tackling Bias in AI Systems

One of the biggest concerns with AI is its potential to amplify biases. AI systems are only as good as the data they’re trained on, and if that data carries biases, such as assumptions about disabled people, AI can end up making discriminatory decisions. For example, an AI hiring tool might automatically screen out candidates with employment gaps, which could penalise disabled people who’ve taken time off for health reasons. Tools could also screen candidates out for spelling mistakes, or for not maintaining eye contact in a video interview. We already know that higher education is a common screening criterion, with AI-driven software automatically removing applicants who don’t have the ‘required’ level of education. Disabled people are less likely to have a degree (unsurprisingly, since higher education is itself notoriously inaccessible!), so this filter could be removing highly qualified and capable candidates without further examination.

That’s why it’s crucial for companies to audit their AI systems regularly, ensuring they don’t unintentionally harm disabled employees. In the upcoming AI regulations, there’s a strong emphasis on transparency and bias mitigation, meaning companies will need to prove that their AI tools are fair and explainable.
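As a minimal sketch of what one step of such an audit could look like, the snippet below applies the well-known "four-fifths" adverse-impact check: if one group's selection rate falls below 80% of the best-performing group's rate, the tool is flagged for closer review. The data and group labels here are hypothetical, and a real audit would go much further than this single ratio.

```python
# Minimal sketch of one audit step: the "four-fifths" adverse-impact check.
# All outcome data and group names below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of booleans (True = passed the AI screen)."""
    return {group: sum(passed) / len(passed) for group, passed in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` (80%) of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < threshold for group, rate in rates.items()}

# Hypothetical screening outcomes from an AI hiring tool
outcomes = {
    "disclosed_disability": [True, False, False, False, True, False, False, False],   # 2/8 = 25%
    "no_disclosed_disability": [True, True, False, True, True, False, True, True],    # 6/8 = 75%
}

flags = adverse_impact_flags(outcomes)
print(flags)  # disclosed-disability group is flagged: 0.25 / 0.75 ≈ 0.33, well below 0.8
```

A check like this is deliberately simple: the point is not statistical sophistication, but that a company can only run it if its AI tool exposes its decisions, which is exactly the transparency the upcoming regulations demand.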

3. Keeping Up with Legal Changes

With new laws like the UK Employment Rights Bill, companies will have to pay close attention to how they use AI, especially in recruitment. For example, under the Bill, new hires will be protected from day one of employment by rights that previously took years to earn. This means AI tools used in hiring must be carefully designed to avoid discrimination, particularly against disabled candidates.

Similarly, the EU AI Act is introducing strict requirements for fairness and transparency in AI systems. Employers need to ensure that their AI tools are compliant, so disabled employees are treated equitably from the start.

4. Building a Culture of Trust Around AI

AI can’t just be thrown into the workplace and expected to work wonders. To truly support disabled employees, AI needs to be part of a bigger strategy focused on building trust and inclusivity. A workplace culture that values accessibility will create a safer environment for employees to ask for adjustments and use AI tools to enhance their work. Used well, these tools will only drive up productivity and hold space for more innovation.

For example, it’s important to think about how AI-generated content, like reports or schedules, can be made accessible for people with visual or hearing impairments. Can the reports you create be read by a screen reader? Are the links you’ve added to your text descriptive enough that someone can navigate them, rather than hearing a list of ‘click here’… ‘click here’ with no context as to what they’re clicking? AI tools should also be flexible, allowing employees to customise them based on their needs, so the tools are genuinely helpful.
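The ‘click here’ problem above is easy to check for automatically. As a small illustration (using only Python’s standard library, with a hypothetical list of uninformative phrases), this sketch scans HTML output, such as an AI-generated report, and flags links whose visible text gives a screen-reader user no context:

```python
# Sketch: flag links whose text is meaningless out of context for screen-reader users.
from html.parser import HTMLParser

# Hypothetical list of link texts that carry no context on their own
UNINFORMATIVE = {"click here", "here", "read more", "link", "more"}

class LinkTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current_text = []
        self.flagged = []  # link texts that need a descriptive rewrite

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.current_text = []

    def handle_data(self, data):
        if self.in_link:
            self.current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            text = "".join(self.current_text).strip().lower()
            if text in UNINFORMATIVE:
                self.flagged.append(text)
            self.in_link = False

checker = LinkTextChecker()
checker.feed('<p><a href="/report">Click here</a> and '
             '<a href="/guide">Read the full accessibility guide</a>.</p>')
print(checker.flagged)  # ['click here'] — only the first link lacks context
```

The second link passes because its text describes the destination, which is exactly what a screen reader announcing a list of links needs.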

5. Training and Education on AI

It’s not enough to install AI tools and call it a day. HR professionals and managers need to be properly trained to understand the ethical use of AI. This includes learning how to spot biases in AI systems and how to use the technology to support, rather than hinder, disabled employees. The human touch is so valuable, and shouldn’t be taken out of the equation altogether.

Training also means understanding the legal obligations that come with using AI, especially if (and hopefully when!) the changes in the UK Employment Rights Bill and AI regulations are brought in. Managers need to know the law and be prepared to adapt their recruitment and management practices accordingly. It will be a year before the Bill passes through the reviews needed to become legislation, and by then the laws may look very different. But it is my hope that, at the very least, there will be something that makes companies accountable for determining whether the AI they are buying and using in their organisations, on the front line of their people solutions, is built ethically. We need to be able to spot this from the outset as a priority.

6. Co-Creating AI Solutions with Disabled Employees

The best way to make sure AI works for disabled employees? Involve them in the process. Co-creating AI tools with disabled employees means their voices are heard, and their needs are met from the very start.

This approach, highlighted in the Beyond Barriers report, emphasises the importance of listening to disabled people when designing and implementing AI tools. By doing this, businesses can avoid making assumptions about what’s helpful to their teams, and create and use tools that genuinely support inclusion, making this the benchmark as we see the next generation of generative AI take shape.