Can AI make us less sexist at work?

Smarter technology could make everyday communication smoother, more inclusive – and effective, too. Alys Key looks at a new program designed to help.
Good communication is essential in the workplace (Alamy / PA)

Changing the everyday language we use at work will improve the workplace for women. That’s the promise of firms that say new AI tools can reduce casual sexism – and other prejudices – by automatically helping employees to rethink their written messages.

“Words reflect our deep unconscious impulses,” says May Habib, the chief executive of AI writing platform Writer. “And, when you are more consciously checking those impulses, I do think it is more likely to lead to strategies that create a more inclusive workplace.”

The San Francisco-based business has a suite of automated products that companies may use to ensure communications are both on-message and in a tone that represents everyone fairly. This could involve anything from suggesting gender-neutral language in a Slack message to helping employees to write clearer, more confident emails or blog posts.

The technology works like a living style guide. Rather than attempting to edit text after it’s been written, Writer makes suggestions during the writing process. Companies that use it can personalise the guide so that employees are consistent not just in their grammar, but also in how they talk about certain topics. The approach works on text written by real people as well as text generated by AI bots.

One of Writer’s clients, Ellevest, uses the tech to prompt people to say “women in the workforce” instead of “working women”, the idea being that the latter does not acknowledge the unpaid work that many women do in the home. Using “women” and “men” as adjectives instead of “female” and “male” is also part of the style guide.
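To illustrate the general idea, here is a minimal sketch of how a rule-based style-guide check of this kind might work. The rules, names and function below are hypothetical examples for illustration only, not Writer’s actual implementation.

```python
# Hypothetical sketch of a rule-based style-guide checker (not Writer's actual code).
# Each entry maps a discouraged phrase to a preferred alternative and a short rationale.
STYLE_RULES = {
    "working women": (
        "women in the workforce",
        "'working women' does not acknowledge the unpaid work many women do at home",
    ),
    "female employees": (
        "women employees",
        "use 'women' rather than 'female' as an adjective",
    ),
}


def suggest_edits(draft: str) -> list[dict]:
    """Scan a draft and return style-guide suggestions as the author writes."""
    suggestions = []
    lowered = draft.lower()
    for phrase, (replacement, rationale) in STYLE_RULES.items():
        position = lowered.find(phrase)
        if position != -1:
            suggestions.append({
                "found": phrase,
                "suggest": replacement,
                "why": rationale,
                "position": position,
            })
    return suggestions


if __name__ == "__main__":
    draft = "Our new report looks at the challenges facing working women in 2023."
    for s in suggest_edits(draft):
        print(f"Consider '{s['suggest']}' instead of '{s['found']}': {s['why']}")
```

A production tool would need more than exact phrase matching – context, inflections and customer-specific rules – but the shape of the feature is the same: a shared, editable rule set applied as people type.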

“As a company built by women, for women, Ellevest has always talked about investing and personal finance with a distinct brand voice that’s unlike any other in the financial industry,” says Kate Gustafson, director of copy and content at Ellevest. She says these AI tools help ensure that all the company’s output is “inclusive and intersectional”.

The AI boom and workplace equality

Other companies are trying similar ideas. In 2020, the tool BigUpAI was launched to help women write more bullish job applications.

Google has been offering a feature to help users write in gender-neutral language since 2021, while writing assistant Grammarly has a feature that makes suggestions on how to be more sensitive to marginalised groups.

There has also been a lot of excitement over how AI could help address bias when employers hire new people, perhaps by removing names and age information from CVs. Other companies are using AI text-editing tools to check if their job postings use language that will appeal to the broadest range of people. Some companies are removing words like “ninja” and “rockstar” from their listings, since these can have gendered connotations.

May Habib, chief executive of Writer (Christopher Che)

Habib says this hyper-focus on recruitment neglects the bigger picture.

“Great, you have an inclusive job description. What actually happens when you start the job, right?”

It also might not work. Last year, researchers at the University of Cambridge slammed AI tools designed to help boost diversity in the workplace as no better than “automated pseudoscience”.

To really challenge latent prejudices in the workplace, Habib says, we have to re-evaluate “everything from the way managers write emails to their team members, to the way corporate communications is done, to the press releases you put out and who gets quoted, to the subtle ways that language can be used in feedback and review cycles to put down women and people of colour”.

How AI might improve the working world

One way this might affect day-to-day life is in performance reviews: as a manager writes one, the AI could prompt them to reconsider any biased language they’ve used. The hope is that this not only fixes the phrasing, but makes them stop in their tracks and ask whether they are being fair.

The AI might also intervene in communications between employees on Slack and email, suggesting phrasing that is clearer and less passive-aggressive, the idea being that this will make workplaces more congenial for everyone.

On top of language changes, Habib sees a broader way in which AI writing tools are going to help women who occupy certain kinds of writing-heavy jobs.

“I think this is truly the first technology revolution that could actually reduce [how many hours] knowledge workers are putting in every week, rather than increase,” she says. While this would benefit everybody, sectors such as marketing and communications, which tend to be dominated by women, could benefit the most.

“The feedback from our power users is we’re giving them time back to be more creative. We’re giving them time back to think.”

Fighting fire with fire: Tackling AI’s own biases

It’s not just the assumptions made by people that are a problem. As AI becomes a more frequent part of our daily lives, the biases baked into the technology could become one too.

“Since AI is learning about the world from human-generated data and humans are biased, the learning one gets from these patterns – no matter how sophisticated – will be biased,” says Andreea Gorbatai, an associate professor at the Vlerick Business School in Belgium. She teaches entrepreneurship and has a particular focus on how technology increases or decreases opportunities for women and minorities.

“And [it] can even further reinforce biases because it detects even small biases in the data and exploits them, or further replicates them or amplifies them, and can easily mislead unless the AI program is explicitly designed to counteract the biases in the data.”

After ChatGPT burst onto the scene late last year, prompting an explosion of chatbot-generated content, the need to address both our own biases and those we have fed into AI has become more pressing, according to Habib.

“The output here, when not monitored really closely in an automated way, can be very dangerous and send us backwards in terms of gender equality.”

If companies start leaning on AI-written text for all kinds of communications, biases could end up replicated en masse. One solution is editing tools like Writer’s, which can clean up text after it has been generated, but the holy grail would be an AI that produces unbiased text in the first place.

Right now, Habib says, that’s impossible. However, there are ways to tip the balance at least a little.

Writer now has its own large language model (LLM), a program that can generate sentences based on billions of other pieces of writing, a concept many will now know thanks to ChatGPT. Writer’s team trained its LLM – known as Palmyra – on a more select pool of data.

“We have really looked through and refined the data that we have put in there to be more business-focused.”

The hope is that this will reflect recent efforts to avoid prejudiced language in corporate settings.

Limitations of AI’s role in gender equality

Of course, some will see this kind of thing as, quite literally, all talk. What good does eliminating the phrase “hey guys” do for women if they aren’t being paid on par with men, or find themselves unable to move up?

Discussing priorities ahead of International Women’s Day, several top women leaders in tech have highlighted the importance of company culture in making workplaces more equitable.

“Actions, such as normalising and celebrating parental leave for all genders, can play a crucial role in this,” says Deann Evans, director of EMEA expansion and partnerships at Shopify.

“Being a digital-first company can also help to create a safe space for people to work around their needs outside of work.”

Habib agrees that changing language needs to be backed up with policies that support employees – “a company that speaks inclusively but has horrible parental leave policies is a hypocritical workplace” – but argues that even a small prompt can remind managers to keep others in mind, and that this can even lead to a healthier workplace.

“I think those serve to make sure people are just awake and alert for the fact that there are diverse perspectives on a team.”

There’s also a challenge for the Writer team to keep language up to date with fast-moving social norms. This is done through a combination of individual customer feedback and data. If a suggested edit is being repeatedly rejected by users, the team will investigate why and adjust accordingly.
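As a rough sketch of that feedback loop – using hypothetical names and thresholds, not a description of Writer’s actual system – a tool could track how often each rule’s suggestion is accepted and flag frequently rejected rules for human review:

```python
from collections import defaultdict

# Hypothetical sketch of per-rule acceptance tracking, not Writer's actual system.
class SuggestionFeedback:
    def __init__(self, review_threshold: float = 0.2, min_events: int = 50):
        self.review_threshold = review_threshold  # flag rules accepted under 20% of the time
        self.min_events = min_events              # only judge rules with enough data
        self.counts = defaultdict(lambda: {"accepted": 0, "rejected": 0})

    def record(self, rule_id: str, accepted: bool) -> None:
        """Log a single user decision on a suggestion from the given rule."""
        key = "accepted" if accepted else "rejected"
        self.counts[rule_id][key] += 1

    def rules_needing_review(self) -> list[str]:
        """Return rules whose suggestions users reject most of the time."""
        flagged = []
        for rule_id, c in self.counts.items():
            total = c["accepted"] + c["rejected"]
            if total >= self.min_events and c["accepted"] / total < self.review_threshold:
                flagged.append(rule_id)
        return flagged
```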

Ultimately, says Professor Gorbatai, these kinds of human interventions will always be needed because AI itself does not think, and does not understand the very human issue of gender inequality.

“Unsupervised AI doesn’t account for complexities of human existence and often relies on very simplistic rules and unsophisticated objectives whereas human society has meaning and complex values that are hard to program or replicate,” she says.

“I think AI can help us solve very complex computational problems – in disease detection and drug manufacturing, space exploration – but a lot of the basics of reducing gender inequality are within our reach as humans and rely on much simpler solutions that have been known for a long time.

“We can teach girls and young women (and men and boys) that women can lead and innovate and be entrepreneurial. We can fix childcare and introduce paternity leave for all parents. We don’t need tech to solve these problems.”
