Investigating gender bias in ChatGPT-generated content in 2024

Ben Davies-Romano
16 min readApr 18, 2024

This article was co-authored with Annelie Tinworth, Lead UX Content Designer at Volvo Cars, localisation legend, and word nerd extraordinaire.

We’re going to start this one in a slightly unusual way, friends. Bias in AI is a huge topic, and to pretend that we’re able to present a comprehensive and thoroughly rounded view of it in one article here would be not only arrogant but also grossly oversimplified.

A photorealistic AI generated image of a robot with a moustache, lipstick, wearing a pink pleated top.
ChatGPT dressed itself this morning. Generated with Midjourney.

What is this article about, in that case? This piece is an exploration. We’ll take a look at evidence of bias in prompts to generate content with ChatGPT. And we’ll take you through what we’ve found through plenty of examples, and consider what this means for how we as writers use the tool.

As we’re mainly interested in content generation rather than using ChatGPT to carry out any kind of critical analysis or logical reasoning-based tasks, most of our examples will centre on gender bias.

That’s not to say there aren’t other kinds of bias in ChatGPT or other kinds of AI tools. Of course, all biases we as a society have are reflected in these tools built by us. That’s why at the end of this article, we want to share a list of some further reading for you to dive deeper into this as a topic and learn how other kinds of bias surface and can be mitigated.

--

--

Ben Davies-Romano

UX and Product evangelist | https://www.linkedin.com/in/benjamin-w-davies/ Leading content design at Klarna | Founder of Tech Outcasts | ☕️ and 🏳️‍🌈