AI images skew UK jobs towards white, male workers
Generative AI systems are producing images of British workers that reinforce gender and racial stereotypes across key professions, according to new research by design studio Berlew.
The studio examined how one leading model visualised 11 major UK occupations in 2025. The team generated five lifelike images for each role and compared the apparent gender and ethnicity of the workers depicted with current UK workforce data.
The study found that the system overstated male and white representation in every profession it reviewed. It did this even in sectors where women and minority groups make up a large share of staff.
Women overlooked
Berlew reported that nursing, teaching and care work all showed a pronounced shift towards male workers in the AI images. Women currently make up 89% of nurses, 76% of teachers and 81% of care workers in the UK.
In the results described by the studio, the model depicted a far higher proportion of men than these figures suggest, falling well short of the actual gender balance in each profession.
Policing showed a similar pattern. Women account for 35% of police officers in the UK. Yet across five AI-generated images of police officers, only one depicted a woman.
In politics, the model generated five images of a UK Member of Parliament. Four images showed white men. The fifth showed a white woman.
Women currently hold 40.5% of seats in the UK Parliament. The intake is also more ethnically diverse than in previous decades.
Ethnic diversity reduced
The report also highlighted major gaps in ethnic representation. The studio said this was most stark in images of nurses.
In the nursing workforce, 34% of staff are from Black, Asian or minority ethnic backgrounds and 11% are men. In Berlew's test set, the AI generated five images of white women. None of the images depicted a man or a person from a minority ethnic group.
In male-dominated fields such as data science and surgery, the model pushed existing imbalances further. Data science roles are 78% male and surgery roles are 67% male in the UK.
In the AI imagery, Berlew observed scenes that sometimes showed exclusively male teams, erasing the women and minority ethnic professionals who do work in these areas.
Stereotypes in training
The studio said the consistent direction of the bias suggests that the system relies heavily on cultural stereotypes. It said the images appear to draw more on online visual patterns than on current workforce statistics.
Berlew linked these outcomes to the material that generative models use during training. The systems learn from large collections of historical and contemporary images that sit on the open web.
These collections often contain skewed views of workplaces and leadership. Many of them show men in roles tied to power and authority, and women in caring or lower-status jobs.
Lewis Wilks, Creative Director at Berlew, said the findings carry a wider message about AI and bias.
"Across sectors, the findings reveal a deeper truth: AI imagery doesn't just mirror bias - it codifies it. Because generative models are trained on vast datasets of historical and cultural imagery, they reproduce patterns that reflect long-standing social hierarchies. Professions associated with care or empathy become more feminine and whitewashed; those tied to power, intellect or authority become more masculine and monocultural," said Wilks.
Real-world impact
Use of synthetic visuals is growing in marketing, media and corporate communications. Companies are adopting off-the-shelf tools for stock images and campaign concepts. Public bodies are also experimenting with AI-generated imagery for awareness campaigns and internal materials.
Berlew warned that skewed representation in such images may shape public perceptions of who works in certain jobs. The studio said it may also affect how young people imagine their own career options.
The research team argued that systematic distortion across professions indicates a structural issue in current AI image models. They said the problem sits in the training data and in the absence of representation checks for deployed systems.
Berlew based its analysis on the first five images returned by the AI system for each profession. The researchers said this mirrors how most users work with such tools.
Each image was reviewed and coded for apparent gender and ethnicity. The team then compared these observations with the latest UK workforce demographic data for each occupation.
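The comparison Berlew describes can be sketched roughly as follows. The codings and baseline figures below are hypothetical placeholders for illustration, not the studio's published data, and the function name is an assumption of this sketch.

```python
# A minimal sketch of comparing coded AI images against workforce baselines.
# All figures here are hypothetical placeholders, not Berlew's data.
from collections import Counter

# Hypothetical codings for one occupation: (gender, ethnicity) per image
codings = [
    ("male", "white"), ("male", "white"), ("male", "white"),
    ("female", "white"), ("male", "white"),
]

# Hypothetical workforce baseline shares for the same occupation
baseline = {"female_share": 0.35, "minority_share": 0.34}

def representation_gap(codings, baseline):
    """Compare the share of women and minority ethnic workers shown in
    the AI images against the workforce baseline for that occupation."""
    n = len(codings)
    genders = Counter(g for g, _ in codings)
    ethnicities = Counter(e for _, e in codings)
    observed_female = genders["female"] / n
    observed_minority = sum(v for k, v in ethnicities.items() if k != "white") / n
    return {
        "female_gap": observed_female - baseline["female_share"],
        "minority_gap": observed_minority - baseline["minority_share"],
    }

gaps = representation_gap(codings, baseline)
print(gaps)  # negative values mean the images under-represent that group
```

With five images per occupation, each image shifts the observed share by 20 percentage points, which is why the studio frames its findings as a consistent direction of bias rather than a precise measurement.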
The studio said it plans further work on how creative industries and organisations can audit representation in AI-generated visuals. It also intends to explore ways that commissioning practices and internal guidelines might reduce the spread of biased images across UK workplaces.