Gender bias seen in AI-generated content on leadership, new research finds

New research has revealed an inherent gender bias in the content – text, images, other media – generated by artificial intelligence (AI).

Analysing AI-generated content about what makes a ‘good’ and a ‘bad’ leader, researchers at the University of Tasmania, Australia, and Massey University, New Zealand, found that men were consistently depicted as strong, courageous, and competent, while women were often portrayed as emotional and ineffective.

Thus, AI-generated content can preserve and perpetuate harmful gender biases, they said in their study published in the journal Organizational Dynamics.

“Any mention of women leaders was completely omitted in the initial data generated about leadership, with the AI tool providing zero examples of women leaders until it was specifically asked to generate content about women in leadership.

“Concerningly, when it did provide examples of women leaders, they were proportionally far more likely than male leaders to be offered as examples of bad leaders, falsely suggesting that women are more likely than men to be bad leaders,” said Toby Newstead, the study’s corresponding author.

Generative AI learns the patterns present in the data on which it is trained and then creates new content bearing similar characteristics, relying on machine-learning techniques for content generation.

These generative AI technologies are trained on vast amounts of data from the internet, with human intervention used to reduce harmful or biased outputs.
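The mechanism described above can be illustrated with a deliberately simple sketch (not the study's method, and far cruder than the large models it examined): a toy bigram model "learns" word-to-word patterns from training text and then generates text with the same statistical characteristics, reproducing whatever associations, including biased ones, the training data contained. All names and the training sentence below are hypothetical.

```python
from collections import defaultdict
import random

# Hypothetical toy training data: "he" is paired with positive
# leadership adjectives three times, "she" with an emotional one once.
training_text = (
    "he is a strong leader . he is a competent leader . "
    "she is an emotional leader . he is a courageous leader ."
)

# Learn the patterns: count which word follows which in the data.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a learned next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# The model can only echo associations seen in training, so any
# skew in the data (here, who gets called "strong") carries over
# directly into the generated text.
print(generate("he"))
```

Real systems replace the word-count table with neural networks trained on billions of documents, but the underlying dynamic the researchers point to is the same: the output mirrors the statistics of the input.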

Therefore, AI-generated content needs to be monitored to ensure it does not propagate harmful biases, said study author Bronwyn Eager, adding that the findings highlighted the need for further oversight and investigation into AI tools as they become part of daily life.

“Biases in AI models have far-reaching implications beyond just shaping the future of leadership. With the rapid adoption of AI across all sectors, we must ensure that potentially harmful biases relating to gender, race, ethnicity, age, disability, and sexuality aren’t preserved,” she said.

“We hope that our research will contribute to a broader conversation about the responsible use of AI in the workplace,” said Eager.
