
Lack of STEM diversity is causing AI to have a ‘white male’ bias


A report from New York University’s AI Now Institute has found that a predominantly white male coding workforce is causing bias in algorithms.
The report highlights that the lack of diverse representation at major technology companies such as Microsoft, Google, and Facebook – while gradually narrowing – is causing AI systems to cater more towards white males.
For example, at Facebook just 15 percent of the company’s AI staff are women. The problem is even more pronounced at Google, where just 10 percent are female.
Report authors Sarah Myers West, Meredith Whittaker and Kate Crawford wrote:
“To date, the diversity problems of the AI industry and the issues of bias in the systems it builds have tended to be considered separately. We suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined.”
As artificial intelligence is used more widely across society, there’s a danger that some groups will be excluded from its advantages while the technology ends up “reinforcing a narrow idea of the ‘normal’ person”.
The researchers highlight examples of where this is already happening:
  • Amazon’s controversial Rekognition facial recognition AI struggled with dark-skinned women in particular, although separate analysis has found other AIs also face such difficulties with non-white males.
  • A résumé-scanning AI which relied on previous examples of successful applicants as its benchmark. Because those past applicants were predominantly male, the AI downgraded people who included “women’s” in their résumé or who attended women’s colleges (a minimal sketch of this mechanism follows below).
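The résumé example illustrates a simple mechanism: a model trained to imitate past hiring decisions reproduces whatever patterns those decisions contain. The following is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn, with made-up toy data – not the system described in the report) of how a classifier trained on biased historical outcomes can learn to penalise a feature that merely correlates with past rejections.

```python
# Minimal, hypothetical illustration (not the system described in the report):
# a classifier trained on biased historical hiring outcomes learns to
# penalise a feature that merely correlates with past rejections.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Toy features: a genuine skill score and a flag for the word "women's".
skill = rng.normal(size=n)
mentions_womens = rng.integers(0, 2, size=n)

# Biased historical labels: past reviewers hired on skill but also
# systematically marked down résumés containing "women's".
hired = (skill - 1.5 * mentions_womens + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, mentions_womens])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical bias: the coefficient for
# the "women's" flag is strongly negative even though the flag says
# nothing about ability.
print(dict(zip(["skill", "mentions_womens"], model.coef_[0].round(2))))
```

The model is doing exactly what it was asked to do – predict past decisions – so any discrimination in those decisions carries straight through into its scores.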
AI is currently deployed in only a few life-changing areas, but that’s rapidly changing. Law enforcement is already looking to use the technology for identifying criminals, even preemptively in some cases, and for making sentencing decisions – including whether someone should be granted bail.
“The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation,” the researchers noted. “The commercial deployment of these tools is cause for deep concern.”
