News

CAHRS Partners Reflect on Artificial Intelligence and Bias

Human Resource Executive's "Decoding Discrimination" features several CAHRS partner companies, including Amazon, IBM, Mastercard and Accenture, as they grapple with the challenges of bias in artificial intelligence.

Amazon was forced to abandon a top-secret AI hiring program it had been working on since 2015 because it discriminated against women. The aborted project aimed to help recruiters by giving applicants star ratings from one to five, much like the products the online retailer sells, but its ratings were based largely on data about past applicants, who had been predominantly male. Embarrassed Amazon insiders told Reuters the program even discounted resumes that mentioned the word "women's" in describing activities, or that listed degrees from two predominantly women's colleges.

At IBM, HR executives say they're well aware of how a male-designed personality test crushed an incipient revolution of women entering computer programming during the 1960s and '70s. That historical knowledge has informed the company's current work on its popular and widely promoted Watson AI systems.

This fall, Big Blue rolled out the Adverse Impact Analysis feature of its new AI toolset, IBM Watson Recruitment. The system, which IBM officials describe as a "bias radar," combs through an organization's historical hiring data to identify potential unconscious biases in areas such as gender, race, age and education.
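To make the idea concrete, the sketch below shows one common way such a check can be framed: the "four-fifths rule," which flags a group whose selection rate in historical hiring data falls below 80 percent of the highest group's rate. This is a minimal illustration of the general technique, not IBM's actual Watson Recruitment code; the function names and threshold default are assumptions for the example.

```python
# Hypothetical sketch of an adverse impact check, NOT IBM's actual
# Watson Recruitment implementation. It applies the classic
# "four-fifths rule" to historical hiring records.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_hired) tuples.
    Returns each group's hire rate among its applicants."""
    applied = defaultdict(int)
    hired = defaultdict(int)
    for group, was_hired in records:
        applied[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / applied[g] for g in applied}

def adverse_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate is below `threshold`
    (80% by convention) of the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example: women hired at half the rate of men -> flagged.
history = ([("men", True)] * 40 + [("men", False)] * 60
           + [("women", True)] * 20 + [("women", False)] * 80)
print(adverse_impact_flags(history))  # {'men': False, 'women': True}
```

A production system would of course go further, for example by testing statistical significance and covering more dimensions, but the comparison of group selection rates is the core of what a "bias radar" over historical hiring data looks for.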

Mastercard's AI systems can make judgments based only on a user's past behavior. When firms design AI programs around job seekers, the available data tend to cover only the applicants who have already been hired, an incomplete picture.

Harvard- and MIT-trained neuroscientist Frida Polli made testing for and removing potential bias a key part of her pitch to a client roster that now includes firms like Accenture.


