Albert McKeon

Jan 11th 2021

Preventing Bias in AI Technology Requires Human and Tech Oversight


As its advocates have argued for generations, technology aims to improve how humans live, work and have fun.

Technology can also hinder that progress, particularly when bad actors or design flaws make machines work in ways that don’t help humans. Those shortcomings have been evident with AI technology, like machine learning algorithms, which, instead of preventing bias, have advanced the prejudices held by humans.

From a chatbot that unleashed racially insensitive tweets to an algorithm that predicted crime based on race, these AI-supported programs have behaved just as imperfectly as people. But several new tools aim to curb bias in AI, while observers in and out of the tech industry call for human designers to go a step further and be more mindful of what they create.

Biases Found in Machine Learning Algorithms

The algorithms of machine learning, natural language processing (NLP) and other foundational elements of AI supposedly operate outside the boundaries of human imperfection. AI programs promise to neutrally screen job candidates, handle loan applications, predict criminal activity and make other decisions that are subject to human biases. But a string of discoveries has shown many programs — even those designed to perform seemingly frictionless tasks — can’t make objective or even good “choices.”

A University of Massachusetts study revealed that many NLP algorithms are built around standard English only, decreasing their ability to handle dialects. Many widely used NLP tools identify African-American English as “not English” at higher rates than expected.

A ProPublica investigation of software designed to predict future criminal activity found that its algorithms were more likely to falsely flag black defendants as likely to reoffend, wrongly labeling them as future reoffenders at almost twice the rate of white defendants. The software also mislabeled white people as low risk more often than black people.
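The disparity ProPublica described comes down to comparing false positive rates across groups. The short Python sketch below illustrates that calculation on made-up data; the group labels and records are hypothetical and are not drawn from ProPublica's dataset.

```python
# Illustrative only: compare false positive rates across groups for a
# binary "high risk" classifier. Records and group names are hypothetical.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", False, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r[2]]   # did not reoffend
    flagged = [r for r in negatives if r[1]]    # but were flagged high risk
    return len(flagged) / len(negatives) if negatives else 0.0

by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

for group, rows in by_group.items():
    print(group, f"false positive rate: {false_positive_rate(rows):.2f}")
```

A large gap between the printed rates is exactly the kind of signal the investigation highlighted: the model is making a different kind of mistake for one group than for another.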

Tech giants were not immune to AI-produced biases. An MIT study found IBM visual recognition technology had an error rate of 34.7 percent when identifying women with darker skin tones, while the error rate for identifying that same group of people with Microsoft technology was 20.8 percent. And as CNBC observed, Facebook and YouTube could stand to learn a lesson from Microsoft, whose Twitter chatbot generated racist and anti-Semitic tweets in 2016; two years later, both social media platforms came under fire for offering offensive content in their search bars.

Building the Ethics of AI

Although the list of biased technology looks long, the list of tools and ideas to counter that bias just might be longer.

On the tech side, IBM has created a tool that the company says can review AI-backed software for unintentional bias, as Fortune reported. Google introduced the “What-If Tool,” an open-source web application that lets people analyze machine learning software for algorithmic fairness. Microsoft also made significant improvements to reduce bias in its facial recognition technology. According to Fortune, the software’s training data set wasn’t diverse enough, containing far fewer photos of women with darker skin than of lighter-skinned men.
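Tools like these generally start from the same basic idea: slice a model's predictions by a sensitive attribute and compare the resulting metrics. The sketch below shows that kind of check in plain Python; it is a minimal illustration under assumed data and a generic `predict` function, not the actual API of IBM's or Google's tools.

```python
# Minimal sketch of a per-group "selection rate" check. The model interface
# and data are assumptions for illustration, not any vendor's actual tool.
from typing import Callable, Sequence

def selection_rates(
    predict: Callable[[Sequence[float]], int],
    examples: Sequence[tuple[Sequence[float], str]],
) -> dict[str, float]:
    """Return the share of positive predictions for each group."""
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for features, group in examples:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + predict(features)
    return {g: positives[g] / totals[g] for g in totals}

def toy_model(features: Sequence[float]) -> int:
    """Toy classifier: approve anyone whose first feature exceeds 50."""
    return int(features[0] > 50)

data = [
    ([60.0], "group_a"), ([55.0], "group_a"), ([40.0], "group_a"),
    ([45.0], "group_b"), ([30.0], "group_b"),
]
print(selection_rates(toy_model, data))
```

If the approval rate for one group is far below another's, that gap is a prompt to dig into the training data and features before trusting the model.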

Some observers contend that humans are inherently biased and AI technology itself will correct those leanings, because it is “learning” largely without human involvement. Others, though, argue that if humans design the programs, the algorithms will be flawed from the start.

“It’s not enough to try to avoid bias in AI—we must actively work to identify and counteract unintended bias in our algorithms,” said Amanda Muller, technical fellow at Northrop Grumman. “Technology advances like Explainable AI help us identify potential bias by providing insights into how and why AI decisions are reached. Advances in the development of synthetic data allow us to reduce or eliminate bias in our training data sets. And inclusion of bias testing in our AI governance processes provides for the identification and correction of bias during development and operations. As AI technology continues to evolve, so must our diligence in recognizing and actively countering AI bias.”
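One concrete way to put bias testing into a governance process is to treat it like any other automated check, so a model cannot move forward if a per-group metric gap exceeds a threshold. The snippet below is a hedged sketch of such a gate; the 0.05 threshold, metric names and numbers are illustrative assumptions, not a description of Northrop Grumman's actual process.

```python
# Sketch of a bias "gate" that could run in a model-release pipeline.
# The threshold and the example error rates are illustrative assumptions.
MAX_ALLOWED_GAP = 0.05

def check_bias_gate(per_group_error_rates: dict[str, float]) -> None:
    """Fail loudly if the best and worst group error rates diverge too much."""
    gap = max(per_group_error_rates.values()) - min(per_group_error_rates.values())
    if gap > MAX_ALLOWED_GAP:
        raise ValueError(
            f"Error-rate gap {gap:.3f} exceeds allowed {MAX_ALLOWED_GAP}; "
            "review the training data and model before release."
        )

# Example: these numbers would come from an evaluation step, not be hard-coded.
try:
    check_bias_gate({"group_a": 0.21, "group_b": 0.12})
except ValueError as err:
    print(f"Release blocked: {err}")
```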

Business leaders whose companies use AI tools and who lean toward increasing human responsibility can consider following five standards that two professors proposed in MIT Technology Review. To hold algorithms accountable, the professors say, companies should have someone always responsible for the technology, be able to explain decisions made by the program, understand the errors that AI makes, let the public or a private entity audit the product and strive to be fair by evaluating any discriminatory effects.

Machines, like humans, might always have some inclination toward bias. But as the professors believe, humans have an undeniable duty to ensure that machine learning algorithms don’t go astray.

Are you interested in all things related to technology? We are, too. Check out Northrop Grumman career opportunities to see how you can participate in this fascinating time of discovery.
