
AI bias detection (aka: the fate of our data-driven world)

Rooting out implicit bias in AI is fundamental to ensuring an equitable society. Is it even possible?
Written by Greg Nichols, Contributing Writer

Here's an astounding statistic: Between 2015 and 2019, global use of artificial intelligence grew by 270%. It's estimated that 85% of Americans are already using AI products daily, whether they know it or not.

It's easy to conflate artificial intelligence with superior intelligence, as though machine learning based on massive data sets leads to inherently better decision-making. The problem, of course, is that human choices undergird every aspect of AI, from the curation of data sets to the weighting of variables. Usually there's little or no transparency for the end user, meaning resulting biases are next to impossible to account for. Given that AI is now involved in everything from jurisprudence to lending, it's massively important for the future of our increasingly data-driven society that the issue of bias in AI be taken seriously. 

This cuts both ways: the technology itself, which represents massive new possibilities for our species, will suffer from diminished trust if bias persists without transparency and accountability. In a recent conversation, Booz Allen's Kathleen Featheringham, Director of AI Strategy & Training, told me that adoption of the technology is being slowed by what she identifies as historical fears:

Because AI is still evolving from its nascency, different end users may have wildly different understandings about its current abilities, best uses and even how it works. This contributes to a black box around AI decision-making. To gain transparency into how an AI model reaches end results, it is necessary to build measures that document the AI's decision-making process. In AI's early stage, transparency is crucial to establishing trust and adoption.

While AI's promise is exciting, its adoption is slowed by historical fear of new technologies. As a result, organizations become overwhelmed and don't know where to start. When pressured by senior leadership, and driven by guesswork rather than priorities, organizations rush into enterprise AI implementations that create more problems.

One solution that's becoming more visible in the market is validation software. Samasource, a prominent supplier of solutions to a quarter of the Fortune 50, is launching AI Bias Detection, a solution that helps detect and combat systemic bias in artificial intelligence across a number of industries. The system, which keeps a human in the loop, offers advanced analytics and reporting capabilities that help AI teams spot and correct bias before it's deployed across a variety of use cases, from identification technology to self-driving vehicles.

"Our AI Bias Detection solution proves the need for a symbiotic relationship between technology and a human-in-the-loop team when it comes to AI projects," says Wendy Gonzalez, President and Interim CEO of Samasource. "Companies have a responsibility to actively and continuously improve their products to avoid the dangers of bias and humans are at the center of the solution." 

That responsibility is reinforced by alarmingly high error rates in current AI deployments. One MIT study found that gender classification systems sold by IBM, Microsoft, and Face++ had "an error rate as much as 34.4 percentage points higher for darker-skinned females than lighter-skinned males." Samasource also references a Broward County, Florida, law enforcement program used to predict the likelihood of future crime, which was found to "falsely flag black defendants as future criminals (...) at almost twice the rate as white defendants."
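The disparities those studies describe come down to a simple measurement: error rates broken out by demographic subgroup rather than averaged over the whole test set. As a minimal sketch of that kind of audit, the Python below tallies per-group error rates from labeled predictions; the group names and toy data are illustrative assumptions, not the actual schema used by MIT or Samasource.

```python
# Sketch of a subgroup error-rate audit: compare error rates per group
# instead of one aggregate accuracy number. Data here is hypothetical.
from collections import defaultdict

# (true_label, predicted_label, subgroup) triples -- toy examples only
predictions = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for true, pred, group in predictions:
    totals[group] += 1
    if pred != true:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} error rate over {totals[group]} samples")
```

A wide gap between the highest and lowest subgroup error rates is exactly the kind of disparity the studies above flag, even when overall accuracy looks strong.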

The company's AI Bias Detection looks specifically at labeled data by class and discriminates between ethically sourced, properly diverse data and datasets that may lack diversity. It pairs that detection capability with a reporting architecture that provides details on dataset distribution and diversity, so AI teams can pinpoint problem areas in datasets, training, or algorithms and root out biases.
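To make the idea of a distribution report concrete, here is a minimal sketch in Python that counts labeled examples per class and flags underrepresented ones. The threshold, function name, and example labels are assumptions for illustration, not details of Samasource's product.

```python
# Illustrative dataset-distribution report: count samples per class and
# flag any class whose share falls below a chosen threshold.
from collections import Counter

def distribution_report(labels, min_share=0.10):
    """Print each class's share of the dataset, flagging underrepresented classes."""
    counts = Counter(labels)
    total = sum(counts.values())
    for cls, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{cls}: {n} samples ({share:.1%}){flag}")

# Toy labeled dataset, e.g. for a perception model
distribution_report(["pedestrian"] * 90 + ["cyclist"] * 8 + ["wheelchair user"] * 2)
```

A report like this doesn't fix a skewed dataset by itself, but it tells a team where to collect or source more data before a model trains on the imbalance.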

Pairing powerful detection tools with a broader understanding of how insidious AI bias can be is an important step in the early days of AI/ML adoption. Part of the onus, certainly, will fall on consumers of AI applications, particularly in spheres like governance and law enforcement, where the stakes couldn't possibly be higher.
