Want to Mitigate AI Bias? Start with Unbiased Data

In Technology by Daniel Newman

Most people are under the impression that artificial intelligence (AI) is bias-free. After all, what could have fewer opinions than a machine? As it turns out, though, research shows AI carries the same biases as the people who created it. That means the data-driven decisions we've grown accustomed to relying on, from who receives a loan or sees a marketing campaign to who is released on parole or admitted to a university, aren't as reliable as we originally thought. Long story short: our AI is only as good as the data we feed it, and from the looks of it, we're feeding it a whole lot of human bias. So, what can we do to mitigate AI bias? It all starts with collecting unbiased data.

How Do You Mitigate AI Bias? Understand It First

To understand how to mitigate AI bias, we first have to understand how it happens. As I've shared previously, there are four main ways that bias gets baked into our AI algorithms.

  • Data-driven bias: Unlike humans, machines don't question the data they're given. If your data is biased from the start, your results will be, too (see the sketch after this list).
  • Interactive bias: In machine learning, where systems continuously update their knowledge base with information learned from the people around them, machines can become biased even if they weren't built that way. For instance, Tay, Microsoft's short-lived Twitter chatbot, turned into an aggressive racist through mere interaction with racist Twitter followers. When machines are taught to learn, they learn everything, good and bad alike.
  • Emergent bias: You know how some friends seem to disappear off the face of the social media planet? That's emergent bias at work: an algorithm's own feedback loop narrows what it shows you. Facebook, for instance, uses AI to decide whose updates we're most interested in seeing, and the friends it stops showing effectively vanish.
  • Similarity bias: Similar to emergent bias, similarity bias is what happens when companies decide the types of information we want to see: the ads Google chooses to show us, or the news articles a publication shares with us. It doesn't mean other news isn't available; it means the machine is feeding us what it thinks we want to know, or will agree with. This is one reason not to get your news from Facebook, for instance: it's slanted.
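
To make data-driven bias concrete, here's a minimal sketch in Python. It trains a simple model on synthetic loan decisions whose labels carry a historical bias against one group, and the model faithfully reproduces that bias. Everything here is invented for illustration: the dataset, the income threshold, and the generic "group" attribute standing in for any protected characteristic.

```python
# A toy illustration of data-driven bias. All data is synthetic;
# "group" is a stand-in for any protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
income = rng.normal(50 + 10 * group, 15, n)   # income correlates with group

# Historical approvals: nominally income-based, but group A applicants
# were also randomly denied 30% of the time -- the human bias in the labels.
approved = (income > 55).astype(int)
approved[(group == 0) & (rng.random(n) < 0.3)] = 0

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The trained model reproduces the disparity it was never "told" about.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

The model never decides to discriminate; it simply learns the pattern present in its biased labels, which is exactly how data-driven bias propagates.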

As you can see, none of these types of bias is necessarily intentional. Each company may even have done its best to mitigate AI bias from the start. But as they say: you don't know what you don't know. And in many companies, especially when it comes to black-box AI, it's genuinely hard to know when those biases exist.

Why AI Bias Is Problematic

There are lots of reasons to mitigate AI bias. As companies use algorithms to automate decision-making, machines now have the power to determine who gets a job, a loan, a raise, or a positive performance review. Those decisions can have lifelong implications. For instance, if someone is denied a college loan because he lives in a low-income zip code, that denial could make it harder for him to attend college, get a job, or earn a good salary later in life. Sure, the zip code seems like an unbiased piece of information; it's an easy way to estimate a loan candidate's general income level. But it says nothing about his character, goals, or dreams. That's where algorithms fall short, and why it's so important to mitigate AI bias in the first place.
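
To see why a zip code isn't as neutral as it looks, here's a hypothetical sketch. The model is trained only on zip code, with income and any protected attribute deliberately excluded, yet because zip code correlates with income, whole neighborhoods get scored down as a group. All numbers are invented for illustration.

```python
# A toy illustration of proxy bias: zip code alone still encodes income,
# so a model trained with no "sensitive" features rediscovers the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
low_income_zip = rng.integers(0, 2, n)                 # 1 = low-income zip code
income = rng.normal(70 - 30 * low_income_zip, 10, n)   # zip predicts income
repaid = (income + rng.normal(0, 10, n) > 50).astype(int)

# Train on zip code only -- no income, no protected attributes.
model = LogisticRegression().fit(low_income_zip.reshape(-1, 1), repaid)
for z in (0, 1):
    p = model.predict_proba([[z]])[0, 1]
    print(f"zip group {z}: predicted repayment probability {p:.2f}")
```

Everyone in the low-income zip gets the same low score, regardless of individual character, goals, or ability to repay.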

The thing is, as I noted above, "black box" algorithms make the inherent bias in AI difficult to see. In some cases, the algorithms are simply too complex for humans to understand. In others, they aren't transparent: companies can feed whatever data they want into an algorithm, with no requirement to divulge the factors behind a decision to anyone, including you. That means they have no responsibility to tell you why your resume was rejected for a job … why you weren't approved for that increased credit limit … or why your home loan fell out of escrow. Yes, it's bad; some consider it a weapon of "math destruction." And that's why trying to mitigate AI bias is so important.

Eliminating AI Bias: Is It Really Possible?

Honestly: no. As long as humans are involved in making machines, the bias will be there. But there are some tools that can help to mitigate AI bias, and society is beginning to recognize how important it is to use them.

IBM, for instance, has proposed a three-level ranking system for determining whether an AI system is bias-free. Essentially, it rates whether a system is not biased; whether it inherits the bias of its data or training; and whether it carries the potential to develop bias, regardless of whether it starts out bias-free. It's not fail-safe, but it's a start. Along with the numerous organizations and conferences now forming to address bias in data, and new regulations surrounding data privacy, it could do a lot to help mitigate AI bias.
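
As a rough illustration of the kind of check such a ranking system would rest on, here's a minimal sketch of one common audit: the disparate impact ratio, which compares favorable-outcome rates between groups and flags anything below the conventional four-fifths threshold. This is a simplified stand-in, not IBM's actual method; toolkits such as IBM's open-source AI Fairness 360 automate this metric and many others.

```python
# A minimal bias audit: the disparate impact ratio compares the rate of
# favorable outcomes in a protected group against a reference group.
# A ratio below 0.8 (the "four-fifths rule") is a common red flag.
# The decisions and groups below are made-up data for illustration.

def disparate_impact(decisions, groups, protected, reference):
    """decisions: 1 = favorable outcome; groups: group label per decision."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"disparate impact: {ratio:.2f}", "-> flag" if ratio < 0.8 else "-> ok")
```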

In the end, the only sure-fire way to eliminate bias in AI is to eliminate the bias in humans. Is that likely? No. But with transparency, commitment, and awareness, we can get a whole lot closer than we are today.

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world's largest technology brands, exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that companies need to benefit most from their technology projects, which leads to his ideas being cited regularly in CIO.Com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of "Building Dragons: Digital Transformation in the Experience Economy," Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native whose speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
