How Artificial Intelligence Works and the Black Box Problem



Artificial intelligence has a “black box” problem. We cram data into one side of a machine learning system and get results out the other, but we are often unsure of what happens in the middle. Developers and researchers nearly had the issue licked, with “transparent artificial intelligence” and “explainable algorithms” trending over the past few years. Then the lawyers arrived.

Some experts make black box AI sound more complicated than it is. Imagine that someone has a million different herbs and a million different spices, and only a couple of hours to crack Kentucky Fried Chicken’s secret recipe. They have all the ingredients, but they don’t know which eleven herbs and spices to use. There is no time to guess, because trying every combination by hand would take at least a billion years. Brute force will not realistically solve the problem, at least not under kitchen conditions.
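
To see why brute force is hopeless, consider the combinatorics. The Python sketch below counts the 11-ingredient subsets of a hypothetical two-million-item pantry; the numbers (a million herbs, a million spices, eleven picks) come straight from the analogy, while the billion-tastings-per-second rate is an invented assumption just to give a sense of scale.

from math import comb

# Hypothetical pantry from the analogy: a million herbs plus a million spices.
ingredients = 2_000_000

# Number of distinct 11-ingredient recipes a brute-force search must try.
recipes = comb(ingredients, 11)
print(f"{recipes:.2e} possible recipes")  # on the order of 10**61

# Even at an (assumed) billion tastings per second, the search runs
# vastly longer than a billion years.
years = recipes / 1e9 / (3600 * 24 * 365)
print(f"{years:.1e} years of frying")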

But what if the person had a magic chicken fryer that did all the work in seconds? They could pour in all the ingredients, and the chicken would be ready to compare against KFC’s. The fryer cannot “taste” the chicken; it relies on the cook’s taste buds to confirm whether the goal was achieved.

In short, that is how black box artificial intelligence works. The cook has no idea how the magic fryer arrived at the recipe, or how many ingredients it used. But it doesn’t matter: our primary concern is that the artificial intelligence is much faster than a human.
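
In code, the black box pattern is just as blunt: data goes in, a prediction comes out, and nothing in between is inspected. Here is a minimal Python sketch; the synthetic dataset and the random forest model are illustrative assumptions, not any particular production system.

# Pour data in one side of the "magic fryer", take answers out the other.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data (assumed for illustration).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(random_state=0)  # the opaque fryer
model.fit(X, y)                                 # cram the data in

print(model.predict(X[:1]))  # a result comes out, with no explanation of why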


Black Box of Artificial Intelligence


It’s fine to use black box artificial intelligence to determine whether something is a hot dog or not. It is also fine if Instagram uses it to flag a post that could be offensive. But it’s not okay when we can’t explain why an artificial intelligence sentenced a Black man with no priors to more time than a white man who had a criminal history.

Transparency is the answer. If there were no black box, we could tell where things went wrong. If, for example, an artificial intelligence hands down longer sentences to Black people because it over-relies on external sentencing guidance, we can fix that part of the system.
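
As a hedged illustration of what transparency buys: a model whose weights are visible lets us spot over-reliance on a single input. In the Python sketch below, the data, the feature names (including the “guideline_score” column), and the model choice are all invented for the example, not taken from any real sentencing system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "guideline_score", "employment"]  # hypothetical inputs

# Simulated data whose outcomes lean far too heavily on the guideline score.
X = rng.normal(size=(500, 3))
y = (3.0 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# A transparent model exposes its weights directly.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")

# A disproportionately large weight on guideline_score is exactly the kind
# of flaw a black box would hide, and a transparent model lets us fix.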

There is a considerable downside to transparency, though. If the world can figure out how your artificial intelligence works, it can figure out how to make it work without you. Companies such as Google, Amazon, Facebook, and Palantir, which have managed to entrench biased AI within government systems, are making money off black box artificial intelligence. They don’t want to open the black box any more than they want their competitors to have access to their research. Transparency might also show how unethically some companies are using artificial intelligence, and it’s expensive.

In a Harvard Business Review piece, legal expert Andrew Burt wrote that companies attempting to utilize AI need to recognize that there are costs associated with transparency. That is not to suggest, of course, that openness isn’t worth achieving.

We hope this is a reasonably clear explanation of how the black box of artificial intelligence works. Transparency has its disadvantages, but there is a tremendous need for it.