An algorithm with racial bias when working on images

Barack Obama is the United States’ first black President. If you put a low-resolution image of him into a certain algorithm that generates depixelated faces, the output will be a white man.

This is not the case only for Obama. The same happens if you put in pictures of Congresswoman Alexandria Ocasio-Cortez or actress Lucy Liu: low-resolution images of non-white people come out as high-resolution faces that are distinctly white. These images speak volumes about the dangers of bias in artificial intelligence.

To explain the problem, we need to know a little about the technology being used here. The algorithm that generates these images is called PULSE. It processes visual data with an upscaling technique. Upscaling is like the “zoom and enhance” trope you see in film and TV. Unlike in Hollywood, however, real software cannot simply conjure new data from nothing. To turn a low-resolution image into a high-resolution one, the software must fill in the blanks using machine learning.
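To see why the blanks must be filled in at all, it helps to compare with classical interpolation, which can only spread existing pixels around rather than create detail. The snippet below is a small illustration using Pillow; the file name and sizes are hypothetical.

```python
# Minimal sketch: naive upscaling only interpolates existing pixels,
# so no new facial detail appears. A learned upscaler like PULSE must
# instead invent detail that is merely consistent with the input.
# Assumes Pillow is installed; "face_low_res.png" is a hypothetical
# low-resolution (e.g. 16x16) input image.
from PIL import Image

low_res = Image.open("face_low_res.png")

# Classical interpolation: the result is just a blurry, blocky enlargement.
bicubic = low_res.resize((1024, 1024), Image.BICUBIC)
bicubic.save("face_bicubic.png")

# A learned upscaler would instead synthesize plausible detail drawn from
# whatever faces the model saw during training.
```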

For PULSE, the algorithm doing this work is StyleGAN, a technology created by researchers at NVIDIA. You may not have heard of StyleGAN, but you are most likely familiar with its work: it is the algorithm responsible for generating some eerily realistic human faces.

You can see these faces on websites like ThisPersonDoesNotExist.com. They are so realistic that they are often used to create fake social media profiles.

How the image becomes racialized

PULSE “imagines” the high-resolution version of a pixelated input with the help of StyleGAN. It does not do that by “enhancing” the original low-resolution image. Instead, it generates an entirely new high-resolution face that, when scaled back down, looks like the one the user put in.
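In rough terms, the method searches the generator’s latent space for a face whose downscaled version matches the input pixels. The following is a minimal sketch of that idea in PyTorch, not the authors’ actual implementation; the generator `G`, the latent size, and the optimization settings are all stand-in assumptions.

```python
# A minimal sketch of the PULSE-style idea, assuming a pretrained face
# generator `G` (a stand-in for StyleGAN) that maps a latent vector z
# to a full-resolution RGB face tensor.
import torch
import torch.nn.functional as F

def pulse_like_upscale(G, low_res, latent_dim=512, steps=500, lr=0.1):
    """Search the generator's latent space for a face that, once
    downscaled, matches the low-resolution input.
    `low_res` is assumed to be a (1, 3, 32, 32) tensor."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        candidate = G(z)                                   # full-resolution face
        downscaled = F.interpolate(candidate, size=low_res.shape[-2:],
                                   mode="bicubic", align_corners=False)
        loss = F.mse_loss(downscaled, low_res)             # match the pixels we have
        loss.backward()
        optimizer.step()

    return G(z).detach()
```

The real PULSE adds further constraints (for example, keeping the latent vector close to the region StyleGAN was trained on), but the core loop is this kind of downscaling-consistency search.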

Artificial Intelligence

That means each depixelated image can be upscaled in a variety of ways, in the same way that a single set of ingredients can be made into different dishes. You can use PULSE to see what a crying emoji, Doom guy, or the hero of Wolfenstein 3D would look like at high resolution. The algorithm does not “find” new detail in the image, as in the “zoom and enhance” trope; it invents new faces that, when downscaled, match the input data.
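Because the search starts from a random latent vector, running it again on the same input can land on a different face that still downscales to the same pixels. Continuing the hypothetical sketch above, the loop below collects several such candidates, one per random seed.

```python
# Hypothetical continuation of the sketch above: `G` and `low_res`
# are the same assumed generator and (1, 3, 32, 32) input tensor.
import torch

faces = []
for seed in range(4):
    torch.manual_seed(seed)                      # different random starting latent
    faces.append(pulse_like_upscale(G, low_res))

# Every tensor in `faces` downscales to (roughly) the same pixelated input,
# yet each one is a different invented high-resolution face.
```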

That sort of work has been theoretically possible for a few years now. But, as is often the case in artificial intelligence, it reached a broader audience when an easy-to-run version of the code was made available online this weekend. That is when the racial disparities began to become apparent.

The creators of PULSE say the trend is clear: when the algorithm is used to scale up pixelated images, it usually generates faces with Caucasian features.

The algorithm’s creators addressed this on GitHub. They wrote that PULSE does appear to produce white faces more frequently than faces of people of color, and that this bias was likely inherited from the dataset StyleGAN was trained on, though there could be other factors they are not aware of.

When StyleGAN tries to come up with a face that looks like the pixelated input image, it usually defaults to white features.