Hospitals, marketers, law enforcement agencies and other bodies use AI (artificial intelligence) to decide who gets hired, who receives medical treatment, who is likely to buy which product at what price, and who is profiled as a criminal. These entities increasingly predict and monitor our behaviour with such technology, often motivated by profit and power.
It is now common for AI experts to ask whether an AI system is “for good” or “fair”. But “good” and “fair” are infinitely spacious terms into which almost any AI system can be squeezed. The question to pose is more probing: how is AI shifting power?
From 12 July, thousands of researchers will meet virtually at the week-long International Conference on Machine Learning, the largest AI meeting in the world. Many of them view AI as neutral and often beneficial, marred only by biased data drawn from an unfair society. But an indifferent field serves the powerful.
Those who work in AI need to elevate the people who have been excluded from shaping it, and doing so will require them to restrict their relationships with the dominant institutions that benefit from monitoring people. Researchers should listen to, collaborate with, cite and amplify the communities that have borne the brunt of surveillance: among them Black people, Indigenous people, women, disabled people, poor people and LGBT+ people. Research institutions and conferences should cede prominent time slots, funding, spaces and leadership roles to members of those communities. In addition, discussions of how AI research shifts power should be required in publications and grant applications.
Last year, the Radical AI Network was created, inspired by Black feminist scholar Angela Davis’s observation that “radical simply means grasping things at the root”. The root problem is that power is distributed unevenly. The network listens to people who are marginalized and affected by AI, and it advocates for anti-oppressive technologies.
AI researchers overwhelmingly focus on providing decision-makers with highly accurate information; remarkably little research focuses on serving the subjects of that data. What is needed are ways for people to investigate, influence, contest and even dismantle AI systems. For instance, the advocacy group Our Data Bodies has put forward ways to protect personal data when interacting with United States child-protection and fair-housing services. Such work usually receives little attention. Meanwhile, mainstream research builds systems that are extraordinarily expensive to train and that empower already powerful institutions, from Facebook, Amazon and Google to military and domestic-surveillance programmes.
Many researchers have trouble seeing how their intellectual work with AI furthers inequity. They spend their days working on systems that are mathematically beautiful and useful, and they hear AI success stories, such as its promise in detecting cancer or its victories at Go. It is researchers’ responsibility to recognize this skewed perspective and to listen to those affected by AI.
Listening makes it possible to see why efficient, accurate and generalizable AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial-recognition system is a more harmful one.