Op-Ed: Artificial Intelligence Is A Human Problem
Folks: It's time to get involved.
Graphic: Rain Embuscado for Stephanie Dinkins. Courtesy Creative Commons.
Artificial intelligence has already arrived, infiltrating our civic and personal lives, and quietly reshaping the ways we live, love, work, and interact. AI’s ever-growing computational potential, powered by wellsprings of near-limitless data, has resulted in significant turning points over the course of the technology’s development.
It's reasonable to declare that humans and learning machines are on the precipice of a new epoch. (Skeptics of the claim are invited to remember that the iPhone has only been around since 2007.) The imminent wave of artificially intelligent cars, homes, medical interventions, and the like is set to alter human life all over again.
Rather than fear the impending AI revolution, we would be wiser to get involved—even those among us who can’t code, design, or even comprehend AI's prismatic complexities. The importance of transparency in the creation and use of algorithmic systems—particularly those employed to make life-altering decisions (like the length of jail sentences, or the depth and breadth of medical care)—cannot be overstated. We must be aware of the decisions artificially intelligent systems are making, understand how they are making them, and realistically anticipate the ramifications these decisions will yield.
The author with Bina48. Courtesy the artist.
That bias and discrimination can be, and already are, encoded in AI systems is no secret. One need only recall a scandalous episode from 2015, in which a Google photo search tagged an image of two black friends as 'gorillas.' By most published accounts, the misclassification was unintentional. Deliberate or not, the incident points to a disturbing and systemic problem that persists today. Algorithms, like the ones used in the photo search, were created by a largely homogeneous pool of programmers applying a limited dataset that did not represent or describe the diversity of the human family.
As society becomes increasingly reliant on AI, the demand for us all to work toward inclusion and transparency in the creation and implementation of artificial intelligence is paramount. At the very least, we must work to understand where AI intersects our lives and use strategies to manage the influence of related systems. This call to action is even more urgent for people of color, differently-abled people, and LGBTQA people, whose interests are largely mishandled, misunderstood, or overlooked altogether.
We must demand the inclusion of multiple stories; ensure accountability in AI's design; and build partnerships between research entities, companies, citizens, and the AI interface itself to ensure we are coding with broad, inclusive, and responsible information. Even those who are not computer scientists or engineers can contribute to the future of AI, which is, for better or worse, intricately intertwined with the trajectory of our families, our work, our medicine, our laws, our education, our guardianships, our love, and our lives.
Here’s what we know:
Roboy. Courtesy Wikimedia Commons.
1. Artificial intelligence is already here.
Machine learning and artificially intelligent systems abound in our daily lives. It is only a matter of time before our cars chauffeur us from place to place while we work or entertain ourselves in mobile sitting rooms. (I, for one, cannot wait until I can use an app to command my car to pick me up at the back of the office after a long day of work, instead of walking the quarter-mile to the parking lot.)
This one innovation has already spawned countless debates over safety and ethics as programmers ponder whether a car will save its passengers or pedestrians in the case of an accident. What's more, this advancement is also bound to start conversations on auto insurance and related laws: Who is liable if the automobile is driving itself? Mercedes-Benz has already announced that its autonomous cars will put the driver first. Does this mean, by extension, that the lives of those who can afford its products are more valuable than the lives of those who can't?
The medical field is incorporating robotics into shared and collaborative tasks. Imagine robots and nurses sharing the burden of carrying patients from place to place, or robots helping doctors complete surgery more precisely. The da Vinci Surgical System is a robotic technology that's already helping doctors complete minimally-invasive surgeries that are often less painful and require shorter recovery times. The practice has been FDA-approved in the United States since the early 2000s. Many of us have benefited from the life- and cost-saving capacity of this type of robot for almost two decades.
In manufacturing, medicine, and many other industries, robots will soon be indispensable co-workers: teachers and arbiters that help humans do their work. But deepening and expanding these technologies' access to knowledge will directly determine who these efficient, high-quality enhancements will benefit in the long run.
2. Algorithms contain biases. Period.
Algorithms—step-by-step instructions programmed into computers to accomplish a task—are increasingly the ubiquitous, unseen arbiters that dictate our decision-making processes. In the context of artificially intelligent systems, algorithms are used to govern and mediate our communications, our medical and legal records, our contractual transactions, our judicial and education systems—the scope is truly boundless. Anyone who has ever surfed the web, or applied for credit or insurance, has been subjected to the logic of an algorithm.
But algorithms are only as sound and complete as the datasets and the programmers who feed them. The instances of discrimination we've seen in recent memory point to a simple albeit dangerous systemic failure to integrate a more diversified pool of data and programmers. The easiest way to address racism, sexism, and other biases within artificially intelligent systems is to ensure that people of color, and others who inherently understand the need for inclusion, equity, ethics, and multimodal testing, participate in the design, production, and testing of 'smart' technologies.
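The point that a model is only as sound as its training data can be made concrete with a toy sketch. The data below is entirely made up for illustration: a simple nearest-neighbor classifier is trained on examples drawn overwhelmingly from one group, and its answer for a borderline case changes once the under-represented group is properly included.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# Hypothetical data: one numeric feature per example, plus a group label.
# Group A dominates the training set; group B appears only once.
train_skewed = [(1.0, "A"), (1.2, "A"), (0.9, "A"), (1.1, "A"), (5.0, "B")]

# The same data, with group B actually represented.
train_balanced = train_skewed + [(3.2, "B"), (4.0, "B"), (4.6, "B")]

query = 3.0  # a borderline case closer to group B's real range
print(nearest_neighbor(train_skewed, query))    # "A" — the skewed data misreads it
print(nearest_neighbor(train_balanced, query))  # "B" — better data, better answer
```

Nothing about the algorithm changed between the two calls; only the data did. That is the sense in which bias lives in the dataset as much as in the code.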
3. AI is a double-edged sword.
Like all technology, AI can be a welcome complement to humanity rather than a looming threat to its future. But allowing AI to grow unsupervised into black boxes that obfuscate how information is parsed, or how numbers are crunched, will certainly give us reason to worry. AI systems designed in bubbles risk creating a world that replicates the discriminations and dysfunctions we are grappling with today.
From assessing the probability that criminals will commit subsequent crimes, to dictating which advertisements follow us around the internet, the algorithms we design will have serious implications. When the consequences can be fatal, complacency is unacceptable.
Courtesy Pixabay Creative Commons.
4. The responsibility and consequences are shared by all.
There is a lot of talk these days about how we're going to care for the growing population of the elderly. Many fear that housing our elders in homes run by robot caretakers is inevitable. But instead of writing off robots in eldercare, we can instead imagine an AI companion that enhances their quality of life.
Take Paro, an advanced interactive robot in the form of a cuddly white seal. It functions much like a therapy animal, learning to respond to its user's individual needs, listening without judgment, and responding with the signals of an attentive listener. According to its manufacturer, AIST, Paro has been found to reduce stress in both patients and caregivers. Independent reports and empirical data from nursing homes confirm that Paro is an effective aid in managing distress in some dementia patients.
To quote Kate Crawford, who published an op-ed on the subject in The New York Times earlier this year: “Like all technologies before it, artificial intelligence will reflect the values of its creators.” For the sake of all of our shared futures, we must all, each and every one of us, find ways to advocate for, call out, and develop AI that is open, transparent, equitable, and trustworthy.
5. AI is a necessary 21st century competency.
I have dedicated my practice to developing methods that encourage citizens, namely citizens of color, to acknowledge AI as the invisible hand of daily life that it is, and further, to urge their participation in its development. From passive engagement like beta-testing, to actively programming algorithms, it is crucial to inform AI’s growth with consideration for access, accountability, transparency, and bias assessment. In short, it’s time to get involved. Here are a few steps you can take moving forward:
Stay alert, and pay attention to daily interactions to uncover AI’s role in your life. Figure out where and how your life comes into contact with AI. Use tools like the Data Selfie to discover how machine-learning algorithms are tracking you online. Use Instinet, and understand when systems are making biased decisions via data.
Call it out:
Report biases, inconsistencies, and incomplete or limited histories you discover in the AI you encounter. Expose the problem broadly, and request that it be fixed. Demand that the faulty assumptions and limits in the code and/or dataset that caused the problem be corrected immediately. Use social media, the press, and elected officials to push the offending organization into action if necessary.
Find ways to disrupt biased assumptions in AI. Even those who know nothing about code can test AI, report biased data, and attempt to confound AI that is working against your interests. If an AI is set up as the gatekeeper in applications for jobs, schools, or mortgages, learn the keywords and actions that get the desired results from an AI.
Create AI to understand how such systems work and how they can be improved. There is an amazing array of open source and free software available on the web that will help you get started on training your own AI. Use it to understand what is going on under the hood in AI systems. Try API.AI, Wit.ai, and OpenAI. Some of these are based on proprietary code, but they will get you on the road to understanding.
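For a first look "under the hood," you don't even need one of the platforms named above: a few lines of plain Python can train the simplest possible learning machine. The sketch below (illustrative only, not tied to any product) trains a perceptron to reproduce the logical AND function from four examples, which is the same train-on-examples loop that far larger systems scale up.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when right; ±1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND truth table: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Swap in a different truth table, or deliberately leave out an example, and you can watch the model's behavior change with its training data. That hands-on experience is exactly the literacy this section is arguing for.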
Lastly, and in all things: Be vigilant for, and work to correct, systemic injustices within your own lives and practices, whatever they may be.
Stephanie Dinkins is an artist and professor at Stony Brook University, creating platforms for ongoing dialogues about artificial intelligence as it intersects race, gender, aging, and our future histories. She is particularly driven to work with communities of color to develop deep-rooted AI literacy, and to co-create more culturally-inclusive and equitable artificial intelligence. She is currently an Artist-in-Residence at NEW INC, a project catalyst for Team Haptics, Cyborg Futures 2017, and a recipient of the 2017 Blade of Grass Fellowship for Socially Engaged Art.
Editor: Rain Embuscado