American science fiction writer Ursula K. Le Guin, pictured at the Florida Harbour Side …
Fairfax Media via Getty Images
When we think of the future of AI, many of us think of sentient robots or self-driving cars. These current visions, and subsequent actions to materialize them, are very much influenced by popular science fiction – say Blade Runner or Her. But what if our innovations were inspired instead by feminist science fiction writers and theorists like Ursula K. Le Guin, Octavia Butler and Donna Haraway?
When you shift the technology sector’s definition of progress away from new machines and faster systems, and toward inventions and improvements that move society closer to peace and equality, inclusion-driven AI suddenly becomes much more than a nice thing to have – it becomes the foundation of technological innovation.
When we speak about inclusive AI, the first problem that needs to be addressed is that of bias – particularly the ways in which human-trained machines replicate and exacerbate societal discrimination. Wired reported on a poignant example of the compounding effects of just one human’s biases in the process of training AI:
“In other exercises, employees would sometimes mischaracterize ads based on their own inherent biases. In one glaring example, an associate mistakenly categorized a pro-LGBT ad run by a conservative group as an anti-LGBT ad. When I pointed out that she had let her assumptions about conservative groups’ opinions on LGBT issues lead to incorrect labeling, my response was met by silence up and down the chain.
“These mischaracterizations are incorporated into manuals that train both human reviewers and machines. These are mistakes made while trying to do the right thing. But they demonstrate why tasking untrained engineers and data scientists with correcting bias is, at the broader level, naïve, and at a leadership level insincere.”
It is well-reported, yet still desperately under-addressed, that the root of algorithmic inequality stems from a chronic lack of diversity in the technology field. We need diverse voices building these platforms – we need diverse voices training these platforms – we need diverse voices overseeing these platforms.
Diversity means people of different experiences, ethnicities, gender identities, religions, socio-economic backgrounds and more. An algorithm shouldn’t be built for one type of person. It should be built for all types of people.
Our current algorithms are created by people who carry systemic biases and who are untrained to recognize, or even be made aware of, those biases. Take, for example, the fact that social media moderation algorithms are more likely to flag content shared by women, queer folks, and trans folks. Hard to believe? The Huffington Post recently published an article citing a common practice on Instagram that helps boost exposure and engagement: accounts changing their gender from female to male.
The problem is that instead of addressing these issues, many technology companies and innovators are simply continuing to push forward with faulty AIs. Discussions of inclusive AI are being pushed to the margins, and as a result our AIs are ineffective, inaccurate and far from achieving any meaningful improvements for society.
This is not to say no one is talking about this, and there are some exciting initiatives taking place. For example, IBM’s AI Fairness 360 is an open source toolkit that helps developers test for bias in their datasets. Just this week, Facebook AI announced a new technique that marks the images in a dataset so that researchers can understand whether a machine learning model was trained using those images. This verification method, called “radioactive” data, allows for greater transparency when it comes to the data a model is trained on.
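To make this concrete, here is a minimal sketch of one fairness check – “disparate impact” – of the kind that toolkits like AI Fairness 360 automate across many metrics at once. The dataset, group names and numbers below are fabricated purely for illustration; this is not the toolkit’s actual API.

```python
# A toy disparate-impact check: compare the rates at which two
# demographic groups receive a favorable outcome from a model.
# All data here is made up for illustration.

def disparate_impact(outcomes):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    `outcomes` maps a group name to a list of 0/1 decisions
    (1 = favorable). A ratio below ~0.8 is a common red flag,
    sometimes called the "four-fifths rule".
    """
    rates = {group: sum(labels) / len(labels)
             for group, labels in outcomes.items()}
    return rates["unprivileged"] / rates["privileged"]

# Hypothetical model decisions for two groups of applicants.
decisions = {
    "privileged":   [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 favorable
    "unprivileged": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 favorable
}

ratio = disparate_impact(decisions)
print(f"disparate impact: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
```

A single number like this cannot prove or disprove discrimination, which is exactly why dedicated toolkits compute dozens of complementary metrics and mitigation strategies rather than relying on one ratio.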
However, these are not the norm, nor the focus of AI innovation. According to a 2019 University of Cambridge study, “At the current rate, AI will continue to perpetuate gender-based discrimination…This occurs through the design of AI systems which reinforce restrictive gender stereotypes; law and policy which is not focused on issues of gender equality; the widespread use of bias datasets; and a lack of diversity in the AI workforce.”
We are pushing forward, creating more and more use cases, despite a major flaw in this powerful technology being left unaddressed and often unspoken.
The intersection of feminist theory and artificial intelligence brings about new possibilities with far-reaching impact. In the section of the above-mentioned Cambridge Study titled, “Bridging Gender Theory and AI Practice” the authors state, “Achieving the goal of social equality would be aided by dialogue between gender theorists and technologists. But at present, gender theory and AI practice ‘are speaking completely different languages.’”
In short, researchers fluent in gender and racial theory are not writing the papers on bias removal that IBM’s or Facebook’s teams are reading.
The paper also highlights the growing distance between those who are designing and deploying these systems, and those who are affected by these systems.
“What will ordinary people do to respond to challenge, adapt and give feedback that will be crucial for the positive evolution of these systems?” the researchers ask. They continue, “We have to think about the importance of the ‘social shaping of AI’, which would include designing workshops with users and including them in the discussion of how systems could be adapted to work for their benefit.”
True innovation, the kind of innovation with the potential to change the world, will be an AI that not only does not replicate human bias, but in fact corrects it.
Technology entrepreneurs, developers and investors are known for thinking big, but perhaps they are not thinking big enough. The most pressing problems we face cannot be solved by same-day delivered packages, virtual assistants that understand our calendars or an app that knows what song we want to hear.
Thinking big when it comes to innovation means talking about the big problems, such as the disproportionate incarceration of people of color, or why spending just an hour on social media causes lower self-esteem among women and girls.
Perhaps we should be building a technological future that aspires to a different kind of science fiction – one in which we can use AI to address systemic discrimination and reduce poor decisions based on deeply ingrained biases.
Le Guin, Butler and Haraway imagined futures in which the inclusiveness (or lack thereof) of our technologies was a central concern of their writing – as well as the crux on which the distinction between dystopia and utopia rested.
For both better technology and a better world, let’s make inclusiveness the primary focus of real-world innovation as well.