ALEXA, DOES AI HAVE GENDER?
Professor Gina Neff talks to Ruth Abrahams about gender and AI
Published: 15 October 2018
It’s March 23, 2016. Two days earlier Barack Obama visits Cuba, the first US president to do so since 1928; a week earlier a mission launches to Mars; and two months earlier Tsai Ing-wen is elected president of Taiwan, the island’s first female leader. It is exactly three months before Britain will vote to leave the European Union and eight months before Donald Trump will be elected the 45th president of the United States. These are interesting times. Pivotal times, times that will set the course of history on a different track.
On this spring day, while the world is in flux, Microsoft launches the AI chatbot ‘Tay’ on Twitter. Humanised as a young woman, Tay is programmed to interact with the Twitter community. Harmless, neutral, innocent chatbot Tay turns into a public relations headache for Microsoft when she starts spewing misogyny and racism on the developers’ platform of choice.
According to Microsoft, trolls attacked the service and exploited Tay’s design: she learns from her interactions and bases her replies on them. Tay is shut down only 16 hours after launch.
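A toy sketch, and emphatically not Microsoft’s actual design, illustrates why a bot that learns its replies directly from unfiltered public messages is so easy to poison:

```python
import random

class NaiveChatBot:
    """A toy bot that 'learns' by storing every message users send it
    and replaying stored messages as its own responses, unfiltered."""

    def __init__(self):
        self.memory = []

    def listen(self, message):
        # Every input, benign or abusive, becomes a candidate reply.
        self.memory.append(message)

    def reply(self):
        return random.choice(self.memory) if self.memory else "Hello!"

bot = NaiveChatBot()
for message in ["hi there!", "humans are great", "<abusive message>"]:
    bot.listen(message)

print(bot.reply())  # one time in three, the abusive message comes back
```

Coordinated users can flood such a bot’s memory until its ‘average’ output is whatever they choose, which is broadly what Microsoft says happened to Tay.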
Gina Neff, Associate Professor at the Oxford Internet Institute, believes disaster could have been averted if women had been at the table during Tay’s conception and realisation. In her recent talk ‘Does AI Have Gender?’ she says: ‘Microsoft could have solved the problem. Any woman with a public voice knows that today’s Twitter is not a safe place for women to have an opinion and a voice. They could have listened to women about what Twitter is actually like.’ Professor Neff believes that lessons have been learned. Tay version two – known as ‘Zo’ – now communicates in direct messages only, shielded from those whose darker motivation is to publicly shame her and contort her responses.
The anecdote is significant for several reasons. It raises the question of what happens when we humanise technology: who is behind the design, and what function in society do we want human-fronted algorithms to serve?
Such fundamentals are not going to change of their own accord. It’s worth noting that in 1984, 37% of US graduates in computer science were women; by 2014 that figure had shrunk to 18%. Professor Neff says: ‘These are problems that we need to work on if we want to increase the creative and innovative ability of the teams designing technologies.’
Take another example: Alexa, Amazon’s voice-controlled AI assistant. Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.
In the home, the AI assistant with a subservient role is given a female voice. In the realm of law or finance, the AI assistant is coded as male, affording the computer voice a context of authority and professionalism.
These scenarios of a gendered, humanised means of interacting with technology are on some level obvious. We hear the voice and see the avatar – our own imaginations make the leap to engage on a personal level with a machine and some code. This is something tangible and in the public realm. We can choose to embrace, reject, push back. However, the issues of gender and power balances go deeper still and further from sight.
Facial recognition software is starting to be used by police forces. Even as it is rolled out, questions are being raised about whether it is fit for purpose: recent research reveals that major AI systems identified the gender of light-skinned men almost flawlessly, whereas error rates for dark-skinned women ranged from 20% to 34%. Professor Neff says: ‘This shows that we’re having an extreme problem about where training data is coming from and how it’s being implemented.’
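Findings like these rest on a simple but crucial evaluation choice: measuring error rates separately for each demographic subgroup rather than reporting one aggregate accuracy figure. A minimal sketch of that kind of disaggregated evaluation (the records below are invented for demonstration):

```python
from collections import defaultdict

def error_rates_by_group(records):
    # Each record is (subgroup, true_gender, predicted_gender).
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, prediction in records:
        totals[subgroup] += 1
        if prediction != truth:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical outputs from a face-analysis system:
records = [
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("darker-skinned woman", "female", "male"),   # misclassified
    ("darker-skinned woman", "female", "female"),
]
print(error_rates_by_group(records))
# {'lighter-skinned man': 0.0, 'darker-skinned woman': 0.5}
```

A single overall accuracy number would hide exactly the disparity the research describes; breaking results out by subgroup makes it visible.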
Meredith Broussard says in her book Artificial Unintelligence: ‘We have a small, elite group of men who tend to overestimate their mathematical abilities, who have systematically excluded women and people of colour in favour of machines for centuries, who tend to want to make science fiction real, who have little regard for social convention, who don’t believe that social norms or rules apply to them, who have unused piles of government money sitting around, and who have adopted the rhetoric of far-right libertarian anarcho-capitalists. What could possibly go wrong?’
And what – more importantly – could go right, and how can things change? It’s hard to imagine that, given the choice, most people would object to AI being infused with the values that have been hard-won over centuries, those that underpin democracies and are touted as essential for human wellbeing: equality, trust, respect, fairness, accountability and transparency.
These questions are for everyone.
AI depends on the quality and quantity of the data it is given. What it then does with that data, how it processes it and ‘learns’, depends on how a human has programmed it to handle and interpret that data. And it is this ‘black box’ of the algorithm that a growing movement of academics, journalists and members of the public is seeking to open up.
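How far the output depends on the input data is easy to demonstrate. In the minimal sketch below (the scenario and data are invented for illustration), the same trivial learning procedure reaches opposite conclusions purely because of what it was trained on:

```python
from collections import Counter

def train(examples):
    # A deliberately naive 'learner': predict whichever label was most
    # common in training. Real models are far more sophisticated, but
    # they inherit skew from their data in much the same way.
    label, _ = Counter(examples).most_common(1)[0]
    return lambda: label

# Two hypothetical training sets for choosing an assistant's default voice:
mostly_female = ["female"] * 90 + ["male"] * 10
mostly_male = ["male"] * 90 + ["female"] * 10

print(train(mostly_female)())  # female
print(train(mostly_male)())    # male -- same code, different data
```

Opening the black box can collide with the law, however. A case currently being played out in the American courts – Sandvig v Sessions – challenges the use of the Computer Fraud and Abuse Act against researchers and journalists who breach a website’s terms of service in order to investigate it.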
Under Facebook’s current rules, the Knight First Amendment Institute has said, investigators risk not just being shut out of the site but also being held liable under the Computer Fraud and Abuse Act for violating Facebook’s terms of service. In a letter addressed to Facebook CEO Mark Zuckerberg, the Institute calls on Facebook to amend those terms to create a safe harbour allowing bona fide journalists and academics to scrape data and use temporary accounts in order to conduct their research.
‘Digital journalism and research are crucial to the public’s understanding of Facebook’s platform and its influence on our society,’ the letter says. ‘Facebook’s terms of service limit this kind of journalism and research because they ban tools that are often necessary to it.’
Professor Neff advocates for a pathway for scholars to map out societal responses and challenges to AI.
She believes that policymakers need to be involved at all stages of development, and that concern about harming business is being put above the good of society. ‘We can’t leave regulation to companies themselves,’ says Professor Neff.
Questions are being raised about the extent to which our sense of self and autonomy will change with AI. Will people feel empowered to challenge an AI decision? Will people have the power to spot mistakes? Making decisions fair and transparent is a challenge when they are locked inside a closed AI system.
Values of efficiency and speed are privileged over everything else in AI design, argues Professor Neff. ‘The development of AI doesn’t take into account that there’s pleasure, joy or aesthetics in some work,’ she says.
So then, what can possibly go right? According to Professor Neff this is what needs to happen:
- Increase representation and diversity in tech
- Increase diversity in training data (see the sketch after this list)
- Ensure systems preserve voice and judgment from end users
- Ensure systems are designed for user response and feedback from end users
- Design for use; plan for misuse; prevent abuse
- Increase transparency and accountability for implementation
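On the training-data point, even a crude audit of who is represented in a dataset can surface skew before a model is ever trained. A minimal sketch, with invented fields and figures:

```python
from collections import Counter

def audit_representation(dataset, attribute):
    # Share of training examples per value of a demographic attribute --
    # a first, crude check for skewed data.
    counts = Counter(example[attribute] for example in dataset)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical face-image training set:
dataset = (
    [{"skin_tone": "lighter", "gender": "male"}] * 70
    + [{"skin_tone": "lighter", "gender": "female"}] * 20
    + [{"skin_tone": "darker", "gender": "female"}] * 10
)
print(audit_representation(dataset, "gender"))     # {'male': 0.7, 'female': 0.3}
print(audit_representation(dataset, "skin_tone"))  # {'lighter': 0.9, 'darker': 0.1}
```

Such a check does not remove bias by itself, but it makes imbalance visible, and therefore accountable, before a system ships.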
It is a bright autumn morning, September 2018. The sun is low, making everything vivid and rich; shadows stretch across a narrow Oxford street. Gina Neff is clear about what can steer AI in a positive and open direction: old problems are returning in new guises, and new battles will need to be fought.
‘This is where I’m hopeful,’ she says. ‘I think that we can begin to have and build better organisations and workplaces and systems of governance if we think about our technology coming from the kinds of social infrastructure we have now and working with those realities. We’ll be better off if we work with those social realities and infrastructures of those very organisations, rather than design for a utopian future, general AI. We should be thinking about what kind of knowledge and what kinds of people do we want to empower with the tools that are being designed.’