How do we ensure that AI is our friend, not our foe?

19 Feb 2019

By Alison Maitland

How vulnerable are we humans to smart machines? We got a nasty glimpse of the possibilities with the air travel chaos caused by drone sightings over London’s Gatwick airport in the busy run-up to Christmas.

The drone use was illegal but, even when technology is in the ‘right’ hands, we have reason to be vigilant about how artificial intelligence is rapidly entering many parts of our lives.

At work, there is the potential for bias and discrimination to be automated. Machine learning relies on large data sets to detect patterns and make predictions. Those data typically reflect past human behaviour, which inevitably contains prejudices and assumptions, whether conscious or unconscious.

One example of what could go wrong, reported by Reuters, was an experimental recruiting tool that Amazon decided to scrap when it was found to discriminate against women. The hiring engine was rejecting women for technical posts because it had been trained to vet applicants based on patterns in the CVs of previous candidates, who were mostly men.
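To see how this happens mechanically, consider the following minimal sketch in Python. It is purely illustrative, not Amazon’s actual system: the data are synthetic, and the ‘skill’ and ‘group’ features and the model choice are my own assumptions. A model trained on historical decisions that favoured one group learns to score identical candidates differently on group alone.

    # A minimal, synthetic sketch of bias inherited from historical data.
    # Not Amazon's system: features, data and model choice are all invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    skill = rng.normal(size=n)          # what *should* drive the decision
    group = rng.integers(0, 2, size=n)  # e.g. 0 = women, 1 = men
    # Past hiring favoured group 1 regardless of skill.
    hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

    # Train a screening model on those biased historical outcomes.
    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # Two candidates with identical skill, differing only by group:
    candidates = np.array([[0.8, 0], [0.8, 1]])
    print(model.predict_proba(candidates)[:, 1])  # group 1 scores far higher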

Caution is needed in applying machine intelligence to decisions like who is ‘the best fit’ for the job

Companies are hoping that machine learning will speed up their searches for candidates and determine who is ‘the best fit’. But there is a danger that teaching machines to search for things like the speech patterns and body language of top performers will lead to hiring the same type of people, and entrench groupthink in organisations.

Women, as well as some ethnic minorities, are poorly represented in the tech profession, and this lack of diversity has implications for how products are developed and used.

A case in point is speech recognition systems that are not able to ‘hear’ women’s voices as well as they hear men’s, so that transcriptions of female voices are much less accurate.
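One way such gaps come to light is a simple per-group accuracy audit. The sketch below, with invented transcripts and speaker labels, computes word error rate (WER) separately for each speaker group; a persistent gap between the groups is the warning sign.

    # Illustrative audit: word error rate (WER) per speaker group.
    # Transcripts and labels are invented placeholders.

    def wer(reference, hypothesis):
        """Word error rate via word-level edit distance."""
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # (speaker group, human reference transcript, system output)
    samples = [
        ("female", "send the report to the finance team",
                   "send the support to the finance team"),
        ("female", "schedule the meeting for monday",
                   "schedule the meeting for monday"),
        ("male",   "send the report to the finance team",
                   "send the report to the finance team"),
        ("male",   "schedule the meeting for monday",
                   "schedule the meeting for monday"),
    ]

    for g in ("female", "male"):
        rates = [wer(ref, hyp) for grp, ref, hyp in samples if grp == g]
        print(g, round(sum(rates) / len(rates), 3))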

There’s another area where vigilance is needed: workers’ human rights. The Guardian recently summarised many of the issues under the headline ‘Employers are monitoring computers, toilet breaks – even emotions. Is your boss watching you?’

Some employer monitoring may start out with positive intent, for example to check that employees are not being overstretched. But monitoring is becoming more intrusive, and it is easy to see how it can be used to restrict workers’ freedom and autonomy. From recording the number of keystrokes or the length of toilet breaks taken by remote workers, it is spreading to mechanisms including:

  • Sensors in helmets and hats that monitor fatigue and emotions
  • Wrist bands to monitor the productivity of warehouse workers
  • Smart badges that track who talks to whom in the office, designed to map human networks (see the sketch after this list)
  • Employees being offered Fitbit devices to monitor how active they are
  • Microchips embedded in people’s fingers to securely open doors or computers
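To make the smart-badge example concrete, the sketch below (entirely my own illustration, not any vendor’s product; the names and log entries are invented) shows how badge proximity logs could be aggregated into a map of who talks to whom:

    # Illustrative only: turning badge proximity logs into a contact map.
    from collections import Counter
    from itertools import combinations

    # Each entry: badge IDs detected near each other in one time window.
    proximity_logs = [
        {"ana", "ben"},
        {"ana", "ben", "chen"},
        {"ben", "chen"},
    ]

    edges = Counter()
    for window in proximity_logs:
        for pair in combinations(sorted(window), 2):
            edges[pair] += 1

    for (a, b), count in edges.most_common():
        print(f"{a} <-> {b}: {count} encounters")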

Are you being monitored by any of these systems?

AI can be helpful too

There is, of course, a huge number of ways in which AI is making our lives easier and safer, such as rapid fraud detection systems, and emerging computer vision technologies that can detect diseases and fractures as well as, or better than, humans can.

What about our working relationships? What needs to be done to ensure that smart machines help to build trust between people at work, rather than destroying it?

AI can help to highlight and reduce bias, if it’s developed inclusively

The good news is that artificial intelligence is being developed to highlight and minimise bias at work. One example from the media sector is a software tool developed by the Geena Davis Institute on Gender in Media using Google’s machine learning technology. The ‘GD-Inclusion Quotient’ analyses how long male and female characters are seen and heard in box office hit films. It found that not only are women under-represented in films but, when they do appear on screen, they are seen and heard far less than men.

Ironically, the institute also found that films with female lead characters made 15.8% more money on average than films led by men. And the highest-grossing films were those with male and female co-leads. So gender balance is good for the movie industry too.
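To make the approach concrete, here is a minimal sketch of this kind of analysis. It is my own illustration, not the GD-IQ’s actual implementation, and the character annotations are invented: given each character’s gender, screen time and speaking time, it compares the totals by gender.

    # Illustrative only, not the GD-IQ's implementation. Data are invented.
    characters = [
        {"gender": "male",   "screen_s": 4200, "speak_s": 1900},
        {"gender": "female", "screen_s": 2100, "speak_s": 700},
        {"gender": "male",   "screen_s": 1500, "speak_s": 600},
    ]

    totals = {}
    for row in characters:
        t = totals.setdefault(row["gender"], {"screen_s": 0, "speak_s": 0})
        t["screen_s"] += row["screen_s"]
        t["speak_s"] += row["speak_s"]

    all_screen = sum(t["screen_s"] for t in totals.values())
    all_speak = sum(t["speak_s"] for t in totals.values())
    for gender, t in totals.items():
        print(f"{gender}: {t['screen_s'] / all_screen:.0%} of screen time, "
              f"{t['speak_s'] / all_speak:.0%} of speaking time")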

Another interesting development is using virtual reality to teach empathy, by helping people better understand what life is like for others. Stanford University’s Virtual Human Interaction Lab has worked on a collaborative project to create an immersive experience that enables a white person to walk in the shoes of a black man, encountering racism at first hand.

Be on your guard, and ask challenging questions about new systems rather than just accepting them

What else can be done? The AI Now Institute has called for robust testing and auditing standards, and continuous testing of new systems to check for bias. It says the industry should hire outside experts from fields such as law, medicine, education, ethics and social science to better understand structural biases in society and workplaces.
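One concrete example of such a test is a selection-rate audit. The sketch below is my own illustration, a common screening check rather than a procedure the AI Now Institute specifically prescribes: it compares a system’s selection rates across groups and flags any group whose rate falls below four-fifths of the highest, echoing the US ‘four-fifths’ guideline.

    # Illustrative selection-rate audit; the numbers are invented.
    def disparate_impact(selected_by_group):
        """selected_by_group maps group -> (number selected, number of applicants)."""
        rates = {g: s / n for g, (s, n) in selected_by_group.items()}
        best = max(rates.values())
        for group, rate in rates.items():
            ratio = rate / best
            flag = "REVIEW" if ratio < 0.8 else "ok"
            print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} [{flag}]")

    disparate_impact({"men": (60, 200), "women": (30, 200)})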

Organisations thinking of introducing AI to assist decisions about people should analyse and check carefully how new systems work, and what the implications and side effects may be in hiring, promotions, and how projects are assigned.

As individuals, we must also be on our guard, and ask challenging questions about new technology rather than just accepting it.

Change is happening so fast that we need to take collective responsibility for shaping the future – a future that works for all humans, not one in which humans are ruled by machines.

This article was first published by IWE in January 2019.

© Alison Maitland 2019, all rights reserved
