In 1956, when Philip K. Dick wrote The Minority Report, the internet didn’t exist; its forebears wouldn’t appear until the following decade. But whilst the detection of ‘precrime’ in Dick’s short story relied on the powers of unfortunate mutants, we are rapidly moving into a present where big data and algorithms are expected to solve crimes. The supposedly cold rationality of computing is meant to trump our own prejudices.
And yet, it won’t.
The fear of algorithms is not exactly a new topic, but it’s one that only grows more relevant over time. Algorithms decide what news you see on Facebook – which not only pushed out valuable workers, but also doesn’t really fix underlying issues of exclusion and bias. Then there are the complaints about the exact algorithm Facebook uses to push different contacts to your newsfeed: another black box, which the company is unlikely to crack open.

The other social media titan of our time, Twitter, has also quietly pushed algorithms to shape the content we view, including one designed to ‘support conversation’ – by listing potentially controversial comments lower in a list of replies. When those controversial tweets are often more conservative, it’s unsurprising that the right cries out against media bias (try looking at a statement by Trump, and you’ll often find tweets skewering him for incompetence at the top, in spite of the dates).

Uber, which threatened to bring down the cab industry around the world before a series of corporate missteps and outright illegal acts stymied its progress, is built upon the algorithm which routes drivers to passengers, allows for the complexity of UberPool, and keeps drivers on the job longer (for the good of the company). And unseen to all of us are the advertisers who use algorithmic information to work out which ads to target us with to best effect, building up a composite image of our lives. They might not be totally accurate, but they offer far more information than any survey did before.
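The ranking behaviour described above can be sketched in a few lines. This is a hypothetical scoring scheme, not Twitter’s actual algorithm: the point is simply that once a ‘controversial’ flag carries a penalty, flagged replies sink regardless of how much engagement they attract.

```python
# Toy sketch of engagement-style reply ranking (hypothetical scoring,
# not any real platform's algorithm). Replies flagged as likely
# controversial are penalised, so they sink in the list.
replies = [
    {"text": "Great point!",        "likes": 10, "controversial": False},
    {"text": "This is incompetent", "likes": 50, "controversial": True},
    {"text": "Agreed",              "likes": 5,  "controversial": False},
]

def rank_key(reply):
    # A flat penalty outweighs raw popularity by design.
    penalty = 100 if reply["controversial"] else 0
    return reply["likes"] - penalty

ranked = sorted(replies, key=rank_key, reverse=True)
print([r["text"] for r in ranked])
# → ['Great point!', 'Agreed', 'This is incompetent']
```

The most-liked reply finishes last: whoever chooses what counts as ‘controversial’, and how heavy the penalty is, chooses what you see.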
Civilian deployment of algorithms is concerning, but manageable – an inconvenience which can be outwitted with enough time and energy. Search engines like DuckDuckGo can keep you off advertisers’ radar; as a last-ditch measure, there’s always Tor. Admittedly, staying off Facebook and Twitter is toxic for your social life (and for professions like journalism, dangerous for your work life too), but it’s not a matter of life and death.
Unlike, say, the algorithm which US Immigration and Customs Enforcement (ICE) wants to bring in, to help with tasks like “determin[ing] and evaluat[ing] an applicant’s probability of becoming a positively contributing member of society as well as their ability to contribute to national interests in order to meet the EOs outlined by the President.” If you thought that having real human beings decide whether you should be allowed into a country was worrying, imagine outsourcing that decision to an algorithm.
Assuming that it doesn’t break down – always a big assumption – the real fear lies in the coding behind it. As in the cases described above, algorithms aren’t neutral entities: they reflect the beliefs of their designers. It’s safe to assume that if ICE – an enforcement agency not known for its charitable views on immigrants – is designing something to do its job for it, its stance won’t be a liberal one.
And it doesn’t stop there: just as algorithmic job interviews are coming into practice, so is algorithmic sentencing. In theory, it offers redress through the power of big data. In practice, it amplifies the biases we practise every day, while giving authorities an excuse for their decisions: ‘computers can’t be wrong’, or so the argument goes.
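How biased history becomes a biased ‘risk score’ can be shown with a deliberately crude toy model – hypothetical throughout, and not how any real sentencing tool works. Suppose one group’s neighbourhoods were simply patrolled more heavily, so the same behaviour produced more arrests; any score that leans on group-level history then inherits that imbalance.

```python
# Toy illustration (entirely hypothetical, not a real sentencing tool):
# a "risk score" that blends an individual's record with a group-level
# average, the way features like postcode can smuggle group membership
# into a model trained on historical arrest data.
def risk_score(prior_arrests, avg_group_arrests):
    # Naive weighted sum: half individual record, half group history.
    return 0.5 * prior_arrests + 0.5 * avg_group_arrests

# Two defendants with identical individual records (one prior arrest),
# but group B's neighbourhood was policed twice as heavily.
score_a = risk_score(prior_arrests=1, avg_group_arrests=1.0)  # group A
score_b = risk_score(prior_arrests=1, avg_group_arrests=2.0)  # group B

print(score_a)  # → 1.0
print(score_b)  # → 1.5: higher "risk" purely from group history
```

Identical behaviour, different scores – and because the number comes out of a computer, it arrives with an unearned air of objectivity.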