Time to face (unintended) consequences

Posted on 24th September 2019

Written by Adam Thilthorpe, Director of Professionalism, BCS, The Chartered Institute for IT

Our lives and outlook, and those of our children, are fast being shaped by the digital world. These changes are unplanned, largely unregulated and already happening. This, warns Adam Thilthorpe, leaves us reliant on the ethical fortitude of developers.

In a digital world with ubiquitous technology, our lives are increasingly shaped by unintended consequences. Trying to get a measure of this, or even regain some semblance of control over our own lives, is proving not only difficult but a challenge that has everything to do with the very real issue of ethics in IT.

Unintended consequences in our digital world shape our physical reality. When Mark Zuckerberg and friends were kicking around the original ideas for Facebook, they simply had in mind a book of pictures of the students at Harvard – literally, a face book. Today, Facebook has grown into one of the largest corporations in the world and, it is alleged, has been used to undermine the world’s largest democracy.

We’re now raising a generation who won’t recognise a world without communal artificial intelligence. Whether it’s Apple’s Siri, or Amazon’s Alexa, parents are being confronted by AI that disrupts the natural ‘call and response’ of learnt conversation in the home to such an extent that we ask whether it’s still appropriate to teach children to say please and thank you.

Or is the opposite true? It is said that true digital natives can clearly distinguish between human interaction, simple voice recognition and even natural language understanding. But do we really believe that?

It’s not just about being polite. According to an NSPCC/Children’s Commissioner report, 40% of 11-year-olds ‘sext’, and half of 11-16 year-olds report seeing online pornography. How can that be good for the future of human interpersonal relationships?

What role do all of us, parents, educators and regulators, have? We’re seeing the daily use of biometrics at our borders and in our courts. Police forces are experimenting with AI software that can interpret images, match faces and analyse patterns of communication, all with the aim of speeding up the examination of mobiles. These are not planned changes; they are in use, here, now. Do you remember being asked if you wanted, let alone consented to, these incremental but important changes to the way that we conduct our lives? No, me neither. Yet step by technical step, we are seeing a change to the fundamental relationship between citizen and state. Instead of being presumed innocent, are we now simply all un-convicted people?

As our technologies move so quickly, public policy, legislation and our regulators inevitably lag far behind. Nowhere is that more starkly evident than in Cambridge Analytica’s rise and fall. The firm extracted data about millions of us from Facebook, used it to profile voters and then targeted users with personalised political advertising – all designed to help the Cambridge firm’s paymasters achieve their political goals, be that Brexit or the election of President Trump.

The Observer’s Carole Cadwalladr, speaking about this mass abuse of data in her TED talk, asks us to consider whether free and fair elections are a thing of the past. I’ll leave you to consider your own conclusion.

So, where does that leave us? Sadly, at the mercy of the ethical fortitude of those developers, designers, coders and makers who are forging ahead in this digital age, if not at our behest, certainly then at least with our enthusiasm for greater integration and insight.

Let’s face it, what’s more useful: online ads for a bulk buy of nappies that I’ll never click, or ads for the new road bike I’ve been promising myself? These developers, designers, coders and makers are the very people who need to understand not only the intentions and motivations, but, importantly, also the potential for unintended consequences.

IT people must be great sociologists… The chances are that, if you’re reading this, you know some or all of this already. You’ll be in the know and probably already have your own opinions about the various issues I’ve raised. That’s what I’d expect.

But the big question for me is how do those of us who work in, or at the edges of, some of this technology, raise these big, difficult questions with politicians, with civil society leaders and with the public at large?

Whose role is it to ensure that the magnitude and complexity of the world being created around us is understood? The US tech giants – not a great track record so far. Our own governments and regulators, perhaps. What about our national news media?

For me, it’s simple. We need those who work in the sector, who are developing these technologies, to understand that they owe it to their families, and to society at large, to develop within an ethical framework. With great power comes massive responsibility. Massive personal responsibility.

We need our makers, doers, coders and data analysts to think about the consequences of their work – before they put their hands on their keyboards. We all have a part to play in fostering this much needed personal responsibility. We need to create an environment where people creating these world-changing technologies can safely debate and discuss their products’ consequences. And we need to support them when they say: “No.”


Adam Thilthorpe is the Director of Professionalism at BCS, The Chartered Institute for IT, and will be speaking at the Power and Responsibility Summit on 9th October.
