News this week that the Ministry of Justice’s AI tool for prisoners may increase the risk of discriminatory outcomes will come as no surprise to many, but precious few people seem to be suggesting solutions to what is an emerging but potentially widespread problem. As the failures and inbuilt bias of AI tools hit the headlines once again, it is important to look not only at the problems with new tech, but also at how we utilise it to the benefit of all.
The backlash against facial recognition software, for example, is only the latest symptom of how a failure to include people in discussions and decisions about new technologies is threatening their legitimacy and the positive opportunities they could offer.
Nowhere is the risk of disconnection and disempowerment greater than in the workplace. Employers are increasingly looking to tech as the solution to their problems and the future of their business, but with little thought about how to engage or involve their workforce.
Challenges to this trajectory are now erupting within the tech sector itself, with an increase in organising activity among tech developers, from Google to the gaming industry, and among workers subject to surveillance or pushed into precarious employment by tech-enabled businesses like Uber or Amazon.
Research we commissioned from YouGov earlier this year found that 58 per cent of UK workers felt they would be locked out of any discussion about how technology would affect their jobs. No wonder many see AI as a threat rather than as something with the potential to improve their working lives. This is why Prospect is a partner in this year’s Women Leading in AI conference on accountability and trust in Artificial Intelligence. Our members are optimistic about the future of work, but concerned about the rules that will govern it. We need to get serious about fixing the culture of tech before it extends even further into our way of life. Ignoring workers in this debate is a sure-fire way of entrenching distrust and provoking opposition.
The real issue is not the technology but the power relationships behind it. These issues are familiar to unions like Prospect, but the speed of change means we urgently need to keep updating our ways of addressing them. In the last century, collective bargaining and campaigning focused on regulating human relationships and physical working conditions. We now need to understand a future in which critical relationships will be between humans, computer programs and data. The danger facing us is that AI and related technologies build in existing inequalities and insecurity, and in some cases make them worse.
A new agenda is already emerging. DIY unions and tech activism are spreading in America. Precarious workers are fighting back against exploitation by platform companies. Our friends in the GMB are working with unions worldwide to organise Amazon warehouse workers. At an international level we have been working through Uni Global Union, our international federation, on privacy and worker-focused AI rules, as well as using new tech to empower employees. This month, as an alternative to employer-controlled surveillance and monitoring, we are piloting a new app, a bit like a Fitbit for workers, that allows employees to collect their own data on working patterns and pressures. In the UK we are working with the Institute for the Future of Work on how the Equality Act can be used to tackle discrimination in algorithms and machine learning, and with the Fabian Commission on Workers and Technology on how we ensure automation is used to benefit everyone.
The UK has an opportunity to benefit from early adoption of technologies like AI. But if we don’t talk about power and the imbalances it creates in work and society, then we won’t get ethics right or start to deal with distrust. There are four principles that should define our approach.
First, worker voice and co-operation – so that those developing, using and affected by new technologies have a real say over their purpose, design and implementation. AI ethics needs to extend beyond the boardroom and actively draw on the experiences of workers. Unions are leading the way with New Technology Agreements and with growing attention to issues such as transparency and data ownership in their bargaining agendas.
Second, a new focus on the social benefits of technology – because a narrow focus on technical intricacies (or commercial applications) will feel remote and alienate people from the solutions technology can bring.
Third, we need to hear much, much more about job transformation, so that workers are at the centre of the debate about the transition to a new economy. The government’s Industrial Strategy singularly fails to include workers in its plans. Yet nearly two-thirds of CEOs recently surveyed by PwC recognised the need for a national strategy on AI and work, one the state must play a key role in developing.
Finally, we need a national framework that all social partners can buy into, setting out the national policy needed to develop innovative, transformative technology that is ethically responsible and socially beneficial. That should include employee and trade union representation on the board of the AI Council, and ensuring worker voice is part of the work of bodies like the ICO and the Centre for Data Ethics. New EU Commissioner Margrethe Vestager is already talking of plans for tougher regulation of big tech and ethical rules for AI. The EU social partnership model means that workers will be involved in these policy discussions. If the UK is to leave the EU, we must look to at least match this commitment, not assume we can win by cutting workers out of the conversation.
There is a saying in the equality movement that is apt here: nothing about us, without us. Imposing change rarely gets the best outcomes. Taking people with you always gets you further.
Andrew Pakes is Research Director at Prospect Union. He tweets @andrew4mk