“I’m sorry, Bill; I’m afraid we can’t do that”*

Crossing my Twitter feed quite a lot in the last few days have been snippets of commentary contemplating the future of work in the context of the growing applications of artificial intelligence. Frequently, the recent debate uses the somewhat ancient terminology of robots, but the focus of the analysis is mostly the same: robots have already stolen the futures of much of the now left-behind working class; and they are now coming again to steal the futures of much of the middle class, leaving behind in employment only artists, carers and supervisors. (And, probably, (Polish) plumbers.) The result of that is, of course, sheer panic among the chattering classes, many of whom were either fairly silent first time round or otherwise insistent that people had simply to adapt to market forces and get on with it.

The spark for these thoughts here in this post was firstly a brief report of UNI Global Union’s contribution to the Trade Union Advisory Committee of the OECD (hat-tip: Denise McGuire), which was recently considering the issue under the somewhat more sophisticated title of ‘digitalisation and the future of work’; together with a thoughtful post on ToUChstone from the TUC’s own Tim Page (hat-tip: Sue Ferns) (and building on top of Helen Nadin’s earlier series of posts).

The threat posed to employment by new technology is of course a very real one – even if it is not one that is particularly new. Trade unions have been grappling with the application in the workplace of new, computer-aided technology, initially in manufacturing industry, since at least the 1960s. The call to ensure a just transition is not only reasonable and entirely rational, as Page argues, but is also likely to continue to dominate the approach typically adopted by trade union negotiators faced with local arguments for change.

Whether the threats now being posed by AI represent a quantitatively different series of scythes through our employment base and structure from anything we have seen before of course remains to be seen. I’m a little sceptical: capitalism is, by force of necessity, endlessly creative at establishing new forms of work (and, indeed, so are workers) and has been since the days of the Luddites and Captain Swing; the list of jobs unheard of ten years ago is fairly legendary (in the World Economic Forum’s list or that of many others) and, of course, all these robots will need servicing and maintaining, not least to prevent them from going wrong. And software can, as we all know, be notoriously buggy. Some future jobs will be very well-paid, others less so – pretty much as now – but I am less inclined to share the fairly apocalyptic vision that this level of disruption will lead to mass unemployment and bankrupt states.

Enter Bill Gates, with his ‘robot tax’. To be fair, though, it’s not just Bill, as MarketWatch‘s excoriating and mostly on-the-button review illustrates. Gates’s concern is really two-fold: to slow down the process of automation; and to prevent the process of automation becoming discredited. The obvious news on the first is that ‘well, you can’t’, although I am with him a bit more on the second. But a robot tax is not the right solution. That it’s so against the zeitgeist in the UK and in the US, among others, is neither here nor there in terms of its value as a policy prescription, although this does reduce its likely potential for adoption; the key here actually lies in persuading the likes of Google and Amazon to pay their fair share of the current tax take, rather than be endlessly creative around the tax laws, as well as in persuading right-wing governments not to engage in tax competition policies. (If only there was an international bloc to which we could belong that made tackling both of these a little easier: you know, like a Union of Europe, or something.) Secondly, automation should lead to improved productivity, and the UK needs a lot more of that, so anything that has the potential to inhibit investment has to be rejected; here, the major policy issue lies in narrowing the growing gap between wages and productivity and in addressing the share of national income taken by wages. In short, ending inequality. And thirdly, taxing a robot for taking someone’s job – and precisely how difficult would that be in the detail? – tends to encourage the workers affected to place the blame for that job loss on the robot, rather than on the manager who has actually taken the decision to automate it.

Applications of new technology in the UK have, as they were supposed to, led to a continuing reduction in working time – at least, at the average level. What has happened is that this reduction has seen increasingly precarious forms of work introduced for some workers (involuntary part-time working; bogus ‘freelance’ employment or self-employment), while others, in the ‘core’, tend to be working even harder, and longer. The rewards of lower working time have not only been unfairly distributed; management has also found a way to make that reduction actually feel like a penalty, imposed both on those who have too little work and on those who have enough of it. There is a debate to be had on the introduction of a basic income, such that the rewards that automation has brought are better distributed (and, indeed, valued). And, of course, workers in precarious forms of employment need to be better protected – which includes treating those who are clearly workers as such.

The question nevertheless remains of how to ensure a just transition.

Firstly, and remembering that people in cities in northern England feel that they have been ‘left behind’ substantially because there was no serious, concerted attempt to deal with the impact of manufacturing job loss in the 1980s, we need a proper national industrial strategy which approaches digitalisation recognising the benefits of automatisation but which also systematically attempts to deal with its impact. The lesson we should be learning about areas like Stoke and Copeland is that it is the market solutions that we tried in the 1980s and 1990s that do not work. It is precisely the market, not politicians, that has left people behind (and if people need any arguments about the disconnect between people and the policy process, just look at the turnout in Stoke – just 36.7%). Reinvesting in areas of decline will take money, and substantial amounts of it – this is, of course, one of the arguments for the uses to which a ‘robot tax’ could be put, although the drawbacks sketched above still lead me away from it.

Secondly, the collective, societal issues sparked by automatisation require collective solutions. Individual responses often lead to the expressions of political frustration that we are seeing, because individual voices appear incoherent. Consequently, we need to find ways of re-collectivising our society around establishing a meaningful and coherent social dialogue on the variety of issues raised by digitalisation. At company level, this means a re-focus on establishing proper collective bargaining in the interests of a fairer workplace; and it probably means worker directors, in the form perfectly encapsulated in the fifth paragraph of Janet Williamson’s piece for ToUChstone (and nothing other than this). At national level, establishing collective social dialogue in the interests of a fairer society means changing the language around trade unions, such that effective industrial action is not immediately demonised by the government either in parliament or in terms of reaching for the statute book; and it means inviting trade union leaders into specific dialogue, with a view not just to listening but to reaching agreement. Brexit, and the plethora of issues that will be raised once the process of withdrawal has been triggered, represents an important test of the realism of the government’s intentions in this respect.

Giving effective voice to people demands that we listen, however uncomfortable that might be and however inconvenienced we might be by it. The alternative – around automatisation as well as any other aspect of the national dialogue that we might consider – is that we create (or that we entrench) pathways for nationalism and for extremism.

 

* Of course, an adapted quote from HAL 9000, the computer whose sentience continues to influence our thoughts and fears about the dangers of AI.