
Humans need not apply

Will automation, AI and robotics mean a jobless future, or will their productivity free us to innovate and explore? Is the impact of new technologies to be feared, or a chance to rethink the structure of our working lives and ensure a fairer future for all?


On googling ‘will a robot take my job?’ I find myself on a BBC webpage that invites me to discover the likelihood that my work will be automated in the next 20 years. I type in ‘editor’. “It’s quite unlikely, 8%” comes back. Quite reassuring – but, coming from a farming family, it’s a sobering moment when I type in ‘farmer’: “It’s fairly likely, 76%”.

The results may well be out of date – such is the swiftness of change in labour market predictions – but the fact that the webpage even exists says something about the focus of many of today’s conversations around the future of work.

Many of the discussions are driven by stark numbers. According to a scenario suggested recently by consultancy McKinsey, 75–375 million workers (3–14% of the global workforce) will need to switch occupational categories by 2030, and all workers will need to adapt “as their occupations evolve alongside increasingly capable machines”.

Just recently, online retailer Shop Direct announced the closure of warehouses and a move to automation, putting nearly 2,000 jobs at risk. Automation – or ‘embodied’ artificial intelligence (AI) – is one aspect of the disruptive effects of technology on the labour market. ‘Disembodied AI’, like the algorithms running in our smartphones, is another.

Dr Stella Pachidi from Cambridge Judge Business School believes that some of the most fundamental changes in work are happening as a result of ‘algorithmication’ of jobs that are dependent on information rather than production – the so-called knowledge economy.

Algorithms are capable of learning from data to undertake tasks that previously needed human judgement, such as reading legal contracts, analysing medical scans and gathering market intelligence.

“In many cases, they can outperform humans,” says Pachidi. “Organisations are attracted to using algorithms because they want to make choices based on what they consider is ‘perfect information’, as well as to reduce costs and enhance productivity.”

But these enhancements are not without consequences, says Pachidi, who has recently started to look at the impact of AI on the legal profession.

“If routine cognitive tasks are taken over by AI, how do professions develop their future experts?” she asks. “Expertise and the authority it gives you are distributed in the workplace. One way of learning about a job is ‘legitimate peripheral participation’ – a novice stands next to experts and learns by observation. If this isn’t happening, then you need to find new ways to learn.”

Another issue is the extent to which the technology influences or even controls the workforce. For over two years, Pachidi was embedded in a telecommunications company. There she observed “small battles” playing out that could have vast consequences for the future of the company.

“The way telecoms salespeople work is through personal and frequent contact with clients, using the benefit of experience to assess a situation and reach a decision. However, the company had started using a data analytics algorithm that defined when account managers should contact certain customers about which kinds of campaigns and what to offer them.”

The algorithm – usually built by external designers – often becomes the curator of knowledge, she explains. “In cases like this, a myopic view begins to creep into working practices whereby workers learn through the ‘algorithm’s eyes’ and become dependent on its instructions. Alternative explorations – the so-called technology of foolishness, where innovation comes out of experimentation and intuition – are effectively discouraged.”

Pachidi and colleagues have even observed the development of strategies to ‘game’ the algorithm. “Decisions made by algorithms can structure and control the work of employees. We are seeing cases where workers feed the algorithm with false data to reach their targets.”

It’s scenarios like these that many researchers in Cambridge and beyond are working to avoid by increasing the trustworthiness and transparency of AI technologies (see issue 35 of Research Horizons), so that organisations and individuals understand how AI decisions are made.

In the meantime, says Pachidi, in our race to reap the undoubted benefits of new technology, it’s important to avoid taking a laissez-faire approach to algorithmication: “We need to make sure we fully understand the dilemmas that this new world raises regarding expertise, occupational boundaries and control.”

While Pachidi sees changes ahead in the nature of work, economist Professor Hamish Low believes that the future of work will involve major transitions across the whole life course for everyone: “The traditional trajectory of full-time education followed by full-time work followed by a pensioned retirement is a thing of the past.”

“Disruptive technologies, the rise of the ad hoc ‘gig economy’, living longer and the fragile economics of pension provision will mean a multistage employment life: one where retraining happens across the life course, and where multiple jobs and no job happen by choice at different stages.”

His research examines the role of risk and the welfare system in relation to work at these various life stages. “When we are talking about the future of work,” he says, “we should have in mind these new frameworks for what people’s lives will look like, and prepare new generations for a different perspective on employment.”

On the subject of future job loss, he believes the rhetoric is based on a fallacy: “It assumes that the number of jobs is fixed. If, in 30 years, half of 100 jobs are being carried out by robots, that doesn’t mean we are left with just 50 jobs for humans. The number of jobs will increase: we would expect there to be 150 jobs.”

Dr Ewan McGaughey, at Cambridge’s Centre for Business Research and King’s College London, agrees that “apocalyptic” views about the future of work are misguided. “It’s the laws that restrict the supply of capital to the job market, not the advent of new technologies, that cause unemployment.”

His recently published research answers the question of whether automation, AI and robotics will mean a ‘jobless future’ by looking at the causes of unemployment. “History is clear that change can mean redundancies – after World War II, 42% of UK jobs were redundant, but social policy maintained full employment. Yes, technology can displace people. But social policies can tackle this through retraining and redeployment.”

He adds: “The big problem won’t be unemployment; it will be underemployment – people who want to work but can’t because they have zero-hours contracts. If there is going to be change to jobs as a result of AI and robotics then I’d like to see governments seizing the opportunity to improve policy to enforce good job security. We can ‘reprogramme’ the law to prepare for a fairer future of work and leisure.”

This might mean revitalising fiscal and monetary policies such as universal social security and taxing the owners of robots.

McGaughey’s findings are a call to arms to leaders of organisations, governments and banks to pre-empt the coming changes with bold new policies that ensure full employment, fair incomes and a thriving economic democracy.

“The promises of these new technologies are astounding. They deliver humankind the capacity to live in a way that nobody could have once imagined,” he adds. “Just as the industrial revolution brought people past subsistence agriculture, and the corporate revolution enabled mass production, a third revolution has been pronounced. But it will not only be one of technology. The next revolution will be social.”

Read more about our research on the topic of work in the University’s research magazine, Research Horizons.

Linking research to policy makers

Dr Koen Jonkers is at the Joint Research Centre (JRC) – the European Commission’s science and knowledge service in Brussels – and is also a policy fellow at Cambridge’s Centre for Science and Policy (CSaP).

Over the past few months, Jonkers has been discussing the future of work with academic experts in Cambridge as part of his research for a special JRC report aimed at providing evidence for the European Commission’s employment and social affairs policies.

“Among the megatrends that will affect the future of work – an ageing workforce, migration, globalisation, urbanisation, and so on – the impact of technology is one where we seem to be witnessing a step change in the relationship that many people have with their work,” says Jonkers, who is one of the scientists employed by the JRC to provide independent scientific advice and support to EU policy.

“Some people have said there will be a major shock in terms of joblessness. Others that it is part of a trend that is ongoing and that it will bring opportunity. We want to give an overview of all the viewpoints, to analyse how well societies are equipped to deal with change, to mitigate potential adverse consequences, and to come up with an idea of what is likely to happen.

“As well as reskilling and upskilling current workers, governments will be keen to look at anticipatory actions to prepare young people to have a different type of work life to that of their parents and grandparents, so that they will be used to a world where people and machines work together.”

The mission of CSaP is to improve public policy – in the UK and Europe – through the more effective use of evidence and expertise. “Through the CSaP Fellowship, it’s been very refreshing to talk with people with very high levels of expertise in fields other than my own,” says Jonkers. “In such a multifaceted area as the future of work, it’s been important for me to have expert analysis of the themes that are playing out.”
