AI will take away lots of jobs. And we are nowhere near ready to replace them
A new report finds that much swifter action is needed to prepare workers for the jobs of the future.
The scale of the challenge that automation poses to the jobs market needs to be met with much stronger action to upskill the workforce, finds a new report published by a committee in the UK Parliament.
The House of Lords’ select committee on artificial intelligence raised concerns about the “inertia” that is holding the country back on digital skills, and urged the government to take steps to ensure that people have the opportunity to reskill and retrain, so they can adapt to the changing labor market that AI is bringing about.
Citing research carried out by Microsoft, the committee stressed that only 17% of UK employees say that they have been part of reskilling efforts, which sits well below the global average of 38%.
Microsoft also recently reported that almost 70% of business leaders in the UK believe their organization currently has a digital skills gap, and that two-thirds of employees feel they do not have the digital skills needed to fill new and emerging roles in their industry.
Even basic digital skills are lacking: a recent Lloyds Bank survey found that 19% of individuals in the UK couldn’t complete tasks such as using a web browser.
For the past three years, the government has been offering a national retraining scheme, which aims to upskill UK citizens partly in response to automation. The scheme has been piloted in six areas of the country, and up to 3,600 people have had access to the program – but the Lords committee described the results as insufficient. “The pace, scale and ambition of the scheme does not match the challenge facing many people working in the UK,” reads the report, before recommending that the government “move much more swiftly”.
Wendy Hall, a professor of computer science at the University of Southampton, who provided evidence to the Lords for the report, said that the UK is currently “nowhere near ready” when it comes to building up the skills that are necessary to mitigate the impact of automation on jobs.
Meanwhile, the report found, AI adoption is growing at a fast pace: UK investment in AI jumped from £245 million ($326 million) in 2015 to £1.3 billion ($1.73 billion) in 2019. Automated systems are now prevalent in many industries, ranging from agriculture and healthcare through to financial services, retail and logistics.
The COVID-19 pandemic has only accelerated the adoption of automated systems in industry: research carried out by the World Economic Forum (WEF) this year showed that 80% of decision-makers around the world are now planning to speed up the automation of their work processes. The technology is expected to displace roles such as data entry clerks, accountants and factory workers.
Michael Wooldridge, professor of computer science at the University of Oxford, who also contributed to the select committee’s new report, told ZDNet: “Certainly some jobs will be lost, and many more will be created. The difficulty is that the jobs created are not necessarily in the same place as those lost.”
“It is not an AI-specific problem,” he continued. “Technology evolves at a rapid pace, and this is about technology skills generally. Retraining and upskilling are issues that will play out over the coming decades.”
KEEPING AUTOMATED SYSTEMS IN CHECK
The new report comes two years after the select committee made a series of recommendations to the UK government to ensure that AI systems are built responsibly. Since then, a number of actions have been taken to maintain ethical standards in AI; for example, the committee hailed the establishment of advisory bodies such as the Centre for Data Ethics and Innovation (CDEI).
Warning against “complacency,” however, the committee’s report pointed to the imperative of turning ethical frameworks into reality. Transparency is a particular point of contention, and it will be key to securing the public’s trust as algorithms are increasingly used to make decisions about citizens’ lives.
Failure to inform the public about the use of their data will inevitably slow the progress of AI, as challenges are raised against the technology. The UK’s track record in this space is questionable: a survey carried out by the Department for Business, Energy and Industrial Strategy (BEIS) earlier this year found that only 28% of respondents felt positive about AI. The government, said the committee, must start actively explaining how data is being used, rather than expecting the public to find out for themselves.
The report stressed that to enable better deployment of AI, the government should urgently appoint a dedicated leader. In 2017, the government announced that it would recruit a new Chief Data Officer (CDO) by 2020 to champion the use of data in public services; the role remains vacant. Highlighting that better coordination starts at the top, the select committee recommended that “immediate steps” be taken to appoint the CDO.
Another role, that of Government Chief Digital Officer (GCDO), was also announced over a year ago to lead the digital transformation of public services; similarly, no appointment has been made. The issue of leadership, however, is critical. “In a time like this, when things keep changing so quickly, it is absolutely essential that we have very clear leadership around these issues,” said Wooldridge.
Without an AI leader with a strong plan to deploy the technology efficiently and responsibly, the UK is at risk of falling behind other countries in the “race for AI”. The select committee’s report noted that the UK is currently third on the global AI index, which ranks countries based on eight different factors; looking at those factors more closely, however, shows deficiencies that need resolving.
While the UK is top of the league for AI operating environments and research, for example, the country is only seventh for government strategy, eighth for infrastructure and eleventh for development.
Wooldridge, for his part, believes that the UK is less internationally competitive on AI than it was two years ago. This is partly because other countries are investing heavily in the technology, and partly because leaving the EU could make the UK less attractive to international talent.
“I feel very comfortable about the quality of the technology we have, and the talent base for AI in the UK,” he says. “But if you graduate in AI from a leading university, you could go anywhere, and it’s our attractiveness as a home for international talent that I worry about.”
Scientists are concerned, for example, that they might not be able to apply for funding from the European Research Council (ERC) after Brexit, explained Wooldridge, who said that in his personal experience some scientists had already relocated to continental Europe because of the “tone of the debate” around the decision to leave the EU.