There is a lot of hype about artificial intelligence (AI).
We need to step back and face up to the critical issues that will shape the future economy and our lives.
Here we draw on key messages from our recent book Beyond Genuine Stupidity – Ensuring AI Serves Humanity to highlight five of the most critical issues and the choices facing governments, businesses, society, and individuals as we prepare for AI’s impact on us all.
1. Technological Unemployment and New Jobs
The AI technology vendors are struggling to hold a consistent line. On the one hand, they are selling the return-on-investment case for AI – predicated on headcount reductions. However, as this is a contentious issue, they are now also arguing the “augmented intelligence” angle. The new line is that AI will free people from routine tasks to do more creative, problem-solving work. Whilst this is attractive, the evidence to date suggests most employers are going for cost-base reduction.
Some evangelists argue that AI will create a host of new jobs and industries. Whilst this is possible, the majority of new jobs will require at least degree-level education. Many new businesses will be highly automated, and there could be a major time lag between bank staff and truckers being made redundant and the new jobs being created.
The challenge for governments is to model a range of scenarios, including extreme ones. From this, they can start assessing the tax implications of different levels of unemployment, explore policy options they might pursue, and identify immediate actions they should be taking because they are valid under all scenarios.
2. Reskilling and Education
For adults, in most countries, the provisions for retraining and lifelong learning are woeful. However, the facilities already exist in schools and colleges, and there is no shortage of people who can deliver training. Exponential change requires an exponential increase in provision for retraining – the cost of inaction will be higher unemployment, rising mental health issues, and skill shortages.
For schools, we need to take a hard look at the assumptions underpinning current curriculums. For primary school children, the bulk of the jobs they’ll do probably don’t exist yet. Hence, we need to equip them with skills that will allow them to take up these new opportunities when they arise. This means a far greater emphasis on social and collaborative skills, conflict resolution, problem-solving, and scenario thinking.
3. Universal / Guaranteed Basic Incomes
There will inevitably be employment casualties from automation. How will people be able to afford the goods and services produced by the machines if they no longer have jobs?
Many have argued for a guaranteed basic income (GBI) across society – one that pays a living wage to everyone, at a rate typically higher than unemployment benefit. Countries around the world, from Canada and Finland to India and Namibia, have been experimenting with different models for GBI.
Governments need to work together to try different experiments and see the impacts on funding costs, economic activity, the shadow economy, social wellbeing, crime, domestic violence, and mental health. The experiments will provide evidence on which to base policy decisions when the need for action arises.
4. New Responsibilities for Employers
Many potential issues around the introduction of AI and other disruptive technologies will arise from the choices made by employers. Will they retain staff freed up by technology or release them to make higher profits?
If unemployment costs rise, or GBI schemes are introduced – who will pay for them? One option is the introduction of “robot taxes”, where firms effectively pay a higher rate of taxes on the profits they derive from increased automation.
Opponents of GBI schemes and robot taxes have yet to offer substantive alternative policy options for what is likely to be a genuine issue.
Large employers and governments need to think about viable policy alternatives for a world where we might need fewer workers.
5. Ethics, Governance, and Ownership of the Technology
Should the evolution of AI be left to the private sector? Voluntary ethical charters are starting to emerge to govern the development and application of AI and robotics. The challenge here is that AI is recognised as a critical future technology by leading industrial nations. Hence it has become an economic battleground and the focus of a race for AI superpower status. In response, there is a growing argument for state regulation and oversight of AI.
Given the challenges, an option being put forward is for governments to nationalise the ownership of AI intellectual property and then license it back to the firms that deploy it. In this way, governments could regulate the deployment and raise revenues to cover the expected social costs discussed above.
AI is advancing at such a rate that identifying the future implications and impacts is getting beyond the reach of governments and businesses. There are difficulties, but we need to take an enlightened and forward-thinking approach. This means beginning to seriously analyse and assess the most radical possible outcomes. We need to develop policy options for the worst-case scenarios, and take actions now that we know will be beneficial to humanity however the game of AI plays out in the long run.
About the authors
Rohit Talwar, Steve Wells, Alexandra Whittington, April Koury, and Helena Calle are futurists with Fast Future – a professional foresight firm specializing in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com