AI has huge potential – but it won’t solve all our problems
Governments are now competing in a technological and rhetorical arms race to dominate the burgeoning machine learning sector. Image: REUTERS/Charles Platiau
Hysteria about the future of artificial intelligence (AI) is everywhere. There is no shortage of sensationalist news about how AI can cure diseases, accelerate human innovation and improve human creativity. From the headlines alone, you would think we already live in a future where AI has infiltrated every aspect of society.
While AI has opened up a wealth of promising opportunities, it has also led to a mindset that can best be described as "AI solutionism". This is the attitude that, given enough data, machine learning algorithms can solve all of humanity’s problems.
There is a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.
In only a few years, AI solutionism has spread from Silicon Valley’s technology evangelists to government officials and policymakers around the world. The pendulum has swung away from the dystopian notion that AI will destroy humanity, towards the utopian belief that our algorithmic saviour has arrived.
Governments are now pledging support to national AI initiatives and competing in a technological and rhetorical arms race to dominate the burgeoning machine learning sector. The UK government has vowed to invest £300m in AI research, to position itself as a leader in the field. Enamoured with the transformative potential of AI, French president Emmanuel Macron has committed to turning France into a global AI hub.
Meanwhile, the Chinese government is increasing its AI prowess with a national plan to create a Chinese AI industry worth $150 billion by 2030. Many countries are hoping to dominate the Fourth Industrial Revolution. AI solutionism is on the rise, and it is here to stay.
While many political manifestos tout the transformative effects of the looming "AI revolution", they tend to understate the complexity around deploying advanced machine learning systems in the real world. There are limits to what AI can do, and they are linked to how machine learning actually works.
One of the most promising AI techniques is the neural network. This form of machine learning is loosely modelled on the neuronal structure of the human brain, but at a much smaller scale. Many AI-based products use neural networks to infer patterns and rules from large volumes of data. But what many politicians do not understand is that simply adding a neural network to a problem does not automatically create a solution.
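To make this concrete, the sketch below shows what "inferring patterns from data" amounts to in practice. It is a minimal illustration, assuming scikit-learn as the toolkit (the article names no specific libraries): a small neural network learns to separate two classes purely from labelled examples, with no hand-written rules.

```python
# A minimal sketch of a neural network inferring a pattern from data.
# Assumes scikit-learn; the toy dataset and settings are illustrative only.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A toy dataset: two interleaving half-moons the network must learn to separate.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network. It picks up the pattern from examples alone;
# no one writes down the decision rule by hand.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is also its limitation: the network is only as good as the data it is given, which is exactly why bolting one onto an institution changes nothing by itself.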
If policymakers start deploying neural networks left, right and centre, they should not assume that AI will instantly make our government institutions more agile or efficient. Simply adding a neural network to a democracy will not suddenly make it more inclusive, fairer or more personalised.
Consider, by way of analogy, turning a physical shopping mall into a company such as Amazon. Simply launching a website is not enough to become an internet company; the data, logistics and organisational infrastructure behind it must change as well.
AI systems need a lot of data to function. But the public sector typically does not have the appropriate data infrastructure to support advanced machine learning. Most of its data remain stored in offline archives. The few digitised sources of data that exist tend to be buried in bureaucracy. More often than not, data are spread across different government departments, each requiring special permission to be accessed. Above all, the public sector typically lacks the human talent with the right technological capabilities to reap the full benefits of machine intelligence.
For these reasons, the sensationalism around AI has attracted many critics. Stuart Russell, a professor of computer science at Berkeley, has long advocated a more realistic approach to neural networks, focusing on simple everyday applications of AI instead of the hypothetical robot takeover by a super-intelligent AI. Similarly, MIT’s professor of robotics, Rodney Brooks, writes that "almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine". Real progress is painful and slow. And in the case of AI, it requires a lot of data.
Furthermore, one of the many difficulties in deploying machine learning systems is that AI is extremely susceptible to adversarial attacks: maliciously crafted inputs, sometimes generated by another AI, that force a model to make wrong predictions or to behave in a certain way. Many researchers have warned against rolling out AI without appropriate security standards and defence mechanisms. Still, AI security remains an often-overlooked topic in the political rhetoric of policymakers.
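The sketch below illustrates the idea in miniature, under stated assumptions: a toy linear classifier standing in for a deployed model, and a NumPy-only perturbation in the style of the fast gradient sign method. None of this refers to any specific real system.

```python
# Minimal sketch of an adversarial perturbation against a toy linear
# classifier, using only NumPy. The model and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a hypothetical "trained" model
x = rng.normal(size=100)   # a legitimate input

def predict(v):
    return 1 if w @ v > 0 else 0

# Find the smallest uniform nudge to every feature that flips the decision.
# Spread across many features, each individual change is tiny.
score = w @ x
epsilon = (abs(score) + 1e-3) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(f"per-feature perturbation: {epsilon:.4f}")
print(f"prediction before: {predict(x)}, after: {predict(x_adv)}")
```

Because the nudge is distributed across many features, each individual change can be far too small for a human to notice, which is part of what makes such attacks hard to defend against.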
If we are to reap AI’s benefits and minimise its potential harms, we must start thinking about how machine learning can be meaningfully applied to specific areas of government, business and society. This means we need to have a discussion about AI ethics, and about the distrust that many people have towards machine learning.
Most importantly, we need to be aware of AI's limitations, and of where humans still need to take the lead. Instead of painting an unrealistic picture of the power of AI, it is important to take a step back and separate AI’s actual technological capabilities from magical fairy dust. Machine learning is neither magic, nor the solution to everything.
Even Facebook recently accepted that AI is not always the answer. For a long time, the social network believed that problems such as the spread of misinformation and hate speech could be algorithmically identified and stopped. But under recent pressure from legislators, the company quickly pledged to replace its algorithms with an army of more than 10,000 human reviewers.
The medical profession has also recognised that AI cannot be treated as a panacea. The IBM Watson for Oncology programme was an AI system meant to help doctors treat cancer. Even though it was developed to deliver the best recommendations, human experts found it difficult to trust the machine. As a result, the programme was abandoned in most of the hospitals where it was trialled, despite significant investment in the technology.
Similar problems arose in the legal domain when algorithms were used in American courts to sentence criminals. An opaque algorithm calculated risk assessment scores to determine the likelihood that a defendant would commit another crime. The AI was designed to help judges make more data-driven decisions in court. However, the system was found to amplify structural racial discrimination, resulting in a major backlash from legal professionals and the general public.
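A hedged sketch of the underlying mechanism, on entirely synthetic data: if historical records over-report reoffending for one group, a model trained on those records will score that group as higher-risk even when it never sees the protected attribute directly, only a correlated proxy. Everything below — the group variable, the proxy, the numbers — is hypothetical.

```python
# Minimal sketch of how a risk model trained on biased historical data
# reproduces that bias. All data here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0/1: a protected attribute
true_risk = rng.normal(size=n)      # the behaviour we wish we could measure

# Biased historical records: identical behaviour is recorded as
# "reoffending" more often for group 1.
recorded = (true_risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

# The model never sees the protected attribute directly, only a proxy
# (e.g., postcode) that correlates with it.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([true_risk + rng.normal(scale=0.5, size=n), proxy])

model = LogisticRegression().fit(X, recorded)
scores = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.2f}")
```

By its own accuracy metric the model "works", yet it systematically reproduces the bias baked into its training labels — the pattern the court systems described above ran into.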
These examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence.
In an almost prophetic treatise published in 1964, the American philosopher Abraham Kaplan described this tendency as the "law of the instrument". He formulated it as follows: "Give a small boy a hammer, and he will find that everything he encounters needs pounding".
Only this time, the mentality of Kaplan’s small boy is shared by influential world leaders, and the AI hammer is not only very powerful, but also very expensive. The crucial lessons for everyone aiming to boost investments in national AI programmes are these: all solutions come with a cost, and not everything that can be automated should be.
We cannot afford to sit and wait until we reach general-level sentient AI in the distant future. Nor can we rely on narrow AI to solve all our problems for us today. We need to solve them ourselves, and actively shape AI systems to help us in this monumental task.