Why we need a legal definition of artificial intelligence
When we talk about artificial intelligence (AI) – which we have done a lot recently, including in my outline of liability and regulation issues on The Conversation – what do we actually mean?
AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.
This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation automated weapons systems (AWSs) are treated under the laws of war.
True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere, so in the absence of any current regulatory definition, let’s get the ball rolling.
Defining the terms: artificial and intelligence
For regulatory purposes, “artificial” is, hopefully, the easy bit. It can simply mean “not occurring in nature or not occurring in the same form in nature”. Here, the alternative given after the “or” allows for the possible future use of modified biological materials.
This, then, leaves the knottier problem of “intelligence”.
From a philosophical perspective, “intelligence” is a vast minefield, especially if treated as including one or more of “consciousness”, “thought”, “free will” and “mind”. Although traceable back to at least Aristotle’s time, profound arguments on these Big Four concepts still swirl around us.
In 2014, seeking to move matters forward, Dmitry Volkov, a Russian technology billionaire, convened a summit of leading philosophers – including Daniel Dennett, Paul Churchland and David Chalmers – on board a yacht.
Perhaps unsurprisingly, no consensus was reached, and Chalmers suggested that it was unlikely to emerge within the next century.
Fortunately for would-be regulators, though, the philosophical arguments might be sidestepped, at least for a while. Let’s take a step back and ask: what is a regulator’s immediate interest here?
I would say that it is the work products of AI scientists and engineers, and any public welfare or safety risks that might arise from those products.
Logically, then, it is the way that the majority of AI scientists and engineers treat “intelligence” that is of most immediate concern.
Intelligence and the AI community
Until the mid-2000s, there was a tendency in the AI community to contrast artificial intelligence with human intelligence – an approach that merely passed the buck to psychologists.
In November 2007, John McCarthy, an AI pioneer at Stanford University, addressed this issue:
Q: Isn’t there a solid definition of intelligence that doesn’t depend on relating it to human intelligence?
A: Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
It was partly because of this difficulty that much research effort had been redirected from artificial general intelligence (AGI) to artificial narrow intelligence (ANI).
But just after McCarthy wrote this, a general “human-independent” definition of “intelligence” emerged. Alongside the formalisation of a universal algorithmic entity called “AIXI”, Marcus Hutter (now at ANU) and Shane Legg (now at Google DeepMind) proposed the following informal definition to supersede those that they had previously catalogued:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
This informal definition signposts things that a regulator could manage: establishing and applying objective measures of an entity’s ability (as defined) in one or more environments (as defined). The core focus on the achievement of goals also elegantly covers other intelligence-related concepts such as learning, planning and problem solving.
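To make that concrete, here is a minimal sketch in Python of how a regulator-style test harness might operationalise the idea: run an agent in several defined environments, score its goal achievement in each on a scale of 0 to 1, and weight simpler environments more heavily (a crude stand-in for the complexity-weighted prior in Legg and Hutter’s formal measure). Every name, task and number here is a hypothetical illustration, not anything proposed by Hutter, Legg or any regulator.

```python
from typing import Callable, Dict

# Toy interfaces: an agent maps an observation to an action, and an
# environment runs an agent and returns a goal-achievement score in [0, 1].
Agent = Callable[[str], str]
Environment = Callable[[Agent], float]

def legg_hutter_style_score(
    agent: Agent,
    environments: Dict[str, Environment],
    complexity: Dict[str, int],
) -> float:
    """Weighted goal-achievement across environments.

    Mirrors the shape of Legg and Hutter's formal measure (a sum over
    environments of 2^-K(env) * V(agent, env)), with a hand-assigned
    integer 'complexity' standing in for Kolmogorov complexity K,
    which is uncomputable in general.
    """
    total = 0.0
    for name, env in environments.items():
        weight = 2.0 ** (-complexity[name])  # simpler environments count more
        total += weight * env(agent)
    return total

# Toy usage: one agent, two single-step tasks.
def echo_agent(observation: str) -> str:
    return observation  # repeats whatever it sees

def copy_task(agent: Agent) -> float:
    return 1.0 if agent("ping") == "ping" else 0.0  # goal: reproduce input

def reverse_task(agent: Agent) -> float:
    return 1.0 if agent("abc") == "cba" else 0.0  # goal: reverse input

score = legg_hutter_style_score(
    echo_agent,
    environments={"copy": copy_task, "reverse": reverse_task},
    complexity={"copy": 1, "reverse": 2},
)
print(f"score = {score:.3f}")  # 0.500: full marks on "copy", none on "reverse"
```

The point is not the toy tasks but the structure: a defined battery of environments, a defined scoring rule and a published weighting are all auditable in a way that the bare question “is it intelligent?” is not.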
But many hurdles remain
First, the informal definition may not be directly usable for regulatory purposes because of AIXI’s own underlying constraints. One constraint, often emphasised by Hutter, is that AIXI can only be “approximated” in a computer because of time and space limitations.
Another constraint is that AIXI lacks a “self-model” (but a recently proposed variant called “reflective AIXI” may change that).
Second, for testing and certification purposes, regulators have to be able to treat intelligence as something divisible into many sub-abilities (such as movement and communication). But this may cut across any definition based on general intelligence.
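Purely by way of illustration (none of these sub-abilities, thresholds or identifiers come from any actual scheme), a per-sub-ability certification check might look like the sketch below – and notice how it pulls against a single aggregate score of the Legg–Hutter kind.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CertificationRecord:
    """Hypothetical per-sub-ability test record for one system."""
    system_id: str
    scores: Dict[str, float] = field(default_factory=dict)  # each in [0, 1]

    def passes(self, thresholds: Dict[str, float]) -> bool:
        # Certify only if every required sub-ability meets its threshold;
        # a strong aggregate score cannot mask one weak sub-ability.
        return all(
            self.scores.get(ability, 0.0) >= minimum
            for ability, minimum in thresholds.items()
        )

record = CertificationRecord(
    system_id="SYS-042",
    scores={"movement": 0.9, "communication": 0.4, "planning": 0.7},
)
print(record.passes({"movement": 0.8, "communication": 0.6}))  # False
```

A system with a high average across environments could still fail such a scheme on one weak sub-ability, which is exactly the tension between general-intelligence definitions and component-by-component certification.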
From a consumer perspective, this ultimately comes down to drawing the line between a system that genuinely displays AI and one that is just another programmable box.
If we can jump all the hurdles, there will be no time for quiet satisfaction. Even without the Big Four, increasingly capable and ubiquitous AI systems will have a huge effect on society over the coming decades, not least for the future of employment.
But if the Big Four do ever (seem to) show up in AI systems, we can safely say that we’ll need not just a yacht of philosophers, but an entire regatta.
This article is published in collaboration with The Conversation. Publication does not imply endorsement of views by the World Economic Forum.
Author: Gary Lea, Visiting Researcher in Artificial Intelligence Regulation, Australian National University.
Image: The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard. REUTERS/Fabrizio Bensch.