As generative AI gains pace, industry leaders explain how to make it a force for good
The future of AI looks bright, with generative AI and large language models offering socially beneficial use cases, including waste elimination and fraud detection. Image: Unsplash/Andy Kelly
- The World Economic Forum AI & Machine Learning Platform Quarterly Connect took place in March 2023, with C-suite executives explaining what’s ahead for AI.
- Fairness and bias remain persistent challenges in AI development, but the problems facing individual companies are varied and nuanced, which is why diverse mitigation approaches are needed.
- The future of AI looks bright, with generative AI and large language models offering socially beneficial use cases, including waste elimination and fraud detection.
Artificial intelligence (AI) is becoming more ubiquitous, gaining more social uses and growing more accessible to the everyday person. Those prospects are exciting, but challenges remain around fairness and bias in results. And what about the unintended consequences or harms of AI? How can those working in the AI and machine learning industry ensure that AI remains a force for good?
Panellists of the World Economic Forum AI & Machine Learning Platform Quarterly Connect grappled with those questions in a webinar on 23 March 2023.
The panel, moderated by Kay Firth-Butterfield, Head of AI & Machine Learning at the Forum, included Armughan Ahmad, Chief Executive Officer and President at Appen, the global leader in data for the AI lifecycle; Michael Schmidt, Chief Technology Officer at DataRobot, a company focused on value-driven AI; and Daniela Braga, Founder and Chief Executive Officer at Defined.ai, which prides itself on being the largest marketplace of training data in the world, with a strong ethical focus including data privacy and transparency.
Training data for ethical AI
All three speakers sit at the helm of companies that have recognized the ethical dilemmas that arise when AI is trained on bad data. All expect fairness and bias to be persistent themes when it comes to achieving their goals. That said, Schmidt acknowledged there has been a lot of progress in the data community on tackling bias and fairness and, more recently, on best practices for mitigating bias issues.
Every company may look at bias and fairness a bit differently, so there’s a need to be flexible with solutions, said Schmidt.
He added, “There are all kinds of other practical concerns, like we see lots of mistakes with companies, they go after the most ambitious AI projects... we recommend to start simple and help solve some of these practical challenges and build up to the really high impact ones.”
“This is the first time in the fourth industrial revolution that [farmers in developing countries] can take access to a properly trained LLM model, a generative AI model, and ask a question in their own language to then get a government subsidy because someone can very quickly train that model and give them access to that,” Ahmad said.
“So how do you make sure that the opportunity becomes an income equality opportunity, not an inequality opportunity?" he added.
Ahmad explained that Appen ensures good AI through three pillars:
- Good data.
- Responsibility from a compute perspective.
- Diversity among the people building the models.
Achieving those goals requires good action by the company itself, Ahmad acknowledged: Appen has to report and deliver on fair pay, on its carbon footprint from data generation and on diversity. If it gets purpose and perspective right in its approach, prosperity will follow, he said.
However, some companies will have to slow down, especially if they are jumping onto generative AI, as it requires large amounts of data, said Braga. It is important to have properly monitored internal systems – regular audits, plus consent and copyright considerations. Increasing numbers of companies are using data scraped from the web, which draws in bias, and failing to train their employees.
Generative AI’s opportunities in 2023
In 2023, generative AI is the big thing, built on large language models that don’t need a lot of humans to train them. However, assuming humans aren’t required in the loop at all would be a mistake, as Ahmad pointed out; we are still crucial to getting the best out of AI.
ChatGPT, for instance, depends on prompt engineering – the crafting of a statement or question that returns accurate and apt results and ensures the AI is not “hallucinating,” as seen in some publicized examples.
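For readers curious what prompt engineering looks like in practice, here is a minimal, illustrative sketch using OpenAI’s Python client as it existed in early 2023. The model name, system instructions and subsidy question are assumptions made for the example, not details from the panel:

```python
# A minimal prompt-engineering sketch (assumptions: the `openai` Python
# library, pre-1.0, and an API key are available; model name is illustrative).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real credential

# A system prompt constrains the model to the task and asks it to admit
# uncertainty rather than invent ("hallucinate") an answer.
system_prompt = (
    "You are a helpful assistant for farmers. Answer only questions about "
    "government agricultural subsidies. If you are unsure, say so instead "
    "of guessing."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I apply for a crop subsidy?"},
    ],
    temperature=0.2,  # lower temperature favours precise, less speculative output
)

print(response["choices"][0]["message"]["content"])
```

Even in this toy example, the phrasing of the system prompt – restricting scope and inviting the model to admit uncertainty – is what prompt engineering means in practice: small wording choices that make the output more accurate and less prone to fabrication.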
“If AI is the enabler, humans are the transformers,” said Ahmad.
Schmidt also pointed out that there will be exciting use cases for tackling social ills, such as waste elimination, fraud detection and anti-money laundering: “We are seeing a lot more adoption of using AI to radically make these more effective at chasing down and eliminating these sources of fraud.”
Meanwhile, Braga said that AI would determine the future of productivity.
“It is clear that AI is here to stay and to work alongside humans, to augment us,” she said. “But at the same time, we see an opportunity to build reverse-engineering tools to track the transparency and reliability of these data sources,” she added.
Building trust
As useful and game-changing as AI may be, and even as bias is being addressed, there are still concerns about some generative AI use cases.
To build truly trustworthy systems, Braga suggested that education providers, for instance, should partner with companies or universities building generative AI models that do not feed into ChatGPT, strengthening the tool and building trust with educators.
Meanwhile, in the European Union, the proposed AI Act, which seeks to harmonize rules on AI systems across the bloc, recently added a definition of “general-purpose AI” to accommodate generative AI, acknowledging that it can be used for low- and high-risk applications.
Schmidt admitted it is hard to get generative AI models to focus on specific topics with great accuracy; sophisticated prompting strategies are in the works within the community, but that is still very much a dark art.
Braga distinguished the risk attached to generative AI from that of other high-risk applications that have been banned or are being considered for a ban in the 27 member states, such as social credit scoring.
When it comes to generative AI, Braga suggested, “Rather than forbidding, certifying that the applications go through all of the ethical principles would be a better idea.”