How can organizations prepare for generative AI?
- Organizations across the board significantly underappreciate AI risk.
- Poor implementation of AI can perpetuate existing workplace inequities.
- We examine how businesses can ensure the ethical deployment of generative AI.
Amid recent hype around ChatGPT and generative artificial intelligence (AI), many are eager to harness the technology's increasingly sophisticated potential.
However, findings from Baker McKenzie's 2022 North America AI survey indicate that business leaders may currently underappreciate AI-related risks to their organization. Only 4% of C-suite level respondents said they consider the risks associated with using AI to be "significant", and less than half said they have AI expertise at the board level.
These figures spotlight a concerning reality: many organizations are underprepared for AI, lacking the proper oversight and expertise from key decision-makers to manage risk. If unaddressed, organizational blind spots around the technology's ethical and effective deployment are likely to overshadow transformative opportunities and leave organizations unable to keep pace with the technology's explosive growth.
How is generative AI changing the risk landscape?
These days, AI-related progress and adoption are happening at an exponential rate – some argue too quickly.
While this exponential growth has renewed focus on the use of AI, academics, scientists, policy-makers, legal professionals and others have in fact been campaigning for some time for the ethical and legal deployment of AI – particularly in the workplace, where applications of AI in the HR function are already abundant (e.g. talent acquisition, administrative duties, employee training).
According to our survey, 75% of companies already use AI tools and technology for hiring and HR purposes.
In this new phase of generative AI, core tenets around AI adoption – such as governance, accountability and transparency – are more important than ever, as are concerns over the consequences of poorly deployed AI.
For example, unchecked algorithms can produce biased and discriminatory outcomes, perpetuating inequities and stalling progress on workforce diversity. Data privacy violations and breaches are another concern, and can easily occur when employee data is collected without being anonymized.
Generative AI has also given rise to new IP considerations, raising questions about who owns both the inputs to and outputs of third-party programmes, and about subsequent copyright infringement.
Broadly, governments and regulators are scrambling to implement AI-related legislation and regulatory enforcement mechanisms. In the US, a key focus of emerging legislation is the use of AI in hiring and HR-related operations.
Litigation, including class actions, is also on the horizon. The first wave of generative AI IP litigation is already under way in the US, and these early court decisions are shaping the legal landscape in the absence of dedicated regulation.
Organizations that implement generative AI should also assume that data fed into AI tools and queries will be collected by third-party providers of the technology. In some cases, these providers will have rights to use and/or disclose these inputs.
As employers look to equip their workforces with generative AI tools, are they putting sensitive data and trade secrets at risk? In short, yes. All in all, each new development seems to open questions faster than organizations, regulators and courts can answer them.
How can organizations enhance their AI preparedness?
Generative AI is changing the paradigm, and risks around specific use cases will continue to arise. To stay ahead, organizations will need to move current approaches beyond siloed efforts and bring together discrete functions under the umbrella of a strong governance framework.
While many organizations rely on data scientists to spearhead AI initiatives, all relevant stakeholders, including legal, the C-suite, boards, privacy, compliance and HR, need to be involved throughout the entire decision-making process.
This representation gap was made clear in our survey findings. Currently, only 54% of respondents said their organization involves HR in the decision-making process for AI tools, and only 36% of respondents said they have a Chief AI Officer (CAIO) in place.
In this high-risk environment, the CAIO will play a critical role: ensuring relevant governance and oversight are in place at the C-suite level, involving HR in training, and fostering a cross-functional AI team.
Hand in hand with this, organizations should prepare and follow an internal governance framework that accounts for enterprise risks across use cases and allows the company to efficiently make the correct compliance adjustments once issues are identified.
Companies with no AI governance structure and no oversight from key stakeholders – or those that rely wholesale on third-party tools – risk using AI in a way that creates legal liability for the organization (e.g. discrimination claims).
Virtually all decision-making, AI-based or otherwise, involves bias. Companies that use these tools must develop a framework that sets out an approach to assessing bias, a mechanism for testing for and avoiding unlawful bias, and checks that relevant data privacy requirements are met.
Efforts to combat bias should be further supported by effective pre- and post-deployment testing.
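One concrete form such pre- and post-deployment testing can take is a periodic disparate-impact check on a tool's outcomes. The minimal sketch below (in Python, with illustrative group labels and data) compares selection rates across applicant groups; the 0.8 threshold follows the common "four-fifths" guideline used in US adverse-impact analysis, though what counts as unlawful bias is ultimately a legal question for counsel, not a single ratio.

```python
# Minimal sketch: disparate-impact check on an AI-assisted hiring tool.
# Assumes the organization can log, per applicant, a demographic group
# label and the tool's pass/fail outcome. Group names are illustrative.

from collections import defaultdict


def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs.
    Returns the fraction of applicants selected in each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}


def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the four-fifths guideline, ratios below 0.8 warrant review."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}


# Illustrative outcome log: group_a selected 3 of 4, group_b 1 of 4.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Run against real outcome logs on a regular cadence, a check like this gives the cross-functional AI team an early, auditable signal to escalate – before a pattern hardens into a discrimination claim.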
Companies that deploy AI must also put processes in place that give them a clear understanding of the data sets being used, how their algorithms function and the technology's limitations, as proposed legislation is likely to include reporting requirements.
The final outlook for AI
The takeaway is simple: AI is being widely and quickly adopted and provides many benefits. But it is being rolled out and developed so rapidly that strategic oversight and governance become even more critical for its responsible use and risk mitigation.
Many organizations are both unprepared for AI and underappreciative of its risks, which makes their willingness to deploy the technology without proper guardrails concerning.
Fortunately, by establishing strong governance and oversight structures, organizations can withstand these technological tides no matter where they are in their AI journeys.
Beyond this, the longer-term solution to managing AI-related risk will rely on informed stakeholders across legal, regulatory and the private sector joining forces to advance legislation, codes of practice or guidance frameworks that recognize both the opportunities and risks the technology presents.
With a secure framework in place, organizations can deploy AI technology and harness its benefits more confidently.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.