
How companies – and their people – can accelerate generative AI use in 2024


Businesses need to think about how to integrate genAI – and how to use their people to enhance it. Image: Pexels/fauxels

May Habib
Co-Founder and Chief Executive Officer, Writer AI
  • Many organisations are thinking about how to get the best return on investment from integrating generative artificial intelligence (genAI) tools such as large language models (LLMs) into their businesses.
  • Companies need to think about the best LLM for them – whether it’s a 'build your own' or 'off the shelf' system. Guidance is also emerging about using AI safely and securely.
  • People should be a key part of any company’s AI implementation: they should accelerate its use in the business, not be replaced by tools such as LLMs.

Generative AI looks set to remain a powerful force in the year ahead, making 2024 the year of proof for this new technology. The biggest proof point for businesses, of course, will be the actual return on investment from pursuing genAI technology such as large language models (LLMs). These algorithms can summarise masses of information to generate text that conveys ideas and answers questions.

The return directly results from the architectural choices you make, the genAI model you choose and how you involve your people in its development and use. Writer customers like L’Oreal, Vanguard and Accenture are already seeing success after making thoughtful technology choices about genAI. For these companies, 2024 should be an exciting year. But what will the year hold for your business?


From US President Joe Biden’s 30 October Executive Order establishing safety and security standards for AI, to the recent saga of OpenAI CEO Sam Altman’s firing and rehiring, there has been more than the usual flurry of AI news to keep up with of late. And because these events are so consequential for the industry, you must not only keep up, but assess in real time what they mean for your organisation – and for your own AI strategy, which may already be well underway.

Foundational build-versus-buy decisions and ethical questions concerning data and people are crucial. You may already be grappling with questions like:

  • What does it mean to own my own LLM? What are my responsibilities with respect to safety testing and reporting?
  • As I integrate generative AI into my company's workflows, what architectural choices do I have and what are their implications?
  • How should I plan for interactions between my employees and AI?

As the CEO of Writer, an enterprise AI company that has built its own family of LLMs, I’ve advised hundreds of business leaders on topics ranging from deployment to ethics to change management. And here’s what I tell them: Don’t panic! Stay sane and keep building. If you recognise the promise of genAI but you’re also proceeding thoughtfully, you’re on the right track.


So, first of all, what does it mean to own your own LLM? There are at least three ways to interpret this:

1. Built by you.

Believe it or not, you can build your own LLM. Yes, it takes time (lots of it) and capital (even more of it), but it is possible. However, there are ways to gain access to custom language models that are cheaper, faster and more accessible than building your own.

2. Open source and trained by you.

An open source model is free for anyone to use for any purpose, and to modify and distribute. Popular open source models include Meta’s Llama 2 and Google’s BERT. The main risk with open source models is security: a breach can expose proprietary data or compromise the model’s integrity. Downloader beware.
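One basic safeguard against tampered downloads is verifying that the model file you fetched matches a checksum published by the model’s maintainers. A minimal sketch in Python – the file path and expected digest here are placeholders for illustration, not real release values:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks so
    large model weights never have to fit in memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's digest matches the published one."""
    return sha256_of(path) == expected_sha256.lower()
```

In practice, the expected digest should come from the publisher’s release page over a trusted channel; a hash fetched from the same compromised mirror as the weights proves nothing.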

3. Provided to you and fine-tuned by you.

Most enterprises consider this approach their best option: you start with a proven, high-quality model, then fine-tune it with your industry context and business data, keeping it safe and secure.
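In practice, "fine-tuning with your business data" starts with converting internal material – support answers, policy documents, product FAQs – into the record format a provider’s training tooling expects. A minimal sketch, assuming a simple prompt/completion JSONL layout (the field names and example pairs are illustrative, not any specific vendor’s schema):

```python
import json
from typing import Iterable


def to_finetune_records(pairs: Iterable[tuple[str, str]]) -> list[str]:
    """Turn (question, answer) pairs drawn from business data into
    JSONL lines of the shape {"prompt": ..., "completion": ...}."""
    lines = []
    for question, answer in pairs:
        record = {"prompt": question.strip(), "completion": answer.strip()}
        lines.append(json.dumps(record, ensure_ascii=False))
    return lines


# Example: two internal support Q&A pairs become two training lines.
pairs = [
    ("What is our refund window?", "Refunds are accepted within 30 days."),
    ("Who approves vendor contracts?", "The procurement team approves them."),
]
jsonl_lines = to_finetune_records(pairs)
```

The value of this step is less in the code than in the curation: the quality, coverage and confidentiality review of the pairs you feed in largely determines how safe and useful the fine-tuned model will be.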

President Biden’s Executive Order

Once you’ve made a decision about the kind of LLM your organisation should own, you need to think about safety and security. The US Executive Order (EO) published in October 2023 requires that US-based owners of LLM technologies ensure the security, privacy, equity and fairness of those models.

The practicalities of this will shift and change over the coming months. Organisations such as the US National Institute of Standards and Technology (NIST) are working through testing and reporting criteria that will guide companies on how to proceed in order to minimise risk to themselves and to users of their technology.

To comply with the EO (as well as run a successful programme), pay attention to the three main characteristics your genAI system should have: quality, accuracy and scalability. Your people will be crucial here, especially when it comes to ensuring your system provides high-quality output.

People-powered AI

As you build your company's genAI expertise and start to get the most out of your implementation, be wary of quick fixes and hearken back to the spirit of what Biden’s EO says about pro-labor practices:

"AI should not be deployed in ways that undermine rights ... [or] cause harmful labor-force disruptions [and] AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers’ lives [and] positively augment human work."

To get the most out of your genAI programme, people should always be at the heart of your business – they should accelerate your AI programmes, not be replaced by them.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
