Fourth Industrial Revolution

Human-centric tech will make AI faster and fairer. Here's how.

The best AI relies neither wholly on the machine nor wholly on the human; leveraging the strengths of both creates better outcomes. Image: Unsplash

Wilson Pang
Chief Technology Officer, Appen
  • We need human-centric technology that enhances productivity and drives greater ROI.
  • With ML-assisted technology embedded into the data labelling pipeline, you can reduce the time, money and people required.
  • There are many opportunities to prevent contributors from being biased by the model’s predictions.

Under traditional machine learning (ML) methods, humans perform the often time-consuming and expensive task of annotating every row of data; successful artificial intelligence (AI) models require thousands, if not millions, of accurately labelled training examples.

As we evolve our approaches to AI, this level of manual effort becomes questionable.

With multiple state-of-the-art pre-labelling models now at our disposal, it’s imperative that we leverage them for process improvements across the end-to-end AI deployment cycle. These include models for pixel-level pre-labelling of autonomous vehicle imagery, for image and document transcription, for audio segmentation, and for several other pre-labelling or classification tasks. In advancing our tooling, we need to invest in a particular type of human-centric technology: one that both enhances productivity and drives greater ROI.

Human-centric technology considers the operator an asset rather than an impediment. It recognises the value of the operator's skill, knowledge, flexibility and creativity.

Our goal in optimising human-centric technology should be two-fold: to create faster and more efficient AI pipelines without sacrificing quality, and to advance the fair treatment of contributors by reducing the human burden of labelling tasks, which are often repetitive and mentally draining.

Video annotation is a case in point. Annotating a video usually means labelling its separate frames, with only very small changes to the annotations from frame to frame. A video of cars driving down the road, for example, is broken into multiple frames, and each vehicle needs to be labelled in each one. These annotations would be exceedingly time-consuming to do by hand, given the number of frames that make up even a short video. By using machine learning, we can automate the annotation process, applying annotation predictions to the frames immediately so that the annotator simply adjusts them as needed instead of creating each annotation from scratch.

ML-assisted tools serve as the bedrock of our endeavour towards human-centric technology. With ML-assisted technology embedded in the data labelling pipeline, you can reduce the time, money and people required for this crucial step of the model build.

It also provides the chance to automate and improve the quality and delivery of data annotation. In this approach (at Appen, we call it "smart labelling"), critical touchpoints exist before, during and after job completion.


Touch point one: before the job

Before you run an annotation job, you can leverage pre-trained or trainable models to provide an initial hypothesis for your data labels. Unlike in manual labelling processes, your contributors will be checking that hypothesis for accuracy rather than adding each label from scratch.

For example, if you’re working on an image annotation job to identify cars on the road, you can use a pre-trained model to pre-classify the target objects, in this case the cars.

Various models can accomplish specific tasks, depending on your use case. These range from censoring explicit content to blurring out personal details and adding bounding boxes around objects. Using existing models to provide initial data labels saves time and money by automating a portion of the annotation process. The accuracy will depend on the model or combination of models that you select.
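To make this touch point concrete, here is a minimal sketch of what pre-labelling could look like. It is illustrative only, not Appen's pipeline: it runs torchvision's off-the-shelf Faster R-CNN detector over an image and keeps confident car detections as hypothesis labels for contributors to verify. The score threshold and output schema are assumptions made for the example.

```python
# A minimal pre-labelling sketch (illustrative, not Appen's pipeline):
# run a pre-trained object detector over a raw image and keep its
# high-confidence car detections as initial "hypothesis" labels that
# contributors verify instead of drawing from scratch.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

CAR_CLASS_ID = 3        # "car" in the COCO label map this model uses
SCORE_THRESHOLD = 0.6   # assumed cut-off for keeping a prediction

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def pre_label_cars(image_path: str) -> list[dict]:
    """Return hypothesis bounding boxes for cars in one image."""
    image = convert_image_dtype(read_image(image_path), torch.float)
    with torch.no_grad():
        prediction = model([image])[0]  # dict of boxes, labels, scores
    pre_labels = []
    for box, label, score in zip(prediction["boxes"],
                                 prediction["labels"],
                                 prediction["scores"]):
        if label.item() == CAR_CLASS_ID and score.item() >= SCORE_THRESHOLD:
            pre_labels.append({
                "bbox": [round(v, 1) for v in box.tolist()],  # x1, y1, x2, y2
                "label": "car",
                "confidence": round(score.item(), 3),
                "source": "model",  # contributors review rather than create
            })
    return pre_labels
```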

But how do we prevent contributors from being biased by the model’s predictions, you might ask?

In fact, we tested this by running large-scale A/B tests across several annotation projects and found quite the opposite to be true: pre-labelling data resulted in improved label quality. In other words, data whose initial labels were completed by an ML model before being handed to contributors for final annotation ended up with higher-quality labels than data without initial labels.

In one image-pixel-labelling project for autonomous vehicles, using an ML model for initial labelling improved contributor productivity by 91.5% and annotation quality by 10% across all of our trials.

If your team is still concerned about bias, there are further opportunities for mitigation in the next two phases of the pipeline.

Touch point two: during the job

Once inside the job, you can leverage ML models to assist human judgments. As an example, if your job includes video annotation, a manual process might look like this: videos are split into frame-by-frame sequences and contributors label each target object in each frame.

With a standard frame rate of 24 frames per second, this labelling task quickly becomes laborious and repetitive. Using ML-assisted technology instead, the contributor can label the target object once, and a model can track and predict its location in subsequent frames. Following the same example of cars on the road, the contributor would label each car in the first frame, and the model would track their locations to annotate the cars in the remaining frames.

Contributors then take on the role of reviewer for the remaining frames, making corrections as needed.
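As an illustration of this division of labour (a sketch, not Appen's actual tooling), the snippet below uses OpenCV's CSRT tracker, available in the opencv-contrib-python package, to propagate a single human-drawn box through the remaining frames of a video. A production system would track many objects at once and handle occlusions, but the pattern is the same: the human labels once, the model predicts, the human reviews.

```python
# A minimal sketch of ML-assisted video annotation (not Appen's tooling):
# the contributor draws one box in the first frame, and OpenCV's CSRT
# tracker (from the opencv-contrib-python package) predicts that box in
# every later frame, leaving the human to review and correct.
import cv2

def propagate_box(video_path: str,
                  first_frame_box: tuple[int, int, int, int]) -> list:
    """Track one object through a video, given its (x, y, w, h) box in frame 0."""
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    if not ok:
        raise IOError(f"could not read {video_path}")

    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, first_frame_box)
    boxes = [first_frame_box]  # frame 0 keeps the human-drawn label

    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        tracked, box = tracker.update(frame)
        # If the tracker loses the object, mark the frame for human labelling.
        boxes.append(tuple(int(v) for v in box) if tracked else None)

    capture.release()
    return boxes
```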

With help from ML-assisted technology during the job, contributors are equipped to work more quickly and with greater accuracy. Using this method can result in annotation speeds up to 100 times faster than manual methods, without sacrificing quality. The benefits extend to contributors as well: this method reduces cognitive strain, making the work more comfortable throughout the task.

Final touch point: after the job

After the model and contributor have made judgments on your data, you can enter the validation phase. In this step, you can use ML models to verify the judgments made and notify contributors if their input isn’t within the expected quality thresholds.

This approach has two notable benefits: it removes the need for test questions or peer reviews, and it reduces the risk that you will end up paying for judgments that don’t fit your requirements. After model validation, the contributor can submit the job.


If you have a text utterance project, for example, you can utilise ML-assisted validation tools combined with predefined indicators, such as coherence or language. The model will flag any data labels that don’t meet your accuracy requirements for these indicators.

A human annotator then reviews and corrects the labels. Appen tested ML-assisted validation tools in a text-utterance project involving the training of chatbots. We found a 35% reduction in error rates using real-time models.
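As a sketch of what such a validation check could look like (an illustration, not Appen's validation tooling), the snippet below tests one simple indicator, language, using the open-source langdetect package, and flags judgments whose text does not match the expected language so the contributor can correct them before submitting. The judgment schema shown is a hypothetical example.

```python
# A minimal sketch of post-job validation on one indicator, language
# (illustrative, not Appen's validation tooling): flag any judgment whose
# text does not appear to be in the expected language, so the contributor
# is notified before the job is submitted.
from langdetect import detect, LangDetectException

EXPECTED_LANGUAGE = "en"  # assumed project requirement

def flag_language_mismatches(judgments: list[dict]) -> list[dict]:
    """Return the judgments that fail the language check.

    Each judgment is assumed to look like {"id": ..., "text": ...};
    the schema is a hypothetical example, not a real platform format.
    """
    flagged = []
    for judgment in judgments:
        try:
            detected = detect(judgment["text"])
        except LangDetectException:
            detected = None  # too short or ambiguous to classify
        if detected != EXPECTED_LANGUAGE:
            flagged.append({**judgment, "detected_language": detected})
    return flagged
```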

'It's not just about AI but about better AI processes'

Combining machine learning with human effort in the form of human-centric technology is the way forward for AI innovation.

ML-assisted features in data annotation pipelines help both companies and contributors: companies can launch high-quality AI solutions faster and with fewer resources, and contributors can work on tasks that involve less mental strain and repetition. The latter is especially important in bolstering fair AI practices for all of the individuals who work on AI projects.

We need to invest not just in AI solutions, but also in improving the processes that support them. This way, we can evolve our approach to ethical AI and accelerate our ability to solve global issues with machine-driven solutions.

AI isn’t meant to rely exclusively on either the machine or the human; rather, combining the two allows each to enhance the other's strengths and promotes successful outcomes.
