
Dubai is introducing robotic police officers, set to make up 25% of the force by 2030


Robotic police officers and soldiers are an old sci-fi idea, but they are becoming more and more of a reality. Image: REUTERS/Ahmed Jadallah

Tom Ward
Writer, Futurism

RoboCop

On Wednesday, May 24, Dubai will launch a new police robot, marking the first phase of the integration of robots into its police force. This modified version of the REEM robot (designed by PAL Robotics and unveiled in 2011) can feed video to a command center, forward reported crimes to the police, settle fines, perform facial recognition, and speak nine languages. It will operate at most malls and tourist attractions.

Dubai hopes robots will constitute 25 percent of its police force by 2030, with the next stage being to use them as receptionists in police stations. Brigadier Khalid Nasser Alrazooqi, General Director of Dubai Police’s Smart Services Department, told CNN that they eventually want to release a “fully-functional robot that can work as [a] normal police officer.”


A New Sheriff In Town?

Robotic police officers and soldiers are an old sci-fi idea, but they are becoming more and more of a reality. In February, China began deploying the AnBot, which uses facial recognition to identify criminals and can follow them until the police arrive. The Russian robot, the Final Experimental Demonstration Object Research (FEDOR), prompted comparisons to RoboCop when a video showed it shooting with deadly accuracy, lifting dumbbells, and walking.

The biggest ethical question raised by these developments is who is culpable if a robot makes the wrong decision and hurts someone in a criminal situation. Elon Musk, Stephen Hawking, and other prominent figures have identified AI as a serious existential risk, arguing that robots should never be allowed to kill people.

Alan Winfield, professor of robot ethics at the University of the West of England, writes about this issue on his blog. “The problem is that you can’t make a machine responsible for its mistakes,” Winfield said in an interview with CNN. “How do you punish it? How do you sanction it? You can’t.”




