Are we entering the dawn of creative artificial intelligence?
Data scientists are creating robo-artists that use machine learning to create artworks and even develop an artistic style. Image: REUTERS/Fabrizio Bensch
Something went very wrong with one of Google’s neural networks. It was designed for a simple task: identify dogs in photos. But a curious developer reversed the algorithm, and it began to hallucinate dogs where there were none before. The psychedelic images resembled those of Salvador Dalí, and echoed across the internet under the shorthand “Deep Dream”.
Within a few months of this discovery, an academic paper repeated the same trick for famous painters. Data scientists built a set of robo-artists out of digital neuron clusters called convolutional neural networks. They used machine learning and artificial intelligence to reverse engineer visual art resembling Picasso’s dancing lines, Van Gogh’s hypnotic brush strokes and Edvard Munch’s emotional impact. We have taught robots how to make art by teaching them what makes an artistic style. And so “Deep Style” was born.
What should we make of our creative automatons? In human affairs, children of successful lawyers and accountants often have the freedom to become creators, liberated from monetary constraints and able to dance, paint and make music. This time, our software progeny are transcending their humble beginnings. They just might become humanity’s greatest artists, amplifying and robotizing creativity.
The computer revolution has catalyzed tremendous automation, first in physical labor in places like factories, and increasingly now in intellectual labor, from legal discovery to robo-advisors. As Marc Andreessen famously put it, “Software is eating the world”. A recent McKinsey study estimates that 45% of the activities workers are paid to perform could be automated with existing technology. Software processes our paperwork, searches for results, takes payments, directs cars, and talks with other systems to create lattices of efficiency. But our programs to date have been deeply analytical, following prescribed top-down rules to implement productivity tasks.
That left-brained set of rigid algorithms is about to meet its right-brained counterpart. The key is that this new sort of software isn’t replicating a set of rules to distort an image per human design. Rather, it is using sophisticated math to process visual information, extract unique patterns, and recursively learn what makes any particular artistic style unique. Then it can take off from there. Think of it as statistical intuition, not unlike our own instincts and gut impulses. Mobile apps like Dreamscope (free, amazing, on iOS/Android) allow a user to apply this machine-learned creativity to a photo on command. Dreamscope has indexed dozens of creative algorithms—a robot for each painter—and enables a user to “seed” their own machine artist. How long until every creative human endeavor has been patterned in this way?
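The real Deep Style systems extract those patterns with deep convolutional networks, which is far beyond a few lines of code. As a toy stand-in for the same idea — measure the statistics of a “style” image and impose them on a “content” image — here is a simple channel-statistics color transfer sketch (function names and the NumPy approach are my own illustration, not Dreamscope’s method):

```python
import numpy as np

def transfer_color_style(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Crude 'style transfer': make each color channel of `content`
    match the mean and standard deviation of the same channel in `style`.
    Real Deep Style systems match deep CNN feature statistics instead."""
    out = content.astype(float)
    for c in range(out.shape[-1]):
        cc = out[..., c]
        sc = style[..., c].astype(float)
        # Standardize the content channel, then rescale to the style's statistics.
        out[..., c] = (cc - cc.mean()) / (cc.std() + 1e-8) * sc.std() + sc.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Example with random stand-in images (H x W x RGB).
rng = np.random.default_rng(0)
content = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
style = rng.integers(100, 200, (32, 32, 3), dtype=np.uint8)
result = transfer_color_style(content, style)
```

The point of the toy is the direction of the pipeline: statistics flow from the style source into the content target, with no hand-written rule about what the style “looks like”.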
Already, we find machine learning applications in the visual arts, music and writing. The programs are young and often spit out creations that seem somehow wrong, though we cannot put a finger on why. These machine artbots are from the wrong side of the Uncanny Valley – a category of things that attempt to mimic humanity but in their artifice create unease.
And yet, we have never been closer to a room of monkeys typing out the collected works of Shakespeare. Just ask a robot that has ingested all of Shakespeare’s works and is trained to generate soulful prose on command, ad infinitum. Or turn on machine-Bach, mathematically generating emotional sound vibrations that, some day, may be indistinguishable from the real thing. The texts below are neural-network-generated samples trained on Shakespeare. Source: Andrej Karpathy
KING LEAR:
One loyal of my love, the wedding-body touchest thee: I pray,
Henceforwards, and submiss the truth! though my throne
Lives as mock’d my pardon with some untold
Attore sack lop and shrum’ them up:
But be preserved with spirits, so brimfibed again!
My voices were so early, I was enough.
MACBETH:
Then let him withdraw them debour to branch ere any any
day, but to prevail’d be penny of a merry tongue
Which the exploits of fools look with their veins.
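Karpathy’s samples come from a recurrent neural network trained character by character; training one is beyond a few lines. A much simpler character-level Markov chain — a deliberately crude stand-in, with function names of my own invention — shows the same core mechanic: learn which character tends to follow each short context, then sample new text one character at a time:

```python
import random
from collections import defaultdict

def train_char_model(text: str, order: int = 3) -> dict:
    """Record which character follows each `order`-length context in the text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model: dict, order: int, seed: str, length: int = 80) -> str:
    """Sample new text one character at a time from the observed counts."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

random.seed(1)
corpus = "to be, or not to be, that is the question: " * 4
model = train_char_model(corpus, order=3)
sample = generate(model, order=3, seed="to ", length=60)
```

A real char-RNN replaces the lookup table with a neural network that compresses context into a learned state, which is why its output captures long-range structure — stage directions, speaker names, line lengths — that a Markov chain cannot.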
Beware, artists. Automation will impact not only the analytical industries, but also those that require creativity, originality and intuition—domains that were once believed to be uniquely human. If you are an artist, musician, or writer, artificial intelligence is about to present challenges and opportunities that rival the ones posed to painters by the invention of photography in the 1800s. What now seems like a crude hollow reproduction of a mystical human endeavor could eventually be responsible for the bulk of all art, initiated by humans but outsourced to machines.
There are many objections to the idea that true art can even be made by software. Isn’t the human always the root of the process? Isn’t the artist’s impulse to create profoundly human? Isn’t the point of art to in some way symbolize and instantiate the unique point of view of the human artist in order to evoke a uniquely human response in the viewer or listener? Aren’t our cultural values—a result of the arbitrary and arduous evolution of a mammalian body—the only lens capable of authoring and appreciating art, as such? So what will be the message or set of values implicit in machine-generated art? These questions are fair, but in my opinion only partially relevant.
As the shift toward the machine continues, there will be less and less space for humans to execute what qualified as creative endeavors in the past. Instead of composing music, we will create randomization algorithms that combine software-composers on the fly, reacting to our quantified moods and surroundings. Instead of learning to paint, aspiring artists will be better served learning to code programs that render creative outcomes in simulated virtual reality environments.
The raw materials for this revolution are in place. Wearable sensors will make it possible to create an essentially infinite data set of the images, sounds and text that humans exchange every day. Google Photos and other cognitive computing tools are processing millions of such inputs daily. Our culture can increasingly be mapped, studied and statistically modeled. Hard rules about aesthetics are not necessary when we can just point our learning machines to the recorded history of what humans believe is beautiful and meaningful. The Golden Ratio is timeless.
What will be the meaning of such “art”? Critics of the future will wrestle with that question.
We can also simulate evolution and reward the most creative software with fitness and something resembling life itself. In 2013, engineers at the Cornell Creative Machines Lab used evolutionary programming to breed simulated soft robots, assembled from 3D voxel cubes, that learned how to walk: the randomized critters that ambled fastest were allowed digital offspring, and each generation moved faster than the last.
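The evolutionary loop behind that experiment is simple to sketch. The version below is a minimal illustration with an invented stand-in fitness function (the sum of a genome’s values plays the role of “distance walked”); the Cornell work evaluated real physics simulations instead:

```python
import random

def evolve(fitness, genome_len=10, pop_size=30, generations=40):
    """Minimal evolutionary loop: keep the fittest genomes each
    generation and refill the population with mutated offspring."""
    rng = random.Random(0)  # fixed seed for a reproducible run
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # survival of the fittest quarter
        # Offspring: a random parent with small Gaussian mutations per gene.
        pop = parents + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

# Stand-in fitness: "distance walked" is just the sum of the genes here.
best = evolve(lambda g: sum(g))
```

Swap the stand-in fitness for “how far did this body walk in simulation” — or, for a robo-artist, “how strongly did humans respond to this creation” — and the same loop applies unchanged.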
Our robo-artists could be motivated by a different outcome – to move the human spirit – using the vast data generated by human activity both as inputs and to measure the success and impact of their new creations. Yes, humans will set many of the creative programs into motion, but the ultimate output will be the product of machines. We will be the builders, accountants, and lawyers—our digital children will dance, paint and sing.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.