In April of 2019 OpenAI released a preview of MuseNet, a deep neural network that generates musical compositions. Its mimetic abilities mean the project can start with just a few given notes and extrapolate out an entire song, and do so in styles ranging from Chopin to Lady Gaga. The results are staggeringly realistic. A neural network can now, or within the next few years, be trained in the style of Philip Larkin, or The Beatles, and produce new works in the style of those artists, works that are nigh indistinguishable from the originals. It’s now just a matter of time until our world becomes a Jurassic Park filled with newly-issued work by long-dead creators. Perhaps this means future creators will become judicious about how much work they publish, and therefore how much data they provide, to prevent such “style cloning.” Or perhaps creators will embrace this strange sort of immortality. Maybe they will eagerly train neural networks on their own work, every artist becoming the master of an atelier formed from themselves. Perhaps the future of the creative arts is an assembly line of style clones.
The limits of these new technologies are unknown. In November of 2019 another OpenAI project called GPT-2 was released, doing for writing what MuseNet did for music. For a while the OpenAI team even debated not releasing it, out of fear its abilities would be misused. After it was eventually made available, The Economist used the AI to answer an essay question: “What fundamental economic and political change, if any, is needed for an effective response to climate change?” The AI’s first paragraph in response was this:
Will the art forms that are easily produced by these new technologies become diluted, maybe even abandoned? What if that includes article writing, or at minimum, generic form stuff like press releases? With works available at the click of a button, some creative forms are poised to drown in abundance. And while we’ve accepted that humans will never beat a trained neural network at chess, what happens when neural networks create musical compositions more original, catchy, and beautiful than those by even the best human composers?
There are some areas in which this has already occurred. Indeed, professional chess is a perfect example, for it is still a viable subculture and activity decades after computer domination. Yet it has always struck me that Magnus Carlsen is one of the more tragic living figures. Magnus, now 29, is the best chess player there has ever been. He would thrash Bobby Fischer. He is good at every part of the game, a savant. Ironically, his abilities have been described as inhuman. Yet my iPhone can beat him at his chosen profession. He’s never shown, in any way I’m aware of, that this bothers him, but it would bother me: that low existential itch, which every zoo animal must feel, no matter how comfortable in their cages: the implicit knowledge of being a novelty, or rather, in a darker light, a joke.
Lee Se-dol, the South Korean Go world champion, must have felt this. He announced this year he would stop playing professional Go because of the AI program AlphaGo, which trounced him in a series of games. “Even if I become number one,” the once-champion said, “there is an entity that cannot be defeated.”
Humans will still play Go. Of course they will. But games occur on a restricted playing field in order to see which person (each abiding by the same arbitrary constraints) wins—it’s what makes them games. In terms of artistic creation, we don’t read poems or watch TV shows to witness the drama of competition behind their creation. We actually consume the product itself. Art is meant to be imbibed. How much will it really matter if there is a “certified human” sticker on a script or a song or a painting?
And so, with little knowledge or fanfare, and all in just the last five years, the creative world has been split in two. On one side are the types of artistic production that can be automated by these neural networks. On the other side are those that cannot be, or will resist it for decades or centuries. Improvisational jazz and classical music are easy to mimic, but the vibrato intonations of the human voice are surprisingly difficult to emulate. Neural networks can write beautiful poetry or short articles, but their novels collapse into nonsensical heaps. So far, AI cannot write books or longer structured stories because they cannot “fake it” for that long—unable to understand causation or time, or even object permanence, the characters in AI-written works come and go like ghosts, looping back on themselves. It certainly may be just a matter of time until these remaining issues are solved, or slowly dissolve away, as through sheer statistical enumeration AIs appear to learn these concepts. The technology can already produce convincing works for shorter pieces where long-term coherence isn’t a factor (think poetry, painting, music, or paragraphs and shorter essays).
How small a room our imaginations now have. For any work of art that contains a pattern, there are now statistical tools to complete it, tools that bear no reference to human minds. They can never begin, but they can always finish.
Perhaps the most terrifying thing is that these neural networks aren’t doing anything different than what we’ve been doing all along. Maybe we’ve been fooling ourselves for millennia that our pretty patterns are original rather than purely mimetic. As Mark Twain wrote in his 1901 essay “Corn-pone Opinions,” “We are creatures of outside influences; as a rule we do not think, we only imitate.”