Artificial intelligence and music
Minds, Machines, and Centralisation: Why Musicians Need to Hack AI Now
In this article, CTM Hacklab Director Peter Kirn provides a brief history of the co-opting of music and listening by centralized industry and corporations, identifying Muzak as a precursor to the use of Artificial Intelligence for "pre-programmed culture". He goes on to discuss productive ways for those who value "choice and surprise" to react to and interact with technologies like these that grow more inescapable by the day.
By Peter Kirn
It's now a defunct entity, but »Muzak,« the company that provided background music, was once everywhere. Its management saw to it that their sonic product was ubiquitous, intrusive, and even engineered to impact behaviour — and so the word »Muzak« became synonymous with all that was hated and insipid in manufactured culture.
Anachronistic as it may seem now, Muzak was a sign of how telecommunications technology would shape cultural consumption. Muzak may be known for its sound, but its delivery method is telling. Nearly a hundred years before Spotify, founder Major General George Owen Squier originated the idea of sending music over wires — phone wires, to be sure, but still not far from where we are today. The patent he received for »electrical signalling« doesn't mention music, or indeed any sound content at all. But the Major General was the first founder to prove in practice that electronic distribution of music was the future, one that would take power out of the hands of radio broadcasters and give the delivery company additional power over content. (He also came up with the now-loathed »Muzak« brand name.)
What we now know as the conventional music industry has its roots in pianola rolls, then in jukeboxes, and finally in radio stations and physical media. Muzak was something different, as it sidestepped the whole structure: playlists were selected by an unseen, centralised corporation, then piped everywhere. You’d hear Muzak in your elevator ride in a department store (hence the phrase, »elevator music«). There were speakers tucked into potted plants. The White House and NASA at some points subscribed. Anywhere there was silence, it might be replaced with pre-programmed music.
Modern elevator with music | © Colourbox
Muzak added to its notoriety by marketing the notion of using its product to boost worker productivity, through a pseudo-scientific regimen it called the »stimulus progression.« And in that, we see a notion that presages today's app behaviour loops and motivators, meant to drive consumption and engagement, ad clicks and app swipes. Muzak, for its part, didn't last forever: stimulus progression was long since debunked, customers came to prefer licensed music to its mix of original recordings, and newer competitors pulled ahead in the marketplace.
But what about the idea of homogenised, pre-programmed culture delivered by wire, designed for behaviour modification? That basic concept seems to be making a comeback.
AUTOMATION AND POWER
»AI« or machine intelligence has, in the present moment, been tilted to focus on one specific area: the use of self-training algorithms to process large amounts of data. This is a necessity of our times, and it holds special value for the big technology players who happen to have the competencies machine learning favours — lots of servers, top mathematical analysts, and big data sets. That shift in scale is more or less inescapable in its impact. Radio implies limited channels; limited channels imply human selectors — meet the DJ. The internet, wide open to any kind of culture, implies wide-open scale, and that necessarily means machines doing some of the sifting, because the catalogue is simply too large to navigate otherwise.

There's danger inherent in this shift. One, users may be lazy, willing to let their preferences be shaped for them rather than face the tyranny of choice alone. Two, the entities that select for them may have agendas of their own. Taken as an aggregate, the upshot could be greater normalisation and homogenisation, plus the marginalisation of anyone whose expression is different, commercially unviable, or out of sync with the classes of people who hold money and influence. If the dream of the internet as a global music community seems in practice to lack real diversity, here's a clue as to why.
At the same time, this should all sound familiar — the advent of recording and broadcast media brought with it some of the same forces, and that led to the worst bubblegum pop and the most egregious cultural appropriation. Now, we have algorithms and corporate channel editors instead of charts and label execs — and the worries about payola and the eradication of anything radical or different are just as well-placed.
What's new is that there's now a real-time feedback loop between user actions and automated cultural selection (or perhaps soon, even production). Muzak's stimulus progression couldn't monitor metrics representing the listener. Today's online tools can. That could blow apart past biases, or it could reinforce them — or it could do some combination of the two.
In any case, it definitely has power. At last year's CTM hacklab, Cambridge University's Jason Rentfrow looked at how music tastes could be predictive of personality and even political thought. The connection was timely, as the talk came in the same week that Trump assumed the U.S. presidency, his campaign having employed social media analytics to determine how to target and influence voters.
We can no longer separate musical consumption — or other consumption of information and culture — from the data it generates, or from the way that data can be used. We need to be wary of centralised monopolies on that data and its application, and we need to be aware of how these sorts of algorithms reshape choice and remake media. And we might well look for chances to regain our own personal control.
Even if passive consumption may seem valuable to corporate players, those players may discover that passivity suffers diminishing returns. Shopping on Amazon, finding dates on Tinder, watching television on Netflix, and, increasingly, listening to music are all experiences that push algorithmic recommendations. But if users follow only those automated recommendations, the suggestions fold back in on themselves, and the tools lose their value. We're left with a colourless, accumulating detritus of our own histories and the larger world's. (Just ask someone who gave up on those Tinder dates, or who turned to friends because they couldn't work out the next TV show to binge-watch.)
There's also clearly a social value to human recommendations — expert and friend alike. But there's a third way: use machines to augment humans rather than diminish them, and open the tools to creative use, not only automation. Music is already reaping the benefits of data-driven training in new contexts. By applying machine learning to the recognition of human gestures, Rebecca Fiebrink has found a new way to make gestural interfaces for music smarter and more accessible. Audio software companies are now using machine learning as a new approach to manipulating sound material in cases where traditional DSP tools fall short. What's significant about this work is that it makes these tools meaningful in active creation rather than passive consumption.
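To make the gesture-mapping idea concrete: Fiebrink's approach lets a performer record a handful of labelled examples and then map live sensor input to sound controls. The sketch below is a toy illustration of that general idea using a simple nearest-neighbour vote — not her actual tool or pipeline — and the feature vectors and labels are invented stand-ins for whatever a real controller would stream.

```python
import math

# Hypothetical training set: a performer demonstrates a few gestures,
# each a 2-D sensor reading paired with a sound-control label.
TRAINING = [
    ((0.1, 0.9), "open_filter"),
    ((0.2, 0.8), "open_filter"),
    ((0.9, 0.1), "close_filter"),
    ((0.8, 0.2), "close_filter"),
]

def classify(features, k=3):
    """Label a new gesture by majority vote of its k nearest neighbours."""
    dists = sorted((math.dist(features, f), label) for f, label in TRAINING)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

A live system would run `classify` on each incoming sensor frame and route the resulting label to a synthesis parameter; the point is that a few user-supplied examples, not a corporation's data set, define the mapping.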
AI BACK IN USER HANDS
Machine learning techniques will continue to expand as the tools by which companies mining big data turn their resources from ore into product. They are, in turn, how those companies will see us, and how we'll see ourselves. We can't simply opt out, because these tools will shape the world around us with or without our personal participation, and because the breadth of available data demands their use. What we can do is better understand how they work and reassert our own agency. When people are literate in what these technologies are and how they work, they can make more informed decisions in their own lives and in the larger society. They can also use and abuse the tools themselves, without relying on magical corporate products to do it for them.

Abuse itself has special value. Music and art are fields in which these machine techniques can and do bring new discoveries. There's a reason Google has invested in these areas: artists can speculate on possibilities and find creative potential. Artists lead. The public seems to respond to rough edges and flaws, too. In the 1960s, when researcher Joseph Weizenbaum set out to parody a psychotherapist with crude language pattern matching in his program ELIZA, he was surprised when users began telling the program their darkest secrets and imagining an understanding that wasn't there. The crudeness of Markov chains as a predictive text tool — they were developed to analyse the statistics of Pushkin's verse, after all, not to generate language — has given rise to breeds of poetry based on their very weirdness. When Google's DeepDream technique was trained on a database of dog images, the bizarre, unnatural pictures it produced by warping photos into dogs went viral online. Google has since developed vastly more sophisticated techniques that apply realistic painterly effects, and… well, they seem to have attracted only a fraction of the interest the dog images did.
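The Markov-chain poetry mentioned above needs almost no machinery, which is exactly its charm. A minimal sketch: record which words follow which in a corpus, then walk those transitions at random. The corpus and parameters here are illustrative, not from any particular poetry generator.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Because the model only knows local word-to-word statistics, its output hovers between sense and nonsense — the »very weirdness« that Markov-chain poetry trades on.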
Maybe there’s something even more fundamental at work. Corporate culture dictates predictability and centralised value. The artist does just the opposite, capitalising on surprise. It’s in the interest of artists if these technologies can be broken. Muzak represents what happens to aesthetics when centralised control and corporate values win out — but it’s as much the widespread public hatred that’s the major cautionary tale. The values of surprise and choice win out, not just as abstract concepts but also as real personal preferences.
We once feared that robotics would eliminate jobs; the very word »robot« was coined (by Czech writer Karel Čapek's brother Josef) from a Czech word for forced labour. Yet in the end, robotic technology has extended human capability. It has carried us as far as space, and, through Logo and its Turtle, taught generations of kids maths, geometry, logic, and creative thinking through code. We seem to be at a similar fork in the road with machine learning. These tools can serve the interests of corporate control, optimised for passive consumption that extracts value from its human users. Or we can abuse and misuse the tools, take them apart and put them back together again, and apply them not in the sense that »everything looks like a nail« when all you have is a hammer, but as a precise set of techniques for solving specific problems. Muzak, in its final days, was nothing more than a pipe dream. What people wanted was music — and choice. Those choices won't come automatically. We may well have to hack them.