Researchers in Surrey are exploring how so-called “adaptive music” can improve the experience of playing video games.
Because of the interactive nature of video games, a composer often can’t know in advance what the player will do, or when, explains associate professor Philippe Pasquier, who heads the Metacreation Lab for Creative AI in SFU’s School of Interactive Arts and Technology at the Surrey campus.
“The events and timings are all dependent on the player’s actions,” he says.
“While music can easily be written to align with events, scenes and storylines in linear media such as films and television, it’s not that simple when it comes to video games.”
Pasquier says today’s generative music algorithms are trained on music written by human composers, but generate new music, or variations of it, in real time during play to best match what is happening in the game.
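The idea of music adapting to gameplay can be illustrated with a toy sketch. The controller below is purely hypothetical, not a system from Pasquier and Plut’s paper: it maps a single assumed game-state value, “intensity,” to two musical parameters, tempo and mode, the way a simple adaptive-music layer might.

```python
# Hypothetical sketch of an adaptive-music controller.
# "intensity" is an assumed game-state signal (0.0 = calm, 1.0 = combat);
# the function and parameter names are illustrative, not from the paper.

def adapt_music(game_state):
    """Map a simple game state to a tempo (BPM) and a mode."""
    intensity = game_state.get("intensity", 0.0)
    # Scale tempo linearly between a calm and a high-action setting.
    tempo = int(70 + intensity * 70)  # 70 BPM calm, up to 140 BPM in combat
    # Shift to a darker mode once the action passes a threshold.
    mode = "minor" if intensity > 0.5 else "major"
    return {"tempo": tempo, "mode": mode}

# Example: the player enters combat and intensity spikes.
print(adapt_music({"intensity": 0.9}))  # → {'tempo': 133, 'mode': 'minor'}
```

A real generative system would of course produce actual musical material rather than two parameters, but the control loop, read the game state, then steer the music to match, is the same in spirit.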
In a journal paper published on ScienceDirect, Pasquier and PhD student and composer Cale Plut survey how generative music is being used in video games, with an eye on what’s coming next.
The video game industry is one of the largest media industries in the world, with 65 per cent of American adults reporting that they play video games, according to the paper’s introduction.
“As the games industry becomes larger, more and more attention is being paid to the rigorous study and examination of games,” the report says. “While much of this study centers around the design of interaction, or the visual aspects of games, one of the most key components of games is audio. Even before the advent of digital games, audio has been a key component in the pure foundations of play. Audio is so key to play that it transcends human designed play – young animals vocalize their play with yips and growls. Audio is so fundamental to games that in 1958, when the first video game ‘Tennis for Two’ was developed, its gameplay was accompanied by audio.”