How audio professionals “demix” vintage tracks and give them new life
Software from AudioSourceRE and Audionamix’s Xtrax Stems are among the first consumer options for automated demixing. Feed a song into Xtrax, for example, and the software spits out tracks for vocals, bass, drums, and “other,” the last term doing the heavy lifting for the wide range of sounds heard in most music. Ultimately, perhaps, a universal app will genuinely and instantly demix an entire recording; for now, it’s one track at a time, and the process has become an art form in its own right.
What the ear can hear
At Abbey Road, James Clarke began working in earnest on his demixing project around 2010. In his research, he came across an article written in the 1970s about a technique used to split video signals into component images, such as faces and backgrounds. The paper reminded him of his time as a master’s student in physics, working with spectrograms, which show the changing frequencies of a signal over time.
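To make the idea of a spectrogram concrete, here is a minimal sketch, not Clarke’s actual tooling: slice a signal into overlapping windows and take the magnitude of each window’s Fourier transform. The frame length, hop size, and the synthetic 440 Hz test tone are all illustrative choices.

```python
import numpy as np

def spectrogram(signal, frame_len=1024, hop=256):
    """Magnitude spectrogram: rows are frequency bins, columns are time frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)  # taper each frame to reduce spectral leakage
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1)).T

# A pure 440 Hz tone sampled at 8 kHz should concentrate energy in one bin.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec[:, 0].argmax()
peak_hz = peak_bin * sr / 1024  # maps back to roughly 440 Hz
```

Real libraries (for instance, `scipy.signal.stft`) add refinements such as padding and overlap-add inversion, but the freq-versus-time picture is the same one Clarke describes reading instruments from.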
Spectrograms could visualize the signals, but the technique described in the article, called non-negative matrix factorization, offered a way to process the information. If the technique worked for video signals, Clarke thought, it could also work for audio. “I started looking at how instruments made up a spectrogram,” he says. “I could start to recognize, ‘This is what drums look like, this is what a voice looks like, this is what a bass guitar looks like.’” About a year later, he had produced software that could do a convincing job of demixing audio by its frequencies. His first big breakthrough can be heard on the 2016 remaster of the Beatles’ Live at the Hollywood Bowl, the band’s only official live album. The original LP, released in 1977, is difficult to listen to because of the shrill cries of the crowd.
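The factorization itself can be sketched in a few lines. This toy example, which is not Clarke’s system, uses the standard Lee–Seung multiplicative updates to split a non-negative “spectrogram” matrix V into spectral templates W and per-frame activations H; the rank, iteration count, and synthetic two-source data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, rank=2, iters=200, eps=1e-9):
    """Approximate V (freq x time) as W @ H with non-negative factors,
    using multiplicative updates that never produce negative entries."""
    n_freq, n_time = V.shape
    W = rng.random((n_freq, rank)) + eps
    H = rng.random((rank, n_time)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": two sources with fixed spectral shapes and varying gains.
spectra = np.array([[1.0, 0.0], [0.8, 0.1], [0.0, 1.0], [0.1, 0.9]])
gains = rng.random((2, 50))
V = spectra @ gains
W, H = nmf(V)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # near zero for this separable case
```

Each column of W plays the role of one “instrument” fingerprint, which is what lets a crowd, a voice, or a bass guitar be pulled apart once its spectral shape has been recognized.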
After unsuccessful attempts to reduce the crowd noise, Clarke finally had a “moment of serendipity.” Rather than treating the screaming fans as noise in the signal that needed to be cleaned up, he decided to model them as another instrument in the mix. By identifying the crowd as its own individual voice, Clarke was able to tame the Beatlemaniacs, isolate them, and push them into the background, which in turn moved the four musicians to the sonic foreground.
Clarke has become a leading industry expert on demixing. He helped salvage the Grammy-nominated 38-CD set Woodstock – Back to the Garden: The Definitive 50th Anniversary Archive, which aimed to bring together all the performances from the 1969 mega-festival. (Disclosure: I contributed liner notes to the set.) At one point, during some of the festival’s heaviest rains, sitar virtuoso Ravi Shankar went on stage. The biggest problem with the recording of that performance, however, was not the rain but the fact that Shankar’s then producer made off with the multitrack tapes. After listening to them in the studio, Shankar deemed them unusable and instead released a fake At the Woodstock Festival LP, recorded entirely in the studio without a note from Woodstock itself. The original festival multitracks were long gone, leaving future reissue producers only a damaged-sounding mono recording from the concert soundboard.
Using only this monophonic recording, Clarke was able to separate the sitar master’s instrument from the rain, sonic crud, and the tabla player sitting a few feet away. The result was “both completely authentic and precise,” with ambient tracks still in the mix, says box set co-producer Andy Zax.
“The possibilities demixing gives us to recover the unrecoverable are really exciting,” says Zax. Still, some might view the technique the way they do the colorizing of classic black-and-white films. “There is always this tension. You want to be a reconstructor, and you don’t really want to impose your will on it. So that’s the challenge.”
Into the deep end
Around the time Clarke finished work on the Beatles’ Hollywood Bowl project, he and other researchers were up against a wall. Their techniques could handle fairly simple signals, but they couldn’t keep up with instruments that have a lot of vibrato, the subtle changes in pitch that characterize some instruments and the human voice. Engineers realized they needed a new approach. “This is what led to deep learning,” says Derry Fitzgerald, founder and chief technology officer of AudioSourceRE, a music-software company.