The CSAIL researchers suggest that as the software improves and learns to distinguish between instruments in the same family, it could become a vital tool for remixing and remastering older performances where the original recordings no longer exist. For example, the sound of a trumpet could be boosted while a piano is reduced, improving the overall mix years after a performance was first recorded. Musicians who are still learning an instrument could also isolate a specific part of a song they're trying to master.
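Once a performance has been separated into per-instrument stems, the kind of rebalancing described above reduces to scaling each stem and summing the results. A minimal sketch of that idea, assuming the separation has already happened (the `remix` helper and the sine-wave stand-in stems below are hypothetical, not part of MIT's tool):

```python
import numpy as np

def remix(stems, gains):
    """Recombine separated instrument stems with per-instrument gains.

    stems: dict mapping instrument name -> 1-D float waveform (same length)
    gains: dict mapping instrument name -> linear gain multiplier
           (instruments not listed keep a gain of 1.0)
    """
    mix = np.zeros_like(next(iter(stems.values())))
    for name, wave in stems.items():
        mix += gains.get(name, 1.0) * wave
    return mix

# Hypothetical stems: simple sine waves stand in for real separated audio.
t = np.linspace(0.0, 1.0, 8000)
stems = {
    "trumpet": 0.2 * np.sin(2 * np.pi * 440 * t),
    "piano":   0.2 * np.sin(2 * np.pi * 262 * t),
}

# Boost the trumpet, pull back the piano:
new_mix = remix(stems, {"trumpet": 1.5, "piano": 0.5})
```

In practice the hard part is producing the stems in the first place; the remix step itself is just per-track gain applied before summing, which is what a mixing console does.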
The software also has the potential to revolutionize the process of remixing songs, or creating mashups, which is probably an application MIT doesn't want to promote at this point. But the ability to simply click and extract a specific instrument's performance is something plenty of remix artists would love to add to their toolkits.