Automatic music similarity retrieval aims to have computers find songs that are similar to other songs. Most successful similarity retrieval methods rely on tags that humans have attached to songs, or on social techniques. Content-based retrieval, on the other hand, attempts to design algorithms that allow computers to identify similarity from the actual song content, i.e. the digital signal itself. As you might imagine, quantifying the aesthetics of a song is a difficult task, but this approach has the great advantage of not relying on meta-knowledge (e.g. artist, genre, etc.) about musical pieces. In Bill Manaris's lab at the College of Charleston, he and his students are engaged in such research. The tech website Ars Technica featured a short article on our work.
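To make the general idea of content-based similarity concrete, here is a toy sketch: extract a simple quantitative feature vector from each piece (here, just a pitch-class histogram over MIDI note numbers) and compare pieces with cosine similarity. This is purely illustrative and much simpler than the features our lab actually uses; the melodies and feature choice below are made up for the example.

```python
from collections import Counter
import math

def pitch_features(notes):
    """Toy feature vector: normalized histogram of the 12 pitch classes."""
    counts = Counter(n % 12 for n in notes)
    total = len(notes)
    return [counts.get(pc, 0) / total for pc in range(12)]

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Made-up example melodies (MIDI note numbers)
song_a = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale, ascending
song_b = [72, 71, 69, 67, 65, 64, 62, 60]   # same notes, descending
song_c = [60, 61, 62, 63, 64, 65, 66, 67]   # chromatic run

sim_ab = cosine_similarity(pitch_features(song_a), pitch_features(song_b))
sim_ac = cosine_similarity(pitch_features(song_a), pitch_features(song_c))
```

Because this feature ignores note order, the ascending and descending scales come out as maximally similar, while the chromatic run scores lower; real systems use many features precisely to avoid such blind spots.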
While on the topic, I’ll also mention some other related work we have done in this group. Using quantitative features similar to those used in the similarity retrieval approach, combined with Genetic Programming and Artificial Neural Networks (ANNs), we have developed an automated music composition system called NEvMuse (Neuro-Evolutionary Music Environment). Samples of evolved music pieces can be found here: NEvMuse
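The evolutionary idea behind such a system can be sketched in a few lines: repeatedly mutate a candidate melody and keep the mutant whenever a fitness function judges it at least as good. The sketch below is a drastically simplified (1+1) evolutionary loop, not NEvMuse itself; in particular, the hand-coded `fitness` function here is only a stand-in for the trained ANN critics the real system uses, and all constants are made up for illustration.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

SCALE_CLASSES = {0, 2, 4, 5, 7, 9, 11}  # C major pitch classes

def random_melody(length=16):
    """A random starting melody of MIDI note numbers."""
    return [random.randrange(55, 80) for _ in range(length)]

def fitness(melody):
    """Hand-coded proxy fitness in [0, 1]: reward in-scale notes and
    smooth (stepwise) motion. A stand-in for a learned ANN critic."""
    in_scale = sum(1 for n in melody if n % 12 in SCALE_CLASSES) / len(melody)
    smooth = sum(1 for a, b in zip(melody, melody[1:])
                 if abs(a - b) <= 2) / (len(melody) - 1)
    return 0.5 * in_scale + 0.5 * smooth

def mutate(melody, rate=0.2):
    """Nudge each note up or down a step or two with probability `rate`."""
    return [n + random.choice([-2, -1, 1, 2]) if random.random() < rate else n
            for n in melody]

start = random_melody()
best = start
for _ in range(500):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # hill-climb: keep non-worse mutants
        best = child
```

Replacing the hand-coded fitness with a critic learned from existing music is exactly what makes the neuro-evolutionary approach interesting: the evolved pieces are shaped by what the network has learned to "like".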