Scientists develop revolutionary AI system to find music you’ll like

By Mr Khalil Damak (graduate student, Department of Computer Science and Engineering, University of Louisville), Dr Olfa Nasraoui (professor, Department of Computer Engineering and Computer Science, University of Louisville), and Dr William Scott Sanders (assistant professor, Department of Communication, University of Louisville)

Image credit: Tada Images / Shutterstock.com

Recommendation systems are the intelligent engines behind online platforms such as Amazon, YouTube, and Netflix, which identify music, films, or books you might like based on your previous interactions with the platform. Such systems use so-called 'Deep Learning', a branch of artificial intelligence (AI). In a recent paper in Frontiers in Big Data, scientists from the University of Louisville, KY, USA, use Deep Learning to propose a more accurate and personalized recommendation system for finding songs online. Here, the authors explain how their novel system works and what makes it different.

What are recommender systems and why are they important?

Recommender systems are computational engines whose goal is to learn users’ preferences in order to personalize their experiences online. They have become prevalent in many fields, and are now essential on e-commerce websites, streaming services, and other internet-based platforms that offer millions of products or options to choose from (e.g. Amazon.com, Netflix, and Spotify). If you have ever used one of these platforms, your experience in browsing for the next TV series to watch, the next artist to discover, or the perfect desk chair you needed while working from home was most likely uniquely tailored to you by specialized recommendation algorithms, and was likely different from the experience of other users seeking the same products or services. That is all thanks to recommender systems. In a nutshell, recommender systems help guide you through millions, or sometimes billions, of items by filtering and ranking the items predicted to be most relevant to you at the top of the list of recommendations you see, or by presenting you with curated finds that you might like.

How do recommender systems work?

State-of-the-art recommender systems rely on machine learning algorithms that ingest millions of clicks, ratings, or other historical transaction records to discover patterns of interest that generalize to future, unseen situations. Some recommender systems additionally rely on user or item content or metadata to capture more complex patterns; these are called hybrid recommender systems.
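
For readers curious about what such an algorithm looks like in practice, below is a minimal, illustrative sketch in Python. It is not the model from our paper: it simply learns hidden 'taste' factors for a handful of hypothetical users and items from a toy set of ratings (a classic technique known as matrix factorization) and then ranks the items for one user. Every name and number in it is made up for illustration.

```python
# Minimal, illustrative matrix factorization sketch (toy data, not our model):
# learn latent user/item factors from ratings with stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2        # toy sizes; k = number of latent factors

# Observed (user, item, rating) interactions -- hypothetical toy data.
ratings = [(0, 1, 5.0), (0, 3, 3.0), (1, 1, 4.0), (2, 0, 2.0), (3, 4, 5.0)]

P = 0.1 * rng.standard_normal((n_users, k))   # user factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors

lr, reg = 0.05, 0.02                 # learning rate and regularization strength
for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                  # prediction error for this rating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Rank all items for user 0 by predicted rating, highest first.
scores = P[0] @ Q.T
print(np.argsort(-scores))
```

Real systems of this kind are trained on millions of users and items, and hybrid recommenders additionally feed content features, in our case the songs' MIDI data, into the model.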

Leveraging song content in recommendation

In our recently published paper 'Sequence-based explainable hybrid song recommendation' in Frontiers in Big Data, we proposed a hybrid recommender system for song recommendation that relies on users' interactions with songs, as well as the sequential content of those songs, to generate personalized song recommendations. Even more fascinating, our proposed recommender system was able to identify the 10-second melodic snippet of a recommended song that the user would likely enjoy the most. This snippet, which can serve as an explanation for the recommendation, is in the same format as the input data used to train the machine learning model: Musical Instrument Digital Interface (MIDI) data. The personalized musical snippet is unique to each user and promotes transparency in AI, because the user gets a clearer idea of why a song was recommended.


MIDI files are polyphonic digital instrumental music files used to create music. They are composed of event messages that are consecutive in time. Each message includes a type (such as a note event), the notation (which note is played), the time (when it is played), and the velocity (how rapidly and forcefully it is played). These events are distributed over 16 channels of information, which are independent paths over which messages travel, and each channel can be programmed to play one instrument.
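
As a concrete illustration, the short Python snippet below shows how such event messages can be read with the open-source mido library. This is a hypothetical example for readers rather than the preprocessing pipeline from our paper, and 'song.mid' is only a placeholder file name.

```python
# Hypothetical sketch: reading MIDI event messages with the "mido" library.
import mido

mid = mido.MidiFile("song.mid")      # placeholder file name
for msg in mid:                      # iterating yields messages in time order
    if msg.type in ("note_on", "note_off"):
        # channel:  which of the 16 independent instrument channels
        # note:     which pitch is played
        # velocity: how rapidly and forcefully the note is struck
        # time:     seconds elapsed since the previous message
        print(msg.channel, msg.note, msg.velocity, msg.time)
```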

We used MIDI data as input because it encapsulates a large amount of information about the instrumental portion of a song, while being simple enough to convert into a sequence of numerical features (also known as a multi-dimensional time series). These time series are ideal inputs for a special type of machine learning model known as a Recurrent Neural Network (RNN). To the best of our knowledge, our work is the first to exploit sequential song content, and more specifically MIDI content, as input for learning state-of-the-art explainable recommender systems, and the first to introduce the personalized instrumental snippet explanation for a song recommendation.
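
To give a rough idea of how such a model fits together, here is a small, hypothetical PyTorch sketch: a GRU (a common kind of RNN) reads a MIDI-derived time series and turns it into a song representation, which is matched against a learned user representation to produce a preference score. The layer sizes, feature choices, and scoring rule are illustrative assumptions, not our published architecture.

```python
# Hypothetical sketch: an RNN (GRU) scoring a MIDI-derived time series for a user.
import torch
import torch.nn as nn

class SongScorer(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_users=1000, emb=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)  # reads the sequence
        self.user_emb = nn.Embedding(n_users, emb)                # one vector per user
        self.proj = nn.Linear(hidden, emb)                        # song vector

    def forward(self, midi_seq, user_ids):
        # midi_seq: (batch, time_steps, n_features), e.g. note, velocity, channel, duration
        _, h = self.rnn(midi_seq)                 # h: final hidden state (1, batch, hidden)
        song_vec = self.proj(h[-1])               # (batch, emb)
        user_vec = self.user_emb(user_ids)        # (batch, emb)
        return (song_vec * user_vec).sum(dim=1)   # dot product = predicted preference

model = SongScorer()
scores = model(torch.randn(2, 100, 4), torch.tensor([3, 7]))  # toy batch of 2 songs
print(scores.shape)   # torch.Size([2])
```

In a real system, scores like these would be trained on historical listening data and then used to rank candidate songs for each listener.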

Why is 'explainability' important in recommendation systems?

Explaining why an item was recommended to a user is important. First, explanations can reveal to the user which parts of the input data played a role in the algorithmic predictions, which can help them scrutinize the predictions for potential bias or errors. Second, explanations can help users make more informed decisions based on the algorithmic recommendations they receive, such as whether to follow a recommendation. Imagine that Jane likes opera music but is surprised to be recommended a rock song like Bohemian Rhapsody by Queen. The recommendation is much less shocking if it is accompanied by an explanation consisting of an opera-like segment from the song, because the recommended song happens to contain elements of the opera genre.

What are the main findings and what can we expect next?

Our evaluation results show that our proposed recommender system achieves competitive performance compared with state-of-the-art algorithms in the field, with the added advantage of being able to explain its predictions. The results of a user study furthermore showed that the 10-second music segment explanation improved users' satisfaction with the recommendations and enhanced the transparency of the system.

In the future, we can expect to see similarly successful explainable recommendation approaches in other domains such as news, where the most relevant sentence from a news article could be used to explain the recommendation.

If you have recently published a research paper with Frontiers and would like to write an editorial about your research, get in touch with the Science Communications team at press@frontiersin.org with ‘guest editorial’ in your subject line.

REPUBLISHING GUIDELINES: Open access and sharing research is part of Frontiers’ mission. Unless otherwise noted, you can republish articles posted on the Frontiers news site — as long as you include a link back to the original research. Selling the articles is not allowed.