CAME: Content- and Context-Aware Music Embedding for Recommendation

Abstract:

Traditional recommendation methods suffer from limited performance, which can be addressed by incorporating abundant auxiliary/side information. This article focuses on a personalized music recommender system that incorporates rich content and context data in a unified and adaptive way to address this limitation. The content information includes music textual content, such as metadata, tags, and lyrics, while the context data incorporate users' behaviors, including music listening records, music playing sequences, and sessions. Specifically, a heterogeneous information network (HIN) is first presented to incorporate different kinds of content and context data. Then, a novel method called content- and context-aware music embedding (CAME) is proposed to obtain low-dimensional dense real-valued feature representations (embeddings) of music pieces from the HIN. Notably, one music piece generally highlights different aspects when interacting with different neighbors, and it should therefore have a distinct representation for each. CAME seamlessly combines deep learning techniques, including convolutional neural networks and attention mechanisms, with the embedding model to adaptively capture the intrinsic features of music pieces as well as their dynamic relevance and interactions. Finally, we infer users' general musical preferences as well as their contextual preferences for music and propose a content- and context-aware music recommendation method. Comprehensive experiments as well as quantitative and qualitative evaluations have been performed on real-world music data sets, and the results show that the proposed recommendation approach outperforms state-of-the-art baselines and handles sparse data effectively.
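
To make the core idea concrete, the sketch below illustrates attention-based aggregation over HIN neighbors, the mechanism by which one music piece can receive different representations depending on which neighbors it interacts with. This is a minimal illustration, not the authors' implementation: the toy HIN, the function `context_aware_embedding`, and the randomly initialized embeddings are all hypothetical stand-ins, and the CNN-based content encoder described in the abstract is omitted.

```python
import numpy as np

# Toy heterogeneous information network (HIN): nodes are music pieces,
# tags (content), and users (context); edges link a piece to its
# neighbors. All names here are illustrative only.
hin = {
    "song_a": ["tag_rock", "user_1", "user_2"],
    "song_b": ["tag_jazz", "user_2"],
}

rng = np.random.default_rng(0)
dim = 8
# Randomly initialized node embeddings stand in for learned ones.
nodes = {n for piece, nbrs in hin.items() for n in (piece, *nbrs)}
emb = {n: rng.normal(size=dim) for n in nodes}

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def context_aware_embedding(piece):
    """Attention-weighted aggregation of a piece's HIN neighbors:
    the same piece is represented differently depending on which
    neighbors (aspects) dominate the attention weights."""
    q = emb[piece]
    neigh = np.stack([emb[n] for n in hin[piece]])
    weights = softmax(neigh @ q)  # relevance of each neighbor to the piece
    return weights @ neigh        # aspect-dependent representation

print(context_aware_embedding("song_a"))
```

In a full system along the lines the abstract describes, the neighbor and piece embeddings would be learned jointly rather than sampled at random, with a convolutional encoder producing the intrinsic content features from metadata, tags, and lyrics before attention weighs each neighbor's contribution.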