Research Article

Subgenre classification in hip hop music: An analysis of machine learning architectures

Year 2025, Volume: 10 Issue: 2, 235 - 247, 30.04.2025
https://doi.org/10.31811/ojomus.1624182

Abstract

Digitalisation and the proliferation of online music listening platforms have led to the exponential growth of music data on the Internet, necessitating automated systems for data organisation and analysis. In this context, automatic genre classification has become a significant approach for improving the efficiency of music discovery and recommendation. While considerable progress has been made in genre classification, subgenre classification remains an under-researched area, despite its potential to provide more personalised listening experiences. This study addresses this gap by focusing on the classification of three hip-hop subgenres, namely boombap, jazzrap and trap, utilising a comprehensive dataset comprising 750 audio files. The study extracts a total of 31 features, encompassing both spectral and psychoacoustic characteristics. Machine learning models such as Logistic Regression, K-Nearest Neighbours, Decision Tree and Random Forest are employed, along with an Artificial Neural Network, which attains the highest accuracy of 85%. The findings reveal that subgenre classification poses challenges, especially for categories such as jazzrap and boombap, which share overlapping musical characteristics; in contrast, trap, with its distinct timbral characteristics, was classified with higher accuracy. This study contributes to the scant research on subgenre classification by demonstrating the viability of deep learning techniques for improving classification precision on comprehensive datasets with intricate subgenre categorisations. The research also underscores the pivotal role of subgenre classification on digital music platforms: accurate identification of subgenres not only elevates the overall listening experience for users but also facilitates the discovery of music that resonates closely with their individual preferences.
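The paper does not publish its code, but the model comparison the abstract describes can be sketched with scikit-learn. In the illustration below, synthetic Gaussian clusters stand in for the real dataset (750 tracks, 31 spectral and psychoacoustic features, which in practice would be extracted with an audio library such as librosa); the overlapping clusters loosely mimic the boombap/jazzrap confusion the authors report. All names and hyperparameters here are assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def compare_models(X, y, seed=0):
    """Scale features, fit each classifier, return test-set accuracies."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed
    )
    scaler = StandardScaler().fit(X_tr)  # feature scaling, as in the study
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DecisionTree": DecisionTreeClassifier(random_state=seed),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=seed),
        "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                             random_state=seed),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}

# Synthetic stand-in for the 750-track, 31-feature dataset: three classes
# (boombap, jazzrap, trap) drawn from partially overlapping Gaussian clusters.
rng = np.random.default_rng(42)
n_per_class, n_features = 250, 31
centers = rng.normal(0, 2, size=(3, n_features))
X = np.vstack([rng.normal(c, 1.0, size=(n_per_class, n_features)) for c in centers])
y = np.repeat(["boombap", "jazzrap", "trap"], n_per_class)

accuracies = compare_models(X, y)
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

On real audio features the relative ranking would depend on the data, but the pipeline shape (scale, fit each model, compare held-out accuracy) matches the workflow the abstract outlines.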

References

  • AllMusic. (n.d.). Jazz rap. AllMusic. Retrieved February 17, 2025, from https://www.allmusic.com/subgenre/jazz-rap-ma0000012180
  • Ashraf, M., Ahmad, F., Rauqir, R., Abid, F., Naseer, M., & Haq, E. (2021). Notice of violation of IEEE publication principles: Emotion recognition based on musical instrument using deep neural network. In 2021 International Conference on Frontiers of Information Technology (FIT) (pp. 323-328). IEEE. https://doi.org/10.1109/FIT53504.2021.00066
  • Atahan, Y., Elbir, A., Keskin, A. E., Kiraz, O., Kirval, B., & Aydın, N. (2021). Music genre classification using acoustic features and autoencoders. In 2021 Innovations in Intelligent Systems and Applications Conference (ASYU) (pp. 1-5). IEEE. https://doi.org/10.1109/ASYU52992.2021.9598979
  • Babar, K. (2024). Performance evaluation of decision trees with machine learning algorithm. International Journal of Scientific Research in Engineering & Management, 8(5), 1-5. https://doi.org/10.55041/ijsrem34179
  • Bahuleyan, H. (2018). Music genre classification using machine learning techniques. arXiv. https://arxiv.org/abs/1804.01149
  • Baladram, S. (2024, Sep 6). Scaling numerical data explained: A visual guide with code examples for beginners. Medium. https://medium.com/towards-data-science/scaling-numerical-data-explained-a-visual-guide-with-code-examples-for-beginners-11676cdb45cb
  • Boyko, N. I., & Mykhaylyshyn, V. Y. (2023). K-NN’s nearest neighbors method for classifying text documents by their topics. Radio Electronics, Computer Science, Control, 3, 83-96. https://doi.org/10.15588/1607-3274-2023-3-9
  • Caparrini, A., Arroyo, J., Pérez-Molina, L., & Sánchez-Hernández, J. (2020). Automatic subgenre classification in an electronic dance music taxonomy. Journal of New Music Research, 49(3), 269-284. https://doi.org/10.1080/09298215.2020.1761399
  • D’Errico, M. (2015). Off the grid: Instrumental hip-hop and experimentation after the golden age. In J. A. Williams (Ed.), The Cambridge companion to hip-hop (pp. 280-291). Cambridge University Press.
  • Elbir, A., Çam, H. B., İyican, M. E., Öztürk, B., & Aydın, N. (2018). Music genre classification and recommendation by using machine learning techniques. In 2018 Innovations in Intelligent Systems and Applications Conference (ASYU) (pp. 1-5). IEEE. https://doi.org/10.1109/ASYU.2018.8554016
  • Exarchos, M. (2019). Boom bap ex machina: Hip-hop aesthetics and the Akai MPC. In Producing music (Perspectives on music production). Routledge.
  • Eyben, F. (2016). Real-time speech and music classification by large audio feature space extraction. Springer. https://doi.org/10.1007/978-3-319-27299-3
  • James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning: with applications in R. Springer.
  • Jijo, B. T., & Abdulazeez, A. M. (2021). Classification based on decision tree algorithm for machine learning. Journal of Applied Science and Technology Trends, 2(1), 20-28. https://doi.org/10.38094/jastt20165
  • Jurafsky, D., & Martin, J. H. (2024). Speech and language processing. https://web.stanford.edu/~jurafsky/slp3/old_aug24/
  • Karatana, A., & Yıldız, O. (2017). Music genre classification with machine learning techniques. In 2017 25th Signal Processing and Communications Applications Conference (SIU) (pp. 1-4). IEEE. https://doi.org/10.1109/SIU.2017.7960694
  • Knees, P., & Schedl, M. (2016). Music similarity and retrieval: An introduction to audio- and web-based strategies. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-662-49722-7
  • Li, Z., & Song, P. (2021). Audio similarity detection algorithm based on Siamese LSTM network. In 6th International Conference on Intelligent Computing and Signal Processing (ICSP) (pp. 182-186). IEEE. https://doi.org/10.1109/ICSP51882.2021.9408942
  • Mehrotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of artificial neural nets. MIT Press.
  • Mlynar, P. (2013). In search of boom bap. Red Bull Music Academy Daily. https://daily.redbullmusicacademy.com/2013/11/in-search-of-boom-bap
  • Najafabadi, M. M., Villanustre, F., Khoshgoftaar, T. M., Seliya, N., Wald, R., & Muharemagic, E. (2015). Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1), 1-21. https://doi.org/10.1186/s40537-014-0007-7
  • Reese, E. (2022). The history of trap. Eric Reese.
  • Quinto, R. J. M., Atienza, R. O., & Tiglao, N. M. C. (2017). Jazz music sub-genre classification using deep learning. In TENCON 2017 - 2017 IEEE Region 10 Conference (pp. 3111-3116). IEEE. https://doi.org/10.1109/TENCON.2017.8228396
  • Scaringella, N., & Zoia, G. (2005). On the modeling of time information for automatic genre recognition systems in audio signals. In Proceedings of the 6th International Conference on Music Information Retrieval (ISMIR 2005). Queen Mary, University of London.
  • Segal, M. R. (2004). Machine learning benchmarks and random forest regression. eScholarship. https://escholarship.org/uc/item/35x3v9t4
  • Sönmez, Y. Ü., & Varol, A. (2019). New trends in speech emotion recognition. In 2019 7th International Symposium on Digital Forensics and Security (ISDFS) (pp. 1-7). IEEE. https://doi.org/10.1109/ISDFS.2019.8757528
  • Tzanetakis, G., & Cook, P. (2002). Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5), 293-302. https://doi.org/10.1109/TSA.2002.800560
  • Williams, J. A. (2010). The construction of jazz rap as high art in hip-hop music. The Journal of Musicology, 27(4), 435-459.
  • Wu, Y., & Feng, J. (2018). Development and application of artificial neural network. Wireless Personal Communications, 102(4), 1645-1656. https://doi.org/10.1007/s11277-017-5224-x
  • Yadav, H. (2022, Jul 5). Dropout in neural networks. Medium. https://towardsdatascience.com/dropout-in-neural-networks-47a162d621d9

There are 30 citations in total.

Details

Primary Language English
Subjects Sound and Music Computing, Music Technology and Recording
Journal Section Research article
Authors

Can Paşa 0000-0001-9459-2200

Abdurrahman Tarikci 0000-0002-7100-3570

Publication Date April 30, 2025
Submission Date January 21, 2025
Acceptance Date March 14, 2025
Published in Issue Year 2025 Volume: 10 Issue: 2

Cite

APA Paşa, C., & Tarikci, A. (2025). Subgenre classification in hip hop music: An analysis of machine learning architectures. Online Journal of Music Sciences, 10(2), 235-247. https://doi.org/10.31811/ojomus.1624182