Paper Key : IRJ************856
Author: Asst. Prof. Mathews Abraham, Chris M F, Donald Mathew Sajan, Janice Francis, Niba Babu
Date Published: 02 Apr 2025
Abstract
The increasing diversity of music and the need for on-the-fly analysis have promoted interest in software that can automatically detect and interpret the instrument content of a song. This project aims to develop software that identifies the musical instruments in a music file and generates the associated musical notes. These capabilities benefit music producers, researchers, and students through automatic instrument tagging and notation that make music easier to learn and compose. The system extracts audio features using STFT and CQT, and a CNN-based U-Net model processes the spectrogram representations to detect instruments and produce accurate musical notes. The model is trained on a labeled dataset and delivers consistent classifications across genres. Its real-time functionality makes it highly adaptable, offering instrumental arrangement analysis for music producers and aiding teachers in music theory instruction. Future implementations include real-time MIDI translation and integration into Digital Audio Workstations (DAWs) to automate music composition. Overall, this project presents an end-to-end solution for real-time music analysis, combining instrument separation, deep learning-based classification, and music notation generation to enrich our interaction with music.
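As a concrete illustration of the pipeline the abstract outlines, the sketch below computes STFT and CQT spectrograms with librosa and passes one through a minimal U-Net-style CNN in PyTorch. This is not the authors' implementation: the layer sizes, the class count, and all identifiers (TinyUNet, N_CLASSES) are illustrative assumptions.

```python
# Minimal sketch of an STFT/CQT + U-Net instrument-detection pipeline.
# All model details here are assumptions for illustration only.
import numpy as np
import librosa
import torch
import torch.nn as nn
import torch.nn.functional as F

SR = 22050
N_CLASSES = 4  # hypothetical number of instrument classes

# --- Feature extraction: STFT and CQT magnitude spectrograms (in dB) ---
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)
y = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)  # stand-in for a music file

stft_db = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=1024, hop_length=512)))
cqt_db = librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=SR, hop_length=512, n_bins=84)))

# --- A tiny U-Net-style model: one encoder/decoder level with a skip connection ---
class TinyUNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)  # per-bin instrument logits

    def forward(self, x):
        e = self.enc(x)                # encoder features, kept for the skip path
        m = self.mid(self.down(e))     # bottleneck at half resolution
        u = self.up(m)                 # upsample back toward input resolution
        # Pad to undo the rounding from odd-sized pooling, then concatenate the skip.
        u = F.pad(u, (0, e.shape[-1] - u.shape[-1], 0, e.shape[-2] - u.shape[-2]))
        d = self.dec(torch.cat([e, u], dim=1))
        return self.head(d)

# Batch the STFT spectrogram as (batch, channel, freq, time) and run the model.
x = torch.from_numpy(stft_db).float().unsqueeze(0).unsqueeze(0)
logits = TinyUNet(N_CLASSES)(x)
print(stft_db.shape, cqt_db.shape, logits.shape)
```

The U-Net's skip connection is what makes this architecture suited to spectrogram tasks: it lets the decoder combine coarse, context-rich bottleneck features with fine time-frequency detail from the encoder, which matters when per-bin decisions (which instrument is active where) must stay sharp.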