Music Language Models for Automatic Music Transcription (2020)
Abstract
Much like natural language, music is highly structured, with strong priors on the likelihood of note sequences. In automatic speech recognition (ASR), these priors are called language models; they are used in addition to acoustic models and contribute greatly to the success of today's systems. However, in Automatic Music Transcription (AMT), ASR's musical equivalent, Music Language Models (MLMs) are rarely used. AMT can be defined as the process of extracting a symbolic representation from an audio signal, describing which notes were played at what time. In this thesis, we investigate the design of MLMs using recurrent neural networks (RNNs) and their use for AMT. We first look into MLM performance on a polyphonic prediction task. ...
automatic music transcription – music language models – symbolic music modelling – music prediction – neural networks – long short-term memory – music information retrieval
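The abstract describes RNN-based (LSTM) music language models trained on a polyphonic prediction task, i.e. predicting the next frame of a binary piano roll from the frames before it. As a rough illustration only, a minimal sketch of such a model might look like the following, assuming a PyTorch implementation over 88-key piano-roll frames; the class name, hyperparameters, and training details are illustrative and not taken from the thesis:

```python
import torch
import torch.nn as nn

class PianoRollLSTM(nn.Module):
    """Minimal LSTM music language model over binary piano-roll frames.

    Each time step is an 88-dimensional binary vector (one bit per piano key);
    the model outputs, for every step, logits for the keys active in the next frame.
    """

    def __init__(self, n_pitches=88, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(n_pitches, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_pitches)

    def forward(self, frames):
        # frames: (batch, time, n_pitches) binary piano roll
        hidden, _ = self.lstm(frames)
        return self.output(hidden)  # next-frame logits at each time step


# Toy usage: predict frame t+1 from frames up to t, trained with
# per-pitch binary cross-entropy (a common choice for polyphonic prediction).
model = PianoRollLSTM()
roll = torch.randint(0, 2, (4, 100, 88)).float()   # random stand-in piano rolls
logits = model(roll[:, :-1])
loss = nn.functional.binary_cross_entropy_with_logits(logits, roll[:, 1:])
loss.backward()
```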
Information
- Author
- Ycart, Adrien
- Institution
- Queen Mary University of London
- Supervisors
- Publication Year
- 2020
- Upload Date
- Oct. 1, 2020