Music score generator implemented in Python. The system provides two outputs: the expected transcription of each generated score in multiple encodings, and the corresponding score image in both clean and artificially distorted versions. Together, these outputs form the ground-truth pairs needed to train end-to-end deep learning Optical Music Recognition (OMR) systems.
For the generation process, three different methods of algorithmic composition are used to obtain compositions with diverse musical features:
- Random generation based on the normal distribution. Pitches are sampled from a normal distribution centered on the pitch of the space around the central line of the staff, producing a symmetric spread around that central pitch.
- Random walk. A random walk is a mathematical formalization of a trajectory made up of successive random steps. In this system, the walk always begins at the central pitch of the pitch range defined for the system. After each emitted pitch, one of three equally likely steps is taken:
  - One step up: the next pitch is one position higher than the previous one.
  - One step down: the next pitch is one position lower than the previous one.
  - No step: the next pitch is the same as the previous one.
- Sonification of the logistic equation.
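The three composition methods above can be sketched roughly as follows. This is a minimal illustration, not the generator's actual implementation: the pitch range, distribution parameters, and logistic-map constants are assumptions chosen for the example.

```python
import random

# Assumed pitch range for illustration (the real generator's encoding may differ).
PITCH_RANGE = list(range(60, 80))
CENTER = len(PITCH_RANGE) // 2  # index of the central pitch

def normal_pitches(n, sigma=2.0):
    """Draw pitches from a normal distribution centered on the middle pitch."""
    out = []
    for _ in range(n):
        i = int(round(random.gauss(CENTER, sigma)))
        i = max(0, min(len(PITCH_RANGE) - 1, i))  # clamp to the valid range
        out.append(PITCH_RANGE[i])
    return out

def random_walk_pitches(n):
    """Random walk from the central pitch; steps -1, 0, +1 are equally likely."""
    i = CENTER
    out = [PITCH_RANGE[i]]
    for _ in range(n - 1):
        i += random.choice([-1, 0, 1])
        i = max(0, min(len(PITCH_RANGE) - 1, i))
        out.append(PITCH_RANGE[i])
    return out

def logistic_pitches(n, r=3.9, x=0.5):
    """Iterate the logistic map x_{k+1} = r*x*(1-x) and map x onto the range."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(PITCH_RANGE[int(x * (len(PITCH_RANGE) - 1))])
    return out
```

With `r` near 4, the logistic map behaves chaotically, which is what makes its sonification musically less predictable than the other two methods.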
For complete details about the implementation, please refer to any of the works cited below.
There are two versions available:
old_scoregenerator
: Contains the score generator with all three methods of algorithmic composition.

scoregenerator
: A newer version that uses only the random walk algorithm for composition, as it has yielded the best results in end-to-end OMR transcription. Check the corresponding README for a detailed explanation of this version.
However, please bear in mind that neither of these versions is currently maintained.
@inproceedings{alfaro2019approaching,
title = {{Approaching End-to-End Optical Music Recognition for Homophonic Scores}},
author = {Alfaro-Contreras, Mar{\'\i}a and Calvo-Zaragoza, Jorge and I{\~n}esta, Jos{\'e} M},
booktitle = {{Proceedings of the 9th Iberian Conference on Pattern Recognition and Image Analysis}},
pages = {147--158},
year = {2019},
publisher = {Springer},
address = {Madrid, Spain},
month = jul,
doi = {10.1007/978-3-030-31321-0_13},
}
@article{alfaro2023optical,
title = {{Optical music recognition for homophonic scores with neural networks and synthetic music generation}},
author = {Alfaro-Contreras, Mar{\'\i}a and I{\~n}esta, Jos{\'e} M and Calvo-Zaragoza, Jorge},
journal = {{International Journal of Multimedia Information Retrieval}},
volume = {12},
number = {1},
pages = {12--24},
year = {2023},
publisher = {Springer},
doi = {10.1007/s13735-023-00278-5},
}
This work is part of the R&D&I project PID2020-118447RA-I00 (MultiScore), funded by MCIN/AEI/10.13039/501100011033.
This work is licensed under the MIT License.