This repository gathers Python scripts that apply auto-encoder neural networks to vocalisation spectrograms, clustering them by the similarity of their frequency contours.
It was developed to assist bioacousticians in their repertoire discovery procedures, making deep self-supervised learning accessible to non-experts in the field.
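To illustrate the general approach, here is a minimal PyTorch sketch, not this repository's actual code: the architecture, spectrogram size, and the KMeans clustering step are illustrative assumptions. An auto-encoder is trained to reconstruct spectrograms, and its latent embeddings are then clustered.

```python
# Minimal sketch of the auto-encoder + clustering idea (illustrative only).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        # Encoder: compress a 1x128x128 spectrogram into a latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, latent_dim),
        )
        # Decoder: reconstruct the spectrogram from the latent vector
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 32 * 32),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch of (N, 1, 128, 128) vocalisation spectrograms
spectrograms = torch.rand(256, 1, 128, 128)

# Self-supervised training: the reconstruction error is the only signal,
# so no manual labels are needed (full-batch updates, for brevity)
for epoch in range(10):
    recon, _ = model(spectrograms)
    loss = loss_fn(recon, spectrograms)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Cluster the learned embeddings: vocalisations with similar
# frequency contours end up close together in latent space
with torch.no_grad():
    _, embeddings = model(spectrograms)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings.numpy())
```

The key design point is that the encoder is never told what the vocalisation types are; compressing spectrograms through a small latent bottleneck forces it to retain contour shape, which the clustering step then exploits.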
For a detailed description of the scientific motivation and experiments corresponding to this repository, please see:
Best, P., Marxer, R., Paris, S., & Glotin, H. (2023). Deep audio embeddings for vocalisation clustering. bioRxiv, 2023-03.