Commit be6c0b9f authored by Alain Riou

initial commit

Showing with 544 additions and 0 deletions
.gitignore 0 → 100644
.idea
cache
data/
logs
wandb
!src/data
**/__pycache__
.project-root 0 → 100644
# this file is required for inferring the project root directory
# do not delete
LICENSE.md 0 → 100644
GNU Lesser General Public License
=================================
_Version 3, 29 June 2007_
_Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>_
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
### 0. Additional Definitions
As used herein, “this License” refers to version 3 of the GNU Lesser
General Public License, and the “GNU GPL” refers to version 3 of the GNU
General Public License.
“The Library” refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An “Application” is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A “Combined Work” is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the “Linked
Version”.
The “Minimal Corresponding Source” for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The “Corresponding Application Code” for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
### 1. Exception to Section 3 of the GNU GPL
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
### 2. Conveying Modified Versions
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
* **a)** under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
* **b)** under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
### 3. Object Code Incorporating Material from Library Header Files
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
* **a)** Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
* **b)** Accompany the object code with a copy of the GNU GPL and this license
document.
### 4. Combined Works
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
* **a)** Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
* **b)** Accompany the Combined Work with a copy of the GNU GPL and this license
document.
* **c)** For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
* **d)** Do one of the following:
- **0)** Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
- **1)** Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that **(a)** uses at run time
a copy of the Library already present on the user's computer
system, and **(b)** will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
* **e)** Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option **4d0**, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option **4d1**, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
### 5. Combined Libraries
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
* **a)** Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
* **b)** Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
### 6. Revised Versions of the GNU Lesser General Public License
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License “or any later version”
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
README.md 0 → 100644
# PESTO: Pitch Estimation with Self-Supervised Transposition-equivariant Objective
**tl;dr:** Fast pitch estimation with self-supervised learning
This repository implements the full code of the [PESTO](https://arxiv.org/abs/2309.02265) paper,
which received the Best Paper Award at [ISMIR 2023](https://ismir2023.ismir.net/).
The purpose of this repository is to provide the whole pipeline for training a PESTO model.
End-users who do not need the specific implementation details can instead check [this repository](https://github.com/SonyCSLParis/pesto).
## Setup
```shell
git clone https://github.com/SonyCSLParis/pesto-full.git
cd pesto-full
pip install -r requirements.txt
# or
conda env create -f environment.yml
```
**Extra dependencies:**
- [mir_eval](https://craffel.github.io/mir_eval/) for computing metrics
- [scikit-learn](https://scikit-learn.org) for cross-validation
- [wandb](https://wandb.ai) for cool logging
**Troubleshooting:** The latest version of `nnAudio` (0.3.2) uses the deprecated NumPy alias `np.float`, which raises errors with recent NumPy versions.
Just overwrite the problematic files by replacing `np.float` with `float`.
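A minimal sketch of such a patch, assuming `nnAudio` is importable from the active environment and GNU `sed` is available:

```shell
# locate the installed nnAudio package and replace the deprecated alias in place
# (the \b word boundary leaves np.float32 / np.float64 untouched)
NNAUDIO_DIR=$(python -c "import nnAudio, os; print(os.path.dirname(nnAudio.__file__))")
find "$NNAUDIO_DIR" -name "*.py" -exec sed -i 's/np\.float\b/float/g' {} +
```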
## Usage
This repository is implemented in [PyTorch](https://pytorch.org/) and relies on [Lightning](https://lightning.ai/) and [Hydra](https://hydra.cc/).
It follows the structure of the [lightning-hydra-template](https://github.com/ashleve/lightning-hydra-template).
### Basic usage
The main training script is `src/train.py`.
To train the model and log metrics in a csv file, you can run the following command:
```shell
python src/train.py data=mir-1k logger=csv
```
To use different loggers, just pick an existing configuration (see `configs/logger`) or create your own config,
then replace `logger=csv` with `logger=my_logger`.
In particular, some logging features are designed to be used with [W&B](https://wandb.ai).
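For example, assuming a `wandb` configuration is defined in `configs/logger` (check the folder for the exact name):

```shell
python src/train.py data=mir-1k logger=wandb
```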
### Training on a custom dataset
To deal with arbitrarily nested dataset structures, datasets are specified as `csv` files.
For training on your own data, create a new YAML file `configs/data/my_dataset.yaml` and specify the path to:
- a text file containing the list of audio files in your dataset
- (optional) a text file containing the corresponding pitch annotations
Since PESTO is a fully self-supervised method, pitch annotations are never used during training;
however, if provided, they are used in the validation step to compute the metrics.
To generate such files, one can take advantage of the command `find`. For example:
```shell
find MIR-1K/Vocals -name "*.wav" | sort > mir-1k.csv
find MIR-1K/PitchLabel -name "*.csv" | sort > mir-1k_annot.csv
```
will recursively explore the appropriate directories and generate the text files listing the audio files and annotations, respectively.
Note the use of `sort`, which ensures that the audio files and annotations are provided in the same order.
An example config `configs/data/mir-1k.yaml` is provided as reference.
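As a purely illustrative sketch of such a config (the `audio_files`/`annot_files` field names follow the options mentioned in the data-splitting section below; mirror `configs/data/mir-1k.yaml` for the exact schema):

```shell
cat > configs/data/my_dataset.yaml << 'EOF'
# hypothetical data config, to be checked against configs/data/mir-1k.yaml
audio_files: /path/to/mir-1k.csv        # list of audio files, one path per line
annot_files: /path/to/mir-1k_annot.csv  # optional matching pitch annotations
EOF
```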
## Code organization
The code follows the structure of [this repository](https://github.com/ashleve/lightning-hydra-template).
It contains two main folders: `configs` contains the YAML config files and `src` contains the code.
In practice, configurations are built by Hydra, hence the disentangled structure of the config folder.
The `src` folder is organized as follows:
- `train.py` is the main script. Everything is instantiated from the built config in this script using `hydra.utils.instantiate`.
- `models` contains the main PESTO `LightningModule` as well as the transposition-equivariant architecture
- `data` contains the main `AudioDatamodule` that handles all data loading, as well as several transforms (Harmonic CQT, pitch-shift, data augmentations...)
- `losses` contains the implementation of the fancy SSL losses that we use to train our model.
- `callbacks` contains the code for computing metrics, the procedure for weighting the loss terms based on their respective gradients, as well as additional visualization callbacks.
- `utils` contains miscellaneous helper functions.
## Miscellaneous
### Training on different devices
By default, the model is trained on a single GPU.
Since the memory requirements are very low (~500 MB), we do not support multi-GPU training.
However, training on CPU (while discouraged) is possible by setting `trainer=cpu` on the command line.
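For example:

```shell
# train on CPU instead of the default single GPU
python src/train.py data=mir-1k logger=csv trainer=cpu
```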
### Changing sampling rates
The model takes individual CQT frames as input, so it is agnostic to the sampling rate of your audio files.
In particular, CQT kernels are computed dynamically, so you never have to worry about the sampling rate.
### Data caching
To simplify the implementation, all the CQT frames of the dataset are
automatically computed from the audio files at the beginning of the first training and cached to avoid recomputing them every time.
The cache directory (`./cache` by default) can be changed by setting the `cache_dir` option of the `AudioDatamodule`.
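For instance, as a hypothetical command-line override (assuming the datamodule options are exposed under the `data` config group, like `data.bins_per_semitone` further below):

```shell
python src/train.py data=mir-1k logger=csv data.cache_dir=/tmp/pesto_cache
```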
Moreover, CQT frames are stored under a unique hash that takes into account the paths to the audio files as well as the CQT options.
If you change the dataset or a CQT option (e.g. the hop size), the CQT frames will automatically be recomputed and cached as well.
The only case where you should be careful with the caching system is when you change the content of the `audio_files`/`annot_files` text files
or the audio files/annotations themselves, since the hash does not capture those changes.
### Data splitting
All the data loading logic is handled within `src/data/audio_datamodule.py`.
There are several options for splitting the data into training and validation sets:
- **Naive:** If you provide an `annot_files`, the model will be trained and validated on the same dataset.
Otherwise, a dummy `val_dataloader()` will be created to avoid weird Lightning bugs, but the logged metrics will of course be meaningless.
- **Manual:** You can manually provide a validation set by setting `val_audio_files` and `val_annot_files` in your YAML config.
The structure of those files should be identical to that of the training set files.
- **Cross-validation:** If you provide an annotated training set but no validation set,
you can still perform cross-validation by setting the `fold` and `n_folds` options in your YAML config.
Note that one fold corresponds to a single training run, so to perform the whole cross-validation you should run the
script `n_folds` times, either manually or by taking advantage of Hydra's multirun option (see the sketch after this list).
Note also that the splitting strategy in that case has its own random state: for a given value of `n_folds`,
`fold=<i>` will always correspond to the same train/val split **even if you change the global seed**.
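A hypothetical multirun sketch, assuming `fold` and `n_folds` are exposed under the `data` config group:

```shell
# launch the 5 runs of a 5-fold cross-validation as a Hydra multirun sweep
python src/train.py --multirun data=my_data logger=csv data.n_folds=5 data.fold=0,1,2,3,4
```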
### Variable interpolation
This repository takes advantage of Hydra and OmegaConf as much as possible for handling configurations.
In particular, several variables are interpolated automatically to limit the number of changes needed to try new things.
For example, changing the resolution of the CQT has a strong influence on many parameters
such as the input/output dimension of the network, the construction of the loss, etc.
However, thanks to variable interpolation, you can increase the CQT resolution just by typing:
```shell
python src/train.py data=my_data logger=csv data.bins_per_semitone=5
```
In practice, you shouldn't need to overwrite parameters that are defined through variable interpolation.
For more details about the configuration system management, please check [OmegaConf](https://omegaconf.readthedocs.io/en/2.3_branch/) and [Hydra](https://hydra.cc/) docs.
## Cite
If you want to use this work, please cite:
```
@inproceedings{PESTO,
  author = {Riou, Alain and Lattner, Stefan and Hadjeres, Gaëtan and Peeters, Geoffroy},
  booktitle = {Proceedings of the 24th International Society for Music Information Retrieval Conference, ISMIR 2023},
  publisher = {International Society for Music Information Retrieval},
  title = {PESTO: Pitch Estimation with Self-supervised Transposition-equivariant Objective},
  year = {2023}
}
```
## Credits
- [lightning-hydra-template](https://github.com/ashleve/lightning-hydra-template) for the main structure of the code
- [nnAudio](https://github.com/KinWaiCheuk/nnAudio) for the original CQT implementation
- [multipitch-architectures](https://github.com/christofw/multipitch_architectures) for the original architecture of the model
- [mir_eval](https://craffel.github.io/mir_eval/) for computing MIR metrics (RPA, RCA...)
```
@ARTICLE{9174990,
  author={K. W. {Cheuk} and H. {Anderson} and K. {Agres} and D. {Herremans}},
  journal={IEEE Access},
  title={nnAudio: An on-the-Fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolutional Neural Networks},
  year={2020},
  volume={8},
  number={},
  pages={161981-162003},
  doi={10.1109/ACCESS.2020.3019084}}

@ARTICLE{9865174,
  author={Weiß, Christof and Peeters, Geoffroy},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={Comparing Deep Models and Evaluation Strategies for Multi-Pitch Estimation in Music Recordings},
  year={2022},
  volume={30},
  number={},
  pages={2814-2827},
  doi={10.1109/TASLP.2022.3200547}}
```
## TODO
### Research
- Implement confidence score
- Handle continuous changes in time
- Handle velocity
- Smaller model
- Train on MUSDB to make the model robust to background music
- Optimal Transport
- Circular distributions to avoid poor initialization
### Implementation
- Viterbi smoothing
### Refactoring
- Simplify `datamodules`
- Split `augmentations/cqt` into two files + implement data augmentations directly in this repo
- General evaluation loop: use the same codebase for evaluating all baselines
### Pretrained models
- Train good models, evaluate them and release them
configs/__init__.py 0 → 100644
# this file is needed here to include configs when building project as a package
configs/callbacks/default.yaml 0 → 100644
defaults:
  - model_checkpoint
  - lr_monitor
  - model_summary
  - progress_bar
  - loss_weighting
  - mir_eval
  - pitch_histogram
  - _self_

model_summary:
  max_depth: 1

loss_weighting:
  _target_: src.callbacks.loss_weighting.GradientsLossWeighting
  weights:
    invariance: 0.
    shift_entropy: 1.
    equivariance: 0.
  ema_rate: 0.999
configs/callbacks/lr_monitor.yaml 0 → 100644
# https://lightning.ai/docs/pytorch/latest/api/lightning.pytorch.callbacks.LearningRateMonitor.html
lr_monitor:
  _target_: lightning.pytorch.callbacks.LearningRateMonitor
configs/callbacks/mir_eval.yaml 0 → 100644
mir_eval:
  _target_: src.callbacks.mir_eval.MIREvalCallback
  cdf_resolution: 10
configs/callbacks/model_checkpoint.yaml 0 → 100644
# https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.ModelCheckpoint.html
model_checkpoint:
  _target_: lightning.pytorch.callbacks.ModelCheckpoint
  dirpath: null # directory to save the model file
  filename: null # checkpoint filename
  monitor: null # name of the logged metric which determines when model is improving
  verbose: False # verbosity mode
  save_last: null # additionally always save an exact copy of the last checkpoint to a file last.ckpt
  save_top_k: 1 # save k best models (determined by above metric)
  mode: "min" # "max" means higher metric value is better, can be also "min"
  auto_insert_metric_name: True # when True, the checkpoint filenames will contain the metric name
  save_weights_only: true # if True, then only the model's weights will be saved
  every_n_train_steps: null # number of training steps between checkpoints
  train_time_interval: null # checkpoints are monitored at the specified time interval
  every_n_epochs: null # number of epochs between checkpoints
  save_on_train_epoch_end: null # whether to run checkpointing at the end of the training epoch or the end of validation
configs/callbacks/model_summary.yaml 0 → 100644
# https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.RichModelSummary.html
model_summary:
  _target_: lightning.pytorch.callbacks.RichModelSummary
  max_depth: 1 # the maximum depth of layer nesting that the summary will include
configs/callbacks/pitch_histogram.yaml 0 → 100644
pitch_histogram:
  _target_: src.callbacks.pitch_histogram.PitchHistogramCallback
configs/callbacks/progress_bar.yaml 0 → 100644
# https://lightning.ai/docs/pytorch/latest/api/lightning.pytorch.callbacks.RichProgressBar.html
progress_bar:
  _target_: lightning.pytorch.callbacks.RichProgressBar
configs/debug/default.yaml 0 → 100644
# @package _global_

# default debugging setup, runs 1 full epoch
# other debugging configs can inherit from this one

# overwrite task name so debugging logs are stored in separate folder
task_name: "debug"

# disable callbacks and loggers during debugging
callbacks: null
logger: null

extras:
  ignore_warnings: False
  enforce_tags: False

# sets level of all command line loggers to 'DEBUG'
# https://hydra.cc/docs/tutorials/basic/running_your_app/logging/
hydra:
  job_logging:
    root:
      level: DEBUG
  # use this to also set hydra loggers to 'DEBUG'
  # verbose: True

trainer:
  max_epochs: 1
  accelerator: cpu # debuggers don't like gpus
  devices: 1 # debuggers don't like multiprocessing
  detect_anomaly: true # raise exception if NaN or +/-inf is detected in any tensor

data:
  num_workers: 0 # debuggers don't like multiprocessing
  pin_memory: False # disable gpu memory pin
configs/debug/fdr.yaml 0 → 100644
# @package _global_

# runs 1 train, 1 validation and 1 test step

defaults:
  - default

trainer:
  fast_dev_run: true
configs/debug/limit.yaml 0 → 100644
# @package _global_

# uses only 1% of the training data and 5% of validation/test data

defaults:
  - default

trainer:
  max_epochs: 3
  limit_train_batches: 0.01
  limit_val_batches: 0.05
  limit_test_batches: 0.05
configs/debug/overfit.yaml 0 → 100644
# @package _global_

# overfits to 3 batches

defaults:
  - default

trainer:
  max_epochs: 20
  overfit_batches: 3

# model ckpt and early stopping need to be disabled during overfitting
callbacks: null
configs/debug/profiler.yaml 0 → 100644
# @package _global_

# runs with execution time profiling

defaults:
  - default

trainer:
  max_epochs: 1
  profiler: "simple"
  # profiler: "advanced"
  # profiler: "pytorch"
configs/eval.yaml 0 → 100644
# @package _global_

defaults:
  - _self_
  - data: mnist # choose datamodule with `test_dataloader()` for evaluation
  - model: mnist
  - logger: null
  - trainer: default
  - paths: default
  - extras: default
  - hydra: default

task_name: "eval"

tags: ["dev"]

# passing checkpoint path is necessary for evaluation
ckpt_path: ???
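A hypothetical invocation of this evaluation config (the `src/eval.py` entry point follows the lightning-hydra-template convention this repo is based on, and is an assumption):

```shell
# evaluate a trained checkpoint; ckpt_path is mandatory (it defaults to ???)
python src/eval.py ckpt_path=/path/to/checkpoint.ckpt
```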