| Paper | [DOI 10.1101/2020.07.18.209957](https://doi.org/10.1101/2020.07.18.209957) |
## 1 - Setup (with installation on local computer)
First, you need to set up an Anaconda (https://www.anaconda.com/products/individual) or Miniconda environment (Python 3.5 or higher) and clone this repository on your computer.
Then create a virtual environment and install the Python requirements from inside the cloned repository on your computer.
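A minimal sketch of this step, assuming a conda environment and a `requirements.txt` file at the repository root (the environment name and Python version below are only examples):

```bash
# Create and activate a dedicated environment (name and Python version are examples)
conda create -n mupix python=3.8
conda activate mupix

# Install the Python requirements from inside the cloned repository
# (assumes a requirements.txt file is provided at the repository root)
pip install -r requirements.txt
```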
- To train a model from scratch, use the *Segmentation/model_training.ipynb* Jupyter Notebook (see the notebook for detailed step-by-step instructions):
  - First create an experiment that will organize your files:
    - the experiment name you enter is used to name the experiment folder
    - the experiment path should be the location where you want the experiment to live on your hard drive
    - the clean data path should be the location of the ground-truth data (so they can be imported without being duplicated)
    - the noisy data path serves the same purpose as the clean data path, but for the lower-quality image counterparts
  - Once this is done, the script creates a *hyperparameters.json* file that you can edit to set the values you want for training (see the sketch after this list).
  - This file contains all training parameters: the learning rates, the batch size, the maximum number of epochs (early stopping should stop training before this limit), the loss weight (how much impact the generator loss has compared with the discriminator loss), the tile size, the patience of the learning-rate scheduler and of early stopping, and the proportion of the validation set.
  - The training script only needs the experiment path (experiment folder + experiment name) and uses the information provided earlier.
  - The prediction script predicts all images located at the test data path. If this path was not provided in the new-experiment script, you can still set it directly in the *hyperparameters.json* file.
- To segment an image using a trained model, use the *Segmentation/segmentation.ipynb* Jupyter Notebook (see the notebook for detailed step-by-step instructions).
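The generated *hyperparameters.json* can be edited by hand or programmatically. The sketch below only illustrates the idea: the key names (`learning_rate`, `max_epochs`, `validation_split`, `test_data_path`) and the experiment path are hypothetical placeholders, not the actual schema, so check the file created for your own experiment for the real names.

```python
import json
from pathlib import Path

# Hypothetical sketch: tweak a few values in the generated hyperparameters.json.
# The key names below are illustrative placeholders, not the actual schema;
# open the file created for your experiment to see the real names.
hp_file = Path("experiments/my_experiment/hyperparameters.json")
hp = json.loads(hp_file.read_text())

hp["learning_rate"] = 1e-4          # example learning rate
hp["max_epochs"] = 200              # upper bound; early stopping usually ends training sooner
hp["validation_split"] = 0.2        # proportion of the data held out for validation
hp["test_data_path"] = "data/test"  # can also be set here if not given at experiment creation

hp_file.write_text(json.dumps(hp, indent=2))
```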
### Hardware
All models (provided in the link above) were trained on a single Nvidia GV100 GPU card (32 GB of GPU RAM).
- A complete training run on the provided dataset took 7 hours on an Nvidia L40 card.
- Training time varies greatly with the hardware used (GPU or CPU); a model can be trained on either architecture, but we advise training on a GPU card.
- Segmentation of a full 1024x1024 stack takes a couple of seconds on a GPU (a few dozen seconds on a CPU).

Depending on your hardware specifications, you should also consider decreasing the training *batch size* parameter in the Notebook to avoid GPU/CPU RAM crashes, as sketched below.
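If training runs out of memory, one option is to halve the batch size before relaunching. The snippet assumes the same hypothetical *hyperparameters.json* layout as above; the key name `batch_size` is an assumption, not the confirmed schema.

```python
import json
from pathlib import Path

# Hypothetical sketch: halve the batch size to reduce GPU/CPU memory usage.
# The key name "batch_size" is an assumption; check the generated file.
hp_file = Path("experiments/my_experiment/hyperparameters.json")
hp = json.loads(hp_file.read_text())
hp["batch_size"] = max(1, hp["batch_size"] // 2)
hp_file.write_text(json.dumps(hp, indent=2))
```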
Some insight into model training time for the CPU and GPU hardware tested, with a fixed setting of 400 steps per epoch:
| Hardware | Time per epoch (in seconds) |
|----------------|----------|
| Nvidia GV100   | 27 s |
| CPU | 769 s |
| MyBinder (CPU) | 2980 s |
## 3 - Analysis: Shape classification and quantification
...
Once the segmentation is done, the segmentation result is passed to Fiji's Morph...
A CSV file is generated and passed to the *Analysis/analysis.ipynb* notebook, where all shape classification and quantification are done and the figures are generated.
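For a first look at the exported measurements outside the notebook, loading the CSV with pandas could look like the sketch below; the file name and column names are hypothetical and depend on your own export and experiment layout.

```python
import pandas as pd

# Hypothetical sketch: load the CSV exported from Fiji and take a first look.
# The file name and the exact column names depend on your own export.
df = pd.read_csv("experiments/my_experiment/shape_measurements.csv")

print(df.shape)       # number of detected objects x number of measurements
print(df.describe())  # summary statistics of the exported shape descriptors
```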
*__Running time__*: This notebook can be fully executed in a couple of minutes on a normal computer, although the exact duration depends on your hardware. The size of the training dataset will also affect the training time.
## 4 - License
This code repository is released under the [CC-SA License ??](https://gitlab-lis-lab.fr/sicomp/mupix/LICENSE???).