From 29f1a1334cd8a5204548580018f18a5cadc1f1d3 Mon Sep 17 00:00:00 2001
From: Fabrice Daian <fabrice.daian@lis-lab.fr>
Date: Mon, 10 Mar 2025 12:38:54 +0100
Subject: [PATCH] typo

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 194dd33..541ff19 100644
--- a/README.md
+++ b/README.md
@@ -353,7 +353,7 @@ Step>3, Generator loss : 1.941e+05
 ```

 By default, the model is trained for 100 epochs (see ```hyperparameters.json```) but it includes an ```EarlyStopping``` mechanism (See the paper methods section for details) governed by the ```patience``` parameters (see ```hyperparameters.json```).
-You can stop the training at anytime. The best checkpoints of your model is available in the ```experiments/metrology_experiment/results/networks/``` directory.
+You can stop the training at anytime. The best checkpoint of your model is available in the ```experiments/metrology_experiment/results/networks/``` directory.

 If the training stops because it has reached the maximum number of epochs defined into the ```hyperparameters.json``` configuration file, and you want to continue training your model for more epochs, you can use the ```--retrain``` parameters to resume the training where it stops:

--
GitLab