Dec 18, 2019
Charly Lamothe authored
- Definitely use the correct forest size (either the one from the best hyperparameters or the one specified as a parameter);
- Use a number of extracted forest sizes proportional to the forest size instead of a fixed forest size;
- Add an option to save the current command line name instead of using the unnamed directory;
- Add new California housing dataset best hyperparameters, and convert all numeric values from string to int/float in the other best hyperparameter files;
- Remove now-unused code from compute_results.py in preparation for the upcoming changes;
- Before saving the best hyperparameters, store numbers as int or float instead of string;
- Add a job_number option for parallelisation in both the train.py and compute_hyperparameters.py scripts (a minimal sketch follows this list);
- Clean up the TODO list.
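As a rough illustration of the job_number option mentioned above, the sketch below wires a command-line flag into scikit-learn's n_jobs parameter. The flag name, defaults, and model wiring are assumptions for illustration, not the project's actual train.py code.

```python
# Hypothetical sketch of a job_number option for parallel training;
# the real train.py / compute_hyperparameters.py wiring may differ.
import argparse

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

parser = argparse.ArgumentParser()
parser.add_argument('--forest_size', type=int, default=100,
                    help='Number of trees in the forest.')
parser.add_argument('--job_number', type=int, default=-1,
                    help='Number of parallel jobs; -1 uses all available cores.')
args = parser.parse_args()

# Small synthetic regression problem, just to make the sketch runnable.
X, y = make_regression(n_samples=200, n_features=8, random_state=0)

# The job count is forwarded to scikit-learn's n_jobs, which parallelises
# the fitting of the individual trees.
model = RandomForestRegressor(n_estimators=args.forest_size,
                              n_jobs=args.job_number)
model.fit(X, y)
```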
Charly Lamothe authored
- Add new best params for 7 datasets.

Dec 01, 2019
Charly Lamothe authored
- Ignore unnamed experiment configuration file backups;
- Factorize default dataset loading parameters;
- Add missing return_X_y in the basic dataset loaders (see the sketch after this list).
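For context on the return_X_y item above: scikit-learn's dataset loaders accept a return_X_y flag that makes them return the (data, target) arrays directly instead of a Bunch object. The wrapper below is a hypothetical illustration of forwarding that flag; the project's actual loader functions may be organised differently.

```python
from sklearn.datasets import fetch_california_housing


# Hypothetical loader wrapper; the name and signature are illustrative only.
def load_california_housing(return_X_y=True):
    # With return_X_y=True, scikit-learn returns the (data, target) arrays
    # instead of a Bunch object.
    return fetch_california_housing(return_X_y=return_X_y)


X, y = load_california_housing(return_X_y=True)
```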

Nov 22, 2019
Charly Lamothe authored
- Move the bolsonaro imports to the top of the compute_hyperparam file;
- Move change_binary_func_load to the utils file.
Léo Bouscarrat authored
When training, check whether Bayesian search results exist and, if so, use them. Exception: for forest_size, use the value given by the parser if applicable (sketched below).
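A minimal sketch of that precedence rule, assuming the Bayesian search results are stored as a JSON file per dataset; the paths, file names, and parameter keys here are assumptions, not the project's actual layout.

```python
import json
import os


# Hypothetical helper illustrating the precedence rule described above.
def resolve_hyperparameters(dataset_name, cli_forest_size=None,
                            results_dir='results'):
    params = {}
    best_params_path = os.path.join(results_dir, dataset_name,
                                    'best_parameters.json')
    if os.path.exists(best_params_path):
        # Bayesian search results take precedence when they exist.
        with open(best_params_path) as f:
            params = json.load(f)
    if cli_forest_size is not None:
        # Exception: an explicit forest_size from the command-line parser
        # overrides the stored value.
        params['forest_size'] = cli_forest_size
    return params
```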

Nov 20, 2019
Léo Bouscarrat authored