Commit 9a9a3bff authored by Charly LAMOTHE's avatar Charly LAMOTHE

Introduce the notion of "stage" in the experiments (see the TODO comment in the compute_results.py file). Add an example experiment configuration file tree in experiments/boston/stage3 for stage 3 (its parameters are unoptimized because the stage 1 and 2 results are missing).
parent 9830bbe0
1 merge request: !3 clean scripts
Showing 224 additions and 7 deletions
@@ -119,13 +119,14 @@ if __name__ == "__main__":
     """
     TODO:
     For each dataset:
-    0) A figure for the selection of the best base forest model hyperparameters (best vs default/random hyperparams)
-    1) A figure for the selection of the best dataset normalization method
-    2) A figure for the selection of the best combination of dataset: normalization vs D normalization vs weights normalization
-    3) A figure for the selection of the most relevant subsets combination: train,dev vs train+dev,train+dev vs train,train+dev
-    4) A figure to finally compare the perf of our approach using the previous selected parameters vs the baseline vs other papers
-    2)
+    Stage 1) A figure for the selection of the best base forest model hyperparameters (best vs default/random hyperparams)
+    Stage 2) A figure for the selection of the best dataset normalization method
+    Stage 3) A figure for the selection of the best combination of normalization: dataset normalization vs D normalization vs weights normalization
+    Stage 4) A figure for the selection of the most relevant subsets combination: train,dev vs train+dev,train+dev vs train,train+dev
+    Stage 5) A figure for the selection of the best extracted forest size?
+    Stage 6) A figure to finally compare the performance of our approach, using the previously selected parameters, vs the baseline vs other papers
+    Stage 3)
     In all axes:
     - untrained forest
     - trained base forest (a straight line, since it does not depend on the number of extracted trees)
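The staged TODO above implies that each stage compares one group of settings and carries the winner forward to the later stages. A minimal hypothetical sketch of that selection loop (the names `STAGES`, `run_stage`, and the example scores below are illustrative, not part of this repository's API):

```python
# Hypothetical sketch of the staged selection implied by the TODO above.
STAGES = {
    1: "base forest hyperparameters (best vs default/random)",
    2: "dataset normalization method",
    3: "dataset normalization vs D normalization vs weights normalization",
    4: "subsets combination (train,dev / train+dev,train+dev / train,train+dev)",
    5: "extracted forest size",
    6: "final comparison vs baseline and other papers",
}

def run_stage(stage, candidates, score):
    """Evaluate each candidate setting and keep the best-scoring one."""
    best = max(candidates, key=score)
    print(f"Stage {stage} ({STAGES[stage]}): selected {best!r}")
    return best

# e.g. stage 2: pick a normalizer by a made-up dev-set score
best_norm = run_stage(
    2, ["standard", "minmax", "none"],
    score=lambda c: {"standard": 0.8, "minmax": 0.7, "none": 0.5}[c])
```

The winner of each stage would then be fixed in the configuration files of the next stage, which is why the stage-3 configs below are flagged as unoptimized: their stage-1 and stage-2 winners were not yet available.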
New configuration files under experiments/boston/stage3:
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train+dev,train+dev",
"normalize_weights": false
}
\ No newline at end of file
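A configuration file with the schema shown above could be read with a small helper. This is a sketch only; `load_config` and `resolve_seeds` are hypothetical names, not functions from this repository. Note how `"seeds": null` together with `"random_seed_number": 3` suggests drawing three fresh seeds when none are given explicitly:

```python
import json
import random

def load_config(path):
    """Read one experiment configuration file with the schema shown above."""
    with open(path) as f:
        config = json.load(f)
    missing = {"dataset_name", "forest_size", "extracted_forest_size",
               "subsets_used", "normalize_D", "normalize_weights"} - config.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return config

def resolve_seeds(config):
    """When "seeds" is null, draw "random_seed_number" fresh seeds instead."""
    if config.get("seeds") is not None:
        return list(config["seeds"])
    return [random.randint(0, 2**32 - 1)
            for _ in range(config["random_seed_number"])]
```

With the file above, `resolve_seeds(load_config(path))` would yield three random seeds, so each configuration is evaluated over three repeated runs.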
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train+dev,train+dev",
"normalize_weights": false
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train+dev,train+dev",
"normalize_weights": true
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train+dev,train+dev",
"normalize_weights": true
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,dev",
"normalize_weights": false
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,dev",
"normalize_weights": false
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,dev",
"normalize_weights": true
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,dev",
"normalize_weights": true
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,train+dev",
"normalize_weights": false
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,train+dev",
"normalize_weights": false
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": true,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,train+dev",
"normalize_weights": true
}
\ No newline at end of file
{
"dataset_name": "boston",
"normalize_D": false,
"dataset_normalizer": "standard",
"forest_size": 100,
"extracted_forest_size": [
10,
20,
30
],
"models_dir": ".\\models",
"dev_size": 0.2,
"test_size": 0.2,
"random_seed_number": 3,
"seeds": null,
"subsets_used": "train,train+dev",
"normalize_weights": true
}
\ No newline at end of file
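The twelve stage-3 files above differ only in three fields — `normalize_D` (2 values), `normalize_weights` (2 values), and `subsets_used` (3 values) — so the whole tree is a 2 × 2 × 3 grid over a shared base. A sketch that regenerates the grid programmatically instead of hand-editing twelve near-identical files (`BASE` and `stage3_configs` are illustrative names, not part of the repository):

```python
from itertools import product

# Fields shared by every stage-3 configuration file shown above.
BASE = {
    "dataset_name": "boston",
    "dataset_normalizer": "standard",
    "forest_size": 100,
    "extracted_forest_size": [10, 20, 30],
    "models_dir": ".\\models",
    "dev_size": 0.2,
    "test_size": 0.2,
    "random_seed_number": 3,
    "seeds": None,
}

def stage3_configs():
    """Yield the full grid of stage-3 configurations (2 * 2 * 3 = 12)."""
    for normalize_D, normalize_weights, subsets_used in product(
            [False, True],
            [False, True],
            ["train+dev,train+dev", "train,dev", "train,train+dev"]):
        yield {**BASE,
               "normalize_D": normalize_D,
               "normalize_weights": normalize_weights,
               "subsets_used": subsets_used}

print(sum(1 for _ in stage3_configs()))  # 12
```

Each yielded dict could then be serialized with `json.dump` into its own file under experiments/boston/stage3, reproducing the tree added by this commit.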