Commit 4fd56d5e authored by Julien Dejasmin

update last 2

parent 31edd1e6
Showing with 0 additions and 1103 deletions
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
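The warning above is raised when a loss function is called with the legacy size_average/reduce arguments; current PyTorch expects the reduction argument instead. A minimal illustration (tensor values are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.rand(4, 3).clamp(1e-6, 1 - 1e-6)  # predictions kept strictly in (0, 1)
target = torch.rand(4, 3)

# Legacy call that triggers the warning:
#   F.binary_cross_entropy(x, target, size_average=False)
# Supported equivalent: request the sum over all elements explicitly.
loss = F.binary_cross_entropy(x, target, reduction='sum')
print(loss.item())
```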
## OAR [2020-06-24 16:29:57] Job 2068271 KILLED ##
Namespace(batch_size=256, beta=4, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='beta_VAE_bs_256', gpu_devices=[0, 1], is_beta_VAE=True, latent_name='', latent_spec_cont=10, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=True, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
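The Namespace line above is the printed result of command-line parsing with argparse; a hypothetical sketch of a few of the flags it implies (names copied from the log, types and defaults are assumptions):

```python
import argparse

# Hypothetical reconstruction of part of the training script's CLI;
# only a subset of the flags from the printed Namespace is shown.
parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=256)
parser.add_argument('--beta', type=float, default=4)
parser.add_argument('--epochs', type=int, default=400)
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--latent_spec_cont', type=int, default=10)
args = parser.parse_args([])  # parse no CLI args, keeping the defaults
print(args)
```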
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
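The two devices above are driven through nn.DataParallel, as the module dump that follows shows. A hedged sketch of that wrapping (the module here is a stand-in, not the logged VAE):

```python
import torch
import torch.nn as nn

# Stand-in module; the log wraps the full VAE the same way.
model = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU())

# Split each batch across gpu_devices=[0, 1] when two GPUs are available;
# on fewer devices this sketch just keeps the bare module.
if torch.cuda.device_count() >= 2:
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

device = next(model.parameters()).device
out = model(torch.rand(1, 3, 64, 64, device=device))
print(out.shape)  # the strided conv halves 64x64 to 32x32
```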
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=20, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=10, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
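The layer list printed above can be rebuilt to sanity-check the parameter count the log reports next; this is a reconstruction from the printed module, not the author's code:

```python
import torch.nn as nn

# Layers copied from the printed DataParallel module, in order.
layers = nn.ModuleList([
    nn.Conv2d(3, 32, 4, 2, 1), nn.Conv2d(32, 32, 4, 2, 1),      # img_to_last_conv
    nn.Conv2d(32, 64, 4, 2, 1), nn.Conv2d(64, 64, 4, 2, 1),
    nn.Conv2d(64, 256, 4),                                       # last_conv_to_continuous_features
    nn.Linear(256, 20),                                          # features_to_hidden_continue
    nn.Linear(10, 256),                                          # latent_to_features
    nn.ConvTranspose2d(256, 64, 4),                              # features_to_img
    nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.ConvTranspose2d(64, 32, 4, 2, 1),
    nn.ConvTranspose2d(32, 32, 4, 2, 1), nn.ConvTranspose2d(32, 3, 4, 2, 1),
])
n_params = sum(p.numel() for p in layers.parameters())
print(n_params)  # 765335, matching the count reported in the log
```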
The number of model parameters is 765335
continuous capacity is not used
=> loaded checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_256/checkpoints/last (iter 171)'
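The Loss values below are consistent with a sum-reduced per-batch objective. With is_beta_VAE=True and beta=4, the objective is presumably reconstruction error plus a beta-weighted KL term; a hedged sketch (function name and reduction choice are assumptions):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    # Sum-reduced reconstruction term, matching the reduction='sum'
    # suggested by the deprecation warning in this log.
    recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
    # Closed-form KL divergence between N(mu, exp(logvar)) and N(0, I)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

x = torch.rand(2, 3, 64, 64).clamp(1e-6, 1 - 1e-6)
mu, logvar = torch.zeros(2, 10), torch.zeros(2, 10)
loss = beta_vae_loss(x, x, mu, logvar)  # KL term is zero at mu=0, logvar=0
```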
0/69092 Loss: 147.665
12800/69092 Loss: 158.181
25600/69092 Loss: 159.590
38400/69092 Loss: 158.551
51200/69092 Loss: 158.112
64000/69092 Loss: 157.238
Training time 0:03:48.393375
Epoch: 1 Average loss: 158.38
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_256/checkpoints/last' (iter 172)
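The saved/loaded checkpoint lines suggest a dict-style checkpoint carrying the iteration counter alongside the weights; a minimal sketch (the key names 'iter' and 'state_dict' are assumptions, and the model is a stand-in):

```python
import os
import tempfile
import torch

model = torch.nn.Linear(4, 2)

ckpt_path = os.path.join(tempfile.mkdtemp(), 'last')
torch.save({'iter': 172, 'state_dict': model.state_dict()}, ckpt_path)
print(f"=> saved checkpoint '{ckpt_path}' (iter 172)")

# Resuming reads the dict back and restores both weights and counter.
ckpt = torch.load(ckpt_path)
model.load_state_dict(ckpt['state_dict'])
```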
0/69092 Loss: 159.925
12800/69092 Loss: 158.833
25600/69092 Loss: 159.460
38400/69092 Loss: 157.054
51200/69092 Loss: 157.525
64000/69092 Loss: 158.482
Training time 0:03:41.025229
Epoch: 2 Average loss: 158.12
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_256/checkpoints/last' (iter 173)
0/69092 Loss: 152.283
12800/69092 Loss: 157.230
25600/69092 Loss: 157.103
38400/69092 Loss: 158.817
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:29:57] Job 2068272 KILLED ##
Namespace(batch_size=64, beta=4, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='beta_VAE_bs_64', gpu_devices=[0, 1], is_beta_VAE=True, latent_name='', latent_spec_cont=10, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=True, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
Tesla K80
Tesla K80
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=20, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=10, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 765335
continuous capacity is not used
=> loaded checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last (iter 304)'
0/69092 Loss: 153.033
3200/69092 Loss: 151.558
6400/69092 Loss: 150.062
9600/69092 Loss: 154.221
12800/69092 Loss: 151.615
16000/69092 Loss: 153.931
19200/69092 Loss: 153.015
22400/69092 Loss: 153.433
25600/69092 Loss: 151.584
28800/69092 Loss: 154.002
32000/69092 Loss: 153.302
35200/69092 Loss: 151.154
38400/69092 Loss: 152.741
41600/69092 Loss: 153.137
44800/69092 Loss: 152.327
48000/69092 Loss: 152.095
51200/69092 Loss: 150.511
54400/69092 Loss: 151.692
57600/69092 Loss: 151.058
60800/69092 Loss: 155.055
64000/69092 Loss: 150.898
67200/69092 Loss: 155.036
Training time 0:01:58.162781
Epoch: 1 Average loss: 152.50
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last' (iter 305)
0/69092 Loss: 143.092
3200/69092 Loss: 153.010
6400/69092 Loss: 155.530
9600/69092 Loss: 155.364
12800/69092 Loss: 151.123
16000/69092 Loss: 154.058
19200/69092 Loss: 151.657
22400/69092 Loss: 153.832
25600/69092 Loss: 151.606
28800/69092 Loss: 153.287
32000/69092 Loss: 150.634
35200/69092 Loss: 152.244
38400/69092 Loss: 152.817
41600/69092 Loss: 152.458
44800/69092 Loss: 153.017
48000/69092 Loss: 151.141
51200/69092 Loss: 154.387
54400/69092 Loss: 149.705
57600/69092 Loss: 151.770
60800/69092 Loss: 151.618
64000/69092 Loss: 152.999
67200/69092 Loss: 154.086
Training time 0:01:57.931589
Epoch: 2 Average loss: 152.66
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last' (iter 306)
0/69092 Loss: 149.850
3200/69092 Loss: 153.249
6400/69092 Loss: 153.873
9600/69092 Loss: 153.335
12800/69092 Loss: 153.749
16000/69092 Loss: 148.933
19200/69092 Loss: 154.976
22400/69092 Loss: 153.382
25600/69092 Loss: 151.603
28800/69092 Loss: 152.808
32000/69092 Loss: 151.032
35200/69092 Loss: 151.926
38400/69092 Loss: 155.662
41600/69092 Loss: 150.252
44800/69092 Loss: 152.976
48000/69092 Loss: 153.162
51200/69092 Loss: 153.542
54400/69092 Loss: 152.422
57600/69092 Loss: 150.808
60800/69092 Loss: 152.001
64000/69092 Loss: 153.256
67200/69092 Loss: 152.930
Training time 0:01:57.656130
Epoch: 3 Average loss: 152.64
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last' (iter 307)
0/69092 Loss: 161.186
3200/69092 Loss: 151.119
6400/69092 Loss: 154.165
9600/69092 Loss: 150.897
12800/69092 Loss: 152.695
16000/69092 Loss: 151.460
19200/69092 Loss: 154.138
22400/69092 Loss: 153.239
25600/69092 Loss: 152.014
28800/69092 Loss: 151.173
32000/69092 Loss: 155.464
35200/69092 Loss: 154.407
38400/69092 Loss: 150.135
41600/69092 Loss: 154.000
44800/69092 Loss: 152.920
48000/69092 Loss: 151.982
51200/69092 Loss: 155.144
54400/69092 Loss: 151.637
57600/69092 Loss: 150.082
60800/69092 Loss: 153.631
64000/69092 Loss: 152.659
67200/69092 Loss: 153.400
Training time 0:01:58.079109
Epoch: 4 Average loss: 152.67
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last' (iter 308)
0/69092 Loss: 173.856
3200/69092 Loss: 152.023
6400/69092 Loss: 153.829
9600/69092 Loss: 150.343
12800/69092 Loss: 151.734
16000/69092 Loss: 152.576
19200/69092 Loss: 153.110
22400/69092 Loss: 155.344
25600/69092 Loss: 151.083
28800/69092 Loss: 151.468
32000/69092 Loss: 150.122
35200/69092 Loss: 150.961
38400/69092 Loss: 152.861
41600/69092 Loss: 150.400
44800/69092 Loss: 154.243
48000/69092 Loss: 156.053
51200/69092 Loss: 151.848
54400/69092 Loss: 154.020
57600/69092 Loss: 152.957
60800/69092 Loss: 154.637
64000/69092 Loss: 151.895
67200/69092 Loss: 152.106
Training time 0:01:58.719240
Epoch: 5 Average loss: 152.53
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64/checkpoints/last' (iter 309)
0/69092 Loss: 161.423
3200/69092 Loss: 151.636
6400/69092 Loss: 150.785
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:29:57] Job 2068273 KILLED ##
Namespace(batch_size=256, beta=None, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='VAE_bs_256', gpu_devices=[0, 1], is_beta_VAE=False, latent_name='', latent_spec_cont=10, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=True, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce GTX 1080 Ti
GeForce GTX 1080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=20, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=10, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 765335
continuous capacity is not used
=> loaded checkpoint 'trained_models/rendered_chairs/VAE_bs_256/checkpoints/last (iter 143)'
0/69092 Loss: 116.257
12800/69092 Loss: 116.865
25600/69092 Loss: 115.704
38400/69092 Loss: 116.840
51200/69092 Loss: 116.758
64000/69092 Loss: 116.974
Training time 0:05:34.767176
Epoch: 1 Average loss: 116.75
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_256/checkpoints/last' (iter 144)
0/69092 Loss: 115.425
12800/69092 Loss: 116.366
25600/69092 Loss: 115.519
38400/69092 Loss: 116.832
51200/69092 Loss: 116.514
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:29:57] Job 2068274 KILLED ##
Namespace(batch_size=64, beta=None, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='VAE_bs_64', gpu_devices=[0, 1], is_beta_VAE=False, latent_name='', latent_spec_cont=10, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=True, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=20, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=10, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 765335
continuous capacity is not used
=> loaded checkpoint 'trained_models/rendered_chairs/VAE_bs_64/checkpoints/last (iter 139)'
0/69092 Loss: 110.847
3200/69092 Loss: 115.390
6400/69092 Loss: 112.621
9600/69092 Loss: 112.659
12800/69092 Loss: 114.223
16000/69092 Loss: 112.621
19200/69092 Loss: 115.178
22400/69092 Loss: 113.874
25600/69092 Loss: 113.745
28800/69092 Loss: 114.605
32000/69092 Loss: 113.612
35200/69092 Loss: 114.793
38400/69092 Loss: 116.839
41600/69092 Loss: 114.658
44800/69092 Loss: 114.111
48000/69092 Loss: 112.756
51200/69092 Loss: 114.016
54400/69092 Loss: 114.316
57600/69092 Loss: 112.581
60800/69092 Loss: 112.926
64000/69092 Loss: 112.703
67200/69092 Loss: 112.203
Training time 0:04:30.608806
Epoch: 1 Average loss: 113.81
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64/checkpoints/last' (iter 140)
0/69092 Loss: 102.792
3200/69092 Loss: 112.805
6400/69092 Loss: 114.190
9600/69092 Loss: 114.078
12800/69092 Loss: 113.312
16000/69092 Loss: 112.534
19200/69092 Loss: 113.381
22400/69092 Loss: 114.327
25600/69092 Loss: 114.343
28800/69092 Loss: 114.635
32000/69092 Loss: 114.228
35200/69092 Loss: 113.063
38400/69092 Loss: 113.853
41600/69092 Loss: 114.526
44800/69092 Loss: 113.829
48000/69092 Loss: 114.176
51200/69092 Loss: 113.607
54400/69092 Loss: 113.869
57600/69092 Loss: 113.135
60800/69092 Loss: 113.620
64000/69092 Loss: 114.794
67200/69092 Loss: 113.848
Training time 0:04:18.213843
Epoch: 2 Average loss: 113.78
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64/checkpoints/last' (iter 141)
0/69092 Loss: 123.686
3200/69092 Loss: 115.639
6400/69092 Loss: 113.484
9600/69092 Loss: 112.216
12800/69092 Loss: 112.320
16000/69092 Loss: 115.109
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:29:57] Job 2068275 KILLED ##
Namespace(batch_size=64, beta=4, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='beta_VAE_bs_64_ls_15', gpu_devices=[0, 1], is_beta_VAE=True, latent_name='', latent_spec_cont=15, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
created new experiment directory: rendered_chairs/beta_VAE_bs_64_ls_15
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=30, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=15, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 769185
continuous capacity is not used
=> no checkpoint found at 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_15/checkpoints/last'
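The "=> no checkpoint found" line indicates the loader falls back to a fresh model when the file is absent; a hedged sketch of that logic (message format copied from the log, key names and the rest assumed):

```python
import os
import torch

def maybe_resume(model, path):
    # Restore weights and iteration counter if a checkpoint exists,
    # otherwise start training from iteration 0.
    if os.path.isfile(path):
        ckpt = torch.load(path)
        model.load_state_dict(ckpt['state_dict'])  # key names are assumptions
        print(f"=> loaded checkpoint '{path} (iter {ckpt['iter']})'")
        return ckpt['iter']
    print(f"=> no checkpoint found at '{path}'")
    return 0

start_iter = maybe_resume(torch.nn.Linear(4, 2), 'no_such_dir/checkpoints/last')
```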
0/69092 Loss: 2891.769
3200/69092 Loss: 2825.642
6400/69092 Loss: 1205.228
9600/69092 Loss: 556.734
12800/69092 Loss: 484.027
16000/69092 Loss: 465.437
19200/69092 Loss: 448.356
22400/69092 Loss: 437.048
25600/69092 Loss: 401.074
28800/69092 Loss: 289.548
32000/69092 Loss: 238.668
35200/69092 Loss: 234.046
38400/69092 Loss: 226.796
41600/69092 Loss: 222.254
44800/69092 Loss: 225.392
48000/69092 Loss: 221.208
51200/69092 Loss: 220.618
54400/69092 Loss: 213.913
57600/69092 Loss: 210.644
60800/69092 Loss: 213.087
64000/69092 Loss: 216.019
67200/69092 Loss: 208.903
Training time 0:04:53.937253
Epoch: 1 Average loss: 460.47
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_15/checkpoints/last' (iter 1)
0/69092 Loss: 188.720
3200/69092 Loss: 204.200
6400/69092 Loss: 204.811
9600/69092 Loss: 200.863
12800/69092 Loss: 202.928
16000/69092 Loss: 205.920
19200/69092 Loss: 195.467
22400/69092 Loss: 199.383
25600/69092 Loss: 200.195
28800/69092 Loss: 196.327
32000/69092 Loss: 196.155
35200/69092 Loss: 193.812
38400/69092 Loss: 193.730
41600/69092 Loss: 194.544
44800/69092 Loss: 196.956
48000/69092 Loss: 190.843
51200/69092 Loss: 192.657
54400/69092 Loss: 189.354
57600/69092 Loss: 192.162
60800/69092 Loss: 189.223
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:39:19] Job 2068276 KILLED ##
Namespace(batch_size=64, beta=4, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='beta_VAE_bs_64_ls_20', gpu_devices=[0, 1], is_beta_VAE=True, latent_name='', latent_spec_cont=20, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
created new experiment directory: rendered_chairs/beta_VAE_bs_64_ls_20
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=40, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=20, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 773035
continuous capacity is not used
=> no checkpoint found at 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_20/checkpoints/last'
0/69092 Loss: 3095.371
3200/69092 Loss: 3013.162
6400/69092 Loss: 1142.397
9600/69092 Loss: 583.239
12800/69092 Loss: 500.184
16000/69092 Loss: 479.822
19200/69092 Loss: 462.136
22400/69092 Loss: 446.272
25600/69092 Loss: 445.532
28800/69092 Loss: 451.249
32000/69092 Loss: 444.728
35200/69092 Loss: 436.249
38400/69092 Loss: 434.357
41600/69092 Loss: 450.662
44800/69092 Loss: 436.829
48000/69092 Loss: 444.385
51200/69092 Loss: 442.255
54400/69092 Loss: 431.708
57600/69092 Loss: 444.837
60800/69092 Loss: 445.616
64000/69092 Loss: 438.358
67200/69092 Loss: 448.892
Training time 0:04:22.264375
Epoch: 1 Average loss: 608.54
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_20/checkpoints/last' (iter 1)
0/69092 Loss: 419.617
3200/69092 Loss: 441.728
6400/69092 Loss: 436.680
9600/69092 Loss: 440.386
12800/69092 Loss: 437.215
16000/69092 Loss: 439.624
19200/69092 Loss: 436.731
22400/69092 Loss: 436.506
25600/69092 Loss: 439.104
28800/69092 Loss: 433.798
32000/69092 Loss: 443.554
35200/69092 Loss: 442.369
38400/69092 Loss: 441.461
41600/69092 Loss: 428.932
44800/69092 Loss: 458.124
48000/69092 Loss: 440.610
51200/69092 Loss: 433.183
54400/69092 Loss: 445.103
57600/69092 Loss: 448.074
60800/69092 Loss: 437.783
64000/69092 Loss: 442.622
67200/69092 Loss: 435.505
Training time 0:04:38.790417
Epoch: 2 Average loss: 439.99
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_20/checkpoints/last' (iter 2)
0/69092 Loss: 419.540
3200/69092 Loss: 431.518
6400/69092 Loss: 444.740
9600/69092 Loss: 437.046
12800/69092 Loss: 446.255
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:40:03] Job 2068277 KILLED ##
Namespace(batch_size=64, beta=4, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='beta_VAE_bs_64_ls_5', gpu_devices=[0, 1], is_beta_VAE=True, latent_name='', latent_spec_cont=5, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
created new experiment directory: rendered_chairs/beta_VAE_bs_64_ls_5
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=10, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=5, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 761485
continuous capacity is not used
=> no checkpoint found at 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_5/checkpoints/last'
0/69092 Loss: 2840.949
3200/69092 Loss: 2696.334
6400/69092 Loss: 878.800
9600/69092 Loss: 529.528
12800/69092 Loss: 473.136
16000/69092 Loss: 459.979
19200/69092 Loss: 427.518
22400/69092 Loss: 312.154
25600/69092 Loss: 251.443
28800/69092 Loss: 243.150
32000/69092 Loss: 238.743
35200/69092 Loss: 232.739
38400/69092 Loss: 233.067
41600/69092 Loss: 230.312
44800/69092 Loss: 223.895
48000/69092 Loss: 230.252
51200/69092 Loss: 228.705
54400/69092 Loss: 232.979
57600/69092 Loss: 229.967
60800/69092 Loss: 224.165
64000/69092 Loss: 224.971
67200/69092 Loss: 228.978
Training time 0:04:33.155919
Epoch: 1 Average loss: 426.85
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_5/checkpoints/last' (iter 1)
0/69092 Loss: 220.633
3200/69092 Loss: 223.511
6400/69092 Loss: 224.541
9600/69092 Loss: 225.067
12800/69092 Loss: 222.152
16000/69092 Loss: 222.201
19200/69092 Loss: 225.091
22400/69092 Loss: 220.790
25600/69092 Loss: 219.536
28800/69092 Loss: 214.187
32000/69092 Loss: 221.030
35200/69092 Loss: 217.724
38400/69092 Loss: 214.937
41600/69092 Loss: 221.160
44800/69092 Loss: 214.006
48000/69092 Loss: 216.696
51200/69092 Loss: 208.186
54400/69092 Loss: 198.221
57600/69092 Loss: 202.476
60800/69092 Loss: 191.258
64000/69092 Loss: 196.500
67200/69092 Loss: 196.036
Training time 0:04:43.151928
Epoch: 2 Average loss: 213.44
=> saved checkpoint 'trained_models/rendered_chairs/beta_VAE_bs_64_ls_5/checkpoints/last' (iter 2)
0/69092 Loss: 181.796
3200/69092 Loss: 190.527
6400/69092 Loss: 192.437
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:40:04] Job 2068278 KILLED ##
Namespace(batch_size=64, beta=None, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='VAE_bs_64_ls_5', gpu_devices=[0, 1], is_beta_VAE=False, latent_name='', latent_spec_cont=5, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
created new experiment directory: rendered_chairs/VAE_bs_64_ls_5
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=10, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=5, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of model parameters is 761485
continuous capacity is not used
=> no checkpoint found at 'trained_models/rendered_chairs/VAE_bs_64_ls_5/checkpoints/last'
0/69092 Loss: 2903.199
3200/69092 Loss: 2704.227
6400/69092 Loss: 982.409
9600/69092 Loss: 523.329
12800/69092 Loss: 363.221
16000/69092 Loss: 276.723
19200/69092 Loss: 250.323
22400/69092 Loss: 229.455
25600/69092 Loss: 220.906
28800/69092 Loss: 220.425
32000/69092 Loss: 216.671
35200/69092 Loss: 213.795
38400/69092 Loss: 211.775
41600/69092 Loss: 205.819
44800/69092 Loss: 206.077
48000/69092 Loss: 208.456
51200/69092 Loss: 210.820
54400/69092 Loss: 204.282
57600/69092 Loss: 204.736
60800/69092 Loss: 204.576
64000/69092 Loss: 200.975
67200/69092 Loss: 198.954
Training time 0:05:45.368750
Epoch: 1 Average loss: 390.23
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_5/checkpoints/last' (iter 1)
0/69092 Loss: 193.590
3200/69092 Loss: 194.946
6400/69092 Loss: 199.709
9600/69092 Loss: 200.276
12800/69092 Loss: 193.180
16000/69092 Loss: 192.735
19200/69092 Loss: 190.256
22400/69092 Loss: 190.013
25600/69092 Loss: 188.325
28800/69092 Loss: 189.229
32000/69092 Loss: 189.882
35200/69092 Loss: 190.248
38400/69092 Loss: 188.566
41600/69092 Loss: 185.072
44800/69092 Loss: 189.564
48000/69092 Loss: 181.238
51200/69092 Loss: 181.543
54400/69092 Loss: 182.526
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:40:20] Job 2068279 KILLED ##
Namespace(batch_size=64, beta=None, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='VAE_bs_64_ls_15', gpu_devices=[0, 1], is_beta_VAE=False, latent_name='', latent_spec_cont=15, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
created new experiment directory: rendered_chairs/VAE_bs_64_ls_15
loaded dataset rendered_chairs: 69120 training images of shape (3, 64, 64)
using 2 GPUs, named:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=30, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=15, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
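The parameter total reported below (769185) can be reproduced by summing per-layer counts from the architecture printout above. A pure-Python sketch, assuming the standard Conv2d/Linear parameter shapes (weights plus one bias per output unit; `ConvTranspose2d` counts the same way as `Conv2d`); note `Linear(256, 30)` presumably packs mu and log-var for the 15 continuous latents:

```python
def conv_params(c_in, c_out, k):
    # weight tensor (c_out, c_in, k, k) plus one bias per output channel
    return c_out * c_in * k * k + c_out

def linear_params(n_in, n_out):
    # weight matrix (n_out, n_in) plus bias vector (n_out,)
    return n_out * n_in + n_out

layers = [
    conv_params(3, 32, 4),    # img_to_last_conv (0)
    conv_params(32, 32, 4),   # img_to_last_conv (2)
    conv_params(32, 64, 4),   # img_to_last_conv (4)
    conv_params(64, 64, 4),   # img_to_last_conv (6)
    conv_params(64, 256, 4),  # last_conv_to_continuous_features
    linear_params(256, 30),   # features_to_hidden_continue (2 x 15 latents)
    linear_params(15, 256),   # latent_to_features
    conv_params(256, 64, 4),  # features_to_img, ConvTranspose2d (0)
    conv_params(64, 64, 4),   # features_to_img (2)
    conv_params(64, 32, 4),   # features_to_img (4)
    conv_params(32, 32, 4),   # features_to_img (6)
    conv_params(32, 3, 4),    # features_to_img (8)
]
total = sum(layers)
print(total)  # 769185, matching the reported total
```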
The number of parameters of the model is 769185
not using continuous capacity
=> no checkpoint found at 'trained_models/rendered_chairs/VAE_bs_64_ls_15/checkpoints/last'
0/69092 Loss: 3015.823
3200/69092 Loss: 2867.494
6400/69092 Loss: 899.528
9600/69092 Loss: 536.005
12800/69092 Loss: 478.343
16000/69092 Loss: 455.459
19200/69092 Loss: 457.437
22400/69092 Loss: 370.021
25600/69092 Loss: 263.633
28800/69092 Loss: 232.440
32000/69092 Loss: 210.459
35200/69092 Loss: 217.661
38400/69092 Loss: 216.086
41600/69092 Loss: 215.461
44800/69092 Loss: 208.221
48000/69092 Loss: 205.981
51200/69092 Loss: 208.176
54400/69092 Loss: 204.615
57600/69092 Loss: 205.887
60800/69092 Loss: 203.826
64000/69092 Loss: 202.141
67200/69092 Loss: 198.343
Training time 0:03:51.208936
Epoch: 1 Average loss: 427.71
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_15/checkpoints/last' (iter 1)
0/69092 Loss: 187.281
3200/69092 Loss: 200.427
6400/69092 Loss: 196.083
9600/69092 Loss: 202.029
12800/69092 Loss: 196.254
16000/69092 Loss: 196.466
19200/69092 Loss: 195.587
22400/69092 Loss: 192.062
25600/69092 Loss: 197.137
28800/69092 Loss: 196.870
32000/69092 Loss: 193.763
35200/69092 Loss: 196.194
38400/69092 Loss: 193.444
41600/69092 Loss: 186.353
44800/69092 Loss: 184.125
48000/69092 Loss: 179.607
51200/69092 Loss: 181.214
54400/69092 Loss: 179.105
57600/69092 Loss: 173.470
60800/69092 Loss: 163.793
64000/69092 Loss: 163.068
67200/69092 Loss: 164.580
Training time 0:03:46.574199
Epoch: 2 Average loss: 186.57
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_15/checkpoints/last' (iter 2)
0/69092 Loss: 164.363
3200/69092 Loss: 157.112
6400/69092 Loss: 152.674
9600/69092 Loss: 154.297
12800/69092 Loss: 155.036
16000/69092 Loss: 151.871
19200/69092 Loss: 151.537
22400/69092 Loss: 152.374
25600/69092 Loss: 150.578
28800/69092 Loss: 152.244
32000/69092 Loss: 150.801
35200/69092 Loss: 147.880
38400/69092 Loss: 148.003
41600/69092 Loss: 147.567
/data1/home/julien.dejasmin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
## OAR [2020-06-24 16:40:20] Job 2068280 KILLED ##
Namespace(batch_size=64, beta=None, ckpt_dir='checkpoints', ckpt_name='last', cont_capacity=None, dataset='rendered_chairs', disc_capacity=None, epochs=400, experiment_name='VAE_bs_64_ls_20', gpu_devices=[0, 1], is_beta_VAE=False, latent_name='', latent_spec_cont=20, latent_spec_disc=None, load_expe_name='', load_model_checkpoint=False, lr=0.0001, num_worker=4, print_loss_every=50, record_loss_every=50, save_model=True, save_reconstruction_image=False, save_step=1, verbose=True)
create new experiment directory: rendered_chairs/VAE_bs_64_ls_20
load dataset: rendered_chairs with 69120 train images of shape (3, 64, 64)
using 2 GPUs, named:
Tesla K80
Tesla K80
DataParallel(
(module): VAE(
(img_to_last_conv): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
)
(last_conv_to_continuous_features): Sequential(
(0): Conv2d(64, 256, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
)
(features_to_hidden_continue): Sequential(
(0): Linear(in_features=256, out_features=40, bias=True)
(1): ReLU()
)
(latent_to_features): Sequential(
(0): Linear(in_features=20, out_features=256, bias=True)
(1): ReLU()
)
(features_to_img): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU()
(4): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU()
(6): ConvTranspose2d(32, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): ReLU()
(8): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): Sigmoid()
)
)
)
The number of parameters of model is 773035
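Compared with the ls_15 run above (769185 parameters), only the two linear layers touching the latent change shape when the latent size grows from 15 to 20. A small sketch of that difference, assuming `features_to_hidden_continue` outputs 2x the latent size (mu and log-var) as the printouts suggest:

```python
def linear_params(n_in, n_out):
    # weight matrix plus bias vector
    return n_out * n_in + n_out

def latent_linear_params(latent_size):
    # the only layers whose shapes depend on the latent size:
    # Linear(256, 2*latent) for (mu, log-var) and Linear(latent, 256) back up
    return linear_params(256, 2 * latent_size) + linear_params(latent_size, 256)

delta = latent_linear_params(20) - latent_linear_params(15)
print(delta)                 # 3850
print(769185 + delta)        # 773035, the total reported for this run
```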
not using continuous capacity
=> no checkpoint found at 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last'
0/69092 Loss: 2881.933
3200/69092 Loss: 2719.794
6400/69092 Loss: 873.853
9600/69092 Loss: 554.953
12800/69092 Loss: 492.325
16000/69092 Loss: 482.132
19200/69092 Loss: 452.762
22400/69092 Loss: 330.168
25600/69092 Loss: 255.698
28800/69092 Loss: 239.942
32000/69092 Loss: 228.030
35200/69092 Loss: 218.689
38400/69092 Loss: 215.508
41600/69092 Loss: 216.039
44800/69092 Loss: 210.728
48000/69092 Loss: 214.618
51200/69092 Loss: 213.710
54400/69092 Loss: 209.650
57600/69092 Loss: 205.299
60800/69092 Loss: 203.167
64000/69092 Loss: 204.888
67200/69092 Loss: 199.354
Training time 0:01:59.952372
Epoch: 1 Average loss: 422.07
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last' (iter 1)
0/69092 Loss: 213.822
3200/69092 Loss: 195.598
6400/69092 Loss: 188.690
9600/69092 Loss: 189.246
12800/69092 Loss: 185.152
16000/69092 Loss: 182.199
19200/69092 Loss: 178.716
22400/69092 Loss: 175.190
25600/69092 Loss: 173.751
28800/69092 Loss: 173.114
32000/69092 Loss: 169.251
35200/69092 Loss: 165.858
38400/69092 Loss: 162.986
41600/69092 Loss: 160.892
44800/69092 Loss: 157.779
48000/69092 Loss: 159.470
51200/69092 Loss: 155.455
54400/69092 Loss: 153.642
57600/69092 Loss: 152.026
60800/69092 Loss: 154.318
64000/69092 Loss: 149.577
67200/69092 Loss: 149.924
Training time 0:01:58.317271
Epoch: 2 Average loss: 167.76
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last' (iter 2)
0/69092 Loss: 165.835
3200/69092 Loss: 147.308
6400/69092 Loss: 149.297
9600/69092 Loss: 142.005
12800/69092 Loss: 146.004
16000/69092 Loss: 143.574
19200/69092 Loss: 144.306
22400/69092 Loss: 142.412
25600/69092 Loss: 141.625
28800/69092 Loss: 141.184
32000/69092 Loss: 140.178
35200/69092 Loss: 142.290
38400/69092 Loss: 140.434
41600/69092 Loss: 138.432
44800/69092 Loss: 141.047
48000/69092 Loss: 138.527
51200/69092 Loss: 140.962
54400/69092 Loss: 137.973
57600/69092 Loss: 137.226
60800/69092 Loss: 136.121
64000/69092 Loss: 138.238
67200/69092 Loss: 138.245
Training time 0:01:59.670387
Epoch: 3 Average loss: 141.22
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last' (iter 3)
0/69092 Loss: 142.078
3200/69092 Loss: 136.431
6400/69092 Loss: 135.250
9600/69092 Loss: 136.257
12800/69092 Loss: 135.379
16000/69092 Loss: 135.078
19200/69092 Loss: 134.206
22400/69092 Loss: 135.185
25600/69092 Loss: 132.867
28800/69092 Loss: 137.278
32000/69092 Loss: 134.216
35200/69092 Loss: 134.020
38400/69092 Loss: 130.385
41600/69092 Loss: 135.043
44800/69092 Loss: 133.981
48000/69092 Loss: 132.545
51200/69092 Loss: 135.545
54400/69092 Loss: 134.631
57600/69092 Loss: 131.294
60800/69092 Loss: 132.418
64000/69092 Loss: 131.634
67200/69092 Loss: 133.258
Training time 0:01:58.240577
Epoch: 4 Average loss: 134.15
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last' (iter 4)
0/69092 Loss: 142.708
3200/69092 Loss: 132.561
6400/69092 Loss: 130.874
9600/69092 Loss: 131.468
12800/69092 Loss: 131.411
16000/69092 Loss: 131.824
19200/69092 Loss: 131.005
22400/69092 Loss: 132.192
25600/69092 Loss: 132.671
28800/69092 Loss: 131.997
32000/69092 Loss: 128.867
35200/69092 Loss: 130.574
38400/69092 Loss: 132.202
41600/69092 Loss: 129.819
44800/69092 Loss: 131.265
48000/69092 Loss: 129.098
51200/69092 Loss: 130.616
54400/69092 Loss: 130.498
57600/69092 Loss: 126.580
60800/69092 Loss: 130.212
64000/69092 Loss: 132.173
67200/69092 Loss: 129.980
Training time 0:01:58.078942
Epoch: 5 Average loss: 130.86
=> saved checkpoint 'trained_models/rendered_chairs/VAE_bs_64_ls_20/checkpoints/last' (iter 5)
0/69092 Loss: 124.281
3200/69092 Loss: 128.822