Logs are saved in `logs/`. Each experiment gets a `run.json` file with its hyperparameters and per-epoch metrics, plus two checkpoints: the best (by `val_loss`) and the last.
The best checkpoint is used for testing.
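The exact layout of `run.json` is not shown here, so the structure below (an `epochs` list with a `val_loss` entry per epoch) is an assumption, but a script along these lines can pull the best epoch out of a run:

```python
import json

# Hypothetical run.json layout: hyperparameters plus a list of per-epoch metrics.
# In practice this would come from json.load(open("logs/<experiment>/run.json")).
run = {
    "hparams": {"loss": "bce", "model": "bert"},
    "epochs": [
        {"epoch": 0, "val_loss": 0.52},
        {"epoch": 1, "val_loss": 0.41},
        {"epoch": 2, "val_loss": 0.44},
    ],
}

# Pick the epoch with the lowest validation loss, mirroring how the best
# checkpoint is selected for testing.
best = min(run["epochs"], key=lambda e: e["val_loss"])
print(best["epoch"], best["val_loss"])  # → 1 0.41
```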
The logger provides a simplified tensorboard-like facility. Run it with
```
python logger.py
```
Then point your browser to http://localhost:6006/.
parser.add_argument('--loss',default='bce',type=str,help='choose loss function [f1, bce] (default=bce)')
parser.add_argument('--augment_data',default=False,action='store_true',help='simulate missing abstract through augmentation (default=do not augment data)')
parser.add_argument('--transfer',default=None,type=str,help='transfer weights from checkpoint (default=do not transfer)')
parser.add_argument('--model',default='bert',type=str,help='model type [rnn, cnn, bert] (default=bert)')
parser.add_argument('--bert_flavor',default='monologg/biobert_v1.1_pubmed',type=str,help='pretrained bert model (default=monologg/biobert_v1.1_pubmed)')
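Assembled into a standalone parser (only these arguments come from the excerpt above; the surrounding training script is not shown), the options behave as follows:

```python
import argparse

# Sketch of a parser built from the options listed above; everything else
# about the training script is assumed.
parser = argparse.ArgumentParser()
parser.add_argument('--loss', default='bce', type=str, help='choose loss function [f1, bce] (default=bce)')
parser.add_argument('--augment_data', default=False, action='store_true', help='simulate missing abstract through augmentation (default=do not augment data)')
parser.add_argument('--transfer', default=None, type=str, help='transfer weights from checkpoint (default=do not transfer)')
parser.add_argument('--model', default='bert', type=str, help='model type [rnn, cnn, bert] (default=bert)')
parser.add_argument('--bert_flavor', default='monologg/biobert_v1.1_pubmed', type=str, help='pretrained bert model (default=monologg/biobert_v1.1_pubmed)')

# Example: train an RNN with the f1 loss and data augmentation enabled.
args = parser.parse_args(['--model', 'rnn', '--loss', 'f1', '--augment_data'])
print(args.model, args.loss, args.augment_data)  # → rnn f1 True
```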