🐛 Bug
To Reproduce
I have training, validation, and test datasets and have created a DataLoader for each of them, with shuffle=True for training and shuffle=False for validation and test. In one script the epoch loop iterates over the train and validation loaders; in the other, the epoch loop iterates over the train, validation, and test loaders.
Compare the outputs of the two scripts:
- https://www.kaggle.com/suchith0312/pytorch-dataloader-testing?scriptVersionId=14425205
- https://www.kaggle.com/suchith0312/pytorch-dataloader-testing?scriptVersionId=14425182
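The linked notebooks are the authoritative reproduction. A minimal sketch of the likely mechanism (dataset and variable names are mine, not from the notebooks): with shuffle=True, the DataLoader draws its permutation from torch's global RNG, so the train order depends on any other RNG consumption that happens before the epoch, such as setting up or iterating additional loaders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(8))
train_loader = DataLoader(dataset, batch_size=4, shuffle=True)

# shuffle=True draws its permutation from torch's global RNG,
# so the epoch order depends on everything else that used that RNG.
torch.manual_seed(0)
first_epoch = [batch[0].tolist() for batch in train_loader]

# Re-seeding restores the exact same order ...
torch.manual_seed(0)
same_epoch = [batch[0].tolist() for batch in train_loader]

# ... while any extra RNG consumption before the epoch
# (a stand-in here for other code paths) changes it.
torch.manual_seed(0)
_ = torch.rand(1)
shifted_epoch = [batch[0].tolist() for batch in train_loader]

print(first_epoch == same_epoch)  # True: same RNG state, same order
print(first_epoch == shifted_epoch)
```

If this is the cause, fixing the seed immediately before each train epoch (or passing a dedicated generator to the DataLoader in later PyTorch versions) should make the train ids match across the two scripts.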
Expected behavior
The train ids printed by the two scripts should be identical, but they differ.
Environment
- PyTorch version: 1.0.1.post2
- Python version: 3.6.6
- CUDA version: 10.0
- cuDNN version: 7.4.2