My understanding of shuffle=True in the PyTorch DataLoader

Understanding shuffle=True:

I used to be unclear about what shuffle actually does. Suppose the dataset is a, b, c, d and batch_size=2. With shuffling enabled, which of the following happens?

1. Batches are taken in order first and each batch is shuffled internally, i.e. first take a, b and then shuffle within that batch;

2. The whole dataset is shuffled first, and batches are then taken in order.

Reading the DataLoader source (torch/utils/data/dataloader.py) confirms it is the second case:

shuffle (bool, optional): set to ``True`` to have the data reshuffled
at every epoch (default: ``False``).

if shuffle:
    sampler = RandomSampler(dataset)  # what we get here is a shuffled sequence of indices
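
To make this concrete, here is a minimal sketch of my own (the toy list is not from the original post) showing that RandomSampler produces a permutation of all indices, and that batches are cut from that already-shuffled index stream via BatchSampler, which is how the DataLoader combines them internally:

from torch.utils.data import BatchSampler, RandomSampler

data = ['a', 'b', 'c', 'd']

# RandomSampler yields a fresh permutation of the indices 0..len(data)-1
# each time it is iterated.
print(list(RandomSampler(data)))  # e.g. [2, 0, 3, 1]

# Batches are taken from the already-shuffled index stream, i.e. shuffle
# first, then batch (case 2 above).
batches = BatchSampler(RandomSampler(data), batch_size=2, drop_last=False)
print([[data[i] for i in b] for b in batches])  # e.g. [['c', 'a'], ['d', 'b']]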

Addendum: a quick test of how shuffle=True in the PyTorch DataLoader actually works

Let's look at the code:

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class DealDataset(Dataset):
    def __init__(self):
        # iris.csv here is a purely numeric file: feature columns first,
        # an integer class label in the last column.
        xy = np.loadtxt('./iris.csv', delimiter=',', dtype=np.float32)
        # Alternative: xy = pd.read_csv('iris.csv', header=None).values
        self.x_data = torch.from_numpy(xy[:, 0:-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])
        self.len = xy.shape[0]

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.len

dealDataset = DealDataset()
train_loader2 = DataLoader(dataset=dealDataset,
                           batch_size=2,
                           shuffle=True)
#print(dealDataset.x_data)
for i, data in enumerate(train_loader2):
    inputs, labels = data
    print(inputs)
    #print("epoch:", epoch, "batch", i, "inputs", inputs.data.size(), "labels", labels.data.size())

Running this on the simple dataset confirms the behavior: on every epoch the data is randomly permuted first and then split into mini-batches of size n.
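
As a final check that the reshuffle really happens at every epoch, here is a minimal sketch with a tiny stand-in dataset (the tensor and variable names are my own, not from the post):

import torch
from torch.utils.data import DataLoader, TensorDataset

data = torch.arange(8)  # stands in for the real samples
loader = DataLoader(TensorDataset(data), batch_size=2, shuffle=True)

for epoch in range(2):
    # Each pass over the loader draws a fresh permutation before batching,
    # so the mini-batches differ from epoch to epoch.
    order = [batch[0].tolist() for batch in loader]
    print(f"epoch {epoch}: {order}")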

The above is just my personal experience; I hope it gives everyone a useful reference, and I hope you will continue to support WalkonNet.
