PyTorch Data Loading: Performance Comparison and Analysis

The traditional DataLoader pipeline takes about 10 s per batch, while reloading pre-serialized .dat files takes about 0.6 s.

import os
import time
import torch
import random
from common.coco_dataset import COCODataset

def gen_data(batch_size, data_path, target_path):
    os.makedirs(target_path, exist_ok=True)
    dataloader = torch.utils.data.DataLoader(
        COCODataset(data_path, (352, 352), is_training=False, is_scene=True),
        batch_size=batch_size,
        shuffle=False, num_workers=0, pin_memory=False,
        drop_last=True)
    start = time.time()
    for step, samples in enumerate(dataloader):
        images, labels, image_paths = samples["image"], samples["label"], samples["img_path"]
        print("time", images.size(0), time.time() - start)
        start = time.time()
        # Uncomment to serialize each batch to a .dat file for fast reloading:
        # torch.save(samples, os.path.join(target_path, str(step) + '.dat'))
        print(step)
def cat_100(target_path, batch_size=100):
    paths = os.listdir(target_path)
    indices = list(range(len(paths)))
    random.shuffle(indices)
    images, labels, image_paths = [], [], []
    start = time.time()
    for i in range(len(paths)):
        samples = torch.load(os.path.join(target_path, str(indices[i]) + ".dat"))
        image, label, image_path = samples["image"], samples["label"], samples["img_path"]
        images.append(image.cuda())   # move each chunk to the GPU as it is loaded
        labels.append(label.cuda())
        image_paths.append(image_path)
        if i % batch_size == batch_size - 1:   # a full batch has accumulated
            images = torch.cat(images, 0)
            print("time", images.size(0), time.time() - start)
            images, labels, image_paths = [], [], []
            start = time.time()
if __name__ == '__main__':
    os.environ["CUDA_VISIBLE_DEVICES"] = '3'
    batch_size = 320
    # target_path = 'd:/test_1000/'
    target_path = 'D:/img_2/'
    data_path = r'D:\dataset\origin_all_datas\_2train'
    gen_data(batch_size, data_path, target_path)
    # cat_100(target_path, batch_size)
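The save-then-reload round trip above can be sketched with synthetic tensors. The file names, tensor shapes, and temp directory below are placeholders, not the COCO data from the script:

```python
import os
import tempfile
import torch

# Serialize a few synthetic "samples" dicts, mimicking the commented-out
# torch.save(...) in gen_data, then reload one as cat_100 would.
target_path = tempfile.mkdtemp()

for step in range(3):
    samples = {
        "image": torch.randn(4, 3, 352, 352),   # a batch of 4 fake images
        "label": torch.zeros(4, 5),
        "img_path": [f"img_{step}_{k}.jpg" for k in range(4)],
    }
    torch.save(samples, os.path.join(target_path, f"{step}.dat"))

# Reloading returns the same tensors without re-decoding any image files,
# which is where the 10 s -> 0.6 s difference comes from.
reloaded = torch.load(os.path.join(target_path, "0.dat"))
print(reloaded["image"].shape)  # torch.Size([4, 3, 352, 352])
```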

This version reads the data quickly as well: about 450 ms per batch at batch_size = 320.

def cat_100(target_path, batch_size=100):
    paths = os.listdir(target_path)
    indices = list(range(len(paths)))
    random.shuffle(indices)
    images, labels, image_paths = [], [], []
    start = time.time()
    for i in range(len(paths)):
        samples = torch.load(os.path.join(target_path, str(indices[i]) + ".dat"))
        image, label, image_path = samples["image"], samples["label"], samples["img_path"]
        images.append(image)   # keep chunks on the CPU; defer the GPU copy
        labels.append(label)
        image_paths.append(image_path)
        if (i + 1) % batch_size != 0:   # keep accumulating until a full batch
            continue
        # move to the GPU only when concatenating a full batch
        images = torch.cat([image.cuda() for image in images], 0)
        print("time", images.size(0), time.time() - start)
        images, labels, image_paths = [], [], []
        start = time.time()
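The accumulate-and-flush logic of that loop can be checked in isolation with small CPU tensors. The chunk sizes and counts below are arbitrary, and the .cuda() calls are left out so the sketch runs without a GPU:

```python
import torch

chunk_size = 4      # tensors per saved .dat file (arbitrary for this sketch)
batch_size = 3      # number of chunks to concatenate per flush
num_chunks = 7      # 7 chunks -> two full batches plus one leftover chunk

images = []
flushed = []
for i in range(num_chunks):
    images.append(torch.randn(chunk_size, 3, 8, 8))  # stand-in for torch.load(...)
    if (i + 1) % batch_size != 0:   # same condition as the loop above
        continue
    batch = torch.cat(images, 0)    # one big tensor per full batch
    flushed.append(batch.size(0))
    images = []

print(flushed)        # [12, 12]  (two flushes of 3 chunks x 4 samples each)
print(len(images))    # 1         (the leftover chunk is never flushed)
```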

Addendum: Fixes for PyTorch Data Loading and Processing Errors

While recently working through the PyTorch tutorial on data loading, I ran into a few small problems. They are all solved now, and I am recording the errors here:

Error when reading the dataset:

AttributeError: 'Series' object has no attribute 'as_matrix'

Error when displaying an image:

ValueError: Masked arrays must be 1-D

The figure window for a single image flashes and disappears immediately

Error when displaying scatter points on multiple images:

TypeError: show_landmarks() got an unexpected keyword argument 'image'

Solutions

The main problem is in the line below: the goal is to convert the Series into a matrix, which a call to np.mat accomplishes. (Series.as_matrix() was deprecated and later removed from pandas; Series.to_numpy() is the current replacement.)

Before

landmarks = landmarks_frame.iloc[n, 1:].as_matrix()

After

landmarks = np.mat(landmarks_frame.iloc[n, 1:])
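A minimal sketch of that conversion, using made-up landmark values in place of the tutorial's CSV row. It uses Series.to_numpy(), the replacement pandas itself recommends for the removed as_matrix():

```python
import pandas as pd

# A stand-in for landmarks_frame.iloc[n, 1:]: alternating x/y coordinates.
row = pd.Series([10.0, 20.0, 30.0, 40.0])

# row.as_matrix() raises AttributeError on modern pandas.
# Series.to_numpy() is the documented replacement:
landmarks = row.to_numpy().astype('float').reshape(-1, 2)

print(landmarks.shape)  # (2, 2) -> two (x, y) landmark pairs
```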

The x and y coordinates passed to scatter should each be a vector or a list, so calling tolist() on the landmarks slices fixes it.

Before

plt.scatter(landmarks[:,0],landmarks[:,1],s=10,marker='.',c='r')

After

plt.scatter(landmarks[:,0].tolist(),landmarks[:,1].tolist(),s=10,marker='.',c='r')

If plt.ion() was used earlier to turn interactive mode on, then plt.ioff() must be called before plt.show(). Here it is added directly inside the function, so plt.ioff() does not have to be repeated before every plt.show() call.

Before

def show_landmarks(imgs, landmarks):
    '''Display an image with its landmarks'''
    plt.imshow(imgs)
    plt.scatter(landmarks[:, 0].tolist(), landmarks[:, 1].tolist(), s=10, marker='.', c='r')  # draw red landmark dots
    plt.pause(1)  # keep the plot window open briefly

After

def show_landmarks(imgs, landmarks):
    '''Display an image with its landmarks'''
    plt.imshow(imgs)
    plt.scatter(landmarks[:, 0].tolist(), landmarks[:, 1].tolist(), s=10, marker='.', c='r')  # draw red landmark dots
    plt.pause(1)  # keep the plot window open briefly
    plt.ioff()   # turn interactive mode back off so a later plt.show() blocks
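The interactive-mode toggle can be verified headlessly. The Agg backend below is only so the sketch runs without a display:

```python
import matplotlib
matplotlib.use('Agg')          # non-GUI backend so this runs anywhere
import matplotlib.pyplot as plt

plt.ion()                      # interactive mode on: figures draw immediately
print(plt.isinteractive())     # True

plt.ioff()                     # back off, so a later plt.show() blocks
print(plt.isinteractive())     # False
```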

Online sources say that for a dict-type sample you can pass each key's value with **sample, but that raised the error above: the dict keys must match the function's parameter names exactly, and here the parameter is named imgs while the key is 'image'. Writing the arguments out explicitly works.

Before

show_landmarks(**sample)

After

show_landmarks(sample['image'],sample['landmarks'])
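The root cause of the TypeError is visible in miniature: **sample turns each dict key into a keyword argument, so the keys must match the parameter names. The functions below are toy stand-ins, not the tutorial's actual plotting code:

```python
sample = {'image': 'img-data', 'landmarks': 'pts-data'}

def show_landmarks(imgs, landmarks):       # parameter named 'imgs'...
    return imgs, landmarks

# show_landmarks(**sample) fails, because the key 'image' has no
# matching parameter name:
try:
    show_landmarks(**sample)
except TypeError as e:
    print(e)   # ... got an unexpected keyword argument 'image'

# Renaming the parameter to match the key makes **sample work:
def show_landmarks_fixed(image, landmarks):
    return image, landmarks

print(show_landmarks_fixed(**sample))      # ('img-data', 'pts-data')
```

So the explicit-argument fix above works, and renaming the parameter from imgs to image is an equally valid fix that keeps the **sample shorthand.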

The above is my personal experience. I hope it gives everyone a useful reference, and I hope everyone will continue to support WalkonNet. If there are mistakes or anything I have not fully considered, please do not hesitate to point them out.
