Viewing network parameters, GPU memory usage, and related information in PyTorch
1. Using torchstat
pip install torchstat

from torchstat import stat
import torchvision.models as models

model = models.resnet152()
stat(model, (3, 224, 224))
Regarding the arguments of stat: the first should be the model, and the second is the input size, where 3 is the number of channels. I have not looked into the function's parameters in detail, and I do not know why no argument hints are shown when calling it.
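As a minimal sketch (assuming torchstat is installed; the TinyNet below is a hypothetical toy model standing in for your own), stat also works on a custom nn.Module. Note that the second argument is (channels, height, width) with no batch dimension:

from torchstat import stat
import torch.nn as nn

# Hypothetical toy model, used only to illustrate the call.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, 10)

    def forward(self, x):
        x = self.conv(x)
        x = x.view(x.size(0), -1)
        return self.fc(x)

# Second argument is (channels, height, width) -- no batch dimension.
stat(TinyNet(), (3, 32, 32))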
2. Using torchsummary
pip install torchsummary

from torchsummary import summary

summary(model.cuda(), input_size=(3, 32, 32), batch_size=-1)
This function gives hints for its arguments directly, and you can see there is an explicit batch_size input, so my feeling is that this function is a bit nicer. However, for some reason it kept throwing an error on my machine:
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
Update: after asking on a forum, the cause of the error was found; you only need to change
pip install torchsummary
to
pip install torch-summary
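If switching packages is not convenient, another workaround (a sketch assuming the original torchsummary package and that your installed version supports the device argument) is to run the summary on the CPU, which avoids converting CUDA tensors to numpy:

from torchsummary import summary

# Keep the model and the summary computation on the CPU.
summary(model.cpu(), input_size=(3, 32, 32), batch_size=-1, device="cpu")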
Supplement: viewing model parameters in PyTorch and computing the total and trainable parameter counts
Viewing model parameters (using AlexNet as an example)
import torch
import torch.nn as nn
import torchvision

class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4, padding=2, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0),
            nn.Conv2d(in_channels=96, out_channels=192, kernel_size=5, stride=1, padding=2, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0),
            nn.Conv2d(in_channels=192, out_channels=384, kernel_size=3, stride=1, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(in_features=256*6*6, out_features=4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(in_features=4096, out_features=4096),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=4096, out_features=num_classes),
        )

    def forward(self, x):
        x = self.feature_extraction(x)
        x = x.view(x.size(0), 256*6*6)
        x = self.classifier(x)
        return x

if __name__ == '__main__':
    # model = torchvision.models.AlexNet()
    model = AlexNet()
    # Print the model parameters themselves
    # for param in model.parameters():
    #     print(param)
    # Print parameter names and shapes
    for name, parameters in model.named_parameters():
        print(name, ':', parameters.size())
feature_extraction.0.weight : torch.Size([96, 3, 11, 11])
feature_extraction.3.weight : torch.Size([192, 96, 5, 5])
feature_extraction.6.weight : torch.Size([384, 192, 3, 3])
feature_extraction.8.weight : torch.Size([256, 384, 3, 3])
feature_extraction.10.weight : torch.Size([256, 256, 3, 3])
classifier.1.weight : torch.Size([4096, 9216])
classifier.1.bias : torch.Size([4096])
classifier.4.weight : torch.Size([4096, 4096])
classifier.4.bias : torch.Size([4096])
classifier.6.weight : torch.Size([1000, 4096])
classifier.6.bias : torch.Size([1000])
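Each of these counts is simply the product of the shape's dimensions, so they can be sanity-checked by hand, for example:

# Parameter count of a layer = product of its shape dimensions.
print(96 * 3 * 11 * 11)    # 34848 weights in feature_extraction.0
print(4096 * 9216 + 4096)  # 37752832 weights + biases in classifier.1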
Computing the total and trainable parameter counts
def get_parameter_number(model):
    total_num = sum(p.numel() for p in model.parameters())
    trainable_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return {'Total': total_num, 'Trainable': trainable_num}
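A usage sketch: calling it on the AlexNet defined above should report the same value for both entries, since no parameter has been frozen with requires_grad=False (roughly 61 million parameters in total):

model = AlexNet()
print(get_parameter_number(model))
# e.g. {'Total': ..., 'Trainable': ...} -- both around 61 million for this AlexNet,
# because every parameter still requires gradients.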
Third-party tools
from torchstat import stat
import torchvision.models as models

model = models.alexnet()
stat(model, (3, 224, 224))
from torchvision.models import alexnet
import torch
from thop import profile

model = alexnet()
input = torch.randn(1, 3, 224, 224)
flops, params = profile(model, inputs=(input, ))
print(flops, params)
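Beyond these tools, the memory taken by the weights themselves (one part of the GPU memory mentioned in the title, not counting activations, gradients, or optimizer state) can be estimated directly in PyTorch; a minimal sketch:

# Minimal sketch: estimate how much memory the model's parameters occupy.
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print('Parameter memory: %.2f MB' % (param_bytes / 1024 ** 2))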
The above reflects my personal experience; I hope it can serve as a reference, and I hope everyone will continue to support WalkonNet. If anything is wrong or incomplete, corrections are welcome.