PyTorch Tutorial: Built-in Model Source Implementations

Translated from
https://pytorch.org/docs/stable/torchvision/models.html
This article mainly covers how to use torchvision.models.

torchvision.models

torchvision.models contains the following models:

  • AlexNet
  • VGG
  • ResNet
  • SqueezeNet
  • DenseNet
  • Inception v3

Randomly initialized models

import torchvision.models as models
resnet18 = models.resnet18()
alexnet = models.alexnet()
vgg16 = models.vgg16()
squeezenet = models.squeezenet1_0()
densenet = models.densenet161()
inception = models.inception_v3()
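
As a quick sanity check, a randomly initialized model can be run on a dummy batch right away. The sketch below is illustrative and assumes the (3, 224, 224) input size described later in this article:

import torch

x = torch.randn(1, 3, 224, 224)   # dummy mini-batch of one 3x224x224 RGB image
output = resnet18(x)              # uses the resnet18 created above
print(output.shape)               # torch.Size([1, 1000]): 1000 ImageNet classes by default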

Using pretrained weights

PyTorch provides pretrained models through torch.utils.model_zoo; passing pretrained=True constructs a model with pretrained weights.

It is used as follows:

resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)

Instantiating a pretrained model automatically downloads its weights into a cache directory. This directory can be set with the TORCH_MODEL_ZOO environment variable; see torch.utils.model_zoo.load_url() for details.
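
For illustration, here is a minimal sketch of controlling the cache directory and loading weights by hand with torch.utils.model_zoo.load_url(). The cache path is a made-up example, and the checkpoint URL is the ResNet-18 URL used by these torchvision releases (treat it as an assumption):

import os
import torch.utils.model_zoo as model_zoo
import torchvision.models as models

os.environ['TORCH_MODEL_ZOO'] = '/tmp/torch_models'  # hypothetical cache directory

# load_url() downloads the checkpoint (if not already cached) and returns a state_dict
state_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet18-5c106cde.pth')
resnet18 = models.resnet18()          # random weights
resnet18.load_state_dict(state_dict)  # now equivalent to models.resnet18(pretrained=True)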

Some models use modules that behave differently during training and evaluation, for example batch normalization. Switch between the two modes with model.train() and model.eval(); see the documentation of train() and eval() for more details.
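
A minimal sketch of switching modes around inference and training:

model = models.resnet18(pretrained=True)

model.eval()    # inference mode: BatchNorm uses running statistics, Dropout is disabled
# ... run validation / inference here ...

model.train()   # back to training mode before the next training step
# ... run training here ...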

All pretrained networks expect their input to be normalized in the same way: mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images must be loaded into the range [0, 1] and then normalized with mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

You can use the following transform to normalize:

from torchvision import transforms
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
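
Putting it together, here is a small end-to-end preprocessing and inference sketch. The 256/224 resize-and-crop sizes follow the ImageNet example linked below, and 'example.jpg' is a placeholder path:

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                          # scales the image into [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('example.jpg').convert('RGB')      # placeholder image path
batch = preprocess(img).unsqueeze(0)                # add the mini-batch dimension

model = models.resnet18(pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1))                         # predicted ImageNet class index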

A full example of this on ImageNet can be found here:
https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101

For the current accuracy of each of these models, see the table in the official documentation linked above.

Below is the list of model constructors; for the concrete implementation of each model you can read its source code.

### ALEXNET
torchvision.models.alexnet(pretrained=False, **kwargs)[SOURCE]
AlexNet model architecture from the “One weird trick…” paper.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
### VGG
torchvision.models.vgg11(pretrained=False, **kwargs)[SOURCE]
VGG 11-layer model (configuration “A”)
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg11_bn(pretrained=False, **kwargs)[SOURCE]
VGG 11-layer model (configuration “A”) with batch normalization
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg13(pretrained=False, **kwargs)[SOURCE]
VGG 13-layer model (configuration “B”)
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg13_bn(pretrained=False, **kwargs)[SOURCE]
VGG 13-layer model (configuration “B”) with batch normalization
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg16(pretrained=False, **kwargs)[SOURCE]
VGG 16-layer model (configuration “D”)
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg16_bn(pretrained=False, **kwargs)[SOURCE]
VGG 16-layer model (configuration “D”) with batch normalization
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg19(pretrained=False, **kwargs)[SOURCE]
VGG 19-layer model (configuration “E”)
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.vgg19_bn(pretrained=False, **kwargs)[SOURCE]
VGG 19-layer model (configuration “E”) with batch normalization
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
### RESNET
torchvision.models.resnet18(pretrained=False, **kwargs)[SOURCE]
Constructs a ResNet-18 model.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.resnet34(pretrained=False, **kwargs)[SOURCE]
Constructs a ResNet-34 model.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.resnet50(pretrained=False, **kwargs)[SOURCE]
Constructs a ResNet-50 model.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.resnet101(pretrained=False, **kwargs)[SOURCE]
Constructs a ResNet-101 model.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.resnet152(pretrained=False, **kwargs)[SOURCE]
Constructs a ResNet-152 model.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
### SQUEEZENET
torchvision.models.squeezenet1_0(pretrained=False, **kwargs)[SOURCE]
SqueezeNet model architecture from the “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size” paper.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.squeezenet1_1(pretrained=False, **kwargs)[SOURCE]
SqueezeNet 1.1 model from the official SqueezeNet repo. SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters than SqueezeNet 1.0, without sacrificing accuracy.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
### DENSENET
torchvision.models.densenet121(pretrained=False, **kwargs)[SOURCE]
Densenet-121 model from “Densely Connected Convolutional Networks”
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.densenet169(pretrained=False, **kwargs)[SOURCE]
Densenet-169 model from “Densely Connected Convolutional Networks”
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.densenet161(pretrained=False, **kwargs)[SOURCE]
Densenet-161 model from “Densely Connected Convolutional Networks”
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
torchvision.models.densenet201(pretrained=False, **kwargs)[SOURCE]
Densenet-201 model from “Densely Connected Convolutional Networks”
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet
### INCEPTION V3
torchvision.models.inception_v3(pretrained=False, **kwargs)[SOURCE]
Inception v3 model architecture from “Rethinking the Inception Architecture for Computer Vision”.
Parameters:	pretrained (bool) – If True, returns a model pre-trained on ImageNet

This concludes the article on PyTorch's built-in models and their source implementations. For more on PyTorch's built-in models, see the other related articles on WalkonNet!
