Contents
1. Introduction to ResNeSt
2. Main Content and Contributions of ResNeSt
2.1 Main Contributions
2.2 Related Work
2.2.1 Modern CNN Architectures
2.2.2 Multi-path and Feature-map Attention
2.2.3 Neural Architecture Search
2.3 Split-Attention Networks
2.3.1 Split-Attention Block
2.3.2 ResNeSt Block
2.3.3 Instantiation, Acceleration, and Computational Cost
2.3.4 Relation to Existing Attention Methods
2.4 Training Strategy
2.5 Experimental Results
2.6 ResNeSt Implementation Code
2.6.1 splat.py
2.6.2 resnet.py
2.6.3 ablation.py
2.6.4 resnest.py
1. Introduction to ResNeSt
Although image classification models have continued to advance, most downstream applications such as object detection and semantic segmentation still adopt ResNet variants as the backbone network because of their simple and modular structure. The paper proposes a modular Split-Attention block; stacking these Split-Attention blocks in ResNet style yields a new ResNet variant called **ResNeSt**. The network preserves the overall ResNet structure, so it can be used directly in downstream tasks without introducing additional computational cost.
Many applied studies still use ResNet or one of its variants as the backbone CNN. Its simple, modular design is easy to adapt to a wide range of tasks. However, since ResNet was originally designed for image classification, it may not be suitable for every vertical application, owing to its limited receptive-field size and lack of cross-channel interaction. This means that improving performance on a given computer vision task often requires "network surgery" to modify ResNet and make it more effective for that specific task.
For example, some methods add a pyramid module, introduce long-range connections, or use cross-channel feature-map attention. Although these approaches do improve transfer-learning performance on certain tasks, they raise a question: **can we create a versatile backbone with universally improved feature representations, thereby improving performance across multiple tasks at the same time?** Cross-channel information has proven successful in downstream applications, while recent image classification networks focus more on group or depth-wise convolution. Despite their superior computation/accuracy trade-off on classification tasks, these models do not transfer well to other tasks, because their isolated representations fail to capture relationships across channels. A network with cross-channel representations is therefore a direction worth exploring.
2. Main Content and Contributions of ResNeSt
2.1 Main Contributions
First contribution: a simple architectural modification of ResNet that incorporates feature-map split attention within the individual network blocks.
More specifically, each block divides the feature map into several groups (along the channel dimension) and finer-grained subgroups, or splits, and the feature representation of each group is determined by a weighted combination of the representations of its splits (with weights chosen based on global contextual information). The resulting unit is called a Split-Attention block, and it remains simple and modular. Stacking several Split-Attention blocks creates a ResNet-like network called ResNeSt (where the S stands for "split"). The architecture requires no more computation than existing ResNet variants and is easy to adopt as a backbone for other vision tasks.
Second contribution: large-scale benchmarks on image classification and transfer-learning applications.
Models using a ResNeSt backbone achieve state-of-the-art performance on several tasks, namely image classification, object detection, instance segmentation, and semantic segmentation. The proposed ResNeSt outperforms all existing ResNet variants at the same computational efficiency, and even achieves better accuracy trade-offs than state-of-the-art CNN models produced by neural architecture search. (A Cascade-RCNN model using a ResNeSt-101 backbone achieves 48.3% box mAP on MS-COCO detection and 41.56% mask mAP on MS-COCO instance segmentation; a single DeepLabV3 model, again with a ResNeSt-101 backbone, reaches 46.9% mIoU on the ADE20K scene-parsing validation set, surpassing the previous best result by more than 1% mIoU.)
2.2 Related Work
2.2.1 Modern CNN Architectures
AlexNet: Deep convolutional neural networks have dominated image classification ever since AlexNet. Following this trend, research shifted from hand-engineered features to engineered network architectures. NIN first used a global average pooling layer to replace the parameter-heavy fully connected layers, and adopted 1×1 convolutional layers to learn non-linear combinations of feature-map channels, the first form of feature-map attention mechanism.
VGGNet: proposed a modular network design strategy of repeatedly stacking network blocks of the same type, which simplifies the workflow of network design for downstream applications and of transfer learning.
Highway network: introduced highway connections that let information flow across several layers without attenuation, which helps the network converge.
ResNet: building on the success of this earlier work, introduced identity skip connections, which alleviate the vanishing-gradient problem in deep neural networks and allow networks to learn deeper feature representations. ResNet has become one of the most successful CNN architectures and is widely adopted across computer vision applications.
2.2.2 Multi-path and Feature-map Attention
Multi-path representation has proven successful in GoogLeNet, where each network block consists of different convolution kernels. ResNeXt adopts group convolution in the ResNet bottleneck block, converting the multi-path structure into a unified operation. SE-Net introduces a channel-attention mechanism by adaptively recalibrating channel-wise feature responses. SK-Net brings feature-map attention across two network branches. Inspired by these methods, the ResNeSt network generalizes channel-wise attention into feature-map group representation, which can be modularized and accelerated using unified CNN operators.
Figure 1: Comparing a ResNeSt block with SE-Net and SK-Net blocks.
A detailed view of the Split-Attention unit is shown in Figure 2. For simplicity of illustration, the ResNeSt block is shown in the cardinality-major view (feature-map groups with the same cardinal group index are placed next to each other). The actual implementation uses the radix-major layout, which can be modularized and accelerated with group convolutions and standard CNN layers.
2.2.3 Neural Architecture Search
With the growth of computing power, interest has shifted from manually designed architectures to systematically searched architectures that are adaptively tailored to a specific task. Recent neural architecture search algorithms have adaptively produced CNN architectures that achieve state-of-the-art classification performance, e.g. AmoebaNet, MnasNet, and EfficientNet. Despite their great success in image classification, their meta network structures differ from one another, which makes it hard for downstream models to build on top of them. In contrast, the ResNeSt model preserves the ResNet meta structure, so it can be applied directly to many existing downstream models. It can also enlarge the search space for neural architecture search and has the potential to improve overall performance.
2.3 Split-Attention Networks
2.3.1 Split-Attention Block
The Split-Attention block is a computational unit consisting of a feature-map group operation and split-attention operations.
1. Feature-map Group
As in the ResNeXt block, the input feature map can be divided into several groups along the channel dimension, with the number of feature-map groups given by a cardinality hyperparameter $K$. The resulting feature-map groups are called cardinal groups. ResNeSt introduces an additional radix hyperparameter $R$ that denotes the number of splits within a cardinal group, so the total number of feature groups is $G = KR$. A series of transformations $\{\mathcal{F}_1, \mathcal{F}_2, \dots, \mathcal{F}_G\}$ can then be applied to the groups, giving the intermediate representation $U_i = \mathcal{F}_i(X)$ for each group, with $i \in \{1, 2, \dots, G\}$ (see the sketch below).
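As a small illustration of this grouping (a minimal sketch; the tensor shapes and toy convolutions here are assumptions, not the paper's exact configuration):

import torch
from torch import nn

K, R = 2, 2                      # cardinality and radix
G = K * R                        # total number of feature-map groups, G = KR
X = torch.randn(1, 64, 56, 56)   # input feature map (N, C, H, W)

# Split the input into G equal groups along the channel dimension ...
groups = torch.split(X, X.shape[1] // G, dim=1)
# ... and apply one transformation F_i per group (a toy 3x3 conv here).
transforms = [nn.Conv2d(64 // G, 64 // G, kernel_size=3, padding=1) for _ in range(G)]
U = [F_i(X_i) for F_i, X_i in zip(transforms, groups)]   # U_i = F_i(X_i)

In the optimized implementation (splat.py, Section 2.6.1), these G transformations are fused into a single convolution with groups = K·R.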
2. Split Attention in Cardinal Groups
Figure 3: Split-Attention within a cardinal group. (For ease of visualization, the figure uses c = C/K.)
A combined representation for each cardinal group can be obtained by fusing the splits with an element-wise summation. The representation of the $k$-th cardinal group is

$$\hat{U}^k = \sum_{j=R(k-1)+1}^{Rk} U_j,$$

where $\hat{U}^k \in \mathbb{R}^{H \times W \times C/K}$ for $k \in \{1, 2, \dots, K\}$, and $H$, $W$, $C$ are the sizes of the block's output feature map.

Global contextual information with embedded channel-wise statistics is gathered with global average pooling across the spatial dimensions, giving $s^k \in \mathbb{R}^{C/K}$, whose $c$-th component is

$$s_c^k = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \hat{U}_c^k(i, j).$$

The weighted fusion of the cardinal group representation $V^k \in \mathbb{R}^{H \times W \times C/K}$ is aggregated with channel-wise soft attention, where each feature-map channel is produced as a weighted combination over the splits. The $c$-th channel is

$$V_c^k = \sum_{i=1}^{R} a_i^k(c)\, U_{R(k-1)+i},$$

where $a_i^k(c)$ denotes a (soft) assignment weight given by

$$a_i^k(c) = \begin{cases} \dfrac{\exp\left(\mathcal{G}_i^c(s^k)\right)}{\sum_{j=1}^{R} \exp\left(\mathcal{G}_j^c(s^k)\right)} & \text{if } R > 1, \\[1ex] \dfrac{1}{1 + \exp\left(-\mathcal{G}_i^c(s^k)\right)} & \text{if } R = 1, \end{cases}$$

and the mapping $\mathcal{G}_i^c$ determines the weight of each split for the $c$-th channel based on the global context representation $s^k$.
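These equations can be sketched directly with plain tensor operations (a minimal illustration for one cardinal group, not the optimized radix-major implementation in Section 2.6.1; the shapes, the hidden width of the mapping G, and the helper names are assumptions):

import torch
from torch import nn

def split_attention_cardinal_group(splits, fc1, fc2):
    # splits: a list of R tensors U_{R(k-1)+1..Rk}, each (N, C/K, H, W),
    # i.e. the splits belonging to one cardinal group.
    R, Ck = len(splits), splits[0].shape[1]
    U_hat = sum(splits)                         # element-wise sum over splits
    s = U_hat.mean(dim=(2, 3))                  # global average pooling -> (N, C/K)
    logits = fc2(torch.relu(fc1(s)))            # mapping G, giving (N, R * C/K)
    logits = logits.view(-1, R, Ck)             # (N, R, C/K)
    if R > 1:
        a = torch.softmax(logits, dim=1)        # softmax across the splits
    else:
        a = torch.sigmoid(logits)               # R = 1 degenerates to a sigmoid
    # Weighted combination over splits, channel by channel: V_c = sum_i a_i(c) U_i
    return sum(a[:, i, :, None, None] * splits[i] for i in range(R))

# Usage on random data:
N, Ck, H, W, R = 2, 16, 8, 8, 2
splits = [torch.randn(N, Ck, H, W) for _ in range(R)]
fc1, fc2 = nn.Linear(Ck, 32), nn.Linear(32, R * Ck)
V = split_attention_cardinal_group(splits, fc1, fc2)   # -> (2, 16, 8, 8)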
2.3.2 ResNeSt Block
The cardinal group representations are then concatenated along the channel dimension: $V = \operatorname{Concat}\{V^1, V^2, \dots, V^K\}$. As in a standard residual block, if the input and output feature maps share the same shape, the final output $Y$ of the Split-Attention block is produced with a shortcut connection: $Y = V + X$. For blocks with a stride, an appropriate transformation $T$ is applied to the shortcut connection to align the output shapes: $Y = V + T(X)$. For example, $T$ can be a strided convolution, or a convolution combined with pooling, as sketched below.
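A minimal sketch of the shortcut transform $T$ when shapes change (this mirrors the avg_down option in _make_layer in resnet.py below; treating it as the canonical $T$ is an assumption):

from torch import nn

def make_shortcut(in_channels, out_channels, stride):
    if stride == 1 and in_channels == out_channels:
        return nn.Identity()                    # Y = V + X
    # Strided average pooling followed by a 1x1 convolution aligns both
    # the spatial size and the channel count of the shortcut branch.
    return nn.Sequential(                       # Y = V + T(X)
        nn.AvgPool2d(kernel_size=stride, stride=stride,
                     ceil_mode=True, count_include_pad=False),
        nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_channels),
    )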
2.3.3 Instantiation, Acceleration, and Computational Cost
Figure 1 (right) shows an instantiation of the Split-Attention block, in which the group transformation $\mathcal{F}_i$ is a 1×1 convolution followed by a 3×3 convolution, and the attention weight function $\mathcal{G}$ is parameterized by two fully connected layers with a ReLU activation. The figure is drawn in the cardinality-major view (feature-map groups with the same cardinality index are placed next to each other) so that the overall logic is easy to follow. By switching the layout to the radix-major view, the block can easily be accelerated with standard CNN layers such as group convolution, a grouped fully connected layer, and a softmax operation (this is the layout implemented in splat.py in Section 2.6.1). The number of parameters and FLOPs of a Split-Attention block is roughly the same as that of a residual block with the same cardinality and number of channels.
2.3.4 Relation to Existing Attention Methods
The squeeze-and-attention idea (called "excitation" in the original paper), first introduced in SE-Net, uses global context to predict channel-wise attention factors. With radix = 1, the Split-Attention block applies a squeeze-and-attention operation to each cardinal group, whereas SE-Net operates on top of the entire block regardless of the multiple groups. The earlier SK-Net model introduced feature attention between two network branches, but its operations were not optimized for training efficiency or for scaling to large neural networks. This method generalizes prior work on feature-map attention to the cardinal-group setting, and its implementation remains computationally efficient. Figure 1 shows the overall comparison with SE-Net and SK-Net blocks.
2.4 Training Strategy
Large Mini-batch Distributed Training
Label Smoothing (sketched after this list)
Auto Augmentation
Mixup Training (sketched after this list)
Large Crop Size
Regularization
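Two of the strategies above, label smoothing and mixup, can be sketched as follows (a minimal illustration; the eps and alpha values are common defaults and an assumption here, not necessarily the paper's exact settings):

import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, eps=0.1):
    # Soften the one-hot target: (1 - eps) on the true class,
    # eps spread uniformly over all classes.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(1)).squeeze(1)
    smooth = -log_probs.mean(dim=-1)
    return ((1.0 - eps) * nll + eps * smooth).mean()

def mixup(x, y, alpha=0.2):
    # Train on convex combinations of input pairs; the loss is the same
    # convex combination of the per-target losses:
    #   lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm], y, y[perm], lam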
2.5 Experimental Results
Classification
Object Detection
Instance Segmentation
Semantic Segmentation
2.6 ResNeSt Implementation Code
2.6.1 splat.py
"""Split-Attention"""
import torch
from torch import nn
import torch.nn.functional as F
from torch.nn import Conv2d, Module, Linear, BatchNorm2d, ReLU
from torch.nn.modules.utils import _pair
__all__ = ['SplAtConv2d']
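# Note: the original file references DropBlock2D below without importing or
# defining it, so dropblock_prob > 0.0 would raise a NameError. A stub that
# mirrors the placeholder in resnet.py keeps the file importable (supplying a
# real DropBlock implementation is left to the reader):
class DropBlock2D(object):
    def __init__(self, *args, **kwargs):
        raise NotImplementedError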
class SplAtConv2d(Module):
"""Split-Attention Conv2d
"""
def __init__(self, in_channels, channels, kernel_size, stride=(1, 1), padding=(0, 0),
dilation=(1, 1), groups=1, bias=True,
radix=2, reduction_factor=4,
rectify=False, rectify_avg=False, norm_layer=None,
dropblock_prob=0.0, **kwargs):
super(SplAtConv2d, self).__init__()
padding = _pair(padding)
self.rectify = rectify and (padding[0] > 0 or padding[1] > 0)
self.rectify_avg = rectify_avg
inter_channels = max(in_channels*radix//reduction_factor, 32)
self.radix = radix
self.cardinality = groups
self.channels = channels
self.dropblock_prob = dropblock_prob
if self.rectify:
from rfconv import RFConv2d
self.conv = RFConv2d(in_channels, channels*radix, kernel_size, stride, padding, dilation,
groups=groups*radix, bias=bias, average_mode=rectify_avg, **kwargs)
else:
self.conv = Conv2d(in_channels, channels*radix, kernel_size, stride, padding, dilation,
groups=groups*radix, bias=bias, **kwargs)
self.use_bn = norm_layer is not None
if self.use_bn:
self.bn0 = norm_layer(channels*radix)
self.relu = ReLU(inplace=True)
self.fc1 = Conv2d(channels, inter_channels, 1, groups=self.cardinality)
if self.use_bn:
self.bn1 = norm_layer(inter_channels)
self.fc2 = Conv2d(inter_channels, channels*radix, 1, groups=self.cardinality)
if dropblock_prob > 0.0:
self.dropblock = DropBlock2D(dropblock_prob, 3)
self.rsoftmax = rSoftMax(radix, groups)
def forward(self, x):
x = self.conv(x)
if self.use_bn:
x = self.bn0(x)
if self.dropblock_prob > 0.0:
x = self.dropblock(x)
x = self.relu(x)
batch, rchannel = x.shape[:2]
if self.radix > 1:
splited = torch.split(x, rchannel//self.radix, dim=1)
gap = sum(splited)
else:
gap = x
gap = F.adaptive_avg_pool2d(gap, 1)
gap = self.fc1(gap)
if self.use_bn:
gap = self.bn1(gap)
gap = self.relu(gap)
atten = self.fc2(gap)
atten = self.rsoftmax(atten).view(batch, -1, 1, 1)
if self.radix > 1:
attens = torch.split(atten, rchannel//self.radix, dim=1)
out = sum([att*split for (att, split) in zip(attens, splited)])
else:
out = atten * x
return out.contiguous()
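# rSoftMax normalizes the attention logits across the radix dimension
# (the r-SoftMax of the paper); with radix == 1 it reduces to a sigmoid,
# recovering SE-style channel attention.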
class rSoftMax(nn.Module):
def __init__(self, radix, cardinality):
super().__init__()
self.radix = radix
self.cardinality = cardinality
def forward(self, x):
batch = x.size(0)
if self.radix > 1:
x = x.view(batch, self.cardinality, self.radix, -1).transpose(1, 2)
x = F.softmax(x, dim=1)
x = x.reshape(batch, -1)
else:
x = torch.sigmoid(x)
return x
2.6.2 resnet.py
"""ResNet variants"""
import math
import torch
import torch.nn as nn
from .splat import SplAtConv2d
__all__ = ['ResNet', 'Bottleneck']
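# Placeholder: DropBlock regularization is not implemented in this file;
# constructing it (i.e. setting dropblock_prob > 0) deliberately raises.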
class DropBlock2D(object):
def __init__(self, *args, **kwargs):
raise NotImplementedError
class GlobalAvgPool2d(nn.Module):
def __init__(self):
"""Global average pooling over the input's spatial dimensions"""
super(GlobalAvgPool2d, self).__init__()
def forward(self, inputs):
return nn.functional.adaptive_avg_pool2d(inputs, 1).view(inputs.size(0), -1)
class Bottleneck(nn.Module):
"""ResNet Bottleneck
"""
# pylint: disable=unused-argument
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None,
radix=1, cardinality=1, bottleneck_width=64,
avd=False, avd_first=False, dilation=1, is_first=False,
rectified_conv=False, rectify_avg=False,
norm_layer=None, dropblock_prob=0.0, last_gamma=False):
super(Bottleneck, self).__init__()
group_width = int(planes * (bottleneck_width / 64.)) * cardinality
self.conv1 = nn.Conv2d(inplanes, group_width, kernel_size=1, bias=False)
self.bn1 = norm_layer(group_width)
self.dropblock_prob = dropblock_prob
self.radix = radix
self.avd = avd and (stride > 1 or is_first)
self.avd_first = avd_first
if self.avd:
self.avd_layer = nn.AvgPool2d(3, stride, padding=1)
stride = 1
if dropblock_prob > 0.0:
self.dropblock1 = DropBlock2D(dropblock_prob, 3)
if radix == 1:
self.dropblock2 = DropBlock2D(dropblock_prob, 3)
self.dropblock3 = DropBlock2D(dropblock_prob, 3)
if radix >= 1:
self.conv2 = SplAtConv2d(
group_width, group_width, kernel_size=3,
stride=stride, padding=dilation,
dilation=dilation, groups=cardinality, bias=False,
radix=radix, rectify=rectified_conv,
rectify_avg=rectify_avg,
norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
elif rectified_conv:
from rfconv import RFConv2d
self.conv2 = RFConv2d(
group_width, group_width, kernel_size=3, stride=stride,
padding=dilation, dilation=dilation,
groups=cardinality, bias=False,
average_mode=rectify_avg)
self.bn2 = norm_layer(group_width)
else:
self.conv2 = nn.Conv2d(
group_width, group_width, kernel_size=3, stride=stride,
padding=dilation, dilation=dilation,
groups=cardinality, bias=False)
self.bn2 = norm_layer(group_width)
self.conv3 = nn.Conv2d(
group_width, planes * 4, kernel_size=1, bias=False)
self.bn3 = norm_layer(planes*4)
if last_gamma:
from torch.nn.init import zeros_
zeros_(self.bn3.weight)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.dilation = dilation
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
if self.dropblock_prob > 0.0:
out = self.dropblock1(out)
out = self.relu(out)
if self.avd and self.avd_first:
out = self.avd_layer(out)
out = self.conv2(out)
if self.radix == 0:
out = self.bn2(out)
if self.dropblock_prob > 0.0:
out = self.dropblock2(out)
out = self.relu(out)
if self.avd and not self.avd_first:
out = self.avd_layer(out)
out = self.conv3(out)
out = self.bn3(out)
if self.dropblock_prob > 0.0:
out = self.dropblock3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
"""ResNet Variants
Parameters
----------
block : Block
Class for the residual block. Options are BasicBlockV1, BottleneckV1.
layers : list of int
Numbers of layers in each block
classes : int, default 1000
Number of classification classes.
dilated : bool, default False
Applying dilation strategy to pretrained ResNet yielding a stride-8 model,
typically used in Semantic Segmentation.
norm_layer : object
    Normalization layer used in the backbone network (default: :class:`torch.nn.BatchNorm2d`;
    can be replaced, e.g. for synchronized cross-GPU BatchNormalization).
Reference:
- He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
- Yu, Fisher, and Vladlen Koltun. "Multi-scale context aggregation by dilated convolutions."
"""
# pylint: disable=unused-variable
def __init__(self, block, layers, radix=1, groups=1, bottleneck_width=64,
num_classes=1000, dilated=False, dilation=1,
deep_stem=False, stem_width=64, avg_down=False,
rectified_conv=False, rectify_avg=False,
avd=False, avd_first=False,
final_drop=0.0, dropblock_prob=0,
last_gamma=False, norm_layer=nn.BatchNorm2d):
self.cardinality = groups
self.bottleneck_width = bottleneck_width
# ResNet-D params
self.inplanes = stem_width*2 if deep_stem else 64
self.avg_down = avg_down
self.last_gamma = last_gamma
# ResNeSt params
self.radix = radix
self.avd = avd
self.avd_first = avd_first
super(ResNet, self).__init__()
self.rectified_conv = rectified_conv
self.rectify_avg = rectify_avg
if rectified_conv:
from rfconv import RFConv2d
conv_layer = RFConv2d
else:
conv_layer = nn.Conv2d
conv_kwargs = {'average_mode': rectify_avg} if rectified_conv else {}
if deep_stem:
self.conv1 = nn.Sequential(
conv_layer(3, stem_width, kernel_size=3, stride=2, padding=1, bias=False, **conv_kwargs),
norm_layer(stem_width),
nn.ReLU(inplace=True),
conv_layer(stem_width, stem_width, kernel_size=3, stride=1, padding=1, bias=False, **conv_kwargs),
norm_layer(stem_width),
nn.ReLU(inplace=True),
conv_layer(stem_width, stem_width*2, kernel_size=3, stride=1, padding=1, bias=False, **conv_kwargs),
)
else:
self.conv1 = conv_layer(3, 64, kernel_size=7, stride=2, padding=3,
bias=False, **conv_kwargs)
self.bn1 = norm_layer(self.inplanes)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0], norm_layer=norm_layer, is_first=False)
self.layer2 = self._make_layer(block, 128, layers[1], stride=2, norm_layer=norm_layer)
if dilated or dilation == 4:
self.layer3 = self._make_layer(block, 256, layers[2], stride=1,
dilation=2, norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1,
dilation=4, norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
elif dilation==2:
self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
dilation=1, norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1,
dilation=2, norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
else:
self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
norm_layer=norm_layer,
dropblock_prob=dropblock_prob)
self.avgpool = GlobalAvgPool2d()
self.drop = nn.Dropout(final_drop) if final_drop > 0.0 else None
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, norm_layer):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1, dilation=1, norm_layer=None,
dropblock_prob=0.0, is_first=True):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
down_layers = []
if self.avg_down:
if dilation == 1:
down_layers.append(nn.AvgPool2d(kernel_size=stride, stride=stride,
ceil_mode=True, count_include_pad=False))
else:
down_layers.append(nn.AvgPool2d(kernel_size=1, stride=1,
ceil_mode=True, count_include_pad=False))
down_layers.append(nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=1, bias=False))
else:
down_layers.append(nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False))
down_layers.append(norm_layer(planes * block.expansion))
downsample = nn.Sequential(*down_layers)
layers = []
if dilation == 1 or dilation == 2:
layers.append(block(self.inplanes, planes, stride, downsample=downsample,
radix=self.radix, cardinality=self.cardinality,
bottleneck_width=self.bottleneck_width,
avd=self.avd, avd_first=self.avd_first,
dilation=1, is_first=is_first, rectified_conv=self.rectified_conv,
rectify_avg=self.rectify_avg,
norm_layer=norm_layer, dropblock_prob=dropblock_prob,
last_gamma=self.last_gamma))
elif dilation == 4:
layers.append(block(self.inplanes, planes, stride, downsample=downsample,
radix=self.radix, cardinality=self.cardinality,
bottleneck_width=self.bottleneck_width,
avd=self.avd, avd_first=self.avd_first,
dilation=2, is_first=is_first, rectified_conv=self.rectified_conv,
rectify_avg=self.rectify_avg,
norm_layer=norm_layer, dropblock_prob=dropblock_prob,
last_gamma=self.last_gamma))
else:
raise RuntimeError("=> unknown dilation size: {}".format(dilation))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes,
radix=self.radix, cardinality=self.cardinality,
bottleneck_width=self.bottleneck_width,
avd=self.avd, avd_first=self.avd_first,
dilation=dilation, rectified_conv=self.rectified_conv,
rectify_avg=self.rectify_avg,
norm_layer=norm_layer, dropblock_prob=dropblock_prob,
last_gamma=self.last_gamma))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
if self.drop:
x = self.drop(x)
x = self.fc(x)
return x
2.6.3 ablation.py
"""ResNeSt ablation study models"""
import torch
from .resnet import ResNet, Bottleneck
__all__ = ['resnest50_fast_1s1x64d', 'resnest50_fast_2s1x64d', 'resnest50_fast_4s1x64d',
'resnest50_fast_1s2x40d', 'resnest50_fast_2s2x40d', 'resnest50_fast_4s2x40d',
'resnest50_fast_1s4x24d']
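# Naming convention: {radix}s{cardinality}x{bottleneck_width}d, e.g.
# resnest50_fast_2s1x64d builds ResNeSt-50-fast with radix=2, groups=1,
# bottleneck_width=64.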
_url_format = 'https://hangzh.s3.amazonaws.com/encoding/models/{}-{}.pth'
_model_sha256 = {name: checksum for checksum, name in [
('d8fbf808', 'resnest50_fast_1s1x64d'),
('44938639', 'resnest50_fast_2s1x64d'),
('f74f3fc3', 'resnest50_fast_4s1x64d'),
('32830b84', 'resnest50_fast_1s2x40d'),
('9d126481', 'resnest50_fast_2s2x40d'),
('41d14ed0', 'resnest50_fast_4s2x40d'),
('d4a4f76f', 'resnest50_fast_1s4x24d'),
]}
def short_hash(name):
if name not in _model_sha256:
raise ValueError('Pretrained model for {name} is not available.'.format(name=name))
return _model_sha256[name][:8]
resnest_model_urls = {name: _url_format.format(name, short_hash(name)) for
name in _model_sha256.keys()
}
def resnest50_fast_1s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=1, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_1s1x64d'], progress=True, check_hash=True))
return model
def resnest50_fast_2s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=2, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_2s1x64d'], progress=True, check_hash=True))
return model
def resnest50_fast_4s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=4, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_4s1x64d'], progress=True, check_hash=True))
return model
def resnest50_fast_1s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=1, groups=2, bottleneck_width=40,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_1s2x40d'], progress=True, check_hash=True))
return model
def resnest50_fast_2s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=2, groups=2, bottleneck_width=40,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_2s2x40d'], progress=True, check_hash=True))
return model
def resnest50_fast_4s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=4, groups=2, bottleneck_width=40,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_4s2x40d'], progress=True, check_hash=True))
return model
def resnest50_fast_1s4x24d(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=1, groups=4, bottleneck_width=24,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=True, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50_fast_1s4x24d'], progress=True, check_hash=True))
return model
2.6.4 resnest.py
"""ResNeSt models"""
import torch
from .resnet import ResNet, Bottleneck
__all__ = ['resnest50', 'resnest101', 'resnest200', 'resnest269']
_url_format = 'https://hangzh.s3.amazonaws.com/encoding/models/{}-{}.pth'
_model_sha256 = {name: checksum for checksum, name in [
('528c19ca', 'resnest50'),
('22405ba7', 'resnest101'),
('75117900', 'resnest200'),
('0cc87c48', 'resnest269'),
]}
def short_hash(name):
if name not in _model_sha256:
raise ValueError('Pretrained model for {name} is not available.'.format(name=name))
return _model_sha256[name][:8]
resnest_model_urls = {name: _url_format.format(name, short_hash(name)) for
name in _model_sha256.keys()
}
def resnest50(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 6, 3],
radix=2, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=32, avg_down=True,
avd=True, avd_first=False, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest50'], progress=True, check_hash=True))
return model
def resnest101(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 4, 23, 3],
radix=2, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=64, avg_down=True,
avd=True, avd_first=False, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest101'], progress=True, check_hash=True))
return model
def resnest200(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 24, 36, 3],
radix=2, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=64, avg_down=True,
avd=True, avd_first=False, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest200'], progress=True, check_hash=True))
return model
def resnest269(pretrained=False, root='~/.encoding/models', **kwargs):
model = ResNet(Bottleneck, [3, 30, 48, 8],
radix=2, groups=1, bottleneck_width=64,
deep_stem=True, stem_width=64, avg_down=True,
avd=True, avd_first=False, **kwargs)
if pretrained:
model.load_state_dict(torch.hub.load_state_dict_from_url(
resnest_model_urls['resnest269'], progress=True, check_hash=True))
return model
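A quick usage sketch (the relative imports above assume these files live in one package; the package path used here is hypothetical):

import torch
from resnest_package.resnest import resnest50   # hypothetical package path

model = resnest50(pretrained=False)
model.eval()
x = torch.randn(1, 3, 224, 224)                 # a dummy ImageNet-sized batch
with torch.no_grad():
    logits = model(x)                           # -> torch.Size([1, 1000])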
References:
Code (PyTorch and MXNet versions available):
https://github.com/zhanghang1989/ResNeSt
https://github.com/callmefish/ResNeSt-simplify-pytorch
Paper:
https://hangzhang.org/files/resnest.pdf