
EfficientNet and Compound Scaling Theory, Explained in Detail (MATLAB)

1. The EfficientNet Network and Compound Model Scaling

1.1 Overview of EfficientNet

1.1.1 Background, Motivation, and Development

        EfficientNet is an efficient convolutional neural network (CNN) proposed in 2019 by Tan et al. of Google. Its design goal is to improve network performance while reducing computational cost, which is where the name "EfficientNet" comes from.

        To reach higher model performance with fewer parameters and less computation, Tan et al. first used Neural Architecture Search (NAS) to obtain the model's baseline network, called EfficientNet-b0.

Neural Architecture Search (NAS): the idea behind NAS is similar to hyperparameter optimization for machine learning models (e.g., grid search or particle swarm optimization). Both define a parameter space to be searched, and both are, at heart, optimization algorithms.

  • The NAS search space generally contains architectural parameters: these architectures are described by layer types, layer counts, connection patterns, and so on.
  • The search space of hyperparameter optimization is instead the set of model hyperparameters, which define the model's behavior and training process.

The NAS method the authors applied is "multi-objective neural architecture search"; the original paper is: MnasNet: Platform-Aware Neural Architecture Search for Mobile

        Starting from EfficientNet-b0, they ran a grid search over combinations of scaling parameters, coordinating the "best ratio" between input image resolution γ, network depth α, and width β during scaling, and from this proposed a method that scales all three factors jointly: the Compound Scaling theory. Applying compound scaling to the baseline model (EfficientNet-b0) yields EfficientNet-b1 through b7, giving eight EfficientNet networks of different sizes in total (b0–b7). The resulting models reached state-of-the-art results on a range of datasets at the time.
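Concretely, compound scaling drives all three dimensions from a single compound coefficient φ: depth is multiplied by α^φ, width by β^φ, and resolution by γ^φ, under the constraint α·β²·γ² ≈ 2 so that each unit increase of φ roughly doubles the FLOPS (which scale with depth × width² × resolution²). A small sketch using the grid-searched base ratios from the paper (α = 1.2, β = 1.1, γ = 1.15):

```python
# Compound scaling sketch: derive depth/width/resolution multipliers from a
# single compound coefficient phi, with the base ratios from the paper.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution base ratios

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# alpha * beta^2 * gamma^2 is close to 2, so FLOPS roughly double per unit phi.
print(f"FLOPS growth per unit phi: {ALPHA * BETA**2 * GAMMA**2:.3f}")

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

(The exact product is about 1.92 rather than exactly 2; the paper only requires it to be approximately 2.)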

Note: in Tan et al.'s work, the scaling coefficients (α, β, γ) were determined by grid search, not by NAS. Many bloggers get this point wrong.

Original paper: Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks[C]//International Conference on Machine Learning. PMLR, 2019: 6105-6114.

Paper link: [1905.11946] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

1.1.2 Model Performance

The network performs exceptionally well on ImageNet, striking a good balance between accuracy and computational efficiency. EfficientNet adopts the Mobile Inverted Bottleneck Convolution (MBConv) block as its basic module and scales it along multiple dimensions, giving the network faster inference and a smaller model size.

Benchmark-wise, the lightweight EfficientNet-B0 (76.3%, 5.3M) beats ResNet-50 (75.3%, 25M), MobileNet V3-Large (75.2%, 5.4M), and ShuffleNet V2 (75.4%), but trails MobileViT V3-XS (76.7%, 2.5M) and MobileViT-S (78.4%, 5.6M). (It seems hybrid architectures are the way forward.)

The mid-sized EfficientNet-B4 (82.4%, 19M) beats ResNet-200 (81.8%), SENet-101 (81.4%, 49.2M), and DeepViT-L (82.2%, 55M).

The larger EfficientNet-B7 (84.4%, 66M) is on par with ResNeSt-269 (84.5%, 111M) and AMD (ViT-B/16) (84.6%, 87M).

Judging from these results, swapping the baseline of a traditional model or method for an EfficientNet baseline generally brings an improvement. Overall verdict: weaker than hybrid models, stronger than most ConvNets.

(Comparison figure)

ImageNet model leaderboard: ImageNet Benchmark (Image Classification) | Papers With Code

1.1.3 Follow-up Research on EfficientNet

After EfficientNet's success, researchers continued to refine the model and the compound-scaling approach. Representative follow-up works include:

2019: Noisy Student (EfficientNet)

Paper: Self-training with Noisy Student improves ImageNet classification

Noisy Student is a semi-supervised learning method that extends the ideas of self-training and distillation; its core is using unlabeled data to boost model performance. Xie et al. combined the Noisy Student method with the EfficientNet model, pushing its ImageNet accuracy a step further. The final Noisy Student (EfficientNet-L2) reached (88.4%, 480M).

The EfficientNet-L2 here was scaled up from EfficientNet-b0 by Xie et al. and is far larger than even b7. Specifically, relative to B0 its scaling multipliers are width (Width) 4.3 and depth (Depth) 5.3, with a training resolution (Train Res.) of 800×800; the corresponding values for EfficientNet-B7 are 2.0, 3.1, and 600×600.

2020: FixEfficientNet

Paper: Fixing the train-test resolution discrepancy: FixEfficientNet

Touvron et al. proposed the FixRes method, which jointly optimizes the resolutions and scale choices used at training and test time while keeping the same region of classification (RoC) sampling. Combining it with EfficientNet gives FixEfficientNet; FixEfficientNet-L2 ultimately scored 88.5% with 480M parameters.

2020: Meta Pseudo Labels (EfficientNet-L2)

Paper: Meta Pseudo Labels

Pham et al. proposed Meta Pseudo Labels, a semi-supervised learning method; combined with EfficientNet-L2, it reached 90.2% on ImageNet with 480M parameters. Before ViT took off it was essentially unbeatable, and it still sits near the top of the ImageNet leaderboard.

2021: EfficientNetV2

Paper: EfficientNetV2: Smaller Models and Faster Training

Building on EfficientNet's success, Tan et al. proposed EfficientNetV2, a new family of smaller and faster convolutional neural networks. Through training-aware NAS and a refined scaling strategy, EfficientNetV2 significantly outperforms earlier models in training speed and parameter efficiency.

1.2 The EfficientNet Architecture

1.2.1 EfficientNet's Basic Unit: the MBConv Module

By the standards of its day, EfficientNet is something of a patchwork: since the network was found by NAS, its structure blends design elements from MobileNet, ResNet, and other models. Its basic building block is the MBConv module (Mobile Inverted Bottleneck Convolution), which can be seen as a descendant of MobileNet's Inverted Residual Block. It has the following characteristics:

  1. Inverted bottleneck: an MBConv module first expands the input feature map with a 1x1 convolution, increasing the channel count; it then extracts spatial features with a depthwise convolution (Depthwise Convolution); finally, another 1x1 convolution projects back down to the original channel count or close to it. This forms an inverted bottleneck: channels are expanded first and compressed afterwards.
  2. Residual connection: the MBConv module contains a residual connection (Residual Connection) that adds the module's input to the output of the operations above, promoting gradient flow, speeding up training, and helping counter the degradation problem in deep networks.
  3. Squeeze-and-Excitation (SE) module: the classic EfficientNet variants include an SE module. It first applies global average pooling (squeeze), then passes the result through two fully connected layers to reweight the channels, recalibrating the feature channels: important features are amplified and unimportant ones suppressed.
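As a minimal illustration of the SE recalibration step, here is a NumPy sketch (not the library implementation; the weights are random placeholders, and the reduction to 1/4 of the input channels and the Swish activation follow EfficientNet's convention):

```python
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))  # x * sigmoid(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (H, W, C).

    w1: (C, C//4) squeeze weights; w2: (C//4, C) excite weights.
    """
    s = x.mean(axis=(0, 1))   # squeeze: global average pool -> (C,)
    z = swish(s @ w1)         # reduce channels (Swish, as in EfficientNet)
    gate = sigmoid(z @ w2)    # per-channel weights in (0, 1)
    return x * gate           # recalibrate: rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 16)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (8, 8, 16): same shape, channels reweighted
```

Note that the output shape is unchanged; each channel is simply multiplied by a scalar gate between 0 and 1.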

The MBConv module comes in two main variants, MBConv1 and MBConv6.

Structure of MBConv1

The figures are all my own, so it is perfectly normal for them to differ from other people's; where they differ, I suggest taking mine as the reference. Yes, many bloggers draw or explain this wrong.

MBConv1 is the structure used in EfficientNet's shallow layers:

MBConv1 consists of an ordinary convolution, a depthwise convolution (also called grouped convolution), batch normalization (Batch-Normalization, BN) layers, the Swish activation, an SE module, an addition layer, and a multiplication layer. Suppose the input has size M×M×(N/6) (spatial-spatial-channel, S-S-C). The first Conv expands the channel count, typically to six times that of the input. After MBConv1, the spatial size is downsampled to 1/2 of the input's, while the output channel count becomes 3/2 of the original.

Notes:

  • In EfficientNet, the activation inside the SE module is Swish, not ReLU or anything else.
  • In MBConv1, the convolution after the Multiplication Layer downsamples the channels, compressing them to 1/4 of the original; the SE module also contains a downsampling step, to 1/4 of its input channels.
  • The Depthwise-Conv kernel size k×k varies; 3×3 and 5×5 are the common choices. See the architecture table for the exact values.
Structure of MBConv6

MBConv6 is the structure used in EfficientNet's deeper layers. The figure below shows MBConv6 with 1, 2, and 3 layers; 4- and 5-layer MBConv6 follow the same pattern.

It essentially nests the MBConv1 structure, connected via Skip-Connections. Relative to its input (M×M×C), the spatial size is downsampled to 1/2 and the channel count upsampled to 3/2. When MBConv6 has a single repeat (Layers = 1), its structure resembles MBConv1 with an extra 1×1 Conv layer in front. In the two-, three-, and more-layer variants, each constituent MBConv6 layer has this single-layer structure; they are not recursively nested. Note in particular that MBConv6 does not always downsample spatially; this depends on the specifics of the model design.

Note (important): within the same stage, the second and subsequent MBConv6 layers use a Depthwise-Convolution with stride 1, not 2, and the number of filters in each convolution matches that of the stage's first MBConv6 layer, so that the output shapes line up.
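The shape bookkeeping behind that note can be checked with the standard convolution output-size formula (a small sketch; the sizes below are illustrative examples, not values from the paper):

```python
def conv_out(n, k, stride, pad):
    """Spatial output size of a convolution: floor((n + 2*pad - k)/stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

# First MBConv6 of a stage: depthwise 3x3, stride 2 -> spatial size halves.
n = conv_out(56, k=3, stride=2, pad=1)
print(n)  # 28

# Subsequent MBConv6 layers in the same stage: stride 1 with "same" padding
# preserves the spatial size, so the residual additions line up.
assert conv_out(n, k=3, stride=1, pad=1) == n
```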

Building EfficientNet-b0 in MATLAB

Given the structures of MBConv1 and MBConv6 and the parameter table provided by Tan et al., it is straightforward to build an EfficientNet-style network.

Take EfficientNet-b0:

EfficientNet-b0 was found via NAS, with the optimization objective ACC(m) \times [FLOPS(m)/T]^w, i.e., accuracy weighted by a FLOPS penalty, where:

  • m denotes the current model.
  • T is a target FLOPS value, a human-chosen constant threshold used to trade off the model's computational cost against its accuracy. (Despite the acronym, FLOPS here counts a model's total floating-point operations, not operations per second.)
  • w = -0.07 is a hyperparameter that balances accuracy against FLOPS. Specifically, during optimization it sets the relative importance of accuracy gains versus added computation: with a negative w (such as -0.07), the search favors models that are computationally cheaper (lower FLOPS) while still relatively accurate.
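The trade-off this objective encodes can be seen numerically (a sketch with made-up accuracy and FLOPS numbers, not values from the paper):

```python
W = -0.07  # exponent trading accuracy against FLOPS

def nas_objective(acc, flops, target_flops):
    """ACC(m) * (FLOPS(m)/T)^w, the multi-objective reward from MnasNet."""
    return acc * (flops / target_flops) ** W

T = 400e6  # hypothetical FLOPS target
# A slightly less accurate but much cheaper model can score higher:
small = nas_objective(acc=0.762, flops=390e6, target_flops=T)
big = nas_objective(acc=0.768, flops=800e6, target_flops=T)
print(f"small: {small:.4f}  big: {big:.4f}")  # small wins despite lower accuracy
```

At FLOPS exactly equal to the target T, the penalty factor is 1 and the objective reduces to plain accuracy; doubling FLOPS multiplies the score by 2^(-0.07) ≈ 0.953.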

EfficientNet consists of nine "Stages". Stage 2 is MBConv1; Stages 3-7 are multi-layer MBConv6 ("#Layers" gives the layer count); Stage 8 is a single-layer MBConv6. Notice that in each stage the spatial size is downsampled to 1/2 and the channels upsampled to 3/2 of the previous stage's, with one caveat: in Stage 7, the 4-layer MBConv6 does not downsample spatially.

In EfficientNet-b0 the channel upsampling does not strictly follow the 3/2 ratio either; see Stage 4 and Stage 5, for instance. This is because EfficientNet's parameters were found by NAS rather than designed by hand, so they can be counterintuitive, though the upsampling factor generally stays between 3/2 and 2. This is not a strict requirement; a little more or less has no noticeable effect on model performance.

OK, with all that in place, here is EfficientNet-b0 (this reproduces the structure-generation code for MATLAB's official pretrained EfficientNet-b0 model):


net = dlnetwork;
tempNet = [
    imageInputLayer([224 224 3],"Name","ImageInput","Normalization","zscore")
    convolution2dLayer([3 3],32,"Name","efficientnet-b0|model|stem|conv2d|Conv2D","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|stem|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|stem|MulLayer")
    groupedConvolution2dLayer([3 3],1,32,"Name","efficientnet-b0|model|blocks_0|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|MulLayer");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_0|se|GlobAvgPool")
    convolution2dLayer([1 1],8,"Name","Conv__301")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer")
    convolution2dLayer([1 1],32,"Name","Conv__304")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer_1")
    convolution2dLayer([1 1],16,"Name","efficientnet-b0|model|blocks_0|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)
    convolution2dLayer([1 1],96,"Name","efficientnet-b0|model|blocks_1|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer")
    groupedConvolution2dLayer([3 3],1,96,"Name","efficientnet-b0|model|blocks_1|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_1|se|GlobAvgPool")
    convolution2dLayer([1 1],4,"Name","Conv__309")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer")
    convolution2dLayer([1 1],96,"Name","Conv__312")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer_1")
    convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_1|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_2|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer")
    groupedConvolution2dLayer([3 3],1,144,"Name","efficientnet-b0|model|blocks_2|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_2|se|GlobAvgPool")
    convolution2dLayer([1 1],6,"Name","Conv__319")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer")
    convolution2dLayer([1 1],144,"Name","Conv__322")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer_1")
    convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_2|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_2|Add")
    convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_3|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer")
    groupedConvolution2dLayer([5 5],1,144,"Name","efficientnet-b0|model|blocks_3|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_3|se|GlobAvgPool")
    convolution2dLayer([1 1],6,"Name","Conv__327")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer")
    convolution2dLayer([1 1],144,"Name","Conv__330")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer_1")
    convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_3|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_4|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer")
    groupedConvolution2dLayer([5 5],1,240,"Name","efficientnet-b0|model|blocks_4|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_4|se|GlobAvgPool")
    convolution2dLayer([1 1],10,"Name","Conv__337")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer")
    convolution2dLayer([1 1],240,"Name","Conv__340")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer_1")
    convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_4|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_4|Add")
    convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_5|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer")
    groupedConvolution2dLayer([3 3],1,240,"Name","efficientnet-b0|model|blocks_5|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_5|se|GlobAvgPool")
    convolution2dLayer([1 1],10,"Name","Conv__345")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer")
    convolution2dLayer([1 1],240,"Name","Conv__348")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_5|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_6|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer")
    groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_6|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_6|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__355")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__358")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_6|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_6|Add");
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_7|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer")
    groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_7|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_7|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__365")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__368")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_7|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_7|Add")
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_8|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer")
    groupedConvolution2dLayer([5 5],1,480,"Name","efficientnet-b0|model|blocks_8|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_8|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__373")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__376")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_8|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_9|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_9|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_9|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__383")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__386")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_9|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_9|Add");
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_10|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_10|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_10|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__393")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__396")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_10|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_10|Add")
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_11|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_11|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_11|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__401")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__404")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_11|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_12|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_12|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_12|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__411")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__414")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_12|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_12|Add");
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_13|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_13|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_13|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__421")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__424")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_13|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_13|Add");
net = addLayers(net,tempNet);
tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_14|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_14|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = addLayers(net,tempNet);
tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer_1");
net = addLayers(net,tempNet);
tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_14|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__431")];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__434")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);
tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_14|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_14|Add")
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_15|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);
tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer")groupedConvolution2dLayer([3 3],1,1152,"Name","efficientnet-b0|model|blocks_15|depthwise_conv2d|depthwise","Padding",[1 1 1 1])batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = addLayers(net,tempNet);tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer_1");
net = addLayers(net,tempNet);tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_15|se|GlobAvgPool")convolution2dLayer([1 1],48,"Name","Conv__439")];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer")convolution2dLayer([1 1],1152,"Name","Conv__442")sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer_1")convolution2dLayer([1 1],320,"Name","efficientnet-b0|model|blocks_15|conv2d_1|Conv2D")batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)convolution2dLayer([1 1],1280,"Name","efficientnet-b0|model|head|conv2d|Conv2D")batchNormalizationLayer("Name","efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|head|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|head|MulLayer")globalAveragePooling2dLayer("Name","efficientnet-b0|model|head|global_average_pooling2d|GlobAvgPool")fullyConnectedLayer(1000,"Name","efficientnet-b0|model|head|dense|MatMul")softmaxLayer("Name","Softmax")];
net = addLayers(net,tempNet);% clean up helper variable
clear tempNet;Connect Layer Branches
Connect all the branches of the network to create the network graph.
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|stem|SigmoidLayer","efficientnet-b0|model|stem|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|SigmoidLayer","efficientnet-b0|model|blocks_0|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer","efficientnet-b0|model|blocks_0|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer_1","efficientnet-b0|model|blocks_0|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer","efficientnet-b0|model|blocks_1|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer_1","efficientnet-b0|model|blocks_1|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer","efficientnet-b0|model|blocks_1|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer","efficientnet-b0|model|blocks_2|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer_1","efficientnet-b0|model|blocks_2|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer","efficientnet-b0|model|blocks_2|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer","efficientnet-b0|model|blocks_3|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer_1","efficientnet-b0|model|blocks_3|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer","efficientnet-b0|model|blocks_3|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer","efficientnet-b0|model|blocks_4|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer_1","efficientnet-b0|model|blocks_4|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer","efficientnet-b0|model|blocks_4|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer","efficientnet-b0|model|blocks_5|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer_1","efficientnet-b0|model|blocks_5|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer","efficientnet-b0|model|blocks_5|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer","efficientnet-b0|model|blocks_6|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer_1","efficientnet-b0|model|blocks_6|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer","efficientnet-b0|model|blocks_6|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer","efficientnet-b0|model|blocks_7|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer_1","efficientnet-b0|model|blocks_7|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer","efficientnet-b0|model|blocks_7|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_7|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer","efficientnet-b0|model|blocks_8|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer_1","efficientnet-b0|model|blocks_8|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer","efficientnet-b0|model|blocks_8|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer","efficientnet-b0|model|blocks_9|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer_1","efficientnet-b0|model|blocks_9|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer","efficientnet-b0|model|blocks_9|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer","efficientnet-b0|model|blocks_10|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer_1","efficientnet-b0|model|blocks_10|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer","efficientnet-b0|model|blocks_10|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_10|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer","efficientnet-b0|model|blocks_11|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer_1","efficientnet-b0|model|blocks_11|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer","efficientnet-b0|model|blocks_11|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer","efficientnet-b0|model|blocks_12|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer_1","efficientnet-b0|model|blocks_12|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer","efficientnet-b0|model|blocks_12|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer","efficientnet-b0|model|blocks_13|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer_1","efficientnet-b0|model|blocks_13|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer","efficientnet-b0|model|blocks_13|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_13|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer","efficientnet-b0|model|blocks_14|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer_1","efficientnet-b0|model|blocks_14|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer","efficientnet-b0|model|blocks_14|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_14|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer","efficientnet-b0|model|blocks_15|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer_1","efficientnet-b0|model|blocks_15|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer","efficientnet-b0|model|blocks_15|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|head|SigmoidLayer","efficientnet-b0|model|head|MulLayer/in2");
net = initialize(net);
plot(net);

1.3 Compound Scaling Theory and the Scaled EfficientNet Models (B1~B7)

1.3.1 Problems with Scaling a Single Dimension

        There are many ways to scale a ConvNet under different resource constraints: ResNet can be scaled down (e.g., ResNet-18) or up (e.g., ResNet-200) by adjusting network depth (#layers), while WideResNet and MobileNets can be scaled by network width (#channels). It is also widely recognized that larger input image sizes help accuracy, at the cost of higher FLOPS. Although prior work has shown that both depth and width matter for a ConvNet's expressive power, how to scale a ConvNet effectively for better efficiency and accuracy remains an open question.

        Intuitively, for higher-resolution images we should increase network depth, so that larger receptive fields can capture similar features that now span more pixels. Correspondingly, at higher resolution we should also increase network width, in order to capture the more fine-grained patterns made visible by the additional pixels.

(Figure from Tan et al.: compound scaling must jointly trade off multiple model dimensions, including width, depth, and resolution.)

The main difficulty is that the optimal d, w, and r depend on one another, and their values change under different resource constraints. Because of this, conventional methods mostly scale ConvNets along a single dimension: depth, width, or resolution. Studies show that scaling up any one of width, depth, or resolution improves accuracy, but the accuracy gain diminishes for larger models. In other words, endlessly scaling a single dimension is not an efficient use of compute; we need a more principled method to guide model scaling.

1.3.2 Constructing the Compound Scaling Method

We can describe the above optimization problem through mathematical modeling:

Under a fixed compute constraint (allowing the total FLOPS to roughly double), the network's depth, width, and input resolution are tied to the compute budget as follows (Eq. 1):

Depth: d = \alpha^{\phi}

Width: w = \beta^{\phi}

Resolution: r = \gamma^{\phi}

subject\ to:\ \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2 \quad (\alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1)

Here the compound coefficient \phi is user-specified (set to 1 during the search), while \alpha, \beta, \gamma are constants that determine how the additional resources are allocated to network depth, width, and resolution, respectively.
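To make Eq. 1 concrete, here is a minimal Python sketch (illustrative only, not from the paper or its released code) that computes the three multipliers for a given \phi and checks the FLOPS constraint; the constants are the EfficientNet-B0 values reported later in this section.

```python
# Minimal sketch of Eq. 1 (illustrative only, not the paper's code).
# alpha, beta, gamma are the constants later reported for EfficientNet-B0.
alpha, beta, gamma = 1.2, 1.1, 1.15

def compound_multipliers(phi):
    """Return the (depth, width, resolution) multipliers for a given phi."""
    d = alpha ** phi    # depth:      d = alpha^phi
    w = beta ** phi     # width:      w = beta^phi
    r = gamma ** phi    # resolution: r = gamma^phi
    return d, w, r

# FLOPS scale roughly with d (more layers), w^2 (channels in and out of each
# conv), and r^2 (spatial positions), so the constraint
# alpha * beta^2 * gamma^2 ~ 2 means total FLOPS grow by about 2^phi.
flops_factor = alpha * beta ** 2 * gamma ** 2
print(flops_factor)  # ~1.92, close to the target of 2
```

The constraint explains why \beta and \gamma are squared while \alpha is not: doubling depth doubles FLOPS, but doubling width or resolution quadruples them.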

The optimization problem itself can then be written as (Eq. 2):

\max_{d,w,r}\ Accuracy(Net(d,w,r))

subject\ to:\ Net(d,w,r) = \bigodot_{i=1\ldots s} \widehat{F}_{i}^{\,d\cdot\widehat{L}_{i}}\big(X_{\langle r\cdot\widehat{H}_{i},\ r\cdot\widehat{W}_{i},\ w\cdot\widehat{C}_{i}\rangle}\big)

Memory(Net) \le target\ memory

FLOPS(Net) \le target\ FLOPS

\bigodot_{i=1\ldots s}\widehat{F}_{i} denotes the composition of the stage operators in EfficientNet-B0. Here d, w, r are the multipliers applied to the scaled network's depth, width, and resolution; \widehat{F}_i, \widehat{L}_i, \widehat{H}_i, \widehat{W}_i, \widehat{C}_i are parameters predefined in the baseline network, with the hat (\widehat) marking that they refer specifically to EfficientNet-B0.

Where:

  • \widehat{F}_i is the operator of stage i in the baseline network, i.e. its layer structure.
  • \widehat{L}_i is the number of layer repetitions in stage i of the baseline network, tied to the depth multiplier d.
  • \widehat{H}_i is the height of the input tensor of stage i in the baseline network, tied to the resolution multiplier r.
  • \widehat{W}_i is the width of the input tensor of stage i in the baseline network, tied to the resolution multiplier r.
  • \widehat{C}_i is the number of channels of the input tensor of stage i in the baseline network, tied to the width multiplier w.
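In practice the scaled values d·\widehat{L}_i and w·\widehat{C}_i must be rounded back to integers. The Python sketch below follows the rounding convention used in common EfficientNet implementations (repeats rounded up, channel counts snapped to a multiple of 8); the function names and the divisor of 8 are assumptions for illustration, not taken from the paper itself.

```python
import math

# Sketch of how a baseline stage config (L_i repeats, C_i channels) is scaled
# by the multipliers d and w. The rounding rules are an assumed convention
# (repeats rounded up; channels snapped to a multiple of 8), not paper text.

def round_repeats(repeats, d):
    # Depth: round up so every stage keeps at least d * L_i layers.
    return int(math.ceil(d * repeats))

def round_filters(filters, w, divisor=8):
    # Width: scale channels, then snap to the nearest multiple of `divisor`.
    filters *= w
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:   # never round down by more than 10%
        new_filters += divisor
    return new_filters

# Hypothetical baseline stage with L_i = 2 and C_i = 40,
# scaled with d = 1.2 and w = 1.1 (the phi = 1 multipliers):
print(round_repeats(2, 1.2), round_filters(40, 1.1))  # -> 3 48
```

Snapping channel counts to a multiple of 8 is a hardware-friendliness choice (vectorized kernels prefer such widths), which is why scaled models rarely use the raw value w·\widehat{C}_i directly.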

Starting from the baseline model EfficientNet-B0, the compound scaling method extends it in two steps:

Step 1: first fix \phi = 1 (assuming roughly twice as many resources are available) and perform a small grid search over \alpha, \beta, \gamma based on Eq. 1 and Eq. 2. In particular, under the constraint in Eq. 1, Tan et al. found that the best values for EfficientNet-B0 are \alpha = 1.2, \beta = 1.1, and \gamma = 1.15.

Step 2: then fix \alpha, \beta, \gamma as constants (1.2, 1.1, 1.15) and use Eq. 1 to scale up the baseline network with different values of \phi, yielding EfficientNet-B1 through B7.

Note:

  • The values \alpha = 1.2, \beta = 1.1, \gamma = 1.15 are the grid-search optimum for EfficientNet-B0 specifically; they are not universal scaling constants, and other models would yield different values.
  • These values were tuned on ImageNet; they are empirical optima rather than theoretically derived ones, so their transferability to other datasets or application scenarios is limited, and they may need re-tuning in a different setting.
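With the constants fixed, larger values of \phi generate the rest of the family. The following Python sketch (an illustration of Eq. 1 only; the released B1~B7 configurations were additionally rounded and hand-adjusted, so they do not match these numbers exactly) sweeps \phi and prints the resulting multipliers and the implied FLOPS growth:

```python
# Sweep phi with the fixed constants (alpha=1.2, beta=1.1, gamma=1.15) to see
# how the Eq. 1 multipliers and the implied FLOPS cost grow. Illustration of
# the scaling rule only; the official B1~B7 configs are further adjusted.
alpha, beta, gamma = 1.2, 1.1, 1.15

for phi in range(1, 8):  # phi = 1..7, roughly corresponding to B1..B7
    d, w, r = alpha ** phi, beta ** phi, gamma ** phi
    flops = (alpha * beta ** 2 * gamma ** 2) ** phi  # ~2^phi
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
          f"resolution x{r:.2f}, FLOPS x{flops:.1f}")
```

The sweep makes the cost trade-off visible: each unit increase of \phi roughly doubles FLOPS while all three dimensions grow geometrically in their fixed ratio.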

1.3.3 Scaling EfficientNet-B0 to B1~B7

(This section is still being updated; expected to be completed on the 14th.)

