
YOLOv11 Improvement Strategies [Attention Mechanisms] | 2024 SCSA: Synergistic Spatial and Channel Attention Module

1. Introduction

This article documents an improvement to YOLOv11 object detection based on the SCSA attention module. Existing attention methods do not fully exploit the potential of spatial-channel synergy and make insufficient use of multi-semantic information to guide features and mitigate semantic discrepancies. SCSA builds a spatial-channel synergy mechanism in which spatial attention guides channel attention toward more comprehensive learning, while channel attention modulates richer spatial-specific patterns across multiple semantic levels.

Contents

  • 1. Introduction
  • 2. How SCSA Works
    • 2.1 Principle
    • 2.2 Advantages
  • 3. SCSA Implementation Code
  • 4. Improved Modules
    • 4.1 Improvement 1
    • 4.2 Improvement 2 ⭐
  • 5. Integration Steps
    • 5.1 Modify ultralytics/nn/modules/block.py
    • 5.2 Modify ultralytics/nn/modules/__init__.py
    • 5.3 Modify ultralytics/nn/tasks.py
  • 6. YAML Model Files
    • 6.1 Improved Model, Version 1
    • 6.2 Improved Model, Version 2 ⭐
  • 7. Verifying the Result


2. How SCSA Works

SCSA: Exploring Synergistic Effects of Spatial and Channel Attention

SCSA (Spatial and Channel Synergistic Attention) is a novel, plug-and-play spatial and channel synergistic attention mechanism. Its design principle and advantages are as follows.

2.1 Principle

  • Shared Multi-Semantic Spatial Attention (SMSA)
    • Spatial and channel decomposition: the input X is decomposed along the height and width dimensions with global average pooling, producing two single-direction 1D sequences. The channels are then divided into K independent sub-features, each with C/K channels, which allows multi-semantic spatial information to be extracted efficiently (a minimal shape sketch follows this list).
    • Lightweight convolution strategy: depth-wise 1D convolutions with kernel sizes 3, 5, 7 and 9 are applied to the four sub-features to capture different semantic spatial structures, and shared convolutions align them, compensating for the limited receptive field caused by decomposing the features and applying 1D convolutions. Group Normalization normalizes the different semantic sub-features, and a Sigmoid activation finally produces the spatial attention map.
  • Progressive Channel-wise Self-Attention (PCSA)
    • Inspired by how ViT uses MHSA to model the similarity between spatial tokens, PCSA computes inter-channel similarity using the spatial prior modulated by SMSA.
    • A progressive compression approach preserves and exploits the multi-semantic spatial information extracted by SMSA while reducing the computational cost of MHSA.
    • Concretely, the implementation pools the features, maps them to queries, keys and values, and performs the attention computation.
  • Synergistic effect: SMSA and PCSA are integrated through a simple serial connection. Spatial attention extracts multi-semantic spatial information from each feature and supplies a precise spatial prior for the channel-attention computation, while channel attention uses the overall feature map X to refine the semantic understanding of local sub-features, mitigating the semantic discrepancies introduced by SMSA's multi-scale convolutions. No channel compression is applied, which prevents the loss of key features.
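
To make the SMSA data flow concrete, below is a minimal, self-contained shape trace in PyTorch with illustrative sizes (B=2, C=128, H=W=32, K=4); the tensor names loosely mirror the full implementation in Section 3, and PCSA then operates on the output of this stage.

import torch
import torch.nn as nn

# Illustrative shape trace (sizes are arbitrary): B=2, C=128, H=W=32, K=4 sub-features.
x = torch.randn(2, 128, 32, 32)

# SMSA step 1: decompose along H and W with global average pooling.
x_h = x.mean(dim=3)   # (B, C, H) -> one 1D sequence per channel along height
x_w = x.mean(dim=2)   # (B, C, W) -> one 1D sequence per channel along width

# SMSA step 2: split the channels into K=4 sub-features of C/K channels each.
subs_h = torch.split(x_h, 128 // 4, dim=1)   # 4 tensors of shape (B, 32, H)

# SMSA step 3: depth-wise 1D convolutions with kernel sizes 3/5/7/9, one per sub-feature.
dwconvs = [nn.Conv1d(32, 32, k, padding=k // 2, groups=32) for k in (3, 5, 7, 9)]
feats_h = [conv(s) for conv, s in zip(dwconvs, subs_h)]      # each still (B, 32, H)

# GroupNorm + Sigmoid over the re-concatenated channels gives the height attention,
# broadcast back to (B, C, H, 1); the width branch is symmetric.
attn_h = torch.sigmoid(nn.GroupNorm(4, 128)(torch.cat(feats_h, dim=1))).unsqueeze(-1)
print(attn_h.shape)   # torch.Size([2, 128, 32, 1])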


2.2 Advantages

  • Efficient SMSA: multi-scale, depth-wise, shared 1D convolutions capture multi-semantic spatial information for every feature channel, effectively integrating global context dependencies with multi-semantic spatial priors.
  • PCSA mitigates semantic discrepancies: channel similarities and contributions are computed from compressed spatial knowledge guided by SMSA, mitigating semantic discrepancies across spatial structures.
  • Synergy: by exploiting dimension decoupling, lightweight multi-semantic guidance and semantic-discrepancy mitigation, SCSA outperforms current state-of-the-art attention mechanisms on a range of vision tasks and in complex scenes.
  • Experimental evidence
    • In image classification, SCSA achieves the highest Top-1 accuracy across networks of different scales with low parameter counts and computational complexity; on ResNet its inference speed is second only to CA, striking a good balance between accuracy, speed and model complexity.
    • In object detection, it outperforms other advanced attention methods across detectors, model sizes and object scales, and its effectiveness and generalization are further confirmed in complex scenes such as small objects, dark environments and infrared imagery.
    • In segmentation, the multi-semantic spatial information makes it excel at pixel-level tasks, clearly surpassing other attention methods.
    • Visualization analysis: under comparable receptive-field conditions, SCSA attends to multiple key regions, minimizing the loss of key information and providing rich features for downstream tasks; its synergistic design preserves critical information in both the spatial- and channel-domain attention computations, giving it stronger representational power.
    • Other analyses: SCSA has a larger effective receptive field, which helps the network exploit rich contextual information for collective decision-making and thus improves performance. In terms of computational complexity, SCSA can run inference with linear complexity when the model width is appropriate. In throughput evaluations it is slightly slower than pure channel attention but faster than most hybrid attention mechanisms, achieving a good balance between model complexity, inference speed and accuracy.

Paper: https://arxiv.org/pdf/2407.05128
Code: https://github.com/HZAI-ZJNU/SCSA

3. SCSA Implementation Code

The implementation of the SCSA module is as follows:

import typing as t

import torch
import torch.nn as nn
from einops import rearrange
from mmengine.model import BaseModule  # BaseModule subclasses nn.Module; nn.Module can be used instead if mmengine is not installed


class SCSA(BaseModule):

    def __init__(
            self,
            dim: int,
            head_num: int = 8,  # default added so that SCSA(dim) also works when only the channel count is passed (as in C2f_SCSA below)
            window_size: int = 7,
            group_kernel_sizes: t.List[int] = [3, 5, 7, 9],
            qkv_bias: bool = False,
            fuse_bn: bool = False,
            norm_cfg: t.Dict = dict(type='BN'),
            act_cfg: t.Dict = dict(type='ReLU'),
            down_sample_mode: str = 'avg_pool',
            attn_drop_ratio: float = 0.,
            gate_layer: str = 'sigmoid',
    ):
        super(SCSA, self).__init__()
        self.dim = dim
        self.head_num = head_num
        self.head_dim = dim // head_num
        self.scaler = self.head_dim ** -0.5
        self.group_kernel_sizes = group_kernel_sizes
        self.window_size = window_size
        self.qkv_bias = qkv_bias
        self.fuse_bn = fuse_bn
        self.down_sample_mode = down_sample_mode

        assert self.dim % 4 == 0, 'The dimension of input feature should be divisible by 4.'
        self.group_chans = group_chans = self.dim // 4

        # depth-wise 1D convolutions with kernel sizes 3/5/7/9, one per sub-feature
        self.local_dwc = nn.Conv1d(group_chans, group_chans, kernel_size=group_kernel_sizes[0],
                                   padding=group_kernel_sizes[0] // 2, groups=group_chans)
        self.global_dwc_s = nn.Conv1d(group_chans, group_chans, kernel_size=group_kernel_sizes[1],
                                      padding=group_kernel_sizes[1] // 2, groups=group_chans)
        self.global_dwc_m = nn.Conv1d(group_chans, group_chans, kernel_size=group_kernel_sizes[2],
                                      padding=group_kernel_sizes[2] // 2, groups=group_chans)
        self.global_dwc_l = nn.Conv1d(group_chans, group_chans, kernel_size=group_kernel_sizes[3],
                                      padding=group_kernel_sizes[3] // 2, groups=group_chans)
        self.sa_gate = nn.Softmax(dim=2) if gate_layer == 'softmax' else nn.Sigmoid()
        self.norm_h = nn.GroupNorm(4, dim)
        self.norm_w = nn.GroupNorm(4, dim)

        self.conv_d = nn.Identity()
        self.norm = nn.GroupNorm(1, dim)
        self.q = nn.Conv2d(in_channels=dim, out_channels=dim, kernel_size=1, bias=qkv_bias, groups=dim)
        self.k = nn.Conv2d(in_channels=dim, out_channels=dim, kernel_size=1, bias=qkv_bias, groups=dim)
        self.v = nn.Conv2d(in_channels=dim, out_channels=dim, kernel_size=1, bias=qkv_bias, groups=dim)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.ca_gate = nn.Softmax(dim=1) if gate_layer == 'softmax' else nn.Sigmoid()

        if window_size == -1:
            self.down_func = nn.AdaptiveAvgPool2d((1, 1))
        else:
            if down_sample_mode == 'recombination':
                self.down_func = self.space_to_chans
                # dimensionality reduction
                self.conv_d = nn.Conv2d(in_channels=dim * window_size ** 2, out_channels=dim, kernel_size=1,
                                        bias=False)
            elif down_sample_mode == 'avg_pool':
                self.down_func = nn.AvgPool2d(kernel_size=(window_size, window_size), stride=window_size)
            elif down_sample_mode == 'max_pool':
                self.down_func = nn.MaxPool2d(kernel_size=(window_size, window_size), stride=window_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """The dim of x is (B, C, H, W)"""
        # Spatial attention priority calculation
        b, c, h_, w_ = x.size()
        # (B, C, H)
        x_h = x.mean(dim=3)
        l_x_h, g_x_h_s, g_x_h_m, g_x_h_l = torch.split(x_h, self.group_chans, dim=1)
        # (B, C, W)
        x_w = x.mean(dim=2)
        l_x_w, g_x_w_s, g_x_w_m, g_x_w_l = torch.split(x_w, self.group_chans, dim=1)

        x_h_attn = self.sa_gate(self.norm_h(torch.cat((
            self.local_dwc(l_x_h),
            self.global_dwc_s(g_x_h_s),
            self.global_dwc_m(g_x_h_m),
            self.global_dwc_l(g_x_h_l),
        ), dim=1)))
        x_h_attn = x_h_attn.view(b, c, h_, 1)

        x_w_attn = self.sa_gate(self.norm_w(torch.cat((
            self.local_dwc(l_x_w),
            self.global_dwc_s(g_x_w_s),
            self.global_dwc_m(g_x_w_m),
            self.global_dwc_l(g_x_w_l)
        ), dim=1)))
        x_w_attn = x_w_attn.view(b, c, 1, w_)

        x = x * x_h_attn * x_w_attn

        # Channel attention based on self attention
        # reduce calculations
        y = self.down_func(x)
        y = self.conv_d(y)
        _, _, h_, w_ = y.size()

        # normalization first, then reshape -> (B, H, W, C) -> (B, C, H * W) and generate q, k and v
        y = self.norm(y)
        q = self.q(y)
        k = self.k(y)
        v = self.v(y)
        # (B, C, H, W) -> (B, head_num, head_dim, N)
        q = rearrange(q, 'b (head_num head_dim) h w -> b head_num head_dim (h w)',
                      head_num=int(self.head_num), head_dim=int(self.head_dim))
        k = rearrange(k, 'b (head_num head_dim) h w -> b head_num head_dim (h w)',
                      head_num=int(self.head_num), head_dim=int(self.head_dim))
        v = rearrange(v, 'b (head_num head_dim) h w -> b head_num head_dim (h w)',
                      head_num=int(self.head_num), head_dim=int(self.head_dim))

        # (B, head_num, head_dim, head_dim)
        attn = q @ k.transpose(-2, -1) * self.scaler
        attn = self.attn_drop(attn.softmax(dim=-1))
        # (B, head_num, head_dim, N)
        attn = attn @ v
        # (B, C, H_, W_)
        attn = rearrange(attn, 'b head_num head_dim (h w) -> b (head_num head_dim) h w', h=int(h_), w=int(w_))
        # (B, C, 1, 1)
        attn = attn.mean((2, 3), keepdim=True)
        attn = self.ca_gate(attn)
        return attn * x
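
A quick smoke test of the block above (a sketch assuming the SCSA class is in scope and torch, einops and mmengine are installed; the sizes are arbitrary):

import torch

# One SCSA block on a dummy feature map.
# dim must be divisible by 4 (the K=4 sub-features) and by head_num.
scsa = SCSA(dim=256, head_num=8, window_size=7)
x = torch.randn(2, 256, 28, 28)    # (B, C, H, W)
out = scsa(x)
print(out.shape)                   # expected: torch.Size([2, 256, 28, 28])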

4. Improved Modules

4.1 Improvement 1

Module improvement method 1️⃣: add the SCSA module directly.
SCSA is inserted into the network as a standalone attention layer (see the backbone entry in the Section 6.1 YAML).


Note ❗: the module name that must be declared in Sections 5.2 and 5.3 is SCSA.

4.2 Improvement 2 ⭐

Module improvement method 2️⃣: a C3k2 module built on SCSA.

The second improvement modifies the C3k2 module in YOLOv11. SCSA's synergistic design preserves key information in both the spatial- and channel-domain attention computations and minimizes the loss of critical information, giving the C3k2 module stronger representational power.

The modified code is as follows.

First, add the following code, which modifies the C2f module and renames it C2f_SCSA:

class C2f_SCSA(nn.Module):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        """Initialize CSP bottleneck layer with two convolutions with arguments ch_in, ch_out, number, shortcut, groups,
        expansion.
        """
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))
        self.att = SCSA(c2)

    def forward(self, x):
        """Forward pass through C2f layer."""
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.att(self.cv2(torch.cat(y, 1)))

    def forward_split(self, x):
        """Forward pass using split() instead of chunk()."""
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.att(self.cv2(torch.cat(y, 1)))

Then add the following code so that C3k2 inherits from C2f_SCSA, renamed C3k2_SCSA:

class C3k2_SCSA(C2f_SCSA):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the C3k2 module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3k(self.c, self.c, 2, shortcut, g) if c3k else Bottleneck(self.c, self.c, shortcut, g) for _ in range(n)
        )
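
Once steps 5.1 and 5.2 below are done, a quick shape check of the new block might look like this (a sketch; the import path assumes the classes were pasted into block.py, and the sizes are arbitrary):

import torch
from ultralytics.nn.modules.block import C3k2_SCSA  # available after steps 5.1 and 5.2

m = C3k2_SCSA(c1=128, c2=128, n=1, c3k=False)
x = torch.randn(2, 128, 40, 40)    # (B, C, H, W)
print(m(x).shape)                  # expected: torch.Size([2, 128, 40, 40])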


Note ❗: the module name that must be declared in Sections 5.2 and 5.3 is C3k2_SCSA.


5. Integration Steps

5.1 Modify ultralytics/nn/modules/block.py

The file to modify here is ultralytics/nn/modules/block.py.

block.py defines the general-purpose building blocks of the network, so adding a new module only requires placing its code in this file.

Add the SCSA and C3k2_SCSA module code to this file, together with the extra imports sketched below.
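
block.py already imports torch and torch.nn as nn, but the SCSA code above additionally relies on typing, einops and (as written) mmengine, which block.py does not import by default. Assuming the code is kept exactly as above, the following lines would go near the top of block.py (install einops with pip install einops; to avoid the mmengine dependency, subclass nn.Module instead of BaseModule):

# Extra imports for SCSA at the top of ultralytics/nn/modules/block.py
import typing as t
from einops import rearrange
from mmengine.model import BaseModule  # optional: replace BaseModule with nn.Module to drop this dependency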

5.2 Modify ultralytics/nn/modules/__init__.py

The file to modify here is ultralytics/nn/modules/__init__.py.

__init__.py handles the initialization of all modules; we only need to add the names of the new modules from block.py to the corresponding import.

SCSA and C3k2_SCSA are implemented in block.py, so they must be added to the from .block import statement:

from .block import (
    C1,
    C2,
    ...
    SCSA,
    C3k2_SCSA,
)
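
Recent ultralytics versions also maintain an __all__ tuple in this file; if yours does, append the two new names there as well so that from ultralytics.nn.modules import * exposes them. A minimal sketch:

__all__ = (
    "C1",
    "C2",
    # ... keep all existing entries ...
    "SCSA",
    "C3k2_SCSA",
)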


5.3 Modify ultralytics/nn/tasks.py

In tasks.py (note: this file lives at ultralytics/nn/tasks.py, not inside the modules directory), the new module class names need to be added in two places.

First: import SCSA and C3k2_SCSA in the import statement at the top of the file.


Second: register the SCSA and C3k2_SCSA modules in the parse_model function, as sketched below.

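
The exact sets checked inside parse_model differ between ultralytics releases, so the following is only a sketch of where the two names go; whichever branch you choose determines the arguments column printed in Section 7 (the printout there suggests both modules were added to the channel-resolution branch). The set members shown below are assumptions, not the authoritative registration:

# ultralytics/nn/tasks.py (sketch; existing entries abbreviated, exact names vary by version)

# 1) Extend the import at the top of the file:
from ultralytics.nn.modules import (
    # ... keep all existing entries ...
    SCSA,
    C3k2_SCSA,
)

# 2) Inside parse_model(), add both names to the branch that resolves input/output channels
#    (the set that already contains Conv, SPPF, C3k2, ...), e.g.:
#
#        if m in {Conv, ..., C3k2, SCSA, C3k2_SCSA}:
#            c1, c2 = ch[f], args[0]
#            if c2 != nc:
#                c2 = make_divisible(min(c2, max_channels) * width, 8)
#            args = [c1, c2, *args[1:]]
#
#    Optionally, C3k2_SCSA can also be added to the set that inserts the repeat count n
#    (the one containing C2f, C3k2, ...) if it should behave exactly like C3k2.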


6. YAML Model Files

6.1 Improved Model, Version 1

Once the code changes are in place, configure the model's YAML file.

Taking ultralytics/cfg/models/11/yolov11m.yaml as an example, create a model file for training on your own dataset in the same directory and name it yolov11m-SCSA.yaml.

Copy the contents of yolov11m.yaml into yolov11m-SCSA.yaml and change nc to the number of classes in your dataset.
Adding the SCSA module to the backbone requires only one argument: the channel count.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
# yolo task=detect mode=train model=yolov11m.yaml data=data.yaml device=0 epochs=300 batch=16 imgsz=640 workers=10

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SCSA, [1024]] # 9
  - [-1, 1, SPPF, [1024, 5]] # 10
  - [-1, 2, C2PSA, [1024]] # 11

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)
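
To check the configuration before training, the model can be built directly from the new YAML and its layer table printed (the path assumes the file was created as described above):

from ultralytics import YOLO

# Build the modified model from the new YAML and print its layer summary.
model = YOLO("ultralytics/cfg/models/11/yolov11m-SCSA.yaml")
model.info()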

6.2 Improved Model, Version 2 ⭐

Again taking ultralytics/cfg/models/11/yolov11m.yaml as an example, create a model file for training on your own dataset in the same directory and name it yolov11m-C3k2_SCSA.yaml.

Copy the contents of yolov11m.yaml into yolov11m-C3k2_SCSA.yaml and change nc to the number of classes in your dataset.

📌 The modification replaces every C3k2 module in the backbone with a C3k2_SCSA module.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
# yolo task=detect mode=train model=yolov11m.yaml data=data.yaml device=0 epochs=300 batch=16 imgsz=640 workers=10

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_SCSA, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_SCSA, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_SCSA, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_SCSA, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
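
Training with either YAML follows the usual ultralytics workflow; a minimal sketch (the dataset path and hyper-parameters are placeholders, matching the command in the YAML header):

from ultralytics import YOLO

# Train the C3k2_SCSA variant on your own dataset.
model = YOLO("ultralytics/cfg/models/11/yolov11m-C3k2_SCSA.yaml")
model.train(data="data.yaml", epochs=300, imgsz=640, batch=16, workers=10, device=0)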

7. Verifying the Result

Printing the two network models shows that SCSA and C3k2_SCSA have been added to the model and that training can proceed.

YOLOv11m-SCSA

                   from  n    params  module                                       arguments
  0                  -1  1      1856  ultralytics.nn.modules.conv.Conv             [3, 64, 3, 2]
  1                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  2                  -1  1    111872  ultralytics.nn.modules.block.C3k2            [128, 256, 1, True, 0.25]
  3                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
  4                  -1  1    444928  ultralytics.nn.modules.block.C3k2            [256, 512, 1, True, 0.25]
  5                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
  6                  -1  1   1380352  ultralytics.nn.modules.block.C3k2            [512, 512, 1, True]
  7                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
  8                  -1  1   1380352  ultralytics.nn.modules.block.C3k2            [512, 512, 1, True]
  9                  -1  1      8192  ultralytics.nn.modules.block.SCSA            [512, 512]
 10                  -1  1    656896  ultralytics.nn.modules.block.SPPF            [512, 512, 5]
 11                  -1  1    990976  ultralytics.nn.modules.block.C2PSA           [512, 512, 1]
 12                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 13             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 14                  -1  1   1642496  ultralytics.nn.modules.block.C3k2            [1024, 512, 1, True]
 15                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 16             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 17                  -1  1    542720  ultralytics.nn.modules.block.C3k2            [1024, 256, 1, True]
 18                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 19            [-1, 14]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 20                  -1  1   1511424  ultralytics.nn.modules.block.C3k2            [768, 512, 1, True]
 21                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
 22            [-1, 11]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 23                  -1  1   1642496  ultralytics.nn.modules.block.C3k2            [1024, 512, 1, True]
 24        [17, 20, 23]  1   1411795  ultralytics.nn.modules.head.Detect           [1, [256, 512, 512]]
YOLOv11m-SCSA summary: 425 layers, 20,061,971 parameters, 20,061,955 gradients, 68.2 GFLOPs

YOLOv11m-C3k2_SCSA

                   from  n    params  module                                       arguments
  0                  -1  1      1856  ultralytics.nn.modules.conv.Conv             [3, 64, 3, 2]
  1                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  2                  -1  1    103424  ultralytics.nn.modules.block.C3k2_SCSA       [128, 256, False, 0.25]
  3                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
  4                  -1  1    403456  ultralytics.nn.modules.block.C3k2_SCSA       [256, 512, False, 0.25]
  5                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
  6                  -1  1   1256192  ultralytics.nn.modules.block.C3k2_SCSA       [512, 512, True]
  7                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
  8                  -1  1   1256192  ultralytics.nn.modules.block.C3k2_SCSA       [512, 512, True]
  9                  -1  1    656896  ultralytics.nn.modules.block.SPPF            [512, 512, 5]
 10                  -1  1    990976  ultralytics.nn.modules.block.C2PSA           [512, 512, 1]
 11                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 12             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 13                  -1  1   1642496  ultralytics.nn.modules.block.C3k2            [1024, 512, 1, True]
 14                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 15             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 16                  -1  1    542720  ultralytics.nn.modules.block.C3k2            [1024, 256, 1, True]
 17                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 18            [-1, 13]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 19                  -1  1   1511424  ultralytics.nn.modules.block.C3k2            [768, 512, 1, True]
 20                  -1  1   2360320  ultralytics.nn.modules.conv.Conv             [512, 512, 3, 2]
 21            [-1, 10]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 22                  -1  1   1642496  ultralytics.nn.modules.block.C3k2            [1024, 512, 1, True]
 23        [16, 19, 22]  1   1411795  ultralytics.nn.modules.head.Detect           [1, [256, 512, 512]]
YOLOv11m-C3k2_SCSA summary: 387 layers, 19,755,539 parameters, 19,755,523 gradients, 66.4 GFLOPs
