
YOLO11 Improvement - Module - Introducing the Hierarchical Reciprocal Attention Mixer (H-RAMi)

        As object detection has advanced, the YOLO family of models has been widely adopted across vision tasks thanks to its efficient computation and strong accuracy. However, traditional YOLO models have limitations in hierarchical feature processing. To address this, we introduce an advanced module from the image-restoration field: H-RAMi (Hierarchical Reciprocal Attention Mixer). H-RAMi is a hierarchical reciprocal attention module designed to compensate for the pixel-level information lost through downsampling while exploiting semantic-level information, all within an efficient hierarchical structure. This article combines it with YOLOv11 to further improve detection accuracy and robustness.

1. Overview of H-RAMi

The H-RAMi layer plays a key role in the overall architecture, addressing several limitations of hierarchical networks in image restoration:

  1. Compensating for information loss: H-RAMi mitigates the pixel-level information loss caused by downsampling in hierarchical structures. By upsampling the multi-scale attention maps from the encoder's different stages, it ensures the network retains critical detail information.

  2. Fusing multi-scale and dual-dimension attention: H-RAMi fuses attention from different scales and dimensions (spatial and channel), capturing richer semantic information. This fusion happens before layer normalization, so the mixed attention remains effective in subsequent processing.

  3. Improving model robustness: the H-RAMi layer is particularly effective at improving robustness in tasks that require precise boundaries (such as image denoising), while adding only a small amount of computation.
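The upsampling-and-fusion idea in points 1 and 2 can be sketched with PyTorch's `pixel_shuffle`, which trades channels for spatial resolution so attention maps from different encoder stages can be aligned and concatenated. This is a minimal sketch with made-up shapes; the stage count and channel widths are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

# Hypothetical multi-scale attention maps from three encoder stages.
# Deeper stages are spatially smaller but have more channels, so
# pixel_shuffle can convert channels into resolution to align them.
attn0 = torch.randn(1, 16, 32, 32)   # stage 0: full resolution
attn1 = torch.randn(1, 64, 16, 16)   # stage 1: 1/2 resolution
attn2 = torch.randn(1, 256, 8, 8)    # stage 2: 1/4 resolution

aligned = [
    attn0,                      # already at the target resolution
    F.pixel_shuffle(attn1, 2),  # 64 ch / 2^2 -> (1, 16, 32, 32)
    F.pixel_shuffle(attn2, 4),  # 256 ch / 4^2 -> (1, 16, 32, 32)
]
x = torch.cat(aligned, dim=1)   # (1, 48, 32, 32): fused multi-scale attention
print(x.shape)
```

In H-RAMi the concatenated result is then passed through a lightweight depthwise/pointwise mixer (the MobiVari block shown in Section 3) rather than being used directly.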

2. Improving YOLOv11 with H-RAMi

         We embed the H-RAMi module into YOLOv11's feature pyramid network (FPN), placing it immediately after the Concat layers so that multi-scale fusion is performed at several feature levels. This preserves feature richness while strengthening detection, in particular recovering details blurred by the downsampling operations.

3. Hierarchical Reciprocal Attention Mixer (H-RAMi) code

import torch
import torch.nn as nn


class MobiVari1(nn.Module):  # MobileNet v1 Variants
    def __init__(self, dim, kernel_size, stride, act=nn.LeakyReLU, out_dim=None):
        super(MobiVari1, self).__init__()
        self.dim = dim
        self.kernel_size = kernel_size
        self.out_dim = out_dim or dim
        self.dw_conv = nn.Conv2d(dim, dim, kernel_size, stride, kernel_size // 2, groups=dim)
        self.pw_conv = nn.Conv2d(dim, self.out_dim, 1, 1, 0)
        self.act = act()

    def forward(self, x):
        out = self.act(self.pw_conv(self.act(self.dw_conv(x)) + x))
        return out + x if self.dim == self.out_dim else out

    def flops(self, resolutions):
        H, W = resolutions
        # dw_conv + pw_conv
        flops = H * W * self.kernel_size * self.kernel_size * self.dim \
                + H * W * 1 * 1 * self.dim * self.out_dim
        return flops


class MobiVari2(MobiVari1):  # MobileNet v2 Variants
    def __init__(self, dim, kernel_size, stride, act=nn.LeakyReLU, out_dim=None,
                 exp_factor=1.2, expand_groups=4):
        super(MobiVari2, self).__init__(dim, kernel_size, stride, act, out_dim)
        self.expand_groups = expand_groups
        expand_dim = int(dim * exp_factor)
        # pad expand_dim so it is divisible by expand_groups
        expand_dim = expand_dim + (expand_groups - expand_dim % expand_groups)
        self.expand_dim = expand_dim
        self.exp_conv = nn.Conv2d(dim, self.expand_dim, 1, 1, 0, groups=expand_groups)
        self.dw_conv = nn.Conv2d(expand_dim, expand_dim, kernel_size, stride,
                                 kernel_size // 2, groups=expand_dim)
        self.pw_conv = nn.Conv2d(expand_dim, self.out_dim, 1, 1, 0)

    def forward(self, x):
        x1 = self.act(self.exp_conv(x))
        out = self.pw_conv(self.act(self.dw_conv(x1) + x1))
        return out + x if self.dim == self.out_dim else out

    def flops(self, resolutions):
        H, W = resolutions
        flops = H * W * 1 * 1 * (self.dim // self.expand_groups) * self.expand_dim  # exp_conv
        flops += H * W * self.kernel_size * self.kernel_size * self.expand_dim  # dw_conv
        flops += H * W * 1 * 1 * self.expand_dim * self.out_dim  # pw_conv
        return flops


class HRAMi(nn.Module):
    def __init__(self, dim, kernel_size=3, stride=1, mv_ver=1, mv_act=nn.LeakyReLU,
                 exp_factor=1.2, expand_groups=4):
        super(HRAMi, self).__init__()
        self.dim = dim
        self.kernel_size = kernel_size
        if mv_ver == 1:
            self.mobivari = MobiVari1(dim, kernel_size, stride, act=mv_act, out_dim=dim)
        elif mv_ver == 2:
            self.mobivari = MobiVari2(dim, kernel_size, stride, act=mv_act, out_dim=dim,
                                      exp_factor=2., expand_groups=1)

    def forward(self, x):
        # In the original restoration network, the multi-scale attention maps
        # are pixel-shuffled and concatenated here. In the YOLOv11 integration
        # that concatenation is handled by the preceding Concat layer, so this
        # module receives a single already-fused tensor.
        return self.mobivari(x)

    def flops(self, resolutions):
        return self.mobivari.flops(resolutions)


if __name__ == '__main__':
    hrami = HRAMi(dim=64)
    x = torch.randn(1, 64, 32, 32)  # fused feature map from the Concat layer
    output = hrami(x)
    print(output.size())
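As a quick sanity check on the `flops()` accounting, the MobiVari1 estimate (a depthwise pass costing H·W·k·k·dim plus a pointwise pass costing H·W·dim·out_dim) can be reproduced by hand for the demo configuration above (dim=64, kernel 3, 32×32 input):

```python
# MobiVari1 FLOPs estimate for dim=64, k=3, on a 32x32 feature map
H, W, k, dim, out_dim = 32, 32, 3, 64, 64
dw_flops = H * W * k * k * dim            # depthwise conv: 1024 * 9 * 64
pw_flops = H * W * 1 * 1 * dim * out_dim  # pointwise conv: 1024 * 64 * 64
print(dw_flops + pw_flops)  # 4784128
```

Note this counts multiply-accumulates per output position only; bias terms, activations, and the residual additions are ignored, matching the module's own `flops()` method.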

 4. Adding the Hierarchical Reciprocal Attention Mixer (H-RAMi) to YOLOv11

Step 1: Copy the core code from Section 3 into the D:\bilibili\model\YOLO11\ultralytics-main\ultralytics\nn directory, as shown in the figure below.

Step 2: Import the HRAMi module in tasks.py.

Step 3: Add the following code to the model-parsing section of tasks.py.

 1. First modification: register HRAMi so it can be referenced from the model configuration

elif m is HRAMi:
    args = [ch[f]]
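For orientation, this branch sits inside the module-dispatch chain of `parse_model()` in ultralytics/nn/tasks.py. The sketch below is illustrative: the neighbouring branches and the explicit `c2` assignment are assumptions, and the exact code varies across Ultralytics versions. Since HRAMi preserves the channel count (`out_dim == dim`), recording `c2 = ch[f]` keeps the channel bookkeeping consistent:

```python
# Inside parse_model(), ultralytics/nn/tasks.py (hedged sketch)
elif m is Concat:
    c2 = sum(ch[x] for x in f)  # existing branch: Concat output channels
elif m is HRAMi:                # new branch for this tutorial
    args = [ch[f]]              # dim = channels produced by the previous layer
    c2 = ch[f]                  # HRAMi keeps the channel count (out_dim == dim)
```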

Step 4: Copy the model configuration below into the yolo11.yaml file.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 1, HRAMi, []] # 16
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 1, HRAMi, []] # 20
  - [-1, 2, C3k2, [512, False]] # 21 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 1, HRAMi, []] # 24
  - [-1, 2, C3k2, [1024, True]] # 25 (P5/32-large)
  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)

Step 5: Run the training script; if the previous steps are wired up correctly, training starts successfully.


from ultralytics.models import YOLO

if __name__ == "__main__":
    # Build the model from the modified YOLOv11 yaml file and load pretrained weights
    model = YOLO(r"D:\bilibili\model\YOLO11\ultralytics-main\ultralytics\cfg\models\11\yolo11_HRAMI.yaml") \
        .load(r"D:\bilibili\model\YOLO11\ultralytics-main\yolo11n.pt")  # build from YAML and transfer weights
    results = model.train(data=r"D:\bilibili\model\ultralytics-main\ultralytics\cfg\datasets\VOC_my.yaml",
                          epochs=100, imgsz=640, batch=8)

