Exporting a YOLOv5 Inference Model to ONNX
Background
To deploy the person-only YOLOv5 detector from the previous post onto an edge computing platform, the PyTorch training result best.pt needs to be converted to ONNX format.
Format conversion
The yolov5 repository ships with export.py, which makes the conversion straightforward. While installing the dependency environment with the following command:
pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu
the install failed with the following error:

WARNING: Generating metadata for package onnx-simplifier produced metadata for project name onnxsim. Fix your #egg=onnx-simplifier fragments.
Discarding https://files.pythonhosted.org/packages/6a/95/9d93b8cfdd9f57abe7000cd6b9e56e2c518ce0e6bf6b312b1cf37b4e68a8/onnx-simplifier-0.4.36.tar.gz (from https://pypi.org/simple/onnx-simplifier/) (requires-python:>=3.7): Requested onnxsim from https://files.pythonhosted.org/packages/6a/95/9d93b8cfdd9f57abe7000cd6b9e56e2c518ce0e6bf6b312b1cf37b4e68a8/onnx-simplifier-0.4.36.tar.gz has inconsistent name: expected 'onnx-simplifier', but metadata has 'onnxsim'
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [36 lines of output]
    C:\Users\14406\AppData\Local\Temp\pip-install-56642aax\onnx-simplifier_3720961cbf954764811dc3ac6ca52852\setup.py:26: DeprecationWarning: Use shutil.which instead of find_executable
      CMAKE = find_executable('cmake')
    fatal: No names found, cannot describe anything.
    fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
    Use '--' to separate paths from revisions, like this:
    'git <command> [<revision>...] -- [<file>...]'
    D:\anaconda3\envs\yolov5\Lib\site-packages\setuptools\__init__.py:94: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
    !!

    ********************************************************************************
    Requirements should be satisfied by a PEP 517 installer.
    If you are using pip, you can try `pip install --use-pep517`.
    ********************************************************************************

    !!
      dist.fetch_build_eggs(dist.setup_requires)
    Traceback (most recent call last):
      File "<string>", line 2, in <module>
      File "<pip-setuptools-caller>", line 34, in <module>
      File "C:\Users\14406\AppData\Local\Temp\pip-install-56642aax\onnx-simplifier_3720961cbf954764811dc3ac6ca52852\setup.py", line 271, in <module>
        setuptools.setup(
      File "D:\anaconda3\envs\yolov5\Lib\site-packages\setuptools\__init__.py", line 117, in setup
        return distutils.core.setup(**attrs)
      File "D:\anaconda3\envs\yolov5\Lib\site-packages\setuptools\_distutils\core.py", line 145, in setup
        _setup_distribution = dist = klass(attrs)
      File "D:\anaconda3\envs\yolov5\Lib\site-packages\setuptools\dist.py", line 294, in __init__
        self.metadata.version = self._normalize_version(self.metadata.version)
      File "D:\anaconda3\envs\yolov5\Lib\site-packages\setuptools\dist.py", line 330, in _normalize_version
        normalized = str(Version(version))
      File "D:\anaconda3\envs\yolov5\Lib\site-packages\packaging\version.py", line 202, in __init__
        raise InvalidVersion(f"Invalid version: '{version}'")
    packaging.version.InvalidVersion: Invalid version: 'unknown'
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
The failure above comes from installing onnx-simplifier. None of the fixes suggested online, such as adding a `usr` option or installing the cmake library, helped. In the end, reading the source code is what solved the problem. Looking at export.py in the yolov5 repository, the ONNX simplification feature, i.e. the call into the simplifier library, is only used when the `--simplify` flag is passed on the command line.
Specifically, the function export_onnx below is the one that performs the export; its final if statement decides whether simplification is applied.
def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr("ONNX:")):
    """
    Export a YOLOv5 model to ONNX format with dynamic axes support and optional model simplification.

    Args:
        model (torch.nn.Module): The YOLOv5 model to be exported.
        im (torch.Tensor): A sample input tensor for model tracing, usually the shape is (1, 3, height, width).
        file (pathlib.Path | str): The output file path where the ONNX model will be saved.
        opset (int): The ONNX opset version to use for export.
        dynamic (bool): If True, enables dynamic axes for batch, height, and width dimensions.
        simplify (bool): If True, applies ONNX model simplification for optimization.
        prefix (str): A prefix string for logging messages, defaults to 'ONNX:'.

    Returns:
        tuple[pathlib.Path | str, None]: The path to the saved ONNX model file and None (consistent with decorator).

    Raises:
        ImportError: If required libraries for export (e.g., 'onnx', 'onnx-simplifier') are not installed.
        AssertionError: If the simplification check fails.

    Notes:
        The required packages for this function can be installed via:
        ```
        pip install onnx onnx-simplifier onnxruntime onnxruntime-gpu
        ```

    Example:
        ```python
        from pathlib import Path
        import torch
        from models.experimental import attempt_load
        from utils.torch_utils import select_device

        # Load model
        weights = 'yolov5s.pt'
        device = select_device('')
        model = attempt_load(weights, map_location=device)

        # Example input tensor
        im = torch.zeros(1, 3, 640, 640).to(device)

        # Export model
        file_path = Path('yolov5s.onnx')
        export_onnx(model, im, file_path, opset=12, dynamic=True, simplify=True)
        ```
    """
    check_requirements("onnx>=1.12.0")
    import onnx

    LOGGER.info(f"\n{prefix} starting export with onnx {onnx.__version__}...")
    f = str(file.with_suffix(".onnx"))

    output_names = ["output0", "output1"] if isinstance(model, SegmentationModel) else ["output0"]
    if dynamic:
        dynamic = {"images": {0: "batch", 2: "height", 3: "width"}}  # shape(1,3,640,640)
        if isinstance(model, SegmentationModel):
            dynamic["output0"] = {0: "batch", 1: "anchors"}  # shape(1,25200,85)
            dynamic["output1"] = {0: "batch", 2: "mask_height", 3: "mask_width"}  # shape(1,32,160,160)
        elif isinstance(model, DetectionModel):
            dynamic["output0"] = {0: "batch", 1: "anchors"}  # shape(1,25200,85)

    torch.onnx.export(
        model.cpu() if dynamic else model,  # --dynamic only compatible with cpu
        im.cpu() if dynamic else im,
        f,
        verbose=False,
        opset_version=opset,
        do_constant_folding=True,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
        input_names=["images"],
        output_names=output_names,
        dynamic_axes=dynamic or None,
    )

    # Checks
    model_onnx = onnx.load(f)  # load onnx model
    onnx.checker.check_model(model_onnx)  # check onnx model

    # Metadata
    d = {"stride": int(max(model.stride)), "names": model.names}
    for k, v in d.items():
        meta = model_onnx.metadata_props.add()
        meta.key, meta.value = k, str(v)
    onnx.save(model_onnx, f)

    # Simplify
    if simplify:
        try:
            cuda = torch.cuda.is_available()
            check_requirements(("onnxruntime-gpu" if cuda else "onnxruntime", "onnxslim"))

            import onnxslim

            LOGGER.info(f"{prefix} slimming with onnxslim {onnxslim.__version__}...")
            model_onnx = onnxslim.slim(model_onnx)
            onnx.save(model_onnx, f)
        except Exception as e:
            LOGGER.info(f"{prefix} simplifier failure: {e}")
    return f, model_onnx
As the simplify branch of the code shows, what gets imported is onnxslim; further digging confirmed that it is the onnxslim library that needs to be installed.
After installing the onnxslim library, we can verify it.
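One quick way to check that the install actually took, a small sketch that only probes whether the module can be found, without importing it:

```python
import importlib.util

# find_spec looks the module up on sys.path without executing it;
# it returns None when the package is not installed
spec = importlib.util.find_spec("onnxslim")
print("onnxslim installed:", spec is not None)
```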
With installation complete, the problem is solved. We can now run the conversion:
python export.py --weights best.pt --simplify --include onnx
The conversion completes successfully with no errors.
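Beyond the absence of error messages, the exported file can be smoke-tested by loading it and running one dummy frame through onnxruntime. A hedged sketch, assuming best.onnx sits in the working directory and was exported at the default 640x640 input size; the guard keeps it from failing when the file is absent:

```python
import os

MODEL = "best.onnx"
found = os.path.exists(MODEL)
if found:
    import numpy as np
    import onnxruntime as ort

    # Run one all-zeros 640x640 frame through the exported model on CPU
    sess = ort.InferenceSession(MODEL, providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    x = np.zeros((1, 3, 640, 640), dtype=np.float32)
    outputs = sess.run(None, {name: x})
    print("output shapes:", [o.shape for o in outputs])
else:
    print(f"{MODEL} not found; run export.py first")
```

If the session constructs and the run returns without raising, the ONNX graph is structurally sound and loadable by the runtime that will ultimately serve it on the edge device.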