
YOLOv5 + SE Attention: A Practical Guide to Improving Object Detection Performance

1. Introduction

Object detection is a core task in computer vision, with wide applications in autonomous driving, security surveillance, and industrial inspection. YOLOv5, one of the most widely used models in the YOLO series, stands out in practice for its efficiency and accuracy. However, as application scenarios grow more complex, conventional convolutional networks can hit performance bottlenecks when dealing with cluttered backgrounds and multi-scale targets. Introducing an attention mechanism has proven to be an effective remedy. This article walks through how to add the SE (Squeeze-and-Excitation) attention mechanism to YOLOv5 by modifying the model configuration file and the code, and compares the training results.

Compared with its predecessors in the YOLO series, YOLOv5 refines the model structure, training strategy, and data augmentation, which significantly improves both performance and efficiency. Its main features include:

  • Model structure optimization: a redesigned backbone (Backbone) and path-aggregation neck (Neck) strengthen feature extraction and fusion.
  • Data augmentation: methods such as Mosaic and MixUp improve the model's generalization.
  • Training strategy: automatic anchor fitting (AutoAnchor) and evolved hyperparameters improve training efficiency and detection accuracy.

Still, as task complexity increases, plain convolutional networks often fall short on multi-scale targets, and the SE attention mechanism offers a fresh way to raise detection accuracy.

2. YOLOv5 and the SE Attention Mechanism

2.1 A Brief Look at YOLOv5

YOLOv5 is widely used in object detection for its efficiency and accuracy. Structurally it consists of three parts:

  • Backbone: extracts features from the input image.
  • Neck: fuses features to strengthen multi-scale perception.
  • Head: produces predictions from the extracted features.

2.2 The SE Attention Mechanism

SE (Squeeze-and-Excitation) is a lightweight attention module that explicitly models dependencies between channels to improve the network's representational power. It consists of two key parts:

  • Squeeze: global average pooling collapses the spatial dimensions of the feature map to 1x1, producing a channel descriptor.
  • Excitation: two fully connected layers followed by a Sigmoid activation produce channel weights, which are used to recalibrate the channel responses of the feature map.

By inserting SE modules, YOLOv5 can attend more to informative feature channels and suppress unimportant ones, which improves overall performance.
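In equation form, the SE recalibration from the original paper works as follows, where δ is the ReLU, σ the Sigmoid, and r the channel reduction ratio (the implementation in section 3.2 uses r = 16):

```latex
z_c = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} x_c(i,j)
\qquad \text{(Squeeze: global average pooling)}

s = \sigma\!\left(W_2\,\delta(W_1 z)\right), \quad
W_1 \in \mathbb{R}^{\frac{C}{r}\times C}, \;
W_2 \in \mathbb{R}^{C\times\frac{C}{r}}
\qquad \text{(Excitation)}

\tilde{x}_c = s_c \cdot x_c
\qquad \text{(channel recalibration)}
```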

3. Implementing SE Attention in YOLOv5

3.1 Modifying the Model Configuration File

To integrate SE attention into YOLOv5, three files need to be modified: common.py, yolo.py, and the model yaml (start from yolov5s.yaml and save the modified copy as yolov5_se.yaml). In the yaml, an SE module is inserted into the Backbone; note that once it is inserted, every subsequent layer index must be renumbered. The SE module can also be added at other points, for example before the P3 output in the head (the commented-out lines below mark such spots). The modified configuration file:

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SENet, [1024]],  # 9 SEAttention
   [-1, 1, SPPF, [1024, 5]],  # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],  # 11
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 14
   [-1, 1, Conv, [256, 1, 1]],  # 15
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   # [-1, 1, SENet, [256]],  # optional SEAttention before the P3 output
   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],  # cat head P4 (index shifted by the SE layer)
   # [-1, 1, SENet, [512]],  # optional SEAttention
   [-1, 3, C3, [512, False]],  # 21 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],  # cat head P5 (index shifted by the SE layer)
   # [-1, 1, SENet, [1024]],  # optional SEAttention
   [-1, 3, C3, [1024, False]],  # 24 (P5/32-large)
   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```
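One detail to keep in mind: parse_model (shown in section 3.3) scales every channel argument by width_multiple before building the layer, so the [1024] passed to SENet does not mean 1024 channels at runtime. A quick check using YOLOv5's make_divisible rounding rule, reproduced here for illustration:

```python
import math

def make_divisible(x, divisor):
    # same rounding rule parse_model applies to channel counts
    return math.ceil(x / divisor) * divisor

# yolov5s uses width_multiple = 0.50, so SENet's 1024 becomes:
print(make_divisible(1024 * 0.50, 8))  # 512 -> the SE layer actually sees 512 channels
```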

3.2 Implementing the SE Module

Next, the SE module itself must be implemented in the YOLOv5 code base, typically in models/common.py. Here is an implementation of the SE block (named SENet to match the yaml):

```python
import torch
import torch.nn as nn


class SENet(nn.Module):
    # Squeeze-and-Excitation block; the extra arguments (c2, n, shortcut, g, e)
    # keep the constructor signature compatible with parse_model
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super(SENet, self).__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)         # Squeeze: B x C x 1 x 1
        self.l1 = nn.Linear(c1, c1 // 16, bias=False)  # Excitation: bottleneck FC (r = 16)
        self.relu = nn.ReLU(inplace=True)
        self.l2 = nn.Linear(c1 // 16, c1, bias=False)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avgpool(x).view(b, c)  # squeeze spatial dimensions
        y = self.l1(y)
        y = self.relu(y)
        y = self.l2(y)
        y = self.sig(y)                 # per-channel weights in (0, 1)
        y = y.view(b, c, 1, 1)
        return x * y.expand_as(x)       # recalibrate channels
```
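A quick sanity check of the block (a minimal sketch; the channel count and spatial size are arbitrary) confirms it preserves the input shape:

```python
import torch

se = SENet(256, 256)             # c1 = c2 = 256 for a standalone test
x = torch.randn(2, 256, 40, 40)  # B x C x H x W
y = se(x)
print(y.shape)                   # torch.Size([2, 256, 40, 40]): same shape, channels reweighted
```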

3.3 Registering the SE Module

To let YOLOv5 build SE layers from the yaml in both the Backbone and the Neck, the parse_model function in yolo.py must be modified so that it recognizes the new module. Here is the modified parse_model (SENet added to both module sets):

```python
def parse_model(d, ch):  # model_dict, input_channels(3)
    # Parse a YOLOv5 model.yaml dictionary
    LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10}  {'module':<40}{'arguments':<30}")
    anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'], d.get('activation')
    if act:
        Conv.default_act = eval(act)  # redefine default activation, i.e. Conv.default_act = nn.SiLU()
        LOGGER.info(f"{colorstr('activation:')} {act}")  # print
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)

    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            with contextlib.suppress(NameError):
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings

        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
        if m in {
                Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, SENet}:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x, CBAMBottleneck, CABottleneck, CBAMC3, SENet, CANet,
                     CAC3, CBAM, ECANet, GAMNet}:
                args.insert(2, n)  # number of repeats
                n = 1
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum(ch[x] for x in f)
        # TODO: channel, gw, gd
        elif m in {Detect, Segment}:
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
            if m is Segment:
                args[3] = make_divisible(args[3] * gw, 8)
        elif m is Contract:
            c2 = ch[f] * args[0] ** 2
        elif m is Expand:
            c2 = ch[f] // args[0] ** 2
        else:
            c2 = ch[f]

        m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum(x.numel() for x in m_.parameters())  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f}  {t:<40}{str(args):<30}')  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)
```
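One assumption behind this change is that yolo.py can actually see the SENet class. In the stock repository that happens automatically, since models/yolo.py pulls in everything from models/common.py; if your fork enumerates imports explicitly, SENet must be appended there:

```python
# near the top of models/yolo.py (stock repo): a SENet class defined in
# models/common.py is picked up by the wildcard import automatically
from models.common import *

# if your fork lists imports explicitly instead, add SENet to the list, e.g.:
# from models.common import Conv, C3, SPPF, Concat, SENet
```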

3.4 Training and Results

With the configuration file and code changes in place, training can begin. Either the COCO dataset or a custom dataset works for training and validation. Here I trained for 100 epochs on a custom dataset, camel_elephant_training, which contains only two classes: camel and elephant.

After training, the AP (average precision) metric can be used to compare the model with and without SE attention. In general, adding the SE module makes YOLOv5 noticeably stronger on cluttered backgrounds and multi-scale targets.
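For example, both checkpoints can be evaluated on the validation set with the stock val.py script. The weights path is a placeholder for whatever your training run produced, and camel_elephant.yaml is a hypothetical name for your dataset description:

```bash
python val.py --weights runs/train/exp/weights/best.pt --data camel_elephant.yaml --img 640
```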

Summarizing the training results: due to limited time I trained for only 100 epochs, whereas 150 to 200 epochs would be the normal setting; judging from the train/obj_loss curve, the loss still had room to fall.

3.5 Training Steps

  1. Set up the training environment and make sure YOLOv5 and its dependencies are installed.
  2. Download the COCO dataset, or prepare a custom dataset.
  3. Point the training script at the modified model configuration file yolov5_se.yaml.
  4. Start training and monitor the loss and accuracy during the run (a sample command follows this list).
  5. After training, evaluate the model on the validation set.
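For steps 3 and 4, a typical invocation looks like the following; camel_elephant.yaml is again a hypothetical dataset yaml:

```bash
python train.py --img 640 --batch 16 --epochs 100 \
    --data camel_elephant.yaml --cfg models/yolov5_se.yaml --weights yolov5s.pt
```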

3.6 Model Deployment

The trained weights can be converted to .onnx format with export.py, after which the model can be deployed on a wide range of platforms. Below is the stock YOLOv5 export.py, cleaned up and trimmed to the parts relevant to ONNX export; the exporters for the other formats are unchanged from the official script and are elided with comments:

```python
import argparse
import os
import platform
import sys
import time
import warnings
from pathlib import Path

import pandas as pd
import torch

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
if platform.system() != 'Windows':
    ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.experimental import attempt_load
from models.yolo import ClassificationModel, Detect, DetectionModel, SegmentationModel
from utils.general import (LOGGER, Profile, check_img_size, check_requirements, colorstr,
                           file_size, get_default_args, print_args, url2file)
from utils.torch_utils import select_device, smart_inference_mode


def export_formats():
    # YOLOv5 export formats
    x = [
        ['PyTorch', '-', '.pt', True, True],
        ['TorchScript', 'torchscript', '.torchscript', True, True],
        ['ONNX', 'onnx', '.onnx', True, True],
        ['OpenVINO', 'openvino', '_openvino_model', True, False],
        ['TensorRT', 'engine', '.engine', False, True],
        ['CoreML', 'coreml', '.mlmodel', True, False],
        ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
        ['TensorFlow GraphDef', 'pb', '.pb', True, True],
        ['TensorFlow Lite', 'tflite', '.tflite', True, False],
        ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
        ['TensorFlow.js', 'tfjs', '_web_model', False, False],
        ['PaddlePaddle', 'paddle', '_paddle_model', True, True],]
    return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])


def try_export(inner_func):
    # YOLOv5 export decorator, i.e. @try_export
    inner_args = get_default_args(inner_func)

    def outer_func(*args, **kwargs):
        prefix = inner_args['prefix']
        try:
            with Profile() as dt:
                f, model = inner_func(*args, **kwargs)
            LOGGER.info(f'{prefix} export success ✅ {dt.t:.1f}s, saved as {f} ({file_size(f):.1f} MB)')
            return f, model
        except Exception as e:
            LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}')
            return None, None

    return outer_func


@try_export
def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')):
    # YOLOv5 ONNX export
    check_requirements('onnx')
    import onnx

    LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
    f = file.with_suffix('.onnx')

    output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
    if dynamic:
        dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}}  # shape(1,3,640,640)
        if isinstance(model, SegmentationModel):
            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
            dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'}  # shape(1,32,160,160)
        elif isinstance(model, DetectionModel):
            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)

    torch.onnx.export(
        model.cpu() if dynamic else model,  # --dynamic only compatible with cpu
        im.cpu() if dynamic else im,
        f,
        verbose=False,
        opset_version=opset,
        do_constant_folding=True,
        input_names=['images'],
        output_names=output_names,
        dynamic_axes=dynamic or None)

    # Checks
    model_onnx = onnx.load(f)  # load onnx model
    onnx.checker.check_model(model_onnx)  # check onnx model

    # Metadata
    d = {'stride': int(max(model.stride)), 'names': model.names}
    for k, v in d.items():
        meta = model_onnx.metadata_props.add()
        meta.key, meta.value = k, str(v)
    onnx.save(model_onnx, f)

    # Simplify
    if simplify:
        try:
            cuda = torch.cuda.is_available()
            check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1'))
            import onnxsim

            LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
            model_onnx, check = onnxsim.simplify(model_onnx)
            assert check, 'assert check failed'
            onnx.save(model_onnx, f)
        except Exception as e:
            LOGGER.info(f'{prefix} simplifier failure: {e}')
    return f, model_onnx


# ... export_torchscript, export_openvino, export_engine, export_coreml and the
# TensorFlow / PaddlePaddle exporters are unchanged from the stock script and
# are omitted here for brevity ...


@smart_inference_mode()
def run(
        data=ROOT / 'data/coco128.yaml',  # 'dataset.yaml path'
        weights=ROOT / 'yolov5s.pt',  # weights path
        imgsz=(640, 640),  # image (height, width)
        batch_size=1,  # batch size
        device='cpu',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        include=('torchscript', 'onnx'),  # include formats
        half=False,  # FP16 half-precision export
        inplace=False,  # set YOLOv5 Detect() inplace=True
        keras=False,  # use Keras
        optimize=False,  # TorchScript: optimize for mobile
        int8=False,  # CoreML/TF INT8 quantization
        dynamic=False,  # ONNX/TF/TensorRT: dynamic axes
        simplify=False,  # ONNX: simplify model
        opset=12,  # ONNX: opset version
        verbose=False,  # TensorRT: verbose log
        workspace=4,  # TensorRT: workspace size (GB)
        nms=False,  # TF: add NMS to model
        agnostic_nms=False,  # TF: add agnostic NMS to model
        topk_per_class=100,  # TF.js NMS: topk per class to keep
        topk_all=100,  # TF.js NMS: topk for all classes to keep
        iou_thres=0.45,  # TF.js NMS: IoU threshold
        conf_thres=0.25,  # TF.js NMS: confidence threshold
):
    t = time.time()
    include = [x.lower() for x in include]  # to lowercase
    fmts = tuple(export_formats()['Argument'][1:])  # --include arguments
    flags = [x in include for x in fmts]
    assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}'
    jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle = flags  # export booleans
    file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights)  # PyTorch weights

    # Load PyTorch model
    device = select_device(device)
    if half:
        assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0'
        assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both'
    model = attempt_load(weights, device=device, inplace=True, fuse=True)  # load FP32 model

    # Checks
    imgsz *= 2 if len(imgsz) == 1 else 1  # expand
    if optimize:
        assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. use --device cpu'

    # Input
    gs = int(max(model.stride))  # grid size (max stride)
    imgsz = [check_img_size(x, gs) for x in imgsz]  # verify img_size are gs-multiples
    im = torch.zeros(batch_size, 3, *imgsz).to(device)  # image size(1,3,320,192) BCHW iDetection

    # Update model
    model.eval()
    for k, m in model.named_modules():
        if isinstance(m, Detect):
            m.inplace = inplace
            m.dynamic = dynamic
            m.export = True

    for _ in range(2):
        y = model(im)  # dry runs
    if half and not coreml:
        im, model = im.half(), model.half()  # to FP16
    shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape
    metadata = {'stride': int(max(model.stride)), 'names': model.names}  # model metadata
    LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)")

    # Exports
    f = [''] * len(fmts)  # exported filenames
    warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning)  # suppress TracerWarning
    if onnx or xml:  # OpenVINO requires ONNX
        f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify)
    # ... branches for the other formats (TorchScript, TensorRT, OpenVINO, CoreML,
    # TensorFlow, PaddlePaddle) are unchanged from the stock script ...

    # Finish
    f = [str(x) for x in f if x]  # filter out '' and None
    if any(f):
        LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)'
                    f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
                    f"\nVisualize:       https://netron.app")
    return f  # return list of exported files/dirs


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
    parser.add_argument('--keras', action='store_true', help='TF: use Keras')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')
    parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
    parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
    parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
    parser.add_argument('--include',
                        nargs='+',
                        default=['torchscript'],
                        help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle')
    opt = parser.parse_args()
    print_args(vars(opt))
    return opt


def main(opt):
    for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):
        run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
```
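With the script in place, a typical ONNX export of the trained weights looks like this (the weights path depends on your training run):

```bash
python export.py --weights runs/train/exp/weights/best.pt --include onnx --opset 12 --img 640
```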

4. Conclusion

This article covered how to add the SE attention mechanism to YOLOv5: modifying the model configuration file, implementing the module, the training steps, and a comparison of results. With SE modules in place, YOLOv5's detection accuracy on multi-scale targets and in cluttered backgrounds improves. Going forward, other attention mechanisms such as CBAM and ECA are worth exploring to push YOLOv5's performance further. Thanks for reading.
