
CANN/cann-recipes-train: Multi-Turn Tool-Call Code RL on Ascend NPUs

# Code RL with Multi-Turn Tool Calling on Ascend NPUs

> cann-recipes-train provides CANN-based optimization examples for typical models and acceleration algorithms in LLM and multimodal model training. Project address: https://gitcode.com/cann/cann-recipes-train

## Overview

This project is based on the Qwen3-1.7B model and uses the verl code sandbox service adapted for the Ascend platform to achieve efficient, stable long-context multi-turn tool-call Code RL training. Our contributions include:

- Developed ScaleBox, a scalable distributed code execution sandbox supporting large-scale multi-node deployment, compatibility with mainstream RL frameworks, and efficient unified evaluation across multiple models and benchmarks.
- Provided a unified deployment image combining verl and ScaleBox, supporting co-deployment of the ScaleBox service and verl training tasks on a single node, with zero-cost migration to Huawei Cloud ModelArts.
- Validated Code RL training using the verl framework and the ScaleBox sandbox on Ascend NPUs.
- Organized SFT data and an SFT strategy for coding tool calls, and introduced multi-turn tool-call Coding Agent training in RL (the first open-source verl-based Coding Agent RL recipe supporting multi-turn tool calling).
- Patches to integrate speculative decoding (EAGLE3 and Suffix) into the verl vLLM-Ascend rollout pipeline, with per-step metrics collection: draft token count, accepted token count, draft acceptance rate, mean acceptance length, and per-position acceptance rates.
- Validation of EAGLE3 speculative decoding within the multi-turn tool-call Code RL training loop on Ascend NPUs, achieving a 30% improvement in end-to-end throughput and a 25% reduction in training step time without loss of accuracy.

ScaleBox is a scalable distributed code execution sandbox. Its core features include:

- A scalable distributed sandbox architecture: multi-node distributed deployment with request load balancing, plus unit-test-level and instance-level parallelism.
- A unified training interface and evaluation suite for Code RL: an efficient batch evaluation interface `common_evaluate_batch` that, unlike `run_code`, handles multiple test cases in a single request, significantly improving training efficiency; built-in support for mainstream code benchmarks such as LiveCodeBench, HumanEval, and MBPP enables one-command evaluation.
- A flexible Special Judge mechanism: custom judging logic can be plugged in, flexibly accommodating complex programming problems that have multiple correct answers.

## Hardware Requirements

Atlas A2/A3 series, single node with 8 NPUs.

## Software Requirements

The base recipe and the SD extension share the same verl commit but differ in vLLM version. The SD extension uses vLLM 0.13.0 and vLLM-Ascend v0.13.0, which bring more stable speculative decoding support and an async implementation compared to 0.11.0. Note that the software versions below reflect the tested environment; CANN 8.3.RC1 is expected to work for the SD extension as well.

| Component | Base Recipe | SD Extension |
| --- | --- | --- |
| Environment | Docker | Conda |
| verl | commit `c651b7b` (based on v0.7.0.dev) | commit `c651b7b` (based on v0.7.0.dev) |
| vllm | 0.11.0 | 0.13.0 |
| vllm-ascend | v0.11.0rc1 | v0.13.0 |
| CANN | 8.3.RC1 | 8.5.0 |

## File Structure

```
├── patches
│   ├── verl                                   # verl patch directory
│   │   ├── 0001-verl-feature-improve_rl_usability.patch  # General Code RL usability improvements (shared)
│   │   ├── 0002-enable-tool-agent-loop.patch  # Multi-turn tool-call support (shared)
│   │   ├── 0003-toolcall-reward.patch         # Tool-call reward (base recipe)
│   │   └── 0004-enable-specrl-clean.patch     # Suffix/EAGLE3 speculative decoding integration (SD extension)
│   └── vllm
│       └── 0001-enable-sprl.patch             # vLLM-side EAGLE3 speculative decoding support (SD extension)
├── figures
│   ├── evaluation_progress.png                # Evaluation scores across training checkpoints (base)
│   ├── training_progress.png                  # Training metrics progress (base)
│   ├── sd_nosd_accuracy.png                   # Accuracy comparison: spec decode vs. no spec decode (SD)
│   ├── throughput_speedup.png                 # Throughput speedup results (SD)
│   └── acceptance_rate_overall.png            # Draft acceptance rate across RL training steps (SD)
├── tool_config
│   └── scalebox_tool_config.yaml              # ScaleBox tool-call configuration (shared)
├── build_dataset.py                           # RL training dataset construction script (shared)
├── filter_sft_data.py                         # SFT tool-call dataset construction script (base)
├── scalebox.py                                # Custom reward function for ScaleBox integration (shared)
├── download_eagle.py                          # EAGLE3 draft model download script (SD extension)
├── run_code_rl_demo.sh                        # RL training script (base recipe)
├── run_multi_turn_livecodebench_eval.sh       # Multi-turn LiveCodeBench evaluation script (base)
├── run_toolcall_sft_demo.sh                   # Multi-turn tool-call SFT training script (base)
├── spec_rl_run.sh                             # RL training script with speculative decoding (SD extension)
├── no_spec_rl_run.sh                          # RL training script without speculative decoding — baseline (SD extension)
├── process_all_the_logs_sprl.py               # Log processing and metrics analysis script (SD extension)
└── README.md                                  # This document
```

## Part 1: Base Recipe — Multi-Turn Tool-Call Code RL

### Environment Setup

#### Build Docker Images

Build the verl image supporting Code RL. Refer to `verl.Dockerfile` and `verl_sandbox.Dockerfile` from the `agent_rl/qwen2_code_rl` example:

```shell
docker build --network host -f verl.Dockerfile -t verl:main-c651b7b-py311-cann8.3.RC1 .
```

Clone ScaleBox and build the combined verl + ScaleBox image:

```shell
git clone https://link.gitcode.com/i/cabdcdb331cef587028f0fd703a28949
docker build --network host -f verl_sandbox.Dockerfile -t verl_sandbox:main-c651b7b-py311-cann8.3.RC1 .
```

#### Set Up verl

Clone verl and check out the specified commit:

```shell
git clone https://github.com/volcengine/verl
cd verl
git checkout c651b7b4207e408875f132c4226969ef3495d408
cd ..
```

Apply patches.
The following modifications are included:

- Add support for the `code_contests` data source in the prime reward manager.
- Reduce the concurrent process count in the prime reward manager from 64 to 32 to avoid sandbox resource contention.
- Extend the task timeout in the prime reward manager from 300 s to 3000 s to support code execution with larger batches.
- Enhanced logging during training for easier debugging.
- Support for multi-turn tool-call coding training logic.
- A tool-call reward to improve training stability.

```shell
git apply patches/verl/0001-verl-feature-improve_rl_usability.patch
git apply patches/verl/0002-enable-tool-agent-loop.patch
git apply patches/verl/0003-toolcall-reward.patch
```

#### Deploy ScaleBox

Start the combined verl_sandbox container:

```shell
docker run -it --privileged --name start_verl_sandbox --user root --network host \
    --shm-size 500g \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci4 \
    --device /dev/davinci5 \
    --device /dev/davinci6 \
    --device /dev/davinci7 \
    --device /dev/davinci_manager \
    --device /dev/hisi_hdc \
    --device /dev/devmm_svm \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/bin/hccn_tool:/usr/bin/hccn_tool \
    -v /usr/local/sbin:/usr/local/sbin \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /usr/local/Ascend/firmware:/usr/local/Ascend/firmware \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /etc/hccn.conf:/etc/hccn.conf \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    verl_sandbox:main-c651b7b-py311-cann8.3.RC1 /bin/bash
```

Activate the ScaleBox environment:

```shell
source /home/ma-user/miniconda3/bin/activate sandbox-base
```

Now deploy ScaleBox. The following command is for single-node Code RL training.
For distributed deployment options, refer to the ScaleBox repository:

```shell
export HOST=0.0.0.0     # Server host address
export PORT=8080        # Service port
export WORKERS=32       # Number of Uvicorn parallel workers
export MAX_MEM=50000000 # Maximum memory per process
cd ScaleBox
make run-online > deploy_${HOST}:${PORT}.log 2>&1 &
```

Verify the service is running:

```shell
curl http://localhost:8080/run_code \
    -H "Content-Type: application/json" \
    --data-raw '{"code": "print(\"Hello, world!\")", "language": "python"}'
```

Expected response:

```json
{"status": "Success", "message": "", "compile_result": null, "run_result": {"status": "Finished", "execution_time": 0.02984905242919922, "return_code": 0, "stdout": "Hello, world!\n", "stderr": ""}}
```

### Dataset Preparation

#### SFT Tool-Call Data

Based on Gen-Verse/Open-AgentRL-SFT-3K, this step filters multi-turn Python tool-call reasoning data and converts it for RL training:

```shell
python build_toolcall_sft_data.py
```

#### RL Data

Based on PrimeIntellect/verifiable-coding-problems, this step filters high-quality Python code samples as RL training data (verifiable-coding-problems-python-only):

```shell
python build_rl_dataset.py
```

### SFT Fine-Tuning

Download model weights:

```shell
hf download Qwen/Qwen3-1.7B --local-dir Qwen/Qwen3-1.7B
```

Run SFT using `run_toolcall_sft_demo.sh`, adjusting the default model and data paths as needed:

```shell
source /home/ma-user/miniconda3/bin/activate base
mkdir -p log/sft_run_log
bash run_toolcall_sft_demo.sh
```

Select the sft_step_50 checkpoint and merge the trained model weights:

```shell
python3 -m verl.model_merger merge \
    --backend fsdp \
    --local_dir checkpoint/multiturn-toolcall-sft-qwen-3-1b/global_step_50 \
    --target_dir checkpoint/multiturn-toolcall-sft-qwen-3-1b/global_step_50/huggingface
```

### Reinforcement Learning Training

The RL training script is `run_code_rl_demo.sh`.
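Rewards for this training come from `scalebox.py`, which interprets ScaleBox execution results. As a minimal offline sketch — the scoring rule below is an illustrative assumption, not the repo's exact logic — the sample `/run_code` response shown earlier can be mapped to a binary reward like this:

```python
import json

def code_reward(response_json: str) -> float:
    """Map a ScaleBox /run_code response to a binary reward.

    Field names follow the sample response in this README; the
    scoring rule itself is an illustrative assumption.
    """
    resp = json.loads(response_json)
    run = resp.get("run_result") or {}
    ok = (
        resp.get("status") == "Success"
        and run.get("status") == "Finished"
        and run.get("return_code") == 0
    )
    return 1.0 if ok else 0.0

# Sample response from the verification step above
sample = json.dumps({
    "status": "Success", "message": "", "compile_result": None,
    "run_result": {"status": "Finished", "execution_time": 0.0298,
                   "return_code": 0, "stdout": "Hello, world!\n", "stderr": ""},
})
print(code_reward(sample))  # -> 1.0
```

In the real recipe the per-test-case results returned by `common_evaluate_batch` would be aggregated instead of a single `run_code` call, but the pass/fail interpretation is the same idea.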
Adjust the default model weights and data paths as needed:

```shell
bash run_code_rl_demo.sh
```

### Training Results

The figures below show training metrics: model scores on the training data (no repeated data), inference length and clip ratio, and tool-call interaction rounds.

### Model Evaluation

This experiment evaluates the model's code generation capability on the LiveCodeBench dataset, following the inference settings from DeepSeek-R1.

Evaluation settings:

- release_version: v5
- start_date: 2024-08-01
- code_execution: ScaleBox

Inference settings:

- n: 4
- temperature: 0.6
- top_p: 0.95
- max_tokens: 32768

| Step | LiveCodeBench (Pass@1) |
| --- | --- |
| 20 | 16.03 |
| 40 | 16.74 |
| 60 | 18.08 |
| 80 | 18.63 |
| 100 | 19.19 |
| 120 | 20.14 |
| 140 | 21.34 |
| 160 | 24.45 |
| 180 | 26.20 |
| 200 | 25.97 |
| 220 | 26.36 |
| 240 | 28.39 |

## Part 2: Speculative Decoding Extension

This section describes how to enable EAGLE3 speculative decoding on top of the base recipe. It requires an updated vLLM version and a Conda-based environment instead of Docker.

Our analysis shows the rollout phase accounts for up to 78.3% of total RL step time (2816.6 s out of 3596.5 s per step on Qwen3-1.7B). Speculative decoding directly addresses this bottleneck by accelerating token generation during vLLM rollout, targeting a ≥25% end-to-end training speedup without accuracy degradation.

Note: `0001-verl-feature-improve_rl_usability.patch`, `0002-enable-tool-agent-loop.patch`, `build_dataset.py`, and `scalebox.py` are shared with the base recipe unchanged. The remaining files in this section are new additions specific to the SD extension. The SFT fine-tuning and tool-call reward steps described in Part 1 are not required for the speculative decoding extension; the SD extension uses the public Qwen3-1.7B model weights directly from HuggingFace.

### Environment Setup

#### 1. Create Conda Environment

```shell
conda create -n verl-specrl python=3.11 -y
conda activate verl-specrl
source /path/to/CANN_8.5.0/ascend-toolkit/set_env.sh
source /path/to/CANN_8.5.0/nnal/atb/set_env.sh
```

#### 2. Install vLLM

```shell
git clone --depth 1 --branch v0.13.0 https://github.com/vllm-project/vllm.git
cd vllm
VLLM_TARGET_DEVICE=empty pip install -v -e .
cd ..
```

#### 3. Install vLLM-Ascend

```shell
git clone --depth 1 --branch v0.13.0 https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install decorator
python -m pip install -U pip setuptools wheel
python -m pip install -U cmake ninja pybind11
python -m pip install -U "setuptools-scm>=8"
pip install --no-cache-dir torch==2.8.0 torch-npu==2.8.0
pip install torchvision==0.23.0 --no-deps
pip install -e . --no-build-isolation --no-deps
# vllm-ascend commit id: 6281c1207a7a499e9f23a42b3a1e7027469f2b10
cd ..
```

#### 4. Install verl

```shell
git clone https://github.com/volcengine/verl
cd verl
git checkout c651b7b4207e408875f132c4226969ef3495d408
pip install -r requirements-npu.txt
pip install click==8.2.1
pip install git+https://github.com/ShaohonChen/PyExt.git@py311support
pip install -e .
cd ..
```

#### 5. Apply Patches

```shell
# verl patches — run from inside the verl directory
git apply ../patches/verl/0001-verl-feature-improve_rl_usability.patch
git apply ../patches/verl/0002-enable-tool-agent-loop.patch
git apply ../patches/verl/0004-enable-specrl-clean.patch

# vLLM patch — run from inside the vllm directory
cd /path/to/vllm
git apply /path/to/cann-recipes-train/agent_rl/qwen3_code_toolcall/patches/vllm/0001-enable-eagle-sprl.patch
cd ..
```

#### 6. Fix Dependencies

```shell
pip install numba
pip uninstall triton-ascend triton -y
pip install transformers==4.57.6
pip install setuptools==80.10.2
pip install decorator
pip install arctic-inference==0.1.1
```

### Deploy ScaleBox

```shell
conda create -n scalebox python=3.11 -y
conda activate scalebox
git clone https://github.com/icip-cas/ScaleBox.git
cd ScaleBox
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
pip config set global.trusted-host mirrors.aliyun.com
pip install -U pip setuptools wheel
pip install -r requirements.txt
pip install databases
pip install aiosqlite
```

```shell
export HOST=0.0.0.0
export PORT=8080
export WORKERS=32
export MAX_MEM=50000000
cd ScaleBox
make run-online > deploy_${HOST}:${PORT}.log 2>&1 &
```

Verify the service is running:

```shell
curl http://localhost:8080/run_code \
    -H "Content-Type: application/json" \
    --data-raw '{"code": "print(\"Hello, world!\")", "language": "python"}'
```

Expected response:

```json
{"status": "Success", "message": "", "compile_result": null, "run_result": {"status": "Finished", "execution_time": 0.02984905242919922, "return_code": 0, "stdout": "Hello, world!\n", "stderr": ""}}
```

### Dataset Preparation

Inherited from the base recipe — run `python build_dataset.py` as described in Part 1.

### Model Preparation

Download the target model and EAGLE3 draft model weights:

```shell
python download_eagle.py
```

### Reinforcement Learning Training

Before running, set the required paths at the top of the respective script.

For `no_spec_rl_run.sh`:

| Variable | Description |
| --- | --- |
| MODEL_PATH | Path to Qwen3-1.7B target model weights |
| DATA_PATH | Path to RL training dataset |
| ASCEND_HOME_TOOLKIT | Path to CANN toolkit (e.g. /path/to/CANN_8.5.0/) |

For `spec_rl_run.sh`:

| Variable | Description |
| --- | --- |
| MODEL_PATH | Path to Qwen3-1.7B target model weights |
| DRAFT_MODEL_PATH | Path to EAGLE3 draft model weights |
| DATA_PATH | Path to RL training dataset |
| ASCEND_HOME_TOOLKIT | Path to CANN toolkit (e.g. /path/to/CANN_8.5.0/) |

Run baseline RL training without speculative decoding:

```shell
bash no_spec_rl_run.sh
```

Run RL training with EAGLE3 speculative decoding:

```shell
bash spec_rl_run.sh
```

### Process Training Logs

Once training is complete, collect all logs associated with an experiment into a single folder, then run:

```shell
python process_all_the_logs_sprl.py path/to/logs/ -o path/to/output/combined_metrics.csv
```

Run this for both the SD and baseline runs to generate CSVs for comparison.

### Training Results

Suffix and EAGLE3 speculative decoding achieve up to a 38% improvement in end-to-end throughput and a 25% reduction in training step time with no loss of accuracy compared to the baseline.

Since the EAGLE3 drafter is frozen during RL training, the draft acceptance rate gradually decreases as the actor policy drifts away from the drafter's training distribution.

### Future Work for Speculative Decoding

- Ngram speculative decoding fixes: fix a bug in Ngram speculative decoding.
- Block verification: enable block verification in the rejection sampling module of speculative decoding.
- Online drafter training: investigate co-training the EAGLE3 drafter alongside the actor during RL to counteract the acceptance-rate decay caused by policy drift.
- Elastic speculation: explore adaptively adjusting speculative decoding parameters (e.g. the number of speculation tokens) during RL training.
- SD recipe evolution: as SD-specific features (block verification, online drafter training, elastic speculation, online MTP) mature, we will revisit whether a dedicated directory for the SD recipe is warranted.

Disclosure: parts of this article were produced with AI assistance (AIGC); for reference only.
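For reference, the per-step speculative decoding metrics this recipe collects (draft token count, accepted token count, draft acceptance rate, mean acceptance length) are related by simple ratios. A minimal sketch with illustrative numbers — the exact definitions used by the repo's metrics code are an assumption here; the convention below counts the one token the target model emits at each verify step toward acceptance length:

```python
def sd_step_metrics(drafted: list[int], accepted: list[int]) -> dict:
    """Aggregate per-verify-step speculative decoding counts into step metrics.

    drafted[i]  -- draft tokens proposed at verify step i
    accepted[i] -- draft tokens accepted at verify step i
    The definitions are illustrative assumptions, not the repo's exact ones.
    """
    n_steps = len(drafted)
    total_drafted = sum(drafted)
    total_accepted = sum(accepted)
    return {
        "draft_token_count": total_drafted,
        "accepted_token_count": total_accepted,
        "draft_acceptance_rate": total_accepted / total_drafted,
        # each verify step also emits one token from the target model
        "mean_acceptance_length": (total_accepted + n_steps) / n_steps,
    }

m = sd_step_metrics(drafted=[4, 4, 4, 4], accepted=[3, 2, 4, 1])
print(m["draft_acceptance_rate"])   # 10/16 -> 0.625
print(m["mean_acceptance_length"])  # (10 + 4)/4 -> 3.5
```

Under this convention a mean acceptance length of 3.5 means each verify step yields 3.5 tokens on average instead of 1, which is the source of the throughput gain; the acceptance-rate decay noted above shows up directly as this number shrinking over RL steps.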
