
Moore Threads S4000 Domestic (Xinchuang) Compute Card in Practice — PyTorch Translation, Multi-GPU P2P Communication, and MUSA Programming

Introduction

The MTT S4000 is a full-featured compute card built on Moore Threads' Quyuan (曲院) GPU architecture, custom-optimized for training, fine-tuning, and inference of hundred-billion-parameter large language models. Combined with advanced graphics rendering, video encoding/decoding, and ultra-HD 8K HDR display capabilities, it accelerates mixed workloads spanning AI, graphics rendering, multimedia, scientific computing, and physical simulation.

The MTT S4000 fully supports pre-training, fine-tuning, and inference serving of large language models. The MUSA software stack is specifically optimized for distributed computing performance on large clusters and is compatible with mainstream distributed training frameworks, including DeepSpeed, Colossal-AI, and Megatron, supporting stable pre-training of models with hundreds of billions of parameters.

The official specifications are as follows:

Runtime environment

This test was run in an image environment on the AutoDL cloud; the system environment is as follows:

Common commands

Checking GPU status

Run the following command:

mthreads-gmi

to view the current running state of the GPUs.

Viewing detailed GPU information

Run:

musaInfo

to print detailed information about the installed GPUs.

Checking the runtime environment version

Run:

musa_version_query

to print the versions of the installed MUSA environment components.

PyTorch

Translation

According to the official documentation, porting PyTorch code only requires importing the torch_musa extension correctly, then replacing every cuda in the code with musa and every nccl with mccl.
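As a toy illustration (my own sketch, not an official tool), the mechanical rewrite can be expressed as a plain string substitution; a real port should still review each match by hand:

```python
def cuda_to_musa(source: str) -> str:
    """Naive sketch of the cuda->musa / nccl->mccl rewrite described above.

    This blindly replaces substrings and inserts the torch_musa import
    right after the first "import torch"; real code should be reviewed
    match by match.
    """
    out = source.replace("cuda", "musa").replace("nccl", "mccl")
    out = out.replace("import torch\n", "import torch\nimport torch_musa\n", 1)
    return out


snippet = (
    "import torch\n"
    "x = torch.rand(4, device='cuda:0')\n"
    "torch.distributed.init_process_group('nccl')\n"
)
print(cuda_to_musa(snippet))
```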

Hands-on test

I had Doubao randomly generate a PyTorch script for testing all_reduce; after the translation described above it ran correctly. The code is as follows:

import os
import time
import argparse
import torch
import torch_musa
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    # Initialize the MUSA distributed environment
    dist.init_process_group("mccl", rank=rank, world_size=world_size)
    torch.musa.set_device(rank)


def cleanup():
    dist.destroy_process_group()


def run_benchmark(rank, world_size, sizes, num_iters=100, warmup=20):
    setup(rank, world_size)
    for size in sizes:
        # Create a random tensor on the MUSA device
        tensor = torch.rand(size, device=f'musa:{rank}')

        # Warm up
        for _ in range(warmup):
            dist.all_reduce(tensor)
        torch.musa.synchronize()

        # Timed runs
        start_time = time.time()
        for _ in range(num_iters):
            dist.all_reduce(tensor)
        torch.musa.synchronize()
        end_time = time.time()

        # Statistics
        total_time = end_time - start_time
        avg_time = total_time / num_iters
        size_mb = size * 4 / (1024 * 1024)  # float32 is 4 bytes
        bandwidth = (size_mb * world_size) / avg_time  # MB/s

        if rank == 0:
            print(f"Tensor size: {size:,} elements ({size_mb:.2f} MB)")
            print(f"Average time: {avg_time * 1000:.2f} ms")
            print(f"Bandwidth: {bandwidth / 1024:.2f} GB/s")
            print("-" * 50)
    cleanup()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--sizes', type=int, nargs='+',
                        default=[1000, 10000, 100000, 1000000, 10000000, 100000000],
                        metavar='N',
                        help='list of tensor sizes to test')
    parser.add_argument('--num-iters', type=int, default=100,
                        help='iterations per size')
    parser.add_argument('--warmup', type=int, default=20,
                        help='warmup iterations')
    args = parser.parse_args()

    world_size = torch.musa.device_count()
    if world_size != 4:
        raise ValueError("This script needs 4 MUSA GPUs, but found {} GPUs".format(world_size))

    import torch.multiprocessing as mp
    mp.spawn(run_benchmark,
             args=(world_size, args.sizes, args.num_iters, args.warmup),
             nprocs=world_size,
             join=True)


if __name__ == "__main__":
    main()
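For reference, the bandwidth number this script prints is just the following arithmetic (a pure-Python restatement of the formula in the script; note it is the script's own metric, with 4 bytes per element assuming float32, not the nccl-tests busBw convention):

```python
def allreduce_bandwidth_mb_s(num_elems: int, world_size: int, avg_time_s: float) -> float:
    """Bandwidth metric used by the benchmark above: float32 payload in MB,
    multiplied by world_size, divided by average per-iteration time."""
    size_mb = num_elems * 4 / (1024 * 1024)   # float32 is 4 bytes per element
    return size_mb * world_size / avg_time_s


# Example: 100M elements across 4 GPUs at 50 ms per all_reduce
bw = allreduce_bandwidth_mb_s(100_000_000, 4, 0.05)
print(f"{bw / 1024:.2f} GB/s")   # -> 29.80 GB/s
```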

MUSA programming

P2P communication

Code references

I followed the p2pBandwidthLatencyTest code in NVIDIA's cuda-samples repository:

cuda-samples/Samples/5_Domain_Specific/p2pBandwidthLatencyTest at master · NVIDIA/cuda-samples · GitHub

together with the corresponding MUSA event API and muBLAS API references:

https://docs.mthreads.com/musa-sdk/musa-sdk-doc-online/api/mcc_um.zh-CN

and wrote a P2P communication test and verification program for Moore Threads GPUs.

Code

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
#include <musa_runtime.h>  // MUSA runtime header

using namespace std;

const char *sSampleName = "P2P (Peer-to-Peer) GPU Bandwidth Latency Test";

typedef enum {
    P2P_WRITE = 0,
    P2P_READ  = 1,
} P2PDataTransfer;

typedef enum {
    CE = 0,
    SM = 1,
} P2PEngine;

P2PEngine p2p_mechanism = CE;  // use the Copy Engine by default

// Error-checking macro
#define musaCheckError()                                                                     \
    {                                                                                        \
        musaError_t e = musaGetLastError();                                                  \
        if (e != musaSuccess) {                                                              \
            printf("MUSA failure %s:%d: '%s'\n", __FILE__, __LINE__, musaGetErrorString(e)); \
            exit(EXIT_FAILURE);                                                              \
        }                                                                                    \
    }

// Delay kernel
__global__ void delay(volatile int *flag, unsigned long long timeout_clocks = 10000000)
{
    // Wait until the application signals that it has finished queuing the experiment,
    // or time out and exit, allowing the application to proceed
    long long int start_clock, sample_clock;
    start_clock = clock64();
    while (!*flag) {
        sample_clock = clock64();
        if (sample_clock - start_clock > timeout_clocks) {
            break;
        }
    }
}

// P2P copy kernel
__global__ void copyp2p(int4 *__restrict__ dest, const int4 *__restrict__ src, size_t num_elems)
{
    size_t globalId = blockIdx.x * blockDim.x + threadIdx.x;
    size_t gridSize = blockDim.x * gridDim.x;

#pragma unroll 5
    for (size_t i = globalId; i < num_elems; i += gridSize) {
        dest[i] = src[i];
    }
}

// Print help
void printHelp(void)
{
    printf("Usage:  p2pBandwidthLatencyTest [OPTION]...\n");
    printf("Tests bandwidth/latency of GPU pairs using P2P and without P2P\n");
    printf("\n");
    printf("Options:\n");
    printf("--help\t\tDisplay this help menu\n");
    printf("--p2p_read\tUse P2P reads for data transfers between GPU pairs\n");
    printf("--sm_copy\tUse SM initiated p2p transfers instead of Copy Engine\n");
    printf("--numElems=<NUM_OF_INT_ELEMS>  Number of integer elements for p2p copy\n");
}

// Check P2P access
void checkP2Paccess(int numGPUs)
{
    for (int i = 0; i < numGPUs; i++) {
        musaSetDevice(i);
        musaCheckError();
        for (int j = 0; j < numGPUs; j++) {
            if (i != j) {
                int access;
                musaDeviceCanAccessPeer(&access, i, j);
                musaCheckError();
                printf("Device=%d %s Access Peer Device=%d\n", i, access ? "CAN" : "CANNOT", j);
            }
        }
    }
    printf("\n***NOTE: Devices without P2P access fall back to normal memcpy.\n");
}

// Perform a P2P copy
void performP2PCopy(int *dest, int destDevice, int *src, int srcDevice,
                    size_t num_elems, int repeat, bool p2paccess,
                    musaStream_t streamToRun)
{
    int blockSize, numBlocks;
    musaOccupancyMaxPotentialBlockSize(&numBlocks, &blockSize, copyp2p);
    musaCheckError();

    if (p2p_mechanism == SM && p2paccess) {
        for (int r = 0; r < repeat; r++) {
            copyp2p<<<numBlocks, blockSize, 0, streamToRun>>>(
                (int4 *)dest, (int4 *)src, num_elems / 4);
        }
    } else {
        for (int r = 0; r < repeat; r++) {
            musaMemcpyPeerAsync(dest, destDevice, src, srcDevice,
                                sizeof(int) * num_elems, streamToRun);
            musaCheckError();
        }
    }
}

// Output the bandwidth matrix
void outputBandwidthMatrix(int numElems, int numGPUs, bool p2p, P2PDataTransfer p2p_method)
{
    int repeat = 5;
    volatile int *flag = NULL;
    vector<int *> buffers(numGPUs);
    vector<int *> buffersD2D(numGPUs);
    vector<musaEvent_t> start(numGPUs);
    vector<musaEvent_t> stop(numGPUs);
    vector<musaStream_t> stream(numGPUs);

    musaHostAlloc((void **)&flag, sizeof(*flag), musaHostAllocPortable);
    musaCheckError();

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaStreamCreateWithFlags(&stream[d], musaStreamNonBlocking);
        musaMalloc(&buffers[d], numElems * sizeof(int));
        musaMemset(buffers[d], 0, numElems * sizeof(int));
        musaMalloc(&buffersD2D[d], numElems * sizeof(int));
        musaMemset(buffersD2D[d], 0, numElems * sizeof(int));
        musaCheckError();
        musaEventCreate(&start[d]);
        musaCheckError();
        musaEventCreate(&stop[d]);
        musaCheckError();
    }

    vector<double> bandwidthMatrix(numGPUs * numGPUs);

    for (int i = 0; i < numGPUs; i++) {
        musaSetDevice(i);
        for (int j = 0; j < numGPUs; j++) {
            int access = 0;
            if (p2p) {
                musaDeviceCanAccessPeer(&access, i, j);
                if (access) {
                    musaDeviceEnablePeerAccess(j, 0);
                    musaCheckError();
                    musaSetDevice(j);
                    musaDeviceEnablePeerAccess(i, 0);
                    musaCheckError();
                    musaSetDevice(i);
                    musaCheckError();
                }
            }
            musaStreamSynchronize(stream[i]);
            musaCheckError();

            // Block the stream until all work has been queued
            *flag = 0;
            delay<<<1, 1, 0, stream[i]>>>(flag);
            musaCheckError();
            musaEventRecord(start[i], stream[i]);
            musaCheckError();

            if (i == j) {
                performP2PCopy(buffers[i], i, buffersD2D[i], i, numElems, repeat, access, stream[i]);
            } else {
                if (p2p_method == P2P_WRITE) {
                    performP2PCopy(buffers[j], j, buffers[i], i, numElems, repeat, access, stream[i]);
                } else {
                    performP2PCopy(buffers[i], i, buffers[j], j, numElems, repeat, access, stream[i]);
                }
            }

            musaEventRecord(stop[i], stream[i]);
            musaCheckError();

            // Release the queued events
            *flag = 1;
            musaStreamSynchronize(stream[i]);
            musaCheckError();

            float time_ms;
            musaEventElapsedTime(&time_ms, start[i], stop[i]);
            double time_s = time_ms / 1e3;
            double gb = numElems * sizeof(int) * repeat / (double)1e9;
            if (i == j) {
                gb *= 2;  // the intra-GPU copy moves data in both directions
            }
            bandwidthMatrix[i * numGPUs + j] = gb / time_s;

            if (p2p && access) {
                musaDeviceDisablePeerAccess(j);
                musaSetDevice(j);
                musaDeviceDisablePeerAccess(i);
                musaSetDevice(i);
                musaCheckError();
            }
        }
    }

    printf("   D\\D");
    for (int j = 0; j < numGPUs; j++) {
        printf("%6d ", j);
    }
    printf("\n");
    for (int i = 0; i < numGPUs; i++) {
        printf("%6d ", i);
        for (int j = 0; j < numGPUs; j++) {
            printf("%6.02f ", bandwidthMatrix[i * numGPUs + j]);
        }
        printf("\n");
    }

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaFree(buffers[d]);
        musaFree(buffersD2D[d]);
        musaCheckError();
        musaEventDestroy(start[d]);
        musaCheckError();
        musaEventDestroy(stop[d]);
        musaCheckError();
        musaStreamDestroy(stream[d]);
        musaCheckError();
    }
    musaFreeHost((void *)flag);
    musaCheckError();
}

// Output the bidirectional bandwidth matrix
void outputBidirectionalBandwidthMatrix(int numElems, int numGPUs, bool p2p)
{
    int repeat = 5;
    volatile int *flag = NULL;
    vector<int *> buffers(numGPUs);
    vector<int *> buffersD2D(numGPUs);
    vector<musaEvent_t> start(numGPUs);
    vector<musaEvent_t> stop(numGPUs);
    vector<musaStream_t> stream0(numGPUs);
    vector<musaStream_t> stream1(numGPUs);

    musaHostAlloc((void **)&flag, sizeof(*flag), musaHostAllocPortable);
    musaCheckError();

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaMalloc(&buffers[d], numElems * sizeof(int));
        musaMemset(buffers[d], 0, numElems * sizeof(int));
        musaMalloc(&buffersD2D[d], numElems * sizeof(int));
        musaMemset(buffersD2D[d], 0, numElems * sizeof(int));
        musaCheckError();
        musaEventCreate(&start[d]);
        musaCheckError();
        musaEventCreate(&stop[d]);
        musaCheckError();
        musaStreamCreateWithFlags(&stream0[d], musaStreamNonBlocking);
        musaCheckError();
        musaStreamCreateWithFlags(&stream1[d], musaStreamNonBlocking);
        musaCheckError();
    }

    vector<double> bandwidthMatrix(numGPUs * numGPUs);

    for (int i = 0; i < numGPUs; i++) {
        musaSetDevice(i);
        for (int j = 0; j < numGPUs; j++) {
            int access = 0;
            if (p2p) {
                musaDeviceCanAccessPeer(&access, i, j);
                if (access) {
                    musaSetDevice(i);
                    musaDeviceEnablePeerAccess(j, 0);
                    musaCheckError();
                    musaSetDevice(j);
                    musaDeviceEnablePeerAccess(i, 0);
                    musaCheckError();
                }
            }

            musaSetDevice(i);
            musaStreamSynchronize(stream0[i]);
            musaStreamSynchronize(stream1[j]);
            musaCheckError();

            // Block the stream until all work has been queued
            *flag = 0;
            musaSetDevice(i);
            // No need to block stream1: it will block on the event recorded in stream0
            delay<<<1, 1, 0, stream0[i]>>>(flag);
            musaCheckError();

            // Force stream1 not to start before stream0, so that the events on stream0
            // fully cover the time needed for all operations
            musaEventRecord(start[i], stream0[i]);
            musaStreamWaitEvent(stream1[j], start[i], 0);

            if (i == j) {
                // For the intra-GPU case, perform 2 memcopies buffersD2D <-> buffers
                performP2PCopy(buffers[i], i, buffersD2D[i], i, numElems, repeat, access, stream0[i]);
                performP2PCopy(buffersD2D[i], i, buffers[i], i, numElems, repeat, access, stream1[i]);
            } else {
                if (access && p2p_mechanism == SM) {
                    musaSetDevice(j);
                }
                performP2PCopy(buffers[i], i, buffers[j], j, numElems, repeat, access, stream1[j]);
                if (access && p2p_mechanism == SM) {
                    musaSetDevice(i);
                }
                performP2PCopy(buffers[j], j, buffers[i], i, numElems, repeat, access, stream0[i]);
            }

            // Notify stream0 that stream1 is done and record the time of the total transaction
            musaEventRecord(stop[j], stream1[j]);
            musaStreamWaitEvent(stream0[i], stop[j], 0);
            musaEventRecord(stop[i], stream0[i]);

            // Release the queued operations
            *flag = 1;
            musaStreamSynchronize(stream0[i]);
            musaStreamSynchronize(stream1[j]);
            musaCheckError();

            float time_ms;
            musaEventElapsedTime(&time_ms, start[i], stop[i]);
            double time_s = time_ms / 1e3;
            double gb = 2.0 * numElems * sizeof(int) * repeat / (double)1e9;
            if (i == j) {
                gb *= 2;
            }
            bandwidthMatrix[i * numGPUs + j] = gb / time_s;

            if (p2p && access) {
                musaSetDevice(i);
                musaDeviceDisablePeerAccess(j);
                musaSetDevice(j);
                musaDeviceDisablePeerAccess(i);
            }
        }
    }

    printf("   D\\D");
    for (int j = 0; j < numGPUs; j++) {
        printf("%6d ", j);
    }
    printf("\n");
    for (int i = 0; i < numGPUs; i++) {
        printf("%6d ", i);
        for (int j = 0; j < numGPUs; j++) {
            printf("%6.02f ", bandwidthMatrix[i * numGPUs + j]);
        }
        printf("\n");
    }

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaFree(buffers[d]);
        musaFree(buffersD2D[d]);
        musaCheckError();
        musaEventDestroy(start[d]);
        musaCheckError();
        musaEventDestroy(stop[d]);
        musaCheckError();
        musaStreamDestroy(stream0[d]);
        musaCheckError();
        musaStreamDestroy(stream1[d]);
        musaCheckError();
    }
    musaFreeHost((void *)flag);
    musaCheckError();
}

// Output the latency matrix
void outputLatencyMatrix(int numGPUs, bool p2p, P2PDataTransfer p2p_method)
{
    int repeat = 100;
    int numElems = 4;  // performs 1 int4 transfer
    volatile int *flag = NULL;
    vector<int *> buffers(numGPUs);
    vector<int *> buffersD2D(numGPUs);  // buffers for D2D (intra-GPU) copies
    vector<musaStream_t> stream(numGPUs);
    vector<musaEvent_t> start(numGPUs);
    vector<musaEvent_t> stop(numGPUs);

    musaHostAlloc((void **)&flag, sizeof(*flag), musaHostAllocPortable);
    musaCheckError();

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaStreamCreateWithFlags(&stream[d], musaStreamNonBlocking);
        musaMalloc(&buffers[d], sizeof(int) * numElems);
        musaMemset(buffers[d], 0, sizeof(int) * numElems);
        musaMalloc(&buffersD2D[d], sizeof(int) * numElems);
        musaMemset(buffersD2D[d], 0, sizeof(int) * numElems);
        musaCheckError();
        musaEventCreate(&start[d]);
        musaCheckError();
        musaEventCreate(&stop[d]);
        musaCheckError();
    }

    vector<double> gpuLatencyMatrix(numGPUs * numGPUs);
    vector<double> cpuLatencyMatrix(numGPUs * numGPUs);

    for (int i = 0; i < numGPUs; i++) {
        musaSetDevice(i);
        for (int j = 0; j < numGPUs; j++) {
            int access = 0;
            if (p2p) {
                musaDeviceCanAccessPeer(&access, i, j);
                if (access) {
                    musaDeviceEnablePeerAccess(j, 0);
                    musaCheckError();
                    musaSetDevice(j);
                    musaDeviceEnablePeerAccess(i, 0);
                    musaSetDevice(i);
                    musaCheckError();
                }
            }
            musaStreamSynchronize(stream[i]);
            musaCheckError();

            // Block the stream until all work has been queued
            *flag = 0;
            delay<<<1, 1, 0, stream[i]>>>(flag);
            musaCheckError();
            musaEventRecord(start[i], stream[i]);

            if (i == j) {
                // Perform an intra-GPU D2D copy
                performP2PCopy(buffers[i], i, buffersD2D[i], i, numElems, repeat, access, stream[i]);
            } else {
                if (p2p_method == P2P_WRITE) {
                    performP2PCopy(buffers[j], j, buffers[i], i, numElems, repeat, access, stream[i]);
                } else {
                    performP2PCopy(buffers[i], i, buffers[j], j, numElems, repeat, access, stream[i]);
                }
            }
            musaEventRecord(stop[i], stream[i]);

            // Now that the work is queued, release the stream
            *flag = 1;
            musaStreamSynchronize(stream[i]);
            musaCheckError();

            float gpu_time_ms;
            musaEventElapsedTime(&gpu_time_ms, start[i], stop[i]);
            gpuLatencyMatrix[i * numGPUs + j] = gpu_time_ms * 1e3 / repeat;

            if (p2p && access) {
                musaDeviceDisablePeerAccess(j);
                musaSetDevice(j);
                musaDeviceDisablePeerAccess(i);
                musaSetDevice(i);
                musaCheckError();
            }
        }
    }

    printf("   GPU");
    for (int j = 0; j < numGPUs; j++) {
        printf("%6d ", j);
    }
    printf("\n");
    for (int i = 0; i < numGPUs; i++) {
        printf("%6d ", i);
        for (int j = 0; j < numGPUs; j++) {
            printf("%6.02f ", gpuLatencyMatrix[i * numGPUs + j]);
        }
        printf("\n");
    }

    for (int d = 0; d < numGPUs; d++) {
        musaSetDevice(d);
        musaFree(buffers[d]);
        musaFree(buffersD2D[d]);
        musaCheckError();
        musaEventDestroy(start[d]);
        musaCheckError();
        musaEventDestroy(stop[d]);
        musaCheckError();
        musaStreamDestroy(stream[d]);
        musaCheckError();
    }
    musaFreeHost((void *)flag);
    musaCheckError();
}

// Main function
int main(int argc, char **argv)
{
    int numGPUs, numElems = 40000000;
    P2PDataTransfer p2p_method = P2P_WRITE;

    musaGetDeviceCount(&numGPUs);
    musaCheckError();

    // Parse command-line arguments
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--help") == 0) {
            printHelp();
            return 0;
        } else if (strcmp(argv[i], "--p2p_read") == 0) {
            p2p_method = P2P_READ;
        } else if (strcmp(argv[i], "--sm_copy") == 0) {
            p2p_mechanism = SM;
        } else if (strncmp(argv[i], "--numElems=", 11) == 0) {
            numElems = atoi(argv[i] + 11);
        }
    }

    printf("[%s]\n", sSampleName);

    // Print device information
    for (int i = 0; i < numGPUs; i++) {
        musaDeviceProp prop;
        musaGetDeviceProperties(&prop, i);
        printf("Device: %d, %s, pciBusID: %x, pciDeviceID: %x, pciDomainID:%x\n",
               i, prop.name, prop.pciBusID, prop.pciDeviceID, prop.pciDomainID);
    }

    checkP2Paccess(numGPUs);

    // Print the P2P connectivity matrix
    printf("P2P Connectivity Matrix\n");
    printf("     D\\D");
    for (int j = 0; j < numGPUs; j++) {
        printf("%6d", j);
    }
    printf("\n");
    for (int i = 0; i < numGPUs; i++) {
        printf("%6d\t", i);
        for (int j = 0; j < numGPUs; j++) {
            if (i != j) {
                int access;
                musaDeviceCanAccessPeer(&access, i, j);
                printf("%6d", (access) ? 1 : 0);
            } else {
                printf("%6d", 1);
            }
        }
        printf("\n");
    }

    // Run the tests
    printf("Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)\n");
    outputBandwidthMatrix(numElems, numGPUs, false, P2P_WRITE);
    printf("Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)\n");
    outputBandwidthMatrix(numElems, numGPUs, true, P2P_WRITE);
    if (p2p_method == P2P_READ) {
        printf("Unidirectional P2P=Enabled Bandwidth (P2P Reads) Matrix (GB/s)\n");
        outputBandwidthMatrix(numElems, numGPUs, true, p2p_method);
    }
    printf("Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)\n");
    outputBidirectionalBandwidthMatrix(numElems, numGPUs, false);
    printf("Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)\n");
    outputBidirectionalBandwidthMatrix(numElems, numGPUs, true);
    printf("P2P=Disabled Latency Matrix (us)\n");
    outputLatencyMatrix(numGPUs, false, P2P_WRITE);
    printf("P2P=Enabled Latency (P2P Writes) Matrix (us)\n");
    outputLatencyMatrix(numGPUs, true, P2P_WRITE);
    if (p2p_method == P2P_READ) {
        printf("P2P=Enabled Latency (P2P Reads) Matrix (us)\n");
        outputLatencyMatrix(numGPUs, true, p2p_method);
    }

    printf("\nNOTE: Results may vary when GPU Boost is enabled.\n");
    return 0;
}

Compilation

Per the mcc compiler manual, the code links against the musa_runtime library, so the -l flag is followed by musart:

mcc p2p.mu -o p2p -lmusart

Results

P2P is enabled correctly, but the latency test has problems; to be improved later.

AllReduce test based on MUSA programming

Code references

The logic mainly follows the allreduce part of nccl-tests:

GitHub - NVIDIA/nccl-tests: NCCL Tests

and consults the muBLAS API design:
https://docs.mthreads.com/musa-sdk/musa-sdk-doc-online/api/mublas_api
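For context, nccl-tests converts the measured algorithm bandwidth into a "bus bandwidth" using the ring all-reduce factor 2(n-1)/n, and the calculate_bandwidth function in the code below applies the same convention. A minimal restatement:

```python
def bus_bandwidth(alg_bw_gb_s: float, nranks: int) -> float:
    """nccl-tests convention for all-reduce: busBw = algBw * 2*(n-1)/n."""
    factor = 2.0 * (nranks - 1) / nranks if nranks > 1 else 1.0
    return alg_bw_gb_s * factor


print(bus_bandwidth(10.0, 4))  # -> 15.0
```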

Code

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <inttypes.h>  // required for PRIu64
#include "musa_runtime.h"
#include "mccl.h"

// Macro definitions (all test parameters declared here)
#define MIN_SIZE_B       16ULL                       // smallest test size (16 bytes)
#define MAX_SIZE_B  (4096ULL * 1024ULL * 1024ULL)    // largest test size (4096 MB)
#define STEP_FACTOR      2ULL                        // size growth factor (doubles each step)
#define WARMUP_ITERS       5                         // warmup iterations
#define TEST_ITERS        20                         // timed iterations
#define ROOT_RANK        -1                          // root rank (-1 means all-reduce)
#define DATA_TYPE         mcclFloat                  // data type
#define REDUCTION_OP      mcclSum                    // reduction operation
#define FLOAT_SIZE        sizeof(float)              // bytes per float (4)

// Error-checking macros
#define MUSACHECK(cmd) do { \
    musaError_t err = cmd; \
    if (err != musaSuccess) { \
        printf("MUSA Error at %s:%d: %s\n", __FILE__, __LINE__, musaGetErrorString(err)); \
        exit(EXIT_FAILURE); \
    } \
} while(0)

#define MCCLCHECK(cmd) do { \
    mcclResult_t res = cmd; \
    if (res != mcclSuccess) { \
        printf("MCCL Error at %s:%d: %s\n", __FILE__, __LINE__, mcclGetErrorString(res)); \
        exit(EXIT_FAILURE); \
    } \
} while(0)

// Bandwidth calculation
void calculate_bandwidth(size_t count, int type_size, double time_sec,
                         double *alg_bw, double *bus_bw, int nranks)
{
    if (time_sec <= 0 || count == 0) {
        *alg_bw = 0.0;
        *bus_bw = 0.0;
        return;
    }
    double data_size_gb = (double)(count * type_size) / 1e9;
    *alg_bw = data_size_gb / time_sec;
    // all-reduce bus bandwidth factor: 2*(n-1)/n
    double factor = (nranks > 1) ? (2.0 * (nranks - 1)) / nranks : 1.0;
    *bus_bw = *alg_bw * factor;
}

int main(int argc, char *argv[])
{
    int nDev = 4;                        // number of devices
    int devs[4] = {0, 1, 2, 3};          // device ID list
    mcclComm_t comms[4];                 // MCCL communicators
    musaStream_t streams[4];             // per-device streams
    float **sendbuff = NULL;             // send buffers
    float **recvbuff = NULL;             // receive buffers
    size_t current_size_b = MIN_SIZE_B;  // current test size in bytes
    double alg_bw, bus_bw;               // algorithm and bus bandwidth
    int test_wrong = 0;                  // error count

    // Initialize the MCCL communicators
    MCCLCHECK(mcclCommInitAll(comms, nDev, devs));

    // Allocate device memory and create streams
    sendbuff = (float **)malloc(nDev * sizeof(float *));
    recvbuff = (float **)malloc(nDev * sizeof(float *));
    for (int i = 0; i < nDev; ++i) {
        MUSACHECK(musaSetDevice(i));
        MUSACHECK(musaMalloc(&sendbuff[i], MAX_SIZE_B));  // allocate the maximum size once
        MUSACHECK(musaMalloc(&recvbuff[i], MAX_SIZE_B));
        MUSACHECK(musaStreamCreate(&streams[i]));         // independent stream per device
    }

    // Print the result table header
    printf("| %10s | %10s | %5s | %4s | %14s | %13s | %13s | %13s | %5s |\n",
           "size (B)", "count", "type", "root", "warmup_time (us)", "test_time (us)",
           "alg_bw (GB/s)", "bus_bw (GB/s)", "#wrong");
    printf("|------------|------------|-------|------|------------------|----------------|---------------|---------------|--------|\n");

    // Loop over sizes
    while (current_size_b <= MAX_SIZE_B) {
        size_t element_count = current_size_b / FLOAT_SIZE;  // number of elements

        // Skip unaligned sizes
        if (current_size_b % FLOAT_SIZE != 0) {
            current_size_b *= STEP_FACTOR;
            continue;
        }

        // Initialize device data (set to 1.0f via a host buffer)
        for (int i = 0; i < nDev; ++i) {
            MUSACHECK(musaSetDevice(i));
            float *host_buf = (float *)malloc(current_size_b);
            for (size_t j = 0; j < element_count; ++j) host_buf[j] = 1.0f;
            MUSACHECK(musaMemcpy(sendbuff[i], host_buf, current_size_b, musaMemcpyHostToDevice));
            free(host_buf);
            MUSACHECK(musaMemset(recvbuff[i], 0, current_size_b));
        }

        // Warmup iterations (with stream synchronization)
        for (int warmup = 0; warmup < WARMUP_ITERS; ++warmup) {
            MCCLCHECK(mcclGroupStart());
            for (int i = 0; i < nDev; ++i) {
                MCCLCHECK(mcclAllReduce(sendbuff[i], recvbuff[i], element_count,
                                        DATA_TYPE, REDUCTION_OP, comms[i], streams[i]));
            }
            MCCLCHECK(mcclGroupEnd());
            for (int i = 0; i < nDev; ++i) {
                MUSACHECK(musaSetDevice(i));
                MUSACHECK(musaStreamSynchronize(streams[i]));
            }
        }

        // Event timing (on device 0 only)
        musaEvent_t start, stop;
        MUSACHECK(musaSetDevice(0));
        MUSACHECK(musaEventCreate(&start));
        MUSACHECK(musaEventCreate(&stop));
        MUSACHECK(musaEventRecord(start, streams[0]));

        // Timed iterations (complete group operations)
        MCCLCHECK(mcclGroupStart());
        for (int iter = 0; iter < TEST_ITERS; ++iter) {
            for (int i = 0; i < nDev; ++i) {
                MUSACHECK(musaSetDevice(i));
                MCCLCHECK(mcclAllReduce(sendbuff[i], recvbuff[i], element_count,
                                        DATA_TYPE, REDUCTION_OP, comms[i], streams[i]));
            }
        }
        MCCLCHECK(mcclGroupEnd());
        MUSACHECK(musaEventRecord(stop, streams[0]));
        MUSACHECK(musaEventSynchronize(stop));

        // Average time
        float total_time_ms;
        MUSACHECK(musaEventElapsedTime(&total_time_ms, start, stop));
        double avg_time_us = (total_time_ms / TEST_ITERS) * 1000;

        // Bandwidth
        calculate_bandwidth(element_count, FLOAT_SIZE, avg_time_us / 1e6, &alg_bw, &bus_bw, nDev);

        // Verify results (allowing for floating-point error)
        test_wrong = 0;
        float expected = (float)nDev;
        for (int i = 0; i < nDev; ++i) {
            MUSACHECK(musaSetDevice(i));
            float *h_recv = (float *)malloc(current_size_b);
            MUSACHECK(musaMemcpy(h_recv, recvbuff[i], current_size_b, musaMemcpyDeviceToHost));
            for (size_t j = 0; j < element_count; ++j) {
                if (fabs(h_recv[j] - expected) > 1e-6) test_wrong++;
            }
            free(h_recv);
        }

        // Print the result row
        printf("| %10" PRIu64 " | %10" PRIu64 " | %5s | %4d | %16.3f | %14.3f | %13.3f | %13.3f | %6d |\n",
               (uint64_t)current_size_b, (uint64_t)element_count, "float", ROOT_RANK,
               0.0, avg_time_us, alg_bw, bus_bw, test_wrong);

        // Destroy the events
        MUSACHECK(musaSetDevice(0));
        MUSACHECK(musaEventDestroy(start));
        MUSACHECK(musaEventDestroy(stop));

        // Grow the test size
        current_size_b *= STEP_FACTOR;
    }

    // Release resources
    for (int i = 0; i < nDev; ++i) {
        MUSACHECK(musaSetDevice(i));
        MUSACHECK(musaFree(sendbuff[i]));
        MUSACHECK(musaFree(recvbuff[i]));
        MUSACHECK(musaStreamDestroy(streams[i]));
        mcclCommDestroy(comms[i]);
    }
    free(sendbuff);
    free(recvbuff);

    printf("AllReduce Test Completed Successfully\n");
    return 0;
}

Compilation

Because the code uses both the musa_runtime and mccl libraries, the compile options change accordingly:

mcc allreduce.mu -o allreduce -lmusart -lmccl

Results

For reasons not yet clear, the measured results differ noticeably from the PyTorch numbers. My guess is that the musa event timing calls are not being used correctly (the p2p test shows the same symptom in its self-copy entries, which read about 50 us regardless of size); this needs further investigation.
