
Transformer-Based Chinese Text Classification

Preface

I came across an interesting project on GitHub, Chinese-Text-Classification-Pytorch, which reproduces Transformer-based Chinese text classification in PyTorch.

Chinese dataset

I extracted 200,000 news headlines from THUCNews; each headline is 20 to 30 characters long, and there are 10 categories with 20,000 headlines each. The model takes character-level input and uses pre-trained embeddings (Sogou News Word+Character 300d). Categories: finance, realty, stocks, education, science/tech, society, politics, sports, games, entertainment. The dataset is available in the GitHub repo Chinese-Text-Classification-Pytorch.

Dataset split:

- training set: 180,000
- validation set: 10,000
- test set: 10,000

Code implementation

1. Data preprocessing

First we turn the raw text into tensors the model can consume and split the data into training, validation, and test sets. utils_fasttext.py implements the following (a short usage sketch follows the listing):

- Vocabulary building: build_vocab() counts token frequencies in the training data, keeps tokens appearing at least min_freq (1) times, caps the vocabulary at MAX_VOCAB_SIZE (10000), assigns each token an index, and appends the special UNK (unknown) and PAD (padding) tokens.
- Dataset loading: load_dataset() reads a data file, tokenizes each text, maps tokens to vocabulary indices, and pads (pad_size=32) or truncates every sequence to a fixed length.
- N-gram features: bigram and trigram hash features are computed at every position to give the model some awareness of local token order.
- Dataset splitting: build_dataset() loads the training set (config.train_path), validation set (config.dev_path), and test set (config.test_path), and returns the vocabulary together with the three datasets.
- Batch iteration: the DataSetIterate class serves the data in batches (batch_size), converting each batch to PyTorch tensors (torch.LongTensor) on the target device (config.device).

```python
# utils_fasttext.py
import os
import time
import pickle as pkl
from datetime import timedelta

import numpy as np
import torch
from tqdm import tqdm

MAX_VOCAB_SIZE = 10000       # cap on vocabulary size
UNK, PAD = '<UNK>', '<PAD>'  # special tokens for unknown words and padding


def build_vocab(file_path, tokenizer, max_size, min_freq):
    """Count token frequencies in a file and build a token-to-index map."""
    vocab_dic = {}
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in tqdm(f):
            line = line.strip()
            if not line:
                continue
            content = line.split('\t')[0]
            for word in tokenizer(content):
                vocab_dic[word] = vocab_dic.get(word, 0) + 1
    vocab_list = sorted([item for item in vocab_dic.items() if item[1] >= min_freq],
                        key=lambda x: x[1], reverse=True)[:max_size]
    vocab_dic = {word_count[0]: idx for idx, word_count in enumerate(vocab_list)}
    vocab_dic.update({UNK: len(vocab_dic), PAD: len(vocab_dic) + 1})
    return vocab_dic


def build_dataset(config, use_word):
    if use_word:
        tokenizer = lambda x: x.split(' ')    # word level: tokens separated by spaces
    else:
        tokenizer = lambda x: [y for y in x]  # char level: every character is a token
    if os.path.exists(config.vocab_path):
        vocab = pkl.load(open(config.vocab_path, 'rb'))
    else:
        vocab = build_vocab(config.train_path, tokenizer,
                            max_size=MAX_VOCAB_SIZE, min_freq=1)
        pkl.dump(vocab, open(config.vocab_path, 'wb'))
    print(f'vocab size: {len(vocab)}\n')

    def biGramHash(sequence, t, buckets):
        t1 = sequence[t - 1] if t - 1 >= 0 else 0
        return (t1 * 14918087) % buckets

    def triGramHash(sequence, t, buckets):
        t1 = sequence[t - 1] if t - 1 >= 0 else 0
        t2 = sequence[t - 2] if t - 2 >= 0 else 0
        return (t2 * 14918087 * 18408749 + t1 * 14918087) % buckets

    def load_dataset(path, pad_size=32):
        contents = []
        with open(path, 'r', encoding='UTF-8') as f:
            for line in tqdm(f):
                line = line.strip()
                if not line:
                    continue
                content, label = line.split('\t')
                token = tokenizer(content)
                seq_len = len(token)
                if seq_len < pad_size:
                    token.extend([PAD] * (pad_size - seq_len))  # pad short sequences
                else:
                    token = token[:pad_size]                    # truncate long ones
                    seq_len = pad_size
                words_line = [vocab.get(word, vocab.get(UNK)) for word in token]
                buckets = config.n_gram_vocab  # number of hash buckets
                bigram, trigram = [], []
                for i in range(pad_size):
                    bigram.append(biGramHash(words_line, i, buckets))
                    trigram.append(triGramHash(words_line, i, buckets))
                contents.append((words_line, int(label), seq_len, bigram, trigram))
        return contents

    train = load_dataset(config.train_path, config.pad_size)
    dev = load_dataset(config.dev_path, config.pad_size)
    test = load_dataset(config.test_path, config.pad_size)
    return vocab, train, dev, test


class DataSetIterate(object):
    def __init__(self, batches, batch_size, device):
        self.batches = batches
        self.batch_size = batch_size
        self.device = device
        self.n_batches = len(batches) // batch_size
        self.residue = len(batches) % batch_size != 0  # True if the last batch is smaller
        self.index = 0

    def _to_tensor(self, datas):
        x = torch.LongTensor([_[0] for _ in datas]).to(self.device)
        y = torch.LongTensor([_[1] for _ in datas]).to(self.device)
        seq_len = torch.LongTensor([_[2] for _ in datas]).to(self.device)
        bigram = torch.LongTensor([_[3] for _ in datas]).to(self.device)
        trigram = torch.LongTensor([_[4] for _ in datas]).to(self.device)
        return (x, seq_len, bigram, trigram), y

    def __next__(self):
        # serve the final, smaller batch first if one exists
        if self.residue and self.index == self.n_batches:
            batches = self.batches[self.index * self.batch_size:]
            self.index += 1
            return self._to_tensor(batches)
        elif self.index >= self.n_batches:
            self.index = 0
            raise StopIteration
        else:
            batches = self.batches[self.index * self.batch_size:(self.index + 1) * self.batch_size]
            self.index += 1
            return self._to_tensor(batches)

    def __iter__(self):
        return self

    def __len__(self):
        return self.n_batches + 1 if self.residue else self.n_batches


def build_iterator(dataset, config):
    return DataSetIterate(dataset, config.batch_size, config.device)


def get_time_dif(start_time):
    """Elapsed wall-clock time as a timedelta."""
    end_time = time.time()
    return timedelta(seconds=int(round(end_time - start_time)))
```
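To make the iterator's output concrete, here is a minimal usage sketch. The THUCNews directory layout and the embedding file name are assumptions following the repo; Config is the class defined in the model section below.

```python
# sketch: wiring up the data pipeline and inspecting one batch
# (Config comes from the model file below; the dataset directory 'THUCNews'
#  and embedding file 'embedding_SougouNews.npz' are assumed, per the repo)
from utils_fasttext import build_dataset, build_iterator

config = Config('THUCNews', embedding='embedding_SougouNews.npz')
vocab, train_data, dev_data, test_data = build_dataset(config, use_word=False)  # char-level
config.n_vocab = len(vocab)  # vocabulary size becomes known only now
train_iter = build_iterator(train_data, config)

(x, seq_len, bigram, trigram), y = next(iter(train_iter))
print(x.shape)        # torch.Size([128, 32]) -- batch_size x pad_size token ids
print(seq_len.shape)  # torch.Size([128])     -- true lengths before padding
print(y.shape)        # torch.Size([128])     -- class labels
```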
2. Transformer model architecture

The model file defines the configuration and the encoder stack from "Attention Is All You Need": sinusoidal positional encoding, multi-head scaled dot-product self-attention with residual connections and layer normalization, and a position-wise feed-forward network. The encoder outputs for all positions are flattened and passed to a linear classifier.

```python
# models/Transformer.py
import copy

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class Config(object):
    """Configuration parameters."""
    def __init__(self, dataset, embedding):
        self.model_name = 'Transformer'
        self.train_path = dataset + '/data/train.txt'  # training set
        self.dev_path = dataset + '/data/dev.txt'      # validation set
        self.test_path = dataset + '/data/test.txt'    # test set
        self.class_list = [x.strip() for x in
                           open(dataset + '/data/class.txt', encoding='utf-8').readlines()]  # label names
        self.vocab_path = dataset + '/data/vocab.pkl'  # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'  # best checkpoint
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)['embeddings'].astype('float32')) \
            if embedding != 'random' else None         # pre-trained embeddings
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        self.dropout = 0.5               # dropout rate
        self.require_improvement = 2000  # stop early if val loss has not improved for this many batches
        self.num_classes = len(self.class_list)
        self.n_vocab = 0                 # vocabulary size, set at runtime
        self.num_epochs = 20
        self.batch_size = 128
        self.pad_size = 32               # every sentence is padded/truncated to this length
        self.learning_rate = 5e-4
        self.embed = self.embedding_pretrained.size(1) \
            if self.embedding_pretrained is not None else 300  # embedding dimension
        self.dim_model = 300
        self.hidden = 1024
        self.last_hidden = 512
        self.num_head = 5
        self.num_encoder = 2
        self.n_gram_vocab = 8            # hash buckets for the n-gram features


class Positional_Encoding(nn.Module):
    def __init__(self, embed, pad_size, dropout, device):
        super().__init__()
        self.device = device
        self.pe = torch.tensor([[pos / (10000.0 ** (i // 2 * 2.0 / embed)) for i in range(embed)]
                                for pos in range(pad_size)])
        self.pe[:, 0::2] = np.sin(self.pe[:, 0::2])  # even dimensions
        self.pe[:, 1::2] = np.cos(self.pe[:, 1::2])  # odd dimensions
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        out = x + nn.Parameter(self.pe, requires_grad=False).to(self.device)
        out = self.dropout(out)
        return out


class Scaled_Dot_Product_Attention(nn.Module):
    """Scaled dot-product attention.

    Q: [batch_size, len_Q, dim_Q]
    K: [batch_size, len_K, dim_K]
    V: [batch_size, len_V, dim_V]
    scale: scaling factor, 1/sqrt(dim_K) in the paper
    """
    def forward(self, Q, K, V, scale=None):
        attention = torch.matmul(Q, K.permute(0, 2, 1))
        if scale:
            attention = attention * scale
        attention = F.softmax(attention, dim=-1)
        context = torch.matmul(attention, V)
        return context


class Multi_Head_Attention(nn.Module):
    def __init__(self, dim_model, num_head, dropout=0.0):
        super().__init__()
        self.num_head = num_head
        assert dim_model % num_head == 0
        self.dim_head = dim_model // num_head
        self.fc_Q = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_K = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_V = nn.Linear(dim_model, num_head * self.dim_head)
        self.attention = Scaled_Dot_Product_Attention()
        self.fc = nn.Linear(num_head * self.dim_head, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        batch_size = x.size(0)
        Q = self.fc_Q(x)
        K = self.fc_K(x)
        V = self.fc_V(x)
        # split into heads: [batch_size * num_head, seq_len, dim_head]
        Q = Q.view(batch_size * self.num_head, -1, self.dim_head)
        K = K.view(batch_size * self.num_head, -1, self.dim_head)
        V = V.view(batch_size * self.num_head, -1, self.dim_head)
        scale = K.size(-1) ** -0.5  # 1 / sqrt(dim_head)
        context = self.attention(Q, K, V, scale)
        context = context.view(batch_size, -1, self.dim_head * self.num_head)
        out = self.fc(context)
        out = self.dropout(out)
        out = out + x  # residual connection
        out = self.layer_norm(out)
        return out


class Position_wise_Feed_Forward(nn.Module):
    def __init__(self, dim_model, hidden, dropout=0.0):
        super().__init__()
        self.fc1 = nn.Linear(dim_model, hidden)
        self.fc2 = nn.Linear(hidden, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        out = self.fc1(x)
        out = F.relu(out)
        out = self.fc2(out)
        out = self.dropout(out)
        out = out + x  # residual connection
        out = self.layer_norm(out)
        return out


class Encoder(nn.Module):
    def __init__(self, dim_model, num_head, hidden, dropout):
        super().__init__()
        self.attention = Multi_Head_Attention(dim_model, num_head, dropout)
        self.feed_forward = Position_wise_Feed_Forward(dim_model, hidden, dropout)

    def forward(self, x):
        out = self.attention(x)
        out = self.feed_forward(out)
        return out


class Model(nn.Module):
    def __init__(self, config):
        super().__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed,
                                          padding_idx=config.n_vocab - 1)
        self.position_embedding = Positional_Encoding(config.embed, config.pad_size,
                                                      config.dropout, config.device)
        self.encoder = Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
        self.encoders = nn.ModuleList([copy.deepcopy(self.encoder)
                                       for _ in range(config.num_encoder)])
        self.fc1 = nn.Linear(config.pad_size * config.dim_model, config.num_classes)

    def forward(self, x):
        out = self.embedding(x[0])  # x = (token_ids, seq_len, bigram, trigram)
        out = self.position_embedding(out)
        for encoder in self.encoders:
            out = encoder(out)
        out = out.view(out.size(0), -1)  # flatten all positions
        # out = torch.mean(out, 1)       # alternative: mean-pool over positions
        out = self.fc1(out)
        return out
```
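The Positional_Encoding module implements the sinusoidal encoding from "Attention Is All You Need". Written out, the entries at position pos and paired dimensions 2i and 2i+1 of the d_model-dimensional encoding are:

```latex
PE_{(pos,\,2i)}   = \sin\!\left(\frac{pos}{10000^{2i/d_{model}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{model}}}\right)
```

which is exactly what the `pos / (10000.0 ** (i // 2 * 2.0 / embed))` table followed by the alternating sin/cos assignment computes.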
3. Training and testing code

train() optimizes cross-entropy loss with Adam, evaluates on the validation set every 100 batches, checkpoints the model whenever the validation loss improves, and stops early when it has not improved for config.require_improvement batches. test() reloads the best checkpoint and reports accuracy, a per-class precision/recall/F1 report, and the confusion matrix.

```python
# train_eval.py
import time

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn import metrics
from tensorboardX import SummaryWriter

from utils_fasttext import get_time_dif


def init_network(model, method='xavier', exclude='embedding', seed=123):
    """Initialize all weights except the (pre-trained) embedding layer."""
    for name, w in model.named_parameters():
        if exclude not in name:
            if 'weight' in name:
                if method == 'xavier':
                    nn.init.xavier_normal_(w)
                elif method == 'kaiming':
                    nn.init.kaiming_normal_(w)
                else:
                    nn.init.normal_(w)
            elif 'bias' in name:
                nn.init.constant_(w, 0)


def train(config, model, train_iter, dev_iter, test_iter):
    start_time = time.time()
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    total_batch = 0                # number of batches seen so far
    best_val_loss = float('inf')
    last_improve = 0               # batch at which the val loss last improved
    flag = False                   # whether training stopped early
    writer = SummaryWriter(log_dir=config.log_path + '/' +
                           time.strftime('%m-%d_%H.%M', time.localtime()))
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        for i, (trains, labels) in enumerate(train_iter):
            outputs = model(trains)
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                # report train/val metrics every 100 batches
                true = labels.data.cpu()
                predict = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predict)
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
                if dev_loss < best_val_loss:
                    best_val_loss = dev_loss
                    last_improve = total_batch
                    torch.save(model.state_dict(), config.save_path)
                    improve = '*'
                else:
                    improve = ''
                time_dif = get_time_dif(start_time)
                msg = ('Iter: {0:>6}, Train Loss: {1:>5.2}, Train Acc: {2:>6.2%}, '
                       'Val Loss: {3:>5.2}, Val Acc: {4:>6.2%}, Time: {5} {6}')
                print(msg.format(total_batch, loss.item(), train_acc,
                                 dev_loss, dev_acc, time_dif, improve))
                writer.add_scalar('loss/train', loss.item(), total_batch)
                writer.add_scalar('loss/dev', dev_loss, total_batch)
                writer.add_scalar('acc/train', train_acc, total_batch)
                writer.add_scalar('acc/dev', dev_acc, total_batch)
                model.train()
            total_batch += 1
            if total_batch - last_improve > config.require_improvement:
                # val loss has not dropped for too many batches: stop training
                print('No optimization for a long time, auto-stopping...')
                flag = True
                break
        if flag:
            break
    writer.close()
    test(config, model, test_iter)


def evaluate(config, model, data_iter, test=False):
    model.eval()
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    with torch.no_grad():
        for texts, labels in data_iter:
            outputs = model(texts)
            loss = F.cross_entropy(outputs, labels)
            loss_total += loss
            predict = torch.max(outputs.data, 1)[1].cpu().numpy()
            labels = labels.data.cpu().numpy()
            predict_all = np.append(predict_all, predict)
            labels_all = np.append(labels_all, labels)
    acc = metrics.accuracy_score(labels_all, predict_all)
    if test:
        report = metrics.classification_report(labels_all, predict_all,
                                               target_names=config.class_list, digits=4)
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(data_iter), report, confusion
    return acc, loss_total / len(data_iter)


def test(config, model, data_iter):
    model.load_state_dict(torch.load(config.save_path))
    model.eval()
    start_time = time.time()
    test_acc, test_loss, test_report, test_confusion = evaluate(config, model, data_iter, test=True)
    msg = 'Test Loss: {0:>5.2}, Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print('Precision, Recall and F1-Score...')
    print(test_report)
    print('Confusion Matrix...')
    print(test_confusion)
    time_dif = get_time_dif(start_time)
    print('Time usage:', time_dif)
```
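To make the values evaluate() returns in test mode concrete, here is a self-contained toy run of the sklearn metrics it relies on (the labels and class names here are invented for illustration):

```python
import numpy as np
from sklearn import metrics

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(metrics.accuracy_score(y_true, y_pred))    # 0.666... (4 of 6 correct)
print(metrics.classification_report(y_true, y_pred,
                                    target_names=['finance', 'sports', 'games'], digits=4))
print(metrics.confusion_matrix(y_true, y_pred))  # rows: true class, columns: predicted class
```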
4. Running the code

With the data utilities, model, and training loop in place, training is launched from a small entry script, sketched below.
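This is a minimal sketch of such an entry script. The module names (transformer for the model file), the THUCNews directory, and the embedding file name are assumptions based on this post's layout; the actual run.py in Chinese-Text-Classification-Pytorch may differ.

```python
# run.py -- minimal entry script (a sketch; module and file names are assumed)
import time

import numpy as np
import torch

from utils_fasttext import build_dataset, build_iterator, get_time_dif
from transformer import Config, Model   # hypothetical module name for the model file above
from train_eval import train, init_network

if __name__ == '__main__':
    dataset = 'THUCNews'                    # dataset directory
    embedding = 'embedding_SougouNews.npz'  # assumed pre-trained embedding file
    config = Config(dataset, embedding)

    # fix random seeds for reproducibility
    np.random.seed(1)
    torch.manual_seed(1)
    torch.cuda.manual_seed_all(1)
    torch.backends.cudnn.deterministic = True

    start_time = time.time()
    print('Loading data...')
    vocab, train_data, dev_data, test_data = build_dataset(config, use_word=False)
    config.n_vocab = len(vocab)
    train_iter = build_iterator(train_data, config)
    dev_iter = build_iterator(dev_data, config)
    test_iter = build_iterator(test_data, config)
    print('Time usage:', get_time_dif(start_time))

    model = Model(config).to(config.device)
    init_network(model)
    train(config, model, train_iter, dev_iter, test_iter)
```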
