
How Spark Manages and Updates Hadoop Tokens

Hadoop token management

  • The AM logs in to the KDC via Kerberos (principal and keytab; see the sketch after this list)
  • The AM obtains YARN and HDFS delegation tokens
  • The AM sends the tokens to the containers
  • The containers load the tokens
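For context, here is a minimal sketch of how this flow is usually enabled from the application side. This is not Spark's internal code, just an illustration: the configuration keys are the Spark 3.x names (older releases used spark.yarn.principal / spark.yarn.keytab), the principal and keytab path are the ones appearing in the logs below, and in practice they are normally passed to spark-submit as --principal / --keytab when launching on YARN in cluster mode.

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    object TokenFlowDemo {
      def main(args: Array[String]): Unit = {
        // Supplying a principal and keytab is what turns on Spark's delegation-token
        // renewal; without them tokens are obtained once and eventually expire.
        val conf = new SparkConf()
          .setAppName("token-flow-demo")
          .set("spark.kerberos.principal", "hadoop_user@PROD.COM")
          .set("spark.kerberos.keytab", "/home/hadoop_user/hadoop_user.keytab")

        // Master / deploy mode are expected to come from spark-submit (--master yarn).
        val spark = SparkSession.builder().config(conf).getOrCreate()
        spark.stop()
      }
    }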

Enable debug messages

log4j.logger.org.apache.hadoop.security=DEBUG

The AM generates tokens

Logs:

23/09/07 22:38:50,375 INFO [main] security.HadoopDelegationTokenManager:57 : Attempting to login to KDC using principal: hadoop_user@PROD.COM, keytab: /home/hadoop_user/hadoop_user.keytab
23/09/07 22:38:50,381 DEBUG [main] security.UserGroupInformation:246 : Hadoop login
23/09/07 22:38:50,381 DEBUG [main] security.UserGroupInformation:192 : hadoop login commit
23/09/07 22:38:50,382 DEBUG [main] security.UserGroupInformation:200 : Using kerberos user: hadoop_user@PROD.COM
23/09/07 22:38:50,382 DEBUG [main] security.UserGroupInformation:218 : Using user: "hadoop_user@PROD.COM" with name: hadoop_user@PROD.COM
23/09/07 22:38:50,382 DEBUG [main] security.UserGroupInformation:230 : User entry: "hadoop_user@PROD.COM"
23/09/07 22:38:50,382 INFO [main] security.HadoopDelegationTokenManager:57 : Successfully logged into KDC.
23/09/07 22:38:51,247 INFO [main] security.HadoopFSDelegationTokenProvider:57 : getting token for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-113291108_1, ugi=hadoop_user@PROD.COM (auth:KERBEROS)]] with renewer rm/hadoop-rm-1.vip.hadoop.COM@PROD.COM
23/09/07 22:38:51,391 DEBUG [main] security.SaslRpcClient:493 : Sending sasl message state: NEGOTIATE
23/09/07 22:38:51,398 DEBUG [main] security.SaslRpcClient:288 : Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
23/09/07 22:38:51,399 DEBUG [main] security.SaslRpcClient:241 : tokens aren't supported for this protocol or user doesn't have one
23/09/07 22:38:51,399 DEBUG [main] security.SaslRpcClient:313 : Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.KerberosInfo(clientPrincipal=, serverPrincipal=dfs.namenode.kerberos.principal)
23/09/07 22:38:51,420 DEBUG [main] security.SaslRpcClient:260 : RPC Server's Kerberos principal name for protocol=org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB is nn/hadoop-nn-2.vip.hadoop.COM@PROD.COM
23/09/07 22:38:51,421 DEBUG [main] security.SaslRpcClient:271 : Creating SASL GSSAPI(KERBEROS)  client to authenticate to service at hadoop-nn-2.vip.hadoop.COM
23/09/07 22:38:51,425 DEBUG [main] security.SaslRpcClient:194 : Use KERBEROS authentication for protocol ClientNamenodeProtocolPB
23/09/07 22:38:51,441 DEBUG [main] security.SaslRpcClient:493 : Sending sasl message state: INITIATE
23/09/07 22:38:51,506 INFO [main] security.HadoopFSDelegationTokenProvider:57 : getting token for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-113291108_1, ugi=hadoop_user@PROD.COM (auth:KERBEROS)]] with renewer hadoop_user@PROD.COM
23/09/07 22:38:52,807 INFO [main] security.HadoopDelegationTokenManager:57 : Scheduling renewal in 18.0 h.
23/09/07 22:38:52,809 INFO [main] security.HadoopDelegationTokenManager:57 : Updating delegation tokens.
23/09/07 22:38:52,833 INFO [main] deploy.SparkHadoopUtil:57 : Updating delegation tokens for current user.
23/09/07 22:38:52,858 INFO [dispatcher-CoarseGrainedScheduler] deploy.SparkHadoopUtil:57 : Updating delegation tokens for current user.
23/09/07 22:38:53,119 DEBUG [main] security.SaslRpcClient:288 : Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@48f2054d
23/09/07 22:38:53,120 DEBUG [main] security.SaslRpcClient:241 : tokens aren't supported for this protocol or user doesn't have one
23/09/07 22:38:53,121 DEBUG [main] security.SaslRpcClient:313 : Get kerberos info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$1@6ce26986
23/09/07 22:38:53,124 DEBUG [main] security.SaslRpcClient:343 : getting serverKey: yarn.resourcemanager.principal conf value: rm/_HOST@PROD.COM principal: rm/hadoop-rm-1.hadoop-rm-rm.hm-prod.svc.35.tess.io@PROD.COM
23/09/07 22:38:53,124 DEBUG [main] security.SaslRpcClient:260 : RPC Server's Kerberos principal name for protocol=org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is rm/hadoop-rm-1.hadoop-rm-rm.hm-prod.svc.35.tess.io@PROD.COM
23/09/07 22:38:53,124 DEBUG [main] security.SaslRpcClient:271 : Creating SASL GSSAPI(KERBEROS)  client to authenticate to service at hadoop-rm-1.hadoop-rm-rm.hm-prod.svc.35.tess.io
23/09/07 22:38:53,125 DEBUG [main] security.SaslRpcClient:194 : Use KERBEROS authentication for protocol ApplicationClientProtocolPB
23/09/07 22:38:53,131 DEBUG [main] security.SaslRpcClient:493 : Sending sasl message state: INITIATE
23/09/07 22:38:53,182 DEBUG [main] token.Token:260 : Cloned private token Kind: HDFS_DELEGATION_TOKEN, Service: hadoop-nn-1.vip.hadoop.COM:8020, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139) from Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hadoop, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139)
23/09/07 22:38:53,182 DEBUG [main] token.Token:260 : Cloned private token Kind: HDFS_DELEGATION_TOKEN, Service: hadoop-nn-2.vip.hadoop.COM:8020, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139) from Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hadoop, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139)
23/09/07 22:38:53,182 DEBUG [main] token.Token:260 : Cloned private token Kind: HDFS_DELEGATION_TOKEN, Service: hadoop-nn-3.vip.hadoop.COM:8020, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139) from Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hadoop, Ident: (token for hadoop_user: HDFS_DELEGATION_TOKEN owner=hadoop_user@PROD.COM, renewer=yarn, realUser=, issueDate=1694151531461, maxDate=1694756331461, sequenceNumber=1007863, masterKeyId=7139)
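The log lines above boil down to: log in to the KDC from the keytab, then ask the NameNode for HDFS delegation tokens with the YARN ResourceManager principal as renewer. The "Cloned private token" lines at the end are the HDFS client copying the single logical ha-hdfs:hadoop token into one private token per physical NameNode address. Below is a standalone sketch of the token-fetching step using plain Hadoop client APIs, with the principal, keytab, and renewer copied from the logs; it is an illustration, not the actual HadoopFSDelegationTokenProvider code.

    import java.security.PrivilegedExceptionAction

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.security.{Credentials, UserGroupInformation}

    object FetchHdfsTokens {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        // Mirrors "Attempting to login to KDC using principal ..." above.
        val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
          "hadoop_user@PROD.COM", "/home/hadoop_user/hadoop_user.keytab")

        val creds = new Credentials()
        ugi.doAs(new PrivilegedExceptionAction[Unit] {
          override def run(): Unit = {
            val fs = FileSystem.get(conf)
            // Requests an HDFS_DELEGATION_TOKEN and stores it in creds; the renewer is
            // the RM principal so that YARN can keep renewing it while the app runs.
            fs.addDelegationTokens("rm/hadoop-rm-1.vip.hadoop.COM@PROD.COM", creds)
          }
        })

        creds.getAllTokens.forEach(t => println(s"${t.getKind} -> ${t.getService}"))
      }
    }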

CoarseGrainedSchedulerBackend

Starting the token manager

    override def start(): Unit = {
      if (UserGroupInformation.isSecurityEnabled()) {
        delegationTokenManager = createTokenManager()
        delegationTokenManager.foreach { dtm =>
          val ugi = UserGroupInformation.getCurrentUser()
          val tokens = if (dtm.renewalEnabled) {
            dtm.start()
          } else {
            val creds = ugi.getCredentials()
            dtm.obtainDelegationTokens(creds)
            if (creds.numberOfTokens() > 0 || creds.numberOfSecretKeys() > 0) {
              SparkHadoopUtil.get.serialize(creds)
            } else {
              null
            }
          }
          if (tokens != null) {
            updateDelegationTokens(tokens)
          }
        }
      }
    }
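The two branches matter: with renewal enabled (principal and keytab configured), dtm.start() obtains the initial tokens and schedules periodic re-obtainment; otherwise the tokens are fetched exactly once here, serialized, and pushed out, after which they will eventually expire. That is why long-running jobs on a secure cluster need the keytab-based configuration shown earlier.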

HadoopDelegationTokenManager

Periodically refreshing tokens

    def start(): Array[Byte] = {
      require(renewalEnabled, "Token renewal must be enabled to start the renewer.")
      require(schedulerRef != null, "Token renewal requires a scheduler endpoint.")
      renewalExecutor =
        ThreadUtils.newDaemonSingleThreadScheduledExecutor("Credential Renewal Thread")

      val ugi = UserGroupInformation.getCurrentUser()
      if (ugi.isFromKeytab()) {
        // In Hadoop 2.x, renewal of the keytab-based login seems to be automatic, but in Hadoop 3.x,
        // it is configurable (see hadoop.kerberos.keytab.login.autorenewal.enabled, added in
        // HADOOP-9567). This task will make sure that the user stays logged in regardless of that
        // configuration's value. Note that checkTGTAndReloginFromKeytab() is a no-op if the TGT does
        // not need to be renewed yet.
        val tgtRenewalTask = new Runnable() {
          override def run(): Unit = {
            try {
              ugi.checkTGTAndReloginFromKeytab()
            } catch {
              case e: Throwable =>
                logWarning("Failed to renew TGT from keytab file", e)
            }
          }
        }
        val tgtRenewalPeriod = sparkConf.get(KERBEROS_RELOGIN_PERIOD)
        renewalExecutor.scheduleAtFixedRate(tgtRenewalTask, tgtRenewalPeriod, tgtRenewalPeriod,
          TimeUnit.SECONDS)
      }

      updateTokensTask()
    }

    private def updateTokensTask(): Array[Byte] = {
      try {
        val freshUGI = doLogin()
        val creds = obtainTokensAndScheduleRenewal(freshUGI)
        val tokens = SparkHadoopUtil.get.serialize(creds)

        logInfo("Updating delegation tokens.")
        schedulerRef.send(UpdateDelegationTokens(tokens))

        tokens
      } catch {
        case _: InterruptedException =>
          // Ignore, may happen if shutting down.
          null
        case e: Exception =>
          val delay = TimeUnit.SECONDS.toMillis(sparkConf.get(CREDENTIALS_RENEWAL_RETRY_WAIT))
          logWarning(s"Failed to update tokens, will try again in ${UIUtils.formatDuration(delay)}!" +
            " If this happens too often tasks will fail.", e)
          scheduleRenewal(delay)
          null
      }
    }
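Two details are worth noting here: a separate task keeps the Kerberos TGT fresh via checkTGTAndReloginFromKeytab(), and each token refresh performs a fresh login (doLogin()) before obtaining new tokens. The next refresh is scheduled from the tokens' renew interval scaled by spark.security.credentials.renewalRatio (0.75 by default), which is where the "Scheduling renewal in 18.0 h" message in the AM log comes from for a 24-hour HDFS token.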

CoarseGrainedSchedulerBackend

    case UpdateDelegationTokens(newDelegationTokens) =>
      updateDelegationTokens(newDelegationTokens)
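Roughly, updateDelegationTokens() on the driver adds the received credentials to the driver's own UGI through SparkHadoopUtil.get.addDelegationTokens and forwards the same UpdateDelegationTokens message to the registered executor endpoints; that is how the serialized tokens reach the containers described next.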

Containers load tokens at startup

23/09/07 23:41:56,279 DEBUG [main] security.UserGroupInformation:246 : Hadoop login
23/09/07 23:41:56,281 DEBUG [main] security.UserGroupInformation:192 : hadoop login commit
23/09/07 23:41:56,284 DEBUG [main] security.UserGroupInformation:214 : Using local user: UnixPrincipal: hadoop_user
23/09/07 23:41:56,285 DEBUG [main] security.UserGroupInformation:218 : Using user: "UnixPrincipal: hadoop_user" with name: hadoop_user
23/09/07 23:41:56,285 DEBUG [main] security.UserGroupInformation:230 : User entry: "hadoop_user"
23/09/07 23:41:56,285 DEBUG [main] security.UserGroupInformation:741 : Reading credentials from location /hadoop/1/yarn/local/usercache/hadoop_user/appcache/application_1684894519955_69959/container_e2311_1684894519955_69959_01_000021/container_tokens
23/09/07 23:41:56,303 DEBUG [main] security.UserGroupInformation:746 : Loaded 7 tokens from /hadoop/1/yarn/local/usercache/hadoop_user/appcache/application_1684894519955_69959/container_e2311_1684894519955_69959_01_000021/container_tokens
23/09/07 23:41:56,304 DEBUG [main] security.UserGroupInformation:787 : UGI loginUser: hadoop_user (auth:SIMPLE)
23/09/07 23:44:54,825 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:284 : Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
23/09/07 23:44:54,831 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:267 : Creating SASL DIGEST-MD5(TOKEN)  client to authenticate to service at default
23/09/07 23:44:54,833 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:190 : Use TOKEN authentication for protocol ClientNamenodeProtocolPB
23/09/07 23:44:54,836 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:690 : SASL client callback: setting username: ABZiX2Nhcm1lbEBQUk9ELkVCQVkuQ09NBHlhcm4AigGKc4ZbmYoBipeS35mMAUIqhY4Ggg==
23/09/07 23:44:54,836 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:695 : SASL client callback: setting userPassword
23/09/07 23:44:54,836 DEBUG [Executor 1 task launch worker for task 1785, task 1757.0 in stage 8.0 of app application_1684894519955_69959] security.SaslRpcClient:700 : SASL client callback: setting realm: default
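The executor itself never contacts the KDC: its UGI comes up as hadoop_user (auth:SIMPLE), and the delegation tokens are read from the container_tokens file that YARN wrote into the container's working directory, whose path is exported to the process via the HADOOP_TOKEN_FILE_LOCATION environment variable and picked up by UserGroupInformation automatically. A minimal sketch of that loading step with plain Hadoop APIs, for illustration only:

    import java.io.File

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.security.{Credentials, UserGroupInformation}

    object LoadContainerTokens {
      def main(args: Array[String]): Unit = {
        // Inside a YARN container this points at .../container_tokens, as in the log above.
        val tokenFile = sys.env.getOrElse("HADOOP_TOKEN_FILE_LOCATION",
          sys.error("HADOOP_TOKEN_FILE_LOCATION is not set; not running in a YARN container"))

        val creds = Credentials.readTokenStorageFile(new File(tokenFile), new Configuration())
        creds.getAllTokens.forEach(t => println(s"${t.getKind} -> ${t.getService}"))

        // Attach the tokens to the current UGI so HDFS/YARN clients can authenticate with them.
        UserGroupInformation.getCurrentUser.addCredentials(creds)
      }
    }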

The Spark AM and executors apply the received tokens

    case UpdateDelegationTokens(tokenBytes) =>
      logInfo(s"Received tokens of ${tokenBytes.length} bytes")
      SparkHadoopUtil.get.addDelegationTokens(tokenBytes, env.conf)
    ...
      UserGroupInformation.getCurrentUser.addCredentials(creds)
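The elided part amounts to deserializing the Credentials that the AM serialized and attaching them to the current user. A hedged sketch of that step with plain Hadoop APIs, assuming the byte array was produced by Credentials.writeTokenStorageToStream (which is what the serialize call shown earlier ultimately uses); applyReceivedTokens is a hypothetical helper, not Spark's actual SparkHadoopUtil method:

    import java.io.{ByteArrayInputStream, DataInputStream}

    import org.apache.hadoop.security.{Credentials, UserGroupInformation}

    object ApplyReceivedTokens {
      // Hypothetical helper illustrating the deserialize-and-add step.
      def applyReceivedTokens(tokenBytes: Array[Byte]): Unit = {
        val creds = new Credentials()
        // Assumes tokenBytes was written with Credentials.writeTokenStorageToStream.
        creds.readTokenStorageStream(new DataInputStream(new ByteArrayInputStream(tokenBytes)))
        // Makes the fresh tokens visible to all Hadoop clients running as the current user.
        UserGroupInformation.getCurrentUser.addCredentials(creds)
      }
    }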
