Deploying a Kafka Cluster on k8s with Helm (KRaft Mode) — 筑梦之路
Add the Helm repositories
helm repo add bitnami "https://helm-charts.itboon.top/bitnami" --force-update
helm repo add grafana "https://helm-charts.itboon.top/grafana" --force-update
helm repo add prometheus-community "https://helm-charts.itboon.top/prometheus-community" --force-update
helm repo add ingress-nginx "https://helm-charts.itboon.top/ingress-nginx" --force-update
helm repo update
Search available Kafka chart versions
# helm search repo bitnami/kafka -l
NAME            CHART VERSION   APP VERSION
bitnami/kafka   31.1.1          3.9.0
bitnami/kafka   31.0.0          3.9.0
bitnami/kafka   30.1.8          3.8.1
bitnami/kafka   30.1.4          3.8.0
bitnami/kafka   30.0.5          3.8.0
bitnami/kafka   29.3.13         3.7.1
bitnami/kafka   29.3.4          3.7.0
bitnami/kafka   29.2.0          3.7.0
bitnami/kafka   28.1.1          3.7.0
bitnami/kafka   28.0.0          3.7.0
bitnami/kafka   26.11.4         3.6.1
bitnami/kafka   26.8.3          3.6.1
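If you want to pin the exact chart release this article was written against (the manifests in the appendix below were rendered from chart 31.1.1), pass the standard --version flag at install time:

helm install kafka bitnami/kafka --version 31.1.1 -f kafka.yaml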
Edit kafka.yaml
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.9.0-debian-12-r4

listeners:
  client:
    protocol: PLAINTEXT        # disable client authentication
  controller:
    protocol: PLAINTEXT        # disable controller authentication
  interbroker:
    protocol: PLAINTEXT        # disable inter-broker authentication
  external:
    protocol: PLAINTEXT        # disable external authentication

controller:
  replicaCount: 3              # number of replicas
  controllerOnly: false        # combined controller+broker mode
  heapOpts: -Xmx4096m -Xms2048m  # Kafka JVM heap
  resources:
    limits:
      cpu: 4
      memory: 8Gi
    requests:
      cpu: 500m
      memory: 512Mi
  affinity:                    # schedule only on master nodes; delete this block to lift the restriction
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
  tolerations:                 # tolerate master taints; delete this block to lift the restriction
    - operator: Exists
      effect: NoSchedule
    - operator: Exists
      effect: NoExecute
  persistence:
    storageClass: "local-path" # storage class
    size: 10Gi                 # storage size per pod

externalAccess:
  enabled: true                # enable external access
  controller:
    service:
      type: NodePort           # expose via NodePort
      nodePorts:
        - 30091                # external port
        - 30092                # external port
        - 30093                # external port
      useHostIPs: true         # advertise the host IP
Install the chart
# render first with --dry-run to validate the values
helm install kafka bitnami/kafka -f kafka.yaml --dry-run
helm install kafka bitnami/kafka -f kafka.yaml
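Once the release is created, it is worth confirming that the pods, services and release status look healthy; these are standard kubectl/Helm commands using the labels the chart applies:

kubectl get pods -l app.kubernetes.io/instance=kafka
kubectl get svc -l app.kubernetes.io/instance=kafka
helm status kafka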
In-cluster access
kafka-controller-headless.default:9092
kafka-controller-0.kafka-controller-headless.default:9092
kafka-controller-1.kafka-controller-headless.default:9092
kafka-controller-2.kafka-controller-headless.default:9092
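To confirm that the three KRaft nodes have actually formed a quorum, you can run the stock kafka-metadata-quorum.sh tool that ships in the Bitnami image from inside any controller pod (a quick health check, assuming the default namespace used throughout this article):

kubectl exec -it kafka-controller-0 -- kafka-metadata-quorum.sh \
  --bootstrap-server localhost:9092 describe --status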
External access
# node IP + the NodePort configured above; note which node each port routes to
192.168.100.110:30091
192.168.100.111:30092
192.168.100.112:30093

# look up the advertised external listeners from the pod's rendered config
kubectl exec -it kafka-controller-0 -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
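To sanity-check external connectivity from outside the cluster, any machine with the Kafka CLI installed can point the console producer straight at one of the NodePorts (the IP below is this article's example node; substitute your own):

kafka-console-producer.sh --bootstrap-server 192.168.100.110:30091 --topic test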
Test and verify
# create a client pod
kubectl run kafka-client --restart='Never' --image bitnami/kafka:3.9.0-debian-12-r4 --namespace default --command -- sleep infinity

# exec into the pod and produce messages
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-producer.sh \
  --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
  --topic test

# exec into the pod and consume messages
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-consumer.sh \
  --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --topic test \
  --from-beginning
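The console producer above relies on automatic topic creation; if auto-creation is disabled in your cluster, create and inspect the topic explicitly with the standard kafka-topics.sh tool, run from the same client pod:

kafka-topics.sh --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --create --topic test --partitions 3 --replication-factor 3
kafka-topics.sh --bootstrap-server kafka.default.svc.cluster.local:9092 --list
kafka-topics.sh --bootstrap-server kafka.default.svc.cluster.local:9092 --describe --topic test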
For reference only
All rendered YAML manifests
---
# Source: kafka/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow client connections
    - ports:
        - port: 9092
        - port: 9094
        - port: 9093
        - port: 9095
---
# Source: kafka/templates/broker/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-broker
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: broker
    app.kubernetes.io/part-of: kafka
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: broker
      app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/controller-eligible/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-controller
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: controller-eligible
      app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/provisioning/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka-provisioning
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
automountServiceAccountToken: false
---
# Source: kafka/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
automountServiceAccountToken: false
---
# Source: kafka/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-kraft-cluster-id
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
type: Opaque
data:
  kraft-cluster-id: "eDJrTHBicnVhQ1ZIUExEVU5BZVMxUA=="
---
# Source: kafka/templates/controller-eligible/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-controller-configuration
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
data:
  server.properties: |-
    # Listeners configuration
    listeners=CLIENT://:9092,INTERNAL://:9094,EXTERNAL://:9095,CONTROLLER://:9093
    advertised.listeners=CLIENT://advertised-address-placeholder:9092,INTERNAL://advertised-address-placeholder:9094
    listener.security.protocol.map=CLIENT:PLAINTEXT,INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT
    # KRaft process roles
    process.roles=controller,broker
    #node.id=
    controller.listener.names=CONTROLLER
    controller.quorum.voters=0@kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9093,1@kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9093,2@kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9093
    # Kafka data logs directory
    log.dir=/bitnami/kafka/data
    # Kafka application logs directory
    logs.dir=/opt/bitnami/kafka/logs
    # Common Kafka Configuration
    # Interbroker configuration
    inter.broker.listener.name=INTERNAL
    # Custom Kafka Configuration
---
# Source: kafka/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-scripts
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
data:
  kafka-init.sh: |-
    #!/bin/bash

    set -o errexit
    set -o nounset
    set -o pipefail

    error(){
      local message="${1:?missing message}"
      echo "ERROR: ${message}"
      exit 1
    }

    retry_while() {
      local -r cmd="${1:?cmd is missing}"
      local -r retries="${2:-12}"
      local -r sleep_time="${3:-5}"
      local return_value=1
      read -r -a command <<< "$cmd"
      for ((i = 1 ; i <= retries ; i+=1 )); do
        "${command[@]}" && return_value=0 && break
        sleep "$sleep_time"
      done
      return $return_value
    }

    replace_in_file() {
      local filename="${1:?filename is required}"
      local match_regex="${2:?match regex is required}"
      local substitute_regex="${3:?substitute regex is required}"
      local posix_regex=${4:-true}
      local result

      # We should avoid using 'sed in-place' substitutions
      # 1) They are not compatible with files mounted from ConfigMap(s)
      # 2) We found incompatibility issues with Debian10 and "in-place" substitutions
      local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues
      if [[ $posix_regex = true ]]; then
        result="$(sed -E "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
      else
        result="$(sed "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
      fi
      echo "$result" > "$filename"
    }

    kafka_conf_set() {
      local file="${1:?missing file}"
      local key="${2:?missing key}"
      local value="${3:?missing value}"

      # Check if the value was set before
      if grep -q "^[#\\s]*$key\s*=.*" "$file"; then
        # Update the existing key
        replace_in_file "$file" "^[#\\s]*${key}\s*=.*" "${key}=${value}" false
      else
        # Add a new key
        printf '\n%s=%s' "$key" "$value" >>"$file"
      fi
    }

    replace_placeholder() {
      local placeholder="${1:?missing placeholder value}"
      local password="${2:?missing password value}"
      local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues with delimiter symbols in sed string
      sed -i "s${del}$placeholder${del}$password${del}g" "$KAFKA_CONFIG_FILE"
    }

    append_file_to_kafka_conf() {
      local file="${1:?missing source file}"
      local conf="${2:?missing kafka conf file}"

      cat "$1" >> "$2"
    }

    configure_external_access() {
      # Configure external hostname
      if [[ -f "/shared/external-host.txt" ]]; then
        host=$(cat "/shared/external-host.txt")
      elif [[ -n "${EXTERNAL_ACCESS_HOST:-}" ]]; then
        host="$EXTERNAL_ACCESS_HOST"
      elif [[ -n "${EXTERNAL_ACCESS_HOSTS_LIST:-}" ]]; then
        read -r -a hosts <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_HOSTS_LIST}")"
        host="${hosts[$POD_ID]}"
      elif [[ "$EXTERNAL_ACCESS_HOST_USE_PUBLIC_IP" =~ ^(yes|true)$ ]]; then
        host=$(curl -s https://ipinfo.io/ip)
      else
        error "External access hostname not provided"
      fi
      # Configure external port
      if [[ -f "/shared/external-port.txt" ]]; then
        port=$(cat "/shared/external-port.txt")
      elif [[ -n "${EXTERNAL_ACCESS_PORT:-}" ]]; then
        if [[ "${EXTERNAL_ACCESS_PORT_AUTOINCREMENT:-}" =~ ^(yes|true)$ ]]; then
          port="$((EXTERNAL_ACCESS_PORT + POD_ID))"
        else
          port="$EXTERNAL_ACCESS_PORT"
        fi
      elif [[ -n "${EXTERNAL_ACCESS_PORTS_LIST:-}" ]]; then
        read -r -a ports <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_PORTS_LIST}")"
        port="${ports[$POD_ID]}"
      else
        error "External access port not provided"
      fi
      # Configure Kafka advertised listeners
      sed -i -E "s|^(advertised\.listeners=\S+)$|\1,EXTERNAL://${host}:${port}|" "$KAFKA_CONFIG_FILE"
    }

    export KAFKA_CONFIG_FILE=/config/server.properties
    cp /configmaps/server.properties $KAFKA_CONFIG_FILE

    # Get pod ID and role, last and second last fields in the pod name respectively
    POD_ID=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 1 | rev)
    POD_ROLE=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 2 | rev)

    # Configure node.id and/or broker.id
    if [[ -f "/bitnami/kafka/data/meta.properties" ]]; then
      if grep -q "broker.id" /bitnami/kafka/data/meta.properties; then
        ID="$(grep "broker.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
      else
        ID="$(grep "node.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
      fi
    else
      ID=$((POD_ID + KAFKA_MIN_ID))
      kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
    fi
    replace_placeholder "advertised-address-placeholder" "${MY_POD_NAME}.kafka-${POD_ROLE}-headless.default.svc.cluster.local"
    if [[ "${EXTERNAL_ACCESS_ENABLED:-false}" =~ ^(yes|true)$ ]]; then
      configure_external_access
    fi
    if [ -f /secret-config/server-secret.properties ]; then
      append_file_to_kafka_conf /secret-config/server-secret.properties $KAFKA_CONFIG_FILE
    fi
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-0-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-0
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30091
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-0
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-1-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-1
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30092
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-1
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-2-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-2
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30093
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-2
---
# Source: kafka/templates/controller-eligible/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-interbroker
      port: 9094
      protocol: TCP
      targetPort: interbroker
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: client
    - name: tcp-controller
      protocol: TCP
      port: 9093
      targetPort: controller
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: client
      nodePort: null
    - name: tcp-external
      port: 9095
      protocol: TCP
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/controller-eligible/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-controller
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: controller-eligible
      app.kubernetes.io/part-of: kafka
  serviceName: kafka-controller-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kafka
        app.kubernetes.io/version: 3.9.0
        helm.sh/chart: kafka-31.1.1
        app.kubernetes.io/component: controller-eligible
        app.kubernetes.io/part-of: kafka
      annotations:
        checksum/configuration: 84a30ef8698d80825ae7ffe45fae93a0d18c8861e2dfc64b4b809aa92065dfff
    spec:
      automountServiceAccountToken: false
      hostNetwork: false
      hostIPC: false
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: kafka
                    app.kubernetes.io/name: kafka
                    app.kubernetes.io/component: controller-eligible
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        seccompProfile:
          type: RuntimeDefault
        supplementalGroups: []
        sysctls: []
      serviceAccountName: kafka
      enableServiceLinks: true
      initContainers:
        - name: kafka-init
          image: docker.io/bitnami/kafka:3.9.0-debian-12-r4
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
          resources:
            limits: {}
            requests: {}
          command:
            - /bin/bash
          args:
            - -ec
            - |
              /scripts/kafka-init.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: KAFKA_VOLUME_DIR
              value: "/bitnami/kafka"
            - name: KAFKA_MIN_ID
              value: "0"
            - name: EXTERNAL_ACCESS_ENABLED
              value: "true"
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: EXTERNAL_ACCESS_HOST
              value: "$(HOST_IP)"
            - name: EXTERNAL_ACCESS_PORTS_LIST
              value: "30091,30092,30093"
          volumeMounts:
            - name: data
              mountPath: /bitnami/kafka
            - name: kafka-config
              mountPath: /config
            - name: kafka-configmaps
              mountPath: /configmaps
            - name: kafka-secret-config
              mountPath: /secret-config
            - name: scripts
              mountPath: /scripts
            - name: tmp
              mountPath: /tmp
      containers:
        - name: kafka
          image: docker.io/bitnami/kafka:3.9.0-debian-12-r4
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx4096m -Xms2048m"
            - name: KAFKA_KRAFT_CLUSTER_ID
              valueFrom:
                secretKeyRef:
                  name: kafka-kraft-cluster-id
                  key: kraft-cluster-id
          ports:
            - name: controller
              containerPort: 9093
            - name: client
              containerPort: 9092
            - name: interbroker
              containerPort: 9094
            - name: external
              containerPort: 9095
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - pgrep
                - -f
                - kafka
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: "controller"
          resources:
            limits:
              cpu: 4
              memory: 8Gi
            requests:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: data
              mountPath: /bitnami/kafka
            - name: logs
              mountPath: /opt/bitnami/kafka/logs
            - name: kafka-config
              mountPath: /opt/bitnami/kafka/config/server.properties
              subPath: server.properties
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: kafka-configmaps
          configMap:
            name: kafka-controller-configuration
        - name: kafka-secret-config
          emptyDir: {}
        - name: kafka-config
          emptyDir: {}
        - name: tmp
          emptyDir: {}
        - name: scripts
          configMap:
            name: kafka-scripts
            defaultMode: 493
        - name: logs
          emptyDir: {}
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "10Gi"
        storageClassName: "local-path"
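To tear the cluster down after testing, uninstall the release; note that the PVCs created by the volumeClaimTemplates above are left behind by design and must be deleted separately (StatefulSet PVC names follow the <template>-<pod> pattern):

helm uninstall kafka
kubectl delete pvc data-kafka-controller-0 data-kafka-controller-1 data-kafka-controller-2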
当仓库学会“思考”,物流的终极形态正在诞生 想象这样的场景: 凌晨3点,某物流中心灯火通明却空无一人。AGV机器人集群根据实时订单动态规划路径;AI视觉系统在0.1秒内扫描包裹信息;数字孪生平台正模拟次日峰值流量压力…...