
Deploying a Kafka cluster on k8s with Helm (KRaft mode) — 筑梦之路

Add the Helm repositories

helm repo add bitnami "https://helm-charts.itboon.top/bitnami" --force-update
helm repo add grafana "https://helm-charts.itboon.top/grafana" --force-update
helm repo add prometheus-community "https://helm-charts.itboon.top/prometheus-community" --force-update
helm repo add ingress-nginx "https://helm-charts.itboon.top/ingress-nginx" --force-update
helm repo update

Search for available Kafka chart versions

# helm search repo bitnami/kafka -l
NAME         	CHART VERSION	APP VERSION	DESCRIPTION
bitnami/kafka	31.1.1       	3.9.0
bitnami/kafka	31.0.0       	3.9.0
bitnami/kafka	30.1.8       	3.8.1
bitnami/kafka	30.1.4       	3.8.0
bitnami/kafka	30.0.5       	3.8.0
bitnami/kafka	29.3.13      	3.7.1
bitnami/kafka	29.3.4       	3.7.0
bitnami/kafka	29.2.0       	3.7.0
bitnami/kafka	28.1.1       	3.7.0
bitnami/kafka	28.0.0       	3.7.0
bitnami/kafka	26.11.4      	3.6.1
bitnami/kafka	26.8.3       	3.6.1

Create kafka.yaml with the following values

image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.9.0-debian-12-r4
listeners:
  client:
    protocol: PLAINTEXT # disable client authentication
  controller:
    protocol: PLAINTEXT # disable authentication
  interbroker:
    protocol: PLAINTEXT # disable authentication
  external:
    protocol: PLAINTEXT # disable authentication
controller:
  replicaCount: 3 # number of replicas
  controllerOnly: false # combined controller+broker mode
  heapOpts: -Xmx4096m -Xms2048m # Kafka JVM heap
  resources:
    limits:
      cpu: 4
      memory: 8Gi
    requests:
      cpu: 500m
      memory: 512Mi
  affinity: # schedule only on master nodes; delete if you do not want this restriction
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
  tolerations: # tolerate master taints; delete if you do not want this restriction
    - operator: Exists
      effect: NoSchedule
    - operator: Exists
      effect: NoExecute
  persistence:
    storageClass: "local-path" # storage class
    size: 10Gi # storage size per pod
externalAccess:
  enabled: true # enable external access
  controller:
    service:
      type: NodePort # expose via NodePort
      nodePorts:
        - 30091 # external port for pod 0
        - 30092 # external port for pod 1
        - 30093 # external port for pod 2
      useHostIPs: true # advertise the host (node) IP

Install and deploy

# render first to validate the values
helm install kafka bitnami/kafka -f kafka.yaml --dry-run
helm install kafka bitnami/kafka -f kafka.yaml

Internal access

kafka-controller-headless.default:9092
kafka-controller-0.kafka-controller-headless.default:9092
kafka-controller-1.kafka-controller-headless.default:9092
kafka-controller-2.kafka-controller-headless.default:9092
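Each pod's per-pod address above is simply the pod name prefixed onto the headless service's DNS name. A minimal shell sketch of how that address is composed (values match the chart defaults used in this article: namespace `default`, headless service `kafka-controller-headless`):

```shell
# Compose the internal advertised address for one pod (sketch only;
# this mirrors the placeholder substitution done by the chart's init script)
MY_POD_NAME="kafka-controller-0"
POD_ROLE="controller"
addr="${MY_POD_NAME}.kafka-${POD_ROLE}-headless.default.svc.cluster.local:9092"
echo "$addr"
# kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
```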

External access

# node IP + the configured NodePort; note which node's IP each port maps to
192.168.100.110:30091  
192.168.100.111:30092  
192.168.100.112:30093

# Look up the external access info from the pod's configuration
kubectl exec -it kafka-controller-0 -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
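The nodePorts list from kafka.yaml is passed to the init container as EXTERNAL_ACCESS_PORTS_LIST, and each pod picks the entry matching its StatefulSet ordinal, which is why the port-to-node mapping above matters. A POSIX sketch of that selection (the chart's script uses a bash array; `cut` is used here for brevity):

```shell
# Pick the external NodePort for a pod by its StatefulSet ordinal
# (equivalent to the EXTERNAL_ACCESS_PORTS_LIST lookup in kafka-init.sh)
EXTERNAL_ACCESS_PORTS_LIST="30091,30092,30093"
POD_ID=1  # ordinal of kafka-controller-1
port=$(echo "$EXTERNAL_ACCESS_PORTS_LIST" | cut -d',' -f$((POD_ID + 1)))
echo "$port"
# 30092
```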

Test and verify

# Create a client pod
kubectl run kafka-client --restart='Never' --image bitnami/kafka:3.9.0-debian-12-r4 --namespace default --command -- sleep infinity

# Enter the pod and produce messages
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-producer.sh \
  --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
  --topic test

# Enter the pod and consume messages
kubectl exec --tty -i kafka-client --namespace default -- bash
kafka-console-consumer.sh \
  --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --topic test \
  --from-beginning
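From outside the cluster, a client can bootstrap against any of the NodePort endpoints shown in the external-access section. A minimal client config sketch (the file name is hypothetical and the IPs are the example node IPs from above, not defaults):

```properties
# client.properties for an external client
bootstrap.servers=192.168.100.110:30091,192.168.100.111:30092,192.168.100.112:30093
security.protocol=PLAINTEXT
```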

For reference only.

The full rendered YAML manifests

---
# Source: kafka/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow client connections
    - ports:
        - port: 9092
        - port: 9094
        - port: 9093
        - port: 9095
---
# Source: kafka/templates/broker/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-broker
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: broker
    app.kubernetes.io/part-of: kafka
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: broker
      app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/controller-eligible/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-controller
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: controller-eligible
      app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/provisioning/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka-provisioning
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
automountServiceAccountToken: false
---
# Source: kafka/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
automountServiceAccountToken: false
---
# Source: kafka/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-kraft-cluster-id
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
type: Opaque
data:
  kraft-cluster-id: "eDJrTHBicnVhQ1ZIUExEVU5BZVMxUA=="
---
# Source: kafka/templates/controller-eligible/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-controller-configuration
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
data:
  server.properties: |-
    # Listeners configuration
    listeners=CLIENT://:9092,INTERNAL://:9094,EXTERNAL://:9095,CONTROLLER://:9093
    advertised.listeners=CLIENT://advertised-address-placeholder:9092,INTERNAL://advertised-address-placeholder:9094
    listener.security.protocol.map=CLIENT:PLAINTEXT,INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT
    # KRaft process roles
    process.roles=controller,broker
    #node.id=
    controller.listener.names=CONTROLLER
    controller.quorum.voters=0@kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9093,1@kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9093,2@kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9093
    # Kafka data logs directory
    log.dir=/bitnami/kafka/data
    # Kafka application logs directory
    logs.dir=/opt/bitnami/kafka/logs
    # Common Kafka Configuration
    # Interbroker configuration
    inter.broker.listener.name=INTERNAL
    # Custom Kafka Configuration
---
# Source: kafka/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-scripts
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
data:
  kafka-init.sh: |-
    #!/bin/bash

    set -o errexit
    set -o nounset
    set -o pipefail

    error(){
      local message="${1:?missing message}"
      echo "ERROR: ${message}"
      exit 1
    }

    retry_while() {
      local -r cmd="${1:?cmd is missing}"
      local -r retries="${2:-12}"
      local -r sleep_time="${3:-5}"
      local return_value=1
      read -r -a command <<< "$cmd"
      for ((i = 1 ; i <= retries ; i+=1 )); do
        "${command[@]}" && return_value=0 && break
        sleep "$sleep_time"
      done
      return $return_value
    }

    replace_in_file() {
      local filename="${1:?filename is required}"
      local match_regex="${2:?match regex is required}"
      local substitute_regex="${3:?substitute regex is required}"
      local posix_regex=${4:-true}
      local result

      # We should avoid using 'sed in-place' substitutions
      # 1) They are not compatible with files mounted from ConfigMap(s)
      # 2) We found incompatibility issues with Debian10 and "in-place" substitutions
      local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues
      if [[ $posix_regex = true ]]; then
        result="$(sed -E "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
      else
        result="$(sed "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
      fi
      echo "$result" > "$filename"
    }

    kafka_conf_set() {
      local file="${1:?missing file}"
      local key="${2:?missing key}"
      local value="${3:?missing value}"

      # Check if the value was set before
      if grep -q "^[#\\s]*$key\s*=.*" "$file"; then
        # Update the existing key
        replace_in_file "$file" "^[#\\s]*${key}\s*=.*" "${key}=${value}" false
      else
        # Add a new key
        printf '\n%s=%s' "$key" "$value" >>"$file"
      fi
    }

    replace_placeholder() {
      local placeholder="${1:?missing placeholder value}"
      local password="${2:?missing password value}"
      local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues with delimiter symbols in sed string
      sed -i "s${del}$placeholder${del}$password${del}g" "$KAFKA_CONFIG_FILE"
    }

    append_file_to_kafka_conf() {
      local file="${1:?missing source file}"
      local conf="${2:?missing kafka conf file}"
      cat "$1" >> "$2"
    }

    configure_external_access() {
      # Configure external hostname
      if [[ -f "/shared/external-host.txt" ]]; then
        host=$(cat "/shared/external-host.txt")
      elif [[ -n "${EXTERNAL_ACCESS_HOST:-}" ]]; then
        host="$EXTERNAL_ACCESS_HOST"
      elif [[ -n "${EXTERNAL_ACCESS_HOSTS_LIST:-}" ]]; then
        read -r -a hosts <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_HOSTS_LIST}")"
        host="${hosts[$POD_ID]}"
      elif [[ "$EXTERNAL_ACCESS_HOST_USE_PUBLIC_IP" =~ ^(yes|true)$ ]]; then
        host=$(curl -s https://ipinfo.io/ip)
      else
        error "External access hostname not provided"
      fi
      # Configure external port
      if [[ -f "/shared/external-port.txt" ]]; then
        port=$(cat "/shared/external-port.txt")
      elif [[ -n "${EXTERNAL_ACCESS_PORT:-}" ]]; then
        if [[ "${EXTERNAL_ACCESS_PORT_AUTOINCREMENT:-}" =~ ^(yes|true)$ ]]; then
          port="$((EXTERNAL_ACCESS_PORT + POD_ID))"
        else
          port="$EXTERNAL_ACCESS_PORT"
        fi
      elif [[ -n "${EXTERNAL_ACCESS_PORTS_LIST:-}" ]]; then
        read -r -a ports <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_PORTS_LIST}")"
        port="${ports[$POD_ID]}"
      else
        error "External access port not provided"
      fi
      # Configure Kafka advertised listeners
      sed -i -E "s|^(advertised\.listeners=\S+)$|\1,EXTERNAL://${host}:${port}|" "$KAFKA_CONFIG_FILE"
    }

    export KAFKA_CONFIG_FILE=/config/server.properties
    cp /configmaps/server.properties $KAFKA_CONFIG_FILE

    # Get pod ID and role, last and second last fields in the pod name respectively
    POD_ID=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 1 | rev)
    POD_ROLE=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 2 | rev)

    # Configure node.id and/or broker.id
    if [[ -f "/bitnami/kafka/data/meta.properties" ]]; then
      if grep -q "broker.id" /bitnami/kafka/data/meta.properties; then
        ID="$(grep "broker.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
      else
        ID="$(grep "node.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
      fi
    else
      ID=$((POD_ID + KAFKA_MIN_ID))
      kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
    fi
    replace_placeholder "advertised-address-placeholder" "${MY_POD_NAME}.kafka-${POD_ROLE}-headless.default.svc.cluster.local"
    if [[ "${EXTERNAL_ACCESS_ENABLED:-false}" =~ ^(yes|true)$ ]]; then
      configure_external_access
    fi
    if [ -f /secret-config/server-secret.properties ]; then
      append_file_to_kafka_conf /secret-config/server-secret.properties $KAFKA_CONFIG_FILE
    fi
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-0-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-0
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30091
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-0
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-1-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-1
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30092
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-1
---
# Source: kafka/templates/controller-eligible/svc-external-access.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-2-external
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
    pod: kafka-controller-2
spec:
  type: NodePort
  publishNotReadyAddresses: false
  ports:
    - name: tcp-kafka
      port: 9094
      nodePort: 30093
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
    app.kubernetes.io/component: controller-eligible
    statefulset.kubernetes.io/pod-name: kafka-controller-2
---
# Source: kafka/templates/controller-eligible/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-controller-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-interbroker
      port: 9094
      protocol: TCP
      targetPort: interbroker
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: client
    - name: tcp-controller
      protocol: TCP
      port: 9093
      targetPort: controller
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: kafka
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: client
      nodePort: null
    - name: tcp-external
      port: 9095
      protocol: TCP
      targetPort: external
  selector:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
---
# Source: kafka/templates/controller-eligible/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-controller
  namespace: "default"
  labels:
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka
    app.kubernetes.io/version: 3.9.0
    helm.sh/chart: kafka-31.1.1
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/part-of: kafka
spec:
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
      app.kubernetes.io/component: controller-eligible
      app.kubernetes.io/part-of: kafka
  serviceName: kafka-controller-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kafka
        app.kubernetes.io/version: 3.9.0
        helm.sh/chart: kafka-31.1.1
        app.kubernetes.io/component: controller-eligible
        app.kubernetes.io/part-of: kafka
      annotations:
        checksum/configuration: 84a30ef8698d80825ae7ffe45fae93a0d18c8861e2dfc64b4b809aa92065dfff
    spec:
      automountServiceAccountToken: false
      hostNetwork: false
      hostIPC: false
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: kafka
                    app.kubernetes.io/name: kafka
                    app.kubernetes.io/component: controller-eligible
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        seccompProfile:
          type: RuntimeDefault
        supplementalGroups: []
        sysctls: []
      serviceAccountName: kafka
      enableServiceLinks: true
      initContainers:
        - name: kafka-init
          image: docker.io/bitnami/kafka:3.9.0-debian-12-r4
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
          resources:
            limits: {}
            requests: {}
          command:
            - /bin/bash
          args:
            - -ec
            - |
              /scripts/kafka-init.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: KAFKA_VOLUME_DIR
              value: "/bitnami/kafka"
            - name: KAFKA_MIN_ID
              value: "0"
            - name: EXTERNAL_ACCESS_ENABLED
              value: "true"
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: EXTERNAL_ACCESS_HOST
              value: "$(HOST_IP)"
            - name: EXTERNAL_ACCESS_PORTS_LIST
              value: "30091,30092,30093"
          volumeMounts:
            - name: data
              mountPath: /bitnami/kafka
            - name: kafka-config
              mountPath: /config
            - name: kafka-configmaps
              mountPath: /configmaps
            - name: kafka-secret-config
              mountPath: /secret-config
            - name: scripts
              mountPath: /scripts
            - name: tmp
              mountPath: /tmp
      containers:
        - name: kafka
          image: docker.io/bitnami/kafka:3.9.0-debian-12-r4
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx4096m -Xms2048m"
            - name: KAFKA_KRAFT_CLUSTER_ID
              valueFrom:
                secretKeyRef:
                  name: kafka-kraft-cluster-id
                  key: kraft-cluster-id
          ports:
            - name: controller
              containerPort: 9093
            - name: client
              containerPort: 9092
            - name: interbroker
              containerPort: 9094
            - name: external
              containerPort: 9095
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - pgrep
                - -f
                - kafka
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: "controller"
          resources:
            limits:
              cpu: 4
              memory: 8Gi
            requests:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: data
              mountPath: /bitnami/kafka
            - name: logs
              mountPath: /opt/bitnami/kafka/logs
            - name: kafka-config
              mountPath: /opt/bitnami/kafka/config/server.properties
              subPath: server.properties
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: kafka-configmaps
          configMap:
            name: kafka-controller-configuration
        - name: kafka-secret-config
          emptyDir: {}
        - name: kafka-config
          emptyDir: {}
        - name: tmp
          emptyDir: {}
        - name: scripts
          configMap:
            name: kafka-scripts
            defaultMode: 493
        - name: logs
          emptyDir: {}
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "10Gi"
        storageClassName: "local-path"
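One detail worth calling out in the kafka-init.sh script above: when no meta.properties exists yet, each pod derives its KRaft node.id from the StatefulSet ordinal plus KAFKA_MIN_ID. A standalone sketch of that derivation, runnable outside the cluster:

```shell
# Derive POD_ID/POD_ROLE from the pod name and compute node.id,
# as kafka-init.sh does on first start (before meta.properties exists)
MY_POD_NAME="kafka-controller-2"
KAFKA_MIN_ID=0
POD_ID=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 1 | rev)
POD_ROLE=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 2 | rev)
ID=$((POD_ID + KAFKA_MIN_ID))
echo "$POD_ROLE $ID"
# controller 2
```

This is why the node IDs 0, 1, 2 in controller.quorum.voters line up with the pod names kafka-controller-0 through kafka-controller-2.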
