
KubeSphere errors

1. Installation error: unable to sign certificate: must specify a CommonName

[root@node1 ~]# ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

11:28:47 CST [GreetingsModule] Greetings
11:28:47 CST message: [node1]
Greetings, KubeKey!
11:28:47 CST success: [node1]
11:28:47 CST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
11:28:49 CST success: [LocalHost]
11:28:49 CST [UnArchiveArtifactModule] UnArchive the KubeKey artifact
11:28:49 CST skipped: [LocalHost]
11:28:49 CST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
11:28:49 CST skipped: [LocalHost]
11:28:49 CST [RegistryPackageModule] Download registry package
11:28:49 CST message: [localhost]
downloading amd64 harbor v2.5.3  ...
11:28:56 CST message: [localhost]
downloading amd64 docker 24.0.6  ...
11:28:56 CST message: [localhost]
downloading amd64 compose v2.2.2  ...
11:28:56 CST success: [LocalHost]
11:28:56 CST [ConfigureOSModule] Get OS release
11:28:56 CST success: [node1]
11:28:56 CST [ConfigureOSModule] Prepare to init OS
11:28:57 CST success: [node1]
11:28:57 CST [ConfigureOSModule] Generate init os script
11:28:57 CST success: [node1]
11:28:57 CST [ConfigureOSModule] Exec init os script
11:28:58 CST stdout: [node1]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
11:28:58 CST success: [node1]
11:28:58 CST [ConfigureOSModule] configure the ntp server for each node
11:28:58 CST skipped: [node1]
11:28:58 CST [InitRegistryModule] Fetch registry certs
11:28:58 CST success: [node1]
11:28:58 CST [InitRegistryModule] Generate registry Certs
[certs] Using existing ca certificate authority
11:28:59 CST message: [LocalHost]
unable to sign certificate: must specify a CommonName
11:28:59 CST failed: [LocalHost]
error: Pipeline[InitRegistryPipeline] execute failed: Module[InitRegistryModule] exec failed: 
failed: [LocalHost] [GenerateRegistryCerts] exec failed after 1 retries: unable to sign certificate: must specify a CommonName

Solution

This is caused by the configuration file: the registry settings have to be uncommented and filled in. In the official example they are commented out.

This is the excerpt from the official docs:

  registry:
    # To deploy Harbor with kk, set this parameter to harbor. If it is not set and kk is used to
    # create a container image registry, a plain docker registry is used by default.
    type: harbor
    # For a Harbor instance deployed by kk, or any other registry that requires login, set the
    # auths of that registry. For a docker registry created by kk this parameter is not needed.
    # Note: when deploying Harbor with kk, set this parameter only after Harbor has started.
    #auths:
    #  "dockerhub.kubekey.local":
    #    username: admin
    #    password: Harbor12345
    # Private registry used during cluster deployment
    privateRegistry: ""

The locally modified version:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 10.1.1.1, internalAddress: 10.1.1.1, user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    domain: dockerhub.kubekey.local
    tls:
      selfSigned: true
      certCommonName: dockerhub.kubekey.local
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
#    privateRegistry: ""
#    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
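With certCommonName set under registry.tls, the registry init can simply be re-run with the same command as above:

./kk init registry -f config-sample.yaml -a kubesphere.tar.gz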

2. Error showing a missing image

pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1 --platform amd64" 
downloading image: dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
14:10:52 CST message: [node1]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1 --platform amd64" 
Error response from daemon: unknown: repository kubesphereio/pod2daemon-flexvol not found: Process exited with status 1
14:10:52 CST retry: [node1]
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.5
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
14:10:57 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1
14:10:58 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
14:10:58 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/node:v3.26.1
14:10:58 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
14:10:58 CST message: [node1]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1 --platform amd64" 
Error response from daemon: unknown: repository kubesphereio/pod2daemon-flexvol not found: Process exited with status 1
14:10:58 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[PullModule] exec failed: 
failed: [node1] [PullImages] exec failed after 3 retries: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1 --platform amd64" 
Error response from daemon: unknown: repository kubesphereio/pod2daemon-flexvol not found: Process exited with status 1

The image is missing from the private registry. Download it on a machine with internet access and import it locally, as sketched below.

docker save -o pod2daemon-flexvol.tar registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1
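A fuller sketch of the offline import (the Aliyun mirror address, the Harbor domain, and the admin/Harbor12345 credentials are taken from the config and logs above; adjust them to your environment):

# On a machine with internet access: pull and export the image
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1
docker save -o pod2daemon-flexvol.tar registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1

# On the offline node: load, retag for the private Harbor, and push
docker load -i pod2daemon-flexvol.tar
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
docker login dockerhub.kubekey.local -u admin -p Harbor12345
docker push dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1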

3. etcd error: x509: certificate is valid for 127.0.0.1, not 155.1.94.77
remote error: tls: bad certificate (server name mismatch)

13:10:44 CST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-node1 certificate and key on disk
[certs] Using existing member-node1 certificate and key on disk
[certs] Using existing node-node1 certificate and key on disk
13:10:44 CST success: [LocalHost]
13:10:44 CST [CertsModule] Synchronize certs file
13:10:46 CST success: [node1]
13:10:46 CST [CertsModule] Synchronize certs file to master
13:10:46 CST skipped: [node1]
13:10:46 CST [InstallETCDBinaryModule] Install etcd using binary
13:10:47 CST success: [node1]
13:10:47 CST [InstallETCDBinaryModule] Generate etcd service
13:10:47 CST success: [node1]
13:10:47 CST [InstallETCDBinaryModule] Generate access address
13:10:47 CST success: [node1]
13:10:47 CST [ETCDConfigureModule] Health check on exist etcd
13:10:47 CST skipped: [node1]
13:10:47 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
13:10:48 CST success: [node1]
13:10:48 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
13:10:48 CST success: [node1]
13:10:48 CST [ETCDConfigureModule] Restart etcd
13:10:52 CST success: [node1]
13:10:52 CST [ETCDConfigureModule] Health check on all etcd
13:10:52 CST message: [node1]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-node1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-node1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://10.1.1.1:2379 cluster-health | grep -q 'cluster is healthy'" 
Error:  client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate is valid for 127.0.0.1, ::1, 155.1.94.77, not 10.1.1.247
error #0: x509: certificate is valid for 127.0.0.1, ::1, 155.1.94.77, not 10.1.1.1: Process exited with status 1
13:10:52 CST retry: [node1]

This happens because certificates generated by a previous run already exist and no longer match, so the resulting certificates are wrong; delete the previously generated certificates.

Delete the certificates already generated under /etc/ssl/etcd/ssl.
Run ./kk delete cluster to remove the other generated files, keeping the images folder; back up before deleting (see the sketch below).
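A minimal cleanup sketch (the backup path is an assumption; ./kk delete cluster takes the same -f config file used for creation):

# Back up the old etcd certificates before removing them
cp -a /etc/ssl/etcd/ssl /root/etcd-ssl-backup
rm -f /etc/ssl/etcd/ssl/*.pem

# Tear down the partially created cluster, then re-run the installation
./kk delete cluster -f config-sample.yaml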

4.May 31 15:48:14 node1 etcd[43663]: listen tcp 155.1.94.77:2380: bind: cannot assign requested address

The address cannot be bound because of the network segment: the node's real IP is 10.1.1.1, while 155.1.94.77 is only a mapped (external) address. etcd can only bind to an address that actually exists on the node, so the 10.x IP must be used in the configuration (a quick check is sketched after the log below).

May 31 15:48:14 node1 etcd[43663]: peerTLS: cert = /etc/ssl/etcd/ssl/member-node1.pem, key = /etc/ssl/etcd/ssl/member-node1-key.pem, trusted-ca = /etc/ssl/etcd/ssl/ca.pem, client-cert-auth = true, crl-file = 
May 31 15:48:14 node1 etcd[43663]: listen tcp 155.1.94.77:2380: bind: cannot assign requested address
May 31 15:48:14 node1 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
May 31 15:48:14 node1 systemd[1]: Failed to start etcd.
-- Subject: Unit etcd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit etcd.service has failed.
-- 
-- The result is failed.
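A quick way to confirm this (a sketch; the /etc/etcd.env path matches the "Generate etcd.env config" step in the KubeKey log above, but verify it on your node):

# List the addresses actually assigned to the node -- 155.1.94.77 will not appear here
ip -4 addr show | grep inet

# Check which addresses etcd was told to listen on
grep -i 'listen_peer_urls\|listen_client_urls' /etc/etcd.env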

5. Offline installation requires creating the repository (project) names in Harbor
Run the project-creation script:

[root@node1 ~]# ./create_project_harbor.sh 
bash: ./create_project_harbor.sh: /bin/bash^M: bad interpreter: No such file or directory

This is caused by the file format: the script has Windows-style line endings, and the carriage return (shown as ^M) after /bin/bash cannot be interpreted.

sed -i "s/\r//" create_project_harbor.sh

6. Creating the cluster reports that rhel-7.5-amd64.iso is required

./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

--with-packages: specify this option if operating-system dependencies need to be installed.
The error says the rhel-7.5-amd64.iso package source is required.
RHEL is a commercial operating system, so it is better to install the dependencies yourself: installing conntrack and socat is enough (see the sketch below), and then run the installation without --with-packages.
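A sketch of that order of operations (package names match the ones used later in this post; adjust for your package manager):

yum -y install conntrack-tools socat
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz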

7. Error when creating the cluster

W0603 11:26:02.921549   60122 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.12
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 20.10
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileExisting-conntrack]: conntrack not found in system path   # missing component
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
11:26:03 CST stdout: [node1]
[preflight] Running pre-flight checks
W0603 11:26:03.194656   60202 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0603 11:26:03.198708   60202 cleanupnode.go:109] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
11:26:03 CST message: [node1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 

The errors indicate missing packages.
socat not found in system path

  • This is a warning: socat was not found on the system. socat is a multi-purpose networking tool; although it may not be strictly required by kubeadm's preflight checks, it is best to install it because some operations may use it.

Docker version is not on the list of validated versions

  • This is also a warning: the Docker version in use (24.0.6) is not on the list of versions validated by kubeadm. The warning does not necessarily block kubeadm, but using a validated version (such as 20.10) is recommended to avoid potential problems.

conntrack not found in system path

  • This is an error: kubeadm's preflight check did not find conntrack, the tool the Linux kernel uses to track network connections. Install it, or check that the conntrack-tools package is correctly installed on the system.

Output related to kubeadm reset

  • During the reset, kubeadm tries to stop the kubelet service, unmount mounted directories, and delete the Kubernetes configuration files and state directories. From the log, kubeadm successfully deleted some files and directories but failed to evaluate the /var/lib/kubelet directory (probably because it does not exist or is not accessible).

CNI configuration and iptables/IPVS tables are not cleaned up

  • The reset process does not clean up the CNI (Container Network Interface) configuration or the iptables/IPVS tables. If these need to be cleaned, run the relevant commands manually.
yum -y install conntrack-tools
yum -y install socat

8. Error because a mount path cannot be found: during cluster creation, calico initialization keeps waiting until it times out. Error log:

kubelet MountVolume.SetUp failed for volume "bpffs" : hostPath type check failed: /sys/fs/bpf is not a directory

14:17:31 CST message: [node1]
Default storageClass in cluster is not unique!
14:17:31 CST skipped: [node1]
14:17:31 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
14:17:31 CST message: [node1]
Default storageClass in cluster is not unique!
14:17:31 CST skipped: [node1]

# Check the pods
[root@node1 logs]# kubectl get po -A -o wide
NAMESPACE           NAME                                           READY   STATUS     RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
kube-system         calico-kube-controllers-769bbc4c9-2smqd        0/1     Pending    0          4h39m   <none>       <none>   <none>           <none>
kube-system         calico-node-hsj57                              0/1     Init:0/3   0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         coredns-558b97598-d6v2c                        0/1     Pending    0          4h39m   <none>       <none>   <none>           <none>
kube-system         coredns-558b97598-gqwh4                        0/1     Pending    0          4h39m   <none>       <none>   <none>           <none>
kube-system         kube-apiserver-node1                           1/1     Running    0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         kube-controller-manager-node1                  1/1     Running    0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         kube-proxy-c4tg9                               1/1     Running    0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         kube-scheduler-node1                           1/1     Running    0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         nodelocaldns-kcz4p                             1/1     Running    0          4h39m   10.1.1.1   node1    <none>           <none>
kube-system         openebs-localpv-provisioner-7869648cbc-cls8s   0/1     Pending    0          4h39m   <none>       <none>   <none>           <none>
kubesphere-system   ks-installer-6c6c47d8f8-jnzj9                  0/1     Pending    0          4h39m   <none>       <none>   <none>           <none>

# Check the pod events
kubectl describe pod calico-node-hsj57 -n kube-system
Events:
  Type     Reason       Age                     From     Message
  ----     ------       ----                    ----     -------
  Warning  FailedMount  7m6s (x117 over 4h27m)  kubelet  MountVolume.SetUp failed for volume "bpffs" : hostPath type check failed: /sys/fs/bpf is not a directory
  Warning  FailedMount  2m33s (x128 over 4h7m)  kubelet  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[bpffs], unattached volumes=[policysync kube-api-access-xqn6h var-run-calico bpffs host-local-net-dir lib-modules cni-log-dir cni-bin-dir xtables-lock var-lib-calico cni-net-dir sys-fs nodeproc]: timed out waiting for the condition
[root@node1 logs]# ls -ld /sys/fs/bpf
ls: cannot access /sys/fs/bpf: No such file or directory
[root@node1 logs]# cat /boot/config/-$(uname -r)| grep CONFIG_BPF
cat: /boot/config/-3.10.0-862.el7.x86_64: No such file or directory
[root@node1 logs]# cat /boot/config-$(uname -r)| grep CONFIG_BPF
CONFIG_BPF_JIT=y

This is because the kernel does not support mounting the BPF filesystem; it depends on the kernel version (the system here is CentOS 7.5).

On CentOS 7.8 the same check shows:
[root@node1 cert]# cat /boot/config-$(uname -r) | grep CONFIG_BPF
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT=y
CONFIG_BPF_EVENTS=y
CONFIG_BPF_KPROBE_OVERRIDE=y

The system kernel needs to be reinstalled (upgraded); a sketch follows.
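One common way to do this on CentOS 7 is via the ELRepo repository. This is a hedged sketch, not part of the original post, so verify the repository URL and the kernel package choice (kernel-lt vs kernel-ml) for your environment:

# Install the ELRepo repository and a newer long-term-support kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-lt

# Boot the new kernel by default, then reboot and re-check CONFIG_BPF and /sys/fs/bpf
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot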

9. Error: failed to create network harbor_harbor: Error response from daemon:
failed to setup IP tables: Unable to enable SKIP DNAT rule. Restarting Docker resolves it (Harbor may need to be brought back up afterwards; see below).

systemctl restart docker
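After Docker is back up, Harbor's own containers may also need to be started again; a sketch (the /opt/harbor path is an assumption about where kk installed Harbor, and compose may be invoked as docker-compose or docker compose depending on how it was installed):

cd /opt/harbor && docker-compose up -d
docker ps | grep harbor        # confirm the Harbor containers are running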
