Installing ELK 6.7.1 with Docker: collecting Java logs
0. Planning
192.168.171.130 tomcat logs + filebeat
192.168.171.131 tomcat logs + filebeat
192.168.171.128 redis
192.168.171.129 logstash
192.168.171.128 es1
192.168.171.129 es2
192.168.171.132 kibana

1. Install the ES 6.7.1 cluster and the head plugin with Docker (on 192.168.171.128-es1 and 192.168.171.129-es2)
Install es6.7.1 and the es6.7.1-head plugin on 192.168.171.128:
1) Install docker 19.03.2:
[root@localhost ~]# docker info
.......
Server Version: 19.03.2
[root@localhost ~]# sysctl -w vm.max_map_count=262144 #the default mmap limit is too low for elasticsearch; it needs at least 262144
[root@localhost ~]# sysctl -a |grep vm.max_map_count #verify
vm.max_map_count = 262144
[root@localhost ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
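Before starting the container it is worth confirming the kernel setting actually took effect; a minimal check (plain shell, with /proc as a fallback when the sysctl binary is unavailable):

```shell
# Hypothetical helper: elasticsearch refuses to start when vm.max_map_count
# is below 262144, so check the live kernel value before running the container.
need=262144
cur=$(sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count)
if [ "$cur" -ge "$need" ]; then
    echo "vm.max_map_count ok ($cur)"
else
    echo "vm.max_map_count too low ($cur), run: sysctl -w vm.max_map_count=$need"
fi
```

Editing /etc/sysctl.conf only makes the value persist across reboots; `sysctl -w` (or `sysctl -p`) is what changes the running kernel.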
2) Install es6.7.1:
Upload the ES package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1.tar.gz
es-6.7.1.tar.gz
[root@localhost data]# tar -zxf es-6.7.1.tar.gz
[root@localhost data]# cd es-6.7.1
[root@localhost es-6.7.1]# ls
config image scripts
[root@localhost es-6.7.1]# ls config/
es.yml
[root@localhost es-6.7.1]# ls image/
elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# ls scripts/
run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# docker images |grep elasticsearch
elasticsearch 6.7.1 e2667f5db289 11 months ago 812MB
[root@localhost es-6.7.1]# cat config/es.yml
cluster.name: elasticsearch-cluster
node.name: es-node1
network.host: 0.0.0.0
network.publish_host: 192.168.171.128
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]
discovery.zen.minimum_master_nodes: 1
#cluster.name: the cluster name; it can be anything, but it must be identical on both ES nodes - nodes sharing the same cluster.name are treated as one cluster
#node.name: this node's name; it can be anything and does not need to resolve via hosts or match the hostname
#the following two lines are added on top of the defaults to allow cross-origin access:
#http.cors.enabled: true
#http.cors.allow-origin: '*'
##Note: the container exposes two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh
#!/bin/bash
docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1
#Note: the container exposes two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/ #the es user inside the container must be able to write here, otherwise the mount fails
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/ #the es user inside the container must be able to write here, otherwise the mount fails
[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
988abe7eedac elasticsearch:6.7.1 "/usr/local/bin/dock…" 23 seconds ago Up 19 seconds es6.7.1
[root@localhost es-6.7.1]# netstat -anput |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 16196/java
[root@localhost es-6.7.1]# netstat -anput |grep 9300
tcp6 0 0 :::9300 :::* LISTEN 16196/java
[root@localhost es-6.7.1]# cd
Access the ES service from a browser: http://192.168.171.128:9200/

3) Install the es6.7.1-head plugin:
Upload the es-head plugin package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1-head.tar.gz
es-6.7.1-head.tar.gz
[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz
[root@localhost data]# cd es-6.7.1-head
[root@localhost es-6.7.1-head]# ls
conf image scripts
[root@localhost es-6.7.1-head]# ls conf/
app.js Gruntfile.js
[root@localhost es-6.7.1-head]# ls image/
elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# ls scripts/
run_es-head.sh
[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
elasticsearch 6.7.1 e2667f5db289 11 months ago 812MB
elasticsearch-head 6.7.1 b19a5c98e43b 3 years ago 824MB
[root@localhost es-6.7.1-head]# vim conf/app.js
.....
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.128:9200"; #change to this host's IP
....
[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js
....
connect: {
server: {
options: {
hostname: '*', #add this line
port: 9100,
base: '.',
keepalive: true
}
}
....
[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh
#!/bin/bash
docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1
#the container listens on port 9100, the ES management (head) port
[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh
[root@localhost es-6.7.1-head]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c46189c3338b elasticsearch-head:6.7.1 "/bin/sh -c 'grunt s…" 42 seconds ago Up 37 seconds es-head-6.7.1
988abe7eedac elasticsearch:6.7.1 "/usr/local/bin/dock…" 9 minutes ago Up 9 minutes es6.7.1
[root@localhost es-6.7.1-head]# netstat -anput |grep 9100
tcp6 0 0 :::9100 :::* LISTEN 16840/grunt
Access the es-head plugin from a browser: http://192.168.171.128:9100/

Install es6.7.1 and the es6.7.1-head plugin on 192.168.171.129:
1) Install docker 19.03.2:
[root@localhost ~]# docker info
Client:
Debug Mode: false
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 2
Server Version: 19.03.2
[root@localhost ~]# sysctl -w vm.max_map_count=262144 #the default mmap limit is too low for elasticsearch; it needs at least 262144
[root@localhost ~]# sysctl -a |grep vm.max_map_count #verify
vm.max_map_count = 262144
[root@localhost ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
2) Install es6.7.1:
Upload the ES package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1.tar.gz
es-6.7.1.tar.gz
[root@localhost data]# tar -zxf es-6.7.1.tar.gz
[root@localhost data]# cd es-6.7.1
[root@localhost es-6.7.1]# ls
config image scripts
[root@localhost es-6.7.1]# ls config/
es.yml
[root@localhost es-6.7.1]# ls image/
elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# ls scripts/
run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# docker images |grep elasticsearch
elasticsearch 6.7.1 e2667f5db289 11 months ago 812MB
[root@localhost es-6.7.1]# vim config/es.yml
cluster.name: elasticsearch-cluster
node.name: es-node2
network.host: 0.0.0.0
network.publish_host: 192.168.171.129
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]
discovery.zen.minimum_master_nodes: 1
#cluster.name: the cluster name; it can be anything, but it must be identical on both ES nodes - nodes sharing the same cluster.name are treated as one cluster
#node.name: this node's name; it can be anything and does not need to resolve via hosts or match the hostname
#the following two lines are added on top of the defaults to allow cross-origin access:
#http.cors.enabled: true
#http.cors.allow-origin: '*'
##Note: the container exposes two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh
#!/bin/bash
docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1
#Note: the container exposes two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/ #the es user inside the container must be able to write here, otherwise the mount fails
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/ #the es user inside the container must be able to write here, otherwise the mount fails
[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3b0a0187db8 elasticsearch:6.7.1 "/usr/local/bin/dock…" 9 seconds ago Up 7 seconds es6.7.1
[root@localhost es-6.7.1]# netstat -anput |grep 9200
tcp6 0 0 :::9200 :::* LISTEN 14171/java
[root@localhost es-6.7.1]# netstat -anput |grep 9300
tcp6 0 0 :::9300 :::* LISTEN 14171/java
[root@localhost es-6.7.1]# cd
Access the ES service from a browser: http://192.168.171.129:9200/

3) Install the es6.7.1-head plugin:
Upload the es-head plugin package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1-head.tar.gz
es-6.7.1-head.tar.gz
[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz
[root@localhost data]# cd es-6.7.1-head
[root@localhost es-6.7.1-head]# ls
conf image scripts
[root@localhost es-6.7.1-head]# ls conf/
app.js Gruntfile.js
[root@localhost es-6.7.1-head]# ls image/
elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# ls scripts/
run_es-head.sh
[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
elasticsearch 6.7.1 e2667f5db289 11 months ago 812MB
elasticsearch-head 6.7.1 b19a5c98e43b 3 years ago 824MB
[root@localhost es-6.7.1-head]# vim conf/app.js
.....
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.129:9200"; #change to this host's IP
....
[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js
....
connect: {
server: {
options: {
hostname: '*', #add this line
port: 9100,
base: '.',
keepalive: true
}
}
....
[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh
#!/bin/bash
docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1
#the container listens on port 9100, the ES management (head) port
[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh
[root@localhost es-6.7.1-head]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4f5c967754b elasticsearch-head:6.7.1 "/bin/sh -c 'grunt s…" 12 seconds ago Up 7 seconds es-head-6.7.1
a3b0a0187db8 elasticsearch:6.7.1 "/usr/local/bin/dock…" 7 minutes ago Up 7 minutes es6.7.1
[root@localhost es-6.7.1-head]# netstat -anput |grep 9100
tcp6 0 0 :::9100 :::* LISTEN 14838/grunt
Access the es-head plugin from a browser: http://192.168.171.129:9100/

The head plugin on 192.168.171.128 shows the same cluster state as well, since both head instances manage the same cluster:
http://192.168.171.128:9100/

2. Install redis 4.0.10 with Docker (on 192.168.171.128)
Upload the redis 4.0.10 image:
[root@localhost ~]# ls redis_4.0.10.tar
redis_4.0.10.tar
[root@localhost ~]# docker load -i redis_4.0.10.tar
[root@localhost ~]# docker images |grep redis
gmprd.baiwang-inner.com/redis 4.0.10 f713a14c7f9b 13 months ago 425MB
[root@localhost ~]# mkdir -p /data/redis/conf #create the config directory
[root@localhost ~]# vim /data/redis/conf/redis.conf #custom config file
protected-mode no
port 6379
bind 0.0.0.0
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
pidfile "/usr/local/redis/redis_6379.pid"
loglevel notice
logfile "/opt/redis/logs/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
appendonly yes
dir "/opt/redis/data"
logfile "/opt/redis/logs/redis.log"
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxclients 4064
#appendonly yes enables data persistence (AOF)
#dir "/opt/redis/data" #persistence directory inside the container
#logfile "/opt/redis/logs/redis.log" #log path inside the container; this must be a file path, a directory path will not work
[root@localhost ~]# docker run -d --net=host --restart=always --name=redis4.0.10 -v /data/redis/conf/redis.conf:/opt/redis/conf/redis.conf -v /data/redis_data:/opt/redis/data -v /data/redis_logs:/opt/redis/logs gmprd.baiwang-inner.com/redis:4.0.10
[root@localhost ~]# docker ps |grep redis
735fb213ee41 gmprd.baiwang-inner.com/redis:4.0.10 "redis-server /opt/r…" 9 seconds ago Up 8 seconds redis4.0.10
[root@localhost ~]# netstat -anput |grep 6379
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 16988/redis-server
[root@localhost ~]# ls /data/redis_data/
appendonly.aof
[root@localhost ~]# ls /data/redis_logs/
redis.log
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> keys *
1) "k1"
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> quit
[root@localhost /]# exit
3. Install tomcat (not actually installed; we only create simulated tomcat and other Java logs) and filebeat 6.7.1 with Docker (192.168.171.130 and 192.168.171.131)
On 192.168.171.130:
Create several kinds of simulated Java logs, ship them into redis with filebeat, then have logstash read them back with multiline matching and write them into ES:
Note: do not create the log content in advance. Start filebeat first so it begins collecting, then write the log lines below with vim; otherwise filebeat will not pick up the pre-existing log content.
a) Create a simulated tomcat log:
[root@localhost ~]# mkdir /data/java-logs
[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}
[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out
2020-03-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]
Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]
at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
13-Oct-2020 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
13-Oct-2020 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
2020-03-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy
2020-03-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test1
2020-03-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test2
2020-03-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test3
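The grouping rule that the filebeat config will apply to this file can be sketched locally first: a line that starts with a date opens a new event, and every other line (the stack trace) is glued onto the previous one. This is plain awk imitating filebeat's multiline behaviour, not filebeat itself:

```shell
# Sketch of multiline grouping: a date-prefixed line starts a new event,
# anything else (stack-trace lines) is appended to the current event.
grouped=$(printf '%s\n' \
  '2020-03-09 13:07:48|ERROR|Context initialization failed' \
  'org.springframework.beans.factory.parsing.BeanDefinitionParsingException: ...' \
  '    at org.springframework.beans.factory.parsing.FailFastProblemReporter.error' \
  '2020-03-09 13:09:41|INFO|Closing Root WebApplicationContext' \
  | awk '/^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] / {if (ev) print ev; ev=$0; next}
         {ev=ev " | " $0}
         END {if (ev) print ev}')
echo "$grouped"   # two events: the ERROR line with its trace attached, and the INFO line
```

Four input lines collapse into two events, which is exactly what we want ES to index: one document per exception, not one per stack frame.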
b) Create a simulated system log (copy part of /var/log/messages):
[root@localhost ~]# vim /data/java-logs/message_logs/messages
Mar 09 14:19:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 09 14:19:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 09 14:19:06 localhost systemd: Stopped target Network is Online.
Mar 09 14:19:06 localhost systemd: Stopping Network is Online.
Mar 09 14:19:06 localhost systemd: Stopping Authorization Manager...
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuset
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpu
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuacct
Mar 09 14:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:09:27 UTC 2017
Mar 09 14:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
c) Create a simulated ES log:
[root@localhost ~]# vim /data/java-logs/es_logs/es_log
[2020-03-09T21:44:58,440][ERROR][o.e.b.Bootstrap ] Exception
java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]
[2020-03-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
[2020-03-09T21:46:32,174][INFO ][o.e.n.Node ] [] initializing ...
[2020-03-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]
[2020-03-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment ] [koccs5f] heap size [0315.6mb], compressed ordinary object pointers [true]
d) Create a simulated tomcat access log:
[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2020-03-09.txt
192.168.171.1 - - [09/Mar/2020:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
192.168.171.2 - - [09/Mar/2020:09:07:59 +0800] "GET / HTTP/1.1" 404 -
192.168.171.1 - - [09/Mar/2020:15:09:12 +0800] "GET / HTTP/1.1" 200 11250
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives
192.168.171.2 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103
192.168.171.3 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576
192.168.171.5 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401
192.168.171.1 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103
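As an aside, these access-log lines follow the usual tomcat access-log layout, where whitespace-separated field 9 is the HTTP status code. A hypothetical awk one-liner (not part of the pipeline being built here) makes that structure visible:

```shell
# Count responses per HTTP status in a few simulated access-log lines.
statuses=$(printf '%s\n' \
  '192.168.171.1 - - [09/Mar/2020:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -' \
  '192.168.171.1 - - [09/Mar/2020:15:09:12 +0800] "GET / HTTP/1.1" 200 11250' \
  '192.168.171.2 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103' \
  | awk '{print $9}' | sort | uniq -c)
echo "$statuses"   # counts per status, e.g. "2 200" and "1 404"
```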
Install filebeat 6.7.1:
[root@localhost ~]# cd /data/
[root@localhost data]# ls filebeat6.7.1.tar.gz
filebeat6.7.1.tar.gz
[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz
[root@localhost data]# cd filebeat6.7.1
[root@localhost filebeat6.7.1]# ls
conf image scripts
[root@localhost filebeat6.7.1]# ls conf/
filebeat.yml filebeat.yml.bak
[root@localhost filebeat6.7.1]# ls image/
filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# ls scripts/
run_filebeat6.7.1.sh
[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# docker images |grep filebeat
docker.elastic.co/beats/filebeat 6.7.1 04fcff75b160 11 months ago 279MB
[root@localhost filebeat6.7.1]# cat conf/filebeat.yml
filebeat.inputs:
#----- the inputs below were added -----
#system log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/message_logs/messages
fields:
log_source: system-171.130
#tomcat catalina log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/tomcat_logs/catalina.out
fields:
log_source: catalina-log-171.130
multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
multiline.negate: true
multiline.match: after
# the regex above matches lines that begin with a date, e.g. 2004-02-29
# log_source: xxx lets logstash tell the log types apart: everything lands in redis under a single key, so logstash uses this field to decide where a record came from and route each type of log to its own index in ES
#es log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/es_logs/es_log
fields:
log_source: es-log-171.130
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
#the regex above matches lines starting with [, where \ is the escape character
#tomcat access log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2020-03-09.txt
fields:
log_source: tomcat-access-log-171.130
multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'
multiline.negate: true
multiline.match: after
#----- end of added inputs -----
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 3
setup.kibana:
#to write directly into es instead:
#output.elasticsearch:
# hosts: ["192.168.171.128:9200"]
#below, write into redis:
#filebeat-common below is a custom key and must match the key logstash reads from redis; filebeat on every node can write to this same key, as long as each input sets log_source so logstash can split the logs into separate ES indices
output.redis:
hosts: ["192.168.171.128"]
port: 6379
password: "123456"
key: "filebeat-common"
db: 0
datatype: list
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
#Note: by default the log paths on the host differ from the paths inside the container, so if the config references host paths the container cannot find them
##The fix: reference container paths in the config, and bind-mount the host log directory onto the container log directory
#/usr/share/filebeat/logs/*.log is the path inside the container
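The three multiline.pattern regexes above can be sanity-checked locally with grep before starting filebeat. One caveat: grep -E does not support \d, so [0-9] is substituted in the access-log pattern below; the logic is otherwise identical to the config:

```shell
# Date-prefix pattern from the catalina input (matches e.g. 2020-03-09).
datep='^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
# IP-prefix pattern from the access-log input, with \d rewritten as [0-9].
ipp='^((2(5[0-5]|[0-4][0-9]))|[0-1]?[0-9]{1,2})(\.((2(5[0-5]|[0-4][0-9]))|[0-1]?[0-9]{1,2})){3}'

echo '2020-03-09 13:07:48|ERROR|...'       | grep -Eq "$datep" && echo 'date line: new event'
echo 'at org.springframework.Foo.bar(..)'  | grep -Eq "$datep" || echo 'trace line: appended to previous event'
echo '[2020-03-09T21:44:58,440][ERROR]'    | grep -Eq '^\['    && echo 'es line: new event'
echo '192.168.171.1 - - [...] "GET /"'     | grep -Eq "$ipp"   && echo 'access line: new event'
```

Lines that match start a new event; non-matching lines are folded into the previous one (multiline.negate: true with multiline.match: after).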
[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh
#!/bin/bash
docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1
#Note: by default the log paths on the host differ from the paths inside the container, so if the config references host paths the container cannot find them
#The fix: reference container paths in the config, and bind-mount the host log directory onto the container log directory
[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh #once running, filebeat starts shipping logs to redis
[root@localhost filebeat6.7.1]# docker ps |grep filebeat
1f2bbd450e7e docker.elastic.co/beats/filebeat:6.7.1 "/usr/local/bin/dock…" 8 seconds ago Up 7 seconds filebeat6.7.1
[root@localhost filebeat6.7.1]# cd
On 192.168.171.131:
Create several kinds of simulated Java logs, ship them into redis with filebeat, then have logstash read them back with multiline matching and write them into ES:
Note: do not create the log content in advance. Start filebeat first so it begins collecting, then write the log lines below with vim; otherwise filebeat will not pick up the pre-existing log content.
a) Create a simulated tomcat log:
[root@localhost ~]# mkdir /data/java-logs
[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}
[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out
2050-05-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]
Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]
at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
13-Oct-2050 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
13-Oct-2050 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
2050-05-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy
2050-05-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test1
2050-05-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test2
2050-05-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test3
b) Create sample system logs (an excerpt taken from /var/log/messages):
[root@localhost ~]# vim /data/java-logs/message_logs/messages
Mar 50 50:50:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 50 50:50:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 50 50:50:06 localhost systemd: Stopped target Network is Online.
Mar 50 50:50:06 localhost systemd: Stopping Network is Online.
Mar 50 50:50:06 localhost systemd: Stopping Authorization Manager...
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuset
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpu
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuacct
Mar 50 50:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:50:27 UTC 2050
Mar 50 50:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
c) Create sample Elasticsearch logs:
[root@localhost ~]# vim /data/java-logs/es_logs/es_log
[2050-50-09T21:44:58,440][ERROR][o.e.b.Bootstrap ] Exception
java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]
[2050-50-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
[2050-50-09T21:46:32,174][INFO ][o.e.n.Node ] [] initializing ...
[2050-50-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]
[2050-50-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment ] [koccs5f] heap size [5015.6mb], compressed ordinary object pointers [true]
d) Create sample Tomcat access logs:
[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2050-50-09.txt
192.168.150.1 - - [09/Mar/2050:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
192.168.150.2 - - [09/Mar/2050:09:07:59 +0800] "GET / HTTP/1.1" 404 -
192.168.150.1 - - [09/Mar/2050:15:09:12 +0800] "GET / HTTP/1.1" 200 11250
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives
192.168.150.2 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103
192.168.150.3 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576
192.168.150.5 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401
192.168.150.1 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103
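Steps a) through d) above assume a fixed directory layout under /data/java-logs, which is the directory the Filebeat container later bind-mounts. A minimal sketch to recreate and verify that layout; it uses a scratch directory so it is safe to run anywhere, and you would substitute /data/java-logs on a real host:

```shell
# Recreate the expected log layout in a scratch directory (stand-in for /data/java-logs)
base=$(mktemp -d)
mkdir -p "$base/tomcat_logs" "$base/message_logs" "$base/es_logs"
touch "$base/tomcat_logs/catalina.out" \
      "$base/tomcat_logs/localhost_access_log.2050-50-09.txt" \
      "$base/message_logs/messages" \
      "$base/es_logs/es_log"
# List all files relative to the base directory
(cd "$base" && find . -type f | sort)
```

If the paths printed here do not match the `paths:` entries in filebeat.yml (after the bind mount is applied), Filebeat will silently collect nothing.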
Install Filebeat 6.7.1:
[root@localhost ~]# cd /data/
[root@localhost data]# ls filebeat6.7.1.tar.gz
filebeat6.7.1.tar.gz
[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz
[root@localhost data]# cd filebeat6.7.1
[root@localhost filebeat6.7.1]# ls
conf image scripts
[root@localhost filebeat6.7.1]# ls conf/
filebeat.yml filebeat.yml.bak
[root@localhost filebeat6.7.1]# ls image/
filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# ls scripts/
run_filebeat6.7.1.sh
[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# docker images |grep filebeat
docker.elastic.co/beats/filebeat 6.7.1 04fcff75b160 11 months ago 279MB
[root@localhost filebeat6.7.1]# cat conf/filebeat.yml
filebeat.inputs:
#Added section begins:——————————————
#System log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/message_logs/messages
fields:
log_source: system-171.131
#Tomcat catalina log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/tomcat_logs/catalina.out
fields:
log_source: catalina-log-171.131
multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
multiline.negate: true
multiline.match: after
# The regex above matches lines that begin with a date, e.g. lines starting with 2004-02-29
# log_source: xxx means: only a single key is written into Redis, so logstash cannot tell the log types apart on its own; this field lets logstash identify the source of each log and write it to the matching index in ES, one index per log type
#ES log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/es_logs/es_log
fields:
log_source: es-log-171.131
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
#The regex above matches lines that start with [; the \ is an escape.
#Tomcat access log:
- type: log
enabled: true
paths:
- /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2050-50-09.txt
fields:
log_source: tomcat-access-log-171.131
multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'
multiline.negate: true
multiline.match: after
#Added section ends:—————————————————————
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 3
setup.kibana:
#To write directly into ES instead:
#output.elasticsearch:
#  hosts: ["192.168.171.128:9200"]
#To write into Redis:
#filebeat-common below is a custom key; it must match the key logstash reads from Redis. Filebeat on multiple nodes can all write to this same key, but each node defines its own log_source so logstash can tell the logs apart and store them in separate indices in ES
output.redis:
hosts: ["192.168.171.128"]
port: 6379
password: "123456"
key: "filebeat-common"
db: 0
datatype: list
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
#Note: by default the log path on the host and the log path inside the container differ, so if the config used the host path, the container would not find the logs
##The fix: configure the in-container path here, then bind-mount the host log directory onto the container log directory
#/usr/share/filebeat/logs/*.log is the log path inside the container
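The three multiline.pattern regexes in the config above can be sanity-checked locally with grep -E. One caveat: POSIX ERE as used by grep has no \d, so the \d in the access-log pattern is rewritten as [0-9] below; the sample lines come from the logs created earlier:

```shell
# Date-start pattern (catalina.out): matches lines that begin a new log event
date_re='^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
# IP-start pattern (access log), with \d rewritten as [0-9] for POSIX grep
ip_re='^((2(5[0-5]|[0-4][0-9]))|[0-1]?[0-9]{1,2})(\.((2(5[0-5]|[0-4][0-9]))|[0-1]?[0-9]{1,2})){3}'

echo '2050-05-09 13:07:48|INFO|...'             | grep -qE "$date_re" && echo "catalina: new event"
echo 'at java.lang.Thread.run(Thread.java:748)' | grep -qE "$date_re" || echo "catalina: continuation line"
echo '[2050-50-09T21:44:58,440][ERROR]'         | grep -qE '^\['      && echo "es: new event"
echo '192.168.150.1 - - [09/Mar/2050:09:07:59]' | grep -qE "$ip_re"   && echo "access: new event"
```

With negate: true and match: after, a line that does not match the pattern is appended to the previous event, which is how the stack-trace lines stay attached to the log line that produced them.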
[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh
#!/bin/bash
docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1
#Note: by default the log path on the host and the log path inside the container differ, so if the config used the host path, the container would not find the logs
#The fix: configure the in-container path in the config, then bind-mount the host log directory onto the container log directory
[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh #once running, it starts shipping logs to Redis
[root@localhost filebeat6.7.1]# docker ps |grep filebeat
3cc559a84904 docker.elastic.co/beats/filebeat:6.7.1 "/usr/local/bin/dock…" 8 seconds ago Up 7 seconds filebeat6.7.1
[root@localhost filebeat6.7.1]# cd
Check Redis to confirm the logs have been written (on 192.168.171.128; both hosts write with the same key, so there is only one key name; the logs are separated by their log_source tag when filtered into ES):
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> KEYS *
1) "filebeat-common"
127.0.0.1:6379> quit
[root@localhost /]# exit
4. Install Logstash 6.7.1 with Docker (on 192.168.171.129): read the logs out of Redis and write them into the ES cluster
[root@localhost ~]# cd /data/
[root@localhost data]# ls logstash6.7.1.tar.gz
logstash6.7.1.tar.gz
[root@localhost data]# tar -zxf logstash6.7.1.tar.gz
[root@localhost data]# cd logstash6.7.1
[root@localhost logstash6.7.1]# ls
config image scripts
[root@localhost logstash6.7.1]# ls config/
GeoLite2-City.mmdb log4j2.properties logstash.yml pipelines.yml_bak startup.options
jvm.options logstash-sample.conf pipelines.yml redis_out_es_in.conf
[root@localhost logstash6.7.1]# ls image/
logstash_6.7.1.tar
[root@localhost logstash6.7.1]# ls scripts/
run_logstash6.7.1.sh
[root@localhost logstash6.7.1]# docker load -i image/logstash_6.7.1.tar
[root@localhost logstash6.7.1]# docker images |grep logstash
logstash 6.7.1 1f5e249719fc 11 months ago 778MB
[root@localhost logstash6.7.1]# cat config/pipelines.yml #confirm the config and the conf directory it references
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/usr/share/logstash/config/*.conf" #directory inside the container
  pipeline.workers: 3
[root@localhost logstash6.7.1]# cat config/redis_out_es_in.conf #review and confirm the config
input {
redis {
host => "192.168.171.128"
port => "6379"
password => "123456"
db => "0"
data_type => "list"
key => "filebeat-common"
}
}
#The date filter below: its default target is @timestamp, so time_local overwrites @timestamp. On the first collection, or when buffered data is written, the indexing time can lag behind the actual log time, making timestamps inaccurate; the date plugin keeps the indexed time consistent with the actual log time.
filter {
date {
locale => "en"
match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
}
}
output {
if [fields][log_source] == 'system-171.130' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-system-171.130-log-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'system-171.131' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-system-171.131-log-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'catalina-log-171.130' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-catalina-171.130-log-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'catalina-log-171.131' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-catalina-171.131-log-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'es-log-171.130' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-es-log-171.130-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'es-log-171.131' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-es-log-171.131-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'tomcat-access-log-171.130' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-tomcat-access-171.130-log-%{+YYYY.MM.dd}"
}
}
if [fields][log_source] == 'tomcat-access-log-171.131' {
elasticsearch {
hosts => ["192.168.171.128:9200"]
index => "logstash-tomcat-access-171.131-log-%{+YYYY.MM.dd}"
}
}
stdout { codec=> rubydebug }
#codec => rubydebug is for debugging; it prints the events to the console
}
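The output block above is just a static mapping from fields.log_source to a daily index name, where logstash expands %{+YYYY.MM.dd} to the event's date. The same mapping can be sketched as a small shell function; index_for is a hypothetical helper for illustration only, not part of logstash:

```shell
# Hypothetical helper mirroring the logstash conditionals: log_source -> ES index name
index_for() {
  day=$(date +%Y.%m.%d)  # same shape as logstash's %{+YYYY.MM.dd} (event date there, today here)
  case "$1" in
    system-*)            echo "logstash-$1-log-$day" ;;
    catalina-log-*)      echo "logstash-catalina-${1#catalina-log-}-log-$day" ;;
    es-log-*)            echo "logstash-$1-$day" ;;
    tomcat-access-log-*) echo "logstash-tomcat-access-${1#tomcat-access-log-}-log-$day" ;;
  esac
}

index_for system-171.130       # logstash-system-171.130-log-<today>
index_for catalina-log-171.131 # logstash-catalina-171.131-log-<today>
```

This also shows why a new index appears in ES every day for every log type; the Kibana index patterns created later use wildcards to cover all the daily indices at once.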
[root@localhost logstash6.7.1]# cat scripts/run_logstash6.7.1.sh
#!/bin/bash
docker run -d --name logstash6.7.1 --net=host --restart=always -v /data/logstash6.7.1/config:/usr/share/logstash/config logstash:6.7.1
[root@localhost logstash6.7.1]# sh scripts/run_logstash6.7.1.sh #read logs from Redis and write them into ES
[root@localhost logstash6.7.1]# docker ps |grep logstash
980aefbc077e logstash:6.7.1 "/usr/local/bin/dock…" 9 seconds ago Up 7 seconds logstash6.7.1
Check on the ES cluster, as follows:
(screenshot)
Check Redis again; the data has been read off and the key is now empty:
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> KEYS *
(empty list or set)
127.0.0.1:6379> quit
5. Install Kibana 6.7.1 with Docker (on 192.168.171.132): read the logs from ES and display them
[root@localhost ~]# cd /data/
[root@localhost data]# ls kibana6.7.1.tar.gz
kibana6.7.1.tar.gz
[root@localhost data]# tar -zxf kibana6.7.1.tar.gz
[root@localhost data]# cd kibana6.7.1
[root@localhost kibana6.7.1]# ls
config image scripts
[root@localhost kibana6.7.1]# ls config/
kibana.yml
[root@localhost kibana6.7.1]# ls image/
kibana_6.7.1.tar
[root@localhost kibana6.7.1]# ls scripts/
run_kibana6.7.1.sh
[root@localhost kibana6.7.1]# docker load -i image/kibana_6.7.1.tar
[root@localhost kibana6.7.1]# docker images |grep kibana
kibana 6.7.1 860831fbf9e7 11 months ago 677MB
[root@localhost kibana6.7.1]# cat config/kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://192.168.171.128:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
[root@localhost kibana6.7.1]# cat scripts/run_kibana6.7.1.sh
#!/bin/bash
docker run -d --name kibana6.7.1 --net=host --restart=always -v /data/kibana6.7.1/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.7.1
[root@localhost kibana6.7.1]# sh scripts/run_kibana6.7.1.sh #run; it reads from ES and displays in Kibana
[root@localhost kibana6.7.1]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bf16aaeaf4d9 kibana:6.7.1 "/usr/local/bin/kiba…" 16 seconds ago Up 15 seconds kibana6.7.1
[root@localhost kibana6.7.1]# netstat -anput |grep 5601 #Kibana port
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 2418/node
Open Kibana in a browser: http://192.168.171.132:5601
(screenshot)
In Kibana, create the index patterns one by one (keep them aligned with the ES index names for easy lookup) to query and display the data stored in ES:
(1) Create the pattern logstash-catalina-*: click Management, as follows:
(screenshots)
Enter the index pattern name logstash-catalina-*, click Next step, as follows:
(screenshot)
Choose the time field @timestamp and click Create index pattern, as follows:
(screenshot)
(2) Create the pattern logstash-es-log-*
(screenshot)
Click Next step, as follows:
(screenshot)
Choose the time field and click Create index pattern, as follows:
(screenshot)
(3) Create the pattern logstash-system-*
(screenshot)
Click Next step, as follows:
(screenshot)
Choose the time field and click Create index pattern, as follows:
(screenshot)
(4) Create the pattern logstash-tomcat-access-*
(screenshot)
Click Next step, as follows:
(screenshot)
Click Create index pattern, as follows:
(screenshot)
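Each Kibana index pattern is a glob over the ES index names, so a single pattern picks up every daily index of that type from both hosts. A quick local sketch of which indices logstash-catalina-* would capture; the index names are assumed from the logstash config above, with dates chosen for illustration:

```shell
# Sample daily index names, as the logstash config would create them
indices="logstash-catalina-171.130-log-2050.05.09
logstash-catalina-171.131-log-2050.05.10
logstash-system-171.130-log-2050.05.09
logstash-tomcat-access-171.131-log-2050.05.09"

matches=0
for idx in $indices; do           # word splitting on newlines is intended here
  case "$idx" in
    logstash-catalina-*) echo "captured: $idx"; matches=$((matches + 1)) ;;
  esac
done
echo "logstash-catalina-* captures $matches of 4 indices"
```

Here the pattern captures the catalina indices from both hosts and both days while leaving the system and tomcat-access indices to their own patterns.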
View the logs: click Discover, as follows: #Note: the earlier tests generated few access-log entries, so more logs were written afterwards to make testing easier.
(screenshots)
Click the arrow on any entry to expand it, as follows:
(screenshots)