Deploying ELK + Redis + Kafka

## 1. Environment Planning

### 1.1 Environment list

| Type | Tool | Version | Home directory |
| -------- | ------------- | --------- | ---------------------------- |
| OS | CentOS | 7.9.2009 | |
| Java | JDK | 1.8.0_191 | /usr/java/jdk1.8.0_191-amd64 |
| Cache | Redis | 6.2.1 | /home/tfd/app/redis |
| Search engine | Elasticsearch | 7.11.2 | |

### 1.2 Deployment layout

| Hostname | IP address | Notes |
| --------------- | ------------- | ----- |
| dc-did-server-1 | 192.168.2.170 | |
| dc-did-server-2 | 192.168.2.171 | |
| dc-did-server-3 | 192.168.2.172 | |

## 2. Server and Account Permission Checks

### 2.1 Network check

Check connectivity between the servers:

```shell
ping 192.168.2.170
ping 192.168.2.171
ping 192.168.2.172
```
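The per-host pings above can be rolled into a single loop that reports every unreachable node at a glance (host list copied from the plan above):

```shell
# Ping each node once with a 2-second timeout and summarize the result.
for host in 192.168.2.170 192.168.2.171 192.168.2.172; do
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "$host reachable"
    else
        echo "$host UNREACHABLE"
    fi
done
```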

#### 2.1.1 Add hosts entries

```shell
sudo tee -a /etc/hosts <<-'EOF'
192.168.2.170 dc-did-server-1
192.168.2.171 dc-did-server-2
192.168.2.172 dc-did-server-3
EOF
```

### 2.2 OS version check

```shell
cat /etc/redhat-release
```

### 2.3 Server hardware checks

#### 2.3.1 Confirm the CPU core count

```shell
cat /proc/cpuinfo | grep family | wc -l
```

#### 2.3.2 Confirm the memory size

```shell
free -h | grep Mem | awk '{print $2}'
```

#### 2.3.3 Confirm the disks

```shell
df -h
```

**Check for unmounted disks**

```shell
sudo fdisk -l
```

### 2.4 Account permission check

In a typical engagement the customer provides only a regular (non-root) account, but some of the configuration below requires root privileges; explain this to the customer and request the access separately.

## 3. System Initialization

### 3.1 Conventions

1. All configuration must be done through configuration files, never via one-off commands.
2. Create a `setup` directory under the regular user for components awaiting deployment, e.g. `/home/tfd/setup`. If the customer mandates other directories, follow its conventions.
3. Create an `app` directory under the regular user for all running components, e.g. `/home/tfd/app`. If the customer mandates other directories, follow its conventions.
4. All services must run as the regular user; never run services or middleware as root.

### 3.2 Network configuration

#### 3.2.1 Add the hosts mapping

```sh
sudo vim /etc/hosts

192.168.2.170 dc-did-server-1
192.168.2.171 dc-did-server-2
192.168.2.172 dc-did-server-3
```

### 3.3 File handle tuning

#### 3.3.1 limits (file handle counts)

**This configuration takes effect after a reboot; as a temporary measure run `ulimit -n 65535` in the current shell (`ulimit` is a shell builtin, so `sudo ulimit` does not work).**

```shell
sudo vim /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
* - memlock unlimited

# allow the tfd user to lock an unlimited amount of memory
tfd soft memlock unlimited
tfd hard memlock unlimited
```

**Note: `tfd` is the regular user the services run as.**

#### 3.3.2 Per-user nproc limits

```shell
sudo vim /etc/security/limits.d/20-nproc.conf

* soft nproc 65536
root soft nproc unlimited
```

#### 3.3.3 Kernel parameter tuning

```shell
sudo vim /etc/sysctl.conf

# maximum number of VMAs (virtual memory areas) a single process may own
vm.max_map_count=655360
# memory overcommit policy; required on the Redis servers, optional elsewhere
vm.overcommit_memory=1
# maximum listen backlog per port; a larger queue helps absorb DoS attacks
net.core.somaxconn=2048
```

**Apply the settings**

```shell
sudo sysctl -p
```

### 3.4 Install the JDK

```shell
vim jdk.sh

#!/bin/sh
rpm -ivh jdk-8u191-linux-x64.rpm
echo "export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64" >> /etc/profile
```

**Create start/stop scripts for the two Redis instances**

```shell
tee -a /home/tfd/app/redis_cluster/6379/6379.sh <<-'EOF'
#!/bin/sh
REDISPORT=6379
EXEC=/home/tfd/app/redis/bin/redis-server
CLIEXEC=/home/tfd/app/redis/bin/redis-cli
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/home/tfd/app/redis_cluster/6379/redis.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
EOF
```

```shell
tee -a /home/tfd/app/redis_cluster/6380/6380.sh <<-'EOF'
#!/bin/sh
REDISPORT=6380
EXEC=/home/tfd/app/redis/bin/redis-server
CLIEXEC=/home/tfd/app/redis/bin/redis-cli
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/home/tfd/app/redis_cluster/6380/redis.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
EOF
```

**Add execute permission**

```shell
chmod +x /home/tfd/app/redis_cluster/6379/6379.sh;
chmod +x /home/tfd/app/redis_cluster/6380/6380.sh;
```

### 4.8 Copy to the other nodes

**Copy the app directory to the other nodes, then change the `bind` IP in each node's redis.conf (run the `sed` commands on the target node).**

```shell
scp -r /home/tfd/app/ tfd@dc-did-server-2:/home/tfd;
# on dc-did-server-2:
sed -i 's/bind 192.168.2.170/bind 192.168.2.171/g' /home/tfd/app/redis_cluster/6379/redis.conf;
sed -i 's/bind 192.168.2.170/bind 192.168.2.171/g' /home/tfd/app/redis_cluster/6380/redis.conf;

scp -r /home/tfd/app/ tfd@dc-did-server-3:/home/tfd;
# on dc-did-server-3:
sed -i 's/bind 192.168.2.170/bind 192.168.2.172/g' /home/tfd/app/redis_cluster/6379/redis.conf;
sed -i 's/bind 192.168.2.170/bind 192.168.2.172/g' /home/tfd/app/redis_cluster/6380/redis.conf;
```

### 4.9 Start each node

```shell
/home/tfd/app/redis_cluster/6379/6379.sh start;
/home/tfd/app/redis_cluster/6380/6380.sh start;
```

### 4.10 Check that Redis is up

```shell
ps -efl | grep redis
```

### 4.11 Create the cluster

#### 4.11.1 Create the three masters

```shell
redis-cli --cluster create 192.168.2.170:6379 192.168.2.171:6379 192.168.2.172:6379 --cluster-replicas 0
```

#### 4.11.2 Attach each slave to a master

```shell
# replace <master-id> with the node ID printed by the create step (or by `cluster nodes`)
redis-cli --cluster add-node 192.168.2.171:6380 192.168.2.170:6379 --cluster-slave --cluster-master-id <master-id>
redis-cli --cluster add-node 192.168.2.172:6380 192.168.2.171:6379 --cluster-slave --cluster-master-id <master-id>
redis-cli --cluster add-node 192.168.2.170:6380 192.168.2.172:6379 --cluster-slave --cluster-master-id <master-id>
```

### 4.12 Verify the cluster

```shell
redis-cli -c -h 192.168.2.170 -p 6379 cluster nodes
redis-cli -c -h 192.168.2.170 -p 6379
192.168.2.170:6379>
```

##### 2.2.6.4 Create a Kafka start script

```shell
tee -a /home/tfd/app/kafka/start.sh <<-'EOF'
#!/bin/bash
nohup /home/tfd/app/kafka/bin/kafka-server-start.sh /home/tfd/app/kafka/config/server.properties &> /home/tfd/app/kafka/logs/kafka.log &

EOF
# Add execute permission
chmod +x /home/tfd/app/kafka/start.sh
```
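A matching stop script is handy alongside start.sh. This is a sketch under the directory layout used in this guide (`$HOME/app/kafka` resolves to /home/tfd/app/kafka when run as the tfd user); `kafka-server-stop.sh` ships with Kafka:

```shell
# Create a stop script next to start.sh (paths assumed from this guide).
KAFKA_HOME=${KAFKA_HOME:-$HOME/app/kafka}
mkdir -p "$KAFKA_HOME"
tee "$KAFKA_HOME/stop.sh" <<-'EOF'
#!/bin/bash
# kafka-server-stop.sh signals the broker started by start.sh
/home/tfd/app/kafka/bin/kafka-server-stop.sh
EOF
chmod +x "$KAFKA_HOME/stop.sh"
```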

##### 2.2.6.5 Start Kafka

```shell
sh /home/tfd/app/kafka/start.sh
```

##### 2.2.6.6 Test connectivity

```shell
/home/tfd/app/kafka/bin/kafka-console-producer.sh --broker-list 192.168.2.170:9092 --topic test
```

Run the consumer in another terminal:

```shell
/home/tfd/app/kafka/bin/kafka-console-consumer.sh --bootstrap-server dc-did-server-1:9092,dc-did-server-2:9092,dc-did-server-3:9092 --from-beginning --topic test
```

`jps` should show both QuorumPeerMain and Kafka:

```sh
107261 QuorumPeerMain
109054 Kafka
```
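The same check can be scripted so it is easy to repeat on every node (uses `jps` from the JDK installed earlier):

```shell
# Verify that both the ZooKeeper and Kafka JVMs appear in jps output.
for proc in QuorumPeerMain Kafka; do
    if jps 2>/dev/null | grep -q "$proc"; then
        echo "$proc running"
    else
        echo "$proc NOT running"
    fi
done
```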

#### 2.2.7 Deploy Logstash

##### 2.2.7.1 Download

```shell
wget -P /home/tfd/app/ https://artifacts.elastic.co/downloads/logstash/logstash-6.8.14.tar.gz
cd /home/tfd/app/;
tar zxvf logstash-6.8.14.tar.gz;
mv logstash-6.8.14 logstash;
```

##### 2.2.7.2 Logstash configuration file

```shell
vim /home/tfd/app/logstash/logstash.conf
input {
  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_dynamic_data"]
    client_id => "logstash-0-0"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_dynamic_data"
      "s_estype" => "dynamic_data"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_persist_ios"]
    client_id => "logstash-0-1"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_persist_ios"
      "s_estype" => "caesar_deviceid"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_persist_android"]
    client_id => "logstash-0-2"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_persist_android"
      "s_estype" => "caesar_deviceid"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_persist_pc"]
    client_id => "logstash-0-3"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_persist_pc"
      "s_estype" => "caesar_deviceid"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_data_bank_make_ios"]
    client_id => "logstash-0-4"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_data_bank_make_ios_%{+YYYY.MM.dd}"
      "s_estype" => "make"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_data_bank_make_android"]
    client_id => "logstash-0-5"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_data_bank_make_android_%{+YYYY.MM.dd}"
      "s_estype" => "make"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_data_bank_make_pc"]
    client_id => "logstash-0-6"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_data_bank_make_pc_%{+YYYY.MM.dd}"
      "s_estype" => "make"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_data_bank_get"]
    client_id => "logstash-0-7"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_data_bank_get_%{+YYYY.MM.dd}"
      "s_estype" => "get"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_event_upload"]
    client_id => "logstash-0-8"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_event_upload"
      "s_estype" => "caesar_event_upload"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_text_discern"]
    client_id => "logstash-0-9"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_text_discern"
      "s_estype" => "caesar_text_discern"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_app_labels_info"]
    client_id => "logstash-0-10"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_app_labels_info"
      "s_estype" => "app_labels_info"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_sdk_error_event"]
    client_id => "logstash-0-11"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_sdk_error_event"
      "s_estype" => "caesar_sdk_error_event"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_call_record_upload"]
    client_id => "logstash-0-12"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_call_record_upload"
      "s_estype" => "caesar_call_record_upload"
    }
  }
  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_attack"]
    client_id => "logstash-0-13"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_attack"
      "s_estype" => "caesar_attack"
    }
  }

  kafka {
    bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
    topics => ["DIDLIST_caesar_environment_detection"]
    client_id => "logstash-0-14"
    group_id => "logstash"
    codec => "json"
    add_field => {
      "s_index" => "caesar_environment_detection"
      "s_estype" => "caesar_environment_detection"
    }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.2.170:9200","192.168.2.171:9200","192.168.2.172:9200"]
    document_type => "%{s_estype}"
    document_id => "%{id}"
    index => "%{s_index}"
  }
  #stdout { codec => rubydebug }
}
```
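With this many near-identical kafka inputs, a duplicated `client_id` is an easy copy-paste mistake (each consumer needs a unique one). A quick check over the file written above catches it:

```shell
# Print any client_id value that appears more than once in the pipeline config.
CONF=${CONF:-/home/tfd/app/logstash/logstash.conf}
if [ -f "$CONF" ]; then
    grep -o 'client_id => "[^"]*"' "$CONF" | sort | uniq -d
else
    echo "config not found: $CONF"
fi
```

Logstash can also validate the pipeline syntax itself with `bin/logstash -f logstash.conf --config.test_and_exit` before the real start.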

##### 2.2.7.3 Start Logstash

```shell
nohup /home/tfd/app/logstash/bin/logstash -f /home/tfd/app/logstash/logstash.conf &
```
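Because Logstash is detached with nohup, it is worth confirming that the JVM actually stayed up; on failure, `nohup.out` in the launch directory holds the startup error:

```shell
# Report whether a Logstash process is currently running.
if pgrep -f logstash > /dev/null 2>&1; then
    echo "logstash running"
else
    echo "logstash not running"
fi
```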
