Project Deployment: ELK + Redis + Kafka

1. Environment Planning

1.1 Environment List

Type           Tool           Version      Home Directory
OS             CentOS         7.9.2009     -
Java           JDK            1.8.0_191    /usr/java/jdk1.8.0_191-amd64
Cache          Redis          6.2.1        /home/tfd/app/redis
Search engine  Elasticsearch  7.11.2       /home/tfd/app/elasticsearch

1.2 Deployment Layout

Hostname         IP Address     Services
dc-did-server-1  192.168.2.170  Redis x2, Elasticsearch, ZooKeeper, Kafka, Kibana, Logstash
dc-did-server-2  192.168.2.171  Redis x2, Elasticsearch, ZooKeeper, Kafka
dc-did-server-3  192.168.2.172  Redis x2, Elasticsearch, ZooKeeper, Kafka

2. Server and Account Permission Checks

2.1 Network check

Check connectivity between the servers:

ping 192.168.2.170
ping 192.168.2.171
ping 192.168.2.172
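
A quick loop checks all nodes at once (a minimal sketch; adjust the host list to your environment):

for ip in 192.168.2.170 192.168.2.171 192.168.2.172; do
    # two probes with a 2-second timeout per host
    ping -c 2 -W 2 "$ip" >/dev/null && echo "$ip OK" || echo "$ip UNREACHABLE"
done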

2.1.1 Add hosts entries

sudo tee -a /etc/hosts <<-'EOF'
192.168.2.170 dc-did-server-1
192.168.2.171 dc-did-server-2
192.168.2.172 dc-did-server-3
EOF

2.2 OS version check

cat /etc/redhat-release

2.3 Server configuration check

2.3.1 Confirm CPU core count

grep -c ^processor /proc/cpuinfo

2.3.2 Confirm memory

free -h | grep Mem | awk '{print $2}'

2.3.3 Confirm disks

df -h

Check for unmounted disks:

sudo fdisk -l

2.4 Account permission check

In a typical engagement the client grants only an unprivileged user account, but some of our configuration steps require root. Explain this to the client and request the access separately.

3. System Initialization

3.1 Conventions

  1. All configuration must be made through configuration files; ad-hoc commands are not allowed.

  2. Create a setup folder under the unprivileged user's home and upload the components to deploy there.

    e.g. /home/tfd/setup #if the client has its own directory conventions, follow them.

  3. Create an app folder under the unprivileged user's home to hold all running components.

    e.g. /home/tfd/app #if the client has its own directory conventions, follow them.

  4. Services must run as the unprivileged user; never run services/middleware as root.

3.2 Network configuration

3.2.1 Add the hosts mappings

sudo vim /etc/hosts
192.168.2.170 dc-did-server-1
192.168.2.171 dc-did-server-2
192.168.2.172 dc-did-server-3

3.3 File handle tuning

3.3.1 limits (open file limits)

These settings take effect after a reboot (or re-login). As a temporary measure, run ulimit -n 65535 in a root shell (ulimit is a shell builtin, so sudo ulimit does not work).

sudo vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
* - memlock unlimited
#allow the tfd user to lock an unlimited amount of memory
tfd soft memlock unlimited
tfd hard memlock unlimited

Note: tfd is the unprivileged user that runs the services.

3.3.2 Per-user nproc limits file

sudo vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     65536
root       soft    nproc     unlimited
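
After re-login, the effective limits can be verified (a quick sanity check for the tfd user):

ulimit -n    #max open files, expected 65536
ulimit -u    #max user processes
ulimit -l    #max locked memory, expected unlimited for tfd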

3.3.3 Kernel parameter tuning

sudo vim /etc/sysctl.conf
#maximum number of VMAs (virtual memory areas) a single process may own
vm.max_map_count=655360
#memory allocation policy; required on the Redis servers, optional on the others
vm.overcommit_memory=1
#maximum listen queue length per port; a large backlog helps against DoS attacks
net.core.somaxconn=2048

Apply the configuration:

sudo sysctl -p
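
Confirm the applied values:

sysctl vm.max_map_count vm.overcommit_memory net.core.somaxconn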

3.4 Install the JDK

vim jdk.sh
#!/bin/sh
rpm -ivh jdk-8u191-linux-x64.rpm
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64' >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
echo 'export pesdk_home=/home/tfd/app' >> /etc/profile
update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_191-amd64/bin/java 100
update-alternatives --config java

Put the rpm package and the shell script in the same directory, then run:

sudo sh jdk.sh;
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_191-fcs        ################################# [100%]
Unpacking JAR files...
    tools.jar...
    plugin.jar...
    javaws.jar...
    deploy.jar...
    rt.jar...
    jsse.jar...
    charsets.jar...
    localedata.jar...

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/java/jdk1.8.0_191-amd64/jre/bin/java
   2           /usr/java/jdk1.8.0_191-amd64/bin/java

Press Enter to keep the current selection[+], or type selection number: 2
# When the prompt above appears, enter 2.
Apply the configuration:
source /etc/profile;
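
Verify the installation:

java -version
echo $JAVA_HOME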

3.5 Disable the firewall and SELinux

sudo systemctl stop firewalld;
sudo systemctl disable firewalld;
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config;
sudo setenforce 0
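
Confirm the new state:

getenforce                      #should print Permissive (Disabled after a reboot)
systemctl is-active firewalld   #should print inactive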

4. Redis Cluster Deployment

4.1 Redis environment and version

The client's production environment requires a Redis cluster; version redis-6.2.1 is used.

Three VMs (192.168.2.170-172) simulate 6 nodes, two per machine, forming a 3-master / 3-replica cluster.

Note: a master and its replica must not run on the same server.

4.2 Install build dependencies

In an offline environment, copy the downloaded rpm packages to the server and install them there.

sudo yum install -y --downloaddir=/home/tfd/setup/yum gcc gcc-c++ autoconf automake rpm-build tcl

4.3 Install Redis

mkdir -p /home/tfd/app;
tar zxvf redis-6.2.1.tar.gz;
mv redis-6.2.1 /home/tfd/app/redis;
cd /home/tfd/app/redis;
make;
make test;
make PREFIX=/home/tfd/app/redis install;
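
Confirm the build:

/home/tfd/app/redis/bin/redis-server --version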

4.4 Add environment variables

sudo vim /etc/profile
#Redis
export Redis=/home/tfd/app/redis
export PATH=$PATH:$Redis/bin

Apply immediately:

source /etc/profile

4.5 Create the redis_cluster directories

mkdir -p /home/tfd/app/redis_cluster/{6379,6380}/{data,logs};
cp /home/tfd/app/redis/redis.conf /home/tfd/app/redis_cluster/6379;
cp /home/tfd/app/redis/redis.conf /home/tfd/app/redis_cluster/6380;

4.6 Edit the configuration files

Port 6379:

vim /home/tfd/app/redis_cluster/6379/redis.conf
#bind address
bind 192.168.2.170
#service port
port 6379
#run in the background
daemonize yes
#pid file
pidfile /var/run/redis_6379.pid
#log file
logfile "/home/tfd/app/redis_cluster/6379/logs/redis-6379.log"
#data directory
dir "/home/tfd/app/redis_cluster/6379/data"
#eviction policy: apply LRU only to keys with an expire set (the default)
maxmemory-policy volatile-lru
#uncomment to enable cluster mode
cluster-enabled yes
#cluster config file, matching the port
cluster-config-file "/home/tfd/app/redis_cluster/6379/nodes-6379.conf"
#node timeout, default 15 seconds, adjust as needed
cluster-node-timeout 15000
#AOF persistence, enable if needed
appendonly no

Port 6380:

vim /home/tfd/app/redis_cluster/6380/redis.conf
#bind address
bind 192.168.2.170
#service port
port 6380
#run in the background
daemonize yes
#pid file
pidfile /var/run/redis_6380.pid
#log file
logfile "/home/tfd/app/redis_cluster/6380/logs/redis-6380.log"
#data directory
dir "/home/tfd/app/redis_cluster/6380/data"
#eviction policy: apply LRU only to keys with an expire set (the default)
maxmemory-policy volatile-lru
#uncomment to enable cluster mode
cluster-enabled yes
#cluster config file, matching the port
cluster-config-file "/home/tfd/app/redis_cluster/6380/nodes-6380.conf"
#node timeout, default 15 seconds, adjust as needed
cluster-node-timeout 15000
#AOF persistence, enable if needed
appendonly no

4.7 Create start scripts

tee -a /home/tfd/app/redis_cluster/6379/6379.sh <<-'EOF'
#!/bin/sh
REDISPORT=6379
EXEC=/home/tfd/app/redis/bin/redis-server
CLIEXEC=/home/tfd/app/redis/bin/redis-cli

PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/home/tfd/app/redis_cluster/6379/redis.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "$PIDFILE exists, process is already running or crashed"
        else
                echo "Starting Redis server..."
                $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                echo "$PIDFILE does not exist, process is not running"
        else
                PID=$(cat $PIDFILE)
                echo "Stopping ..."
                $CLIEXEC -p $REDISPORT shutdown
                while [ -x /proc/${PID} ]
                do
                    echo "Waiting for Redis to shutdown ..."
                    sleep 1
                done
                echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
EOF
tee -a /home/tfd/app/redis_cluster/6380/6380.sh <<-'EOF'
#!/bin/sh
REDISPORT=6380
EXEC=/home/tfd/app/redis/bin/redis-server
CLIEXEC=/home/tfd/app/redis/bin/redis-cli

PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/home/tfd/app/redis_cluster/6380/redis.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "$PIDFILE exists, process is already running or crashed"
        else
                echo "Starting Redis server..."
                $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                echo "$PIDFILE does not exist, process is not running"
        else
                PID=$(cat $PIDFILE)
                echo "Stopping ..."
                $CLIEXEC -p $REDISPORT shutdown
                while [ -x /proc/${PID} ]
                do
                    echo "Waiting for Redis to shutdown ..."
                    sleep 1
                done
                echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
EOF

Make the scripts executable:

chmod +x /home/tfd/app/redis_cluster/6379/6379.sh;
chmod +x /home/tfd/app/redis_cluster/6380/6380.sh;

4.8 Copy to the other nodes

Copy the directory to the other nodes, then change the bind ip in each redis.conf on the target node.

scp -r /home/tfd/app/ tfd@dc-did-server-2:/home/tfd;
#on dc-did-server-2:
sed -i 's/bind 192.168.2.170/bind 192.168.2.171/g' /home/tfd/app/redis_cluster/6379/redis.conf;
sed -i 's/bind 192.168.2.170/bind 192.168.2.171/g' /home/tfd/app/redis_cluster/6380/redis.conf;
scp -r /home/tfd/app/ tfd@dc-did-server-3:/home/tfd;
#on dc-did-server-3:
sed -i 's/bind 192.168.2.170/bind 192.168.2.172/g' /home/tfd/app/redis_cluster/6379/redis.conf;
sed -i 's/bind 192.168.2.170/bind 192.168.2.172/g' /home/tfd/app/redis_cluster/6380/redis.conf;

4.9 Start each node

/home/tfd/app/redis_cluster/6379/6379.sh start;
/home/tfd/app/redis_cluster/6380/6380.sh start;

4.10 Check that Redis started

ps -efl | grep redis
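
Check the listening ports as well (each node should listen on 6379/6380 plus the cluster bus ports 16379/16380):

ss -lntp | grep -E '6379|6380'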

4.11 Create the cluster

4.11.1 Create the three masters

redis-cli --cluster create 192.168.2.170:6379 192.168.2.171:6379 192.168.2.172:6379 --cluster-replicas 0

4.11.2 Attach each replica to its master

Replace <master-id> with the node ID of the target master:

redis-cli --cluster add-node 192.168.2.171:6380 192.168.2.170:6379 --cluster-slave --cluster-master-id <master-id>
redis-cli --cluster add-node 192.168.2.172:6380 192.168.2.171:6379 --cluster-slave --cluster-master-id <master-id>
redis-cli --cluster add-node 192.168.2.170:6380 192.168.2.172:6379 --cluster-slave --cluster-master-id <master-id>
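
The node IDs are the first column of the cluster state output; a quick way to list just the masters:

redis-cli -h 192.168.2.170 -p 6379 cluster nodes | grep master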

4.12 Verify the cluster

redis-cli -c -h 192.168.2.170 -p 6379 cluster nodes
redis-cli -c -h 192.168.2.170 -p 6379
192.168.2.170:6379> set hello world

5. Elasticsearch Cluster Deployment

5.1 Basic concepts

Node: a single Elasticsearch instance; usually one node per host.

Cluster: a group of nodes; talking to any node is equivalent to talking to the whole cluster.

Shard: an index is split across multiple shards for storage; the shard count cannot be changed after the index is created.

Replica: a copy of a shard, improving fault tolerance and search throughput.

Index: analogous to a database.

Type: analogous to a database table.

Document: analogous to a database row; contains one or more Fields.

Field: the smallest searchable unit; its properties (e.g. whether it can be searched) are defined via the Mapping.
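
To make these concepts concrete, here is a minimal sketch against the 7.x REST API (the index name demo is purely illustrative):

# create an index with 3 primary shards, each with 1 replica
curl -X PUT 'http://192.168.2.170:9200/demo' -H 'Content-Type: application/json' -d '{"settings":{"number_of_shards":3,"number_of_replicas":1}}'
# index a document; its fields are mapped and become searchable
curl -X POST 'http://192.168.2.170:9200/demo/_doc/1' -H 'Content-Type: application/json' -d '{"title":"hello","views":42}'
# search on a single field
curl 'http://192.168.2.170:9200/demo/_search?q=title:hello&pretty'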

5.2 Download and extract

tar zxvf elasticsearch-7.11.2-linux-x86_64.tar.gz;
mv elasticsearch-7.11.2 /home/tfd/app/elasticsearch;

5.3 Create data and log directories

mkdir -p /home/tfd/app/es/{data,logs}

5.4 Edit elasticsearch.yml

vim /home/tfd/app/elasticsearch/config/elasticsearch.yml
# ---------------------------------- Cluster ----------------------------------
#naming convention: es-<application>
cluster.name: es-did
# ------------------------------------ Node ----------------------------------
#node names are free-form; node1, node2, ... is the suggested scheme
node.name: node1
#whether this node is eligible to be elected master; default true. ES treats the first machine in the cluster as master and re-elects if it goes down
node.master: true
#whether this node stores index data; default true
node.data: true
# ----------------------------------- Paths ----------------------------------
path.data: /home/tfd/app/es/data
path.logs: /home/tfd/app/es/logs
# ----------------------------------- Memory -----------------------------------
#When the OS starts swapping, ES node performance and stability degrade badly,
#so swapping must be avoided at all costs: it stretches Java GC pauses from
#milliseconds to minutes and can make a node unresponsive or drop out of the cluster.

#If you cannot disable swap at the OS level, set bootstrap.memory_lock to true
#to keep ES memory from being swapped out under memory pressure.
#ES locks memory by default to avoid swapping, but CentOS 6 lacks SecComp support
#and startup fails there, hence system_call_filter is set to false.
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network ----------------------------------
#network.host sets both bind_host and publish_host; use 0.0.0.0 to allow access from any host
network.host: 192.168.2.170
#HTTP listen port
http.port: 9200
#port for node-to-node transport
transport.tcp.port: 9300
#compress node-to-node TCP traffic; default false
transport.tcp.compress: true
#maximum HTTP request body size; default 100mb
http.max_content_length: 100mb
#seed hosts; ideally all three IPs. Keep three master-eligible nodes (node.master: true); the number of data nodes is unlimited
discovery.zen.ping.unicast.hosts: ["192.168.2.170","192.168.2.171","192.168.2.172"]
#minimum number of master-eligible nodes a node must see before it operates in the cluster; the official recommendation is (N/2)+1 where N is the number of master-eligible nodes, i.e. 2 for this cluster
discovery.zen.minimum_master_nodes: 2
5.5 Modify the startup file

vim /home/tfd/app/elasticsearch/bin/elasticsearch
export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64

Copy to the other nodes and adjust the ip on each:

#node2 192.168.2.171
scp -r elasticsearch es tfd@dc-did-server-2:/home/tfd/app/;
#on dc-did-server-2:
sed -i 's/node1/node2/' /home/tfd/app/elasticsearch/config/elasticsearch.yml
sed -i 's/network.host: 192.168.2.170/network.host: 192.168.2.171/' /home/tfd/app/elasticsearch/config/elasticsearch.yml
#node3 192.168.2.172
scp -r elasticsearch es tfd@dc-did-server-3:/home/tfd/app/;
#on dc-did-server-3:
sed -i 's/node1/node3/' /home/tfd/app/elasticsearch/config/elasticsearch.yml
sed -i 's/network.host: 192.168.2.170/network.host: 192.168.2.172/' /home/tfd/app/elasticsearch/config/elasticsearch.yml
5.6 Start the ES service

The latest Elasticsearch releases require JDK 11; pick an ES version that matches your environment.

/home/tfd/app/elasticsearch/bin/elasticsearch -d
5.7 Check service status

1. Verify that the HTTP endpoint responds:

curl http://192.168.2.172:9200
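
Cluster-wide health can also be checked (status should be green once all three nodes have joined):

curl http://192.168.2.170:9200/_cluster/health?pretty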
5.8 Elasticsearch tuning

1. Configuration tuning; append to the end of the config file:

vim /home/tfd/app/elasticsearch/config/elasticsearch.yml
#---------------------------------- Indexing Settings ------------------------
#index.refresh_interval defaults to 1s. If you do not need near-real-time search while indexing, raise it (e.g. 30s); if no search is needed until indexing finishes, set -1 to disable refresh and restore it afterwards.
index.refresh_interval: 30s
#flush after this many operations; default unlimited.
index.translog.flush_threshold_ops: 50000
#---------------------------------- Search pool ---------------------
#thread pool type fixed: a fixed-size pool (size property) with a request queue (queue_size property) that holds requests until a thread is free; if the queue is full, the request is rejected.
#this pool serves search and count requests. Its type defaults to fixed, size to 3x the number of available processors, queue size to 1000
threadpool.search.type: fixed
#pool size
threadpool.search.size: 200
#queue size
threadpool.search.queue_size: 1000

#---------------------------------- Bulk pool ----------------------
#this pool serves bulk operations. Its type defaults to fixed, size to the number of available processors, queue size to 50
threadpool.bulk.type: fixed
threadpool.bulk.size: 600
threadpool.bulk.queue_size: 3000

#---------------------------------- Index pool ---------------------
#this pool serves index and delete operations. Its type defaults to fixed, size to the number of available processors, queue size to 300
threadpool.index.type: fixed
threadpool.index.size: 200
threadpool.index.queue_size: 1000

#---------------------------------- Indices settings ---------------
#the index buffer stores newly indexed documents; when it fills up, they are written to a segment on disk
#default 10%: the maximum memory (or share of the JVM heap) used by all index shards on a single node
indices.memory.index_buffer_size: 30%
#absolute value used when indices.memory.index_buffer_size is given as a percentage; default 48mb
indices.memory.min_shard_index_buffer_size: 12mb
#absolute value used when indices.memory.index_buffer_size is given as a percentage; default unbounded
indices.memory.min_index_buffer_size: 96mb

#---------------------------------- Cache Sizes -------------------
#fielddata cache serves aggregations (including sorting); building it is relatively expensive, so keep it in memory where possible. In a logging scenario aggregations are rare and mostly run overnight, so its size is capped: when the cache reaches the limit, older FieldData entries are evicted to make room. Unlimited by default
indices.fielddata.cache.size: 15%
#how long unaccessed entries live before eviction; default -1 (never). The expire setting is discouraged, since time-based eviction is costly, and it is slated for removal in an upcoming version
indices.fielddata.cache.expire: 6h
#caps the memory used by the filter cache. Note this is a node-level setting, not an index-level one
indices.cache.filter.size: 15%
#how long inactive filter-cache entries live before they expire
indices.cache.filter.expire: 6h

2. Heap size

ES allocates a 1g heap by default. Raise it according to the machine's memory, generally to no more than half of physical RAM; here -Xms and -Xmx are raised to 10g:

vim /home/tfd/app/elasticsearch/config/jvm.options
-Xms10g
-Xmx10g
5.9 Kibana deployment

1. Download:

wget -P /home/tfd/app/ https://artifacts.elastic.co/downloads/kibana/kibana-6.8.14-linux-x86_64.tar.gz
cd /home/tfd/app;
tar zxvf kibana-6.8.14-linux-x86_64.tar.gz;
mv kibana-6.8.14-linux-x86_64 kibana;

2. Edit the configuration file:

vim /home/tfd/app/kibana/config/kibana.yml
server.port: 5601
#ip of the Kibana machine
server.host: "192.168.2.170"
#ip of any es node
elasticsearch.hosts: ["http://192.168.2.170:9200"]

3. Start Kibana:

nohup /home/tfd/app/kibana/bin/kibana &
5.10 ES index template configuration

1. Log in to Kibana:

http://192.168.2.170:5601

2. Open Dev Tools and submit the following:

POST _template/caesar_standard
{
    "order" : 0,
    "template" : "caesar_*",
    "settings" : {
      "index" : {
        "codec" : "best_compression",
        "mapper" : {
          "dynamic" : "true"
        },
        "unassigned" : {
          "node_left" : {
            "delayed_timeout" : "1m"
          }
        },
        "number_of_shards" : "3",
        "translog" : {
          "durability" : "async"
        },
        "number_of_replicas" : "1",
 "max_result_window" :"2147483647"
      }
    },
    "mappings" : {
      "_default_" : {
        "dynamic_templates" : [
          {
            "string_fields" : {
              "mapping" : {
                "index" : true,
                "type" : "keyword"
              },
              "match_mapping_type" : "*",
              "match" : "s_*"
            }
          },
          {
            "float_fields" : {
              "mapping" : {
                "type" : "float"
              },
              "match_mapping_type" : "*",
              "match" : "f_*"
            }
          },
          {
            "double_fields" : {
              "mapping" : {
                "type" : "double"
              },
              "match_mapping_type" : "*",
              "match" : "d_*"
            }
          },
          {
            "boolean_fields" : {
              "mapping" : {
                "type" : "boolean"
              },
              "match_mapping_type" : "*",
              "match" : "b_*"
            }
          },
          {
            "integer_fields" : {
              "mapping" : {
                "type" : "integer"
              },
              "match_mapping_type" : "*",
              "match" : "i_*"
            }
          },
          {
            "long_fields" : {
              "mapping" : {
                "type" : "long"
              },
              "match_mapping_type" : "*",
              "match" : "l_*"
            }
          },
          {
            "listString_fields" : {
              "mapping" : {
                "index" : true,
                "type" : "keyword"
              },
              "match_mapping_type" : "*",
              "match" : "ls_*"
            }
          },
          {
            "listText_fields" : {
              "mapping" : {
                "index" : true,
                "type" : "text"
              },
              "match_mapping_type" : "*",
              "match" : "lt_*"
            }
          },
          {
            "row_key" : {
              "mapping" : {
                "index" : true,
                "type" : "keyword"
              },
              "match_mapping_type" : "*",
              "match" : "row_key"
            }
          },
          {
            "v_field" : {
              "mapping" : {
                "type":"object"
              },
              "match_mapping_type" : "*",
              "match" : "v_*"
            }
          },
          {
            "location_field" : {
              "mapping" : {
                "type":"geo_point"
              },
              "match_mapping_type" : "*",
              "match" : "location_*"
            }
          }
        ]
      }
    },
    "aliases" : { }
  }
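
The template can be verified afterwards from Dev Tools:

GET _template/caesar_standard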

6. ZooKeeper Cluster Deployment

6.1 Download

wget -P /home/tfd/app/ https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz;
cd /home/tfd/app/;
tar zxvf apache-zookeeper-3.6.2-bin.tar.gz;
mv apache-zookeeper-3.6.2-bin zookeeper;
cd zookeeper;
mkdir data logs;
6.2 Edit zoo.cfg

cp conf/zoo_sample.cfg conf/zoo.cfg;
sed -i 's#dataDir=/tmp/zookeeper#dataDir=/home/tfd/app/zookeeper/data/#' conf/zoo.cfg;
sed -i 's/#maxClientCnxns=60/maxClientCnxns=0/' conf/zoo.cfg;
sed -i 's/initLimit=10/initLimit=5/' conf/zoo.cfg;
sed -i 's/syncLimit=5/syncLimit=2/' conf/zoo.cfg;
sed -i '13a\dataLogDir=/home/tfd/app/zookeeper/logs' conf/zoo.cfg;
sed -i '$a\server.1=dc-did-server-1:2888:3888' conf/zoo.cfg;
sed -i '$a\server.2=dc-did-server-2:2888:3888' conf/zoo.cfg;
sed -i '$a\server.3=dc-did-server-3:2888:3888' conf/zoo.cfg;
echo 1 > /home/tfd/app/zookeeper/data/myid;
6.3 Copy to the other nodes

scp -r /home/tfd/app/zookeeper/ tfd@dc-did-server-2:/home/tfd/app/;
#on dc-did-server-2:
echo 2 > /home/tfd/app/zookeeper/data/myid;
scp -r /home/tfd/app/zookeeper/ tfd@dc-did-server-3:/home/tfd/app/;
#on dc-did-server-3:
echo 3 > /home/tfd/app/zookeeper/data/myid;
6.4 Start ZooKeeper

/home/tfd/app/zookeeper/bin/zkServer.sh start
#check status
/home/tfd/app/zookeeper/bin/zkServer.sh status
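
A quick liveness probe over the client port (assuming nc is installed; srvr is in ZooKeeper's default four-letter-word whitelist):

echo srvr | nc dc-did-server-1 2181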
6.5 Add environment variables

sudo vim /etc/profile
export ZOOKEEPER=/home/tfd/app/zookeeper
export PATH=$PATH:$ZOOKEEPER/bin

#apply immediately
source /etc/profile

7. Kafka Cluster

7.1 Download

wget -P /home/tfd/app/ https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.7.0/kafka_2.13-2.7.0.tgz;
cd /home/tfd/app/;
tar zxvf kafka_2.13-2.7.0.tgz;
mv kafka_2.13-2.7.0 kafka;
mkdir -p /home/tfd/app/kafka/{kafka-logs,logs}
7.2 Edit the configuration file

vim /home/tfd/app/kafka/config/server.properties
#the three machines must use different ids
broker.id=1
log.dirs=/home/tfd/app/kafka/kafka-logs
listeners=PLAINTEXT://:9092
##if there is port mapping etc., set the externally reachable ip here
#advertised.listeners=PLAINTEXT://192.168.2.170:9092
#number of partitions for auto-created topics; default 1. Partitions can be added to a topic but never removed, so a topic that needs fewer partitions than num.partitions must be created manually instead of relying on auto-creation.
num.partitions=4
#number of threads used to recover and clean data under the log dirs
num.recovery.threads.per.data.dir=1
#replication factor of Kafka's internal __consumer_offsets topic. With a single replica, losing that broker takes the partition down: consumers storing their offsets there can no longer commit, so producers keep working while consumers stall. Set this greater than 1 (3 for this three-broker cluster).
offsets.topic.replication.factor=3
#replication factor of the transaction state topic (set higher to ensure availability)
transaction.state.log.replication.factor=1
#minimum in-sync replicas for the transaction topic
transaction.state.log.min.isr=1
#zookeeper connection
zookeeper.connect=192.168.2.170:2181,192.168.2.171:2181,192.168.2.172:2181
zookeeper.connection.timeout.ms=6000
7.3 Copy to the other servers

scp -r /home/tfd/app/kafka tfd@dc-did-server-2:/home/tfd/app
#on dc-did-server-2:
sed -i 's/broker.id=1/broker.id=2/' /home/tfd/app/kafka/config/server.properties;
scp -r /home/tfd/app/kafka tfd@dc-did-server-3:/home/tfd/app
#on dc-did-server-3:
sed -i 's/broker.id=1/broker.id=3/' /home/tfd/app/kafka/config/server.properties;
7.4 Start script

tee -a /home/tfd/app/kafka/start.sh <<-'EOF'
#!/bin/bash
nohup /home/tfd/app/kafka/bin/kafka-server-start.sh /home/tfd/app/kafka/config/server.properties &> /home/tfd/app/kafka/logs/kafka.log &

EOF
#make it executable
chmod +x /home/tfd/app/kafka/start.sh
7.5 Start

sh /home/tfd/app/kafka/start.sh
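
Before testing, the test topic can be created explicitly with a replication factor of 3 (a sketch; with auto-topic creation left enabled, the producer below would otherwise create it with the broker defaults):

/home/tfd/app/kafka/bin/kafka-topics.sh --create --bootstrap-server 192.168.2.170:9092 --replication-factor 3 --partitions 4 --topic test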
7.6 Test connectivity

/home/tfd/app/kafka/bin/kafka-console-producer.sh --broker-list 192.168.2.170:9092 --topic test

Run in another terminal:

/home/tfd/app/kafka/bin/kafka-console-consumer.sh --bootstrap-server dc-did-server-1:9092,dc-did-server-2:9092,dc-did-server-3:9092 --from-beginning --topic test

jps should show QuorumPeerMain and Kafka:

107261 QuorumPeerMain
109054 Kafka

8. Logstash Deployment

8.1 Download

wget -P /home/tfd/app/ https://artifacts.elastic.co/downloads/logstash/logstash-6.8.14.tar.gz
cd /home/tfd/app/;
tar zxvf logstash-6.8.14.tar.gz;
mv logstash-6.8.14 logstash;

8.2 Logstash configuration file

vim /home/tfd/app/logstash/logstash.conf
input{
    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_dynamic_data"]
        client_id => "logstash-0-0"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_dynamic_data"
            "s_estype" => "dynamic_data"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_persist_ios"]
        client_id => "logstash-0-1"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_persist_ios"
            "s_estype" => "caesar_deviceid"
        }
    }

    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_persist_android"]
        client_id => "logstash-0-2"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_persist_android"
            "s_estype" => "caesar_deviceid"
        }
    }

    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_persist_pc"]
        client_id => "logstash-0-3"
        group_id => "logstash"
        codec => "json"
        add_field => {
           "s_index" => "caesar_persist_pc"
           "s_estype" => "caesar_deviceid"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_data_bank_make_ios"]
        client_id => "logstash-0-4"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_data_bank_make_ios_%{+YYYY.MM.dd}"
            "s_estype" => "make"
         }
    }

    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_data_bank_make_android"]
        client_id => "logstash-0-5"
        group_id => "logstash"
        codec => "json"
        add_field => {
           "s_index" => "caesar_data_bank_make_android_%{+YYYY.MM.dd}"
           "s_estype" => "make"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_data_bank_make_pc"]
        client_id => "logstash-0-6"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_data_bank_make_pc_%{+YYYY.MM.dd}"
            "s_estype" => "make"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_data_bank_get"]
        client_id => "logstash-0-7"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_data_bank_get_%{+YYYY.MM.dd}"
            "s_estype" => "get"
        }
    }

    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_event_upload"]
        client_id => "logstash-0-8"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_event_upload"
            "s_estype" => "caesar_event_upload"
        }
    }

    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_text_discern"]
        client_id => "logstash-0-9"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_text_discern"
            "s_estype" => "caesar_text_discern"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_app_labels_info"]
        client_id => "logstash-0-10"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_app_labels_info"
            "s_estype" => "app_labels_info"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_sdk_error_event"]
        client_id => "logstash-0-11"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_sdk_error_event"
            "s_estype" => "caesar_sdk_error_event"
        }
    }
    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_call_record_upload"]
        client_id => "logstash-0-12"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_call_record_upload"
            "s_estype" => "caesar_call_record_upload"
        }
    }
    kafka {
        bootstrap_servers =>  "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_attack"]
        client_id => "logstash-0-13"
        group_id => "logstash"
        codec => "json"
        add_field => {
            "s_index" => "caesar_attack"
            "s_estype" => "caesar_attack"
        }
    }

    kafka {
        bootstrap_servers => "192.168.2.170:9092,192.168.2.171:9092,192.168.2.172:9092"
        topics => ["DIDLIST_caesar_environment_detection"]
        client_id => "logstash-0-14"
        group_id => "logstash"
        codec => "json"
        add_field => {
           "s_index" => "caesar_environment_detection"
           "s_estype" => "caesar_environment_detection"
        }
    }
}
output {
    elasticsearch {
    hosts => ["192.168.2.170:9200","192.168.2.171:9200","192.168.2.172:9200"]
    document_type => "%{s_estype}"
    document_id => "%{id}"
    index => "%{s_index}"
    }
   #stdout { codec => rubydebug }
}
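
The configuration syntax can be checked before starting (Logstash's built-in config test):

/home/tfd/app/logstash/bin/logstash -f /home/tfd/app/logstash/logstash.conf --config.test_and_exit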
8.3 Start

nohup /home/tfd/app/logstash/bin/logstash -f /home/tfd/app/logstash/logstash.conf &