Middleware Cluster Deployment
1. kkFileView arm64 deployment
docker run --name kkfileview --net=host \
-e TZ="Asia/Shanghai" \
-v /etc/localtime:/etc/localtime \
-v /home/docker/kkfileView/config:/opt/kkFileView/config \
-v /home/docker/kkfileView/bin/kkFileView.jar:/opt/kkFileView/bin/kkFileView.jar \
-d yimik/kkfileview:latest
Notes: provides the file preview service.
- The configuration files are mounted at /home/docker/kkfileView/config; the port, worker thread count, nginx callback address, and other settings can be changed there.
- The jar is mounted at /home/docker/kkfileView/bin/kkFileView.jar; when the jar is updated, replace the file and restart the container.
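A quick smoke test once the container is up (a sketch; kkFileView listens on port 8012 by default, so substitute the port from your mounted config if you changed it):

curl -I http://<host-ip>:8012/    # expect an HTTP 200 from the kkFileView demo page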
2. Selenium arm64 deployment
docker run --name firefox-aarch64 --net=host \
-e TZ="Asia/Shanghai" \
-v /etc/localtime:/etc/localtime \
--shm-size="2g" \
-d seleniarm/standalone-firefox:latest
Notes: provides a browser-in-the-background service. It listens on host ports 4444 (WebDriver) and 7900; connect remotely via ip:4444.
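To confirm the grid is ready to accept sessions, the standard Selenium 4 status endpoint can be queried (a sketch; <host-ip> is a placeholder for your machine's address):

curl http://<host-ip>:4444/status    # "ready": true in the JSON means sessions can be created

Port 7900 serves the noVNC page in the seleniarm images, which is handy for watching the browser while debugging.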
3. FastDFS arm64 cluster deployment
a. Pull the image
docker pull onlyonelmm/fastdfs-arm64
b. After pulling the image, a single-node deployment can be started directly
docker run -d \
--restart=always \
--privileged=true \
--net=host \
--name=fastdfs \
-e FASTDFS_IPADDR=192.168.0.165 \
-e WEB_PORT=8888 \
-v /usr/soft/fastdfs:/home/fastdfs \
onlyonelmm/fastdfs-arm64:latest
The FASTDFS_IPADDR parameter must be changed to the current machine's IP. A LAN IP is fine if the service is only used inside the LAN; on a production server, use the public IP. Otherwise uploads will time out connecting to port 22122 or port 23000.
The WEB_PORT parameter is optional. Without it, the default nginx access port is 8888; in the sample run below it was changed to 8080 at runtime. nginx's own port 80 remains in use and is not disabled here; to serve FastDFS resources on port 80 instead, enter the container after startup and edit the nginx configuration manually (the procedure is shown at the end).
Everything the container writes goes under /home/fastdfs, which is mounted here to /usr/soft/fastdfs on the host; this covers both data files and logs.
After a normal start, check the container log with docker logs <container-id>; a healthy startup looks like the following, otherwise error messages appear:
[root@ecs-603f-0002 ~]# docker logs ce32d48efc0e
FASTDFS_IPADDR= 192.168.0.165
WEB_PORT= 8080
start trackerd
start storage
start nginx
ngx_http_fastdfs_set pid=24
fastdfs start success........
At this point run curl localhost on the host; you should see the nginx welcome page. If you later change the FastDFS resource port, the only way to test is to upload a file and try previewing or downloading it.
c. Copy the single-node configuration out of the container
docker cp <container-id>:/etc/fdfs/ /home/docker/fastdfs/config/
docker cp <container-id>:/usr/local/nginx/conf/nginx.conf /home/docker/fastdfs/config/nginx/
d. Remove the original single-node container, copy the extracted configuration files to each node, then edit them
vim /home/docker/fastdfs/config/storage.conf
# Lines to change:
tracker_server=10.168.109.63:22122 # server 1
tracker_server=10.168.109.64:22122 # server 2
tracker_server=10.168.109.65:22122 # server 3
http.server_port=8888 # HTTP port for file access (default 8888; adjust as needed and keep it consistent with nginx)
vim /home/docker/fastdfs/config/client.conf
# Lines to change:
tracker_server=10.168.109.63:22122 # server 1
tracker_server=10.168.109.64:22122 # server 2
tracker_server=10.168.109.65:22122 # server 3
vim /home/docker/fastdfs/config/mod_fastdfs.conf
# Lines to change:
tracker_server=10.168.109.63:22122 # server 1
tracker_server=10.168.109.64:22122 # server 2
tracker_server=10.168.109.65:22122 # server 3
url_have_group_name=true
# Fix the nginx configuration (the image's nginx config is broken: the data location must include M00)
vim /home/docker/fastdfs/config/nginx/nginx.conf
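For reference, the location block that typically needs fixing looks like this (a sketch of the usual fastdfs-nginx-module setup; verify against the file copied out of your container):

location ~ /group[0-9]/M00 {
    ngx_fastdfs_module;
}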
e. Start (change the IP to the corresponding host's IP)
docker run -d --restart=always \
-e TZ="Asia/Shanghai" \
-v /etc/localtime:/etc/localtime \
--privileged=true \
--net=host \
--name=fastdfs \
-e FASTDFS_IPADDR=10.168.109.65 \
-e WEB_PORT=8888 \
-v /home/docker/fastdfs/config/fdfs:/etc/fdfs \
-v /home/docker/fastdfs/config/nginx/nginx.conf:/usr/local/nginx/conf/nginx.conf \
-v /home/docker/fastdfs:/home/fastdfs \
onlyonelmm/fastdfs-arm64:latest
f. Cluster check
# Run the following inside the container
/usr/bin/fdfs_monitor /etc/fdfs/storage.conf
# The output lists every storage server; with 3 nodes you should see details for Storage 1 through Storage 3
g. Upload test
After a normal startup you can test a file upload from inside the container, without any external tools.
Enter the container: docker exec -it ce32d48efc0e bash
Create an html file: echo '<h1>Hello World ! FastDfs .</h1>' > index.html
Test the upload: /usr/bin/fdfs_test /etc/fdfs/client.conf upload index.html
[root@ecs-603f-0002 /]# /usr/bin/fdfs_test /etc/fdfs/client.conf upload index.html
This is FastDFS client test program v6.07
Copyright (C) 2008, Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.fastken.com/
for more detail.
[2022-02-28 11:16:25] DEBUG - base_path=/home/fastdfs, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
tracker_query_storage_store_list_without_group:
server 1. group_name=, ip_addr=192.168.0.165, port=23000
group_name=group1, ip_addr=192.168.0.165, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/wKgApWIcrwmACh0gAAAAITYrfZM95.html
source ip address: 192.168.0.165
file timestamp=2022-02-28 11:16:25
file size=33
file crc32=908819859
example file url: http://192.168.0.165:8888/group1/M00/00/00/wKgApWIcrwmACh0gAAAAITYrfZM95.html
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/wKgApWIcrwmACh0gAAAAITYrfZM95_big.html
source ip address: 192.168.0.165
file timestamp=2022-02-28 11:16:26
file size=33
file crc32=908819859
example file url: http://192.168.0.165:8888/group1/M00/00/00/wKgApWIcrwmACh0gAAAAITYrfZM95_big.html
h. Configuration notes
tracker_server      # one line per tracker server
group_name          # name of the storage group
bind_addr           # server IP to bind
store_path_count    # number of store_path(N) entries
store_path(N)       # one entry per storage path, numbered from 0
Open http://192.168.0.165:8888/group1/M00/00/00/wKgApWIcrwmACh0gAAAAITYrfZM95_big.html in a browser; you should see the content of the uploaded html file.
4. Redis cluster deployment
a. Requirements
At least 6 nodes: a minimum of 3 masters and 3 slaves. A Redis cluster can serve requests only while more than half of its master nodes are working. It follows that a cluster able to survive n failed masters needs 2n+1 masters, for example:
With 2 masters, losing 1 leaves 1, which is not a majority: the cluster goes down; fault tolerance is 0.
With 3 masters, losing 1 leaves 2 > 1: the cluster keeps running; fault tolerance is 1.
With 4 masters, losing 1 leaves 3 > 1 and the cluster keeps running; but losing 2 leaves 2 = 2, not a majority, so fault tolerance is still 1.
b. The following uses one server as an example (multiple servers work the same way; just substitute the corresponding IPs)
- Download the Redis package: https://download.redis.io/releases/redis-6.2.14.tar.gz
- Compile and install
Install gcc: yum install gcc
Unpack: tar -zxvf redis-6.2.14.tar.gz
Change into the unpacked directory: cd redis-6.2.14
Compile and install: make install PREFIX=/apply/redis/redis-6.2.14
(This installs Redis under /apply/redis/redis-6.2.14, creating a bin directory there.)
- Create a redis user and grant ownership (for security)
Create the user: useradd redis
Set a password: passwd redis
Grant ownership: chown -R redis:redis /apply/redis/redis-6.2.14
- Cluster setup (run everything below as the redis user)
Create the cluster root directory: mkdir -p /apply/redis/redis-cluster
Under redis-cluster, create one directory per node: 7001, 7002, 7003, 7004, 7005, 7006, and cp the redis.conf file into each of them (a scripted variant is sketched after this list).
Edit the configuration file in each directory:

port 7001                            # port: 7001-7006, one per node
bind <local-ip>                      # defaults to 127.0.0.1; change to an IP the other nodes can reach, or cluster creation cannot connect to the port
daemonize yes                        # run redis in the background
pidfile /var/run/redis_7001.pid      # pidfile per node: redis_7001 through redis_7006
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes_7001.conf  # cluster state file, generated automatically on first start; nodes_7001 through nodes_7006
cluster-node-timeout 15000           # request timeout, default 15 s; adjust as needed
requirepass nm@eloL!FFmmY5           # redis password
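The node directories and per-port edits can be scripted. A minimal sketch, assuming the stock redis.conf template still sits in the unpacked source directory (~/redis-6.2.14); the settings that are identical on every node (bind, daemonize, cluster-enabled, cluster-node-timeout, requirepass) can be edited into the template once beforehand:

for port in 7001 7002 7003 7004 7005 7006; do
    mkdir -p /apply/redis/redis-cluster/${port}
    cp ~/redis-6.2.14/redis.conf /apply/redis/redis-cluster/${port}/
    # stamp the per-node values into each copy
    sed -i \
        -e "s|^port .*|port ${port}|" \
        -e "s|^pidfile .*|pidfile /var/run/redis_${port}.pid|" \
        /apply/redis/redis-cluster/${port}/redis.conf
    # cluster-config-file is commented out in the stock template, so appending it is safe
    echo "cluster-config-file nodes_${port}.conf" >> /apply/redis/redis-cluster/${port}/redis.conf
done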
- Then start the 6 nodes one by one
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7001/redis.conf
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7002/redis.conf
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7003/redis.conf
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7004/redis.conf
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7005/redis.conf
/apply/redis/redis-6.2.14/bin/redis-server /apply/redis/redis-cluster/7006/redis.conf
- Check that Redis is running
[redis@wzy-cloud redis]# ps -ef|grep redis
redis     6304     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7000 [cluster]
redis     6306     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7001 [cluster]
redis     6308     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7002 [cluster]
redis     6310     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7003 [cluster]
redis     6318     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7004 [cluster]
redis     6320     1  0 21:44 ?        00:00:00 redis-server 0.0.0.0:7005 [cluster]
redis     6345  4985  0 21:44 pts/0    00:00:00 grep --color=auto redis
- Once all nodes are up, run the cluster-create command shown below; make sure the IPs are your actual cluster IPs
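The create command, as captured in the session transcript below (the password follows -a; --cluster-replicas 1 assigns one slave to each master):

redis-cli --cluster create 10.168.109.63:7001 10.168.109.63:7002 10.168.109.64:7003 10.168.109.64:7004 10.168.109.65:7005 114.116.35.252:7006 --cluster-replicas 1 -a nm@eloL!FFmmY5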
- After entering the create command you will see the prompt below; at "Can I set the above configuration? (type 'yes' to accept):", type yes
[redis@wzy-cloud redis_cluster]# redis-cli --cluster create 10.168.109.63:7001 10.168.109.63:7002 10.168.109.64:7003 10.168.109.64:7004 10.168.109.65:7005 114.116.35.252:7006 --cluster-replicas 1 -a nm@eloL!FFmmY5
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.168.109.64:7004 to 10.168.109.63:7001
Adding replica 10.168.109.65:7005 to 10.168.109.63:7002
Adding replica 10.168.109.65:7006 to 10.168.109.64:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 4e571c020d1f2cca020132a9adfdea2a367da21d 10.168.109.63:7001
   slots:[0-5460] (5461 slots) master
M: 838382153a78260e274c1d2d11a105dd3986a223 10.168.109.63:7002
   slots:[5461-10922] (5462 slots) master
M: e86d01e92214015304461a104a9f14e3cedc7829 10.168.109.64:7003
   slots:[10923-16383] (5461 slots) master
S: 22b1f3d83f068973c6e8a5d0b9e87c0c1b950594 10.168.109.64:7004
   replicates 838382153a78260e274c1d2d11a105dd3986a223
S: 5248474e122d745b7e929a2705da210d3d150b4c 10.168.109.65:7005
   replicates e86d01e92214015304461a104a9f14e3cedc7829
S: b3d20a419df22b4c9f4fe14c1fda22c2920c5c11 10.168.109.65:7006
   replicates 4e571c020d1f2cca020132a9adfdea2a367da21d
Can I set the above configuration? (type 'yes' to accept): yes
- After typing yes, output like the following appears; "[OK] All 16384 slots covered." means the cluster was created successfully
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 0.0.0.0:7000)
M: 4e571c020d1f2cca020132a9adfdea2a367da21d 10.168.109.63:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b3d20a419df22b4c9f4fe14c1fda22c2920c5c11 10.168.109.63:7002
   slots: (0 slots) slave
   replicates 4e571c020d1f2cca020132a9adfdea2a367da21d
M: e86d01e92214015304461a104a9f14e3cedc7829 10.168.109.64:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 22b1f3d83f068973c6e8a5d0b9e87c0c1b950594 10.168.109.64:7004
   slots: (0 slots) slave
   replicates 838382153a78260e274c1d2d11a105dd3986a223
M: 838382153a78260e274c1d2d11a105dd3986a223 10.168.109.65:7005
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5248474e122d745b7e929a2705da210d3d150b4c 10.168.109.65:7006
   slots: (0 slots) slave
   replicates e86d01e92214015304461a104a9f14e3cedc7829
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
- Verify
[redis@wzy-cloud redis_cluster]# redis-cli -c -h 10.168.109.65 -p 7006
127.0.0.1:7000> auth nm@eloL!FFmmY5
OK
10.168.109.65:7006> cluster nodes
b3d20a419df22b4c9f4fe14c1fda22c2920c5c11 10.168.109.63:7002@17005 slave 4e571c020d1f2cca020132a9adfdea2a367da21d 0 1563378113000 6 connected
e86d01e92214015304461a104a9f14e3cedc7829 10.168.109.63:7001@17002 master - 0 1563378115000 3 connected 10923-16383
22b1f3d83f068973c6e8a5d0b9e87c0c1b950594 10.168.109.64:7003@17003 slave 838382153a78260e274c1d2d11a105dd3986a223 0 1563378116490 4 connected
838382153a78260e274c1d2d11a105dd3986a223 10.168.109.64:7004@17001 master - 0 1563378114486 2 connected 5461-10922
5248474e122d745b7e929a2705da210d3d150b4c 10.168.109.65:7005@17004 slave e86d01e92214015304461a104a9f14e3cedc7829 0 1563378115488 5 connected
4e571c020d1f2cca020132a9adfdea2a367da21d 10.168.109.65:7006@17000 myself,master - 0 1563378114000 1 connected 0-5460
10.168.109.63:7001> set qwe 111
OK
10.168.109.63:7001> exit
[redis@wzy-cloud 7001]# redis-cli -c -h 10.168.109.64 -p 7003
10.168.109.64:7003> auth nm@eloL!FFmmY5
OK
10.168.109.64:7003> get qwe
-> Redirected to slot [757] located at 10.168.109.63:7001
(error) NOAUTH Authentication required.
10.168.109.63:7001> auth b2V1kw2GJg
OK
10.168.109.63:7001> get qwe
"111"
10.168.109.63:7001>
5. Elasticsearch cluster deployment
a. Edit /etc/security/limits.conf and add the following
vi /etc/security/limits.conf
# Append at the end of the file:
* soft nofile 65536
* hard nofile 65536
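The new limits take effect on the next login. A quick check after re-logging in:

ulimit -Sn    # soft open-files limit, should print 65536
ulimit -Hn    # hard open-files limit, should print 65536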
b. Append one line at the end of /etc/sysctl.conf
vm.max_map_count=655360
After adding it, run: sysctl -p
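sysctl -p reloads the file and echoes the applied settings; the value can also be confirmed directly:

sysctl vm.max_map_count    # should print: vm.max_map_count = 655360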
c. Download Elasticsearch from the official site and unpack it
mkdir -p /apply/es
cd /apply/es
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.0-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz
Rename the directory: mv elasticsearch-7.8.0 elasticsearch
d. Create an es user and grant ownership (for security; Elasticsearch refuses to run as root by default)
Create the user: useradd es
Set a password: passwd es
Grant ownership: chown -R es:es /apply/es
e. Generate the certificate
/apply/es/elasticsearch/bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass
f. Copy the generated elastic-certificates.p12 to the config directory on every other node in the cluster
g. Switch to the es user and edit the configuration
- Node 1 configuration: only elasticsearch.yml needs to be edited; nothing else requires changes
# Cluster name; must be identical on every node in the cluster.
cluster.name: myes
# Node name; must be unique within the cluster.
node.name: node1
# Whether this node may act as a master node: true = may, false = may not
node.master: true
# Whether this node stores data: true = yes, false = no
node.data: true
# Where index data is stored
path.data: /apply/es/elasticsearch/data
# Where log files are stored
path.logs: /apply/es/elasticsearch/logs
# Lock physical memory: true = yes, false = no
bootstrap.memory_lock: true
# Listen address used to reach this node
network.host: 172.16.100.1
# HTTP port ES exposes, default 9200
http.port: 9200
# TCP transport port, default 9300
transport.tcp.port: 9300
# Ensures each node knows about the other N master-eligible nodes. Default 1; for larger clusters set it higher (2-4)
discovery.zen.minimum_master_nodes: 2
# New in ES 7.x: addresses of master-eligible nodes; these can be elected master once the service is up
discovery.seed_hosts: ["172.16.100.1:9300", "172.16.100.2:9300", "172.16.100.3:9300"]
discovery.zen.fd.ping_timeout: 1m
discovery.zen.fd.ping_retries: 5
# New in ES 7.x: required to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["node1", "node2", "node3"]
# Allow cross-origin requests: true; needed when using the head plugin
http.cors.enabled: true
# "*" allows all origins
http.cors.allow-origin: "*"
# Certificate-related settings
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
- Node 2 and node 3 use the same configuration; only node.name and the network.host listen address need to change
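A note on bootstrap.memory_lock: true: the es user must also be allowed to lock memory, or Elasticsearch aborts at startup with "memory locking requested for elasticsearch process but memory is not locked". A sketch of the extra /etc/security/limits.conf entries, alongside the nofile lines from step a:

es soft memlock unlimited
es hard memlock unlimited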
h. Start Elasticsearch on all three nodes
/apply/es/elasticsearch/bin/elasticsearch -d
i. Set the passwords; on the master node, run:
/apply/es/elasticsearch/bin/elasticsearch-setup-passwords interactive
Enter the password you want to set (e.g. 123456) at each prompt.
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
j. Verify
Open the Elasticsearch address in a browser; a dialog pops up asking for the username and password.
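The same check works from the command line; a sketch assuming node 1's address and the elastic password chosen above:

curl -u elastic:123456 "http://172.16.100.1:9200/_cluster/health?pretty"    # "status": "green" means all shards are allocated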
k. Changing a password later
If you think a password set earlier is too weak, it can be changed as follows:
curl -XPOST -u elastic "localhost:9200/_security/user/elastic/_password" -H 'Content-Type: application/json' -d '{"password" : "abcd1234"}'
6. Nacos cluster deployment
Since it serves only as a registry, it is started with the embedded database for now.
a. Download the package
Download URL: https://github.com/alibaba/nacos/releases/download/2.3.2/nacos-server-2.3.2.tar.gz
b. Set up the cluster configuration file
Under the conf directory of the unpacked nacos/ directory there is a cluster.conf file; put one ip:port per line. (Configure 3 or more nodes.)
# ip:port
200.8.9.16:8848
200.8.9.17:8848
200.8.9.18:8848
c. Edit the main configuration file (conf/application.properties)
### Enable authentication
nacos.core.auth.system.type=nacos
nacos.core.auth.enabled=true
### For a custom key, use a Base64-encoded string whose raw key is at least 32 characters long, e.g.:
nacos.core.auth.default.token.secret.key=VGhpc0lzTXlDdXN0b21TZWNyZXRLZXkwMTIzNDU2Nzg=
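A simple way to produce such a key, sketched with a placeholder secret (replace it with your own string of 32+ characters):

echo -n 'ReplaceWithYourOwnSecretOf32PlusChars' | base64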
d. Start with the embedded data source
sh startup.sh -p embedded
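To confirm each node has started and joined the cluster, the startup log and console can be checked; a sketch assuming the default distribution layout, run from the nacos/ directory on each node:

tail -f logs/start.out                # look for "Nacos started successfully in cluster mode"
curl http://200.8.9.16:8848/nacos/    # the console page should respond on every node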