This article walks through brick high availability for Red Hat's storage system (GlusterFS) on CentOS 6.10. It covers only high availability; load balancing has to be implemented separately.
Obviously, a cluster built this way has no automatic load balancing, which is a real handicap for large-scale, high-concurrency or data-intensive workloads.
The simplest workaround is DNS round-robin, which I have seen many companies use, but it has its drawbacks; in a later post I will cover how to do this with LVS instead.
Red Hat's GlusterFS releases appear to have dropped support for CentOS 6, and newer versions only support CentOS 7 (at least that seems to be the case; check the official site to confirm).
Environment:
CentOS 6.10
Two brick nodes
Replicated volumes
Virtual IPs:
VIP1: 172.16.6.142
VIP2: 172.16.6.143
Real server (RS) IPs:
172.16.6.140 server1
172.16.6.141 server2
As usual, download the latest release that supports this OS to the servers in advance; everything below is done offline:
Brick1/Brick2:
cat /etc/hosts
172.16.6.140 server1
172.16.6.141 server2
Adjust the ulimit parameters:
echo "root soft nofile 65535" >> /etc/security/limits.conf
echo "root hard nofile 65535" >> /etc/security/limits.conf
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
echo "ulimit -n 65536" >> /etc/profile
ulimit -n 65536
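To verify the limits in a new login shell (a quick check, not part of the original steps):
# the soft and hard file-descriptor limits should reflect the values configured above
ulimit -n
ulimit -Hn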
Install the build/runtime dependencies:
yum install automake autoconf libtool flex bison openssl-devel libxml2-devel python-devel libaio-devel libibverbs-devel librdmacm-devel readline-devel lvm2-devel glib2-devel userspace-rcu-devel libcmocka-devel libacl-devel -y
Upload and install the GlusterFS packages:
rz gluster.zip
unzip gluster.zip
glusterfs-3.12.8-1.el6.x86_64.rpm
glusterfs-api-3.12.8-1.el6.x86_64.rpm
glusterfs-cli-3.12.8-1.el6.x86_64.rpm
glusterfs-client-xlators-3.12.8-1.el6.x86_64.rpm
glusterfs-fuse-3.12.8-1.el6.x86_64.rpm
glusterfs-libs-3.12.8-1.el6.x86_64.rpm
glusterfs-server-3.12.8-1.el6.x86_64.rpm
pyxattr-0.5.0-1.el6.x86_64.rpm
Install the RPMs and let yum resolve dependencies automatically:
yum localinstall *.rpm
Start glusterd and enable it at boot:
/etc/init.d/glusterd start
chkconfig glusterd on
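A quick check that the daemon is running and registered for boot (not part of the original steps):
# verify glusterd is up and enabled in the default runlevels
/etc/init.d/glusterd status
chkconfig --list glusterd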
Format the data disk:
fdisk -l
mkfs.ext4 /dev/sdb
Look up the UUID and add an fstab entry so the disk mounts at boot:
blkid
cat /etc/fstab
UUID="xxx" /data1 ext4 defaults 0 0
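The mount point has to exist before the entry can take effect; a minimal sketch (assuming the ext4 filesystem created on /dev/sdb above is the one being mounted at /data1):
# create the mount point, mount everything listed in fstab, and verify
mkdir -p /data1
mount -a
df -h /data1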
Here we create two replicated volumes: one for storing data (gv0) and one for sharing and holding the CTDB configuration (gv1). Whether you use a striped or a replicated volume depends on your own requirements.
Brick1:
Create the data volume gv0:
Add the peer to the trusted storage pool:
gluster peer probe server2
Check the peer status:
gluster peer status
Create a replicated volume named gv0 with two bricks: server1:/data1 and server2:/data1.
gluster volume create gv0 replica 2 server1:/data1 server2:/data1 force
Start the GlusterFS volume:
gluster volume start gv0
Starting volume gv0 has been successful
Check the volume status:
gluster volume info
gluster volume status
Create the CTDB volume gv1:
Create a replicated volume named gv1 with two bricks: server1:/data and server2:/data.
gluster volume create gv1 replica 2 server1:/data server2:/data force
Start the GlusterFS volume:
gluster volume start gv1
Starting volume gv1 has been successful
Check the volume status:
gluster volume info
gluster volume status
Pay attention to security in production! Be sure to configure an access whitelist.
Only allow clients in the 172.16.0.0/16 and 172.31.0.0/16 networks to mount the volumes:
gluster volume set gv0 auth.allow 172.16.*.*,172.31.*.*
gluster volume set gv1 auth.allow 172.16.*.*,172.31.*.*
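To confirm the whitelist took effect (a quick verification, not part of the original steps):
# each command should print the allowed address patterns set above
gluster volume get gv0 auth.allow
gluster volume get gv1 auth.allow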
Install the client packages on both bricks:
yum -y install glusterfs glusterfs-fuse
Add the mounts to fstab so they come up at boot, and give up the mount when the server cannot be reached:
Brick1:
cat /etc/fstab
server1:/gv0 /data1 glusterfs defaults,_netdev 0 0
server1:/gv1 /data glusterfs defaults,_netdev 0 0
Brick2:
cat /etc/fstab
server2:/gv0 /data1 glusterfs defaults,_netdev 0 0
server2:/gv1 /data glusterfs defaults,_netdev 0 0
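To bring these mounts up right away instead of waiting for a reboot, something like the following should work on each brick (a sketch; it assumes the /data1 and /data directories already exist):
# mount every glusterfs entry listed in fstab and verify
mount -a -t glusterfs
df -hT | grep gluster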
Brick1/Brick2:
Install the CTDB components; if the storage also needs to be mounted from Windows, install the Samba components as well:
yum -y install ctdb samba samba-common samba-winbind-clients nfs-utils
mkdir -p /data/ctdb/lock /data/ctdb/share /data/ctdb/nfs-share
[root@server2 ~]# cat /data/ctdb/lock/ctdb
CTDB_RECOVERY_LOCK=/data/ctdb/lock/ctdb.lock
CTDB_PUBLIC_INTERFACE=eth0
CTDB_SET_RecoveryBanPeriod=5
CTDB_SET_MonitorInterval=5
CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
CTDB_LOGFILE=/var/log/ctdb.log
CTDB_SAMBA_SKIP_SHARE_CHECK=no
CTDB_FTP_SKIP_SHARE_CHECK=no
CTDB_NFS_SKIP_SHARE_CHECK=no
CTDB_MANAGES_WINBIND=no
CTDB_MANAGES_SAMBA=no
CTDB_MANAGES_VSFTPD=no
CTDB_MANAGES_NFS=no
CTDB_MANAGES_ISCSI=no
CTDB_MANAGES_LDAP=no
CTDB_DEBUGLEVEL=ERR
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_NODES=/etc/ctdb/nodes
ln -s /data/ctdb/lock/ctdb /etc/sysconfig/ctdb
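Because the real file lives on the shared gv1 volume (mounted at /data), both bricks read the same configuration through this symlink; a quick check, not shown in the original post:
# the symlink should resolve, and the checksum should match on both bricks
ls -l /etc/sysconfig/ctdb
md5sum /etc/sysconfig/ctdb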
[root@server2 ~]# cat /etc/ctdb/notify.sh
#!/bin/sh
# This script is activated by setting CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh
# in /etc/sysconfig/ctdb
# This script is invoked from ctdb when certain events happen. See
# /etc/ctdb/notify.d/README for more details.
d=$(dirname "$0")
nd="${d}/notify.d"
ok=true
for i in "${nd}/"* ; do
    # Skip backup files, package leftovers and the README
    case "${i##*/}" in
        *~|*,|*.rpm*|*.swp|README) continue ;;
    esac
    # Files must be executable
    [ -x "$i" ] || continue
    # Flag failures
    "$i" "$1" || ok=false
done
$ok
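If notify.sh was created or edited by hand, make sure it stays executable so CTDB can invoke it (a precaution, not an original step):
chmod +x /etc/ctdb/notify.sh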
Node (RS) configuration:
[root@server2 ~]# cat /data/ctdb/lock/nodes
172.16.6.140
172.16.6.141
ln -s /data/ctdb/lock/nodes /etc/ctdb/nodes
VIP configuration:
[root@server2 ~]# cat /data/ctdb/lock/public_addresses
172.16.6.142/24 eth1
172.16.6.143/24 eth1
ln -s /data/ctdb/lock/public_addresses /etc/ctdb/public_addresses
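The original notes skip the step, but CTDB has to be started (and enabled at boot) on both bricks before the VIPs below can appear; a minimal sketch using the EL6 init script:
/etc/init.d/ctdb start
chkconfig ctdb on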
Brick1:
ip addr
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:0c:29:8c:b6:4e brd ff:ff:ff:ff:ff:ff
inet 172.16.6.140/24 brd 172.16.6.255 scope global eth1
inet 172.16.6.143/24 brd 172.16.6.255 scope global secondary eth1
inet6 fe80::20c:29ff:fe8c:b64e/64 scope link
valid_lft forever preferred_lft forever
Brick2:
ip addr
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:0c:29:be:7d:ee brd ff:ff:ff:ff:ff:ff
inet 172.16.6.141/24 brd 172.16.6.255 scope global eth1
inet 172.16.6.142/24 brd 172.16.6.255 scope global secondary eth1
inet6 fe80::20c:29ff:febe:7dee/64 scope link
valid_lft forever preferred_lft forever
Check the VIP distribution: VIP1 172.16.6.142 sits on Brick2 and VIP2 172.16.6.143 sits on Brick1.
[root@server2 data1]# ctdb ip
Public IPs on node 1
172.16.6.142 1
172.16.6.143 0
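Another quick health check you can run on either node (not from the original post):
# every node should be listed OK, with none banned or disconnected
ctdb status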
Client mount test:
yum -y install nfs-utils rpcbind
172.16.6.142:/gv0 /data1 glusterfs defaults,_netdev 0 0
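Note that a native FUSE mount also needs the GlusterFS client packages on the client (the nfs-utils/rpcbind packages above only matter for NFS access); a hedged sketch of mounting gv0 through VIP1:
# install the FUSE client, create the mount point, and mount via the VIP
yum -y install glusterfs glusterfs-fuse
mkdir -p /data1
mount -t glusterfs 172.16.6.142:/gv0 /data1
df -h /data1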
Power off Brick2 and, a few seconds later, check whether the VIP has floated over:
Brick1:
ip addr
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:0c:29:8c:b6:4e brd ff:ff:ff:ff:ff:ff
inet 172.16.6.140/24 brd 172.16.6.255 scope global eth1
inet 172.16.6.143/24 brd 172.16.6.255 scope global secondary eth1
inet 172.16.6.142/24 brd 172.16.6.255 scope global secondary eth1
inet6 fe80::20c:29ff:fe8c:b64e/64 scope link
valid_lft forever preferred_lft forever
[root@server1 ~]# ctdb ip
Public IPs on node 0
172.16.6.142 0
172.16.6.143 0
Check that the client mount is still healthy.
[root@clinet ~]# df -h
172.16.6.142:/gv0 296G 65M 281G 1% /data1
[root@clinet ~]# ll /data1
total 9.5K
-rw-r--r--. 1 root root 0 2018-08-01 20:13 1.txt
-rw-r--r--. 1 root root 0 2018-08-01 20:13 2.txt
-rw-r--r--. 1 root root 0 2018-08-01 20:13 3.txt
-rw-r--r--. 1 root root 0 2018-08-01 20:13 4.txt
The mount is intact; the VIP failover worked.