GlusterFS allows you to replicate files across a number of servers, providing additional resiliency and, for some workloads, higher performance. Near-instantaneous replication to off-site servers also makes it a good option for disaster recovery and business continuity.

In this example, we’ll have two storage nodes to replicate the data between, plus one client node that mounts the network filesystem remotely.

Download Gluster RPMs
Update: As of CentOS 6.6, these RPMs are available in the base repo.

# do this inside a directory to contain all the RPMs, replace with your release and arch.

Install glusterfs server on the storage nodes (gluster01 and gluster02)

# From the directory with the RPMs, install them all in one yum transaction because of
# their interdependencies. You could also place them in a yum repo you have set up, which
# avoids downloading them on every server. We're going to install glusterfs-3.*.rpm,
# glusterfs-fuse-3.*.rpm, glusterfs-geo-replication-3.*.rpm, glusterfs-server-3.*.rpm,
# and glusterfs-cli; all others can be omitted for now but may be useful in the future.
# Note: the command below tries to resolve some dependencies from other repos you may have
# configured, so it might be good to disable those repos temporarily.
yum -y install glusterfs-3.*.rpm glusterfs-fuse-3.*.rpm glusterfs-geo-replication-3.*.rpm glusterfs-server-3.*.rpm
# It's much simpler if you put the RPMs in your own yum repo and just type
yum -y install glusterfs-server

Start services, and set to start at boot

/etc/init.d/glusterd start
/sbin/chkconfig glusterd on

Disable the firewall (or open it up to addresses on your subnet) so it doesn’t block attempts to connect to gluster. We’ll just disable it in this example.

/etc/init.d/iptables stop
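Note that stopping iptables only lasts until the next reboot. To keep it disabled across reboots, or to open the firewall just to your storage subnet instead of disabling it entirely, something like the following works (the 192.168.1.0/24 subnet is a placeholder; substitute your own):

```shell
# keep the firewall off across reboots
/sbin/chkconfig iptables off

# or, rather than disabling it, accept traffic from the storage subnet
# (placeholder subnet -- replace with yours) and persist the rule
iptables -I INPUT -s 192.168.1.0/24 -j ACCEPT
/etc/init.d/iptables save
```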

From gluster01

# make sure gluster02 is resolvable or use IP
gluster peer probe gluster02

You should see something like:

peer probe: success.

You can confirm that you see the peer on both sides with

# this prints out the hostname or IP of the other nodes in the cluster, their UUID, and state (Connected, etc.)
gluster peer status
# should print something like
# Hostname: gluster02
# Uuid: aac061ad-bee8-48f2-8fab-b6e262fa1e54
# State: Peer in Cluster (Connected)
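One common gotcha: on gluster02, `peer status` may list gluster01 by its IP address rather than its hostname, because the initial probe only records the name in one direction. Probing back fixes the entry (assuming gluster01 is resolvable from gluster02):

```shell
# run this on gluster02 so it learns gluster01's hostname
gluster peer probe gluster01
```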

Storage exported to gluster is referred to as a “brick” — in practice, a directory on a local filesystem. We have /dev/sdb on both nodes which we’ll use to back the bricks. First we’ll partition /dev/sdb with fdisk, then format, mount, and add it to fstab.

# create one single large partition; for disks larger than 2TB you must use parted with a GPT label instead of fdisk
fdisk /dev/sdb
# format as xfs with inode size of 512
mkfs.xfs -i size=512 /dev/sdb1
# create mount point and mount and create brick folder
mkdir -p /export/sdb1 && mount /dev/sdb1 /export/sdb1 && mkdir -p /export/sdb1/brick
# now add to fstab
echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0"  >> /etc/fstab
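fdisk is interactive; if you want a scriptable alternative (and the only workable route for >2TB disks), parted can do the same job non-interactively. A sketch, assuming /dev/sdb is blank:

```shell
# label the disk and create one partition spanning the whole device
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 100%
# then format as before
mkfs.xfs -i size=512 /dev/sdb1
```

After mounting, `df -h /export/sdb1` is a quick sanity check that the brick filesystem is where you expect.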

In this example, both gluster01 and gluster02 have 8GB /dev/sdb disks. We want to mirror the data across the disks, so if a node dies all the data remains accessible. It’s like RAID 1 across two hosts instead of two disks in one host.

# create the volume
gluster volume create gv0 replica 2 gluster01:/export/sdb1/brick gluster02:/export/sdb1/brick
# check volume we created
gluster volume info
# start the volume
gluster volume start gv0
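To confirm both bricks actually came online after starting the volume, `gluster volume status` shows each brick along with its port and process ID (exact output format varies by GlusterFS version):

```shell
# both bricks should show as Online
gluster volume status gv0
```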

Mount the volume. From any machine that has glusterfs-fuse installed, you should be able to mount it.

mount -t glusterfs gluster01:/gv0 /mnt
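To make the mount persistent across reboots on the client, add it to /etc/fstab; the `_netdev` option delays the mount until networking is up. A sketch using the same mount point as above:

```shell
echo "gluster01:/gv0 /mnt glusterfs defaults,_netdev 0 0" >> /etc/fstab
```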

Write to mount point, and verify it on both servers.

echo "first file on glusterfs" >> /mnt/first_gluster_file.txt
# verify it got put on the bricks, you'll see it on both servers.
ls -aul /export/sdb1/brick/first_gluster_file.txt

At this point, you can mount the volume (pointing at any of the peer IPs/hostnames) from a client computer and read/write to it like any network volume. Say you mount gluster01:/gv0 from a client and gluster01 dies: the client will seamlessly fail over to gluster02 and continue reading/writing without you having to do anything. When you get gluster01 fixed up and bring it back online, as soon as glusterd starts, whatever updates occurred while it was offline will be synchronized.
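You can watch that resynchronization happen with the self-heal commands (available in GlusterFS 3.3 and later; output details vary by version):

```shell
# list files still pending heal on each brick
gluster volume heal gv0 info
# kick off a heal manually instead of waiting for the background daemon
gluster volume heal gv0
```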

Geo replication slave

Now if we need to push the contents of this volume to another datacenter, we can use glusterfs-geo-replication, which asynchronously replicates the volume across the WAN (using rsync over SSH under the hood).
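A rough sketch of starting geo-replication to a remote slave. The syntax below matches the GlusterFS 3.2/3.3 era; newer releases add a `create push-pem` step and use a `slavehost::slavevol` target instead, so check the docs for your version. `remote-host` and the destination path are placeholders:

```shell
# on the master, replicate gv0 to a directory on the remote host over SSH
gluster volume geo-replication gv0 ssh://root@remote-host:/data/gv0-backup start
# check that the session is running
gluster volume geo-replication gv0 ssh://root@remote-host:/data/gv0-backup status
```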