Percona Galera Cluster

Galera Cluster for MySQL is a multi-master cluster that uses synchronous replication. It is scalable, easy to use, and provides high availability.


Prerequisites on all nodes:

  • Disable SELinux
  • Open TCP ports 3306, 4444, 4567 and 4568, or disable iptables

We'll start by setting up the Percona yum repo and installing the necessary software.

Setup the Percona Yum Repo
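
The repo is typically installed via Percona's percona-release package. The URL and version below are an example only; check Percona's download page for the current RPM for your distribution before running this.

# Install the Percona yum repository definition (example URL - verify
# the current percona-release RPM on percona.com before running)
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm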

Install Galera RPMs from Percona

yum install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-galera-3 -y

Configure Nodes
Three or more nodes (ideally an odd number, so the cluster can maintain quorum) is the suggested minimum for a cluster. When setting this up, keep in mind that a synchronous cluster is only as fast as its slowest node, so using identical or very similar hardware (or virtual resource) configurations is highly recommended.

We will have three nodes here, db01, db02 and db03.

  • db01 –
  • db02 –
  • db03 –

Setup conf files on all nodes
The conf file should look like this on all nodes; the only per-node change is the IP address in wsrep_node_address. The example below is for db01, which only matters because its IP appears in that option.

# (replace the <...> placeholders below with your nodes' real IP addresses
# and your own cluster name and SST password)
[mysqld]

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of all nodes
wsrep_cluster_address=gcomm://<db01-ip>,<db02-ip>,<db03-ip>
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# db01 address - put db02 and db03 address in on other nodes.
wsrep_node_address=<db01-ip>
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=<my-cluster-name>
# Authentication for SST method
wsrep_sst_auth="sstuser:Mys3cretPAssword"

Bootstrap first node
We'll now use the init script to start the instance on db01 with the additional parameter --wsrep-cluster-address="gcomm://". We don't want it to start with the value in my.cnf, because the other servers in that list don't exist yet and the cluster would not bootstrap properly.

/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"

Set MySQL root Password

mysqladmin password MySecretPassWord

Create sstuser
The sstuser is the account Galera uses for State Snapshot Transfers (SST) to keep nodes synced; it is the user specified in wsrep_sst_auth in my.cnf. You could use root here, but that's not a good idea.

echo "create user 'sstuser'@'localhost' identified by 'Mys3cretPAssword';" | mysql -uroot -p
echo "grant reload, lock tables, replication client on *.* to 'sstuser'@'localhost';" | mysql -uroot -p
echo "flush privileges;" | mysql -uroot -p

Check status of the one-node cluster
Let's take a minute to verify the cluster is in fact bootstrapped.

echo "show status like 'wsrep%';" | mysql -uroot -p
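
On a healthy single-node bootstrap, the key variables in that output typically look like this (values shown are what a correctly bootstrapped node reports):

wsrep_local_state_comment | Synced
wsrep_cluster_size        | 1
wsrep_cluster_status      | Primary
wsrep_ready               | ON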

Copy conf to other nodes in cluster
Now copy the above /etc/my.cnf to the other nodes (db02 and db03), changing only wsrep_node_address to each server's correct IP address, and start mysql as normal with no additional flags. Upon starting, the service reads the wsrep_cluster_address=gcomm://... list from my.cnf, so it knows it's a member of that cluster, and will automatically synchronize all data, including users.

# on db02 and db03 after conf has been setup
/etc/init.d/mysql start
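
Once all three nodes are up, you can verify the cluster from any node by checking wsrep_cluster_size, which should report 3 (this uses the root password set earlier):

echo "show status like 'wsrep_cluster_size';" | mysql -uroot -p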