SaltStack on CentOS

Install and Configure SaltStack on CentOS 6

SaltStack, like other configuration management tools, uses a client/server model with public and private keypairs to authenticate the nodes. Also like the others, it can run as a daemon or on demand, and as root or as another user if you want to tighten security and limit what salt may change. SaltStack is a Python application, unlike Puppet and Chef, which are written in Ruby. It takes a minimalist approach to configuration files, and I have found it to be feature rich and powerful yet more lightweight than some of the other options. For example, its ZeroMQ transport is far lighter than RabbitMQ or ActiveMQ, and while you can configure Puppet/MCollective to use ZeroMQ, that takes more work than SaltStack, which is ready to go once the RPMs are installed.

In this quick how-to we’re just going to get up and running and can dive into more sophisticated configurations in the future.

We’ll begin with two nodes:

salt – runs salt-master and also salt-minion to manage itself
app1 – runs salt-minion and communicates with salt-master

These nodes must be able to resolve each other by FQDN or search path, which means adding them to DNS or to an /etc/hosts file.
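If you are not using DNS, entries like the following in each node's /etc/hosts will do; the IP addresses here are assumed examples.

# /etc/hosts on both nodes (example addresses)
192.168.1.10    salt
192.168.1.11    app1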

Enable the EPEL repo

The saltstack RPM packages are not included in the CentOS base repositories at this time. Please enable the EPEL repository on both machines so we can install the software.
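One common way to enable EPEL on CentOS 6 is to install the epel-release package from the Fedora mirrors; the exact URL and package version may differ for your environment.

# install the EPEL repo definition (adjust the URL/version as needed)
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm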

Setting up the salt master (salt)

Install saltstack rpms
Once the EPEL repo is set up, let’s install the software.

# first clear the yum cache
yum clean all

# install salt-master and salt-minion
yum -y install salt-master salt-minion

Configuring salt-master

By default the salt-master listens on all available interfaces, using TCP ports 4505 and 4506. You can change this and other options by creating or editing /etc/salt/master. The RPM package ships a well-commented default configuration file there that you can read through to see what else you might change. The file lists all the default options, commented out, so you can see exactly what you would be overriding; that said, the defaults are good enough to get us up and going.
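As a sketch, overriding a couple of common options in /etc/salt/master might look like the following; the interface address is an assumed example.

# /etc/salt/master (example overrides)
interface: 192.168.1.10  # bind to one address instead of the default 0.0.0.0
ret_port: 4506           # port minions return results on (the default)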

Start salt-master and set it to start at boot.

/etc/init.d/salt-master start && /sbin/chkconfig salt-master on
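As an optional sanity check, the master should now be listening on its two ZeroMQ ports: 4505 for publishing jobs and 4506 for minion returns.

# confirm the master is listening on 4505/4506
netstat -tlnp | grep -E ':450[56]'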

Configure salt-minion
The default configuration will work as long as you can resolve the hostname ‘salt’.

# send a test ping to see if the hostname resolves via dns w/ search path or hosts
# if it resolves, we know we're pointing at the right
# IP address (salt)
ping -c3 salt

Again, since we already resolve the salt hostname, nothing is needed in /etc/salt/minion. If you set up your salt-master on a node with a different hostname, point the minion at it by adding a single configuration line to /etc/salt/minion, for example (with a hypothetical FQDN):

master: salt.example.com  # fqdn or IP of your salt-master

Start salt-minion and set it to start at boot

service salt-minion start && /sbin/chkconfig salt-minion on

Authenticate salt-minion

After starting the salt-minion service, it will try to connect to the salt-master and authenticate using an RSA keypair it generated on first start.
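If you're curious, the minion's generated keypair lives under /etc/salt/pki/minion/ (minion.pem and minion.pub), and once accepted, the minion's public key is stored under /etc/salt/pki/master/minions/ on the master.

# on the minion: the generated keypair
ls /etc/salt/pki/minion/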

We need to accept this minion’s key on the master. To view and accept minions that are waiting to be authorized, we run the salt-key command.

# list minions waiting to be accepted
salt-key -L
# accept all minions waiting to be accepted
salt-key -A
# test that we have properly authenticated
# this should simply reply with the fqdn and "True"
salt '*' test.ping
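If you would rather not accept everything pending at once, salt-key can also accept a single minion by name; use whatever name salt-key -L lists for that host (shown here with an assumed minion id).

# accept just one minion's key
salt-key -a salt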

Setup additional node (app1)

Install salt-minion rpm
Now let’s add another minion to the cluster.
On app1, once you’ve set up the EPEL repo, let’s install salt-minion.

yum -y install salt-minion

Configure salt-minion

Again, since we can resolve the hostname salt, no additional configuration is needed to get up and going, so we simply start the service and set it to start at boot.

service salt-minion start && /sbin/chkconfig salt-minion on

Back on salt-master (salt)

Check that salt-minion on app1 has tried to authenticate, and then authorize it.

# list to check if app1 is there
salt-key -L
# authorize the listed minions - app1
salt-key -A

Test that we can communicate with the salt-minions

We’ll send the same test.ping we did earlier, only this time we should see both responses as True. The ‘*’ part of the command is the target specifying which hosts to run on, and * is the wildcard matching all minions.

[root@salt ~]# salt '*' test.ping

We can also test each node individually

salt 'salt' test.ping
salt 'app1' test.ping

Other useful salt commands

The base install of salt with the default configuration options lets you do a good amount right out of the gate. I’ll use the ‘*’ host wildcard here, but remember you can replace that with the fqdn of any authorized node.
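Targeting is not limited to globs, either; salt can also match an explicit list of minions or target by grains. The hostnames below are this article's example nodes.

# run against an explicit list of minions
salt -L 'salt,app1' test.ping
# target by grain, e.g. all CentOS boxes
salt -G 'os:CentOS' test.ping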

List users

salt '*' user.list_users

Run a command
In this case we’ll run md5sum against every minion’s conf so we can see whether they’re identical. For each minion, the output shows the fqdn, the pid of the command, its return code, and its stderr and stdout.

salt '*' cmd.run_all "md5sum /etc/salt/minion"

View the root user’s crontab

salt '*' cron.raw_cron root

Check disk usage

salt '*' disk.usage

Check the disk usage percentage of a mount point

salt '*' disk.percent /

Get filesystem permissions of a directory or file

salt '*' acl.getfacl /root/
salt '*' acl.getfacl /etc/

List Minion Status

Use these commands to see which minions are up, which are down, or to get both lists at once.

salt-run manage.up
salt-run manage.down
salt-run manage.status 

Configure tops and salt states

Next we want to configure salt states. The top and state (sls) files are written in YAML (or Python if you want) and are how we install packages, ensure services are running, set up users, and so on. All of this can be done manually with commands like the ones above, but if we want to keep a server in a consistent state, it’s best to create tops and apply them across various clusters. After that we will dive into pillars.
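As a preview, a minimal sketch might look like the following, assuming the default file root of /srv/salt and a hypothetical state named common that just keeps a package installed.

# create the top file and a simple state (written from the shell for brevity)
mkdir -p /srv/salt
cat > /srv/salt/top.sls <<'EOF'
base:
  '*':
    - common
EOF
cat > /srv/salt/common.sls <<'EOF'
vim-enhanced:
  pkg.installed: []
EOF
# apply the highstate to every minion
salt '*' state.highstate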