In a production environment there are rarely just one or two servers; more often there are hundreds or thousands, which is far too many for operations staff to manage by hand. SaltStack is an infrastructure management platform that can manage tens of thousands of servers and push data to them in seconds, making it one of the more widely used automated operations tools today.

SaltStack uses a client/server (C/S) model: the server side is the salt master and the client side is the minion, and the two communicate over the ZeroMQ message queue. The master listens on ports 4505 and 4506: port 4505 is the communication port for master/minion authentication and publishing, while port 4506 is used by the master to send commands and to receive the execution results returned by minions.
1. Developed in Python
2. Lightweight management tool with batch command execution
3. Common modules
|Module|Description|
|---|---|
|pkg|Package management: install, remove, and update packages|
|file|File management: synchronize files, set permissions and ownership, delete files, etc.|
|cmd|Execute commands or scripts on the minion|
|user|Manage system accounts|
|service|Manage system services|
|cron|Manage crontab tasks|
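As a sketch of how these modules are invoked from the master (the minion name `web01.saltstack.com` here is illustrative; any accepted minion id works):

```shell
# Install a package with the pkg module
salt 'web01.saltstack.com' pkg.install httpd
# Run an ad-hoc command with the cmd module
salt 'web01.saltstack.com' cmd.run 'uptime'
# Restart a service with the service module
salt 'web01.saltstack.com' service.restart httpd
# List root's crontab with the cron module
salt 'web01.saltstack.com' cron.list_tab root
```

Each call follows the same `salt '<target>' <module>.<function> [args]` pattern.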
4. SaltStack data systems
   - Grains (static data)
   - Pillar (dynamic data)
When the SaltStack client (minion) starts, it automatically generates a key pair (a private key and a public key) and sends the public key to the server. The server verifies and accepts the public key, establishing a trusted, encrypted connection; the client and server then exchange messages over the ZeroMQ message queue.
Minion is the client component installed on every host that SaltStack manages. It connects to the master on its own initiative, fetches resource state information from the master, and synchronizes resource management information.
The master is the control center. It runs on the controlling server and is responsible for executing salt commands and managing resource state. The master publishes each instruction to the minions through the message queue for execution and collects the results they return.
ZeroMQ is an open-source message-queue library that serves as the communication bridge between minion and master.
Advantages: fast, since the message queue plus multi-threading can drive many machines at millisecond latency; very flexible, since the source code is Python, which is easier to read and customize than Perl or Ruby; and the commands are simple but powerful.

Disadvantage: a minion must be deployed on every managed host, which is inconvenient.
Grains are pieces of information collected when the minion (client) starts, such as the operating system type, NIC IP address, and other static facts. Grains data is not dynamic and does not change over time; it is collected only when the minion starts.
Pillar is different from grains: it is defined on the master and holds data assigned to specific minions. Sensitive data such as passwords can be stored in pillar, and variables can be defined there.
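For example, grains can be queried individually and used as targeting criteria (the minion name and grain value here are illustrative):

```shell
# Show a single grain of one minion
salt 'web01.saltstack.com' grains.item os
# Target minions by grain value with -G
salt -G 'os:CentOS' test.ping
# Show the pillar data defined for a minion
salt 'web01.saltstack.com' pillar.items
```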
State is the core feature of SaltStack. It manages the controlled hosts (packages, network configuration, system services, system users, and so on) through pre-written .sls files.
- Copy files to client
```shell
salt 'client2' cp.get_file salt://apache.sls /tmp/cp.txt
```
- Copy directory to client
```shell
salt 'client2' cp.get_dir salt://test /tmp
```
- Show surviving clients
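A common way to list the minions that are currently alive is the manage runner, executed on the master:

```shell
# List responding (alive) minions
salt-run manage.up
# List minions that are not responding
salt-run manage.down
```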
- Execute a server-side script on the client

```shell
# Edit the script
vim /srv/salt/test/shell.sh
#!/bin/sh
echo "salt server do run shell script on client" > /tmp/shell.txt

# Run the script on the client
salt 'client2' cmd.script salt://test/shell.sh
```
- Environmental deployment
Prepare three machines; on all of them, disable SELinux and flush the firewall rules.
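A minimal sketch of that preparation, run as root on each machine (assuming a CentOS-style system with SELinux and iptables):

```shell
# Disable SELinux for the current boot and permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Flush the firewall rules
iptables -F
```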
|Server role|IP address|Host name|
|---|---|---|
|master|192.168.175.132|(not given)|
|minion|(not given)|web01.saltstack.com|
|minion|(not given)|web02.saltstack.com|
- Install saltstack
```shell
# Add the epel source on each of the three machines (official sources are available locally)
yum install -y epel-release    # install the epel source
# Install on the server
yum -y install salt-master
```
- Configure master host
```shell
# After installation, edit the master configuration file
vim /etc/salt/master
# Change the following
# line 15
interface: 192.168.175.132    # listen address
# line 215
auto_accept: True             # accept minion certificates automatically, no salt-key confirmation needed
# line 416
file_roots:
  base:
    - /srv/salt               # root directory of saltstack files; the directory must be created
# line 710: node group classification
nodegroups:
  group1: 'web01.saltstack.com'
  group2: 'web02.saltstack.com'
# line 552
pillar_opts: True             # enable the pillar function and file synchronization
# line 529
pillar_roots:
  base:
    - /srv/pillar             # home directory of pillar; the directory must be created

# View the changes to the main configuration file
cat /etc/salt/master | grep -v ^$ | grep -v ^#
```
- Start server
```shell
# Start the service
systemctl start salt-master
# Enable the service at boot
systemctl enable salt-master
# Check the listening status of the service ports
netstat -anpt | egrep '4505|4506'
```
- Create root directory of salt and pillar files
```shell
mkdir /srv/salt
mkdir /srv/pillar
```
- minion installation
```shell
# Install on each of the two web servers
yum -y install salt-minion
```
- Configure minion
```shell
# Edit the /etc/salt/minion main configuration file
vim /etc/salt/minion
# Modify as follows
# line 16
master: 192.168.175.132       # IP address of the master
# line 78
id: web01.saltstack.com       # hostname (id) of this controlled host

# Start the minion service
systemctl start salt-minion
```
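The same configuration must be repeated on the second server; only the id differs (a sketch, matching the host names used later):

```shell
# On the second minion
vim /etc/salt/minion
master: 192.168.175.132
id: web02.saltstack.com

systemctl start salt-minion
```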
- Test the communication status with the controlled end at the main control end
```shell
# Check the communication status
salt '*' test.ping
web01.saltstack.com: True
web02.saltstack.com: True
# Check the mount status of all managed hosts
salt '*' cmd.run 'df -h'
# List the clients that have been accepted on the master
salt-key
# Show all grains values on a controlled host (collected each time the minion starts)
# static data
salt 'web01.saltstack.com' grains.items
# dynamic data
salt 'web01.saltstack.com' pillar.items
```
- Configure the master to install Apache. The following demonstrates installing Apache remotely via yum:
```shell
# Modify the master configuration file
vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/
# Note: environments can be base, dev (development), test (testing), prod (production).

# Create the working directory
mkdir /srv/salt
vim /srv/salt/top.sls
base:
  '*':
    - apache
# Note: '*' means the apache state is applied to all clients.

vim /srv/salt/apache.sls
apache-service:
  pkg.installed:
    - names:    # with only one package this can be shortened to "- name: httpd" on a single line
      - httpd
      - httpd-devel
  service.running:
    - name: httpd
    - enable: True
# Note: apache-service is a custom id. pkg.installed is the package-installation
# function, followed by the names of the packages to install. service.running
# ensures the named service is started; enable: True starts it at boot.

# Restart the service
systemctl restart salt-master
# Apply the state configuration
salt '*' state.highstate
```
- Verify that the httpd service was installed successfully on the two minions
```shell
# Check the listening status of the service port
netstat -ntap | grep 80
# View the generated configuration files
rpm -qc httpd
```
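The same verification can also be driven from the master in one step (a sketch, using the package and service installed above):

```shell
# Confirm httpd is installed and running on every minion
salt '*' pkg.version httpd
salt '*' service.status httpd
```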