1. The SaltStack data system
SaltStack has two major data systems:
- Grains
- Pillar
2. SaltStack data system components
2.1 The Grains component
Grains is one of the most important SaltStack components: it records static information about a minion, collected when the minion starts. You can think of grains as the common attributes of each minion, such as its CPU, memory, disk, and network information, which is why it is used so often during configuration and deployment. All grains of a minion can be viewed with `grains.items`.
Functions of Grains:
- Collect asset information
Grains application scenarios:
- Information query
- Target matching on the command line
- Target matching in the top file
- Target matching in templates
For target matching in templates, see: https://docs.saltstack.com/en/latest/topics/pillar/
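As a minimal sketch of using a grain inside a template, the state file below chooses a package name with a Jinja conditional on `grains['os_family']`. The file path and state IDs are hypothetical, not part of this tutorial's deployment:

```yaml
# /srv/salt/base/web/pkg.sls (hypothetical example)
{% if grains['os_family'] == 'RedHat' %}
install-web-server:
  pkg.installed:
    - name: httpd
{% else %}
install-web-server:
  pkg.installed:
    - name: apache2
{% endif %}
```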
Information query example:
```shell
// List the keys and values of all grains
[root@master ~]# salt node1 grains.items
node1:
    ----------
    biosreleasedate:        // BIOS release date
        07/22/2020
    biosversion:            // BIOS version
        6.00
    cpu_flags:              // CPU feature flags
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - cpuid_fault
        - invpcid_single
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - rdseed
        - adx
        - smap
        - clflushopt
        - xsaveopt
        - xsavec
        - xgetbv1
        - xsaves
        - arat
        - md_clear
        - flush_l1d
        - arch_capabilities
    cpu_model:              // CPU model
        Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
    cpuarch:                // CPU architecture
        x86_64
    cwd:
        /
    disks:
        - sr0
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 114.114.114.114
            - 8.8.8.8
        ip6_nameservers:
        nameservers:
            - 114.114.114.114
            - 8.8.8.8
        options:
        search:
        sortlist:
    domain:
    efi:
        False
    efi-secure-boot:
        False
    fqdn:
        node1
    fqdn_ip4:               // IP address
        - 192.168.8.130
    fqdn_ip6:
        - ::1
    fqdns:
        - node1
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:                   // Host name
        node1
    hwaddr_interfaces:
        ----------
        ens160:
            00:0c:29:29:7f:87
        lo:
            00:00:00:00:00:00
    id:                     // Minion ID
        node1
    init:
        systemd
    ip4_gw:
        192.168.8.2
    ip4_interfaces:
        ----------
        ens160:
            - 192.168.8.130
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens160:
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens160:
            - 192.168.8.130
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.8.130
    ipv6:
        - ::1
    kernel:
        Linux
    kernelparams:
        |_
          - BOOT_IMAGE
          - (hd0,msdos1)/vmlinuz-4.18.0-193.el8.x86_64
        |_
          - root
          - /dev/mapper/rhel-root
        |_
          - ro
          - None
        |_
          - crashkernel
          - auto
        |_
          - resume
          - /dev/mapper/rhel-swap
        |_
          - rd.lvm.lv
          - rhel/root
        |_
          - rd.lvm.lv
          - rhel/swap
        |_
          - rhgb
          - None
        |_
          - quiet
          - None
    kernelrelease:
        4.18.0-193.el8.x86_64
    kernelversion:
        #1 SMP Fri Mar 27 14:35:58 UTC 2020
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
        timezone:
            CST
    localhost:
        node1
    lsb_distrib_codename:
        Red Hat Enterprise Linux 8.2 (Ootpa)
    lsb_distrib_id:
        Red Hat Enterprise Linux
    lsb_distrib_release:
        8.2
    lvm:
        ----------
        rhel:
            - home
            - root
            - swap
    machine_id:
        6570bb3369cc413f94355e02e4fe2963
    manufacturer:
        VMware, Inc.
    master:
        192.168.8.129
    mdadm:
    mem_total:
        3752
    nodename:
        node1
    num_cpus:
        2
    num_gpus:
        1
    os:
        RedHat
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Ootpa
    osfinger:
        Red Hat Enterprise Linux-8
    osfullname:
        Red Hat Enterprise Linux
    osmajorrelease:
        8
    osrelease:
        8.2
    osrelease_info:
        - 8
        - 2
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        10719
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python3.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python36.zip
        - /usr/lib64/python3.6
        - /usr/lib64/python3.6/lib-dynload
        - /usr/lib64/python3.6/site-packages
        - /usr/lib/python3.6/site-packages
    pythonversion:
        - 3
        - 6
        - 8
        - final
        - 0
    saltpath:
        /usr/lib/python3.6/site-packages/salt
    saltversion:
        3004
    saltversioninfo:
        - 3004
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 04 6b 61 db 55 04-60 f4 e0 5e 11 29 7f 87
    server_id:
        1797241226
    shell:
        /bin/sh
    ssds:
        - nvme0n1
    swap_total:
        4043
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
        version:
            239
    systempath:
        - /usr/local/sbin
        - /usr/local/bin
        - /usr/sbin
        - /usr/bin
    transactional:
        False
    uid:
        0
    username:
        root
    uuid:
        6b044d56-db61-0455-60f4-e05e11297f87
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.4
[root@master ~]#
```
```shell
// Query only the keys of all grains
[root@master ~]# salt node1 grains.ls
node1:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - cwd
    - disks
    - dns
    - domain
    - efi
    - efi-secure-boot
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelparams
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - lvm
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - ssds
    - swap_total
    - systemd
    - systempath
    - transactional
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion
[root@master ~]#
```
```shell
// Query the value of a single key, such as the IP address
[root@master ~]# salt node1 grains.get fqdn_ip4
node1:
    - 192.168.8.130
[root@master ~]# salt node1 grains.get ip4_interfaces
node1:
    ----------
    ens160:
        - 192.168.8.130
    lo:
        - 127.0.0.1
// Nested keys can be addressed with a colon-delimited path
[root@master ~]# salt node1 grains.get ip4_interfaces:ens160
node1:
    - 192.168.8.130
[root@master ~]#
```
Target matching examples:
Match minions with Grains:
```shell
// Run a command on all RedHat systems
[root@master ~]# salt -G 'os:RedHat' cmd.run 'uptime'
node2:
     16:35:48 up  2:32,  1 user,  load average: 0.02, 0.29, 0.23
master:
     16:35:48 up  4:19,  2 users,  load average: 0.24, 0.07, 0.02
node1:
     16:35:48 up  4:19,  1 user,  load average: 0.02, 0.22, 0.22
[root@master ~]#
```
Use Grains in the top file:
```shell
// Note: both apache and nginx are installed here to demonstrate the effect;
// it is normal that apache fails to start
[root@master ~]# cat /srv/salt/base/top.sls
base:
  'os:RedHat':
    - match: grain
    - web.nginx.install
    - web.apache.install
[root@master ~]# salt 'n*' state.highstate
```
There are two ways to define custom Grains:
- In the minion configuration file (search for `grains` in the file)
- In a standalone /etc/salt/grains file on the minion (the recommended method)
```shell
// Add the following to the node1 minion configuration file
[root@node1 ~]# vim /etc/salt/minion
144 grains:
145   roles:
146     - apache
147     - nginx
[root@node1 ~]# systemctl restart salt-minion.service
[root@master ~]# salt 'node1' grains.items
*********
    roles:
        - apache
        - nginx
*********
```
Custom Grains can also be applied without restarting the minion:
```shell
[root@node1 ~]# vim /etc/salt/grains
[root@node1 ~]# cat /etc/salt/grains
name: pengyudong
[root@node1 ~]#

[root@master ~]# salt 'node1' saltutil.sync_grains
node1:
[root@master ~]# salt 'node1' grains.items
*********
    name:
        pengyudong
*********
```
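Since /etc/salt/grains is plain YAML, nested structures and lists work there too. A hypothetical sketch (the keys and values below are illustrative, not part of this tutorial's setup):

```yaml
# /etc/salt/grains (illustrative values)
roles:
  - webserver
  - memcache
deployment: datacenter4
cab: 13
```

After syncing with `saltutil.sync_grains`, such grains can be matched like any built-in grain, e.g. `salt -G 'roles:webserver' test.ping`.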
2.2 The Pillar component
Pillar is another very important SaltStack component. It is a data management center, often used together with states in large-scale configuration management. Pillar's main job in SaltStack is to store and define the data needed during configuration management, such as software version numbers, user names, and passwords. Like Grains, it stores its definitions in YAML format.
There is a Pillar section in the master configuration file that defines the Pillar-related parameters:
```yaml
#pillar_roots:
#  base:
#    - /srv/pillar
```
In the default base environment, Pillar's working directory is /srv/pillar. To define multiple Pillar working directories for different environments, simply modify this part of the configuration file.
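For example, a master configuration sketch with two Pillar environments (the `prod` environment and its path are hypothetical additions, not part of this tutorial's setup):

```yaml
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
```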
Pillar features:
- Data can be defined for a specified minion
- Only the targeted minion can see the data defined for it
- Configured in the master configuration file
```shell
// View pillar information
[root@master ~]# salt '*' pillar.items
node2:
    ----------
master:
    ----------
node1:
    ----------
[root@master ~]#
```
By default, Pillar contains no data. To see the master's configuration values exposed as pillar data, uncomment the `pillar_opts` option in the master configuration file and set it to `True`.
```shell
[root@master ~]# vim /etc/salt/master
910 pillar_opts: True
[root@master ~]# systemctl restart salt-master.service
[root@master ~]# salt '*' pillar.items
*********
```
Custom Pillar data
The `pillar_roots` option in the master configuration file shows where Pillar data is stored:
```shell
[root@master ~]# vim /etc/salt/master
864 pillar_roots:
865   base:
866     - /srv/pillar/base
[root@master ~]# systemctl restart salt-master.service
[root@master ~]# mkdir -p /srv/pillar/base
[root@master ~]# cd /srv/pillar/base/
[root@master base]# vim apache.sls
[root@master base]# cat apache.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
[root@master base]# vim top.sls
[root@master base]# cat top.sls
base:
  'node1':
    - apache
[root@master base]# salt 'node1' pillar.items
node1:
    ----------
    apache:
        httpd
```
```shell
// Modify the apache state file
// Note: enable expects a boolean, not the package name from pillar
[root@master ~]# vim /srv/salt/base/web/apache/install.sls
[root@master ~]# cat /srv/salt/base/web/apache/install.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: httpd
    - enable: True
[root@master ~]#
```
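If a minion has no matching pillar key, `{{ pillar['apache'] }}` fails to render. A more defensive sketch of the same state uses `pillar.get` with a fallback value (here `httpd` is an assumed default; adjust it for your platform):

```yaml
apache-install:
  pkg.installed:
    - name: {{ pillar.get('apache', 'httpd') }}

apache-service:
  service.running:
    - name: httpd
    - enable: True
```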
```shell
[root@master ~]# salt 'node1' state.highstate
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 17:38:49.491996
    Duration: 7938.926 ms
     Changes:
              ----------
              apr:
                  ----------
                  new:
                      1.6.3-11.el8
                  old:
              apr-util:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-bdb:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-openssl:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              centos-logos-httpd:
                  ----------
                  new:
                      85.8-1.el8
                  old:
              httpd:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              httpd-filesystem:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.48-3.el8
                  old:
              mod_http2:
                  ----------
                  new:
                      1.15.7-3.module_el8.4.0+778+c970deab
                  old:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd is already disabled, and is running
     Started: 17:38:57.441281
    Duration: 151.979 ms
     Changes:
              ----------
              httpd:
                  True

Summary for node1
------------
Succeeded: 2 (changed=2)
Failed:    0
------------
Total states run:     2
Total run time:   8.091 s
[root@master ~]#
```
2.3 Differences between Grains and Pillar
| | Storage location | Type | Acquisition mode | Application scenarios |
|---|---|---|---|---|
| Grains | minion | static | Collected when the minion starts; can be refreshed without restarting the minion service | 1. Information query 2. Target matching on the command line 3. Target matching in the top file 4. Target matching in templates |
| Pillar | master | dynamic | Specified on the master; takes effect in real time | 1. Target matching 2. Sensitive data configuration |
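Pillar data can drive target matching just like grains. A top-file sketch, assuming the `apache: httpd` pillar key defined in the earlier example:

```yaml
base:
  'apache:httpd':
    - match: pillar
    - web.apache.install
```

On the command line the equivalent selector is the `-I` flag, e.g. `salt -I 'apache:httpd' test.ping`.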