SaltStack data system
SaltStack has two major data systems:
- Grains
- Pillar
The Grains component of SaltStack
Grains is the SaltStack component that stores static information collected when a minion starts.
Grains is one of the most important components in SaltStack, because it is used constantly during configuration and deployment. Put simply, grains records the common attributes of each minion, such as its CPU, memory, disk, and network information. You can view all of a minion's grains with grains.items.
Functions of Grains:
- Collect asset information
Grains application scenarios:
- Information query
- Target matching at the command line
- Target matching in the top file (sketched below)
- Target matching in templates (sketched below)
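The first two scenarios are demonstrated later in this section; the last two look like the following minimal sketch, which assumes a hypothetical `web` state:

```yaml
# top.sls: target minions by grain instead of by minion ID
base:
  'os:RedHat':
    - match: grain      # interpret the target string as a grain expression
    - web               # hypothetical state to apply
```

```jinja
{# In a template or .sls file, branch on a grain value #}
{% if grains['os'] == 'RedHat' %}
pkg_name: httpd
{% endif %}
```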
Information query
```
[root@server1 ~]# salt 'node1' grains.items
node1:
    ----------
    biosreleasedate:        // BIOS release date
        07/22/2020
    biosversion:            // BIOS version
        6.00
    cpu_flags:              // CPU feature flags
        - fpu
        - vme
        - de
        ... (long flag list truncated)
    cpu_model:
        11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
    cpuarch:
        x86_64
    disks:
        - sr0
    dns:
        ----------
        nameservers:
            - 114.114.114.114
    fqdn:
        node1
    fqdn_ip4:
        - 192.168.244.135
    fqdn_ip6:
        - fe80::22a0:ac79:2d1a:18b7
    host:
        node1
    hwaddr_interfaces:
        ----------
        ens160:
            00:0c:29:36:3e:51
        lo:
            00:00:00:00:00:00
    id:
        node1
    init:
        systemd
    ip4_gw:
        192.168.244.2
    ip4_interfaces:
        ----------
        ens160:
            - 192.168.244.135
        lo:
            - 127.0.0.1
    ipv4:
        - 127.0.0.1
        - 192.168.244.135
    kernel:
        Linux
    kernelrelease:
        4.18.0-193.el8.x86_64
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            en_US
        timezone:
            CST
    lsb_distrib_codename:
        Red Hat Enterprise Linux 8.2 (Ootpa)
    manufacturer:
        VMware, Inc.
    master:
        192.168.244.131
    mem_total:
        1800
    nodename:
        node1
    num_cpus:
        2
    os:
        RedHat
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Ootpa
    osfinger:
        Red Hat Enterprise Linux-8
    osfullname:
        Red Hat Enterprise Linux
    osmajorrelease:
        8
    osrelease:
        8.2
    saltversion:
        3004
    selinux:
        ----------
        enabled:
            True
        enforced:
            Enforcing
    serialnumber:
        VMware-56 4d 36 48 63 d4 91 d7-a6 f1 c2 aa 2e 36 3e 51
    swap_total:
        2091
    systemd:
        ----------
        version:
            239
    uid:
        0
    username:
        root
    virtual:
        VMware
    zmqversion:
        4.3.4
    ... (remaining grains truncated for brevity)
```
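The full grains.items listing is long; when you only need the grain names, grains.ls lists the keys without their values (output abbreviated):

```
[root@server1 ~]# salt 'node1' grains.ls
node1:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    ...
```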
Query a single key, for example to get the IP address:
```
[root@server1 ~]# salt '*' grains.get fqdn_ip4
node2:
    - 192.168.244.137
node3:
    - 192.168.244.138
node1:
    - 192.168.244.135
server1:
    - 192.168.244.131
[root@server1 ~]# salt '*' grains.get ip4_interfaces
node3:
    ----------
    ens160:
        - 192.168.244.138
    lo:
        - 127.0.0.1
node2:
    ----------
    ens160:
        - 192.168.244.137
    lo:
        - 127.0.0.1
node1:
    ----------
    ens160:
        - 192.168.244.135
    lo:
        - 127.0.0.1
server1:
    ----------
    ens160:
        - 192.168.244.131
    lo:
        - 127.0.0.1
[root@server1 ~]# salt '*' grains.get ip4_interfaces:ens160
server1:
    - 192.168.244.131
node3:
    - 192.168.244.138
node2:
    - 192.168.244.137
node1:
    - 192.168.244.135
```
Target matching example:
Match minions by grain with the -G option:
```
[root@server1 ~]# salt -G 'os:RedHat' cmd.run 'uptime'
node3:
     18:43:03 up 9 min,  2 users,  load average: 0.01, 0.07, 0.07
server1:
     18:43:03 up 9 min,  2 users,  load average: 0.04, 0.19, 0.16
node2:
     18:43:03 up 9 min,  2 users,  load average: 0.03, 0.15, 0.13
node1:
     18:43:03 up 9 min,  2 users,  load average: 0.12, 0.22, 0.16
```
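Grain expressions can also be combined with other matchers through the compound matcher (-C), where G@ prefixes a grain expression and L@ a list of minion IDs; a minimal sketch using the minions shown above:

```
[root@server1 ~]# salt -C 'G@os:RedHat and L@node1,node2' test.ping
node1:
    True
node2:
    True
```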
There are two ways to define custom grains:
- In the minion configuration file: search for grains in /etc/salt/minion and add entries there (sketched below)
- In a dedicated /etc/salt/grains file on the minion (recommended method)
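A minimal sketch of the first method, assuming hypothetical roles and deployment keys; restart the minion for the change to appear:

```yaml
# /etc/salt/minion: custom grains defined inline in the minion config
grains:
  roles:
    - webserver       # hypothetical values for illustration
  deployment: qa
```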
```
[root@server1 ~]# cat /etc/salt/grains
test-grains: linux-node1
[root@server1 ~]# salt '*' grains.get test-grains
server1:
    linux-node1
node1:
node3:
node2:
```
Only server1 returns a value here, because the grains file was created on server1's own minion; the other minions have no such file yet.
To make changes to custom grains take effect without restarting the minion, sync them with saltutil.sync_grains:
```
[root@server1 ~]# cat /etc/salt/grains
test-grains: linux-node1
wy: YYDS!!
[root@server1 ~]# salt '*' saltutil.sync_grains
node1:
node3:
node2:
server1:
[root@server1 ~]# salt '*' grains.get wy
node2:
node1:
node3:
server1:
    YYDS!!
```
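Alternatively, grains.setval sets a custom grain from the master without editing any file by hand; it persists the key into the minion's /etc/salt/grains (the deployment key here is a placeholder assumption):

```
[root@server1 ~]# salt 'node1' grains.setval deployment qa
node1:
    ----------
    deployment:
        qa
```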
The Pillar component of SaltStack
Pillar is another core component of SaltStack. It is a data management center: it stores and defines the data needed during configuration management, such as software version numbers, user names, and passwords, and it is frequently used together with states in large-scale configuration management. Like Grains, Pillar data is defined and stored in YAML format.
The master configuration file contains a Pillar section that defines the Pillar-related parameters:
```yaml
#pillar_roots:
#  base:
#    - /srv/pillar
```
Create the working directory:
```
[root@server1 ~]# mkdir /srv/pillar
[root@server1 ~]# systemctl restart salt-minion
```
In the default base environment, Pillar's working directory is /srv/pillar. If you want to define multiple Pillar working directories for different environments, this is the part of the configuration file to modify.
Pillar features:
- Data can be defined for specific, targeted minions
- Only the targeted minion can see the data defined for it
- Configured in the master configuration file
```
[root@server1 ~]# salt '*' pillar.items
node1:
    ----------
server1:
    ----------
node3:
    ----------
node2:
    ----------
```
By default, Pillar contains no data. If you want to see the master's configuration values as pillar data, uncomment the pillar_opts setting in the master configuration file and set its value to True.
```yaml
# master config file that can then be used on minions.
pillar_opts: true
```
```
[root@server1 ~]# systemctl restart salt-master
[root@server1 ~]# salt '*' pillar.items
server1:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
        ........
```
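Note that pillar_opts exposes the entire master configuration, potentially including sensitive values, to every minion, so it is normally enabled only for debugging; once you have confirmed Pillar works, set it back:

```yaml
# /etc/salt/master: disable again after debugging
pillar_opts: false
```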
Defining custom Pillar data:
Find pillar_roots in the master configuration file to see where Pillar data is stored:
```
[root@server1 ~]# vim /etc/salt/master
#pillar_roots:
#  base:
#    - /srv/pillar
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod

[root@server1 ~]# mkdir -p /srv/pillar/{base,prod}
[root@server1 ~]# tree /srv/pillar/
/srv/pillar/
|-- base
`-- prod

2 directories, 0 files
[root@server1 ~]# cat /srv/pillar/base/apache.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
```
(After changing pillar_roots, restart salt-master so the new directories are read.)

Define the top file entry:
```
[root@server1 ~]# cat /srv/pillar/base/top.sls
base:
  'node1':
    - apache
[root@server1 ~]# salt 'node1' pillar.items
node1:
    ----------
    apache:
        httpd
    master:
        ----------
        __cli:
            salt-master
        __role:
        ...
```
Modify the apache state file under the salt directory to reference the pillar data:
```
[root@server1 ~]# cat /srv/salt/base/web/apache/apache.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: True
```
Run the highstate:
```
[root@server1 ~]# salt 'node1' state.highstate
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 19:07:45.668778
    Duration: 636.19 ms
     Changes:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 19:07:46.317796
    Duration: 75.799 ms
     Changes:

Summary for node1
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time: 711.989 ms
```
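Two optional refinements on the example above, both standard Salt features: pillar.get lets a state fall back to a default when the pillar key is missing, and saltutil.refresh_pillar pushes pillar changes to minions without restarting anything (the 'httpd' default is an assumption for illustration):

```yaml
# apache.sls: fall back to 'httpd' if the pillar key is absent
apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}
```

```
[root@server1 ~]# salt '*' saltutil.refresh_pillar
```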
Differences between Grains and Pillar
| | Storage location | Type | Acquisition mode | Application scenarios |
|---|---|---|---|---|
| Grains | minion | static | Collected when the minion starts; can be refreshed without restarting the minion service | 1. Information query 2. Target matching on the command line 3. Target matching in the top file 4. Target matching in templates |
| Pillar | master | dynamic | Specified on the master and takes effect in real time | 1. Target matching 2. Sensitive data configuration |