Day 43-44: Overview of the Ansible automation tool, its common modules, and a detailed explanation of Ansible playbooks and roles

Task background

The company has more and more servers, and maintaining even simple things becomes very cumbersome. Managing a small number of servers with shell scripts is reasonably efficient, but once there are many servers, shell scripts can no longer deliver efficient operations. At that point we need to introduce an **automated operations** tool to manage many servers efficiently.

Task requirements

The management server should be able to flexibly and efficiently carry out O&M operations on all application servers as required

Task Disassembly

1. One server is needed as the management (control) node, to connect to and manage all application servers

2. Consider how to run operations against only a subset of the application servers (server grouping)

3. Learn to translate familiar Linux commands into automated operations (the common modules)

4. When an operation involves many steps, learn to manage it with playbooks and roles

Learning objectives

  • Be able to install the ansible server and clients

  • Be able to define an ansible host list for server grouping

  • Be able to use the hostname module to modify hostnames

  • Be able to use the file module for basic file operations

  • Be able to use the copy module to copy files to remote machines

  • Be able to use the fetch module to copy files from remote machines to the local machine

  • Be able to manage users with the user module

  • Be able to manage user groups with the group module

  • Be able to manage scheduled tasks with the cron module

  • Be able to configure yum with the yum_repository module

  • Be able to install packages with the yum module

  • Be able to use the service module to start and stop services and enable them at boot

  • Be able to use the script module to run local scripts on remote machines

  • Be able to use the command and shell modules to run commands remotely

  • Be able to write a playbook that deploys httpd

  • Be able to implement LAMP with roles

1. Understanding automated operations

Question:

Suppose I need to perform one operation on 1000 servers (for example, change a parameter in the nginx configuration file). The following two methods have obvious disadvantages:

  1. The traditional way: ssh into each server one by one and operate manually.

    Disadvantages:

    • Far too inefficient.
  2. Write a shell script to do it.

    Disadvantages:

    • The managed machines run different platforms, so the script may not be portable.

    • Passing passwords is troublesome (without key-based login, expect is needed to supply the password).

    • Still inefficient: a loop over 1000 servers runs them one by one, and backgrounding with & spawns 1000 processes.

Automated operations: taking the large volume of repetitive work in daily IT operations, from simple routine inspections, configuration changes and software installation up to orchestrating and scheduling an entire change process, and turning it from manual execution into automated execution, so as to reduce or even eliminate O&M delays and achieve "zero-delay" IT operations.

Main concerns of automated operations

If you manage many servers, you should mainly focus on the following aspects:

  1. Connection between the manager and the managed machine (how the manager sends management instructions to the managed machine)

  2. Server information collection (the managed servers may run CentOS 7.5 or other Linux distributions such as SUSE or Ubuntu; when the same task must be done differently on different OSes, you need to collect facts and handle each case separately)

  3. Server grouping (sometimes an operation targets only one group of servers, not all of them)

  4. The main categories of management work:

  • File directory management (including file creation, deletion, modification, viewing status, remote copy, etc.)

  • User and group management

  • cron scheduled task management

  • yum repository configuration and package management

  • Service management

  • Execute scripts remotely

  • Remote command execution

Comparison of common open source automated operation and maintenance tools

  1. Puppet (extension)

    Ruby-based, mature and stable. Suited to large-scale architectures, but more complex than ansible and saltstack.

  2. Saltstack (extension)

    Python-based and relatively simple, with better large-scale concurrency than ansible. However, an agent service must run on the managed nodes; if that service goes down, the connection fails.

  3. ansible

    Python-based, simple and fast. The managed nodes do not need to run any agent service; ansible works directly over the ssh protocol, which requires authentication, so with very many machines it is slower.

2, ansible

ansible is an automated operations tool developed in Python. It combines the advantages of many other tools (puppet, cfengine, chef, func, fabric) and implements batch system configuration, batch program deployment, batch command execution, and so on.

Features:

  • Simple to deploy
  • Uses ssh for management by default, built on the python paramiko module
  • Neither the management node nor the managed nodes need to run an agent service
  • Simple configuration, powerful features, strong extensibility
  • Can orchestrate multiple tasks through playbooks

Building an ansible environment

Experimental preparation: three machines, one management node and two managed nodes

  1. Static IPs
  2. Hostnames set, with hostname-to-IP bindings on every machine
  3. Firewall and selinux turned off
  4. Time synchronized
  5. yum sources confirmed and configured (the epel source is required)

Experimental process:

Step 1: install ansible on the management machine; the managed nodes must be running the ssh service

# yum install epel-release
# yum install ansible
# ansible --version
ansible 2.8.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

Step 2: set up passwordless (key-based) login from the master to the agents; this is done only on the master. (If you skip this step, add the -k option to supply the password when operating on the agents later, or put the password in the host list.)

master# ssh-keygen

master# ssh-copy-id -i 10.1.1.12
master# ssh-copy-id -i 10.1.1.13

Step 3: define the host group on the master and test the connectivity

master# vim /etc/ansible/hosts 
[group1]
10.1.1.12
10.1.1.13
master# ansible -m ping group1
10.1.1.13 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.1.1.12 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}    
master# ansible -m ping all
10.1.1.13 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.1.1.12 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Server grouping

ansible enables server grouping through a host list function.

Ansible's default host list configuration file is /etc/ansible/hosts

Example:

[nginx]					group name
apache[1:10].aaa.com	means apache1.aaa.com through apache10.aaa.com, 10 machines
nginx[a:z].aaa.com		means nginxa.aaa.com through nginxz.aaa.com, 26 machines
10.1.1.[11:15]			means 10.1.1.11 through 10.1.1.15, 5 machines
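To make the range syntax concrete, here is a small Python sketch (illustration only, not ansible code) that expands a numeric [start:end] pattern the way the host list does:

```python
# Sketch: expand a numeric [start:end] inventory pattern into hostnames.
def expand(prefix, start, end, suffix=""):
    """E.g. ("apache", 1, 10, ".aaa.com") -> apache1.aaa.com .. apache10.aaa.com"""
    return [f"{prefix}{i}{suffix}" for i in range(start, end + 1)]

print(expand("apache", 1, 10, ".aaa.com")[0])    # apache1.aaa.com
print(len(expand("apache", 1, 10, ".aaa.com")))  # 10
print(expand("10.1.1.", 11, 15))
```

The alphabetic form nginx[a:z].aaa.com works the same way, iterating over letters instead of numbers.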

Example:

[nginx]
10.1.1.13:2222			this machine, but with ssh on port 2222

Example: define the alias nginx1 for the server 10.1.1.13 (ssh port 2222)

nginx1 ansible_ssh_host=10.1.1.13 ansible_ssh_port=2222

Example: for a server without key-based login, you can specify a username and password

nginx1  ansible_ssh_host=10.1.1.13 ansible_ssh_port=2222 ansible_ssh_user=root ansible_ssh_pass="123456"

Example: grouping using aliases

nginx1  ansible_ssh_host=10.1.1.13 ansible_ssh_port=2222 ansible_ssh_user=root ansible_ssh_pass="123456"
nginx2  ansible_ssh_host=10.1.1.12

[nginx]
nginx1
nginx2

Summary:

Role of host list: server grouping.

Common functions of host list:

  1. Hosts can be grouped by IP range or by hostname range

  2. If the ssh port is not 22, the new port can be specified.

  3. For hosts without key-based login, the password can be supplied.

Exercise: whatever your environment (key-based login or not, port 22 or not), add the two managed machines to group1 in the end

ansible module

Ansible works through modules; by itself it has no batch-deployment capability. The real batch deployment is done by the modules ansible runs — ansible only provides the framework.

ansible supports a great many modules. There is no need to memorize them all: be familiar with the common ones and look up the others when you need them.

View all supported modules

# ansible-doc -l		
a10_server                                           Manage A10 Networks AX/SoftAX...
a10_server_axapi3                                    Manage A10 Networks AX/SoftAX...
a10_service_group                                    Manage A10 Networks AX/SoftAX...
a10_virtual_server                                   Manage A10 Networks AX/SoftAX...
aci_aaa_user                                         Manage AAA users (aaa:User)
. . . . . . 

To view the usage of the ping module, use the following command (likewise for other modules)
# ansible-doc ping

Official website module document address: https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html

hostname module

The hostname module is used to modify the hostname (note: it does not modify the /etc/hosts file)

https://docs.ansible.com/ansible/latest/modules/hostname_module.html#hostname-module

Change the host name of one of the remote machines to agent1.cluster.com

master# ansible 10.1.1.12  -m hostname -a 'name=agent1.cluster.com'
The basic format is: ansible <machine or group to operate on> -m <module name> -a "param1=value1 param2=value2"

file module (key)

The file module is used for file related operations (creation, deletion, soft and hard links, etc.)

https://docs.ansible.com/ansible/latest/modules/file_module.html#file-module

Create a directory

master# ansible group1 -m file -a 'path=/test state=directory'

Create a file

master# ansible group1 -m file -a 'path=/test/111 state=touch'

Recursively modify owner,group,mode

master# ansible group1 -m file -a 'path=/test recurse=yes owner=bin group=daemon mode=1777'

Delete the directory (along with all files in the directory)

master# ansible group1 -m file -a 'path=/test state=absent'

Create a file and specify owner,group,mode, etc

master# ansible group1 -m file -a 'path=/tmp/111 state=touch owner=bin group=daemon mode=1777'

Delete file

master# ansible group1 -m file -a 'path=/tmp/111 state=absent'

Create soft link file

master# ansible group1 -m file -a 'src=/etc/fstab path=/tmp/fstab state=link'

Create hard link file

master# ansible group1 -m file -a 'src=/etc/fstab path=/tmp/fstab2 state=hard'
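As a mental model, the state values above correspond to ordinary filesystem operations. A Python sketch of the local equivalents (illustration only; this is not how ansible is implemented):

```python
import os, shutil, tempfile

base = tempfile.mkdtemp()
d = os.path.join(base, "test")

os.makedirs(d, exist_ok=True)      # state=directory
f = os.path.join(d, "111")
open(f, "a").close()               # state=touch
os.chmod(f, 0o1777)                # mode=1777
link = os.path.join(base, "fstab")
os.symlink("/etc/fstab", link)     # state=link (symlink to src)
shutil.rmtree(d)                   # state=absent on a directory

print(os.path.exists(d))           # False
print(os.path.islink(link))        # True
```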

stat module (understand)

The stat module is similar to the stat command of linux and is used to obtain the status information of files.

https://docs.ansible.com/ansible/latest/modules/stat_module.html#stat-module

Get the status information of / etc/fstab file

master# ansible group1 -m stat -a 'path=/etc/fstab'

copy module (key)

The copy module is used for remote copying of files (such as copying local files to remote machines)

https://docs.ansible.com/ansible/latest/modules/copy_module.html#copy-module

Prepare a file on the master and copy it to all machines of group1

master# echo master > /tmp/222
master# ansible group1 -m copy -a 'src=/tmp/222 dest=/tmp/333'

Use the content parameter to write content directly to the remote file (the original content will be overwritten)

master# ansible group1 -m copy -a 'content="ha ha\n" dest=/tmp/333'
Note: when the arguments after -a contain quotes, remember to alternate single and double quotes; using double quotes at both levels causes problems

Use the force parameter to control whether to force overrides

If the target file already exists, it will not be overwritten
master# ansible group1 -m copy -a 'src=/tmp/222 dest=/tmp/333 force=no'
If the target file already exists, it will be forcibly overwritten
master# ansible group1 -m copy -a 'src=/tmp/222 dest=/tmp/333 force=yes'

Use the backup parameter to control whether files are backed up

backup=yes means: if the copied content differs from the existing content, back up a copy first
On the group1 machines, /tmp/333 is backed up (the backup file name gets a timestamp appended), then the new file is copied in as /tmp/333
master# ansible group1 -m copy -a 'src=/etc/fstab dest=/tmp/333 backup=yes owner=daemon group=daemon mode=1777'

When copying a directory with the copy module, pay attention to whether the source directory ends with a "/"

/etc/yum.repos.d without a trailing "/" means: copy the whole /etc/yum.repos.d directory into /tmp/
master# ansible group1 -m copy -a 'src=/etc/yum.repos.d dest=/tmp/'
/etc/yum.repos.d/ with a trailing "/" means: copy all the files inside /etc/yum.repos.d/ into /tmp/
master# ansible group1 -m copy -a 'src=/etc/yum.repos.d/ dest=/tmp/'
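This trailing-slash behavior mirrors rsync. A Python sketch of the two outcomes using shutil (illustration only; the directory names are made up):

```python
import os, shutil, tempfile

src = tempfile.mkdtemp(prefix="repos_")
open(os.path.join(src, "a.repo"), "w").close()
dst1, dst2 = tempfile.mkdtemp(), tempfile.mkdtemp()

# like "src" with no trailing "/": the directory itself lands inside dest
shutil.copytree(src, os.path.join(dst1, os.path.basename(src)))
# like "src/" with a trailing "/": only the contents land in dest
shutil.copytree(src, dst2, dirs_exist_ok=True)

print(os.path.basename(src) in os.listdir(dst1))  # True
print("a.repo" in os.listdir(dst2))               # True
```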

Exercise: configure all yum sources on the master, and then copy them to the remote machine of group1 (the contents in the directory should be exactly the same)

master# ansible group1 -m file -a "path=/etc/yum.repos.d/ state=absent"
master# ansible group1 -m copy -a "src=/etc/yum.repos.d dest=/etc/"

Exercise: after modifying the hostnames with the hostname module, modify the /etc/hosts file on the master and copy it to the group1 remote machines

First modify the /etc/hosts file on the master, then copy it over (overwriting) with the following command
master# ansible group1 -m copy -a "src=/etc/hosts dest=/etc/hosts"

DNS supplement:

  • A domain name is a globally unique name on the public network; a hostname is an intranet name (duplicates are possible, but best avoided)

  • Few sites build their own DNS for domain resolution nowadays, but multiple intranet servers can still resolve each other's hostnames through DNS

  • With ansible's hostname and copy modules, hostname management for any number of servers is easy, so a DNS server is not needed for this

template module (expansion)

Its function is almost the same as the copy module's.

The difference: the template module first renders a jinja2 template file into an ordinary file using variables, and then copies it; the copy module cannot do that. (jinja2 is a Python-based template engine.)

https://docs.ansible.com/ansible/latest/modules/template_module.html#template-module

master# ansible -m template group1 -a "src=/etc/hosts dest=/tmp/hosts"

The template module cannot copy directories; the following command will fail:

master# ansible -m template group1 -a "src=/etc/yum.repos.d/ dest=/etc/yum.repos.d/"

fetch module

The fetch module is similar to the copy module, but has the opposite effect. It is used to copy files from a remote machine to the local.

https://docs.ansible.com/ansible/latest/modules/fetch_module.html#fetch-module

Step 1: create a file with the same name (but different contents) on the two managed machines

agent1# echo agent1 > /tmp/1.txt
agent2# echo agent2 > /tmp/1.txt

Step 2: fetch the files from the master (since group1 has two machines, fetch uses per-host directories to keep the same-named files from colliding)

master# ansible group1  -m fetch -a 'src=/tmp/1.txt dest=/tmp/'
10.1.1.12 | CHANGED => {
    "changed": true, 
    "checksum": "d2911a028d3fcdf775a4e26c0b9c9d981551ae41", 
    "dest": "/tmp/10.1.1.12/tmp/1.txt", 	note the 10.1.1.12 subdirectory here
    "md5sum": "0d59da0b2723eb03ecfbb0d779e6eca5", 
    "remote_checksum": "d2911a028d3fcdf775a4e26c0b9c9d981551ae41", 
    "remote_md5sum": null
}
10.1.1.13 | CHANGED => {
    "changed": true, 
    "checksum": "b27fb3c4285612643593d53045035bd8d972c995", 
    "dest": "/tmp/10.1.1.13/tmp/1.txt", 	note the 10.1.1.13 subdirectory here
    "md5sum": "cd0bd22f33d6324908dbadf6bc128f52", 
    "remote_checksum": "b27fb3c4285612643593d53045035bd8d972c995", 
    "remote_md5sum": null
}

Step 3: delete the files fetched above, then fetch from just one machine; the hostname is still used as a subdirectory

master# rm /tmp/10.1.1.* -rf


master# ansible 10.1.1.12  -m fetch -a 'src=/tmp/1.txt dest=/tmp/'
10.1.1.12 | CHANGED => {
    "changed": true, 
    "checksum": "d2911a028d3fcdf775a4e26c0b9c9d981551ae41", 
    "dest": "/tmp/10.1.1.12/tmp/1.txt", 	even a single-host fetch is named this way
    "md5sum": "0d59da0b2723eb03ecfbb0d779e6eca5", 
    "remote_checksum": "d2911a028d3fcdf775a4e26c0b9c9d981551ae41", 
    "remote_md5sum": null
}

Note: the fetch module cannot copy directories from remote to local
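The local path fetch builds is predictable: dest, then the hostname (or IP), then the full remote path. A one-function Python sketch:

```python
# Sketch: how the fetch module composes the local destination path.
def fetch_dest(dest, host, src):
    return dest.rstrip("/") + "/" + host + src

print(fetch_dest("/tmp/", "10.1.1.12", "/tmp/1.txt"))  # /tmp/10.1.1.12/tmp/1.txt
```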

user module

The user module is used to manage user accounts and user attributes.

https://docs.ansible.com/ansible/latest/modules/user_module.html#user-module

Create user aaa: a normal user by default, with a home directory created

master# ansible group1 -m user -a 'name=aaa state=present'

Create system user bbb with login shell /sbin/nologin

master# ansible group1 -m user -a 'name=bbb state=present system=yes  shell="/sbin/nologin"'

Create ccc user, specify uid with uid parameter, and pass password with password parameter

master# echo 123456 | openssl passwd -1 -stdin
$1$DpcyhW2G$Kb/y1f.lyLI4MpRlHU9oq0

Note the format of the next command: the password hash must be in double quotes; with single quotes the password will fail verification
master# ansible group1 -m user -a 'name=ccc uid=2000 state=present password="$1$DpcyhW2G$Kb/y1f.lyLI4MpRlHU9oq0"'
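The hash passed to password= can be generated on any machine with openssl (as in the example above); a short sketch, assuming openssl is installed:

```shell
# Generate an MD5-crypt hash ($1$...) for the user module's password parameter.
# The salt is random, so the hash differs on every run.
HASH=$(echo 123456 | openssl passwd -1 -stdin)
echo "$HASH"
```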

Create an ordinary user hadoop and generate an ssh key pair with an empty passphrase

master# ansible group1 -m user -a 'name=hadoop generate_ssh_key=yes'

Delete user aaa (by default the home directory is not deleted)

master# ansible group1 -m user -a 'name=aaa state=absent'

Delete the bbb user. Use the remove=yes parameter to delete the home directory as well as the user

master# ansible group1 -m user -a 'name=bbb state=absent remove=yes'

group module

The group module is used to manage user groups and user group attributes.

https://docs.ansible.com/ansible/latest/modules/group_module.html#group-module

Create group

master# ansible group1 -m group -a 'name=groupa gid=3000 state=present'

Delete a group (it cannot be deleted while it is some user's primary group (gid))

master# ansible group1 -m group -a 'name=groupa state=absent'

cron module

The cron module is used to manage periodic time tasks

https://docs.ansible.com/ansible/latest/modules/cron_module.html#cron-module

Create a cron task. If no user is specified, it defaults to root (I am operating as root here).
If minute, hour, day, month, and weekday are not specified, each defaults to *

master# ansible group1 -m cron -a 'name="test cron1" user=root job="touch /tmp/111" minute=*/2' 

Delete cron task

master# ansible group1 -m cron -a 'name="test cron1" state=absent'
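For reference, the cron module finds its own entries by a comment line carrying the task name, which is why name= makes creation and deletion idempotent. A Python sketch of roughly what the resulting crontab entry looks like (the exact marker format is an ansible implementation detail):

```python
# Sketch: approximate crontab entry produced by the task above.
name, minute, job = "test cron1", "*/2", "touch /tmp/111"
entry = f"#Ansible: {name}\n{minute} * * * * {job}"
print(entry)
```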

yum_repository module

The yum_repository module is used to configure the yum repository.

https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html

Add a / etc/yum.repos.d/local.repo configuration file

master# ansible group1 -m yum_repository -a "name=local description=localyum baseurl=file:///mnt/ enabled=yes gpgcheck=no"
Note: this module only writes the yum repository configuration; if the repository contains no packages, installation will still fail, so manually mount the CD-ROM to the /mnt directory
# mount /dev/cdrom /mnt
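For reference, the command above writes roughly this file on the group1 machines (boolean parameters become 1/0):

```ini
# /etc/yum.repos.d/local.repo (approximate result)
[local]
name=localyum
baseurl=file:///mnt/
enabled=1
gpgcheck=0
```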

Delete the / etc/yum.repos.d/local.repo configuration file

master# ansible group1 -m yum_repository -a "name=local state=absent" 

yum module (key)

The yum module installs and removes software packages via yum.

https://docs.ansible.com/ansible/latest/modules/yum_module.html#yum-module

Install a package with yum (precondition: yum is correctly configured on the group1 machines)

master# ansible group1 -m yum -a 'name=vsftpd state=present'

Install the httpd and httpd-devel packages with yum; state=latest means install the latest version

master# ansible group1 -m yum -a 'name=httpd,httpd-devel state=latest' 

Uninstall the httpd and httpd-devel packages

master# ansible group1 -m yum -a 'name=httpd,httpd-devel state=absent' 

service module (key)

The service module is used to start and stop services and to enable or disable them at boot.

https://docs.ansible.com/ansible/latest/modules/service_module.html#service-module

Start the vsftpd service and set it to start automatically

master# ansible group1 -m service -a 'name=vsftpd state=started enabled=on'

Stop the vsftpd service and disable it at boot

master# ansible group1 -m service -a 'name=vsftpd state=stopped enabled=false'

Exercise: create a database named abc in mariadb on the managed machines of group1

Exercise:

Suppose the host list's group1 contains several machines that must now form a cluster, which requires pairwise passwordless login between ordinary users named hadoop on every machine. How can this be implemented, operating only on the master?

script module

The script module is used to execute local scripts on remote machines.

https://docs.ansible.com/ansible/latest/modules/script_module.html#script-module

Prepare a script on the master
master# vim /tmp/1.sh
#!/bin/bash
mkdir /tmp/haha
touch /tmp/haha/{1..10}

Run the master's /tmp/1.sh script on all remote machines in group1 (the script does not need execute permission)
master# ansible group1 -m script -a '/tmp/1.sh'

Extension: use a shell script to create the abc database in mariadb on the managed machines of group1

#!/bin/bash

yum install mariadb-server -y  &> /dev/null

systemctl start mariadb
systemctl enable mariadb

mysql << EOF
create database abc;
quit
EOF

Run the above script on the group1 managed machines with the script module

command and shell module

Both modules are used to execute linux commands remotely, which is very convenient for engineers already familiar with the commands.

The shell module is similar to the command module (the command module cannot handle things like $HOME, >, <, or |, but shell can)

https://docs.ansible.com/ansible/latest/modules/command_module.html

https://docs.ansible.com/ansible/latest/modules/shell_module.html

master# ansible -m command group1 -a "useradd user2"
master# ansible -m command group1 -a "id user2"

master# ansible -m command group1 -a "cat /etc/passwd |wc -l" 		-- fails
master# ansible -m shell group1 -a "cat /etc/passwd |wc -l" 		-- succeeds

master# ansible -m command group1 -a "cd $HOME;pwd" 	 -- fails
master# ansible -m shell  group1 -a "cd $HOME;pwd" 	 -- succeeds

Note: even the shell module cannot run 100% of commands; interactive programs like vim, or aliases like ll, will not work. Rather than memorizing which commands fail, develop the habit of first testing any production command in a test environment.
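The command/shell difference parallels running a process with or without a shell. A Python sketch using subprocess (illustration only, not ansible internals):

```python
import subprocess

# like the command module: no shell, so "|" is just a literal argument to cat
r1 = subprocess.run(["cat", "/etc/passwd", "|", "wc", "-l"],
                    capture_output=True, text=True)
# like the shell module: a shell interprets the pipe
r2 = subprocess.run("cat /etc/passwd | wc -l",
                    shell=True, capture_output=True, text=True)

print(r1.returncode != 0)          # True: cat fails on the literal "|" argument
print(r2.returncode)               # 0
```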

3. playbook

Playbook: an ansible "script" for configuring, deploying, and managing controlled nodes; it is how ansible operations are orchestrated.

Reference: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html

The format used is YAML (saltstack, elk, docker, docker-compose, kubernetes, etc. also use YAML)

YAML format

  • Files end with .yaml or .yml
  • The first line of the file may start with "---", marking the beginning of the YAML document (optional)
  • Comments begin with the # sign
  • All members of a list sit at the same indentation level and begin with "- " (a dash and a space)
  • A dictionary is written as simple key: value pairs (the colon must be followed by a space)
  • Note: never write these files with the tab key; use spaces

Reference: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html#yaml-syntax

Let's look at an official example to get a feel for it:

---
# An employee record
name: Example Developer
job: Developer
skill: Elite
employed: True
foods:
    - Apple
    - Orange
    - Strawberry
    - Mango
languages:
    ruby: Elite
    python: Elite
    dotnet: Lame
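The YAML above is just nested data: mappings become dictionaries and sequences become lists. The same structure written as a plain Python dict (a sketch, no YAML parser involved):

```python
record = {
    "name": "Example Developer",
    "job": "Developer",
    "skill": "Elite",
    "employed": True,
    "foods": ["Apple", "Orange", "Strawberry", "Mango"],
    "languages": {"ruby": "Elite", "python": "Elite", "dotnet": "Lame"},
}
print(record["foods"][1])             # Orange
print(record["languages"]["python"])  # Elite
```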

A playbook example

Let's look at an example directly

Step 1: create a directory to store playbooks (the path is up to you)

master# mkdir /etc/ansible/playbook

Step 2: prepare the httpd configuration file and modify it to the configuration you want

master# yum install httpd -y

Modify the configuration as needed (for testing, any visible change will do)
master# vim /etc/httpd/conf/httpd.conf

Step 3: write a playbook file (suffix .yml or .yaml)

# vim /etc/ansible/playbook/example.yaml
---
- hosts: group1
  remote_user: root
  tasks:  
  - name: ensure apache is at the latest version	
    yum: name=httpd,httpd-devel state=latest
    
  - name: write the apache config file		
    copy: src=/etc/httpd/conf/httpd.conf dest=/etc/httpd/conf/httpd.conf
    
    notify:
    - restart apache
    
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
    
  handlers:	
    - name: restart apache
      service: name=httpd state=restarted

Step 4: execute the playbook you wrote

  • The execution process is displayed, with each step marked ok, changed, failed, etc.
  • If execution fails, fix the problem and simply run the same command again; the failed steps become changed (idempotency)
# ansible-playbook /etc/ansible/playbook/example.yaml
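Idempotency means a task describes a desired state ("ensure apache is at the latest version") rather than an action, so rerunning the play is safe. A small Python sketch of the idea (hypothetical ensure_file helper, not an ansible API):

```python
import os, tempfile

def ensure_file(path):
    """Ensure path exists; return True only if a change was needed."""
    if os.path.exists(path):
        return False           # already in the desired state -> "ok"
    open(path, "w").close()
    return True                # state was changed -> "changed"

p = os.path.join(tempfile.mkdtemp(), "demo.conf")
print(ensure_file(p))  # True  (first run: "changed")
print(ensure_file(p))  # False (second run: "ok")
```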

Common playbook syntax

hosts: specifies the hosts to run the tasks on; it can be one host group or several separated by colons

remote_user: specifies the user that performs the tasks on the remote hosts

- hosts: group1			
  remote_user: root	

tasks: the task list, executed in order

  • If a task fails on a host, that host's remaining tasks are skipped (nothing is rolled back); correct the error in the playbook and run it again
  tasks:
  - name: ensure apache is at the latest version	
    yum: name=httpd,httpd-devel state=latest
    
  - name: write the apache config file		
    copy: src=/etc/httpd/conf/httpd.conf dest=/etc/httpd/conf/httpd.conf

handlers: similar to tasks, but they must be triggered via notify.

  • No matter how many tasks notify it, a handler runs only once, after all the tasks in the play have finished
  • The classic use for handlers is restarting services (or triggering a reboot); they are rarely used for anything else
    notify:				  
    - restart apache
    
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
    
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
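The notify/handler behavior — many notifications, one run at the end — can be sketched in a few lines of Python (illustration only; the second task name is made up):

```python
# Tasks that report "changed" queue a notification; duplicates collapse.
notified = []

def run_task(name, changed, notify=None):
    if changed and notify and notify not in notified:
        notified.append(notify)

run_task("write the apache config file", changed=True, notify="restart apache")
run_task("deploy an extra vhost file", changed=True, notify="restart apache")

# Handlers run once, after all tasks have finished.
print(notified)  # ['restart apache']
```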

Exercise: change httpd's port to 8080, then run the playbook to test

vars: variables

  • Defining a variable lets you reference it easily many times
master# vim /etc/ansible/playbook/example2.yaml
---
 - hosts: group1
   remote_user: root
   vars:
   - user: test1
   tasks:
   - name: create user
     user: name={{user}} state=present
master# ansible-playbook /etc/ansible/playbook/example2.yaml

Case: orchestrating vsftpd with a playbook

Write a playbook that will:

  1. Configure yum
  2. Install the vsftpd package
  3. Modify the configuration file (anonymous login must be denied)
  4. Start the service and enable vsftpd at boot
---
- hosts: group1                 
  remote_user: root                     
  tasks:                                
  - name: rm yum repository      
    file: path=/etc/yum.repos.d/ state=absent
    
  - name: sync the yum sources on the master to group1
    copy: src=/etc/yum.repos.d dest=/etc/
    
  - name: ensure vsftpd is at the latest version        
    yum: name=vsftpd state=latest
    
  - name: write the vsftpd config file
    copy: src=/etc/vsftpd/vsftpd.conf dest=/etc/vsftpd/vsftpd.conf 
    
    notify:                             
    - restart vsftpd
    
  - name: ensure vsftpd is running (and enable it at boot)
    service: name=vsftpd state=started enabled=yes
    
  handlers:                     
    - name: restart vsftpd              
      service: name=vsftpd state=restarted

A playbook orchestrating tasks for multiple hosts

---			# --- marks the start (optional, may be omitted)
- hosts: 10.1.1.12
  remote_user: root
  tasks:
  - name: create the /test1/ directory
    file: path=/test1/ state=directory
# Plays must not be separated with ---, or a syntax error is reported (the k8s YAML files written later in the course do use --- to separate sections)
- hosts: 10.1.1.13
  remote_user: root
  tasks:
  - name: create the /test2/ directory
    file: path=/test2/ state=directory
...			# ... marks the end (optional, may be omitted)

Case: orchestrating NFS server setup and client mount

1. Prepare the nfs configuration file on the master

# vim /etc/exports
/share  *(ro)

2. Write the yaml orchestration file

# vim /etc/ansible/playbook/nfs.yml
---
- hosts: 10.1.1.12
  remote_user: root
  tasks:
  - name: install the nfs server-related packages
    yum: name=nfs-utils,rpcbind,setup  state=latest

  - name: Create shared directory
    file: path=/share/ state=directory

  - name: sync the nfs config file
    copy: src=/etc/exports dest=/etc/exports

    notify: restart nfs

  - name: start the rpcbind service and enable it at boot
    service: name=rpcbind state=started enabled=on

  - name: start the nfs service and enable it at boot
    service: name=nfs state=started enabled=on

  handlers:
  - name: restart nfs
    service: name=nfs state=restarted

- hosts: 10.1.1.13
  remote_user: root
  tasks:
  - name: install the nfs client package
    yum: name=nfs-utils state=latest

  - name: mount the nfs server share
    shell: mount 10.1.1.12:/share /mnt

3. Execute playbook

# ansible-playbook /etc/ansible/playbook/nfs.yml

4. Roles (difficult)

roles introduction

Roles: a mechanism that places variables, tasks, and handlers in separate directories so they can be called easily.

If we wrote one playbook to install and manage a whole LAMP environment, it would be very long. Instead, we split the work by function — apache management, php management, mysql management — and call each piece when needed, avoiding repeated writing. It is similar to modularization in programming, achieving code reuse.
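Once the roles exist, a short top-level playbook is enough to wire them together — a sketch, assuming the roles are named httpd, mysql, and php and live under /etc/ansible/roles/:

```yaml
# site.yml (sketch): apply the three roles to group1
---
- hosts: group1
  remote_user: root
  roles:
    - httpd
    - mysql
    - php
```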

Create a directory structure for roles

files: stores files called by the copy module or scripts called by the script module.
tasks: contains at least one main.yml defining the tasks.
handlers: contains a main.yml defining the handlers.
templates: stores jinja2 templates.
vars: contains a main.yml defining variables.
meta: contains a main.yml defining this role's special settings and its dependencies.

Note: create the files, tasks, handlers, templates, vars, and meta directories under each role's own directory; unused directories can be left as empty directories

Implement lamp through roles

Three roles need to be defined: httpd, mysql and php

Step 1: create roles directory and files, and confirm the directory structure

master# cd /etc/ansible/roles/
master# mkdir -p {httpd,mysql,php}/{files,tasks,handlers,templates,vars,meta}
master# touch {httpd,mysql,php}/{tasks,handlers,vars,meta}/main.yml

master# yum install tree -y
master# tree /etc/ansible/roles/
/etc/ansible/roles/
├── httpd
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   └── vars
│       └── main.yml
├── mysql
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   └── vars
│       └── main.yml
└── php
    ├── files
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── tasks
    │   └── main.yml
    ├── templates
    └── vars
        └── main.yml

Step 2: prepare the home page file, php test page and configuration file of httpd server

master# echo "test main page" > /etc/ansible/roles/httpd/files/index.html


master# echo -e "<?php\n\tphpinfo();\n?>" > /etc/ansible/roles/httpd/files/test.php 


master# yum install httpd -y
# After modifying the configuration file as required, copy it to the files subdirectory of the httpd role
master# vim /etc/httpd/conf/httpd.conf
master# cp /etc/httpd/conf/httpd.conf /etc/ansible/roles/httpd/files/

Step 3: write the main.yml file for the httpd role

master# vim /etc/ansible/roles/httpd/tasks/main.yml
---
 - name: install httpd
   yum: name=httpd,httpd-devel state=present

 - name: Synchronize the httpd configuration file
   copy: src=/etc/ansible/roles/httpd/files/httpd.conf dest=/etc/httpd/conf/httpd.conf
   notify: restart httpd

 - name: Synchronize the home page file
   copy: src=/etc/ansible/roles/httpd/files/index.html dest=/var/www/html/index.html

 - name: Synchronize the php test page
   copy: src=/etc/ansible/roles/httpd/files/test.php dest=/var/www/html/test.php

 - name: Start httpd and enable it at boot
   service: name=httpd state=started enabled=yes
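The key=value shorthand used throughout these tasks is equivalent to the YAML dictionary form, which many style guides prefer for longer argument lists (a sketch of the same install task):

```yaml
- name: install httpd
  yum:
    name:
      - httpd
      - httpd-devel
    state: present
```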

Step 4: write the handler in the httpd role

master# vim /etc/ansible/roles/httpd/handlers/main.yml
---
- name: restart httpd
  service: name=httpd state=restarted

Step 5: write the main.yml file of the mysql role

master# vim /etc/ansible/roles/mysql/tasks/main.yml
---
- name: install mysql
  yum: name=mariadb,mariadb-server,mariadb-devel state=present

- name: Start mysql and enable it at boot
  service: name=mariadb state=started enabled=yes

Step 6: write the main.yml file for the php role

master# vim /etc/ansible/roles/php/tasks/main.yml
---
- name: Install php and its dependent packages
  yum: name=php,php-gd,php-ldap,php-odbc,php-pear,php-xml,php-xmlrpc,php-mbstring,php-snmp,php-soap,curl,curl-devel,php-bcmath,php-mysql state=present
  notify: restart httpd    # the handler is defined in the httpd role; roles in the same play share handlers

Step 7: write the playbook file of lamp and call the three roles defined above

master# vim /etc/ansible/playbook/lamp.yaml
---
- hosts: group1
  remote_user: root
  roles:
    - httpd
    - mysql
    - php

Step 8: execute the playbook file of lamp

master# ansible-playbook /etc/ansible/playbook/lamp.yaml

Extended case: implement LAMP through roles and install Discuz

Step 1: create roles directory and files, and confirm the directory structure

master# cd /etc/ansible/roles/
master# mkdir -p {httpd,mysql,php}/{files,tasks,handlers,templates,vars,meta}
master# touch {httpd,mysql,php}/{tasks,handlers,vars,meta}/main.yml

Step 2: prepare httpd related files

master# ls /etc/ansible/roles/httpd/files/
Discuz_X3.2_SC_UTF8.zip  					the Discuz software package
httpd.conf 									the configured httpd.conf file

Step 3: write the main.yml file for the httpd role

master# vim /etc/ansible/roles/httpd/tasks/main.yml
- name: Install httpd and related packages
  yum: name=httpd,httpd-devel state=latest

- name: Synchronize the configuration file
  copy: src=/etc/ansible/roles/httpd/files/httpd.conf dest=/etc/httpd/conf/httpd.conf
  notify: restart httpd

- name: Copy the discuz archive
  copy: src=/etc/ansible/roles/httpd/files/Discuz_X3.2_SC_UTF8.zip dest=/tmp/

- name: Unzip and move the site files to the httpd document root
  shell: rm -rf /var/www/html/* && rm -rf /test/ && mkdir -p /test/ && unzip /tmp/Discuz_X3.2_SC_UTF8.zip -d /test/ &> /dev/null && mv /test/upload/* /var/www/html/ && chown -R apache.apache /var/www/html/
# The command chain above is long; it could be written into a script and executed with the script module

- name: Start httpd and enable it at boot
  service: name=httpd state=started enabled=yes
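As the comment in the task list suggests, the long shell chain can be moved into a script under the role's files directory and executed with the script module, which copies a local script to the remote host and runs it there. A sketch (the file name deploy_discuz.sh is hypothetical):

```
#!/bin/bash
# /etc/ansible/roles/httpd/files/deploy_discuz.sh (hypothetical name)
rm -rf /var/www/html/*
rm -rf /test/ && mkdir -p /test/
unzip /tmp/Discuz_X3.2_SC_UTF8.zip -d /test/ &> /dev/null
mv /test/upload/* /var/www/html/
chown -R apache.apache /var/www/html/
```

The task then reduces to a single line: script: /etc/ansible/roles/httpd/files/deploy_discuz.sh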

Step 4: write the handler in the httpd role

master# vim /etc/ansible/roles/httpd/handlers/main.yml
---
- name: restart httpd
  service: name=httpd state=restarted

Step 5: write the main.yml file of mysql role

master# vim /etc/ansible/roles/mysql/tasks/main.yml
---
- name: Install mariadb and related packages
  yum: name=mariadb-server,mariadb-devel state=latest

- name: Start mariadb and enable it at boot
  service: name=mariadb state=started enabled=yes

- name: Execute the database creation script
  script: /etc/ansible/roles/mysql/files/create.sh

Step 6: write mysql database creation script

master# vim /etc/ansible/roles/mysql/files/create.sh

#!/bin/bash

mysql << EOF
create database if not exists discuz default charset=utf8;
grant all on discuz.* to 'discuz'@'localhost' identified by '123';
flush privileges;
EOF
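The heredoc script above works, but Ansible also provides mysql_db and mysql_user modules that achieve the same result idempotently (a sketch; these modules require the MySQL Python bindings on the managed host):

```yaml
- name: Create the discuz database
  mysql_db: name=discuz encoding=utf8 state=present

- name: Create the discuz user with all privileges on discuz.*
  mysql_user: name=discuz password=123 priv='discuz.*:ALL' host=localhost state=present
```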

Step 7: write the main.yml file for the php role

master# vim /etc/ansible/roles/php/tasks/main.yml
---
- name: Install php and its dependent packages
  yum: name=php,php-gd,php-ldap,php-odbc,php-pear,php-xml,php-xmlrpc,php-mbstring,php-snmp,php-soap,curl,curl-devel,php-bcmath,php-mysql state=present
  notify: restart httpd

Step 8: write the playbook file of lamp and call the three roles defined above

master# vim /etc/ansible/playbook/lamp.yaml
---
- hosts: group1
  remote_user: root
  roles:
    - httpd
    - mysql
    - php

Step 9: execute the playbook file of lamp

master# ansible-playbook /etc/ansible/playbook/lamp.yaml

Practice

Please use roles to implement LNMP
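A possible skeleton for this exercise, following the same pattern as the lamp playbook above (the role names nginx and php-fpm are suggestions; each role still needs its own tasks, files and handlers):

```yaml
---
- hosts: group1
  remote_user: root
  roles:
    - nginx
    - mysql
    - php-fpm
```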


Posted on Tue, 05 Oct 2021 15:14:56 -0400 by Begby