KVM virtualization in practice: creation, cloning, QEMU guest agent installation, and other common tasks

I recently had to learn KVM for work; the notes below record my installation practice.

Install virtualization software

  • Check whether the CPU supports KVM
    egrep 'vmx|svm' /proc/cpuinfo --color=auto
    If grep prints any matches, the CPU has the hardware virtualization extensions KVM requires: Intel VT-x shows up as vmx, AMD-V as svm.

  • Install via apt
apt-get install -y qemu-kvm libvirt-daemon libvirt-daemon-system virtinst
  • Start libvirtd and enable it at boot
    systemctl start libvirtd && systemctl enable libvirtd
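The CPU-support check above can also be done programmatically; a minimal Python sketch (the sample flags line is illustrative, not from a real host):

```python
import re

def virt_extensions(cpuinfo_text):
    """Return the hardware-virtualization flags found in /proc/cpuinfo-style
    text: 'vmx' for Intel VT-x, 'svm' for AMD-V."""
    return set(re.findall(r'\b(vmx|svm)\b', cpuinfo_text))

# Illustrative flags line from an Intel host:
sample = "flags : fpu vme de pse tsc msr pae vmx smx est"
print(virt_extensions(sample))  # {'vmx'}

# On a real host you would read the file itself:
# with open('/proc/cpuinfo') as f:
#     print(virt_extensions(f.read()))
```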

Configure the bridge adapter as follows

  • Ubuntu path: vim /etc/netplan/01-network-manager-all.yaml
  network:
    version: 2
    renderer: NetworkManager
    ethernets:
      enp2s0:              # the physical NIC; substitute your interface name
        dhcp4: no
        dhcp6: no
    bridges:
      br0:
        interfaces: [enp2s0]
        dhcp4: yes
        dhcp6: yes
        addresses: []      # or list static addresses here instead of using DHCP

Apply the new network configuration

netplan apply

Install VNC server and management tools

apt-get install xrdp
apt-get install virt-manager
apt-get install tightvncserver

Use the qemu-img command to create a disk image file

qemu-img create -f qcow2 /root/test.qcow2 20G

Use the virt-install command to create a virtual machine

Command-line-only virtual machine installation

  • First upload the installation image for the virtual machine, e.g. cn_windows_10_business_editions_version_1909_x64_dvd_0ca83907.iso, along with the virtio driver floppy image virtio-win-0.1.171_amd64.vfd
virt-install --name win-win10 --ram 2048 --cdrom=/kvm/iso/cn_windows_10_business_editions_version_1909_x64_dvd_0ca83907.iso --disk path=/qcow2/win-win10.qcow2 --disk path=/kvm/iso/virtio-win-0.1.171_amd64.vfd,device=floppy  --network source=enp2s0,source.mode=bridge,type=direct --graphics vnc,password=root,port=5913,listen= --noautoconsole --check all=off
  • See more virt-install parameters with
virt-install --help
  • Reference
[root@localhost ~]# qemu-img create -f qcow2 /kvm/vfs/vm3.qcow2 20G
[root@localhost ~]# virt-install -n vm3 \            virtual machine name
> -r 1024 \         memory size in MiB
> --vcpus 1 \       number of vCPUs
> -l /kvm/iso/Centos7.iso \      installation ISO location
> --disk path=/kvm/vfs/vm3.qcow2,format=qcow2 \      disk file location and format
> --graphics vnc,listen=,port=5924 \         VNC access, using port 5924
> --noautoconsole  \      do not automatically attempt to connect to the guest console
> --accelerate \          use KVM acceleration
> --autostart             start the domain automatically when the host boots
[root@localhost ~]# firewall-cmd --add-port=5924/tcp     allow the VNC connection

Then you can connect to the newly created virtual machine with a VNC viewer at ip:port, where ip is the host's IP and port is the one specified when the machine was created. If no password was set at creation time, none is needed. You can also look up a virtual machine's VNC port with the following command

virsh vncdisplay win-win10
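virsh vncdisplay prints a display number such as ':13'; the TCP port a viewer connects to is 5900 plus that number. A tiny helper, assuming the standard VNC base port:

```python
def vnc_port(display):
    """Map a `virsh vncdisplay` result such as ':13' to the TCP port a VNC
    viewer connects to; VNC display N listens on port 5900 + N."""
    return 5900 + int(display.strip().lstrip(':'))

print(vnc_port(':13'))  # 5913
```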

Install QEMU guest agent

  1. First configure the channel element in the domain XML, then start the virtual machine. A unix socket is created on the host and a character device appears inside the VM; the socket and the character device can be thought of as the two ends of a tunnel

  2. Start the qemu-guest-agent daemon inside the virtual machine; the daemon listens on that character device

  3. RPC commands supported by the QEMU guest agent can then be sent from the host through the channel. After reading them from the character device, the agent inside the virtual machine executes them: reading and writing files, changing passwords, and so on
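The channel carries plain line-delimited JSON over a unix socket, so the host side can be sketched with nothing but the standard library. This is a hedged sketch: the socket path matches the <source> element configured below, and talking to the socket directly only works when libvirt is not already holding it (normally you would go through `virsh qemu-agent-command` instead):

```python
import json
import socket

def qga_payload(command, arguments=None):
    """Build the newline-terminated JSON message the QEMU guest agent expects."""
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return (json.dumps(msg) + "\n").encode()

def qga_call(sock_path, command, arguments=None):
    """Send one guest-agent command over the host-side unix socket and
    return the decoded reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(qga_payload(command, arguments))
        return json.loads(s.recv(65536))

# e.g. qga_call('/tmp/channel.sock', 'guest-ping')
print(qga_payload('guest-info'))
```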

  • Add the communication channel between the host and the virtual machine in the virtual machine's XML configuration file
virsh edit <VM name>
//Add the following inside the <devices> element:

 <channel type='unix'>
      <source mode='bind' path='/tmp/channel.sock'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
 </channel>
Then shut down the virtual machine and start it

virsh shutdown VM
virsh start VM
  • If the virtual machine runs Linux, just install qemu-guest-agent inside it
apt-get install qemu-guest-agent    # Debian/Ubuntu
yum install qemu-guest-agent        # RHEL/CentOS
  • Windows
    First download the virtio-win driver ISO
    and mount it on the virtual machine manually

    Inside the virtual machine, open Windows Device Manager,
    find "PCI Simple Communications Controller",
    right-click -> Update driver, and point it at the mounted disk:

Then open My Computer -> the manually mounted CD-ROM drive, find the guest-agent directory,
and double-click the installer (qemu-ga-x64.msi for 64-bit, qemu-ga-x86.msi for 32-bit).
Afterwards, the service manager should show the QEMU Guest Agent service up and running.

  • Send a command from the host
virsh qemu-agent-command VM --cmd '{"execute":"guest-info"}'
The guest-info reply lists, among other things, all commands the agent supports

Clone a virtual machine

  • Normal clone
virt-clone -o vm -n newvm -f /root/centos7_clone1.qcow2    # -f gives the path of the new disk image
virt-clone --connect qemu:///system --original vm1 --name vm1-clone --file /vm-images/vm1-clone.qcow2
  • Clone a virtual machine via a template
    After the installation completes, shut down the virtual machine
virsh shutdown vm1

Copy the disk image file of the original virtual machine to serve as the template image

cp /kvm/vm/vm1_0.qcow2 /data/kvm/template/tpl.qcow2

Export the virtual machine's configuration file with the virsh dumpxml command

virsh dumpxml --domain vm1 > /data/kvm/template/tpl.xml

In tpl.xml, change the disk image path to /data/kvm/template/tpl.qcow2, the location of the disk image file we just copied.

<disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/data/kvm/template/tpl.qcow2'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

Delete the following lines from tpl.xml. The <mac> element must go so each clone receives a fresh MAC address; the channel <source> line points at the original domain's socket path and would be stale. libvirt regenerates both when the clone is defined.

<mac address='52:54:00:83:79:76'/>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-vm1/org.qemu.guest_agent.0'/>
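The two manual edits above (retargeting the disk and deleting the per-clone elements) can also be scripted; a sketch using Python's xml.etree against a trimmed-down domain XML (the sample is illustrative, not a complete libvirt domain definition):

```python
import xml.etree.ElementTree as ET

def prepare_template(xml_text, new_image_path):
    """Point the first disk at the template image and drop the elements
    that must be regenerated per clone (MAC address, agent socket path)."""
    root = ET.fromstring(xml_text)
    devices = root.find('devices')
    # Retarget the disk image at the copied template file
    devices.find('./disk/source').set('file', new_image_path)
    # Remove the MAC so the clone gets a fresh one
    for iface in devices.findall('interface'):
        mac = iface.find('mac')
        if mac is not None:
            iface.remove(mac)
    # Remove the per-domain guest-agent socket path
    for chan in devices.findall('channel'):
        src = chan.find('source')
        if src is not None and 'path' in src.attrib:
            chan.remove(src)
    return ET.tostring(root, encoding='unicode')

sample = """<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <source file='/kvm/vm/vm1_0.qcow2'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:83:79:76'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-vm1/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
  </devices>
</domain>"""
print(prepare_template(sample, '/data/kvm/template/tpl.qcow2'))
```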

Process tpl.qcow2 with the virt-sysprep command so it can be used for clone operations.

virt-sysprep -a /data/kvm/template/tpl.qcow2

With this, the template file and template image are ready. Note that the image's root password is still vm1's root password.

[root@localhost /data/kvm/template]# tree .

├── tpl.qcow2
└── tpl.xml

If you run virt-sysprep on the image with no other parameters, it resets the image to an almost freshly installed state. The operations it performs can be listed with:

virt-sysprep --list-operations
Custom settings such as the hostname and root password can be applied through parameters:

virt-sysprep -a /data/kvm/template/tpl.qcow2  --hostname localhost --root-password password:testpwd

Creating a virtual machine from a template

virt-clone --connect qemu:///system \
  --original-xml /data/kvm/template/tpl.xml \
  --name <new VM name> \
  --file /data/kvm/vm/vm5_0.qcow2    # path of the new disk image
virt-clone --connect qemu:///system \
  --original-xml /data/kvm/template/tpl.xml \
  --name vm3 \
  --file /data/kvm/vm/vm3_0.qcow2
The clone operation assigns a new UUID and MAC address to the new virtual machine.
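virt-clone generates that fresh MAC itself, by default inside QEMU/KVM's locally administered 52:54:00 prefix; generating one by hand looks roughly like this (sketch, not virt-clone's actual code):

```python
import random

def random_kvm_mac():
    """Generate a MAC address in the 52:54:00 prefix that QEMU/KVM uses
    by default, randomizing the last three octets."""
    return '52:54:00:' + ':'.join(f'{random.randint(0, 255):02x}' for _ in range(3))

print(random_kvm_mac())
```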

Define storage pools

  • Introduction to storage pools
    The KVM platform manages storage as storage pools. A storage pool can be a local directory, or disk space and directories provided by a remote disk array (iSCSI, NFS); various distributed file systems are supported as well.
    A storage pool is where virtual machine storage lives, local or networked; individual virtual machine disks are placed on volumes inside the pool.
    A KVM storage pool can be understood as a mapping: a piece of storage attached to the host is formed into a logical pool that KVM can use, which simplifies managing virtual machine storage.

  • Define a storage pool
    Create a KVM storage pool backed by a local directory
  mkdir -p /data/vmfs
  virsh pool-define-as vmfspool --type dir --target /data/vmfs
  virsh pool-build vmfspool

 virsh pool-list --all
 Name       State      Autostart
 vmfspool   inactive   no

virsh pool-info vmfspool
 Name:           vmfspool
 UUID:           c6d5bd62-3229-4a16-b267-081d943be80a
 State:          inactive
 Persistent:     yes
 Autostart:      no
  • Set storage pool auto start
virsh pool-autostart vmfspool
  • Start storage pool
virsh pool-start vmfspool
  • View storage pool information
virsh pool-info vmfspool
 Name:           vmfspool
 UUID:           c6d5bd62-3229-4a16-b267-081d943be80a
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       49.98 GiB    # total capacity of the mounted partition
 Allocation:     6.59 GiB     # capacity already used
 Available:      43.38 GiB    # capacity still available

  • Create an image file in the storage pool and install a VM
#Create a disk image file in the storage pool

virsh vol-create-as vmfspool test.qcow2 10G --format qcow2
 ll /data/vmfs/
//total 196
-rw------- 1 root root 197120 Oct 26 13:39 test.qcow2
[root@node71 ~]# 
[root@node71 ~]# virsh vol-info --pool vmfspool test.qcow2
//Name:        test.qcow2
//Type:        file
//Capacity:    10.00 GiB
//Allocation:  196.00 KiB
  • Delete a storage pool
virsh pool-destroy vmfspool     # stop the pool
virsh pool-undefine vmfspool    # remove the pool definition
virsh pool-delete vmfspool      # delete the pool's underlying storage resources

Create Snapshot

  • Internal disk snapshot (only qcow2-format image files are supported)
virsh snapshot-create-as --domain vm1 --name sn1

View the virtual machine's snapshot list

virsh snapshot-list --domain vm1

Roll back to a snapshot

virsh snapshot-revert --domain vm1 --snapshotname sn1

Delete a snapshot

virsh snapshot-delete --domain vm1 --snapshotname sn1
  • External disk snapshot
    When an external disk snapshot is created, the disk currently in use is saved as the backing file (it no longer receives new writes and only keeps the pre-snapshot data), and a new disk is created as the overlay that receives all subsequent writes

virsh snapshot-create-as --domain vm1 vm1_sn1 --disk-only --diskspec vda,snapshot=external,file=/disk3/vm1_sn.qcow2 --atomic --no-metadata

Domain snapshot vm1_sn1 created
Here /disk3/vm1_sn.qcow2 is the newly generated overlay disk
View the snapshot list

virsh snapshot-list vm1
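You can confirm the backing-file relationship with `qemu-img info /disk3/vm1_sn.qcow2`. A small helper that pulls the backing file out of that command's text output (the sample output below is illustrative):

```python
import re

def backing_file(qemu_img_info_output):
    """Extract the backing file path from `qemu-img info` text output.
    After an external snapshot, the new overlay's backing file is the
    original disk, which now holds only the pre-snapshot data."""
    m = re.search(r'^backing file:\s*(\S+)', qemu_img_info_output, re.M)
    return m.group(1) if m else None

sample = """image: /disk3/vm1_sn.qcow2
file format: qcow2
backing file: /kvm/vm/vm1_0.qcow2
"""
print(backing_file(sample))  # /kvm/vm/vm1_0.qcow2
```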

Posted on Sun, 17 May 2020 05:57:42 -0400 by LuckyLucy