KVM
Contents
- 1 Check that your CPU supports hardware virtualization
- 2 Use a 64 bit kernel (if possible)
- 3 Installing KVM
- 4 Network configuration for KVM
- 5 Creating a guest system
- 6 Cloning a guest system
- 7 Hints/FAQ/HOWTOs
- 8 Redirect FreeBSD Console To A Serial Port
- 9 KVM migration
- 10 Linux kvm private switch
- 11 KVM snapshot
- 12 KVM guest boot priority
- 13 KVM guest + OVS
- 14 Mounting raw image files and kpartx
- 15 VNC configuration for guest
- 16 Hypervisor memory and cpu usage stats
- 17 Documentation
- 18 TODO
Check that your CPU supports hardware virtualization
To run KVM, you need a processor that supports hardware virtualization. Intel and AMD have both developed extensions for their processors: Intel VT-x (codenamed Vanderpool) and AMD-V (codenamed Pacifica), respectively. To check whether your processor supports one of them, review the output of this command:
egrep -c '(vmx|svm)' /proc/cpuinfo
0 means that the CPU does not support hardware virtualization.
1 (or more) means that the CPU supports hardware virtualization, but you still need to make sure that virtualization is enabled in the BIOS.
On Ubuntu systems you can also run the kvm-ok command. The output will look something like this:
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
KVM acceleration can be used
Use a 64 bit kernel (if possible)
Using a 64-bit kernel on the host system is recommended but not required. To allocate more than 2 GB of RAM to a VM (virtual machine), you must use a 64-bit kernel. With a 32-bit kernel, the maximum amount of memory that can be allocated to a virtual machine is limited to 2 GB. Also, a 64-bit system can host both 32-bit and 64-bit virtual machines, while a 32-bit system can only host 32-bit ones. You can check whether your processor is 64-bit capable by running the following command:
egrep -c ' lm ' /proc/cpuinfo
0 means that the processor is not 64-bit.
1 or more means that the processor is 64-bit. Note: lm stands for Long Mode, which equates to a 64-bit CPU.
Installing KVM
Installing the required packages
The following example assumes that KVM is being installed on a system without an X server (i.e. no graphical environment). You need to install the following packages. Lucid (10.04) and later:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virtinst
Karmic (9.10) and earlier:
sudo aptitude install kvm libvirt-bin ubuntu-vm-builder bridge-utils virtinst
- libvirt-bin provides libvirtd, which is needed to administer qemu and kvm virtual machines via libvirt
- qemu-kvm (kvm in Karmic and earlier) is the backend
- ubuntu-vm-builder is a powerful command-line tool for building virtual machines
- bridge-utils provides a bridge from the external network to the virtual machines
You can also install virt-viewer to view virtual machines.
- Load the KVM kernel modules:
modprobe -a kvm
modprobe -a kvm_intel
lsmod | grep -i kvm
kvm_intel              61643  4
kvm                   384000  1 kvm_intel
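To have the modules loaded automatically at boot on Ubuntu, one option (a sketch; kvm_intel assumes an Intel CPU, use kvm_amd on AMD hosts) is to append them to /etc/modules:
echo kvm | sudo tee -a /etc/modules
echo kvm_intel | sudo tee -a /etc/modules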
Network configuration for KVM
There are several ways to give virtual machines access to the external network:
- The default virtual network configuration is known as Usermode Networking. Traffic is NATed through the host interface to the outside network.
- Alternately, you can configure Bridged Networking to enable external hosts to directly access services on the guest operating system.
Usermode Networking
With the default configuration, guests will have access to network services but will not be visible to other machines on the network. For example, a guest will be able to browse the web, but will not be able to host a web server that is reachable from the outside network.
By default, the guest OS will get an IP address in the 10.0.2.0/24 range, and the host OS will be reachable at 10.0.2.2. You should be able to ssh into the host OS (at 10.0.2.2) from inside the guest OS and use scp to copy files back and forth.
If this configuration suits your needs, no additional setup is required.
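For example, from inside the guest you could copy a file to the host over the usermode network (the user name and paths are illustrative):
scp /var/log/syslog user@10.0.2.2:/tmp/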
Assign CAP_NET_ADMIN Capability
Creating a network bridge on the host
Install the bridge-utils package:
sudo apt-get install bridge-utils
We are going to change the network configuration. To do it properly, you should first stop networking:
sudo invoke-rc.d networking stop
If you are on a remote connection, and so cannot stop networking, go ahead with the following commands, and use sudo invoke-rc.d networking restart at the end. If you make a mistake, though, it won't come back up.
To set up a bridge interface, edit /etc/network/interfaces and either comment or replace the existing config with (replace with the values for your network):
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.10
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
or to use DHCP
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
This will create a virtual interface br0.
Now restart networking:
sudo /etc/init.d/networking restart
If your VM host "freezes" for a few seconds after starting or stopping a KVM guest when using bridged networking, it is because a Linux bridge takes the hardware address of the lowest-numbered interface out of all of the connected interfaces. To work around this, add the following to your bridge configuration:
post-up ip link set br0 address f4:6d:04:08:f1:5f
and replace f4:6d:04:08:f1:5f with the hardware address of a physical ethernet adapter which will always be part of the bridge.
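You can read the adapter's current hardware address with, for example:
ip link show eth0 | grep link/ether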
Creating a guest system
virt-install lets you boot an iso image and install almost any OS. The list of supported operating systems can be found on the official KVM site:
http://www.linux-kvm.org/page/Guest_Support_Status
Several examples of installing different OSes are given below. Some problems can come up during installation, so several workarounds are provided as well.
- Prepare a disk image for the guest system:
- If you use LVM, you can prepare the image following the Understanding LVM guide (a sketch follows this list)
- If you use a file as the guest image, pass parameters such as --disk path=/var/lib/libvirt/images/guest.img,size=10; a 10 GB file will then be created automatically for the guest image
- The guest will be booted/installed from an ISO image, so download it in advance and, in some cases, mount it in a convenient place.
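A sketch of preparing an LVM-backed disk image (the volume group name, logical volume name and size are illustrative):
sudo lvcreate -L 10G -n guest1 vg0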
- Creating a new guest system and installing CentOS 6 in text mode:
virt-install --name ${NewVMName} --vcpus ${NumberOfCPU} --ram ${RAM_Size_in_MB} \
  --disk path=/dev/${VolumeGroupName}/${LogicalVolumeName} -l ${Path_To_ISO_Image_MountPoint} \
  -v --accelerate --os-type linux \
  --network=bridge:${NetworkBridgeName} --nographics -x "console=ttyS0"
- Creating a new guest system and installing CentOS 6 unattended using a Kickstart file:
virt-install --name ${NewVMName} --vcpus ${NumberOfCPU} --ram ${RAM_Size_in_MB} \
  --disk path=/dev/${VolumeGroupName}/${LogicalVolumeName} -l ${Path_To_ISO_Image_MountPoint} \
  -v --accelerate --os-type linux \
  --network=bridge:${NetworkBridgeName} --nographics \
  -x "console=ttyS0 ks=${kickstart_file_URL} ksdevice=eth0 ip=${ServerIP} netmask=${NetMask} dns=${ResolverIP} gateway=${GatewayIP}"
You can generate a Kickstart file using, for example, the graphical tool system-config-kickstart.
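A minimal Kickstart file for an unattended text-mode install might look like this sketch (all values are illustrative and should be adapted to your environment):
install
text
url --url=http://mirror.centos.org/centos-6/6/os/x86_64/
lang en_US.UTF-8
keyboard us
rootpw changeme
timezone UTC
bootloader --location=mbr --append="console=ttyS0"
clearpart --all --initlabel
autopart
reboot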
To install CentOS 6 over the network, you can use the following mirror:
http://mirror.centos.org/centos-6/6/os/x86_64/
- Creating a new guest system and installing Windows 7:
virt-install --name=Win7.vm --ram=1024 --vcpus=2 -v \
  --cdrom /media/iso/Windows_7_Ultimate_RU_x86_and_x64_\(Build_7600\).iso --disk path=/dev/vmarea/Win7 \
  --accelerate --network=bridge:br0 --vnc --noautoconsole
Useful extra parameters
- Setting up a static IP for installation using kickstart:
ks=${kickstart_file_URL} ksdevice=eth0 ip=${ServerIP} netmask=${NetMask} dns=${ResolverIP}
- Obtaining an IP address from DHCP for installation using kickstart:
ks=${kickstart_file_URL} ksdevice=eth0 ip=dhcp
- Disable ipv6 support
noipv6
Cloning a guest system
- To clone a guest system, you can use the following command:
virt-clone -o ${ORIGINAL_GUEST} -n ${NEW_GUEST_NAME} -f /dev/${VolumeGroup}/${NEW_GUEST_LogicalVolume_Name}
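For example, with illustrative names:
virt-clone -o centos6 -n centos6-clone -f /dev/vg0/centos6-clone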
Hints/FAQ/HOWTOs
Migrate from LVM disk image to file disk image
- Copy LVM image to file:
dd if=/dev/home/node1 of=/var/lib/libvirt/images/node1.img bs=512
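Alternatively, qemu-img (used elsewhere in this article) can also copy a block device into a raw image file; a sketch:
qemu-img convert -f raw -O raw /dev/home/node1 /var/lib/libvirt/images/node1.img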
- Modify the guest system config:
virsh edit node1
change
<disk type='block' device='disk'>
to
<disk type='file' device='disk'>
and change
<source dev='/dev/VG/node1'/>
to
<source file='/var/lib/libvirt/images/node1.img'/>
How to change a domain memory limit
- Checking the current domain limits:
root@workstation:~# virsh dominfo node1
Id:             -
Name:           node1
UUID:           5b870745-2cea-f658-9548-1b9711ccb0ba
OS Type:        hvm
State:          shut off
CPU(s):         2
Max memory:     2097152 kB
Used memory:    2097152 kB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
or
root@workstation:~# grep -i memory /etc/libvirt/qemu/node1.xml
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
- Set the needed memory limit in kilobytes:
virsh setmaxmem node1 4194304
or edit the configuration file directly
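To change the domain's currently allocated memory as well (not just the maximum), virsh also provides setmem; a sketch, again in kilobytes:
virsh setmem node1 4194304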
- Checking results
virsh dominfo node1
Id:             -
Name:           node1
UUID:           5b870745-2cea-f658-9548-1b9711ccb0ba
OS Type:        hvm
State:          shut off
CPU(s):         2
Max memory:     4194304 kB
Used memory:    2097152 kB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
How to change a domain vCPU limit
- Shutdown a domain
virsh shutdown node1
- Edit the domain xml configuration file
vim /etc/libvirt/qemu/node1.xml
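The element to change is <vcpu>; for example, to give the domain four virtual CPUs:
<vcpu>4</vcpu>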
- Restart the libvirt daemon
/etc/init.d/libvirt-bin restart
- Check results
virsh vcpuinfo node1
VCPU:           0
CPU:            N/A
State:          N/A
CPU time        N/A
CPU Affinity:   yyyy
How to disable virbr0 NAT Interface
The virtual network (virbr0) is used for Network Address Translation (NAT), which allows guests to access network services. However, NAT slows things down and is recommended only for desktop installations. To disable NAT forwarding, type the following commands.
Display the current setup
Type the following command:
ifconfig
Sample outputs:
virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:7921 (7.7 KiB)
Or use the following command:
virsh net-list
Sample outputs:
Name                 State      Autostart
-----------------------------------------
default              active     yes
To disable virbr0, enter:
virsh net-destroy default
virsh net-undefine default
service libvirtd restart
ifconfig
Redirect FreeBSD Console To A Serial Port
I'm using KVM to run multiple virtual machines under Red Hat Enterprise Linux server 5.5. I've installed FreeBSD 7.x 64-bit as a guest operating system. How do I redirect the FreeBSD version 6, 7 or 8 virtual machine console to a serial port, using the virsh console command for management from the host itself?
FreeBSD does support a dumb terminal on a serial port as a console. This is useful for quick logins or for debugging guest system problems without using ssh. First, log in as root to your guest operating system using ssh:
ssh root@freebsd.guest.com
Edit /boot/loader.conf, enter:
vi /boot/loader.conf
Append the following entry:
console="comconsole"
Save and close the file. Edit /etc/ttys, enter:
vi /etc/ttys
Find the line that read as follows:
ttyd0 "/usr/libexec/getty std.9600" dialup off secure
Update it as follows:
ttyd0 "/usr/libexec/getty std.9600" vt100 on secure
Save and close the file. Reboot the guest, enter:
reboot
After the reboot, you can connect to the FreeBSD guest from the host as follows (first get the list of running guest operating systems):
virsh list
Sample outputs:
 Id Name                 State
----------------------------------
  4 freebsd              running
Now, connect to the FreeBSD guest, enter:
virsh console 4
OR
virsh console freebsd
KVM migration
Requirements
1. A virtualized guest installed on shared networked storage using one of the following protocols:
- Fibre Channel
- iSCSI
- NFS
- GFS2
2. Both systems must have identical network configurations. All bridging and network configurations must be exactly the same on both hosts.
3. Shared storage must mount at the same location on source and destination systems. The mounted directory name must be identical.
KVM live migration
- Share guest's disk image via NFS (iSCSI if possible):
cat /etc/exports
/var/lib/libvirt/images 192.168.1.0/24(rw,no_root_squash,async)
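After editing /etc/exports, the export table can be reloaded with:
exportfs -ra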
- Mount the exported FS in the same place on the destination server
root@destination.server.com:~# mount source.server.com:/var/lib/libvirt/images /var/lib/libvirt/images
- On the source server run:
virsh migrate --live guest qemu+ssh://destination.server.com/system
- Stop the guest and copy the guest's disk image to the destination server:
rsync --progress -av --inplace /var/lib/libvirt/images/guest.img /var/lib/libvirt/migration/guest.img
umount /var/lib/libvirt/images/
mv /var/lib/libvirt/migration/guest.img /var/lib/libvirt/images/guest.img
- That's it
Linux kvm private switch
- Create a network xml file
<network>
  <name>private</name>
  <bridge name='privatebr0' stp='on' delay='0' />
  <mac address='52:54:00:54:95:EE'/>
  <ip address='10.20.0.1' netmask='255.255.255.0'>
  </ip>
</network>
- Import the network xml file
# virsh net-define /tmp/privatenet.xml
Network privatenet defined from /tmp/privatenet.xml
- Verify that the network is created in libvirt
[root@vicky]# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
privatenet           active     no

[root@vicky]# virsh net-info privatenet
Name            privatenet
UUID            b13de960-72cc-4f66-32df-432ce2e45538
Active:         yes
Persistent:     no
Autostart:      no
Bridge:         privatebr0
- Enable auto-start
virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes

virsh # net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
privatenet           inactive   no

virsh # net-start privatenet
Network privatenet started

virsh # net-autostart privatenet
Network privatenet marked as autostarted
virsh #
- Start the domain in virt-manager or by virsh
# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list --all
 Id Name                 State
----------------------------------
  - centos6              shut off
  - debian               shut off
  - fed1                 shut off
  - fed2                 shut off
  - mywin                shut off
  - sol10                shut off

virsh # start fed1
Domain fed1 started

virsh #
- Original article:
http://www.wagemakers.be/english/doc/linux_kvm_private_virtual_switch
KVM snapshot
- If the virtual machine image is in raw (file) format, you need to convert it to qcow2 (a disk image file format used by QEMU), since raw images do not support snapshots. You can convert to qcow2 with the following command:
qemu-img convert -O qcow2 image.img image_new.qcow
- Then edit the virtual machine's xml configuration file:
virsh edit <guest-name>
and fix the driver type and the path to the image. For example:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/centos-6.qcow'/>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' unit='0'/>
</disk>
- Get detailed information about the VM image:
qemu-img info image.qcow
- Create a snapshot of the virtual machine:
virsh snapshot-create <vm-name>
- Revert to the needed snapshot:
virsh snapshot-revert <vm-name> <snapshot-name>
- List the snapshots of a given virtual machine with:
virsh snapshot-list <vm-name>
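You can also give a snapshot a name of your choosing at creation time with snapshot-create-as (the name snap1 is illustrative):
virsh snapshot-create-as <vm-name> snap1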
KVM guest boot priority
- Modify boot priority:
<os>
  <type arch='x86_64' machine='pc-1.0'>hvm</type>
  <boot dev='cdrom'/>
  <boot dev='hd'/>
</os>
- Add cdrom device:
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/home/libvirt/iso/centos-6.4.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='1' unit='0'/>
</disk>
KVM guest + OVS
Setup vSwitch
Technically the virtual switch is a type of bridge, but do not try to manage it with the tools provided by net-misc/bridge-utils; they are not aware of how this type of bridge works. There is a compatibility mode for Open vSwitch that allows the virtual switch to be managed by bridge-utils, but that is outside the scope of this article.
- First, create the bridge. We'll call it "vbr0".
root # ovs-vsctl add-br vbr0
- Next, we will add eth1 to this bridge.
root # ovs-vsctl add-port vbr0 eth1
- The final change we need to make is to assign the bridge a controller. Without it, the bridge doesn't know what to do with the packets.
root # ovs-vsctl set-controller vbr0 ptcp:
The "ptcp:" option is set to match how the controller is setup in /etc/conf.d/ovs-controller, which by default is configured to listen to IP socket connections on port 6633. If you are only using the controller on the local machine, you can set the controller to use a unix domain sockets. Generally, unix domain sockets are more light-weight with less overhead than IP sockets, so they can provide faster communication between controller and bridge. If you want to go that route, you need to configure both the controller and bridge to use a unix socket. Please refer to the man pages for more details.
- One setting that is optional, but very highly recommended, is to turn on the spanning tree protocol.
root # ovs-vsctl set bridge vbr0 stp_enable=true
You will want to be sure the spanning tree protocol (stp) is enabled in your hardware routers and switches as well.
Setup libvirt
Recent versions of libvirt support this type of bridge; we just have to configure the virtual machine to use it.
- Connect to the local QEMU manager:
root # virsh --connect qemu:///system
- Once logged into the virsh shell, get a listing of the virtual machines:
root # list --all
- Select the virtual machine you need to configure:
root # edit foo-vm
This will open up the xml config in your default text editor. Find the section that defines the virtual OS's network interfaces. It will usually look like something similar to this:
<interface type='network'>
  <source network='default'/>
</interface>
There will probably be a few other tags inside the <interface> tag; however, we are only showing the ones you will need to change. They will need to be changed to the following:
<interface type='bridge'>
  <source bridge='vbr0'/>
  <virtualport type='openvswitch' />
</interface>
If your virtual machine is set up with more than one network interface, you will need to edit each additional network interface tag accordingly. Save the file and exit. You will return to the virsh shell, which should confirm that the virtual machine config has been updated.
- Start the virtual machine:
root # start foo-vm
Mounting raw image files and kpartx
- List the partitions found in the drive image:
# kpartx -l gothbook.img
loop1p1 : 0 512000 /dev/loop1 63
loop1p2 : 0 512000 /dev/loop1 512063
loop1p3 : 0 45056000 /dev/loop1 1024063
loop1p5 : 0 8388608 /dev/loop1 46090548
loop1p6 : 0 39070017 /dev/loop1 54492543
loop1p7 : 0 62733762 /dev/loop1 93562623
I can see from the output of kpartx that my drive image contains six partitions, along with their starting offsets. The first column tells me the names of the device files that will be created if I choose to add these device partitions. Let's add them now.
# kpartx -a -v gothbook.img
add map loop1p1 (253:6): 0 512000 linear /dev/loop1 63
add map loop1p2 (253:7): 0 512000 linear /dev/loop1 512063
add map loop1p3 (253:8): 0 45056000 linear /dev/loop1 1024063
add map loop1p5 (253:9): 0 8388608 linear /dev/loop1 46090548
add map loop1p6 (253:10): 0 39070017 linear /dev/loop1 54492543
add map loop1p7 (253:11): 0 62733762 linear /dev/loop1 93562623
# ls -l /dev/mapper
total 0
crw-rw---- 1 root root  10, 62 2010-06-15 17:40 control
brw-rw-r-- 1 neil neil 253,  6 2010-08-16 00:28 loop1p1
brw-rw-r-- 1 neil neil 253,  7 2010-08-16 00:28 loop1p2
brw-rw-r-- 1 neil neil 253,  8 2010-08-16 00:28 loop1p3
brw-rw-r-- 1 neil neil 253,  9 2010-08-16 00:28 loop1p5
brw-rw-r-- 1 neil neil 253, 10 2010-08-16 00:28 loop1p6
brw-rw-r-- 1 neil neil 253, 11 2010-08-16 00:28 loop1p7
The preceding command added six device map files to /dev/mapper. Each of these device files corresponds to a partition from that hard drive image. We can now use these device files to mount these partitions and access any files they contain. I want to mount the fifth partition (/dev/mapper/loop1p6) and have a look at its files.
# mkdir /mnt/sysimage
# mount /dev/mapper/loop1p6 /mnt/sysimage
# ls /mnt/sysimage
bin    dev   initrd.img      lost+found  opt   sbin     sys  var
boot   etc   initrd.img.old  media       proc  selinux  tmp  vmlinuz
cdrom  home  lib             mnt         root  srv      usr  vmlinuz.old
After mounting the device file, you can access the files contained on that partition. When you are done, don’t forget to umount the partition and disconnect the device map files using kpartx.
# umount /mnt/sysimage
# kpartx -d -v gothbook.img
VNC configuration for guest
With the -vnc option, you can have QEMU listen on VNC display <display> and redirect the VGA display over the VNC session. When using the VNC display, you must use the -k parameter to set the keyboard layout if you are not using en-us. Valid syntax for the display is as follows:
-vnc :0
-vnc 192.168.1.5:0
-vnc 0.0.0.0:5
-vnc 0.0.0.0:1 -k en-us
- Require password-based authentication for client connections:
-vnc 0.0.0.0:1,password -k en-us
The following example starts the centos1 guest VM using VNC:
/usr/libexec/qemu-kvm -S -M rhel5.4.0 -m 1024 -smp 1 -vnc 0.0.0.0:1 -k en-us -name centos1 -monitor pty -boot c -drive file=/var/lib/libvirt/images/centos1.img
In this particular case the VNC server will listen on TCP port 5901 (display N maps to port 5900 + N), and when you connect to the VM with a VNC client you should specify display 1. If you use an SSH tunnel, the tunneled port should be 5901.
Another method is to set the VNC parameters in the guest system's XML file:
<graphics type='vnc' port='5902' autoport='yes' listen='127.0.0.1' keymap='en-us'>
  <listen type='address' address='127.0.0.1'/>
</graphics>
Where,
- type='vnc': The graphics element has a mandatory type attribute which takes the value "sdl", "vnc", "rdp" or "desktop". In this case it is set to VNC for remote access.
- autoport='yes': The autoport attribute is the new preferred syntax for indicating autoallocation of the TCP port to use.
- passwd='YOUR-PASSWORD-HERE': The passwd attribute provides a VNC password in clear text.
- keymap='en-us': The keymap attribute specifies the keymap to use.
- listen='127.0.0.1': The listen attribute is an IP address for the server to listen on.
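For example, since the guest above listens only on 127.0.0.1 port 5902, you could reach it through an SSH tunnel from your workstation (the user and host names are illustrative):
ssh -N -L 5902:127.0.0.1:5902 user@kvm-host
vncviewer 127.0.0.1:5902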
Hypervisor memory and cpu usage stats
- Get KVM Hypervisor(Host) Memory info
virsh nodememstats
Output:
total  :  8027952 KiB
free   :  2772452 KiB
buffers:   264476 KiB
cached :  1677176 KiB
- Get KVM Hypervisor CPU info
virsh nodecpustats
Output:
user:      3916550000000
system:    1183160000000
idle:     21003220000000
iowait:     655350000000
Note: the numbers above are nanoseconds of CPU time spent in the user/system/idle/iowait states.
You can also get stats for an individual CPU if you have more than one. For example, to get details for the second CPU (CPU2), use --cpu 1:
virsh nodecpustats --cpu 1
Output:
user:      1131610000000
system:     348770000000
idle:      5141870000000
iowait:     136620000000
To get the values as a percentage of the total CPU available:
virsh nodecpustats --percent
Output:
usage:   20.3%
user:    17.5%
system:   2.8%
idle:    79.7%
iowait:   0.0%
Documentation
General docs
Configure KVM in Ubuntu
CentOS / Redhat: Install KVM Virtualization Software
KVM Virtualization: Start VNC Remote Access For Guest Operating Systems
System emulation with QEMU
libvirt.org
Using CGroups with libvirt and LXC/KVM guests in Fedora 12
Working with KVM virtual machines. Limiting virtual machine resources
CPU consumption limit that survives a VDS reboot. KVM MHz limit. KVM CPU limit
- good reference
- tips and tricks
KVM networking
Network management architecture
Network XML format
TODO
[root@kvm ~]# cd /var/lib/libvirt/images/
[root@kvm images]# ls
kvm01.img  kvm02.img  kvm03-clone.img  kvm03.img  lost+found
[root@kvm images]# fdisk -lu kvm03-clone.img
You must set cylinders.
You can do this from the extra functions menu.

Disk kvm03-clone.img: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0007c171

           Device Boot      Start         End      Blocks   Id  System
kvm03-clone.img1   *         2048     1026047      512000   83  Linux
Partition 1 does not end on cylinder boundary.
kvm03-clone.img2          1026048    16777215     7875584   8e  Linux LVM
Partition 2 has different physical/logical endings:
     phys=(1023, 254, 63) logical=(1044, 85, 1)
[root@kvm images]# mount -o loop,offset=$[2048*512] kvm03-clone.img /mnt/disk/
[root@kvm images]# ls /mnt/disk/
config-2.6.32-71.el6.x86_64  initramfs-2.6.32-71.el6.x86_64.img  System.map-2.6.32-71.el6.x86_64
efi                          lost+found                          vmlinuz-2.6.32-71.el6.x86_64
grub                         symvers-2.6.32-71.el6.x86_64.gz
# This is the contents of the guest's /boot
[root@kvm images]# umount /mnt/disk/
[root@kvm images]# mount -o loop,offset=$[1026048*512] kvm03-clone.img /mnt/disk/
mount: unknown filesystem type 'LVM2_member'
[root@kvm images]# losetup -f -o $[1026048*512] kvm03-clone.img
[root@kvm images]# losetup -a
/dev/loop0: [fd02]:15 (/var/lib/libvirt/images/kvm03-clone.img), offset 525336576
[root@kvm images]# lvm pvscan
  PV /dev/cciss/c0d1p5   VG vg_kvm1    lvm2 [68.33 GiB / 2.52 GiB free]
  PV /dev/cciss/c0d0p2   VG vg_kvm     lvm2 [67.84 GiB / 0    free]
  PV /dev/loop0          VG VolGroup   lvm2 [7.51 GiB / 0    free]
  Total: 3 [143.68 GiB] / in use: 3 [143.68 GiB] / in no VG: 0 [0   ]
[root@kvm images]# ls /dev/mapper/
control  vg_kvm1-lv_date  vg_kvm1-lv_test  vg_kvm-lv_home  vg_kvm-lv_root  vg_kvm-lv_swap
[root@kvm images]# lvm vgchange -ay
  2 logical volume(s) in volume group "vg_kvm1" now active
  3 logical volume(s) in volume group "vg_kvm" now active
  2 logical volume(s) in volume group "VolGroup" now active
[root@kvm images]# ls /dev/mapper/
control          vg_kvm1-lv_test  vg_kvm-lv_root  VolGroup-lv_root
vg_kvm1-lv_date  vg_kvm-lv_home   vg_kvm-lv_swap  VolGroup-lv_swap
[root@kvm images]# mount /dev/mapper/VolGroup-lv_root /mnt/disk/
[root@kvm images]# ls /mnt/disk/
bin   cgroup  dev  home  lib64       media  opt   root  selinux  sys  usr
boot  date    etc  lib   lost+found  mnt    proc  sbin  srv      tmp  var
# Then comes the cleanup:
[root@kvm images]# umount /mnt/disk/
[root@kvm images]# lvm vgchange -an
  Can't deactivate volume group "vg_kvm1" with 2 open logical volume(s)
  Can't deactivate volume group "vg_kvm" with 3 open logical volume(s)
  0 logical volume(s) in volume group "VolGroup" now active
[root@kvm images]# losetup -a
/dev/loop0: [fd02]:15 (/var/lib/libvirt/images/kvm03-clone.img), offset 525336576
[root@kvm images]# losetup -d /dev/loop0