OpenStack Live Migration
OpenStack live migration requires that virtual machines be created and run on shared storage that all compute nodes can reach; only then can an instance be moved between hosts.
1. Configuring Shared Storage
1.1 Environment
This uses the OpenStack environment from the earlier article on a three-node Icehouse GRE-mode deployment, plus an additional compute node.
The node IPs are:
controller: 10.1.101.11
network: 10.1.101.21
compute1: 10.1.101.31
compute2: 10.1.101.41
Make sure the environment is configured and working correctly.
Modify vncserver_listen in nova.conf on every node so the VNC server listens on all interfaces:
vncserver_listen = 0.0.0.0
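For reference, a minimal sketch of the relevant [DEFAULT] section of nova.conf on a compute node. vncserver_proxyclient_address is not part of this article's change; it is shown only as the setting that typically accompanies vncserver_listen, and its value here is an assumption:

[DEFAULT]
...
# listen on all interfaces so the VNC console stays reachable
# after the instance moves to another host
vncserver_listen = 0.0.0.0
# assumed value: this compute node's own IP (10.1.101.31 on compute1)
vncserver_proxyclient_address = 10.1.101.31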
1.2 Install the NFS Server
Since live migration needs shared storage, we configure a share on the controller node that all compute nodes will mount. NFS is used here.
To learn more about NFS, see: NFS (Network File System) Service Configuration and Usage.
On the controller node:
Step 1: install the NFS server packages
# apt-get install nfs-kernel-server nfs-common
Step 2: create a directory to serve as the NFS export
# mkdir /var/nfs-storage
Step 3: configure /etc/exports
root@controller:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/var/nfs-storage    *(rw,sync,fsid=0,no_root_squash)
root@controller:~# exportfs -rv
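Once nfs-common is installed on a compute node (done in the next step), you can confirm the export is visible before mounting it; the output should list /var/nfs-storage:

root@compute1:~# showmount -e 10.1.101.11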
1.3 Mount the NFS Directory on Both Compute Nodes
Note:
a. The mount point must be the instances directory under state_path=/var/lib/nova from nova.conf (i.e. /var/lib/nova/instances), and it must be the same path on both compute nodes.
b. Delete all instances on the compute nodes before doing this, otherwise zombie instances can be left behind.
Make sure that other users have execute (search) permission on the directory:
chmod o+x /var/lib/nova/instances
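To confirm the permission bits, the directory listing should show drwxr-xr-x, i.e. world-searchable:

root@compute1:~# ls -ld /var/lib/nova/instances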
Install the NFS packages (only nfs-common is strictly required on the clients, but the same packages are installed here):
# apt-get install nfs-kernel-server nfs-common
Configure /etc/fstab so the share is mounted automatically at boot:
root@compute1:/var/log/libvirt/qemu# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/xvda1 during installation
UUID=0c681b37-97ed-4d10-bd79-8d5931c443f8 /               ext4    errors=remount-ro 0       1
# swap was on /dev/xvda5 during installation
UUID=9e2efc1b-ef13-4b7c-b616-34d2a62f04ea none            swap    sw              0       0
10.1.101.11:/var/nfs-storage /var/lib/nova/instances nfs defaults 0 0
root@compute1:/var/log/libvirt/qemu# mount -a
root@compute1:~# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 19478204 2754448 15711276 15% /
udev 2530276 4 2530272 1% /dev
tmpfs 512512 224 512288 1% /run
none 5120 0 5120 0% /run/lock
none 2562556 0 2562556 0% /run/shm
cgroup 2562556 0 2562556 0% /sys/fs/cgroup
10.1.101.11:/var/nfs-storage 19478528 3164672 15301632 18% /var/lib/nova/instances
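The last line confirms that /var/lib/nova/instances on the compute node is now backed by the controller's NFS export, so both compute nodes read and write the same instance files.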
2. Modify libvirt on All Compute Nodes
Step 1: edit /etc/libvirt/libvirtd.conf (note that the same directory also contains a libvirt.conf; do not confuse the two)
Before: #listen_tls = 0
After:  listen_tls = 0
Before: #listen_tcp = 1
After:  listen_tcp = 1
Before: #auth_tcp = "sasl"
After:  auth_tcp = "none"
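As a quick sanity check, the following grep should now print exactly the three active settings:

root@compute1:~# grep -E '^(listen_tls|listen_tcp|auth_tcp)' /etc/libvirt/libvirtd.conf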
Step 2: edit /etc/default/libvirt-bin
Before: libvirtd_opts="-d"
After:  libvirtd_opts="-d -l"
Step 3: uncomment the following three lines in /etc/libvirt/qemu.conf
vnc_listen = "0.0.0.0"
user = "root"
group = "root"
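Running qemu as root avoids permission mismatches when instance files are accessed over the NFS share; this pairs with the no_root_squash option used in the export above.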
Step 4: restart libvirt-bin
service libvirt-bin restart
Confirm that the daemon is running with the new flags:
root@compute1:~# ps -ef | grep libvirt
root      9518     1  0 Jan20 ?        00:01:23 /usr/sbin/libvirtd -d -l
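You can also check that libvirtd is listening on its default TCP port, 16509, and that one node can open an unauthenticated libvirt connection to the other (this is the path the live migration will use):

root@compute1:~# netstat -lntp | grep libvirtd
root@compute1:~# virsh -c qemu+tcp://compute2/system list

The first command should show a listener on 0.0.0.0:16509; the second should print compute2's domain list without prompting for credentials.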
Then restart the nova-compute service:
service nova-compute restart
The configuration is now complete. Note the ownership and permissions of the /var/lib/nova/instances directory:
root@compute1:~# ll /var/lib/nova/
total 36
drwxr-xr-x  9 nova nova 4096 Jan 20 15:40 ./
drwxr-xr-x 42 root root 4096 Jan 20 13:59 ../
drwxr-xr-x  2 nova nova 4096 May 15  2014 buckets/
drwxr-xr-x  6 nova nova 4096 Jan  6 17:15 CA/
drwxr-xr-x  2 nova nova 4096 May 15  2014 images/
drwxr-xr-x  6 nova root 4096 Jan 20 17:06 instances/
drwxr-xr-x  2 nova nova 4096 May 15  2014 keys/
drwxr-xr-x  2 nova nova 4096 May 15  2014 networks/
drwxr-xr-x  2 nova nova 4096 May 15  2014 tmp/
3. Test the Migration
We will migrate a VM from compute1 to compute2. First, see which VMs are running on compute1:
# nova-manage vm list | grep compute1 | awk '{print $1}'
root@controller:~# nova-manage vm list
instance   node      type     state   launched             image                                 kernel  ramdisk  project                           user                              zone  index
vm001      compute2  m1.tiny  active  2015-01-20 08:30:21  a1de861a-be9c-4223-9a7a-cf5917489ce9                   60a10cd7a61b493d910eabd353c07567  be1db0d2fd134025accd2654cfc66056  nova  0
vm002      compute1  m1.tiny  active  2015-01-20 08:55:02  a1de861a-be9c-4223-9a7a-cf5917489ce9                   60a10cd7a61b493d910eabd353c07567  be1db0d2fd134025accd2654cfc66056  nova  0
root@controller:~# nova-manage vm list | grep compute1 | awk '{print $1}'
vm002
Next, look up the details of vm002, the instance to migrate, including its internal instance name:
root@controller:~# nova show 190364a5-a5a7-4e5d-8f46-6c43fb5c3446
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                       |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| OS-EXT-SRV-ATTR:host                 | compute1                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000036                                          |
| OS-EXT-STS:power_state               | 1                                                          |
| OS-EXT-STS:task_state                | -                                                          |
| OS-EXT-STS:vm_state                  | active                                                     |
| OS-SRV-USG:launched_at               | 2015-01-20T08:55:02.000000                                 |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| config_drive                         |                                                            |
| created                              | 2015-01-20T08:54:04Z                                       |
| flavor                               | m1.tiny (1)                                                |
| hostId                               | af2b0609eb984606e572ddc5135b10b0d992dc73a5f9cc581f01baec   |
| id                                   | 190364a5-a5a7-4e5d-8f46-6c43fb5c3446                       |
| image                                | cirros-0.3.2-x86_64 (a1de861a-be9c-4223-9a7a-cf5917489ce9) |
| key_name                             | -                                                          |
| metadata                             | {}                                                         |
| name                                 | vm002                                                      |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | ACTIVE                                                     |
| tenantA-Net network                  | 10.0.0.29, 10.1.101.83                                     |
| tenant_id                            | 60a10cd7a61b493d910eabd353c07567                           |
| updated                              | 2015-01-20T08:55:03Z                                       |
| user_id                              | be1db0d2fd134025accd2654cfc66056                           |
+--------------------------------------+------------------------------------------------------------+
root@controller:~# nova show 190364a5-a5a7-4e5d-8f46-6c43fb5c3446 | grep instance_name
| OS-EXT-SRV-ATTR:instance_name        | instance-00000036                                          |
root@controller:~# nova show 190364a5-a5a7-4e5d-8f46-6c43fb5c3446 | grep instance_name | awk '{print $4}'
instance-00000036
On the controller node, list the known hosts:
root@controller:~# nova-manage host list
host        zone
controller  internal
compute1    nova
compute2    nova
Check which compute services are up (:-) marks a live service, XXX a dead one):
root@controller:~# nova-manage service list
Binary            Host        Zone      Status   State  Updated_At
nova-cert         controller  internal  enabled  :-)    2015-01-20 08:57:09
nova-consoleauth  controller  internal  enabled  :-)    2015-01-20 08:57:10
nova-scheduler    controller  internal  enabled  :-)    2015-01-20 08:57:11
nova-conductor    controller  internal  enabled  :-)    2015-01-20 08:57:08
nova-compute      compute1    nova      enabled  :-)    2015-01-20 08:57:13
nova-compute      compute2    nova      enabled  :-)    2015-01-20 08:57:05
nova-compute      controller  nova      enabled  XXX    2015-01-19 06:52:42
Check the resources on the target node:
root@controller:~# nova-manage service describe_resource compute2
HOST      PROJECT                           cpu  mem(mb)  hdd
compute2  (total)                           2    2997     18
compute2  (used_now)                        1    1024     1
compute2  (used_max)                        1    512      1
compute2  60a10cd7a61b493d910eabd353c07567  1    512      1
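These numbers show compute2 has room for the incoming instance: 2997 MB of memory in total with 1024 MB currently used, so another m1.tiny (512 MB) fits.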
Kick off the live migration; when it succeeds, the command produces no output:
root@controller:~# nova live-migration 190364a5-a5a7-4e5d-8f46-6c43fb5c3446 compute2
root@controller:~#
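While the migration is in flight, the instance's task_state reads migrating; when it completes, the host field flips to compute2. Both can be watched with the same nova show fields used earlier:

root@controller:~# nova show 190364a5-a5a7-4e5d-8f46-6c43fb5c3446 | grep -E 'task_state|:host'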
The migration succeeded; vm002 is now running on the compute2 node:
root@controller:~# nova-manage vm list
instance   node      type     state   launched             image                                 kernel  ramdisk  project                           user                              zone  index
vm001      compute2  m1.tiny  active  2015-01-20 08:30:21  a1de861a-be9c-4223-9a7a-cf5917489ce9                   60a10cd7a61b493d910eabd353c07567  be1db0d2fd134025accd2654cfc66056  nova  0
vm002      compute2  m1.tiny  active  2015-01-20 08:55:02  a1de861a-be9c-4223-9a7a-cf5917489ce9                   60a10cd7a61b493d910eabd353c07567  be1db0d2fd134025accd2654cfc66056  nova  0