====== Openstack ======
===== Links =====
* [[https://www.youtube.com/watch?v=Fx0sHVCOcpI|SuSE Cloud 4 cluster installation demo]]
* [[https://ibm-blue-box-help.github.io/help-documentation/fr/openstack/|A few tutorials in French]]
* [[https://developer.openstack.org/api-guide/compute/server_concepts.html|Concepts]]
* Official documentation:
* [[https://docs.openstack.org/|Openstack]]
* [[https://access.redhat.com/documentation/en/red-hat-openstack-platform/|RedHat Openstack Platform]]
* [[https://www.suse.com/documentation/suse-openstack-cloud-7/|SusE Openstack cloud 7]]
{{https://www.suse.com/documentation/suse-openstack-cloud-7/singlehtml/book_cloud_admin/images/openstack-arch-kilo-logical-v1.png?nolink}}
===== SuSE Cloud 7 installation =====
==== Installing the pattern ====
On the admin node:
$ zypper in -t pattern cloud_admin
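Optionally, the available cloud patterns can be listed first:
$ zypper search -t pattern cloud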
==== Preparing PXE ====
On the admin node, mount the SLES 12 and SuSE Cloud media one after the other:
# Copy the SLES 12 media
$ mount /dev/sr0 /mnt/
$ rsync -avP /mnt/ /srv/tftpboot/suse-12.2/x86_64/install/
$ umount /mnt
# Copy the SuSE Cloud 7 media
$ mount /dev/sr1 /mnt/
$ rsync -avP /mnt/ /srv/tftpboot/suse-12.2/x86_64/repos/Cloud/
$ umount /mnt
==== Crowbar ====
Installing Crowbar:
* Run the ''yast crowbar'' command
* Change the crowbar password
* Choose the network type ''Network Mode'': ''single'' (for testing only)
* Choose the repository type: ''Repositories'' => ''Remote SMT Server'' (SMT, SUSE Manager, ...)
* Edit the ''/etc/crowbar/network.json'' file
* Start the ''crowbar-init'' service: ''systemctl start crowbar-init''
* Create the database: ''crowbarctl database create --db_username=crowbar --db_password=crowbar'' (for help: ''crowbarctl database help create'')
* Click **Start Installation** on http://MyAdminNode/
* Copy the admin node's public key (/root/.ssh/id_rsa.pub) into the **provisioner** barclamp
* Change the default password: edit the provisioner barclamp in **raw** mode and modify the **root_password_hash: "XXX"** line, replacing the **XXX** string with the hash generated by ''openssl passwd -1'' (see the example below)
* Check that the NTP / DNS barclamp configuration is correct
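Example of generating the hash (the password below is only a placeholder; the command prints an MD5-crypt string starting with ''$1$'' which replaces the **XXX** value):
$ openssl passwd -1 'MyRootPassword'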
==== Installing the controller node ====
* Node installation is done via AutoYaST. The template is ''/opt/dell/chef/cookbooks/provisioner/templates/default/autoyast.xml.erb''; after modifying it, run ''knife cookbook upload -o /opt/dell/chef/cookbooks/ provisioner'' so that the change is taken into account
* Boot the controller node over PXE
* In the Crowbar web interface, click on the node (shown in yellow), then on **edit**:
* Change its alias, for example **controller1**
* Set its role to **controller**
* Click on **allocate**
==== Creating Crowbar groups ====
Create the **admin**, **controller** and **compute** groups. Use drag & drop to place the nodes in the appropriate groups.
==== Configuring an external Ceph ====
* For Glance:
$ ceph auth get-or-create-key client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
AQCKEoxZ7ExzDhAAwU1Tz3dYMpyQN50wFymntw==
$ ceph-authtool /etc/ceph/ceph.client.glance.keyring --create-keyring --name=client.glance --add-key=AQCKEoxZ7ExzDhAAwU1Tz3dYMpyQN50wFymntw==
creating /etc/ceph/ceph.client.glance.keyring
added entity client.glance auth auth(auid = 18446744073709551615 key=AQCKEoxZ7ExzDhAAwU1Tz3dYMpyQN50wFymntw== with 0 caps)
* For Cinder:
$ ceph auth get-or-create-key client.cinder mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=vms'
AQCYF4xZjpUvBBAAy7c1M5Ua483ju2uStXTyqg==
$ ceph-authtool /etc/ceph/ceph.client.cinder.keyring --create-keyring --name=client.cinder --add-key=AQCYF4xZjpUvBBAAy7c1M5Ua483ju2uStXTyqg==
creating /etc/ceph/ceph.client.cinder.keyring
added entity client.cinder auth auth(auid = 18446744073709551615 key=AQCYF4xZjpUvBBAAy7c1M5Ua483ju2uStXTyqg== with 0 caps)
* Log in to the storage node:
$ zypper in -y openstack-glance openstack-cinder openstack-nova
$ mkdir /etc/ceph
$ scp root@ServeurAdminCeph:/etc/ceph/ceph.conf /etc/ceph/
$ chmod 664 /etc/ceph/ceph.conf
$ scp root@ServeurAdminCeph:/etc/ceph/ceph.client.cinder.keyring /etc/ceph
$ chmod 640 /etc/ceph/ceph.client.cinder.keyring
$ scp root@ServeurAdminCeph:/etc/ceph/ceph.client.glance.keyring /etc/ceph
$ chmod 640 /etc/ceph/ceph.client.glance.keyring
$ chown root.cinder /etc/ceph/ceph.client.cinder.keyring
$ chown root.glance /etc/ceph/ceph.client.glance.keyring
On the admin node:
$ crowbar network allocate_ip "default" d52-54-00-31-d9-e3.cloud.velannes.com "storage" "host"
Allocate ip default "{\"conduit\":\"intf1\",\"vlan\":200,\"use_vlan\":true,\"add_bridge\":false,\"mtu\":1500,\"subnet\":\"192.168.125.0\",\"netmask\":\"255.255.255.0\",\"broadcast\":\"192.168.125.255\",\"ranges\":{\"host\":{\"start\":\"192.168.125.10\",\"end\":\"192.168.125.239\"}},\"address\":\"192.168.125.11\"}"
$ chef-client
On the Ceph admin node (warning: a size of 1 is used here only to save disk space for testing, do not use this in production):
$ ceph osd pool create rbd 32 32
$ ceph osd pool set rbd size 1
$ ceph osd pool set rbd min_size 1
$ ceph osd pool create images 32 32
$ ceph osd pool set images size 1
$ ceph osd pool set images min_size 1
$ ceph osd pool create volumes 32 32
$ ceph osd pool set volumes size 1
$ ceph osd pool set volumes min_size 1
$ ceph osd pool create vms 32 32
$ ceph osd pool set vms size 1
$ ceph osd pool set vms min_size 1
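As a quick sanity check, the pools and their usage can be listed on the Ceph admin node:
$ ceph osd lspools
$ ceph df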
==== Barclamps ====
Deploy the barclamps (do not deploy the Ceph ones if an external Ceph cluster is used):
* Database
* Rabbitmq
* Keystone
* Glance => for **Default Storage Store** choose **Rados**
* Cinder => delete the default backend and select **Rados**
* Neutron
* Nova
* Horizon
* Heat
We now configure Nova so that it creates its instances in Ceph (pool **vms**) by default when no associated volume is specified:
* On the compute nodes, copy the Ceph keyrings and ceph.conf
-rw-r----- 1 root nova 64 10 août 18:00 ceph.client.cinder.keyring
-rw-r----- 1 root glance 64 10 août 18:01 ceph.client.glance.keyring
-rw-rw-r-- 1 root root 297 10 août 17:58 ceph.conf
* Check the key permissions on the controller nodes:
-rw-r----- 1 root cinder 64 10 août 18:00 ceph.client.cinder.keyring
-rw-r----- 1 root glance 64 10 août 18:01 ceph.client.glance.keyring
* On the compute nodes, create the ''/etc/nova/nova.conf.d/gigix.conf'' file:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 7372d9de-ade8-4a2a-b534-96bd3eb46076
disk_cachemodes="network=writeback"
A secret must be created for libvirt (KVM) (reuse the UUID set in the **rbd_secret_uuid** variable):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
The **uuidgen** binary can be used to generate a random UUID.
Define the secret in libvirt:
$ virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
$ virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(ceph auth print_key client.cinder) && rm secret.xml
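A quick check that the secret is registered in libvirt:
$ virsh secret-list
$ virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337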
===== Procedures =====
==== Openstack environment ====
The environment below must be sourced in order to run commands:
export OS_USERNAME=admin
export OS_PASSWORD=crowbar
export OS_TENANT_NAME=openstack
export OS_PROJECT_NAME=openstack
export OS_AUTH_URL=http://controller1:5000/v2.0
The easiest way is to download and source the RC file from the Project/Compute/Access & Security/API Access tab => http://dashboard/project/access_and_security/
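For example, assuming the RC file was saved as ''admin-openrc.sh'', sourcing it and requesting a token is a quick way to validate the credentials:
$ source admin-openrc.sh
$ openstack token issue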
$ openstack user list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| e0257f9ab0bd4bcea52ee3596c6ff9e4 | admin |
| 88bbf7feac204a4cb69a64d93bf603ba | cinder |
| 2fd8f721cf814c39b9994391f0a3588b | crowbar |
| 73e80c62014046398e9ddd3280332689 | glance |
| a7401019fb4b4ae3be1c89a0a5875f02 | heat |
| 51efcaba6b6744aeb222c73a0d522885 | neutron |
| 183f476753554c99942011497452cab6 | nova |
+----------------------------------+---------+
==== Connecting to a VM on its private (fixed) network ====
Open ICMP ping (optional) and the SSH port (tcp/22):
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ ip netns
$ ip netns exec qdhcp-f3ae292e-8299-4faf-91ba-402629acd5b8 ping 192.168.123.11
$ ip netns exec qdhcp-f3ae292e-8299-4faf-91ba-402629acd5b8 ssh 192.168.123.11
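The namespace name is ''qdhcp-'' followed by the network UUID; the UUID of a network can be retrieved with, for example:
$ openstack network show fixed -f value -c id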
==== Checking your compute nodes for virtualization ====
$ virt-host-validate
==== Installing Openstack on KVM ====
**kvm-intel.nested=1** must be added to the kernel boot parameters. To do so, append **kvm-intel.nested=1** to the end of the **GRUB_CMDLINE_LINUX** variable (in ''/etc/default/grub''), then regenerate ''/boot/efi/EFI/fedora/grub.cfg'' with:
$ grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
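After a reboot, nested virtualization support can be checked; the module parameter should report ''Y'' (or ''1''):
$ cat /sys/module/kvm_intel/parameters/nested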
Set the CPU of the VM that will do the virtualization to **host-passthrough** in libvirt, using ''virsh edit mycomputenode''.
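The corresponding element in the domain XML looks like this (standard libvirt syntax, shown here as a sketch):
<cpu mode='host-passthrough'/>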
==== Allowing a VIP to move between 2 ports ====
By default Openstack blocks this behaviour.
* Allocate a port and an IP for our VIP on the network named **provider**:
$ neutron port-create --name vip-port provider
Created a new port:
+-----------------------+---------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| created_at | 2017-08-19T17:54:10Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "da01642b-d0eb-458a-81e9-d7215b82801b", "ip_address": "10.0.0.8"} |
| id | 7f25c1b9-fb93-4f89-bc45-31dad3bb96ef |
| mac_address | fa:16:3e:48:10:af |
| name | vip-port |
| network_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 5 |
| security_groups | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| status | DOWN |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| updated_at | 2017-08-19T17:54:10Z |
+-----------------------+---------------------------------------------------------------------------------+
We obtained the IP **10.0.0.8** (we could also have chosen it explicitly).
* Create 2 additional ports, one for each VM, specifying our address **10.0.0.8** in the **--allowed-address-pair ip_address** parameter:
$ neutron port-create --name vm1-port --allowed-address-pair ip_address=10.0.0.8 provider
Created a new port:
+-----------------------+---------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | {"ip_address": "10.0.0.8", "mac_address": "fa:16:3e:f5:75:c8"} |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| created_at | 2017-08-19T17:57:39Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "da01642b-d0eb-458a-81e9-d7215b82801b", "ip_address": "10.0.0.3"} |
| id | 9a321511-49e5-42d6-8530-91742548ec75 |
| mac_address | fa:16:3e:f5:75:c8 |
| name | vm1-port |
| network_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 6 |
| security_groups | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| status | DOWN |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| updated_at | 2017-08-19T17:57:39Z |
+-----------------------+---------------------------------------------------------------------------------+
$ neutron port-create --name vm2-port --allowed-address-pair ip_address=10.0.0.8 provider
Created a new port:
+-----------------------+---------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | {"ip_address": "10.0.0.8", "mac_address": "fa:16:3e:e9:17:12"} |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| created_at | 2017-08-19T17:57:48Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "da01642b-d0eb-458a-81e9-d7215b82801b", "ip_address": "10.0.0.5"} |
| id | d0b4d138-cd30-4f22-a128-cf2b12a2cea6 |
| mac_address | fa:16:3e:e9:17:12 |
| name | vm2-port |
| network_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 6 |
| security_groups | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| status | DOWN |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| updated_at | 2017-08-19T17:57:48Z |
+-----------------------+---------------------------------------------------------------------------------+
* The [[https://fr.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol|VRRP]] protocol (IP protocol 112) must now be allowed between the 2 VMs:
$ openstack security group create vrrp
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2017-08-19T18:08:59Z |
| description | vrrp |
| headers | |
| id | 36d587b9-5f5c-49d9-9c49-72225ccb671b |
| name | vrrp |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| rules | created_at='2017-08-19T18:08:59Z', direction='egress', ethertype='IPv4', id='b359d9d5-9d5b-43f0-b094-5a93f1cbe301', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', revision_number='1', updated_at='2017-08-19T18:08:59Z' |
| | created_at='2017-08-19T18:08:59Z', direction='egress', ethertype='IPv6', id='2113c9d4-0f58-4728-85b9-0e4e341cb6ec', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', revision_number='1', updated_at='2017-08-19T18:08:59Z' |
| updated_at | 2017-08-19T18:08:59Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
$ openstack security group rule create --protocol 112 vrrp
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-08-19T18:10:30Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 325073f8-f162-4a10-8b15-8b6d8f0cf3dd |
| port_range_max | None |
| port_range_min | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | 112 |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 36d587b9-5f5c-49d9-9c49-72225ccb671b |
| updated_at | 2017-08-19T18:10:30Z |
+-------------------+--------------------------------------+
* Add the **vrrp** security group to the ports of the 2 VMs:
$ neutron port-update --security-group vrrp vm1-port
Updated port: vm1-port
$ neutron port-update --security-group vrrp vm2-port
Updated port: vm2-port
* Create 3 floating IPs (1 for each port):
$ openstack floating ip create floating
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-19T18:15:36Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.126.140 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| headers | |
| id | 1131c9e5-7cf7-4368-9a39-2cfa3b740adf |
| port_id | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-19T18:15:36Z |
+---------------------+--------------------------------------+
$ openstack floating ip create floating
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-19T18:15:37Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.126.131 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| headers | |
| id | 90f3a01b-4982-4c60-892b-8b783db96546 |
| port_id | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-19T18:15:37Z |
+---------------------+--------------------------------------+
$ openstack floating ip create floating
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-19T18:16:55Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.126.129 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| headers | |
| id | 8c270b78-4fd8-45ac-8c38-9f141393bc4d |
| port_id | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-19T18:16:55Z |
+---------------------+--------------------------------------+
* Associate a floating IP with each of the ports:
$ neutron floatingip-associate 1131c9e5-7cf7-4368-9a39-2cfa3b740adf 9a321511-49e5-42d6-8530-91742548ec75
Associated floating IP 1131c9e5-7cf7-4368-9a39-2cfa3b740adf
$ neutron floatingip-associate 8c270b78-4fd8-45ac-8c38-9f141393bc4d d0b4d138-cd30-4f22-a128-cf2b12a2cea6
Associated floating IP 8c270b78-4fd8-45ac-8c38-9f141393bc4d
$ neutron floatingip-associate 90f3a01b-4982-4c60-892b-8b783db96546 7f25c1b9-fb93-4f89-bc45-31dad3bb96ef
Associated floating IP 90f3a01b-4982-4c60-892b-8b783db96546
* Create the 2 VMs, attaching them to the previously created ports:
$ openstack server create --image cirros-0.3.5 --flavor m1.tiny --nic port-id=vm1-port vm1
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | wJyevwkG9fK7 |
| config_drive | |
| created | 2017-08-19T18:19:49Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 58333836-d19a-43c4-9f3e-11cd330fd45c |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | vm1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-08-19T18:19:50Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+-----------------------------------------------------+
$ openstack server create --image cirros-0.3.5 --flavor m1.tiny --nic port-id=vm2-port vm2
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 5Knrnz4UC8EL |
| config_drive | |
| created | 2017-08-19T18:20:06Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 5905c6ad-c525-4aef-8d9f-ca9a72ada63c |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | vm2 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-08-19T18:20:06Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+-----------------------------------------------------+
Your VIP can now fail over between the ports.
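Inside the two VMs, the failover itself is typically handled by a VRRP daemon such as keepalived. A minimal sketch of ''/etc/keepalived/keepalived.conf'' using the VIP **10.0.0.8** (the interface name, router id and priorities are assumptions):
vrrp_instance VI_1 {
    state MASTER             # BACKUP on vm2
    interface eth0           # assumed interface name inside the VM
    virtual_router_id 51
    priority 100             # use a lower priority on vm2
    advert_int 1
    virtual_ipaddress {
        10.0.0.8
    }
}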
===== Commands =====
==== Network ====
https://docs.openstack.org/neutron/pike/admin/
=== agent ===
== List agent status ==
$ openstack network agent list
+--------------------------------------+----------------------+--------------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+----------------------+--------------------+-------------------+-------+-------+---------------------------+
| fe0566d5-667d-4d20-b1c6-0ee6df983133 | Open vSwitch agent | d52-54-00-2e-69-ac | None | True | UP | neutron-openvswitch-agent |
| 7b844e82-e506-4303-839c-f1538b5a3bc7 | Loadbalancerv2 agent | d52-54-00-31-d9-e3 | None | True | UP | neutron-lbaasv2-agent |
| f5513313-c3c1-455c-8bd4-7b58eff786b7 | L3 agent | d52-54-00-31-d9-e3 | nova | True | UP | neutron-l3-agent |
| 29e2f29c-7045-4194-b7d8-9192c8e5487d | Metering agent | d52-54-00-31-d9-e3 | None | True | UP | neutron-metering-agent |
| a19a7c03-cf93-4a47-b2a3-d4c9e4bc26db | DHCP agent | d52-54-00-31-d9-e3 | nova | True | UP | neutron-dhcp-agent |
| 5a530cc7-600b-4ee9-b6a8-b1c7a23bfdb6 | Open vSwitch agent | d52-54-00-31-d9-e3 | None | True | UP | neutron-openvswitch-agent |
| bea053b3-b020-43c8-9cb1-ac7ad2ae412a | Metadata agent | d52-54-00-2e-69-ac | None | True | UP | neutron-metadata-agent |
| 756e5b94-bff9-4830-8d0a-e99af5d1e394 | L3 agent | d52-54-00-2e-69-ac | nova | True | UP | neutron-l3-agent |
| a1ea2a85-9e5f-4d0b-914b-c479915fab60 | Metadata agent | d52-54-00-ae-26-d7 | None | True | UP | neutron-metadata-agent |
| 876d507f-c48c-42ba-8517-29788e2005c7 | L3 agent | d52-54-00-ae-26-d7 | nova | True | UP | neutron-l3-agent |
| 118d7860-50fe-43ac-a813-7e78d1942a9f | Metadata agent | d52-54-00-31-d9-e3 | None | True | UP | neutron-metadata-agent |
| 375f008a-65af-4956-93de-80173920bff6 | Open vSwitch agent | d52-54-00-ae-26-d7 | None | True | UP | neutron-openvswitch-agent |
+--------------------------------------+----------------------+--------------------+-------------------+-------+-------+---------------------------+
$ openstack network agent show 876d507f-c48c-42ba-8517-29788e2005c7
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| agent_type | L3 agent |
| alive | False |
| availability_zone | nova |
| binary | neutron-l3-agent |
| configurations | agent_mode='dvr', ex_gw_ports='0', external_network_bridge='', floating_ips='0', gateway_external_network_id='', |
| | handle_internal_only_routers='True', interface_driver='neutron.agent.linux.interface.OVSInterfaceDriver', interfaces='0', |
| | log_agent_heartbeats='False', routers='0' |
| created_at | 2017-08-15 14:32:12.804970 |
| description | None |
| heartbeat_timestamp | 2017-08-17 21:46:40.509982 |
| host | d52-54-00-ae-26-d7 |
| id | 876d507f-c48c-42ba-8517-29788e2005c7 |
| started_at | 2017-08-17 21:45:10.551521 |
| topic | l3_agent |
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
=== router ===
== List routers ==
$ openstack router list
+--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+
| ID | Name | Status | State | Distributed | HA | Project |
+--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+
| 3d3a7b6d-8a1e-4cf4-8799-e12f45470168 | router-floating | ACTIVE | UP | True | False | fd45b94bf13f4836b84b325acaa84869 |
+--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+
$ openstack router show router1
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-08-17T18:53:37Z |
| description | |
| distributed | True |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| id | 0bbdd97b-76ba-4fdd-9d1f-58b1cdbb1089 |
| name | router1 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| updated_at | 2017-08-17T18:53:37Z |
+-------------------------+--------------------------------------+
== List a router's ports ==
$ neutron router-port-list router1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 63cd1c22-170e-40a2-a195-9f573e6a3111 | | fa:16:3e:57:74:0e | {"subnet_id": "9753bf09-a6ba-46d5-aa22-55131fd4f0b2", "ip_address": "192.168.102.8"} |
| 7640dfb4-41aa-4774-a411-b4e6b1a7d599 | | fa:16:3e:05:f7:10 | {"subnet_id": "bff1b72f-1ca4-4220-91c6-8b155ce31afd", "ip_address": "192.168.126.254"} |
| b9454278-c716-4554-bd3c-70ba086cdae5 | | fa:16:3e:ea:ac:9c | {"subnet_id": "cbef554f-fba7-47c1-a9ed-b56849082413", "ip_address": "192.168.101.1"} |
| dec2b347-4604-4ad3-8fa9-4de5abae4739 | | fa:16:3e:56:59:ac | {"subnet_id": "cbef554f-fba7-47c1-a9ed-b56849082413", "ip_address": "192.168.101.11"} |
| fc67915b-2fa0-4e94-b42d-2f19b580d828 | | fa:16:3e:3c:a6:ef | {"subnet_id": "9753bf09-a6ba-46d5-aa22-55131fd4f0b2", "ip_address": "192.168.102.1"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
== Create a router ==
$ openstack router create router1
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-08-17T18:43:55Z |
| description | |
| distributed | True |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| headers | |
| id | 097be81c-5bdf-4270-a64b-9f34f7bcff54 |
| name | router1 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| updated_at | 2017-08-17T18:43:55Z |
+-------------------------+--------------------------------------+
== Add a gateway to the router ==
$ neutron router-gateway-set --fixed-ip ip_address=192.168.126.254 router1 floating
Set gateway for router router1
$ openstack router show router1
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2017-08-17T20:30:51Z |
| description | |
| distributed | True |
| external_gateway_info | {"network_id": "53dd9c6a-d6c2-4ff2-8848-cee65769bf4a", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "bff1b72f- |
| | 1ca4-4220-91c6-8b155ce31afd", "ip_address": "192.168.126.254"}]} |
| flavor_id | None |
| ha | False |
| id | 49c78170-1a0a-447b-b774-e6d00b91e6b3 |
| name | router1 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 34 |
| routes | |
| status | ACTIVE |
| updated_at | 2017-08-18T20:06:07Z |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
== Remove the router's gateway ==
$ neutron router-gateway-clear router1
Removed gateway from router router1
== Add subnets ==
$ openstack router add subnet router1 subnet1
$ openstack router add subnet router1 subnet2
== Remove subnets ==
$ openstack router remove subnet router1 subnet1
$ openstack router remove subnet router1 subnet2
== Delete a router ==
$ openstack router delete router1
=== network ===
== List networks ==
$ openstack network list
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| 04d86f38-1ecf-4c1a-a215-398a3ca2b661 | fixed | bf3f1422-266b-4304-938c-22fe735aabb8 192.168.123.0/24 |
| 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a | floating | bff1b72f-1ca4-4220-91c6-8b155ce31afd 192.168.126.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
$ openstack network show floating
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2017-08-17T17:09:44Z |
| description | |
| id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| mtu | 1500 |
| name | floating |
| port_security_enabled | True |
| project_id | fd45b94bf13f4836b84b325acaa84869 |
| project_id | fd45b94bf13f4836b84b325acaa84869 |
| provider:network_type | flat |
| provider:physical_network | floating |
| provider:segmentation_id | None |
| revision_number | 13 |
| router:external | External |
| shared | False |
| status | ACTIVE |
| subnets | bff1b72f-1ca4-4220-91c6-8b155ce31afd |
| tags | [] |
| updated_at | 2017-08-17T18:06:20Z |
+---------------------------+--------------------------------------+
== List external networks ==
$ neutron net-external-list
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a | floating | bff1b72f-1ca4-4220-91c6-8b155ce31afd 192.168.126.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
== Create a network ==
$ openstack network create --provider-network-type gre --internal --enable --no-share network1
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-08-17T20:31:20Z |
| description | |
| headers | |
| id | abe12dc1-c33f-4fa3-b5aa-64da07bfe84e |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1458 |
| name | network1 |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| provider:network_type | gre |
| provider:physical_network | None |
| provider:segmentation_id | 85 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2017-08-17T20:31:20Z |
+---------------------------+--------------------------------------+
== Change network options ==
$ openstack network show network1|grep share -i
| shared | False |
$ openstack network set --share network1
$ openstack network show network1|grep share -i
| shared | True |
$ openstack network set --no-share network1
== Delete a network ==
$ openstack network delete network1
=== subnet ===
== List subnets ==
$ openstack subnet list
+--------------------------------------+----------+------------------+--------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------+------------------+--------------------------------------------------------+
| bf3f1422-266b-4304-938c-22fe735aabb8 | fixed | 192.168.123.0/24 | {"start": "192.168.123.2", "end": "192.168.123.254"} |
| bff1b72f-1ca4-4220-91c6-8b155ce31afd | floating | 192.168.126.0/24 | {"start": "192.168.126.129", "end": "192.168.126.254"} |
+--------------------------------------+----------+------------------+--------------------------------------------------------+
$ openstack subnet show floating
+-------------------+--------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.126.129", "end": "192.168.126.254"} |
| cidr | 192.168.126.0/24 |
| created_at | 2017-08-17T17:25:44Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.126.1 |
| host_routes | |
| id | bff1b72f-1ca4-4220-91c6-8b155ce31afd |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | floating |
| network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| project_id | fd45b94bf13f4836b84b325acaa84869 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tenant_id | fd45b94bf13f4836b84b325acaa84869 |
| updated_at | 2017-08-17T17:25:44Z |
+-------------------+--------------------------------------------------------+
== Create a subnet ==
$ openstack subnet create --network network1 --subnet-range 192.168.101.0/24 --dhcp subnet1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.101.2-192.168.101.254 |
| cidr | 192.168.101.0/24 |
| created_at | 2017-08-17T20:58:01Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.101.1 |
| headers | |
| host_routes | |
| id | cbef554f-fba7-47c1-a9ed-b56849082413 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | subnet1 |
| network_id | abe12dc1-c33f-4fa3-b5aa-64da07bfe84e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 2 |
| service_types | [] |
| subnetpool_id | None |
| updated_at | 2017-08-17T20:58:01Z |
+-------------------+--------------------------------------+
== Delete a subnet ==
$ openstack subnet delete mysubnet
=== subnet pool ===
== List subnet pools ==
$ openstack subnet pool list
+--------------------------------------+-------------+-------------+
| ID | Name | Prefixes |
+--------------------------------------+-------------+-------------+
| 0af0ea56-9568-43e2-a6a5-40ee23341af1 | subnetpool1 | 10.0.0.0/16 |
+--------------------------------------+-------------+-------------+
$ openstack subnet pool show subnetpool1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | None |
| created_at | 2017-08-17T22:04:22Z |
| default_prefixlen | 24 |
| default_quota | None |
| description | |
| id | 0af0ea56-9568-43e2-a6a5-40ee23341af1 |
| ip_version | 4 |
| is_default | False |
| max_prefixlen | 32 |
| min_prefixlen | 8 |
| name | subnetpool1 |
| prefixes | 10.0.0.0/16 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| shared | True |
| updated_at | 2017-08-17T22:04:22Z |
+-------------------+--------------------------------------+
== Create a subnet pool ==
$ openstack subnet pool create --share --pool-prefix 10.0.0.0/16 --default-prefix-length 24 subnetpool1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | None |
| created_at | 2017-08-17T22:04:22Z |
| default_prefixlen | 24 |
| default_quota | None |
| description | |
| headers | |
| id | 0af0ea56-9568-43e2-a6a5-40ee23341af1 |
| ip_version | 4 |
| is_default | False |
| max_prefixlen | 32 |
| min_prefixlen | 8 |
| name | subnetpool1 |
| prefixes | 10.0.0.0/16 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| shared | True |
| updated_at | 2017-08-17T22:04:22Z |
+-------------------+--------------------------------------+
Then assign this pool to the **provider** network:
$ openstack network create provider
$ openstack subnet create --prefix-length 24 --subnet-pool subnetpool1 --network provider provider
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 10.0.0.2-10.0.0.254 |
| cidr | 10.0.0.0/24 |
| created_at | 2017-08-17T22:19:56Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| headers | |
| host_routes | |
| id | da01642b-d0eb-458a-81e9-d7215b82801b |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| prefixlen | 24 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 2 |
| service_types | [] |
| subnetpool_id | 0af0ea56-9568-43e2-a6a5-40ee23341af1 |
| updated_at | 2017-08-17T22:19:56Z |
+-------------------+--------------------------------------+
== Change subnet pool options ==
$ openstack subnet pool set --max-prefix-length 24 subnetpool1
== Delete a subnet pool ==
$ openstack subnet pool delete subnetpool1
=== port ===
== List ports ==
$ openstack port list --device-owner="network:dhcp"
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+
| ID | Name | MAC Address | Fixed IP Addresses |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+
| 98f09bc0-3176-4543-a520-4b0b6b67620f | | fa:16:3e:04:52:5b | ip_address='10.0.0.2', subnet_id='da01642b-d0eb-458a-81e9-d7215b82801b' |
| a0988b01-96c5-47cd-9516-bd6127d7c2ec | | fa:16:3e:71:69:4b | ip_address='192.168.102.2', subnet_id='9753bf09-a6ba-46d5-aa22-55131fd4f0b2' |
| f1451a5d-4596-401c-a9b7-4bbad590faad | | fa:16:3e:8b:dc:33 | ip_address='192.168.123.2', subnet_id='bf3f1422-266b-4304-938c-22fe735aabb8' |
| fbc44f52-0c1d-4d3a-acfc-894f0b3c9c1f | | fa:16:3e:74:16:a8 | ip_address='192.168.101.2', subnet_id='cbef554f-fba7-47c1-a9ed-b56849082413' |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+
== Create a port ==
$ openstack port create --network provider myport
+-----------------------+-------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2017-08-18T12:28:53Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.0.0.4', subnet_id='da01642b-d0eb-458a-81e9-d7215b82801b' |
| headers | |
| id | 6b7e2cbe-8345-46d2-82af-da1356248e41 |
| mac_address | fa:16:3e:b1:68:81 |
| name | myport |
| network_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 5 |
| security_groups | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| status | DOWN |
| updated_at | 2017-08-18T12:28:53Z |
+-----------------------+-------------------------------------------------------------------------+
== Delete a port ==
$ openstack port delete myport
=== lbaas ===
A load balancer is made up of several sub-components: //listener//, //pool//, //member// and //healthmonitor//.
== List load balancers ==
* List the load balancers:
$ neutron lbaas-loadbalancer-list
+--------------------------------------+---------+---------------+---------------------+----------+
| id | name | vip_address | provisioning_status | provider |
+--------------------------------------+---------+---------------+---------------------+----------+
| 17c0767a-aaee-4069-97c7-8234164698c3 | test-lb | 192.168.101.7 | ACTIVE | haproxy |
+--------------------------------------+---------+---------------+---------------------+----------+
* List the listeners:
$ neutron lbaas-listener-list
+--------------------------------------+--------------------------------------+---------------+----------+---------------+----------------+
| id | default_pool_id | name | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+---------------+----------+---------------+----------------+
| e46a1580-2182-4243-8582-1fb25b3836ba | a89316c6-a04b-4174-bbc7-f1b22cbe52e4 | test-lb-https | HTTPS | 443 | True |
| 84c5c830-e4eb-4c4a-9a8f-40bcc2f2896f | 171e17a4-ea60-47a7-a6b5-5b655e959239 | test-lb-http | HTTP | 80 | True |
+--------------------------------------+--------------------------------------+---------------+----------+---------------+----------------+
* List the pools:
$ neutron lbaas-pool-list
+--------------------------------------+--------------------+----------+----------------+
| id | name | protocol | admin_state_up |
+--------------------------------------+--------------------+----------+----------------+
| a89316c6-a04b-4174-bbc7-f1b22cbe52e4 | test-lb-pool-https | HTTPS | True |
| 171e17a4-ea60-47a7-a6b5-5b655e959239 | test-lb-pool-http | HTTP | True |
+--------------------------------------+--------------------+----------+----------------+
* List the members:
$ neutron lbaas-member-list test-lb-pool-http
+--------------------------------------+-----------------------+-----------------+---------------+--------+--------------------------------------+----------------+
| id | name | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+-----------------------+-----------------+---------------+--------+--------------------------------------+----------------+
| 81d91b94-8f14-4d74-a1f4-585c782bb713 | test-lb-http-member-1 | 192.168.101.100 | 80 | 1 | cbef554f-fba7-47c1-a9ed-b56849082413 | True |
| b8d02db4-afa2-4936-8ec5-345a5305926e | test-lb-http-member-2 | 192.168.102.100 | 80 | 1 | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 | True |
+--------------------------------------+-----------------------+-----------------+---------------+--------+--------------------------------------+----------------+
$ neutron lbaas-member-list test-lb-pool-https
+--------------------------------------+------------------------+-----------------+---------------+--------+--------------------------------------+----------------+
| id | name | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------------------------+-----------------+---------------+--------+--------------------------------------+----------------+
| b78fbce5-0589-429c-b02d-6bdbbd6299cb | test-lb-https-member-1 | 192.168.101.100 | 443 | 1 | cbef554f-fba7-47c1-a9ed-b56849082413 | True |
| 514200b3-598e-48ad-be5f-13054ff7a72d | test-lb-https-member-2 | 192.168.102.100 | 443 | 1 | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 | True |
+--------------------------------------+------------------------+-----------------+---------------+--------+--------------------------------------+----------------+
* List the healthmonitors:
$ neutron lbaas-healthmonitor-list
+--------------------------------------+------------------------+-------+----------------+
| id | name | type | admin_state_up |
+--------------------------------------+------------------------+-------+----------------+
| 7438832f-b9ba-4df5-9b97-76e8aba6898f | test-lb-http-monitor | HTTP | True |
| c30554c8-b60a-4c67-a407-6fab811e763b | test-lb-https-monitors | HTTPS | True |
+--------------------------------------+------------------------+-------+----------------+
== Create a load balancer ==
* Create the load balancer:
$ neutron lbaas-loadbalancer-create --name test-lb subnet1
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| id | 17c0767a-aaee-4069-97c7-8234164698c3 |
| listeners | |
| name | test-lb |
| operating_status | OFFLINE |
| pools | |
| provider | haproxy |
| provisioning_status | PENDING_CREATE |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| vip_address | 192.168.101.7 |
| vip_port_id | 5a438cd3-c2bf-41d6-b963-c26da37caca2 |
| vip_subnet_id | cbef554f-fba7-47c1-a9ed-b56849082413 |
+---------------------+--------------------------------------+
* Create the HTTP/HTTPS listeners:
$ neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb --protocol HTTP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | |
| default_tls_container_ref | |
| description | |
| id | 84c5c830-e4eb-4c4a-9a8f-40bcc2f2896f |
| loadbalancers | {"id": "17c0767a-aaee-4069-97c7-8234164698c3"} |
| name | test-lb-http |
| protocol | HTTP |
| protocol_port | 80 |
| sni_container_refs | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+---------------------------+------------------------------------------------+
$ neutron lbaas-listener-create --name test-lb-https --loadbalancer test-lb --protocol HTTPS --protocol-port 443
Created a new listener:
+---------------------------+------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | |
| default_tls_container_ref | |
| description | |
| id | e46a1580-2182-4243-8582-1fb25b3836ba |
| loadbalancers | {"id": "17c0767a-aaee-4069-97c7-8234164698c3"} |
| name | test-lb-https |
| protocol | HTTPS |
| protocol_port | 443 |
| sni_container_refs | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+---------------------------+------------------------------------------------+
* Create the pools (choose one of the 3 algorithms //ROUND_ROBIN//, //LEAST_CONNECTIONS// or //SOURCE_IP//):
$ neutron lbaas-pool-create --name test-lb-pool-http --lb-algorithm ROUND_ROBIN --listener test-lb-http --protocol HTTP
Created a new pool:
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| healthmonitor_id | |
| id | 171e17a4-ea60-47a7-a6b5-5b655e959239 |
| lb_algorithm | ROUND_ROBIN |
| listeners | {"id": "84c5c830-e4eb-4c4a-9a8f-40bcc2f2896f"} |
| loadbalancers | {"id": "17c0767a-aaee-4069-97c7-8234164698c3"} |
| members | |
| name | test-lb-pool-http |
| protocol | HTTP |
| session_persistence | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+---------------------+------------------------------------------------+
$ neutron lbaas-pool-create --name test-lb-pool-https --lb-algorithm ROUND_ROBIN --listener test-lb-https --protocol HTTPS
Created a new pool:
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| healthmonitor_id | |
| id | a89316c6-a04b-4174-bbc7-f1b22cbe52e4 |
| lb_algorithm | ROUND_ROBIN |
| listeners | {"id": "e46a1580-2182-4243-8582-1fb25b3836ba"} |
| loadbalancers | {"id": "17c0767a-aaee-4069-97c7-8234164698c3"} |
| members | |
| name | test-lb-pool-https |
| protocol | HTTPS |
| session_persistence | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+---------------------+------------------------------------------------+
* Create the //members//:
$ neutron lbaas-member-create --name test-lb-http-member-1 --subnet subnet1 --address 192.168.101.100 --protocol-port 80 test-lb-pool-http
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 192.168.101.100 |
| admin_state_up | True |
| id | 81d91b94-8f14-4d74-a1f4-585c782bb713 |
| name | test-lb-http-member-1 |
| protocol_port | 80 |
| subnet_id | cbef554f-fba7-47c1-a9ed-b56849082413 |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| weight | 1 |
+----------------+--------------------------------------+
$ neutron lbaas-member-create --name test-lb-http-member-2 --subnet subnet2 --address 192.168.102.100 --protocol-port 80 test-lb-pool-http
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 192.168.102.100 |
| admin_state_up | True |
| id | b8d02db4-afa2-4936-8ec5-345a5305926e |
| name | test-lb-http-member-2 |
| protocol_port | 80 |
| subnet_id | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| weight | 1 |
+----------------+--------------------------------------+
$ neutron lbaas-member-create --name test-lb-https-member-1 --subnet subnet1 --address 192.168.101.100 --protocol-port 443 test-lb-pool-https
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 192.168.101.100 |
| admin_state_up | True |
| id | b78fbce5-0589-429c-b02d-6bdbbd6299cb |
| name | test-lb-https-member-1 |
| protocol_port | 443 |
| subnet_id | cbef554f-fba7-47c1-a9ed-b56849082413 |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| weight | 1 |
+----------------+--------------------------------------+
$ neutron lbaas-member-create --name test-lb-https-member-2 --subnet subnet2 --address 192.168.102.100 --protocol-port 443 test-lb-pool-https
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 192.168.102.100 |
| admin_state_up | True |
| id | 514200b3-598e-48ad-be5f-13054ff7a72d |
| name | test-lb-https-member-2 |
| protocol_port | 443 |
| subnet_id | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| weight | 1 |
+----------------+--------------------------------------+
* Create the healthmonitors:
$ neutron lbaas-healthmonitor-create --name test-lb-http-monitor --delay 5 --max-retries 2 --timeout 10 --type HTTP --pool test-lb-pool-http
Created a new healthmonitor:
+------------------+------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------+
| admin_state_up | True |
| delay | 5 |
| expected_codes | 200 |
| http_method | GET |
| id | 7438832f-b9ba-4df5-9b97-76e8aba6898f |
| max_retries | 2 |
| max_retries_down | 3 |
| name | test-lb-http-monitor |
| pools | {"id": "171e17a4-ea60-47a7-a6b5-5b655e959239"} |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| timeout | 10 |
| type | HTTP |
| url_path | / |
+------------------+------------------------------------------------+
$ neutron lbaas-healthmonitor-create --name test-lb-https-monitor --delay 5 --max-retries 2 --timeout 10 --type HTTPS --pool test-lb-pool-https
Created a new healthmonitor:
+------------------+------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------+
| admin_state_up | True |
| delay | 5 |
| expected_codes | 200 |
| http_method | GET |
| id | b18d5ea8-b945-4e9c-bdb8-2763a1dda1d5 |
| max_retries | 2 |
| max_retries_down | 3 |
| name | test-lb-https-monitor |
| pools | {"id": "a89316c6-a04b-4174-bbc7-f1b22cbe52e4"} |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| timeout | 10 |
| type | HTTPS |
| url_path | / |
+------------------+------------------------------------------------+
* You can [[#creer_une_floating_ip|add a floating IP]] to your load balancer.
* To reach it, you must [[#creer_un_security_group|create a security group]] and [[#ajouter_un_security_group_a_un_port|assign your LBaaS port to that security group]].
== Delete a load balancer ==
A load balancer must be deleted in the reverse order of its creation, i.e. //healthmonitor//, //member//, //pool//, //listener// and finally the load balancer itself.
* healthmonitor:
$ neutron lbaas-healthmonitor-delete test-lb-https-monitor test-lb-http-monitor
Deleted lbaas_healthmonitor(s): test-lb-https-monitor, test-lb-http-monitor
* member:
$ neutron lbaas-member-delete test-lb-http-member-1 test-lb-http-member-2 test-lb-pool-http
Deleted lbaas_member(s): test-lb-http-member-1, test-lb-http-member-2
$ neutron lbaas-member-delete test-lb-https-member-1 test-lb-https-member-2 test-lb-pool-https
Deleted lbaas_member(s): test-lb-https-member-1, test-lb-https-member-2
* pool:
$ neutron lbaas-pool-delete test-lb-pool-http test-lb-pool-https
Deleted lbaas_pool(s): test-lb-pool-http, test-lb-pool-https
* listener:
$ neutron lbaas-listener-delete test-lb-http test-lb-https
Deleted listener(s): test-lb-http, test-lb-https
* load balancer:
$ neutron lbaas-loadbalancer-delete test-lb
Deleted loadbalancer(s): test-lb
=== floating ip ===
== List floating IPs ==
$ openstack floating ip list
+--------------------------------------+---------------------+------------------+------+
| ID | Floating IP Address | Fixed IP Address | Port |
+--------------------------------------+---------------------+------------------+------+
| 6b32caf6-e216-4167-b00f-1bb95f8a69f2 | 192.168.126.129 | None | None |
+--------------------------------------+---------------------+------------------+------+
$ openstack floating ip show 192.168.126.129
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-18T14:54:55Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.126.129 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| id | 6b32caf6-e216-4167-b00f-1bb95f8a69f2 |
| port_id | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-18T14:54:55Z |
+---------------------+--------------------------------------+
== Create a floating IP ==
To create a floating IP, the server must be connected to an external network.
$ openstack floating ip create --floating-ip-address 192.168.126.129 floating
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-18T14:54:55Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.126.129 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| headers | |
| id | 6b32caf6-e216-4167-b00f-1bb95f8a69f2 |
| port_id | None |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-18T14:54:55Z |
+---------------------+--------------------------------------+
== Associate a floating IP ==
* Associate a floating IP with a server:
$ openstack server add floating ip server1 192.168.126.129
The port field is now populated:
$ openstack floating ip show 192.168.126.129
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-18T14:54:55Z |
| description | |
| fixed_ip_address | 192.168.101.100 |
| floating_ip_address | 192.168.126.129 |
| floating_network_id | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| id | 6b32caf6-e216-4167-b00f-1bb95f8a69f2 |
| port_id | 43836e07-e723-4114-9437-097c74618f96 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 2 |
| router_id | 49c78170-1a0a-447b-b774-e6d00b91e6b3 |
| status | ACTIVE |
| updated_at | 2017-08-18T14:57:55Z |
+---------------------+--------------------------------------+
$ openstack server show server1
+--------------------------------------+----------------------------------------------------------+
| Field | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | d52-54-00-2e-69-ac |
| OS-EXT-SRV-ATTR:hypervisor_hostname | d52-54-00-2e-69-ac.cloud.velannes.com |
| OS-EXT-SRV-ATTR:instance_name | instance-0000008f |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2017-08-17T21:00:52.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | network1=192.168.101.100, 192.168.126.129 |
| config_drive | |
| created | 2017-08-17T21:00:43Z |
| flavor | m1.tiny (1) |
| hostId | a56508f8f885320b8b764689e9a9ef75e71e0afbc396c82302cfbd23 |
| id | 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | server1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2017-08-18T14:31:27Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+----------------------------------------------------------+
* Associate a floating IP with a port:
neutron floatingip-associate 90f3a01b-4982-4c60-892b-8b783db96546 7f25c1b9-fb93-4f89-bc45-31dad3bb96ef
Associated floating IP 90f3a01b-4982-4c60-892b-8b783db96546
== Remove a floating IP from a server (port) ==
$ openstack server remove floating ip server1 192.168.126.129
== Delete a floating IP ==
$ openstack floating ip delete 192.168.126.129
=== security group ===
Security groups are applied to [[#lister_les_ports|ports]]! A security group can also be applied to an instance, but that amounts to applying it to the port the instance is attached to.
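For example, a security group can be attached to or detached from an instance directly with the unified client, which in practice updates the instance's port (a sketch reusing the server and group names from this page):
$ openstack server add security group server1 gigix
$ openstack server remove security group server1 gigix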
== List security groups ==
* List the security groups:
$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 0ca4b62f-e07e-41a3-9279-dedcecd56610 | gigix | | 8ee2aae87d9a437c86cb578a677aee7e |
| 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 | default | Default security group | f2f37f75a5bc48ceb8703a373ea2eb14 |
| 99739027-b9ec-4ff6-a280-edb177952cc9 | default | Default security group | 8ee2aae87d9a437c86cb578a677aee7e |
| b9f08212-6ba3-4823-a48c-5c5a9c1561d1 | default | Default security group | fd45b94bf13f4836b84b325acaa84869 |
+--------------------------------------+---------+------------------------+----------------------------------+
$ openstack security group show 0ca4b62f-e07e-41a3-9279-dedcecd56610
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2017-08-14T21:53:42Z |
| description | |
| id | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| name | gigix |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 5 |
| rules | created_at='2017-08-14T21:53:42Z', direction='egress', ethertype='IPv4', id='b45f3e7f-9fee-47b0-b26b-a2f3d077a453', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', revision_number='1', updated_at='2017-08-14T21:53:42Z' |
| | created_at='2017-08-14T21:54:52Z', direction='ingress', ethertype='IPv4', id='91ba9996-d753-485b-bbae-1df7a42d64a6', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', protocol='icmp', remote_ip_prefix='0.0.0.0/0', revision_number='1', updated_at='2017-08-14T21:54:52Z' |
| | created_at='2017-08-14T21:55:21Z', direction='egress', ethertype='IPv4', id='bb85e792-4f46-4e08-8efe-64b6c00bb541', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', protocol='icmp', remote_ip_prefix='0.0.0.0/0', revision_number='1', updated_at='2017-08-14T21:55:21Z' |
| | created_at='2017-08-14T21:56:20Z', direction='ingress', ethertype='IPv4', id='31caf7e0-6b9c-4cae-87e4-7240ebd60ad0', port_range_max='22', |
| | port_range_min='22', project_id='8ee2aae87d9a437c86cb578a677aee7e', protocol='tcp', remote_ip_prefix='0.0.0.0/0', revision_number='1', |
| | updated_at='2017-08-14T21:56:20Z' |
| updated_at | 2017-08-14T21:56:20Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
* List the rules:
$ openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| 014e506a-954b-4796-b573-cd893827de73 | icmp | 0.0.0.0/0 | | None | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| 2c460e6d-b82c-40e1-85cf-db04b5703a80 | None | None | | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| 31caf7e0-6b9c-4cae-87e4-7240ebd60ad0 | tcp | 0.0.0.0/0 | 22:22 | None | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| 38752c8e-ac7d-4de5-be91-efa8802b31a9 | icmp | 0.0.0.0/0 | | None | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| 3e400148-5769-434d-9d38-d6f9f5b2480a | tcp | 0.0.0.0/0 | 22:22 | None | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| 59cb9100-5442-40b8-82b2-477c6a20b3b2 | None | None | | None | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| 65f20e3e-d1a4-41d8-9a9c-2325567b2593 | icmp | 0.0.0.0/0 | | None | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| 671a7a01-47b3-48a2-ae0c-9a56e40b3c9a | tcp | 0.0.0.0/0 | 22:22 | None | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| 71d323c0-b096-4ac6-af03-8328007e0986 | tcp | 0.0.0.0/0 | 80:80 | None | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| 75dcdf70-52a9-4b02-a058-deefe51666fc | None | None | | 99739027-b9ec-4ff6-a280-edb177952cc9 | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| 79b6215f-fe6b-4d09-ae6b-f154a31f3d3e | None | None | | None | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| 7f15c35e-f7b7-44f5-92bc-51adc8cf54a3 | tcp | 0.0.0.0/0 | 22:22 | None | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| 84c9ea28-9275-4b0c-b20b-a24be6437988 | None | None | | None | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| 8dc9829d-8bff-4371-ac6c-e7d9a040306b | None | None | | None | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| 8efb6acc-ab66-48d6-af2f-88722ee5b5fc | icmp | 0.0.0.0/0 | | None | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| 91ba9996-d753-485b-bbae-1df7a42d64a6 | icmp | 0.0.0.0/0 | | None | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| 9e9d9b6a-8e50-4188-b754-97c87c2c38b4 | None | None | | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| a57cc4d7-b40a-410b-9f90-c215984a5d88 | None | None | | 99739027-b9ec-4ff6-a280-edb177952cc9 | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| b16a0283-11a0-4cd4-bfd2-039b4fc52f52 | None | None | | None | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| b45f3e7f-9fee-47b0-b26b-a2f3d077a453 | None | None | | None | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| b62e2cf5-de84-4fdf-aaf8-4893003f190e | None | None | | None | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| bb85e792-4f46-4e08-8efe-64b6c00bb541 | icmp | 0.0.0.0/0 | | None | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| bd2b795b-761c-47d5-b2a4-073fb2a315bc | icmp | 0.0.0.0/0 | | None | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
| bef0b0fb-a7a7-4ecd-87c0-112d286df0a9 | None | None | | None | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| c2021007-8560-49ac-9b06-40cc6ceef23a | None | None | | None | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| cd94510b-d795-41ff-85ab-729ebd614cd0 | None | None | | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 | 6f0f21fc-9ee5-49f8-aef9-1f62acb6c9c1 |
| e564a409-b41c-445a-ad18-ac3e51693837 | tcp | 0.0.0.0/0 | 443:443 | None | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| fa458a30-0fdf-4981-a1b5-305501dced49 | icmp | 0.0.0.0/0 | | None | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| fb626bbd-cefd-4955-a649-562e6491256f | None | None | | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 | b9f08212-6ba3-4823-a48c-5c5a9c1561d1 |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
$ openstack security group rule show 7f15c35e-f7b7-44f5-92bc-51adc8cf54a3
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-08-12T17:48:28Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 7f15c35e-f7b7-44f5-92bc-51adc8cf54a3 |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 99739027-b9ec-4ff6-a280-edb177952cc9 |
| updated_at | 2017-08-12T17:48:28Z |
+-------------------+--------------------------------------+
== Create a security group ==
* Create a security group:
$ openstack security group create http_https-in
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2017-08-18T16:55:49Z |
| description | http_https-in |
| headers | |
| id | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| name | http_https-in |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 1 |
| rules | created_at='2017-08-18T16:55:49Z', direction='egress', ethertype='IPv4', id='b62e2cf5-de84-4fdf-aaf8-4893003f190e', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', revision_number='1', updated_at='2017-08-18T16:55:49Z' |
| | created_at='2017-08-18T16:55:49Z', direction='egress', ethertype='IPv6', id='79b6215f-fe6b-4d09-ae6b-f154a31f3d3e', |
| | project_id='8ee2aae87d9a437c86cb578a677aee7e', revision_number='1', updated_at='2017-08-18T16:55:49Z' |
| updated_at | 2017-08-18T16:55:49Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
* Create the rules:
$ openstack security group rule create --ingress --protocol tcp --src-ip 0.0.0.0/0 --dst-port 80:80 http_https-in
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-08-18T17:01:10Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 71d323c0-b096-4ac6-af03-8328007e0986 |
| port_range_max | 80 |
| port_range_min | 80 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| updated_at | 2017-08-18T17:01:10Z |
+-------------------+--------------------------------------+
$ openstack security group rule create --ingress --protocol tcp --src-ip 0.0.0.0/0 --dst-port 443:443 http_https-in
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-08-18T17:01:40Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | e564a409-b41c-445a-ad18-ac3e51693837 |
| port_range_max | 443 |
| port_range_min | 443 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 2b40f71b-8eb7-454d-92ef-95684d4bfacb |
| updated_at | 2017-08-18T17:01:40Z |
+-------------------+--------------------------------------+
== Add a security group to a port ==
$ neutron port-update --security-group gigix 5a438cd3-c2bf-41d6-b963-c26da37caca2
Updated port: 5a438cd3-c2bf-41d6-b963-c26da37caca2
$ openstack port list | grep 5a438cd3-c2bf-41d6-b963-c26da37caca2
| 5a438cd3-c2bf-41d6-b963-c26da37caca2 | loadbalancer-8e3a90cd-5443-428b-8886-1b6ec279bc0b | fa:16:3e:73:68:f8 | ip_address='192.168.101.7', subnet_id='cbef554f-fba7-47c1-a9ed-b56849082413' |
$ openstack port show 5a438cd3-c2bf-41d6-b963-c26da37caca2
+-----------------------+------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up | DOWN |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2017-08-18T16:24:14Z |
| description | None |
| device_id | 8e3a90cd-5443-428b-8886-1b6ec279bc0b |
| device_owner | neutron:LOADBALANCERV2 |
| extra_dhcp_opts | |
| fixed_ips | ip_address='192.168.101.7', subnet_id='cbef554f-fba7-47c1-a9ed-b56849082413' |
| id | 5a438cd3-c2bf-41d6-b963-c26da37caca2 |
| mac_address | fa:16:3e:73:68:f8 |
| name | loadbalancer-8e3a90cd-5443-428b-8886-1b6ec279bc0b |
| network_id | abe12dc1-c33f-4fa3-b5aa-64da07bfe84e |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| revision_number | 7 |
| security_groups | 0ca4b62f-e07e-41a3-9279-dedcecd56610 |
| status | DOWN |
| updated_at | 2017-08-18T16:25:23Z |
+-----------------------+------------------------------------------------------------------------------+
== Remove a security group from a port ==
$ neutron port-update --no-security-groups 5a438cd3-c2bf-41d6-b963-c26da37caca2
Updated port: 5a438cd3-c2bf-41d6-b963-c26da37caca2
== Delete a security group ==
* Delete a rule:
$ openstack security group rule delete e147ee8d-027c-4971-a90e-6584ff3e27bb
* Delete the security group:
$ openstack security group delete http_https-in
=== Network QoS ===
A QoS policy is applied to [[#lister_les_ports|ports]].
Not tested, as I could not get the QoS policy declared (OpenStack version too old?). The examples below follow the official documentation.
There are 3 types of rules that can be applied to a policy:
* bandwidth-limit
* minimum-bandwidth
* dscp-marking
== List network QoS ==
* Show a policy:
$ openstack network qos policy show bw-limiter
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | |
| id | 5df855e9-a833-49a3-9c82-c0839a5f103f |
| is_default | True |
| name | qos1 |
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
| rules | [] |
| shared | False |
+-------------+--------------------------------------+
* Show a rule:
$ openstack network qos rule show bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| direction | ingress |
| id | 92ceb52f-170f-49d0-9528-976e2fee2d6f |
| max_burst_kbps | 200 |
| max_kbps | 2000 |
+----------------+--------------------------------------+
== Create a network QoS ==
* Create a policy:
$ openstack network qos policy create bw-limiter
Created a new policy:
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | |
| id | 5df855e9-a833-49a3-9c82-c0839a5f103f |
| is_default | False |
| name | qos1 |
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
| rules | [] |
| shared | False |
+-------------+--------------------------------------+
* Add a rule to the policy:
$ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 300 --egress bw-limiter
Created a new bandwidth_limit_rule:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| direction | egress |
| id | 92ceb52f-170f-49d0-9528-976e2fee2d6f |
| max_burst_kbps | 300 |
| max_kbps | 3000 |
+----------------+--------------------------------------+
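The other two rule types can be added to the same policy in the same way (not tested here either; sketched from the client options):
$ openstack network qos rule create --type minimum-bandwidth --min-kbps 1000 --egress bw-limiter
$ openstack network qos rule create --type dscp-marking --dscp-mark 26 bw-limiter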
* Apply the policy (and its rules) to a port:
$ openstack port set --qos-policy bw-limiter 88101e57-76fa-4d12-b0e0-4fc7634b874a
Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a
* The policy can also be applied to a network:
$ openstack network set --qos-policy bw-limiter private
Updated network: private
* Each project has a default policy. To change its default policy:
$ openstack network qos policy set --default bw-limiter
Created a new policy:
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | |
| id | 5df855e9-a833-49a3-9c82-c0839a5f103f |
| is_default | False |
| name | qos1 |
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
| rules | [] |
| shared | False |
+-------------+--------------------------------------+
== Delete a network QoS ==
* Remove the policy from a port:
$ openstack port unset --no-qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a
Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a
* Delete a rule:
$ openstack network qos rule delete bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f
Deleted rule: 92ceb52f-170f-49d0-9528-976e2fee2d6f
* Delete the policy:
$ openstack network qos policy delete bw-limiter
=== rbac ===
RBAC entries make it possible to share QoS policies or networks as shared or external resources.
== List RBAC entries ==
$ openstack network rbac list
+--------------------------------------+-------------+--------------------------------------+
| ID | Object Type | Object ID |
+--------------------------------------+-------------+--------------------------------------+
| 763464f6-6b2b-48f7-93aa-816ded3f401d | network | 04d86f38-1ecf-4c1a-a215-398a3ca2b661 |
| deedef6a-a7cd-4273-8c5c-c30cf8f5089f | network | 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a |
| 38ede9fe-7f33-40f7-8b02-c2ed8523c38d | network | 0310f1de-661b-4b52-91b6-432ea61e4ced |
+--------------------------------------+-------------+--------------------------------------+
$ openstack network rbac show 38ede9fe-7f33-40f7-8b02-c2ed8523c38d
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| action | access_as_shared |
| id | 38ede9fe-7f33-40f7-8b02-c2ed8523c38d |
| object_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| object_type | network |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| target_project_id | db7ebbd1281d4769b8ffdb3621410575 |
+-------------------+--------------------------------------+
== Create an RBAC entry ==
$ openstack network rbac create --target-project gigix --action access_as_shared --type network provider
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| action | access_as_shared |
| headers | |
| id | 38ede9fe-7f33-40f7-8b02-c2ed8523c38d |
| object_id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| object_type | network |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| target_project_id | db7ebbd1281d4769b8ffdb3621410575 |
+-------------------+--------------------------------------+
You can also share a [[#network_qos|QoS policy]]:
$ openstack network rbac create --target-project gigix --action access_as_shared --type qos_policy bw-limiter
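A network can likewise be exposed as an external network to another project with the **access_as_external** action (a sketch, not tested here):
$ openstack network rbac create --target-project gigix --action access_as_external --type network provider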
== Delete an RBAC entry ==
You can only delete the RBAC entry once the target project no longer has any port attached to the shared resource.
$ openstack network rbac delete 38ede9fe-7f33-40f7-8b02-c2ed8523c38d
=== firewall ===
Unlike security groups, which place their rules on the network ports, the firewall (FWaaS) places its rules on the router ports on the network node.
A firewall is made up of 3 parts:
* rule
* policy
* firewall (association of rules and policy)
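On the network node, the resulting rules can be inspected inside the router's namespace (a sketch, assuming the standard ''qrouter-<router id>'' namespace created by the L3 agent; the ID is the router used in the examples below):
$ ip netns exec qrouter-49c78170-1a0a-447b-b774-e6d00b91e6b3 iptables -L -n -v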
== List firewalls ==
* List the rules (output of ''neutron firewall-rule-list''):
+--------------------------------------+----------+--------------------------------------+----------------------------+---------+
| id | name | firewall_policy_id | summary | enabled |
+--------------------------------------+----------+--------------------------------------+----------------------------+---------+
| dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec | ssh-deny | 36372288-575e-426a-ab9d-693fcbf13d36 | TCP, | True |
| | | | source: none(none), | |
| | | | dest: none(22), | |
| | | | deny | |
+--------------------------------------+----------+--------------------------------------+----------------------------+---------+
* Show a rule:
$ neutron firewall-rule-show ssh-deny
+------------------------+--------------------------------------+
| Field | Value |
+------------------------+--------------------------------------+
| action | deny |
| description | |
| destination_ip_address | |
| destination_port | 22 |
| enabled | True |
| firewall_policy_id | 36372288-575e-426a-ab9d-693fcbf13d36 |
| id | dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec |
| ip_version | 4 |
| name | ssh-deny |
| position | 1 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | tcp |
| shared | False |
| source_ip_address | |
| source_port | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+------------------------+--------------------------------------+
* List the policies:
$ neutron firewall-policy-list
+--------------------------------------+-------------+----------------------------------------+
| id | name | firewall_rules |
+--------------------------------------+-------------+----------------------------------------+
| 36372288-575e-426a-ab9d-693fcbf13d36 | deny-policy | [dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec] |
+--------------------------------------+-------------+----------------------------------------+
$ neutron firewall-policy-show deny-policy
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| audited | False |
| description | |
| firewall_rules | dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec |
| id | 36372288-575e-426a-ab9d-693fcbf13d36 |
| name | deny-policy |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| shared | False |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+----------------+--------------------------------------+
* List the firewalls:
$ neutron firewall-list
+--------------------------------------+---------------+--------------------------------------+
| id | name | firewall_policy_id |
+--------------------------------------+---------------+--------------------------------------+
| cb485e66-dd33-46ed-9165-1a867bc3b4b8 | deny-firewall | 36372288-575e-426a-ab9d-693fcbf13d36 |
+--------------------------------------+---------------+--------------------------------------+
$ neutron firewall-show deny-firewall
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| firewall_policy_id | 36372288-575e-426a-ab9d-693fcbf13d36 |
| id | cb485e66-dd33-46ed-9165-1a867bc3b4b8 |
| name | deny-firewall |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| router_ids | 49c78170-1a0a-447b-b774-e6d00b91e6b3 |
| status | ACTIVE |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+--------------------+--------------------------------------+
== Create a firewall ==
* Create the rules:
$ neutron firewall-rule-create --protocol tcp --destination-port 22 --action deny --name ssh-deny
Created a new firewall_rule:
+------------------------+--------------------------------------+
| Field | Value |
+------------------------+--------------------------------------+
| action | deny |
| description | |
| destination_ip_address | |
| destination_port | 22 |
| enabled | True |
| firewall_policy_id | |
| id | dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec |
| ip_version | 4 |
| name | ssh-deny |
| position | |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| protocol | tcp |
| shared | False |
| source_ip_address | |
| source_port | |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+------------------------+--------------------------------------+
* Create the policy:
$ neutron firewall-policy-create --firewall-rules ssh-deny deny-policy
Created a new firewall_policy:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| audited | False |
| description | |
| firewall_rules | dc7aaffe-9a8a-4dcf-bfcb-f34a3ff24eec |
| id | 36372288-575e-426a-ab9d-693fcbf13d36 |
| name | deny-policy |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| shared | False |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+----------------+--------------------------------------+
* Create the firewall:
$ neutron firewall-create --name deny-firewall --router router1 deny-policy
Created a new firewall:
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| firewall_policy_id | 36372288-575e-426a-ab9d-693fcbf13d36 |
| id | cb485e66-dd33-46ed-9165-1a867bc3b4b8 |
| name | deny-firewall |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| router_ids | 49c78170-1a0a-447b-b774-e6d00b91e6b3 |
| status | CREATED |
| tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
+--------------------+--------------------------------------+
== Delete a firewall ==
The version tested is buggy when deleting the firewall through the web interface!
I had to edit the database by hand to be able to delete the firewall:
* Log in as the **postgres** user:
$ su - postgres
* Connect to the **neutron** database:
$ psql neutron
* Update the status column of your firewall to **ACTIVE**:
neutron=# update firewalls SET status='ACTIVE' where name='deny-firewall';
UPDATE 1
Then follow the steps below.
* Remove the associated router:
$ neutron firewall-update --no-routers deny-firewall
Updated firewall: deny-firewall
$ neutron firewall-delete deny-firewall
Deleted firewall(s): deny-firewall
$ neutron firewall-policy-delete deny-policy
Deleted firewall_policy(s): deny-policy
$ neutron firewall-rule-delete ssh-deny
Deleted firewall_rule(s): ssh-deny
=== trunk ===
* https://docs.openstack.org/neutron/pike/admin/config-trunking.html
A trunk makes it possible to connect an instance to two networks through a single network interface (port).
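A minimal sketch based on the documentation linked above (the port and trunk names are hypothetical): create a trunk on a parent port, then add the second network as a VLAN sub-port that the guest reaches through a tagged interface:
# parent-port, sub-port and trunk1 are hypothetical names; network1/network2 are the networks used on this page
$ openstack port create --network network1 parent-port
$ openstack network trunk create --parent-port parent-port trunk1
$ openstack port create --network network2 sub-port
$ openstack network trunk set --subport port=sub-port,segmentation-type=vlan,segmentation-id=100 trunk1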
=== ip ===
== Show available IPs per network ==
$ openstack ip availability list
+--------------------------------------+--------------+-----------+----------+
| Network ID | Network Name | Total IPs | Used IPs |
+--------------------------------------+--------------+-----------+----------+
| abe12dc1-c33f-4fa3-b5aa-64da07bfe84e | network1 | 253 | 4 |
| cb922a51-54ca-4f71-906d-ab08f95dd1bc | network2 | 253 | 4 |
| 0310f1de-661b-4b52-91b6-432ea61e4ced | provider | 253 | 1 |
| 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a | floating | 126 | 3 |
| 04d86f38-1ecf-4c1a-a215-398a3ca2b661 | fixed | 253 | 3 |
+--------------------------------------+--------------+-----------+----------+
$ openstack ip availability show network1
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| network_id | abe12dc1-c33f-4fa3-b5aa-64da07bfe84e |
| network_name | network1 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| subnet_ip_availability | cidr='192.168.101.0/24', ip_version='4', subnet_id='cbef554f-fba7-47c1-a9ed-b56849082413', subnet_name='subnet1', total_ips='253', used_ips='4' |
| total_ips | 253 |
| used_ips | 4 |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
=== tag ===
== List tags ==
$ openstack network show provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2017-08-17T22:13:48Z |
| description | |
| id | 0310f1de-661b-4b52-91b6-432ea61e4ced |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1458 |
| name | provider |
| port_security_enabled | True |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| provider:network_type | gre |
| provider:physical_network | None |
| provider:segmentation_id | 35 |
| revision_number | 5 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | da01642b-d0eb-458a-81e9-d7215b82801b |
| tags | [u'blue', u'red'] |
| updated_at | 2017-08-17T22:19:56Z |
+---------------------------+--------------------------------------+
== Add a tag ==
$ neutron tag-add --resource-type network --resource provider --tag red
$ neutron tag-add --resource-type network --resource provider --tag blue
== Remove tags ==
* Remove one tag:
$ neutron tag-remove --resource-type network --resource provider --tag blue
* Remove all tags:
$ neutron tag-remove --resource-type network --resource provider --all
== Replace a tag ==
$ neutron tag-replace --resource-type network --resource provider --tag blue --tag purple
== List networks by tag ==
* List the networks that have all of the tags:
$ neutron net-list --tags red,blue
+--------------------------------------+----------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+--------------------------------------------------+
| 0310f1de-661b-4b52-91b6-432ea61e4ced | provider | da01642b-d0eb-458a-81e9-d7215b82801b 10.0.0.0/24 |
+--------------------------------------+----------+--------------------------------------------------+
* List the networks that have at least one of the tags:
$ neutron net-list --tags-any red,blue
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| 0310f1de-661b-4b52-91b6-432ea61e4ced | provider | da01642b-d0eb-458a-81e9-d7215b82801b 10.0.0.0/24 |
| abe12dc1-c33f-4fa3-b5aa-64da07bfe84e | network1 | cbef554f-fba7-47c1-a9ed-b56849082413 192.168.101.0/24 |
| cb922a51-54ca-4f71-906d-ab08f95dd1bc | network2 | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 192.168.102.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
* List the networks that do not have all of the tags:
$ neutron net-list --tags-any red,blue --not-tags red,blue
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| abe12dc1-c33f-4fa3-b5aa-64da07bfe84e | network1 | cbef554f-fba7-47c1-a9ed-b56849082413 192.168.101.0/24 |
| cb922a51-54ca-4f71-906d-ab08f95dd1bc | network2 | 9753bf09-a6ba-46d5-aa22-55131fd4f0b2 192.168.102.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
* List the networks that have none of the tags:
$ neutron net-list --not-tags-any red,blue
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| 04d86f38-1ecf-4c1a-a215-398a3ca2b661 | fixed | bf3f1422-266b-4304-938c-22fe735aabb8 192.168.123.0/24 |
| 53dd9c6a-d6c2-4ff2-8848-cee65769bf4a | floating | bff1b72f-1ca4-4220-91c6-8b155ce31afd 192.168.126.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
==== project ====
=== List projects ===
$ openstack project list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| f2f37f75a5bc48ceb8703a373ea2eb14 | admin |
| fd45b94bf13f4836b84b325acaa84869 | service |
| 8ee2aae87d9a437c86cb578a677aee7e | openstack |
+----------------------------------+-----------+
=== Create a project ===
$ openstack project create gigix
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | 766990c1cb1743b0bf287f1444e4b84f |
| is_domain | False |
| name | gigix |
| parent_id | default |
+-------------+----------------------------------+
=== Delete a project ===
$ openstack project delete gigix
==== Image ====
=== List images ===
$ openstack image list
+--------------------------------------+--------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------+--------+
| e4af9d33-02ac-4ec8-94ab-74e1b12a3094 | Debian-9 | active |
| 5e9f3b4d-41ff-4d16-a817-a53dc4379387 | Fedora-26 | active |
| f3b66052-9a8b-48fd-b186-304a140c792a | cirros-0.3.5 | active |
+--------------------------------------+--------------+--------+
=== Create an image from an instance ===
$ openstack server image create --name mydemoimage demo
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2017-08-24T16:10:10Z |
| disk_format | qcow2 |
| file | /v2/images/13bd8486-1055-46fa-8be0-0266bd35c1cf/file |
| id | 13bd8486-1055-46fa-8be0-0266bd35c1cf |
| min_disk | 1 |
| min_ram | 0 |
| name | mydemoimage |
| owner | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | architecture='x86_64', base_image_ref='f3b66052-9a8b-48fd-b186-304a140c792a', bdm_v2='True', block_device_mapping='[{"guest_format": null, |
| | "boot_index": 0, "delete_on_termination": false, "no_device": null, "snapshot_id": "f1bfc92a-f5e4-4754-82c8-2c943ee943d8", "device_name": "/dev/vda", |
| | "disk_bus": "virtio", "image_id": null, "source_type": "snapshot", "tag": null, "device_type": "disk", "volume_id": null, "destination_type": |
| | "volume", "volume_size": 1}]', root_device_name='/dev/vda' |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2017-08-24T16:10:10Z |
| virtual_size | None |
| visibility | private |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
=== Delete an image ===
$ openstack image delete mydemoimage
==== Host ====
=== List hosts ===
Lists the hosts, their services and their availability zone:
$ openstack host list
+--------------------+-------------+----------+
| Host Name | Service | Zone |
+--------------------+-------------+----------+
| d52-54-00-31-d9-e3 | consoleauth | internal |
| d52-54-00-31-d9-e3 | conductor | internal |
| d52-54-00-31-d9-e3 | cert | internal |
| d52-54-00-31-d9-e3 | scheduler | internal |
| d52-54-00-2e-69-ac | compute | nova |
+--------------------+-------------+----------+
==== Compute ====
=== List services ===
$ openstack compute service list
+----+------------------+--------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+--------------------+----------+---------+-------+----------------------------+
| 10 | nova-compute | d52-54-00-2e-69-ac | nova | enabled | up | 2017-08-24T09:18:02.226448 |
| 7 | nova-conductor | d52-54-00-31-d9-e3 | internal | enabled | up | 2017-08-24T09:18:03.843010 |
| 9 | nova-consoleauth | d52-54-00-31-d9-e3 | internal | enabled | up | 2017-08-24T09:18:04.122730 |
| 5 | nova-cert | d52-54-00-31-d9-e3 | internal | enabled | up | 2017-08-24T09:18:04.603632 |
| 6 | nova-scheduler | d52-54-00-31-d9-e3 | internal | enabled | up | 2017-08-24T09:18:04.985201 |
| 11 | nova-compute | d52-54-00-ae-26-d7 | nova | enabled | up | 2017-08-24T09:17:57.708305 |
+----+------------------+--------------------+----------+---------+-------+----------------------------+
=== Hosts ===
== List hosts ==
$ openstack host list
+--------------------+-------------+----------+
| Host Name | Service | Zone |
+--------------------+-------------+----------+
| d52-54-00-2e-69-ac | compute | nova |
| d52-54-00-31-d9-e3 | conductor | internal |
| d52-54-00-31-d9-e3 | consoleauth | internal |
| d52-54-00-31-d9-e3 | cert | internal |
| d52-54-00-31-d9-e3 | scheduler | internal |
| d52-54-00-ae-26-d7 | compute | nova |
+--------------------+-------------+----------+
$ openstack host show d52-54-00-ae-26-d7
+--------------------+----------------------------------+-----+-----------+---------+
| Host | Project | CPU | Memory MB | Disk GB |
+--------------------+----------------------------------+-----+-----------+---------+
| d52-54-00-ae-26-d7 | (total) | 4 | 3950 | 44 |
| d52-54-00-ae-26-d7 | (used_now) | 4 | 2560 | 4 |
| d52-54-00-ae-26-d7 | (used_max) | 4 | 2048 | 4 |
| d52-54-00-ae-26-d7 | 8ee2aae87d9a437c86cb578a677aee7e | 4 | 2048 | 4 |
+--------------------+----------------------------------+-----+-----------+---------+
== Put a host into maintenance ==
Feature not supported during my tests.
$ openstack host set --enable-maintenance d52-54-00-ae-26-d7
== Disable a host ==
Feature not supported during my tests.
$ openstack host set --disable d52-54-00-ae-26-d7
=== Hypervisor ===
== List hypervisors ==
$ openstack hypervisor list
+----+---------------------------------------+
| ID | Hypervisor Hostname |
+----+---------------------------------------+
| 1 | d52-54-00-2e-69-ac.cloud.velannes.com |
| 2 | d52-54-00-ae-26-d7.cloud.velannes.com |
+----+---------------------------------------+
=== availability ===
== List availability zones ==
$ openstack availability zone list
+------+-----------+
| Name | Status |
+------+-----------+
| nova | available |
+------+-----------+
$ nova availability-zone-list
+-----------------------+----------------------------------------+
| Name | Status |
+-----------------------+----------------------------------------+
| internal | available |
| |- d52-54-00-31-d9-e3 | |
| | |- nova-conductor | enabled :-) 2017-08-22T16:49:31.679439 |
| | |- nova-consoleauth | enabled :-) 2017-08-22T16:49:31.671144 |
| | |- nova-cert | enabled :-) 2017-08-22T16:49:31.996396 |
| | |- nova-scheduler | enabled :-) 2017-08-22T16:49:31.967140 |
| nova | available |
| |- d52-54-00-2e-69-ac | |
| | |- nova-compute | enabled :-) 2017-08-22T16:49:31.703992 |
+-----------------------+----------------------------------------+
=== flavor ===
== List flavors ==
$ openstack flavor list
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 491d5fcc-232d-49ea-88b7-58029ad3f519 | gigix2 | 1024 | 10 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 62fb2d35-bdd8-41ec-b916-10509a9a136c | gigix | 1024 | 5 | 0 | 1 | True |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
$ openstack flavor show gigix
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| disk | 5 |
| id | 62fb2d35-bdd8-41ec-b916-10509a9a136c |
| name | gigix |
| os-flavor-access:is_public | True |
| properties | |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
== Create a flavor ==
Resources can also be [[https://docs.openstack.org/nova/pike/admin/flavors.html#extra-specs|limited with extra specs]] (disk bandwidth, network bandwidth, secure boot, CPU topology, etc.); see the sketch after the create example below.
$ openstack flavor create --ram 4096 --vcpus 4 --disk 100 --public myflavor
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 100 |
| id | 45320da3-29dd-45ad-a8dd-9d7bb6819652 |
| name | myflavor |
| os-flavor-access:is_public | True |
| properties | |
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
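Extra specs can then be attached to the flavor created above with ''openstack flavor set --property'' (a sketch; the property keys depend on the hypervisor driver and are only examples):
# illustrative keys: disk read throughput quota and dedicated CPU pinning
$ openstack flavor set --property quota:disk_read_bytes_sec=10240000 myflavor
$ openstack flavor set --property hw:cpu_policy=dedicated myflavor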
== Change the flavor of an instance ==
$ openstack server resize --flavor m1.tiny demo
The resize then has to be confirmed:
$ openstack server resize --confirm demo
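To cancel the resize instead of confirming it (a sketch with the same client conventions):
$ openstack server resize --revert demo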
== Delete a flavor ==
$ openstack flavor delete myflavor
=== server ===
== List instances ==
$ openstack server list
+--------------------------------------+---------+--------+--------------------------+--------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+---------+--------+--------------------------+--------------+
| 63f82a2d-2355-4d76-8846-add1b6cbac4b | server2 | ACTIVE | network2=192.168.102.100 | cirros-0.3.5 |
| 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e | server1 | ACTIVE | network1=192.168.101.100 | cirros-0.3.5 |
+--------------------------------------+---------+--------+--------------------------+--------------+
$ openstack server show server1
+--------------------------------------+----------------------------------------------------------+
| Field | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | d52-54-00-2e-69-ac |
| OS-EXT-SRV-ATTR:hypervisor_hostname | d52-54-00-2e-69-ac.cloud.velannes.com |
| OS-EXT-SRV-ATTR:instance_name | instance-0000008f |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2017-08-17T21:00:52.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | network1=192.168.101.100 |
| config_drive | |
| created | 2017-08-17T21:00:43Z |
| flavor | m1.tiny (1) |
| hostId | a56508f8f885320b8b764689e9a9ef75e71e0afbc396c82302cfbd23 |
| id | 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | server1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2017-08-17T21:00:52Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+----------------------------------------------------------+
== Create an instance ==
* The IP address can be forced:
$ openstack server create --image cirros-0.3.5 --security-group default --flavor m1.tiny --nic net-id=network1,v4-fixed-ip=192.168.101.100 server1
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 7aVppCkZU8kW |
| config_drive | |
| created | 2017-08-17T21:00:43Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | server1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-08-17T21:00:43Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+-----------------------------------------------------+
* You can also force the instance to start on a specific host and boot from a previously created volume named demo:
$ openstack server create --image cirros-0.3.5 --flavor m1.tiny --nic net-id=network1 --security-group default --availability-zone nova:d52-54-00-ae-26-d7 --block-device-mapping vda=demo demo
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | egNbx8RmdyMk |
| config_drive | |
| created | 2017-08-24T09:04:29Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 43972027-dd81-4830-a2d5-bc57997ae374 |
| image | cirros-0.3.5 (f3b66052-9a8b-48fd-b186-304a140c792a) |
| key_name | None |
| name | demo |
| os-extended-volumes:volumes_attached | [{u'id': u'45fc339f-2351-4da8-9229-37a99b3b6703'}] |
| progress | 0 |
| project_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-08-24T09:04:29Z |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------------+-----------------------------------------------------+
== Start an instance ==
$ openstack server start demo
== Stop an instance ==
$ openstack server stop demo
== Reboot an instance ==
$ openstack server reboot demo
If your instance is in an abnormal state (**error** for example), a hard reboot may fix the problem:
$ openstack server reboot --hard demo
== Suspend an instance ==
Unlike **pause**, suspend keeps the contents of the VM on disk.
* Suspend an instance:
$ openstack server suspend demo
* Resume the instance:
$ openstack server resume demo
== Pause an instance ==
Keeps the contents of the VM in RAM.
* Pause an instance:
$ openstack server pause demo
* Unpause an instance:
$ openstack server unpause demo
== Lock an instance ==
* Lock an instance:
$ openstack server lock demo
* Unlock an instance:
$ openstack server unlock demo
== Shelve an instance ==
Unlike stopping an instance, shelving does not count the resources consumed by the instance against the hypervisor:
* Shelve an instance:
$ openstack server shelve demo
* Unshelve an instance:
$ openstack server unshelve demo
== Rebuild an instance ==
$ openstack server rebuild demo
== Connect to an instance over SSH ==
A floating IP must be associated with the instance in order to connect:
$ openstack server ssh --login cirros demo
Warning: Permanently added '192.168.126.134' (RSA) to the list of known hosts.
$
== Delete an instance ==
$ openstack server delete vm1 vm2
=== aggregate ===
== List aggregates ==
$ openstack aggregate list --long
+----+------------+-------------------+-------------------+
| ID | Name | Availability Zone | Properties |
+----+------------+-------------------+-------------------+
| 2 | aggregate1 | nova | {u'env': u'prod'} |
+----+------------+-------------------+-------------------+
$ openstack aggregate show aggregate1
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2017-08-22T17:18:31.718030 |
| deleted | False |
| deleted_at | None |
| hosts | [u'd52-54-00-2e-69-ac'] |
| id | 2 |
| name | aggregate1 |
| properties | env='prod' |
| updated_at | None |
+-------------------+----------------------------+
== Create an aggregate ==
$ openstack aggregate create --zone nova --property env=prod aggregate1
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| availability_zone | nova |
| created_at | 2017-08-22T17:18:31.718030 |
| deleted | False |
| deleted_at | None |
| hosts | [] |
| id | 2 |
| metadata | {u'env': u'prod', u'availability_zone': u'nova'} |
| name | aggregate1 |
| updated_at | 2017-08-22T17:18:31.780010 |
+-------------------+--------------------------------------------------+
== Add a host to an aggregate ==
$ openstack aggregate add host aggregate1 d52-54-00-2e-69-ac
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| availability_zone | nova |
| created_at | 2017-08-22T17:18:31.718030 |
| deleted | False |
| deleted_at | None |
| hosts | [u'd52-54-00-2e-69-ac'] |
| id | 2 |
| metadata | {u'env': u'prod', u'availability_zone': u'nova'} |
| name | aggregate1 |
| updated_at | None |
+-------------------+--------------------------------------------------+
== Delete an aggregate ==
$ openstack aggregate delete aggregate1
=== migrate ===
Allows an instance to be moved from one host to another.
* The **nova evacuate** and **nova host-evacuate** commands are to be used when the host has failed or is powered off. A shared disk is required to use them.
* The **nova migrate** and **nova host-servers-migrate** commands are to be used on stopped (non-running) instances.
* The **nova live-migration** and **nova host-evacuate-live** commands are to be used on **running** instances.
== Cold-migrate a single instance (failed host) ==
Moves an instance after its host has been lost.
The host must be down! The destination host can be specified (optional).
$ nova evacuate --force demo d52-54-00-2e-69-ac
== Cold-migrate all instances of a host (failed host) ==
Moves the instances after their host has been lost.
The host must be down! The destination host can be specified with the **--target_host** option (optional).
$ nova host-evacuate --target_host d52-54-00-2e-69-ac d52-54-00-ae-26-d7
+--------------------------------------+-------------------+---------------+
| Server UUID | Evacuate Accepted | Error Message |
+--------------------------------------+-------------------+---------------+
| 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e | True | |
| 63f82a2d-2355-4d76-8846-add1b6cbac4b | True | |
+--------------------------------------+-------------------+---------------+
== Cold-migrate an instance ==
$ nova migrate demo
The migration then has to be confirmed:
$ nova resize-confirm demo
== Cold-migrate all instances of a host ==
$ nova host-servers-migrate d52-54-00-ae-26-d7
+--------------------------------------+--------------------+---------------+
| Server UUID | Migration Accepted | Error Message |
+--------------------------------------+--------------------+---------------+
| 913f1163-763e-47f0-b652-0fda807e3044 | True | |
+--------------------------------------+--------------------+---------------+
== Live-migrate a single instance ==
The instance must be **running** to run this action. The destination host can be given as the last argument, as in the example below (optional).
$ nova live-migration server1 d52-54-00-ae-26-d7
== Live-migrate all instances of a host ==
The **--target-host** option specifies which host to migrate to.
$ nova host-evacuate-live --target-host d52-54-00-2e-69-ac d52-54-00-ae-26-d7
+--------------------------------------+-------------------------+---------------+
| Server UUID | Live Migration Accepted | Error Message |
+--------------------------------------+-------------------------+---------------+
| 88a21f03-bd23-4c8a-9d6e-7e7d006b1a0e | True | |
| 63f82a2d-2355-4d76-8846-add1b6cbac4b | True | |
| 543acf62-b393-4288-b0eb-358f9a26085d | True | |
+--------------------------------------+-------------------------+---------------+
== Check whether an instance is migrating ==
$ nova server-migration-list demo
+----+-------------+-----------+--------------------+--------------------+-----------+-----------+--------------------------------------+----------------------------+----------------------------+--------------------+------------------------+------------------------+------------------+----------------------+----------------------+
| Id | Source Node | Dest Node | Source Compute | Dest Compute | Dest Host | Status | Server UUID | Created At | Updated At | Total Memory Bytes | Processed Memory Bytes | Remaining Memory Bytes | Total Disk Bytes | Processed Disk Bytes | Remaining Disk Bytes |
+----+-------------+-----------+--------------------+--------------------+-----------+-----------+--------------------------------------+----------------------------+----------------------------+--------------------+------------------------+------------------------+------------------+----------------------+----------------------+
| 55 | - | - | d52-54-00-ae-26-d7 | d52-54-00-2e-69-ac | - | preparing | 43972027-dd81-4830-a2d5-bc57997ae374 | 2017-08-24T09:52:09.511040 | 2017-08-24T09:52:10.137602 | None | None | None | None | None | None |
+----+-------------+-----------+--------------------+--------------------+-----------+-----------+--------------------------------------+----------------------------+----------------------------+--------------------+------------------------+------------------------+------------------+----------------------+----------------------+
$ nova server-migration-show demo 55
+------------------------+--------------------------------------+
| Property | Value |
+------------------------+--------------------------------------+
| created_at | 2017-08-24T09:54:04.442665 |
| dest_compute | d52-54-00-ae-26-d7 |
| dest_host | - |
| dest_node | - |
| disk_processed_bytes | - |
| disk_remaining_bytes | - |
| disk_total_bytes | - |
| id | 55 |
| memory_processed_bytes | - |
| memory_remaining_bytes | - |
| memory_total_bytes | - |
| server_uuid | 43972027-dd81-4830-a2d5-bc57997ae374 |
| source_compute | d52-54-00-2e-69-ac |
| source_node | - |
| status | preparing |
| updated_at | 2017-08-24T09:54:04.894587 |
+------------------------+--------------------------------------+
== Cancel a migration ==
$ nova live-migration-abort demo 55
=== Managing instance volumes ===
== Attach a volume to an instance ==
$ openstack server add volume --device /dev/vdb demo myvolume
== Detach a volume from an instance ==
$ openstack server remove volume demo myvolume
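Attach and detach are also exposed by the Python clients. A hedged sketch only: it reuses `nova` and `cinder` client objects built as in the Python section at the end of this page, and the **demo** / **myvolume** names from the examples on this page.
# Assumes `nova` (python-novaclient) and `cinder` (python-cinderclient) clients already built.
server = nova.servers.find(name='demo')
volume = cinder.volumes.find(name='myvolume')
nova.volumes.create_server_volume(server.id, volume.id, '/dev/vdb')  # attach on /dev/vdb
nova.volumes.delete_server_volume(server.id, volume.id)              # detach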
=== List the novnc URLs ===
$ nova get-vnc-console demo novnc
+-------+------------------------------------------------------------------------------------+
| Type | Url |
+-------+------------------------------------------------------------------------------------+
| novnc | http://192.168.126.2:6080/vnc_auto.html?token=3e070831-a109-442c-a074-4ab1e7f17e7b |
+-------+------------------------------------------------------------------------------------+
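The same URL can be fetched from python-novaclient; a small sketch, assuming the `nova` client built as in the Python section at the end of this page:
console = nova.servers.find(name='demo').get_vnc_console('novnc')
print(console['console']['url'])  # same URL as the CLI output above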
=== Managing SSH keys ===
== List SSH keys ==
$ openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| gigix | 40:12:59:20:2a:00:4a:48:3f:62:67:80:7b:dc:27:92 |
+-------+-------------------------------------------------+
$ openstack keypair show gigix
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| created_at | 2017-08-24T10:50:07.633546 |
| deleted | False |
| deleted_at | None |
| fingerprint | 40:12:59:20:2a:00:4a:48:3f:62:67:80:7b:dc:27:92 |
| id | 6 |
| name | gigix |
| updated_at | None |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+-------------+-------------------------------------------------+
== Create an SSH key ==
* Import an existing key:
$ openstack keypair create --public-key id_rsa.pub gigix
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 40:12:59:20:2a:00:4a:48:3f:62:67:80:7b:dc:27:92 |
| name | gigix2 |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+-------------+-------------------------------------------------+
* Generate a new key:
$ openstack keypair create gigix2
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAwLxDO2AGW303hoAqR3ARE1PLtwYAEKkdN1yuhhDmZXjS+ZxV
QcYV905qrda0wMGJIOgw5ajje9YUMlTu1YQ63igoZGqCOHmi/1ip70TT1HTu3Rju
ioCs4dEx7YFUSMqHHq0Klione8CSCffLayCTQ+bviyVnRGvz3FQrSYh+FQvVlFp5
eGUwzWspi3KXQfRK9WS3FFXn1pDdF5epaKjcoQJKG4eFMJUWZvUpRtFXCX20VZU8
2Ph87Dx/x8xYej9U+g+p2VyQ3PSGUPrvEdBPU0EIH9mMIQi4RahuBnqnH8tY8EvM
8hWxIOxjiq5ikEFSSZpRq2u8TuEanvrtz87EuQIDAQABAoIBADu83nXT0ISg7gnh
Rbl4scI00cp7sJ95W1XigzGIoXDIH1RAsWg+lmZdxtD04HdyRSeO8EDutPuYIhBr
pM9HOdvLxKFDJfONOAk/GQTRZ3rNd4/N/3msYmlnprr/v/kD1Reb+NEMZcqEqH8w
b7tXkG2WcZ7GTBi4ARDEgdo71SB+TYrA6EVa+piv2jH0XeRI28P89cFmZL/B1ShD
sbgxBAumFThdfwgpC7T2ibPoZ7yZBS3Ki1OLlOjxlfg6vIyP6gTvyZn2owmQ9G5w
YbpclVhHmxYbHyqyyortEjl+j01wdH7DUPSigncjmDV5yDQCxAaafQJKiNNjc0UP
/n1HAdECgYEA7Qvws3h/i+FuZTluTizSFkZZni+w2m2PrhvkpCc53vkiFhA5P68R
Aoka1MwZnMKccawJ4cLCmfQJu+z/cLDKINEZo9Gy0atajGw7vhQUw7JjCUQtDv8O
4JVpOVRMwLZE5qENBWlTuEq0it0VJ00uqrKAi4lFqhA+yFofgiUSmG0CgYEA0CVT
FpDWnBdsZ1bv9YsPlAnJZvmwgaTCdexGf5EfysUgDPHaCzW4zys32MPWa3v9tSkp
2MKq/FuDs3xsB96sz/uElEsI0nX4uUlri3HGYzNP219R1WrYfhlVaPcU0rbO83j5
sdH8jGT4C/usPZIn1uV4hsT7cn4y7ghfhAzEBf0CgYBeXCpsxsK/A/XWBY6LP/xB
Ma/q8EEOMh7HyAKz9Ylr4PBYqAyh9SZoQ/uSSczIQg/UkA8+9zBP6H0XebgVO8q6
VYJHW+o63GMnEs6VU5kQbapOvfzRw2ZAsDk6wPvsmqHCzMlKJitVaSeFP4x0IJ07
BeN1qCc7E0xqpLV2MRu94QKBgQDDMa14pP1NRj4XrxS67N0AFCl2U0OuYGcolRoL
uXnZ+wCygv/asVeNmFb4BbeH9rAW+vJOX0hf/iZE5LKesrjXFmTfeHpee9lzUSH0
lA7aqp0B+aLRhDBgGLvbApLZhCwRcWqf0m+G7Y0cF7kPyIdp5KohoIq5dRWn2dxR
BnOxtQKBgQDTC/zlIOHWAok66ZCuAV6NagwM6gWJ4zsJ22TMgWobOIguj1D316T0
PUJXytEOp18ObWjiIjRjXLpsxKTN1/6fBbrS0Fw+XqCrAj0gya8HSI50r3sUVjuH
CFFRZgCiixhZPrIKqGhhS2EuPlqvFsFHMfADgnSP//yLyC+ociDeBA==
-----END RSA PRIVATE KEY-----
== Delete an SSH key ==
$ openstack keypair delete gigix
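The key pair operations above are also available through openstacksdk. A hedged sketch, reusing the `conn` object from the openstack sdk example at the end of this page; the file name id_rsa.pub and the key names are the ones used above.
# Assumes `conn` built as in the openstack sdk example at the end of this page.
with open('id_rsa.pub') as f:
    conn.compute.create_keypair(name='gigix', public_key=f.read())  # import an existing public key
new_key = conn.compute.create_keypair(name='gigix2')                # let Nova generate a key pair
print(new_key.private_key)                                          # only returned at creation time
conn.compute.delete_keypair('gigix2')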
=== Statistics ===
* Statistics for an instance:
$ nova diagnostics demo
+---------------------------+------------+
| Property | Value |
+---------------------------+------------+
| cpu0_time | 5440000000 |
| memory | 524288 |
| memory-actual | 524288 |
| memory-rss | 156960 |
| tap093f5864-de_rx | 8709 |
| tap093f5864-de_rx_drop | 0 |
| tap093f5864-de_rx_errors | 0 |
| tap093f5864-de_rx_packets | 79 |
| tap093f5864-de_tx | 10954 |
| tap093f5864-de_tx_drop | 0 |
| tap093f5864-de_tx_errors | 0 |
| tap093f5864-de_tx_packets | 109 |
| vda_errors | -1 |
| vda_read | 20397056 |
| vda_read_req | 1026 |
| vda_write | 38912 |
| vda_write_req | 26 |
+---------------------------+------------+
* Statistics per project:
$ openstack usage list
Usage from 2017-07-27 to 2017-08-25:
+----------------------------------+---------+--------------+-----------+---------------+
| Project | Servers | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+----------------------------------+---------+--------------+-----------+---------------+
| 8ee2aae87d9a437c86cb578a677aee7e | 70 | 298091.7 | 461.41 | 944.23 |
| f2f37f75a5bc48ceb8703a373ea2eb14 | 4 | 3944.82 | 7.7 | 7.7 |
+----------------------------------+---------+--------------+-----------+---------------+
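The same per-project report can be produced with python-novaclient. An untested sketch over the last 30 days, assuming the `nova` client built as in the Python section at the end of this page; the attribute names are what the SimpleTenantUsage API normally returns.
import datetime
# Usage window: the last 30 days.
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=30)
for usage in nova.usage.list(start, end):
    print(usage.tenant_id, usage.total_vcpus_usage, usage.total_memory_mb_usage)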
==== Volume ====
=== List volumes ===
$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------------------------+
| 3215e0d7-ddd0-418c-89f3-11873c170bc3 | myvolume | available | 1 | |
| 45fc339f-2351-4da8-9229-37a99b3b6703 | demo | in-use | 1 | Attached to demo on /dev/vda |
+--------------------------------------+--------------+-----------+------+-------------------------------+
$ openstack volume show myvolume
+--------------------------------+---------------------------------------+
| Field | Value |
+--------------------------------+---------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-08-24T10:20:13.112151 |
| description | None |
| encrypted | False |
| id | 3215e0d7-ddd0-418c-89f3-11873c170bc3 |
| migration_status | None |
| multiattach | False |
| name | myvolume |
| os-vol-host-attr:host | d52-54-00-31-d9-e3@backend-rbd-0#Ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 8ee2aae87d9a437c86cb578a677aee7e |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2017-08-24T10:20:13.492679 |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+--------------------------------+---------------------------------------+
=== Create a volume ===
$ openstack volume create --size 1 myvolume
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-08-24T10:20:13.112151 |
| description | None |
| encrypted | False |
| id | 3215e0d7-ddd0-418c-89f3-11873c170bc3 |
| migration_status | None |
| multiattach | False |
| name | myvolume |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | e0257f9ab0bd4bcea52ee3596c6ff9e4 |
+---------------------+--------------------------------------+
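The equivalent with python-cinderclient, as a hedged sketch (assumes the `cinder` client built with version='2' as in the Python section at the end of this page):
volume = cinder.volumes.create(size=1, name='myvolume')  # size in GB
print(volume.id, volume.status)                          # 'creating' until the backend finishes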
=== Delete a volume ===
$ openstack volume delete myvolume
A bug can occasionally show up: for example, a volume stays marked as attached to an instance that no longer exists. To recover the volume, run the following commands:
$ openstack volume list
+--------------------------------------+--------------+-----------+------+---------------------------------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+---------------------------------------------------------------+
| 61c85832-c2ea-490f-b01c-18603574ef80 | | detaching | 1 | Attached to 9327fefe-4ce9-4ef7-b8f5-e1f2f362d55e on /dev/vda |
| 5a9b78de-4b87-4973-bb13-1dc676fadc6a | cirros | detaching | 1 | Attached to ee910c1b-7e72-4e71-b7de-66ce97698cd8 on /dev/vda |
+--------------------------------------+--------------+-----------+------+---------------------------------------------------------------+
* Set the volume to the **available** state:
$ openstack volume set --state available 61c85832-c2ea-490f-b01c-18603574ef80
* Log in as the postgres user:
$ su - postgres
* Connect to the **cinder** database:
$ psql cinder
* Set the volume's **attach_status** column to **detached** (the query below also resets its **status** to **available**):
cinder=# update volumes set attach_status='detached',status='available' where id='61c85832-c2ea-490f-b01c-18603574ef80';
UPDATE 1
==== Project ====
=== List projects ===
$ openstack project list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| f2f37f75a5bc48ceb8703a373ea2eb14 | admin |
| fd45b94bf13f4836b84b325acaa84869 | service |
| 8ee2aae87d9a437c86cb578a677aee7e | openstack |
+----------------------------------+-----------+
$ openstack project show openstack
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | 8ee2aae87d9a437c86cb578a677aee7e |
| is_domain | False |
| name | openstack |
| parent_id | default |
+-------------+----------------------------------+
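With openstacksdk the same listing is a one-line loop; a hedged sketch that reuses the `conn` object from the openstack sdk example at the end of this page (listing all projects typically requires admin rights):
for project in conn.identity.projects():
    print(project.id, project.name, project.is_enabled)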
=== List a project's quotas ===
$ openstack limits show --absolute
+--------------------------+-------+
| Name | Value |
+--------------------------+-------+
| maxServerMeta | 128 |
| maxTotalInstances | 10 |
| maxPersonality | 5 |
| totalServerGroupsUsed | 0 |
| maxImageMeta | 128 |
| maxPersonalitySize | 10240 |
| maxTotalRAMSize | 51200 |
| maxServerGroups | 10 |
| maxSecurityGroupRules | 20 |
| maxTotalKeypairs | 100 |
| totalCoresUsed | 3 |
| totalRAMUsed | 1536 |
| maxSecurityGroups | 10 |
| totalFloatingIpsUsed | 0 |
| totalInstancesUsed | 3 |
| maxServerGroupMembers | 10 |
| maxTotalFloatingIps | 10 |
| totalSecurityGroupsUsed | 1 |
| maxTotalCores | 20 |
| totalSnapshotsUsed | 1 |
| maxTotalBackups | 10 |
| maxTotalVolumeGigabytes | 1000 |
| maxTotalSnapshots | 10 |
| maxTotalBackupGigabytes | 1000 |
| totalBackupGigabytesUsed | 0 |
| maxTotalVolumes | 10 |
| totalVolumesUsed | 2 |
| totalBackupsUsed | 0 |
| totalGigabytesUsed | 3 |
+--------------------------+-------+
$ nova limits
+------+-----+-------+--------+------+----------------+
| Verb | URI | Value | Remain | Unit | Next_Available |
+------+-----+-------+--------+------+----------------+
+------+-----+-------+--------+------+----------------+
+--------------------+------+-------+
| Name | Used | Max |
+--------------------+------+-------+
| Cores | 3 | 20 |
| ImageMeta | - | 128 |
| Instances | 3 | 10 |
| Keypairs | - | 100 |
| Personality | - | 5 |
| Personality Size | - | 10240 |
| RAM | 1536 | 51200 |
| Server Meta | - | 128 |
| ServerGroupMembers | - | 10 |
| ServerGroups | 0 | 10 |
+--------------------+------+-------+
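The absolute limits can also be read with python-novaclient; a minimal sketch, assuming the `nova` client built as in the Python section at the end of this page:
for limit in nova.limits.get().absolute:
    print(limit.name, limit.value)   # same Name/Value pairs as "openstack limits show --absolute"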
==== Extensions ====
Lists the extensions and, with the **--long** option, the associated OpenStack help URL:
$ openstack extension list --long --network
$ openstack extension list --network -c Alias -c Name
===== Python =====
==== openstack sdk ====
Example code using the openstacksdk library (the library behind the openstack command); it reads the usual OS_* environment variables:
#!/usr/bin/env python2
# Author : Ghislain LE MEUR
# Doc : https://developer.openstack.org/sdks/python/openstacksdk/
# Exemples : https://github.com/openstack/python-openstacksdk/tree/master/examples
import os
from openstack import connection
from openstack import utils
#utils.enable_logging(debug=True, stream=sys.stdout)
#utils.enable_logging(debug=True, path='openstack.log', stream=sys.stdout)
#import logging
#logger = logging.getLogger('requests')
#formatter = logging.Formatter(
# '%(asctime)s %(levelname)s: %(name)s %(message)s')
#console = logging.StreamHandler(sys.stdout)
#console.setFormatter(formatter)
#logger.setLevel(logging.DEBUG)
#logger.addHandler(console)
conn = connection.Connection(auth_url=os.environ['OS_AUTH_URL'],
                             project_name=os.environ['OS_PROJECT_NAME'],
                             username=os.environ['OS_USERNAME'],
                             password=os.environ['OS_PASSWORD'])
print('Users    : %s' % ', '.join([user.name for user in conn.identity.users()]))
print('Images   : %s' % ', '.join([image.name for image in conn.image.images()]))
print('Servers  : %s' % ', '.join([server.name for server in conn.compute.servers()]))
print('Networks : %s' % ', '.join([network.name for network in conn.network.networks()]))
==== Per-API client libraries ====
**Warning:** these libraries are deprecated; use the [[#openstack_sdk|openstacksdk]] library instead.
Example code using the python-keystoneclient, python-neutronclient, python-cinderclient, python-glanceclient, python-novaclient, etc. libraries (the ones behind the neutron, cinder, glance, nova, etc. commands):
#!/usr/bin/env python2
# Author : Ghislain LE MEUR
from os import environ as env
# keystone => https://docs.openstack.org/python-keystoneclient/latest/
from keystoneauth1 import loading
from keystoneauth1 import session
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url=env['OS_AUTH_URL'],
                                username=env['OS_USERNAME'],
                                password=env['OS_PASSWORD'],
                                project_id=env['OS_PROJECT_ID'])
sess = session.Session(auth=auth)
# Glance => https://docs.openstack.org/python-glanceclient/latest/
import glanceclient.client as glclient
glance = glclient.Client(version='2', session=sess)
# Nova => https://docs.openstack.org/python-novaclient/latest/
import novaclient.client as nvclient
nova = nvclient.Client(version='2', session=sess)
# Neutron => https://docs.openstack.org/python-neutronclient/latest/
import neutronclient.v2_0.client as ntclient
neutron = ntclient.Client(session=sess)
# Cinder => https://docs.openstack.org/python-cinderclient/latest/
import cinderclient.client as cdclient
cinder = cdclient.Client(version='2', session=sess)
### MAIN ###
if __name__ == '__main__':
    print('Images   : %s' % ', '.join([image.name for image in glance.images.list()]))
    print('Networks : %s' % ', '.join([network['name'] for network in neutron.list_networks()['networks']]))
    print('Volumes  : %s' % ', '.join([volume.name for volume in cinder.volumes.list()]))
    print('Servers  : %s' % ', '.join([server.name for server in nova.servers.list()]))