Draft
Install OpenStack on virtual machines running on a CentOS 7 KVM host. This document covers the installation of a single Undercloud (director) node, a single OpenStack Controller, and two Compute nodes.
Undercloud
1 Director vm
CPU: 8 cores
Memory: 16 GB
Network interfaces: 2
nic1: KVM default NAT network on 192.168.122.0/24
NAME=ens3
DEVICE=ens3
ONBOOT=yes
IPADDR=192.168.122.24
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=192.168.122.1
nic2: Provisioning network on 192.168.126.0/24
TYPE=Ethernet
DEVICE=ens10
IPADDR=192.168.126.2
NETMASK=255.255.255.0
Disk: virtio 80 GB
Overcloud
1 Controller vm
CPU: 4 cores
Memory: 12 GB
Network interfaces: 2
Disk: virtio 60 GB
2 Compute vms
CPU: 4 cores
Memory: 12 GB
Network interfaces: 2
Undercloud OS
CentOS 8 default install
Disable libvirtd
Do not boot CentOS into a GUI; boot to the non-GUI (multi-user) target.
Copy the ssh key to the undercloud vm. The undercloud will change the root password after it is installed.
From host machine to undercloud vm: ssh-copy-id root@undercloud.localdomain
Undercloud install
stack user
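A minimal sketch of the usual TripleO stack user setup on the undercloud vm:
sudo useradd stack
sudo passwd stack
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
su - stack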
repository setup
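A hedged sketch for TripleO Train on CentOS 8; the python3-tripleo-repos RPM location on trunk.rdoproject.org changes over time, so verify the current URL before installing:
sudo dnf install -y <python3-tripleo-repos rpm from https://trunk.rdoproject.org/centos8/>
sudo -E tripleo-repos -b train current
sudo dnf install -y python3-tripleoclient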
Create necessary directories
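These are the directories referenced later for the overcloud images and the template files:
mkdir -p /home/stack/images /home/stack/templates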
Create container image prepare yaml file
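One way is to generate the default file with tripleoclient and then edit it to match the version shown later in this document:
openstack tripleo container image prepare default --local-push-destination --output-env-file /home/stack/templates/containers-prepare-parameter.yaml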
Prepare undercloud.conf
cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
edit undercloud.conf to look like
[DEFAULT]
clean_nodes = true
container_images_file = /home/stack/templates/containers-prepare-parameter.yaml
inspection_extras = true
local_interface = ens10
local_ip = 192.168.126.2/24
local_subnet = ctlplane-subnet
subnets = ctlplane-subnet
undercloud_admin_host = 192.168.126.3
undercloud_debug = true
undercloud_hostname = undercloud.localdomain
undercloud_nameservers = 192.168.122.1
undercloud_ntp_servers = 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org,3.pool.ntp.org
undercloud_public_host = 192.168.126.4

[ctlplane-subnet]
cidr = 192.168.126.0/24
dhcp_end = 192.168.126.24
dhcp_start = 192.168.126.5
dns_nameservers = 192.168.122.1
gateway = 192.168.126.2
inspection_iprange = 192.168.126.100,192.168.126.120
masquerade_network = true
masquerade = true
Verify undercloud.conf
openstack undercloud install
Expected output
Time: up to 2 hours
########################################################
Deployment successful!
########################################################
Writing the stack virtual update mark file /var/lib/tripleo-heat-installer/update_mark_undercloud
##########################################################
The Undercloud has been successfully installed.
Useful files:
Password file is at /home/stack/undercloud-passwords.conf
The stackrc file is at ~/stackrc
Use these files to interact with OpenStack services, and ensure they are secured.
##########################################################
Validate undercloud
sudo podman ps
source stackrc
openstack service list
Log file: /home/stack/install-undercloud.log
Overcloud
All commands in this section are run on the undercloud unless noted otherwise
Download overcloud images
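A hedged example; the RDO image location below is an assumption, so verify the current Train URL:
cd /home/stack/images
curl -O https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/overcloud-full.tar
curl -O https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ironic-python-agent.tar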
Extract and upload to glance
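For example:
cd /home/stack/images
for f in *.tar; do tar -xf $f; done
source ~/stackrc
openstack overcloud image upload --image-path /home/stack/images/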
Set DNS server
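Point the provisioning subnet at the same nameserver used elsewhere in this setup:
openstack subnet set --dns-nameserver 192.168.122.1 ctlplane-subnet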
On the host machine create qcow2 images for controller and compute images
cd /stor (assuming /stor is the storage area for libvirt)
qemu-img create -f qcow2 -o preallocation=metadata overcloud-controller.qcow2 60G
qemu-img create -f qcow2 -o preallocation=metadata overcloud-compute1.qcow2 60G
qemu-img create -f qcow2 -o preallocation=metadata overcloud-compute2.qcow2 60G
chown qemu:qemu overcloud-*
Generate and define libvirt xml files
virt-install --ram 12288 --vcpus 4 --os-variant rhel7 --disk path=/stor/overcloud-controller.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:prov --network network:default --name overcloud-controller --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-controller.xml
virt-install --ram 12288 --vcpus 4 --os-variant rhel7 --disk path=/stor/overcloud-compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:prov --network network:default --name overcloud-compute1 --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-compute1.xml
virt-install --ram 12288 --vcpus 4 --os-variant rhel7 --disk path=/stor/overcloud-compute2.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:prov --network network:default --name overcloud-compute2 --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-compute2.xml
Edit the xml files to change the cpu type and define the domains, as follows
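A hedged example; host-passthrough is one common choice for nested virtualization, but pick whatever matches your host CPU. Replace the <cpu> element in each file, e.g.
<cpu mode='host-passthrough'/>
then define the domains on the host:
virsh define /tmp/overcloud-controller.xml
virsh define /tmp/overcloud-compute1.xml
virsh define /tmp/overcloud-compute2.xml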
On the undercloud vm
VBMC setup. Using VirtualBMC (vbmc) instead of pxe_ssh
su - stack
sudo yum install python3-virtualbmc -y
copy the ssh key to the host machine using ssh-copy-id
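From the stack user on the undercloud to the KVM host (the host IP matches the libvirt URIs below):
ssh-copy-id root@192.168.122.1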
vbmc add overcloud-controller --port 6001 --username admin --password password --libvirt-uri qemu+ssh://root@192.168.122.1/system
vbmc add overcloud-compute1 --port 6002 --username admin --password password --libvirt-uri qemu+ssh://root@192.168.122.1/system
vbmc add overcloud-compute2 --port 6003 --username admin --password password --libvirt-uri qemu+ssh://root@192.168.122.1/system
vbmc start overcloud-controller
vbmc start overcloud-compute1
vbmc start overcloud-compute2
Verify
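For example, all three domains should show up with their ports and power state:
vbmc list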
To delete just in case
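For example:
vbmc stop overcloud-controller
vbmc delete overcloud-controller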
Grab the MAC address of the provisioning interface of each vm
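On the host machine, the interface attached to the prov network is the one to use, for example:
virsh domiflist overcloud-controller
virsh domiflist overcloud-compute1
virsh domiflist overcloud-compute2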
Create an overcloud.json
{ "nodes": [ { "arch": "x86_64", "disk": "60", "memory": "12288", "name": "overcloud-controller", "pm_user": "admin", "pm_addr": "127.0.0.1", "pm_password": "password", "pm_port": "6001", "pm_type": "pxe_ipmitool", "mac": [ "52:54:00:f3:25:52" ], "cpu": "4" }, { "arch": "x86_64", "disk": "60", "memory": "12288", "name": "overcloud-compute1", "pm_user": "admin", "pm_addr": "127.0.0.1", "pm_password": "password", "pm_port": "6002", "pm_type": "pxe_ipmitool", "mac": [ "52:54:00:86:ba:61" ], "cpu": "4" }, { "arch": "x86_64", "disk": "60", "memory": "12288", "name": "overcloud-compute2", "pm_user": "admin", "pm_addr": "127.0.0.1", "pm_password": "password", "pm_port": "6003", "pm_type": "pxe_ipmitool", "mac": [ "52:54:00:1e:03:e8" ], "cpu": "4" } ] }
Introspect and make the vms ready
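The standard tripleoclient flow: register the nodes, introspect them, and mark them available:
source ~/stackrc
openstack overcloud node import /home/stack/overcloud.json
openstack overcloud node introspect --all-manageable --provide
openstack baremetal node list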
Set the profiles for controller and computes
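One way, assuming the node names from overcloud.json above:
openstack baremetal node set --property capabilities='profile:control,boot_option:local' overcloud-controller
openstack baremetal node set --property capabilities='profile:compute,boot_option:local' overcloud-compute1
openstack baremetal node set --property capabilities='profile:compute,boot_option:local' overcloud-compute2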
Create nodes.yaml
parameter_defaults:
  ControllerCount: 1
  OvercloudControllerFlavor: control
  ComputeCount: 2
  OvercloudComputeFlavor: compute
  NeutronPublicInterface: ens4
Make sure containers-prepare-parameter.yaml looks like
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      ceph_alertmanager_image: alertmanager
      ceph_alertmanager_namespace: quay.io/prometheus
      ceph_alertmanager_tag: v0.16.2
      ceph_grafana_image: grafana
      ceph_grafana_namespace: quay.io/app-sre
      ceph_grafana_tag: 5.2.4
      ceph_image: daemon
      ceph_namespace: quay.ceph.io/ceph-ci
      ceph_node_exporter_image: node-exporter
      ceph_node_exporter_namespace: quay.io/prometheus
      ceph_node_exporter_tag: v0.17.0
      ceph_prometheus_image: prometheus
      ceph_prometheus_namespace: quay.io/prometheus
      ceph_prometheus_tag: v2.7.2
      ceph_tag: v4.0.13-stable-4.0-nautilus-centos-7-x86_64
      name_prefix: centos-binary-
      name_suffix: ''
      namespace: docker.io/tripleotraincentos8
      neutron_driver: ovn
      rhel_containers: false
      tag: current-tripleo
      tag_from_label: rdo_version
Deploy using
openstack overcloud deploy --templates -e nodes.yaml -e /home/stack/templates/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml
Expected output sample
Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.126.6:5000
Overcloud Horizon Dashboard URL: http://192.168.126.6:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed without error
Debug in case of failures
Debugging ansible
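A couple of hedged starting points on a Train undercloud:
openstack stack failures list overcloud --long
less /var/lib/mistral/overcloud/ansible.log (config-download log; the exact path is an assumption)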
Delete the deployment if needed
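For example:
openstack overcloud delete overcloud
(or the heat equivalent: openstack stack delete overcloud --yes --wait)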
How to find the IP of the controller, for example:
cat /etc/hosts
OR
openstack stack resource list overcloud|grep Controller
openstack stack show UUID
Useful commands
Flavors
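For example, to see the flavors and profile assignments created by the undercloud install:
source ~/stackrc
openstack flavor list
openstack overcloud profiles list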
Post deployment
IP info
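With stackrc sourced, the overcloud nodes and their ctlplane IPs show up as undercloud servers (this assumes a nova-based Train undercloud):
source ~/stackrc
openstack server list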
source overcloudrc
Create public network
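A sketch assuming the external network is flat, maps to the default 'datacentre' physical network, and hands out floating IPs from the 192.168.122.0/24 libvirt NAT range (the allocation pool is an arbitrary choice):
openstack network create public --external --provider-network-type flat --provider-physical-network datacentre
openstack subnet create public-subnet --network public --subnet-range 192.168.122.0/24 --no-dhcp --gateway 192.168.122.1 --allocation-pool start=192.168.122.200,end=192.168.122.230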
Create a flavor for the guest vms
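The guest vm below uses a flavor named tiny, for example:
openstack flavor create --ram 512 --disk 1 --vcpus 1 tiny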
Download cloud image and upload it to glance
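For example with the CirrOS test image (the version and URL are assumptions):
curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create cirros --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public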
Create private network
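For example (the subnet range is an arbitrary choice):
openstack network create private
openstack subnet create private-subnet --network private --subnet-range 172.16.1.0/24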
Create a router
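For example:
openstack router create router1
openstack router set router1 --external-gateway public
openstack router add subnet router1 private-subnet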
Keypair
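The guest vm below uses a keypair named default, for example:
openstack keypair create --public-key ~/.ssh/id_rsa.pub default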
Security
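The guest vm below uses a security group named basic; for example, allow SSH and ICMP:
openstack security group create basic
openstack security group rule create basic --protocol tcp --dst-port 22
openstack security group rule create basic --protocol icmp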
Guest server vm
openstack server create --flavor tiny --image cirros --key-name default --security-group basic --network private myserver
openstack server show -c status myserver
openstack floating ip create public
openstack floating ip list
openstack server add floating ip myserver 192.168.122.X
Test
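For example, from the undercloud or the KVM host, using the floating IP attached above:
ping <floating ip>
ssh cirros@<floating ip>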