Red Hat Virtualization (RHV) Definitions, Requirements, and Installation

Mindwatering Incorporated

Author: Tripp W Black

Created: 10/07 at 04:33 PM

 

Category:
Linux
RHV

Definitions:
Host: Hardware that runs the virtualization layer, either as a Type 2 hypervisor (an add-on service on a general-purpose OS) or a Type 1, integrated hypervisor OS.

Guest: VM or Container (Pod of Containers actually) running on the Host

Hypervisor: Software that partitions the host hardware into multiple VMs (CPU, Memory, Storage/Disk, and Networking) and runs them

HA: High Availability - If one host goes offline (isolation or hardware failure), the VMs can be re-started on the remaining "up" hosts.


Red Hat Virtualization Concepts/Definitions:
RHV - Open source virtualization platform that provides centralized management of hosts, VMs, and VDs (virtual desktops) across an enterprise datacenter. It consists of three major components:
- RHV-M (Manager)
- Physical hosts (RHV-H, or RHV self-hosted engine hosts/hypervisors - type 1)
- - kernel-based KVM hypervisor, requiring hardware virtualization extensions (Intel VT-x or AMD-V) and the No eXecute (NX) flag. IBM POWER8 is also supported.
- Storage domains

Minimum Host Requirements:
- 2 GB RAM minimum; up to 4 TB supported
- CPU with Intel VT-x or AMD-V and the NX flag
- 55 GB minimum storage
- /var/tmp at least 5 GB
Minimum Host Network Requirements:
- 1 NIC @ 1 Gbps
- - However min 3 recommended: 1 for mgmt traffic, 1 for VM guest traffic, and 1 for data domain storage traffic
- DNS and NTP servers must not run as VMs inside the RHV environment itself, since the hosts come up before their VMs and require working forward and reverse DNS entries at boot.
- The RHV-H firewall is auto-configured for the required network services

Storage Domain:
- Also called a RHEV Storage Domain. It is represented as a Volume Group (VG). Each VM within the VG has its own LV (Logical Volume), which becomes the VM's disk. Performance degrades with high numbers of LVs in a VG (300+), so the soft limit is 300. For more scalability, create additional Storage Domains (VGs). See RH Technote 441203 for performance limits. When a snapshot is created, a new LV is created for the VM within the Storage Domain (VG).
- Types:
- - Data Domain: stores the hard disk images of the VMs (the LVs) and VM templates. Data Domains can utilize NFS, iSCSI, FCP, GlusterFS (deprecated), and POSIX storage. A data domain cannot be shared between data centers.
- - Export Domain: (deprecated) stores VM LV disk images and VM templates for transfer between data centers, and is where backups of VMs are copied. Export Domains are NFS. Multiple data centers can access a single export domain, but it can only be used by one at a time.
- - ISO Domain: (deprecated) stores disk images for installations
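The VG/LV mapping above can be inspected from an attached host with standard LVM tools. A sketch, assuming a root shell on a host with a block (iSCSI/FCP) storage domain; the VG name is a placeholder, since RHV names storage domain VGs by UUID:

```shell
# Sketch: inspect a block storage domain's VG/LV layout from a host.
# "my-domain-vg" is a placeholder; RHV names storage domain VGs by UUID.
vgs                        # each block storage domain appears as one VG
lvs my-domain-vg           # each VM disk and snapshot in the domain is one LV
lvs my-domain-vg | wc -l   # rough LV count; stay under the ~300 soft limit
```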

Data Domain: hosted_storage
Management Logical Network: ovirtmgmt
DataCenter: default
Cluster: default

RHV-H = Host - Host Types:
Engine Host: Host w/Manager VM. Two hosts running manager (engines) = HA capable
Guest Host: Host running VMs

Two ways to get a working RHV-H host/hypervisor:
- RHV-H ISO (or other methods)
- RHEL host with the Virtualization repository packages and modules added.

Hosts talk to the RHV-M VM (or separate server) via the Virtual Desktop and Server Manager (VDSM)
- Monitors memory, storage, and networks
- Creates, migrates, and destroys VMs
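A quick health check of VDSM on a host might look like the following sketch. It assumes a root shell on the host; vdsmd is the VDSM service name, and vdsm-client ships with VDSM:

```shell
# Sketch: verify the VDSM daemon on an RHV host (root shell assumed).
systemctl is-active vdsmd.service             # expect "active"
systemctl is-enabled vdsmd.service            # expect "enabled"
vdsm-client Host getCapabilities | head -20   # query VDSM's local API directly
```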

RHV-H - Red Hat Virtualization Host:
- Is a standalone minimal operating system based on RHEL
- Includes a graphical web admin management interface (Cockpit)
- Installs via ISO file, USB storage, PXE network distribution, or by cloning

RHHI-V - Red Hat Hyper-converged Infrastructure for Virtualization
- Uses "self-hosted" engine
- Gluster Storage

RHV Installation Methods:
- Standalone = Manager separate (not hosted on the hosts = not self hosted)
or
- Self-Hosted Engine = Manager runs as a VM on the first host, deployed after that host is installed.

Host Graphical UI:
https://rhvhosta.mindwatering.net:9090
- accept the self-signed certificate

When RHV-H hosts are installed manually (e.g. from ISO), they must be registered with a subscription and have the repository enabled:
[root@hostname ~]# subscription-manager repos --enable=rhel-8-server-rhvh-4-rpms

RHV-M - Red Hat Virtualization Manager:
- Integrates w/various Directory services (JBOSS --> LDAP) for user management
- Manages physical and virtual resources in a RHV environment
- Uses local PostgreSQL db for config engine (engine) and the data warehouse (ovirt-engine-history) databases

Manager Standalone Installation Order:
1. Manager installed on separate server
2. Install hosts (min. 2 for HA)
3. Connect hosts to Manager
4. Attach storage accessible to all hosts
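For step 1, the standalone Manager install itself is a package install plus an interactive setup run. A sketch per the RHV 4.x flow, assuming the server is already subscribed and has the RHV repos enabled:

```shell
# Sketch: standalone Manager installation (RHEL server, RHV repos enabled).
yum -y install rhvm    # Manager packages
engine-setup           # interactive setup: FQDN, PostgreSQL engine db, admin password
```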

Self-Hosted Engine Installation Order:
1. Install first self-hosted engine host
- subscribe host to entitlements for RHV and enable software repos
- confirm DNS forward and reverse working
2. Create Manager VM on host
- create via the host graphical web console
or
- create on the host via the hosted-engine --deploy command (the GUI is the recommended method).
3. Attach storage accessible to all hosts
- Back-end storage is typically NFS or iSCSI. The storage attached becomes the "Default" data center and the "default" cluster. This default storage contains the LV/storage of the RHV-M VM created.
- This also illustrates that the RHV-M VM is actually created after the storage is attached
4. Install additional self-hosted engine hosts (min. 2 for HA)
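The CLI flavor of the flow above uses the hosted-engine tool, run as root on the host. A sketch:

```shell
# Sketch: self-hosted engine deployment and HA status checks (root on a host).
hosted-engine --deploy      # step 2 alternative to the web console: builds the Manager VM
hosted-engine --vm-status   # after deploy: engine state and HA score per host
```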

RHV-M Minimum Requirements:
- CPU: 2 core, quad core recommended
- Memory: 16 GB recommended; 4 GB is possible if the data warehouse is not installed and memory is not being consumed by existing processes
- Storage: 25 GB locally accessible/writable, but 50 GB or more recommended
- Network: 1 NIC, 1 Gbps min.

RHV-M Administration Portal:
https://rhvmgr.mindwatering.net
or
https://rhvmgr.mindwatering.net/ovirt-engine/webadmin/
User: admin
Password: <pwd set before clicking start button>

Verification of RHV-M appliance services:
$ ssh root@rhvmgr.mindwatering.net
<enter root pwd>
[root@rhvmgr ~]# host rhvmgr.mindwatering.net
<view output - confirm name and IP, note IP for next validation>
[root@rhvmgr ~]# host -t PTR <IP shown above>
<view output - confirm IP resolves to name>
[root@rhvmgr ~]# ip add show eth0
<view output of NIC config - confirm IP, broadcast, mtu, and up>
[root@rhvmgr ~]# free
<view output - confirm memory and swap sufficient to minimums>
[root@rhvmgr ~]# lscpu | grep 'CPU(s)'
<view output - confirm number of cores>
[root@rhvmgr ~]# grep -A1 localhost /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.conf | grep -v '^#'
<view output - confirm PostgreSQL db listening and note ports>
[root@rhvmgr ~]# systemctl is-active ovirt-engine.service
<view output - confirm says "active">
[root@rhvmgr ~]# systemctl is-enabled ovirt-engine.service
<view output - confirm says "enabled">

Download of RHV-M CA root certificate for browser trust/install:
http://rhvmgr.mindwatering.net/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
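On a Linux admin workstation, the certificate can also be fetched and trusted from the shell. A sketch; the trust-store path and update-ca-trust command are RHEL-family specifics, and browsers with their own certificate store still need a manual import:

```shell
# Sketch: download the Manager CA cert and add it to the system trust store
# (RHEL-family paths shown).
curl -o rhvm-ca.pem \
  'http://rhvmgr.mindwatering.net/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
sudo cp rhvm-ca.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
```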

If uploads (ISOs, etc.) fail, check whether the browser has the Manager certificate installed.
e.g. "Connection to ovirt-imageio-proxy service has failed. Make sure the service is installed, configured, and ovirt-engine-certificate is registered as a valid CA in the browser."

NFS Export Configuration for Storage Domains
- the storage server must not be one of the VMs hosted in RHV, since it must be up before the hosts are started
- read/write mode
- edit config file as either:
- - /etc/exports
or
- - /etc/exports.d/rhv.exports
- ownership:
- - top-level directory owned by user vdsm w/UID 36 and by group kvm w/GID 36, with 755 access:
- - - user vdsm has rwx access (7)
- - - group kvm has r-x access (5)
- - - other has r-x access only (5)
- ensure NFS server service running (e.g. nfs-server.service enabled and active/running)
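The ownership and mode rules above can be sketched as follows. EXPORT_DIR and the demo paths are placeholders: on the real storage server the directory would live under /exports (e.g. /exports/hosted_engine), the export line would go in /etc/exports.d/rhv.exports, and chown requires root.

```shell
# Sketch: prepare a directory for use as an RHV NFS storage domain.
# EXPORT_DIR is a placeholder; on the storage server it would be
# e.g. /exports/hosted_engine, and the chown below requires root.
EXPORT_DIR="${EXPORT_DIR:-/tmp/rhv_export_demo}"

mkdir -p "$EXPORT_DIR"
chown 36:36 "$EXPORT_DIR" 2>/dev/null \
  || echo "note: chown vdsm:kvm (36:36) requires root"
chmod 755 "$EXPORT_DIR"    # rwx vdsm (7), r-x kvm (5), r-x other (5)

# Matching read/write export line for /etc/exports.d/rhv.exports
# (written to a demo path here, not the live config):
echo "$EXPORT_DIR  *(rw)" > /tmp/rhv.exports.demo

stat -c '%a' "$EXPORT_DIR"    # prints 755
```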

Verification of Export:
$ ssh root@rvhstor1.mindwatering.net
<enter root pwd>
[root@rvhstor1 ~]# systemctl is-active nfs-server.service
<view output - confirm active>
[root@rvhstor1 ~]# systemctl is-enabled nfs-server.service
<view output - confirm enabled>
[root@rvhstor1 ~]# firewall-cmd --get-active-zones
<view output - e.g. public>
[root@rvhstor1 ~]# firewall-cmd --list-all --zone=public
<view output - verify ports and services for nfs are included>
[root@rvhstor1 ~]# exportfs
<view output - verify that the /exports/<datanames> are listed and match what will be used in RHV e.g. hosted_engine>
[root@rvhstor1 ~]# ls -ld /exports/hosted_engine/
<view output - check permissions and ownership - e.g. drwxr-xr-x. 3 vdsm kvm 76 Jan 12 12:34 /exports/hosted_engine/>

Console View Apps for Administrative workstations using GUI consoles:
- Linux:
- - virt-viewer
- - $ sudo yum -y install virt-viewer
- MS Windows:
- - Download both the viewer and the USB device add-on from the RHV-M appliance:
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/virt-viewer-x64.msi (64-bit)
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/usbdk-x64.msi (64-bit)
- Viewer has headless mode and VNC option/use
- Viewer SPICE supports file transfer and clipboard copy-and-paste

Creating VM Notes:
- Disk Interface Types:
- - IDE: oldest and slowest, use for older OS and disk compatibility, not recommended
- - VirtIO: /dev/vdX drives, much faster than IDE, but more limited than SCSI. Use when advanced features are not needed.
- - VirtIO-SCSI: /dev/sdX drives, improves scalability, replaces the virtio-blk driver, can connect directly to SCSI LUNs and handles 100s of devices on single controller
- Allocation Policy - both thin and thick options
- - same as vSphere: thick formats and allocates the entire disk up front; thin/sparse allocates only the initial space needed, with the rest allocated as used.
- - thin may be slightly slower, as disk growth requires new allocations to be zeroed before new data blocks are written.
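The thin-vs-thick trade-off can be illustrated with plain sparse files. This is a concept sketch only: RHV itself does the allocation at the LV/qcow2 layer, not with these commands, and the /tmp paths are arbitrary.

```shell
# Concept sketch: thin (sparse) vs. thick (preallocated) disk images,
# shown with coreutils rather than RHV's own LV/qcow2 machinery.
truncate -s 100M /tmp/thin.img      # thin: full size claimed, no blocks allocated yet
fallocate -l 100M /tmp/thick.img    # thick: all 100M of blocks allocated up front
du -k /tmp/thin.img /tmp/thick.img  # thin shows ~0 KB used, thick ~102400 KB
```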

- Boot Notes:
- Run Once parameters intentionally persist through reboots, since software installation often requires them across a reboot. You must shut down the VM before Run Once reverts to the normal boot defaults.




