Overhead in a virtualized environment
There are many sources of overhead in a virtualized environment; finding them requires analysis both from the top down and from the bottom up.
Sources of overhead in QNX hypervisor systems include:
- Guest exits: These occur whenever a virtualization event requires that the guest leave the native environment (i.e., stop executing). Such events include the hypervisor examining and passing on interrupts, and looking after privileged operations such as updating registers. You can reduce these events by configuring your guest OSs so that they don't unnecessarily trigger exits (see Guest exits and Guest-triggered exits).
- Interrupts: Whether they are initiated by the hardware or, indirectly, by a guest request, individual interrupts in a hypervisor system require a guest exit so the hypervisor can manage them. This means the cost of interrupts can be significantly greater than in a non-virtualized system. You can reduce the cost of interrupts by eliminating superfluous interrupts (e.g., by using VIRTIO devices where appropriate, as in the sketch after this list) and by using hardware virtualization support to reduce the cost of the necessary interrupts (see Interrupts and Virtual I/O (VIRTIO)).
- Hypervisor overload: Poor VM configuration can force the hypervisor to trigger guest exits to manage competing demands for resources. You can often resolve many overload problems by modifying your VM configuration (see vCPUs and hypervisor performance).
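For example, emulating a physical block device forces a guest exit on each device-register access, while a paravirtualized VIRTIO device moves most of its work through shared-memory virtqueues and so needs far fewer exits. The fragment below sketches what attaching a VIRTIO block device in a VM configuration might look like; the address, interrupt, and host device path are placeholders, and the exact option syntax depends on your hypervisor version (see the VM configuration reference in your hypervisor documentation).

    # Sketch of a VM configuration fragment: a VIRTIO block device.
    # The address, interrupt, and host device are placeholders.
    vdev virtio-blk
        loc 0x1c0e0000      # guest-physical location of the device
        intr gic:41         # interrupt used to signal the guest
        hostdev /dev/hd0    # host block device backing the virtual disk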
A good general rule to keep in mind when tuning your hypervisor system for performance: for a guest OS in a VM, the relative costs of accessing privileged or device registers and of accessing memory are different from those for an OS running in a non-virtualized environment.
Usually, for a guest OS, the cost of accessing memory is comparable to the cost for an OS performing the same action in a non-virtualized environment. However, accessing privileged or device registers requires guest exits, and thus incurs significant additional overhead compared to the same action in a non-virtualized environment.
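You can observe this difference directly with a small measurement program. The sketch below assumes a QNX guest and uses ClockCycles() to time repeated accesses; DEV_REG_PADDR is a placeholder that you must replace with a device register that is safe to read repeatedly on your board. Built once and run both natively and in a VM, the register-read cost should rise far more than the memory-read cost.

    /* Minimal sketch of measuring per-access cost, assuming a QNX guest.
     * DEV_REG_PADDR is a placeholder: substitute a device register that
     * is safe to read repeatedly on your board. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/mman.h>       /* mmap_device_memory() */
    #include <sys/neutrino.h>   /* ClockCycles() */

    #define DEV_REG_PADDR   0xf0000000u  /* placeholder physical address */
    #define ITERATIONS      100000

    int main(void) {
        volatile uint32_t *reg;
        volatile uint32_t mem = 0;
        uint64_t t0, t1;
        uint32_t sink = 0;
        int i;

        reg = mmap_device_memory(NULL, sizeof(*reg),
                PROT_READ | PROT_NOCACHE, 0, DEV_REG_PADDR);
        if (reg == MAP_FAILED) {
            perror("mmap_device_memory");
            return EXIT_FAILURE;
        }

        /* Ordinary memory reads: roughly the same cost in a VM as natively. */
        t0 = ClockCycles();
        for (i = 0; i < ITERATIONS; i++) sink += mem;
        t1 = ClockCycles();
        printf("memory read:   %llu cycles/access\n",
               (unsigned long long)((t1 - t0) / ITERATIONS));

        /* Device-register reads: if the hypervisor must emulate the access,
         * each read forces a guest exit, so the per-access cost in a VM can
         * be much higher than the native cost. */
        t0 = ClockCycles();
        for (i = 0; i < ITERATIONS; i++) sink += *reg;
        t1 = ClockCycles();
        printf("register read: %llu cycles/access\n",
               (unsigned long long)((t1 - t0) / ITERATIONS));

        return (int)(sink & 1);  /* use sink so the loops aren't optimized away */
    }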
For instructions on how to get more information about what your hypervisor and its guests are doing, see the Monitoring and Troubleshooting chapter.
Measuring overhead
Finding sources of overhead requires analysis both from the top down and from the bottom up.
Top-down analysis
- Run a system in a non-virtualized environment, and record a benchmark (N, for native). Run the same system in a VM, and record the same benchmark information (V, for virtual). Usually benchmark N will show better performance, though the opposite is possible if the VM is able to optimize an inefficient guest OS.
- Assuming that benchmark N was better than benchmark V, adjust the virtual environment to isolate the source of the highest additional overhead. If benchmark V was better, you may want to examine your guest OS in a non-virtualized environment before proceeding.
- When you have identified the sources of the most significant increases to overhead, you will know where your tuning efforts are likely to yield the greatest benefits.
Bottom-up analysis
- Run one or more guests in your hypervisor system.
- Record every hypervisor event over a specific time interval (T).
- Use the data recorded in T to analyze the system.
The hypervisor events include all guest exits. Guest exits are a significant source of overhead in a hypervisor system (see Guest exits and Guest-triggered exits in this chapter).
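On a QNX host, one way to capture such an event record is the tracelogger utility, with traceprinter to turn the binary log into readable text. The invocation below is a sketch; option behavior varies between QNX versions, so check the entries in your Utilities Reference.

    # Sketch: capture instrumented-kernel trace events over an interval,
    # then dump them as text (check your Utilities Reference for options).
    tracelogger -n 10 -f /tmp/hyp_trace.kev
    traceprinter -f /tmp/hyp_trace.kev > /tmp/hyp_trace.txt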
SoC configuration
QNX hypervisors include a QNX OS microkernel with support for virtualization. As with all QNX microkernel systems, the bootloader and startup code pre-configure the SoC, including use of the physical CPUs (e.g., SMP configuration) and memory cache configuration. The hypervisor doesn't modify this configuration; however, you can change it yourself to improve performance.
For more information about how to change the bootloader and startup code, see Building Embedded Systems in the QNX OS documentation, and your Board Support Package (BSP) User's Guide.
Multiprocessing
The QNX hypervisor supports both symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP). Also, you can configure threads in the hypervisor host to run on specific physical CPUs (pCPUs). This includes vCPU threads and other threads in your qvm processes. You can pin these threads to one or several pCPUs by using the cpu cluster option when you assemble your VMs, as in the sketch below.
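For example, a VM configuration might allow one vCPU thread to run on pCPUs 2 and 3 while pinning another to pCPU 1. The fragment below is a hypothetical sketch, not a tested configuration; check the VM configuration reference for your hypervisor version for the exact cpu and cluster syntax.

    # Hypothetical VM configuration fragment pinning vCPU threads to pCPUs;
    # the values are placeholders to show the idea.
    cpu cluster 2,3     # this vCPU thread may run on pCPU 2 or pCPU 3
    cpu cluster 1       # this vCPU thread is pinned to pCPU 1
    ram 512M
    load /vm/guest.ifs  # placeholder guest image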
For more information about QNX OS support for multiprocessing, see the Multicore Processing chapter in the QNX OS Programmer's Guide.
