Configuring guests

Guest images in a QNX virtualized environment are configured in the same way as they are configured in a non-virtualized environment. For example, for QNX guests, use a buildfile.

CAUTION:
When you configure a guest, make sure its configuration aligns with the configuration of the VM in which it will run, just as you would have to make sure that an OS is configured properly to run on a specific hardware platform. This means the guest must match the VM in terms of architecture, board-specifics, memory and CPUs, devices, etc. For more information, see Assembling and configuring VMs.

Configuration of any resources a guest needs (such as devices and memory regions), the interrupts delivered to the guest, and the host-domain vCPU threads that run guest code is all done through the VM configuration file. The sections that follow explain how to configure each of these components.

Information about building guest images is given in Building guests in the Building a QNX Hypervisor System chapter. Information about starting guests is provided in Starting a guest in the Booting and Shutting Down chapter.

Guest resource types

For a guest, every resource exists in a specific location (or space). Both interrupts and devices are represented as resources in distinct spaces. In the VM configuration, interrupts are specified through the intr keyword, which can be applied to the pass option if the interrupt is being passed through, or to the vdev option if it is being generated by a virtual device (vdev). Details on making interrupts visible to the guest are given below in Guest interrupts.

Devices are accessed in the guest as collections of resources. In the VM configuration, device resource types are specified through the loc keyword. This keyword is applied to the pass option if the resource is accessed as a pass-through device, or to the vdev option if the resource is accessed as a vdev. The loc keyword can be followed by one of the following resource type identifiers (and other related parameters):
mem:address ...
The location is a guest-physical address (intermediate physical address). The first parameter gives the physical address as understood by the guest.

For an explanation and examples of all parameters that can be used with this identifier for pass-through devices, see the first form of the pass loc option in the VM Configuration Reference chapter.

When this option is applied to virtual devices, it typically specifies the base address of the device registers. For more information, see the entry for the specific vdev you're using in the Virtual Device Reference chapter.

io:port ...
The location is in the x86 I/O space. The first parameter is the port number.

For an explanation of all parameters supported for this identifier, see the first form of the pass loc option if you're using a pass-through device, or the appropriate vdev entry in the Virtual Device Reference chapter if you're using a virtual device.

pci:{vid/did[/idx]|bus:dev[.func] ...}
The location is in the PCI function space. The PCI function can be specified either by the vendor ID, device ID, and if necessary, the function index, or by the PCI bus, device, and optionally, function number. If the function number isn't given, it's assumed to be function 0, which is always present.

For an explanation and examples of these two different ways of accessing PCI functionality, see the second form and the third form of the pass loc option if you're passing through the PCI function, or the appropriate vdev entry in the Virtual Device Reference chapter if you're using a virtual device to provide the PCI function.

devid:guest_base,length ...
The location is a range of Device IDs visible to the guest, starting at guest_base and spanning length IDs. Because this resource type is meant for supporting devices capable of using Message Signaled Interrupts (MSIs), it is supported only for the pass loc option. For more information, see the fourth form of the pass loc option.

If no resource type identifier is specified for a loc option, a suitable default type is chosen. For pass-through devices, the default type is mem:. For vdevs, the most common default type is also mem:, but it varies from one device to another. For example, the default for VIRTIO devices is pci:, though these devices can also be specified as mem: resources. For more information, see the relevant vdev entry in the Virtual Device Reference chapter.
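To make these resource types concrete, the following fragment sketches one pass entry for each identifier. Every address, port, ID, and length shown here is illustrative only; the exact parameter lists are given under the pass loc option in the VM Configuration Reference chapter:
# guest-physical memory range (first form); address and length are illustrative
pass loc mem:0xf1000000,0x1000
# x86 I/O port range; port number and count are illustrative
pass loc io:0x3f8,8
# PCI function, specified by vendor/device ID or by bus:dev.func (both illustrative)
pass loc pci:0x8086/0x10d3
pass loc pci:0:2.0
# Device ID range for MSI-capable devices; base and length are illustrative
pass loc devid:0x10,4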

Guest interrupts

A guest interrupt is specified by a pass intr or vdev intr entry in the VM configuration, depending on whether the interrupt is being passed through (i.e., routed by the host from the device to the guest) or delivered through a vdev (i.e., generated by vdev code running in the host).

In the pass-through case, the intr option can either provide the interrupt number to use in the guest or refer to a Programmable Interrupt Controller (PIC) on the host that defines the mapping between the host vector number and the interrupt number for the guest. For more details, see the pass entry in the VM Configuration Reference chapter.
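As a sketch of the pass-through case, a device's registers and its interrupt might be given to a guest as follows. The address, length, and interrupt numbers are illustrative, and the controller name (here the ARM GIC) depends on the platform; see the pass entry in the VM Configuration Reference chapter for the actual parameter forms:
pass loc mem:0x1c0f0000,0x1000
pass intr gic:68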

When a vdev is used to deliver the interrupt, the intr option either refers to the PIC hardware on the host that guest vdevs can send their interrupts to, or it names the guest device interrupt controller and specifies the interrupt line to be asserted by the device. For more information about this option's parameters, see the Common vdev options section in the Virtual Device Reference chapter. Here, we give some examples of how to use vdevs to provide interrupts on the various supported architectures.

On x86 platforms, the Local Advanced Programmable Interrupt Controller (LAPIC) hardware is automatically supplied. There is no need to specify a vdev for it, and guest vdevs should simply specify intr apic for their interrupts; no input line number needs to be stated.

For example, the following creates a virtual I/O APIC device and a ser8250 serial (non-APIC) device on an x86 system (the host device path shown is illustrative):
vdev ioapic
    intr apic
    name myioapic
vdev ser8250
    hostdev /dev/ser1
    intr myioapic:4

On ARM platforms, the Generic Interrupt Controller (GIC) hardware is automatically supplied; it is not necessary to specify this vdev. You can still specify it if you want to change its option values, including the input line that gets asserted (for details, see the vdev gic reference). The default name for guest devices that feed interrupts to the GIC is gic, but you can use the vdev's name property to change this.

The following example renames the guest's GIC interrupt controller to mygic, then creates a virtual PL011 device on an ARM system that asserts input line 37 of that controller:
vdev gic
    name mygic
vdev pl011
    loc 0x1c090000
    intr mygic:37

Recommendations in configuring vCPUs

Many OSs auto-detect functionality offered by the underlying CPUs. For guest OSs in a hypervisor system, you should usually configure a vCPU to run only on pCPUs (cores) of the same type. For details on doing so, see the cpu cluster option. If you want to run a vCPU on different core types, make sure you know which CPU features the guest will use, and restrict the vCPU to a cluster of pCPUs that all support those features.
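As a hypothetical sketch only (the exact syntax is defined by the cpu cluster option in the VM Configuration Reference chapter), restricting a vCPU to a cluster of identical cores might look like this, where the cluster name is an assumption for illustration:
# restrict this vCPU to the cores in the cluster named "cluster0" (name illustrative)
cpu cluster cluster0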

You can set a vCPU's priority in the VM configuration; the underlying qvm process applies this priority when starting the guest. Afterwards, there's no qvm mechanism for changing it. In the hypervisor host, you can adjust the priority of any thread, including a vCPU thread, via QNX OS mechanisms such as ThreadCtl(). It is best practice to configure all the vCPUs in a given VM with the same thread priority, whether at startup or at any time afterwards. Aside from priority, though, you should avoid adjusting any scheduling parameters of vCPU threads after startup, as explained in the Adjusting thread scheduling parameters section of the Virtual Device Developer's Guide.

Also, note that thread priorities within a guest are distinct from those visible to the host scheduler. If a ready-to-run host thread has a higher priority than a vCPU thread (even if that host thread's priority is low by host standards), the host scheduler will run it in preference to the vCPU; it does not matter that the vCPU is executing a high-priority guest thread, because guest-internal priorities are invisible to the host. For further explanation, see the Scheduling section in the Understanding QNX Virtual Environments chapter.
