Memory

In a QNX virtualized environment, the guest-physical memory that a guest sees as contiguous physical memory may in fact be discontiguous host-physical memory assembled by the virtualization layer.

A guest in a QNX virtualized environment uses memory for:

  • its OS image and the applications it runs
  • its devices (virtual and pass-through)
  • any memory regions it shares with other guests

Note the following about memory in a QNX hypervisor:

  • With the exception of shared memory, memory allocated to a VM is for the exclusive use of the guest hosted by the VM; that is, the address space for each guest is exclusive and independent of the address space of any other guest in the hypervisor system.
  • If there isn't enough free memory on the system to complete the configured memory allocation for a VM, the hypervisor won't complete the configuration and won't start the VM.
  • If the memory allocated to its hosting VM is insufficient for a guest, the guest can't start, no matter how much memory is available on the board.
  • You can limit the total memory usage of the VM by defining RLIMIT_AS for the qvm process. Defining this limit helps prevent runaway memory usage by a guest, whether the memory is shared memory, memory exclusive to that guest, or pass-through memory.

    The on launcher utility provides a way to do this; refer to the on documentation in the QNX OS Utilities Reference for details. For instance, a guest that's given 1 GB of RAM in its configuration file and has modest pass-through memory can be given a limit of 2 GB as follows:
    on -L 6:2000000000:2000000000 qvm @myguest.qvmconf
    You can then verify this memory usage limit with pidin:
    pidin -F "%N %#" -P qvm
  • To prevent information leakage, the hypervisor zeroes memory before it allocates it to a VM (memory used for pass-through devices is the exception). Depending on the amount of memory assigned to the guest, this zeroing may take several seconds.

Memory in a virtualized environment

In a QNX virtualized environment, a guest configured with 1 GB of RAM will see 1 GB available to it, just as it would if it were running in a non-virtualized environment. This memory allocation appears to the guest as contiguous physical memory; it is in fact assembled by the virtualization configuration from discontiguous physical memory. ARM calls this assembled memory intermediate physical memory; Intel calls it guest physical memory. For simplicity, we use guest-physical memory, regardless of the platform (see Guest-physical memory in the Terminology section).

When you are configuring and accessing memory in a QNX virtualized environment, it is important to remember the following:

  • The total amount of memory allocated to guests and to any other uses must not exceed the physical memory available on the board.
  • Memory allocations in the hypervisor host must be in multiples of the QNX OS system page size (4 KB). However, memory smaller than a page can be passed through to the guest. An example with source code for a vdev that passes through small amounts of memory is provided in the QNX Hypervisor GitLab Repository at https://gitlab.com/qnx/hypervisor.
  • Guest-physical memory appears to the guest OS just like physical memory would appear to it in a non-virtualized system; for example, on x86 systems it would have the same gaps for legacy devices.
  • The addresses a guest sees as physical addresses are in fact addresses in guest-physical memory, which the qvm process assembles when it creates the VM.
  • There is no correspondence between the address a guest sees (guest-physical address) and the physical address to which the guest-physical address translates below the virtualization layer (host-physical address).
Figure 1: Illustration of how the memory a guest OS sees as contiguous physical memory is assembled by the virtualization configuration from unrelated regions of physical memory.


The diagram above presents a simplified illustration of how, in a QNX virtualized system, guest memory allocations are assembled from discontiguous chunks of physical memory. To simplify the diagram, the memory allocation for Guest 1 is incomplete, and in the interests of legibility a gap has been added between the guests. Note also that some regions of physical memory may be reserved (e.g., for devices the board architecture requires at specific locations) and can't be allocated to a guest.

When you configure memory for guests, you need to configure the size of the memory allocation, and any platform-specifics, such as gaps for legacy devices (x86) or the RAM start address (ARM). The hypervisor looks after assembling blocks of physical memory into allocations of guest-physical memory for each guest.
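For example, to give a guest 1 GB of contiguous guest-physical RAM starting at 0x80000000 (a common RAM start address on ARM platforms), a single ram line in the guest's qvm configuration file is enough; the qvm process looks after finding and assembling the host-physical memory. This is a minimal sketch, and the address and size are illustrative only:

    # Guest sees 1024 MB of contiguous RAM at guest-physical 0x80000000;
    # qvm assembles it from available host-physical memory.
    ram 0x80000000,1024M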

Memory mapping for pass-through devices

A guest that accesses physical devices as pass-through devices needs to map them into the memory regions configured to be accessible to that guest. Note that there is no correspondence between the physical address the guest sees and the physical address seen from the hypervisor host domain.

Figure 2: Illustration of how the guest-physical address for a pass-through device doesn't correspond to the host-physical address of the device.


For example, Device A may be configured at 0x100 in Guest 0's physical memory. In the hypervisor host, this device may be configured as a pass-through device for Guest 0 at this same location (0x100), so when Guest 0 needs to access the device, it looks at 0x100. However, remember that the guest-physical address that a guest sees translates into some other address in host-physical memory, say 0xC00000100.
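In a qvm configuration file, a device such as this is typically set up with pass directives. The sketch below is an assumption: the addresses follow the example above, the interrupt assignment is invented, and you should check the pass entry in the VM Configuration Reference chapter for the exact syntax before using it:

    # Sketch only: make the device's registers visible to the guest at
    # guest-physical 0x100 (all values illustrative).
    pass loc 0x100,0x1000,rw
    pass intr gic:40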

For more information about pass-through devices, see Pass-through devices in this chapter.

Shared memory

Portions of physical memory can be allocated for sharing between guests. Guests use a virtual device such as vdev-shmem to attach to the same physical address (PA) and use the shared memory area to exchange data, notifying each other whenever new data is available or has been read (see the sketch after the note below).

Note:
The PA is actually the host-physical address, not the guest-physical address. The guest-physical addresses will probably differ between the guests.
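For illustration, each guest's qvm configuration file would attach a shmem vdev referring to the same region. The sketch below is hypothetical: the location and interrupt values are invented, and the exact vdev-shmem options are described in the vdev-shmem documentation, which you should consult before using them:

    # Sketch only: each guest's configuration attaches vdev-shmem to the
    # same host-physical region (all values illustrative).
    vdev shmem loc 0x90000000 intr gic:41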
Figure 3: Illustration of memory shared between two guests. Each guest sees the memory as its own.


For information about how to implement memory sharing, see Memory sharing in the Using Virtual Devices, Networking, and Memory Sharing chapter.

Configuring memory

To allocate memory for a virtual machine (VM), you must allocate the memory required by each part of the system. You don't allocate all of the memory for the VM in one chunk; rather, you allocate the memory required for the image, then the memory required for your various devices, and so on (see the sketch below).
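For instance, a guest's memory might be assembled from several separate allocations. The following sketch uses only the ram and load directives shown later in this section; all addresses and sizes are illustrative:

    ram 0x80000000,512M                    # RAM for the guest OS and applications
    ram 0xc0000000,256M                    # an additional RAM region
    load 0x80000000,/vm/images/qnx8.ifs    # bootable image in the first region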

Many systems have reserved areas of memory. This is particularly true of x86 systems, which require many vestigial devices at specific locations. These constraints also apply to guests running in virtualized environments, because the guest OSs expect the vestigial devices to be present.

When you allocate memory, it may be efficient to specify only the location of the bootable image in guest-physical memory, and let the qvm process pick the location for devices, shared memory, etc.

If a RAM location is to be used for the guest's bootable image, the guest must be configured to look for the image at this address inside its memory. In addition, if you use the qvm configuration load component to load your bootable image, the address where you load the image must match your guest's configuration. (If you don't specify the address, the qvm process looks after this for you.)

For example, for a QNX guest, you might do the following:

  • Allocate RAM for the guest image with a start address in guest-physical memory of 0x80000000:
    ram 0x80000000,128M
  • Load the bootable image to this location:
    load 0x80000000,/vm/images/qnx8.ifs
  • Specify this location in the guest's buildfile:
    [image=0x80000000]
    [virtual=aarch64le,elf] .bootstrap = {
       [+keeplinked] startup-armv8_fm -v -H
       [+keeplinked] PATH=/proc/boot procnto -v
    }

See ram in the VM Configuration Reference chapter.

DMA device containment

The hypervisor can use an IOMMU/SMMU Manager such as smmuman to ensure that no DMA pass-through device is able to access host-physical memory to which it has not been explicitly granted access.

For more information about the smmuman service and how to use it, see DMA device containment in the QNX Hypervisor: Protection Features chapter, and the SMMUMAN User's Guide. For information about using another component to manage IOMMU/SMMU services, see the smmu-externally-managed configuration variable description.
