cpu
Create a new vCPU in the VM
Synopsis:
cpu [options]*
Options:
- partition name
- If adaptive partitioning (APS) is implemented in the hypervisor host domain, run the vCPU in the host domain APS partition specified by name. If the partition option isn't specified, the vCPU thread will run in the partition where the qvm process was started.
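For instance, assuming an APS partition named drivers has been set up in the hypervisor host domain (the partition name here is illustrative only), the vCPU thread could be placed in it as follows:

cpu partition drivers sched 10    # run this vCPU thread in the "drivers" APS partition

If the drivers partition doesn't exist at startup, the configuration fails; the partition must be created in the host domain before the qvm process starts.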
- runmask cpu_number{,cpu_number}
- Allow the vCPU to run only on the specified physical CPUs (pCPUs); this is known as core pinning. CPU numbering is zero-based. The default is no restrictions (floating).
- sched priority[r | f | o]
- sched high_priority,low_priority,max_replacements,replacement_period,initial_budgets
- Set the vCPU's scheduling priority and scheduling algorithm. The algorithm can be round-robin (r), FIFO (f), or sporadic (s). The other (o) algorithm is reserved for future use; currently it is equivalent to r.
Description:
The cpu option creates a new vCPU in the VM. Every vCPU is a thread, so a runmask can be used to restrict the vCPU to specific physical CPUs. Standard thread scheduling priorities and algorithms can be applied to the vCPU. Note that vCPU threads are threads in the hypervisor host domain.
If no cpu option is specified, the qvm process instance creates a single vCPU.
For more information, see vCPUs and hypervisor performance in the Performance Tuning chapter.
Configuring sporadic scheduling
For sporadic scheduling, you need to specify the following five parameters:
- high_priority – the high priority value
- low_priority – the low priority value
- max_replacements – the maximum number of times the vCPU's budget can be replenished due to blocking
- replacement_period – the number of nanoseconds that must elapse before the vCPU's budget can be replenished after being blocked, or after overrunning max_replacements
- initial_budget – the number of nanoseconds to run at high_priority before being dropped to low_priority
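Putting the five parameters together, a sporadic-scheduling configuration might look like the following (the numeric values are illustrative only, not recommendations; the trailing s selects the sporadic algorithm, following the same pattern as the r in sched 8r):

cpu sched 10,8,5,40000000,10000000s    # high 10, low 8, 5 replacements, 40 ms period, 10 ms initial budget

With these values, the vCPU runs at priority 10 until it has consumed its 10 ms budget, then drops to priority 8 until the budget is replenished.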
Maximum vCPUs per guest
The maximum number of vCPUs that may be defined for each guest running in a hypervisor VM is limited by a number of factors:
- Hardware
- On supported AArch64 (ARMv8) and x86-64 platforms, the hardware currently allows a maximum of 254 vCPUs on the board. This number may change with newer hardware.
- Guest OS
- Current QNX OSs support a maximum of 32 CPUs (except on ARM boards with GICv2, for which the limit is 8 CPUs). This limit also applies to vCPUs, since a guest OS makes no distinction between a CPU and a vCPU.
Examples:
Example 1: pin vCPU, set scheduling priority
cpu runmask 3 sched 8r
The vCPU is pinned to pCPU 3. The priority is 8, and the scheduling algorithm is round-robin.
Example 2: floating vCPUs, set scheduling priority
cpu sched 10
cpu sched 10
cpu sched 10
cpu sched 10
The runmask option isn't specified, so the default of no restrictions (floating) is used.
Since no processor affinity has been specified for any of the vCPU threads, the hypervisor microkernel scheduler can run each vCPU thread on whatever available physical CPU it deems most appropriate.
Example 3: two vCPUs pinned to physical CPUs, default scheduling
cpu runmask 2,3 # vCPU 0 may run only on pCPU 2 or 3.
cpu runmask 2,3 # vCPU 1 may run only on pCPU 2 or 3.
cpu # vCPU 2 may run on any pCPU.
cpu # vCPU 3 may run on any pCPU.
For vCPUs 0 and 1, their runmask options are set to pin them to pCPUs 2 and 3. This allows them to run only on these pCPUs; they won't migrate to pCPU 0 or 1 even if these pCPUs are idle. No runmask option is specified for vCPUs 2 and 3, so they will use the default (no restrictions). They can run on any available physical CPU (including pCPUs 2 and 3).
For information about how priorities for hypervisor threads and guest threads are handled, see Scheduling in the Understanding QNX Virtual Environments chapter.
For more information about processor affinity and scheduling, see the Processor affinity, clusters, runmasks, and inherit masks topic in the Multicore Processing chapter of the QNX Neutrino Programmer's Guide.
