Re: The number of Post-launch RTVM
For graphics, the default config in the latest release is actually GVT-d, so we suggest you pass the GPU straight through to the Windows guest on your Whiskey Lake board.
The reason: for RTVMs we noticed an impact on RT performance when GVT-g is shared with other VMs. So you can run the control application in the RTVM, and report status / receive commands via WaaG, which owns the UI/graphics.
Here is the WaaG sample launch script line with GVT-d: -s 2,passthru,0/2/0,gpu \
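For context, a minimal GVT-d launch might look like the sketch below. This is not the exact ACRN sample script: the memory size, OVMF path, and VM name are my illustrative assumptions; 0/2/0 is the iGPU's BDF (00:02.0) on typical Intel platforms. Check your release's launch script for the real arguments.

```shell
# Hypothetical minimal WaaG launch sketch with GVT-d (illustrative values).
# 0/2/0 is the iGPU at BDF 00:02.0, passed through to the guest.
acrn-dm -A -m 4096M \
    -s 0:0,hostbridge \
    -s 2,passthru,0/2/0,gpu \
    --ovmf /usr/share/acrn/bios/OVMF.bin \
    WaaG
```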
But if you still want to try GVT-g on WHL with the legacy v2.0 release, you can also find the WaaG sample launch script line: -s 2,pci-gvt -G "$2" \ — when the script is invoked as launch_win 1 "64 448 8", the string "64 448 8" is "$2". That is the default config for low_gm_sz (64 MB), high_gm_sz (448 MB), and fence_sz (8).
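To make the "$2" mapping explicit, here is a hypothetical sketch (not the real ACRN launch script) of how such a launch_win wrapper forwards its second argument into -G; the dry-run echo stands in for the real, much longer acrn-dm command line:

```shell
#!/bin/sh
# Hypothetical sketch, NOT the real ACRN launch script: shows how a
# launch_win wrapper would forward its second argument into acrn-dm's -G.
launch_win() {
    vm_id="$1"
    gvt_args="$2"   # "low_gm_sz high_gm_sz fence_sz", e.g. "64 448 8"
    # Dry run: print the (abbreviated) command instead of executing acrn-dm.
    echo acrn-dm -s 2,pci-gvt -G "\"$gvt_args\"" vm"$vm_id"
}

launch_win 1 "64 448 8"
# prints: acrn-dm -s 2,pci-gvt -G "64 448 8" vm1
```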
Note that this memory allocation in the WaaG launch script is dedicated memory assigned to that VM; you cannot allocate an exact GPU resource percentage to each VM. Instead, GVT-g maintains a workload queue with tasks from each vGPU, schedules them via the 'workload scheduler', and then submits them to the i915 driver. More details in: https://projectacrn.github.io/2.0/developer-guides/hld/hld-APL_GVT-g.html#workload-scheduler
BTW, there are also some other memory settings in the latest GVT GitHub wiki. We didn't try them with AcrnGT, so this is just for your reference: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide#53-create-vgpu-kvmgt-only
vGPU type | aperture size (MB) | hidden memory size (MB) | fence size | weight
    8     |         64         |           384           |     4      |    2
    4     |        128         |           512           |     4      |    4
    2     |        256         |          1024           |     4      |    8
    1     |        512         |          2048           |     4      |   16
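For reference, the wiki linked above creates those KVMGT vGPUs through sysfs mdev types (KVMGT only, not AcrnGT — this is just to show where the table's values come from). The UUID and BDF below are illustrative:

```shell
# KVMGT-only sketch from the linked wiki, not applicable to AcrnGT.
# Each mdev type encodes one row of the table above, e.g. i915-GVTg_V5_4
# is the 128 MB aperture / 512 MB hidden type. UUID and BDF are examples.
echo "a297db4a-f4c2-11e6-90f6-d3b88d6c9525" > \
    /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```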
Best & Regards
From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, July 27, 2021 9:29 PM
Subject: Re: [acrn-users] The number of Post-launch RTVM
Hi Terry, hi Geoffroy,
Both of your responses mean a lot to me. We are trying to find ways to use as many RTVMs as possible for our applications. A pre-launched RTVM would be better for me, but a post-launched RTVM is also possible.
Besides the RTVMs, I have a new configuration question regarding GVT-g.
We chose the Whiskey Lake board as the first testing board; it is suggested by the ACRN project and supported by GVT-g.
Based on the acrn-dm command (or the launch XML), the only way to set the GVT args is:
-G, --gvtargs <GVT_args>
ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This option allows you to set some of its parameters.
GVT_args format: low_gm_sz high_gm_sz fence_sz
· low_gm_sz: GVT-g aperture size, unit is MB
· high_gm_sz: GVT-g hidden gfx memory size, unit is MB
· fence_sz: the number of fence registers
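For example, the documented format would be used like this (my own illustrative fragment — the slot number and memory size are assumptions, and "64 448 8" is the default from the sample script above):

```shell
# Illustrative only: a GVT-g guest slot plus the -G string documented above.
# "64 448 8" = low_gm_sz 64 MB, high_gm_sz 448 MB, 8 fence registers.
acrn-dm -A -m 2048M \
    -s 0:0,hostbridge \
    -s 2,pci-gvt -G "64 448 8" \
    vm1
```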
Now, for example, suppose we have two User VMs (whether RTVM or standard UOS).
How can I use these args to share the iGPU between them, e.g., 40 % to UOS1 and 60 % to UOS2?
I do not know how to set low_gm_sz, high_gm_sz, and fence_sz to achieve this goal.
Sorry, I have so many questions.
Thank you very much again for your response.
Yes, it is doable to modify the default 'Hybrid RT' scenario and add more pre-launched RTVMs. The maximum number of pre-launched RTVMs is limited by your platform's resources: each pre-launched VM only supports partitioned/passed-through CPU cores, Ethernet, and storage... With post-launched RTVMs, however, you can still share some resources that are not critical to RT performance between RTVMs (e.g., storage), and power/VM management can live in the Service OS.
So unless you have strict isolation, safety, or RT performance requirements — e.g., keeping the RTVM alive even when the Service OS reboots — for multiple RTVMs I would suggest launching more post-launched RTVMs, so you get both resource-sharing flexibility and device pass-through performance.
Anyway @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failure step (or file a Git issue) if anything happens, and we will analyze it and help you move forward 😊
Best & Regards
Hi Jianjie Lin, Terry,
@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario with multiple pre-launched RTVMs in it?
The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉
A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.
As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated!
Thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios in: https://projectacrn.github.io/latest/introduction/index.html#id3. The most flexible is the 'industry' scenario, which runs up to 8 VMs including the Service VM, so 7 User VMs in this config.
Specific to RTVMs, the maximum number is also limited by your target platform's capabilities: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT), a passed-through Ethernet port, and possibly even a dedicated disk.
//Terry: Usually only one pre-launched RTVM; refer to the 'Hybrid RT' scenario in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario
//Terry: Depending on your platform's CPU core count and pass-through devices: usually up to 3~4 post-launched RTVMs with 8 cores and 4 Ethernet ports.
//Terry: there is one kata-container VM in ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.
//Terry: From the pass-through device perspective, pre- and post-launched RTVMs are equally isolated. From the OS management perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can stay alive and working even when the Service OS reboots after an abnormal failure.
//Terry: Basically, a Kata container may not guarantee the determinism/latency of an RT task, so only user-level applications, e.g., AI computing workloads, are recommended in containers.
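To make the 8-core / 4-NIC estimate above concrete, here is a back-of-the-envelope sketch under my own assumptions (the Service VM keeps at least 1 core; each post-launched RTVM needs 2 dedicated cores plus 1 pass-through Ethernet port):

```shell
#!/bin/sh
# Back-of-the-envelope RTVM budget. Assumptions (mine, not ACRN's):
# the Service VM keeps at least 1 core, and each post-launched RTVM
# needs 2 dedicated cores plus 1 pass-through Ethernet port.
cores=8
nics=4
by_cores=$(( (cores - 1) / 2 ))   # how many RTVMs fit by core count
by_nics=$nics                     # how many RTVMs fit by NIC count
if [ "$by_cores" -lt "$by_nics" ]; then
    max_rtvms=$by_cores
else
    max_rtvms=$by_nics
fi
echo "$max_rtvms"
# prints: 3  (the lower end of the 3~4 estimate above)
```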
Best & Regards
Hello ACRN community,
We have a hypervisor project, and we require some real-time VMs. After reading the documentation from the ACRN project, I have some questions:
Thank you very much in advance for your replies and explanations.
Kind regards