WHL-IPC-i7 GPU display problem
Jianjie Lin
Hi ACRN-community,
Our team has been facing a GVT-d/GVT-g problem for a long time. We have tried all the possibilities suggested by the tutorials and the GitHub resources; however, the issue remains. Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing and pass-through, which is fully supported by the ACRN hypervisor.
We first tried release v2.5 as a starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or hangs for an hour with no further activity. We then went back to v2.0, since according to the issue reports GVT-g has been a known issue since v2.1. However, it still does not help after installing release v2.0.
I attach three files for your diagnosis.
Thank you very much for your support. Looking forward to your reply.
Kind regards, Jianjie Lin
2021 ACRN Project Technical Community Meeting Minutes - WW34'21
Zou, Terry
ACRN Project TCM - 18th Aug 2021
Location
Attendees (Total 15, 18/08)
Note: If you need to edit this document, please ask for access. We disabled anonymous editing to keep track of changes and identify the owners of the opens and agenda items.
Opens Note: When adding opens or agenda items, please provide details (not only links), add your name next to the item you added, and specify your expectation from the TCM.
Agenda
Download foil from ACRN Presentation->WW34’21
Description: We will introduce the background of one storage requirement and the polling vs. interrupt mode design, with a performance comparison.
Q&A:
A: From a functional perspective, yes, it is doable to enable 2 RTVMs sharing one storage device. But the real-time and storage performance would be a little bit different.
A: No, the RTVM uses VirtIO polling mode, but WaaG can use VirtIO interrupt mode.
Marketing/Events N/A
Resources Project URL:
Re: How to enable/disable Intel turbo boost technology
Victor Sun
Yes, you got it.
BR, Victor
From: acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hi Victor,
Thank you very much for your quick response. To avoid any misunderstanding, I would like to confirm what you mean by “pCPU” in your response: do you mean a core or a whole physical CPU?
On the Whiskey Lake board we have only one CPU, but it contains 4 cores. In my scenario, I assign Core 1 and Core 2 to UOS1 and Core 3 and Core 4 to UOS2.
Now, what I want is to enable turbo boost on Core 1 and Core 2 for UOS1 but disable it on Core 3 and Core 4 for UOS2. Can this be achieved by putting “cpufreq.off=1” in the bootargs setting? Thank you again for your support. BR, Jianjie
From: acrn-users@... [mailto:acrn-users@...]
On Behalf Of Victor Sun
Hi Jianjie, CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. no CPU sharing between VMs. ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states through either the acpi-cpufreq driver or, if the pCPU supports it, the intel_pstate driver. For pre-launched VMs, only intel_pstate is supported.
Yes, we could use a different device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement it. For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.
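Victor's suggestion can be scripted when generating per-UOS launch configurations. A minimal sketch (the helper name is hypothetical; only the -B/bootargs behavior and the 1023-character limit come from this thread):

```python
def build_bootargs(base: str, disable_cpufreq: bool) -> str:
    """Build the string passed to acrn-dm via -B "<bootargs>"."""
    args = base.split()
    if disable_cpufreq and "cpufreq.off=1" not in args:
        args.append("cpufreq.off=1")  # per Victor: disables turbo in a Linux guest
    cmdline = " ".join(args)
    assert len(cmdline) <= 1023, "acrn-dm limits bootargs to 1023 characters"
    return cmdline

# UOS1 keeps cpufreq (and turbo) enabled; UOS2 disables it.
uos1_bootargs = build_bootargs("loglevel=7", disable_cpufreq=False)  # "loglevel=7"
uos2_bootargs = build_bootargs("loglevel=7", disable_cpufreq=True)   # "loglevel=7 cpufreq.off=1"
```

Each resulting string is then quoted as a whole when placed after -B in the launch script.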
BR, Victor
From:
acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN project team,
I would like to ask about enabling and disabling Intel Turbo Boost technology. As far as I know, on a PC I have to disable Intel Turbo Boost in the BIOS settings before booting the system. But with a type-1 hypervisor, is it possible to configure Turbo Boost for each UOS individually?
For example, I have two UOSs: UOS1 and UOS2. I want to enable Turbo Boost in UOS1 but disable it in UOS2.
I found that the device model offers a flag:
-B, --bootargs <bootargs>
Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline. Example: -B "loglevel=7" specifies kernel log level 7.
Therefore, I would like to ask whether it is possible to use this flag to set this parameter. If so, how should I write the cmdline? Thank you very much for your support.
Cheers, Jianjie Lin
Re: How to enable/disable Intel turbo boost technology
Jianjie Lin
Hi Victor,
Thank you very much for your quick response. To avoid any misunderstanding, I would like to confirm what you mean by “pCPU” in your response: do you mean a core or a whole physical CPU?
On the Whiskey Lake board we have only one CPU, but it contains 4 cores. In my scenario, I assign Core 1 and Core 2 to UOS1 and Core 3 and Core 4 to UOS2.
Now, what I want is to enable turbo boost on Core 1 and Core 2 for UOS1 but disable it on Core 3 and Core 4 for UOS2. Can this be achieved by putting “cpufreq.off=1” in the bootargs setting? Thank you again for your support. BR, Jianjie
From: acrn-users@... [mailto:acrn-users@...]
On Behalf Of Victor Sun
Hi Jianjie, CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. no CPU sharing between VMs. ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states through either the acpi-cpufreq driver or, if the pCPU supports it, the intel_pstate driver. For pre-launched VMs, only intel_pstate is supported.
Yes, we could use a different device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement it. For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.
BR, Victor
From:
acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN project team,
I would like to ask about enabling and disabling Intel Turbo Boost technology. As far as I know, on a PC I have to disable Intel Turbo Boost in the BIOS settings before booting the system. But with a type-1 hypervisor, is it possible to configure Turbo Boost for each UOS individually?
For example, I have two UOSs: UOS1 and UOS2. I want to enable Turbo Boost in UOS1 but disable it in UOS2.
I found that the device model offers a flag:
-B, --bootargs <bootargs>
Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline. Example: -B "loglevel=7" specifies kernel log level 7.
Therefore, I would like to ask whether it is possible to use this flag to set this parameter. If so, how should I write the cmdline? Thank you very much for your support.
Cheers, Jianjie Lin
Re: How to enable/disable Intel turbo boost technology
Victor Sun
Hi Jianjie, CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. no CPU sharing between VMs. ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states through either the acpi-cpufreq driver or, if the pCPU supports it, the intel_pstate driver. For pre-launched VMs, only intel_pstate is supported.
Yes, we could use a different device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement it. For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.
BR, Victor
From: acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN project team,
I would like to ask about enabling and disabling Intel Turbo Boost technology. As far as I know, on a PC I have to disable Intel Turbo Boost in the BIOS settings before booting the system. But with a type-1 hypervisor, is it possible to configure Turbo Boost for each UOS individually?
For example, I have two UOSs: UOS1 and UOS2. I want to enable Turbo Boost in UOS1 but disable it in UOS2.
I found that the device model offers a flag:
-B, --bootargs <bootargs>
Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline. Example: -B "loglevel=7" specifies kernel log level 7.
Therefore, I would like to ask whether it is possible to use this flag to set this parameter. If so, how should I write the cmdline? Thank you very much for your support.
Cheers, Jianjie Lin
ACRN configuration of particular security and performance features
Jianjie Lin
Hello ACRN hypervisor team,
After looking into the hardware details of our Whiskey Lake board, many of its security and performance features caught my eye. I'd appreciate it if you could let me know whether the following features can be configured in the ACRN hypervisor, or more specifically, set for individual UOSs:
1. Advanced Vector Extensions (AVX) (https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)
2. Execute Disable Bit (https://www.intel.com/content/www/us/en/support/articles/000005771/processors.html)
3. Dynamic Acceleration Technology (https://www.techarp.com/bios-guide/intel-dynamic-acceleration/)
4. Performance and Energy Bias Hint support (EPB) (https://www.kernel.org/doc/html/latest/admin-guide/pm/intel_epb.html)
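For a Linux UOS, whether features such as AVX or the Execute Disable bit end up exposed to the guest can be checked from /proc/cpuinfo. A small sketch (the flag names avx, nx, and epb are the standard Linux cpuinfo names; the sample string below is illustrative, not from a real board):

```python
def has_cpu_flags(flags_line: str, wanted: set) -> dict:
    """Check a /proc/cpuinfo 'flags' line for the requested feature bits."""
    present = set(flags_line.split())
    return {flag: flag in present for flag in wanted}

# Inside a guest you would read the real line, e.g.:
#   flags_line = [l for l in open("/proc/cpuinfo") if l.startswith("flags")][0]
sample = "fpu vme de pse tsc msr nx avx avx2 epb"
features = has_cpu_flags(sample, {"avx", "nx", "epb"})  # all True for this sample
```

Running this in each UOS shows which feature bits the hypervisor actually passes through to that guest.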
Thank you very much in advance for your reply and support.
Kind regards, Jianjie Lin
------------------------------------------------------------------
Jianjie Lin, Technical University of Munich, Department of Informatics, Chair of Robotics, Artificial Intelligence, and Real-Time Systems, Boltzmannstr. 3, 85748 Garching bei München
How to enable/disable Intel turbo boost technology
Jianjie Lin
Hello ACRN project team,
I would like to ask about enabling and disabling Intel Turbo Boost technology. As far as I know, on a PC I have to disable Intel Turbo Boost in the BIOS settings before booting the system. But with a type-1 hypervisor, is it possible to configure Turbo Boost for each UOS individually?
For example, I have two UOSs: UOS1 and UOS2. I want to enable Turbo Boost in UOS1 but disable it in UOS2.
I found that the device model offers a flag:
-B, --bootargs <bootargs>
Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline. Example: -B "loglevel=7" specifies kernel log level 7.
Therefore, I would like to ask whether it is possible to use this flag to set this parameter. If so, how should I write the cmdline? Thank you very much for your support.
Cheers, Jianjie Lin
2021 ACRN Project Technical Community Meeting Minutes - WW31'21
Zou, Terry
ACRN Project TCM - 28th July 2021
Location
Attendees (Total 10, 28/07)
Note: If you need to edit this document, please ask for access. We disabled anonymous editing to keep track of changes and identify the owners of the opens and agenda items.
Opens Note: When adding opens or agenda items, please provide details (not only links), add your name next to the item you added, and specify your expectation from the TCM.
Agenda
Download foil from ACRN Presentation->WW31’21
Description: We will introduce the background and concept of the Config Tool 2.0 design, then some typical examples of how to use Config Tool 2.0 for real-scenario configuration.
Q&A:
A: The ACRN config tool is targeted at generating real user scenarios on top of ACRN, so not only the SoC capability but also the board info needs to be collected and configured in the config tool, e.g. BIOS and PCI devices that may be passed through or shared to different VMs.
Marketing/Events N/A
Resources Project URL:
2021 ACRN Project Technical Community Meeting (2021/1~2021/12): @ Monthly 3rd Wednesday 4PM (China-Shanghai), Wednesday 10AM (Europe-Munich), Tuesday 1AM (US-West Coast)
Zou, Terry
Rescheduled the ‘ACRN Config Tool 2.0 Introduction’ session to WW31 (7/28/2021), thanks.
Special note: If you have Zoom connection issues when using the web browser, please install and launch the Zoom application and manually enter the meeting ID (320664063) to join the Zoom meeting.
Agenda & Archives:
Project ACRN: A flexible, light-weight, open source reference hypervisor for IoT devices
We invite you to attend a monthly "Technical Community" meeting where we'll meet community members and talk about the ACRN project and plans.
As we explore community interest and involvement opportunities, we'll (re)schedule these meetings at a time convenient to most attendees:
Or visit Github wiki if you can’t access Google doc: https://github.com/projectacrn/acrn-hypervisor/wiki/ACRN-Committee-and-Working-Group-Meetings#technical-community-meetings
Re: The number of Post-launch RTVM
Zou, Terry
Hi Jianjie,
For graphics, the default config in the latest release is actually GVT-d, so we suggest you directly pass through the graphics device to the Windows guest on your Whiskey Lake board. For RTVMs, we noticed some impact on RT performance when GVT-g is shared with other VMs. So you can run the control application in the RTVM and report status / get commands to WaaG, which keeps the UI/graphics.
Here is the WaaG sample launch script line with GVT-d: -s 2,passthru,0/2/0,gpu \
But if you still want to try GVT-g on WHL with the legacy v2.0 release, you can also find the WaaG sample launch script line: -s 2,pci-gvt -G "$2" \. In launch_win 1 "64 448 8", the "64 448 8" is $2, which is the default config for low_gm_sz (64 MB), high_gm_sz (448 MB), and fence_sz (8).
Note that the memory allocation in the WaaG launch script is dedicated memory assigned to the VM; you cannot accurately allocate a GPU resource percentage to each VM. Instead, a workload queue holds tasks from each vGPU, the workload scheduler schedules them, and they are submitted to the i915 driver. More details in: https://projectacrn.github.io/2.0/developer-guides/hld/hld-APL_GVT-g.html#workload-scheduler
BTW, there are also some other memory settings in the latest GVT GitHub wiki; we didn't try them in ACRN-GT, just for your reference: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide#53-create-vgpu-kvmgt-only

vGPUs  aperture memory size  hidden memory size  fence size  weight
8      64                    384                 4           2
4      128                   512                 4           4
2      256                   1024                4           8
1      512                   2048                4           16
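Transcribed as data, the wiki table above can also be turned into the "low_gm_sz high_gm_sz fence_sz" triple that acrn-dm's -G option expects. This is only a sketch: reading the first column as the number of vGPUs is an assumption, and these values were not validated with ACRN-GT.

```python
# vGPU memory settings from the GVT wiki table (sizes in MB, plus fence count
# and scheduling weight). Assumption: keyed by the number of vGPUs sharing the iGPU.
VGPU_CONFIGS = {
    8: {"aperture_mb": 64,  "hidden_mb": 384,  "fence": 4, "weight": 2},
    4: {"aperture_mb": 128, "hidden_mb": 512,  "fence": 4, "weight": 4},
    2: {"aperture_mb": 256, "hidden_mb": 1024, "fence": 4, "weight": 8},
    1: {"aperture_mb": 512, "hidden_mb": 2048, "fence": 4, "weight": 16},
}

def gvt_args(num_vgpus: int) -> str:
    """Format the "low_gm_sz high_gm_sz fence_sz" string used with acrn-dm -G."""
    cfg = VGPU_CONFIGS[num_vgpus]
    return f"{cfg['aperture_mb']} {cfg['hidden_mb']} {cfg['fence']}"

# For comparison, the WaaG sample launch script in this thread uses "64 448 8".
```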
Best & Regards Terry
From: acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hi Terry, hi Geoffroy,
Both of your responses mean a lot to me. We are trying to find ways to use as many RTVMs as possible for our applications. A pre-launched RTVM would be best for me, but a post-launched RTVM is also possible.
Besides the RTVM topic, I have a new configuration question regarding GVT-g. We chose the Whiskey Lake board as our first testing board, as suggested by the ACRN project, and it is supported by GVT-g.
Based on the “acrn-dm” command or the launch XML, the only way to set the GVT args is:
-G, --gvtargs <GVT_args>
ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This option allows you to set some of its parameters. GVT_args format: low_gm_sz high_gm_sz fence_sz, where low_gm_sz is the GVT-g aperture size in MB, high_gm_sz is the GVT-g hidden gfx memory size in MB, and fence_sz is the number of fence registers.
Now, for example, if we have two UOSs (whether RTVM or standard UOS), how can I use these args to set the parameters for sharing the iGPU, such as 40% to UOS1 and 60% to UOS2?
I do not know how to set low_gm_sz, high_gm_sz, and fence_sz to achieve this goal.
Sorry for asking so many questions. Thank you very much again for your response.
Cheers, Jianjie Lin
From: acrn-users@... [mailto:acrn-users@...]
On Behalf Of Zou, Terry
Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. The max number of pre-launched RTVMs is limited by your platform's resources: CPU cores, Ethernet, and storage can only be partitioned and passed through to each pre-launched VM. With post-launched RTVMs, however, you still have the possibility to share some resources that are not critical to RT performance, e.g. storage, and PM/VM management can live in the Service OS. So only if you have strict isolation, safety, and RT performance requirements, e.g. keeping an RTVM alive even when the Service OS reboots, would I suggest pre-launched RTVMs; for multiple RTVMs, I suggest launching more post-launched RTVMs so you get both resource-sharing flexibility and device pass-through performance. Anyway @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failing step or file a Git issue if anything happens, and we will analyze it and help you move forward 😊
Best & Regards Terry
From: acrn-users@... <acrn-users@...>
On Behalf Of Geoffroy Van Cutsem
Hi Jianjie Lin, Terry,
@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put multiple pre-launched RTVMs in it?
The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉
A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.
As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated! Cheers, Geoffroy
From:
acrn-users@... <acrn-users@...>
On Behalf Of Zou, Terry
Hi Jianjie, thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios at: https://projectacrn.github.io/latest/introduction/index.html#id3. The most flexible is the ‘industry’ scenario, which runs up to 8 VMs including the Service VM, so 7 User VMs in this config. Specific to RTVMs, the max number is also limited by your target platform's capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT) plus pass-through Ethernet and even disk.
//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario
//Terry: Depending on your platform's CPU core count and pass-through devices, usually up to 3~4 post-launched RTVMs, e.g. with 8 cores and 4 Ethernet ports.
//Terry: There is one Kata Container VM in the ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.
//Terry: From the pass-through device perspective, devices are fully isolated in both pre- and post-launched RTVMs. From the OS dependency perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can stay alive and working even when the Service OS reboots after an abnormal failure.
//Terry: Basically, a Kata container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g. AI computing workloads, are recommended in containers.
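Terry's sizing guidance above (dedicated cores and a pass-through NIC per post-launched RTVM) can be expressed as a quick capacity check. The function and the Service VM core count are illustrative assumptions, not an ACRN API:

```python
def max_post_rtvms(total_cores: int, service_vm_cores: int,
                   eth_ports: int, cores_per_rtvm: int = 2) -> int:
    """Rough upper bound on post-launched RTVMs: each one pins dedicated
    CPU cores and passes through its own Ethernet port."""
    spare_cores = total_cores - service_vm_cores
    return min(spare_cores // cores_per_rtvm, eth_ports)

# With 8 cores (1 kept for the Service VM) and 4 Ethernet ports:
max_post_rtvms(total_cores=8, service_vm_cores=1, eth_ports=4)  # -> 3
```

This matches Terry's "usually up to 3~4" estimate for an 8-core, 4-port platform; the exact number depends on how many cores the Service VM keeps.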
Best & Regards Terry
From:
acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN community,
We have a project involving hypervisors, and we require some real-time VMs. After reading the documentation of the ACRN project, I have some questions:
Thank you very much for your reply and explanations.
Kind regards, Jianjie Lin
Re: Android on ACRN on QEMU
Zou, Terry
Hi Kishore, by ‘AaaG on ACRN on QEMU’ you mean first running ACRN on QEMU/KVM, then launching AaaG/Celadon on ACRN, right? Actually there may be some version mismatch in this scenario, especially for AaaG/Celadon: we only verified it with ACRN 1.0/1.5, while ACRN on QEMU/KVM started with ACRN 2.0.
You can find the reference documents below. You may hit some bugs (please report a Git issue) if you try with a legacy or the latest release; especially for Celadon, there is not much effort/support in ACRN now.
1. Run ACRN on QEMU/KVM: https://projectacrn.github.io/2.0/tutorials/acrn_on_qemu.html?highlight=qemu
2. Launch AaaG/Celadon on ACRN: https://projectacrn.github.io/1.5/tutorials/using_celadon_as_uos.html
Best & Regards Terry
From: acrn-users@... <acrn-users@...>
On Behalf Of Kishore Kanala
Hi,
Did anyone try AaaG on ACRN on QEMU? We are in urgent need of this.
Regards, Kishore
On Sat, Jul 3, 2021, 19:49 Kishore Kanala via
lists.projectacrn.org <kishorekanala=gmail.com@...> wrote:
Re: The number of Post-launch RTVM
Jianjie Lin
Hi Terry, hi Geoffroy,
Both of your responses mean a lot to me. We are trying to find ways to use as many RTVMs as possible for our applications. A pre-launched RTVM would be best for me, but a post-launched RTVM is also possible.
Besides the RTVM topic, I have a new configuration question regarding GVT-g. We chose the Whiskey Lake board as our first testing board, as suggested by the ACRN project, and it is supported by GVT-g.
Based on the “acrn-dm” command or the launch XML, the only way to set the GVT args is:
-G, --gvtargs <GVT_args>
ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This option allows you to set some of its parameters. GVT_args format: low_gm_sz high_gm_sz fence_sz, where low_gm_sz is the GVT-g aperture size in MB, high_gm_sz is the GVT-g hidden gfx memory size in MB, and fence_sz is the number of fence registers.
Now, for example, if we have two UOSs (whether RTVM or standard UOS), how can I use these args to set the parameters for sharing the iGPU, such as 40% to UOS1 and 60% to UOS2?
I do not know how to set low_gm_sz, high_gm_sz, and fence_sz to achieve this goal.
Sorry for asking so many questions. Thank you very much again for your response.
Cheers, Jianjie Lin
From: acrn-users@... [mailto:acrn-users@...]
On Behalf Of Zou, Terry
Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. The max number of pre-launched RTVMs is limited by your platform's resources: CPU cores, Ethernet, and storage can only be partitioned and passed through to each pre-launched VM. With post-launched RTVMs, however, you still have the possibility to share some resources that are not critical to RT performance, e.g. storage, and PM/VM management can live in the Service OS. So only if you have strict isolation, safety, and RT performance requirements, e.g. keeping an RTVM alive even when the Service OS reboots, would I suggest pre-launched RTVMs; for multiple RTVMs, I suggest launching more post-launched RTVMs so you get both resource-sharing flexibility and device pass-through performance. Anyway @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failing step or file a Git issue if anything happens, and we will analyze it and help you move forward 😊
Best & Regards Terry
From: acrn-users@... <acrn-users@...>
On Behalf Of Geoffroy Van Cutsem
Hi Jianjie Lin, Terry,
@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put multiple pre-launched RTVMs in it?
The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉
A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.
As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated! Cheers, Geoffroy
From:
acrn-users@... <acrn-users@...>
On Behalf Of Zou, Terry
Hi Jianjie, thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios at: https://projectacrn.github.io/latest/introduction/index.html#id3. The most flexible is the ‘industry’ scenario, which runs up to 8 VMs including the Service VM, so 7 User VMs in this config. Specific to RTVMs, the max number is also limited by your target platform's capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT) plus pass-through Ethernet and even disk.
//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario
//Terry: Depending on your platform's CPU core count and pass-through devices, usually up to 3~4 post-launched RTVMs, e.g. with 8 cores and 4 Ethernet ports.
//Terry: There is one Kata Container VM in the ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.
//Terry: From the pass-through device perspective, devices are fully isolated in both pre- and post-launched RTVMs. From the OS dependency perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can stay alive and working even when the Service OS reboots after an abnormal failure.
//Terry: Basically, a Kata container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g. AI computing workloads, are recommended in containers.
Best & Regards Terry
From:
acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN community,
We have a project involving hypervisors, and we require some real-time VMs. After reading the documentation of the ACRN project, I have some questions:
Thank you very much for your reply and explanations.
Kind regards, Jianjie Lin
Re: Android on ACRN on QEMU
Kishore Kanala
Hi, did anyone try AaaG on ACRN on QEMU? We are in urgent need of this. Regards, Kishore
On Sat, Jul 3, 2021, 19:49 Kishore Kanala via lists.projectacrn.org <kishorekanala=gmail.com@...> wrote:
Re: The number of Post-launch RTVM
Zou, Terry
Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. The max number of pre-launched RTVMs is limited by your platform's resources: CPU cores, Ethernet, and storage can only be partitioned and passed through to each pre-launched VM. With post-launched RTVMs, however, you still have the possibility to share some resources that are not critical to RT performance, e.g. storage, and PM/VM management can live in the Service OS. So only if you have strict isolation, safety, and RT performance requirements, e.g. keeping an RTVM alive even when the Service OS reboots, would I suggest pre-launched RTVMs; for multiple RTVMs, I suggest launching more post-launched RTVMs so you get both resource-sharing flexibility and device pass-through performance. Anyway @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failing step or file a Git issue if anything happens, and we will analyze it and help you move forward 😊
Best & Regards Terry
From: acrn-users@... <acrn-users@...>
On Behalf Of Geoffroy Van Cutsem
Hi Jianjie Lin, Terry,
@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put multiple pre-launched RTVMs in it?
The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉
A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.
As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated! Cheers, Geoffroy
From:
acrn-users@... <acrn-users@...>
On Behalf Of Zou, Terry
Hi Jianjie, Thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios in: https://projectacrn.github.io/latest/introduction/index.html#id3, and the most flexible one is the ‘industry’ scenario: it runs up to 8 VMs including the Service VM, so 7 User VMs in this config. Specific to RTVMs, the maximum number is also limited by your target platform’s capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT), a pass-through Ethernet port, and even a dedicated disk.
//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario
//Terry: Depending on your platform’s CPU core count and pass-through devices, usually up to 3–4 post-launched RTVMs, e.g., with 8 cores and 4 Ethernet ports.
//Terry: There is one Kata Container VM in the ‘industry’ scenario: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.
//Terry: From a pass-through device perspective, devices are isolated in both pre- and post-launched RTVMs. From an OS dependency perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can stay alive and working even when the Service OS reboots after an abnormal failure.
//Terry: Basically, a Kata Container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g., AI computing workloads, are recommended in containers.
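Terry's sizing rule of thumb (dedicated cores plus a pass-through NIC per RTVM) can be checked with simple arithmetic. The numbers below are illustrative assumptions taken from the thread, not ACRN limits:

```python
def max_post_launched_rtvms(total_cores, service_vm_cores, cores_per_rtvm, nic_ports):
    # Each RTVM needs dedicated cores and its own pass-through NIC;
    # whichever resource runs out first sets the limit.
    spare_cores = total_cores - service_vm_cores
    return min(spare_cores // cores_per_rtvm, nic_ports)

# 8 cores, 1 reserved for the Service VM, 2 per Preempt-RT RTVM, 4 Ethernet ports
print(max_post_launched_rtvms(8, 1, 2, 4))  # → 3
```

With those inputs the core budget, not the NIC count, is the binding constraint, which matches the "3~4 post-launched RTVMs" estimate above.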
Best & Regards Terry
From:
acrn-users@... <acrn-users@...>
On Behalf Of Jianjie Lin
Hello ACRN community,
We have a project involving the hypervisor, and we require some real-time VMs. After reading the documentation from the ACRN project, I have some questions:
Thank you very much for your reply and explanations.
Mit freundlichen Grüßen Jianjie Lin
Re: Cache Allocation Technology support for whiskey lake board
Hi Feng, You can also reach out to your Intel sales representative to get information about CAT on WHL. As far as I know, there are at least two silicon families with official CAT support: code name Elkhart Lake (EHL) for Atom and Tiger Lake (TGL) for Core. Be aware that not all SKUs of EHL/TGL CPUs support CAT; check their specifications first.
Note: SKU = Stock Keeping Unit
From: acrn-users@... <acrn-users@...>
On Behalf Of Geoffroy Van Cutsem
Hi Feng,
I’m not 100% sure about this. Do you have a system available to you already? If so, the easiest way would be to check by following the procedure described here: https://projectacrn.github.io/latest/tutorials/rdt_configuration.html#rdt-detection-capabilities
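That detection procedure boils down to looking for RDT-related CPU flags. Here is a minimal sketch that scans `/proc/cpuinfo`-style text; the flag names (`rdt_a`, `cat_l3`, `cat_l2`, `mba`) are the ones the Linux kernel exposes, but see the linked page for the authoritative list:

```python
# RDT-related flags the Linux kernel reports in /proc/cpuinfo
RDT_FLAGS = {"rdt_a", "cat_l3", "cat_l2", "mba"}

def detect_rdt(cpuinfo_text):
    """Return which RDT-related flags appear in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return sorted(RDT_FLAGS & present)
    return []

sample = "processor\t: 0\nflags\t\t: fpu vme cat_l3 rdt_a\n"
print(detect_rdt(sample))  # → ['cat_l3', 'rdt_a']
```

On a real system you would feed it `open("/proc/cpuinfo").read()`; an empty result means the CPU (or the kernel) does not advertise CAT/RDT support.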
Thanks!
From:
acrn-users@... <acrn-users@...>
On Behalf Of Pan, Fengjunjie
Hello everyone,
Does anyone know if Cache Allocation Technology is supported by ACRN on the Whiskey Lake board (WHL-IPC-I7)?
Thanks and best regards, Feng
Re: How to launch using xml config file
Hi Feng,
These XML files can be used by the ACRN Configuration Tool to create launch scripts. The tool is described here: https://projectacrn.github.io/latest/tutorials/acrn_configuration_tool.html.
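Conceptually, the tool reads the per-VM settings out of such an XML file and emits a launch script from them. A toy illustration using Python's standard `xml.etree`; note the element names below are made up for illustration and are not the real launch-XML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified launch config -- NOT the real ACRN schema.
LAUNCH_XML = """
<launch_setting>
  <vm id="1">
    <name>WaaG</name>
    <cpu_num>2</cpu_num>
  </vm>
</launch_setting>
"""

def parse_vms(xml_text):
    """Collect per-VM settings that a generator would turn into a launch script."""
    root = ET.fromstring(xml_text)
    return [
        {"name": vm.findtext("name"), "cpus": int(vm.findtext("cpu_num"))}
        for vm in root.findall("vm")
    ]

print(parse_vms(LAUNCH_XML))  # → [{'name': 'WaaG', 'cpus': 2}]
```

The real tool does considerably more (validation against the board/scenario XMLs, emitting the `acrn-dm` command line), so use it rather than hand-rolling a parser; this only shows the shape of the transformation.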
Hope this helps! Geoffroy
From: acrn-users@... <acrn-users@...>
On Behalf Of Pan, Fengjunjie
Hello everyone,
I would like to try the ACRN hypervisor on a Whiskey Lake board and found some useful config files on GitHub, including a launch config: https://github.com/projectacrn/acrn-hypervisor/blob/master/misc/config_tools/data/whl-ipc-i7/hybrid_rt_launch_1uos_waag.xml
I went through the ACRN guide, and it mentioned that we need to write a bash script to launch a VM. However, I cannot find any information about how to use such an XML launch config. Can anyone help me?
Thank you and best regards, Feng
The number of Post-launch RTVM
Jianjie Lin
Hello ACRN community,
We have a project involving the hypervisor, and we require some real-time VMs. After reading the documentation from the ACRN project, I have some questions:
1. Does ACRN have any limit on the number of pre-launched RTVMs?
2. Does ACRN have any limit on the number of post-launched RTVMs?
3. Does ACRN have any limit on the number of Kata Containers?
4. What is the difference between a pre-launched RTVM and a post-launched RTVM, since both can access the hardware directly?
5. Can a Kata Container also be used for real time?
Thank you very much for your reply and explanations.
Mit freundlichen Grüßen Jianjie Lin