
About Inter-VM Communication - Shared Memory

Musa Ünal <umusasadik@...>
 

Hello all,
Is it possible to configure one VM with read/write access to the shared memory region while another VM has read-only access? I couldn't find this in the documentation: https://projectacrn.github.io/latest/developer-guides/hld/ivshmem-hld.html
Thanks



Re: WHL-ipc-i7 gpu display problem

Jianjie Lin
 

Hi Fuzhong,

 

Thank you very much for your reply,

The board is finally working now.

 

Some notes:

 

I still follow the tutorial https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html,

since

https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html uses the Clear Linux kernel for the Service VM. I do not know whether that is necessary.

 

There were still some bugs when compiling the acrn-hypervisor.

Since there is no board inspector in v2.0, we use a board.xml generated by the board inspector from v2.5.

Some tricky steps were needed for the build.

 

Furthermore, there are additional modifications in the acrn-kernel.

Suggestion: add the following kernel cmdline in /etc/grub.d/40_custom, then run sudo update-grub:

 

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

Modifications: remove rootwait; use i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_guc=0 i915.enable_conformance_check=0
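
For readers who have not used /etc/grub.d/40_custom before, a rough sketch of its shape follows. Everything apart from the i915.* parameters (the entry name, kernel and initrd paths, root device) is a placeholder, and on an ACRN setup the entry additionally has to load the hypervisor via multiboot2 as shown in the guides linked above, so treat this only as an illustration of where the cmdline goes:

#!/bin/sh
exec tail -n +3 $0
# Everything below this header is copied verbatim into grub.cfg by update-grub.
menuentry "Service VM kernel with GVT-g cmdline (illustrative)" {
        search --no-floppy --fs-uuid --set=root <rootfs-uuid>
        linux  /boot/vmlinuz-<service-vm-kernel> root=/dev/sda3 rw console=tty0 i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0
        initrd /boot/initrd.img-<service-vm-kernel>
}

Afterwards run sudo update-grub so the entry is regenerated into /boot/grub/grub.cfg.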

 

Thanks a lot anyway.

 

Cheers,

Jianjie LIN

 

From: acrn-users@... [mailto:acrn-users@...] On Behalf Of Liu, Fuzhong
Sent: Wednesday, August 25, 2021 09:51
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi Jianjie

ACRN v2.0 supports GVT-g and GVT-d.

https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html is based on GVT-g

and

https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html is based on GVT-d

 

That is why you see a big difference between them.

 

For GVT-g:

1) Connect two monitors to your board, and set Aperture Size to 512 (Chipset ---> Graphics Configuration ---> Aperture Size) in the BIOS settings.

2) Please refer to https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html#install-the-service-vm-kernel

Add the following kernel cmdline in /etc/grub.d/40_custom, then run sudo update-grub:

 

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

3) Remove --windows \ from your launch script (gpu_launch_uos2), since you will launch an Ubuntu user VM, not a Windows VM (see the sketch below).
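
As an illustration of what this change looks like in a launch script, here is a hedged sketch; everything except the --windows flag and the pci-gvt/-G options already mentioned in this thread is a placeholder, not taken from the real gpu_launch_uos2:

# Before (Windows guest): the acrn-dm invocation contains a --windows option.
acrn-dm -A -m 4096M \
   -s 0:0,hostbridge \
   -s 2,pci-gvt -G "$2" \
   --windows \
   -s 3,virtio-blk,./win10.img \
   win-vm

# After (Ubuntu user VM): the same invocation with the "--windows \" line deleted.
acrn-dm -A -m 4096M \
   -s 0:0,hostbridge \
   -s 2,pci-gvt -G "$2" \
   -s 3,virtio-blk,./ubuntu-uos.img \
   ubuntu-vm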

 

If GVT-g still doesn't work on your side, please connect a UART cable to COM1 of the WHL board, then enter "vcpu_list" at the ACRN:\> prompt and share the log with us.

 

BR.

Fuzhong

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 24, 2021 5:39 PM
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi Terry 

Thank you for your reply.

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

→ I put the kernel cmdline in /etc/grub.d/40_custom.

I found that https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html differs significantly from https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html, which I followed to install the ACRN hypervisor and ACRN kernel.

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

→ I passed this cmdline with -B "  " → it did not work.

→ I put this cmdline in the Ubuntu guest's /etc/default/grub default Linux command line → it did not work.

 

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot
 

→ I cannot find the file /mnt/loader/entries/acrn.conf; do you know why?

 

In short, after trying those cmdlines, the problem is still not solved. The terminal is still stuck at the line "Unhandled ps2 mouse command 0x88".

Do you have any further suggestions?

 

Cheers

Jianjie Lin

 


From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry <terry.zou@...>
Sent: Tuesday, August 24, 2021 04:25
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi Jianjie, yes, for the v2.5 release the default config is GVT-d. If you really want to try GVT-g, please switch back to the v2.0 release branch.

Remember to switch the GSG documents to v2.0 as well; they contain the setup guide for GVT-g, especially the kernel command lines below:

 

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot

 

WaaG image creation: https://projectacrn.github.io/1.6/tutorials/using_windows_as_uos.html?highlight=gvt

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Monday, August 23, 2021 10:58 PM
To: acrn-users@...
Subject: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi ACRN-community,

 

Our team has been facing the GVT-d/g problem for a long time. We have tried all the possibilities suggested by the tutorials and GitHub resources; however, the issue remains.

Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing or pass-through, which is fully supported by the ACRN hypervisor.

 

We tried release v2.5 as the starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or stays stuck for an hour without further progress.

Then we went back to v2.0 since, based on the issue reports, GVT-g has been a known issue since v2.1.

However, it still did not help after we installed release v2.0.

 

I attach three files for your diagnosis:

  1. the board information
  2. the GPU launch script
  3. the log information

Thank you very much for your support. Looking forward to your reply.

 

 

Mit freundlichen Grüßen / Kind regards
--------------------------------------------------------

Jianjie Lin

 

 


Re: WHL-ipc-i7 gpu display problem

Liu, Fuzhong
 

Hi Jianjie

ACRN v2.0 supports GVT-g and GVT-d.

https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html is based on GVT-g

and

https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html is based on GVT-d

 

That is why you see a big difference between them.

 

For GVT-g:

1) Connect two monitors to your board, and set Aperture Size to 512 (Chipset ---> Graphics Configuration ---> Aperture Size) in the BIOS settings.

2) Please refer to https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html#install-the-service-vm-kernel

Add the following kernel cmdline in /etc/grub.d/40_custom, then run sudo update-grub:

 

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

3) Remove --windows \ from your launch script (gpu_launch_uos2), since you will launch an Ubuntu user VM, not a Windows VM.

 

If GVT-g still doesn't work on your side, please connect a UART cable to COM1 of the WHL board, then enter "vcpu_list" at the ACRN:\> prompt and share the log with us.

 

BR.

Fuzhong

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 24, 2021 5:39 PM
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi Terry 

Thank you for your reply.

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

→ I put the kernel cmdline in /etc/grub.d/40_custom.

I found that https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html differs significantly from https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html, which I followed to install the ACRN hypervisor and ACRN kernel.

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

→ I passed this cmdline with -B "  " → it did not work.

→ I put this cmdline in the Ubuntu guest's /etc/default/grub default Linux command line → it did not work.

 

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot
 

→ I cannot find the file /mnt/loader/entries/acrn.conf; do you know why?

 

In short, after trying those cmdlines, the problem is still not solved. The terminal is still stuck at the line "Unhandled ps2 mouse command 0x88".

Do you have any further suggestions?

 

Cheers

Jianjie Lin

 


From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry <terry.zou@...>
Sent: Tuesday, August 24, 2021 04:25
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi Jianjie, yes, for the v2.5 release the default config is GVT-d. If you really want to try GVT-g, please switch back to the v2.0 release branch.

Remember to switch the GSG documents to v2.0 as well; they contain the setup guide for GVT-g, especially the kernel command lines below:

 

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot

 

WaaG image creation: https://projectacrn.github.io/1.6/tutorials/using_windows_as_uos.html?highlight=gvt

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Monday, August 23, 2021 10:58 PM
To: acrn-users@...
Subject: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi ACRN-community,

 

Our team has been facing the GVT-d/g problem for a long time. We have tried all the possibilities suggested by the tutorials and GitHub resources; however, the issue remains.

Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing or pass-through, which is fully supported by the ACRN hypervisor.

 

We tried release v2.5 as the starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or stays stuck for an hour without further progress.

Then we went back to v2.0 since, based on the issue reports, GVT-g has been a known issue since v2.1.

However, it still did not help after we installed release v2.0.

 

I attach three files for your diagnosis:

  1. the board information
  2. the GPU launch script
  3. the log information

Thank you very much for your support. Looking forward to your reply.

 

 

Mit freundlichen Grüßen / Kind regards
--------------------------------------------------------

Jianjie Lin

 

 


Re: WHL-ipc-i7 gpu display problem

Jianjie Lin
 

Hi Terry 

Thank you for your reply.

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

→ I put the kernel cmdline in /etc/grub.d/40_custom.

I found that https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html differs significantly from https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html, which I followed to install the ACRN hypervisor and ACRN kernel.

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

→ I passed this cmdline with -B "  " → it did not work.

→ I put this cmdline in the Ubuntu guest's /etc/default/grub default Linux command line → it did not work.

 

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot
 

→ I cannot find the file /mnt/loader/entries/acrn.conf; do you know why?

 

In short, after trying those cmdlines, the problem is still not solved. The terminal is still stuck at the line "Unhandled ps2 mouse command 0x88".

Do you have any further suggestions?

 

Cheers

Jianjie Lin

 


From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry <terry.zou@...>
Sent: Tuesday, August 24, 2021 04:25
To: acrn-users@...
Subject: Re: [acrn-users] WHL-ipc-i7 gpu display problem



Hi Jianjie, yes, for the v2.5 release the default config is GVT-d. If you really want to try GVT-g, please switch back to the v2.0 release branch.

Remember to switch the GSG documents to v2.0 as well; they contain the setup guide for GVT-g, especially the kernel command lines below:

 

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot

 

WaaG image creation: https://projectacrn.github.io/1.6/tutorials/using_windows_as_uos.html?highlight=gvt

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Monday, August 23, 2021 10:58 PM
To: acrn-users@...
Subject: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi ACRN-community,

 

Our team has been facing the GVT-d/g problem for a long time. We have tried all the possibilities suggested by the tutorials and GitHub resources; however, the issue remains.

Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing or pass-through, which is fully supported by the ACRN hypervisor.

 

We tried release v2.5 as the starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or stays stuck for an hour without further progress.

Then we went back to v2.0 since, based on the issue reports, GVT-g has been a known issue since v2.1.

However, it still did not help after we installed release v2.0.

 

I attach three files for your diagnosis:

  1. the board information
  2. the GPU launch script
  3. the log information

Thank you very much for your support. Looking forward to your reply.

 

 

Mit freundlichen Grüßen / Kind regards
--------------------------------------------------------

Jianjie Lin

 

 


Re: WHL-ipc-i7 gpu display problem

Zou, Terry
 

Hi Jianjie, yes, for the v2.5 release the default config is GVT-d. If you really want to try GVT-g, please switch back to the v2.0 release branch.

Remember to switch the GSG documents to v2.0 as well; they contain the setup guide for GVT-g, especially the kernel command lines below:

 

SOS:  https://projectacrn.github.io/2.0/tutorials/using_ubuntu_as_sos.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0

 

Ubuntu as a Guest:  https://projectacrn.github.io/1.6/tutorials/running_ubun_as_user_vm.html

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

$ sudo mount /dev/sda1 /mnt
$ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
$ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
$ sudo sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
$ sudo sync && sudo umount /mnt && reboot

 

WaaG image creation: https://projectacrn.github.io/1.6/tutorials/using_windows_as_uos.html?highlight=gvt

i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x010101 i915.domain_plane_owners=0x011100001111 i915.enable_gvt=1 i915.enable_conformance_check=0 i915.enable_guc=0

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Monday, August 23, 2021 10:58 PM
To: acrn-users@...
Subject: [acrn-users] WHL-ipc-i7 gpu display problem

 

Hi ACRN-community,

 

Our team has been facing the GVT-d/g problem for a long time. We have tried all the possibilities suggested by the tutorials and GitHub resources; however, the issue remains.

Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing or pass-through, which is fully supported by the ACRN hypervisor.

 

We tried release v2.5 as the starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or stays stuck for an hour without further progress.

Then we went back to v2.0 since, based on the issue reports, GVT-g has been a known issue since v2.1.

However, it still did not help after we installed release v2.0.

 

I attach three files for your diagnosis:

  1. the board information
  2. the GPU launch script
  3. the log information

Thank you very much for your support. Looking forward to your reply.

 

 

Mit freundlichen Grüßen / Kind regards
--------------------------------------------------------

Jianjie Lin

 


WHL-ipc-i7 gpu display problem

Jianjie Lin
 

Hi ACRN-community,

 

Our team has been facing the GVT-d/g problem for a long time. We have tried all the possibilities suggested by the tutorials and GitHub resources; however, the issue remains.

Following the tutorial, we use a Whiskey Lake i7 board for GPU sharing or pass-through, which is fully supported by the ACRN hypervisor.

 

We tried release v2.5 as the starting point. We can successfully launch the UOS in the console version, but the GPU version either reboots automatically or stays stuck for an hour without further progress.

Then we went back to v2.0 since, based on the issue reports, GVT-g has been a known issue since v2.1.

However, it still did not help after we installed release v2.0.

 

I attach three files for your diagnosis:

  1. the board information
  2. the GPU launch script
  3. the log information

Thank you very much for your support. Looking forward to your reply.

 

 

Mit freundlichen Grüßen / Kind regards
--------------------------------------------------------

Jianjie Lin

 


2021 ACRN Project Technical Community Meeting Minutes - WW34'21

Zou, Terry
 

ACRN Project TCM - 18th Aug 2021
Location
  1. Online by Zoom: https://zoom.com.cn/j/320664063   
Attendees (Total 15, 18/08)
 
Note: If you need to edit this document, please ask for access. We disabled anonymous editing to keep track of changes and identify who are the owners of the opens and agenda items.
Opens  Note: When adding opens or agenda items, please provide details (not only links), add your name next to the item you have added and specify your expectation from the TCM 
Agenda
  1. ACRN project update: N/A
  2. “WW34’21 ACRN RTVM Performance of Sharing Storage”, Cao Minggui
Download foil from ACRN Presentation -> WW34’21
Description: We will introduce the storage requirement background and the polling vs. interrupt mode design, with a performance comparison.
  3. All: Community open discussion.
Q&A:
  1. Could we support multiple RTVMs sharing one storage device?
A: From a functional perspective, yes, it is doable to enable 2 RTVMs sharing one storage device, but the real-time and storage performance would be a little different.
  2. Does adding the virtio poll parameter on ACRN-DM mean that the GPOS/Windows VM driver will also need to be in polling mode for storage?
A: No, the RTVM uses VirtIO polling mode, but WaaG can use VirtIO interrupt mode.
  4. Next meeting agenda proposal:
WW Topic Presenter Status
WW04 ACRN PCI based vUART introduction Tao Yuhong 1/20/2021
Chinese New Year Break
WW13 ACRN  Real-Time Enhancement Huang Yonghua 3/24/2021
WW17 Enable ACRN on TGL NUC11 Liu Fuzhong 4/21/2021
WW21 ACRN Memory Layout Related Boot Issue Diagnosis Sun Victor 5/19/2021
WW31 ACRN Config Tool 2.0 Introduction Xie Nanlin 7/28/2021
WW34 ACRN RTVM  Performance of Sharing Storage Cao Minggui 8/18/2021
WW39 ACRN Software SRAM Introduction Huang Yonghua 9/15/2021
WW43 ACRN Nested Virtualization Introduction Shen Fangfang 10/20/2021
Marketing/Events   N/A
Resources     Project URL: 
  1. Portal: https://projectacrn.org   
  2. Source code: https://github.com/projectacrn   
  3. email: info@... 
  4. Technical Mailing list: acrn-dev@... 
 
 


Re: How to enable/disable Intel turbo boost technology

Victor Sun
 

Yes you got it.

 

BR,

Victor

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 3, 2021 10:18 PM
To: acrn-users@...
Subject: Re: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hi Victor,

 

Thank you very much for your quick response.

To avoid misunderstanding, I would like to confirm the meaning of pCPU in your response: do you mean a core or a whole physical CPU?

 

On the Whiskey Lake board we have only one CPU, but it contains 4 cores.

In my scenario, I assign Core1 and Core2 to UOS1 and Core3 and Core4 to UOS2.

 

Now, what I want is to enable turbo boost on Core1 and Core2 for UOS1, but disable this feature on Core3 and Core4 for UOS2.

Can this be achieved by using “cpufreq.off=1” in the bootargs setting?

Thank you again for your support.

BR,

Jianjie

 

 

From: acrn-users@... [mailto:acrn-users@...] On Behalf Of Victor Sun
Sent: Tuesday, August 3, 2021 15:50
To: acrn-users@...
Subject: Re: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hi Jianjie,

CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. there is no CPU sharing between VMs.

ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states via either the acpi-cpufreq driver or the intel_pstate driver if the pCPU supports intel_pstate.

For pre-launched VMs, only intel_pstate is supported.

 

Yes, we could use a device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement that.

For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.

 

BR,

Victor

 

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 3, 2021 5:00 PM
To: acrn-users@...
Subject: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hello acrn project teams,

 

I would like to ask about enabling and disabling Intel Turbo Boost technology.

As far as I know, on a PC, if I want to disable Intel Turbo Boost I need to change the BIOS setting before booting the system.

But for a type-1 hypervisor, I would like to ask whether it is possible to set Turbo Boost for each UOS individually.

 

For example, I have two UOSs: UOS1 and UOS2.

I want to enable Turbo Boost in UOS1 but disable this feature in UOS2.

 

I found that the device model offers a flag:

-B, --bootargs <bootargs>

    Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline.

    Example:

    -B "loglevel=7"

    specifies the kernel log level at 7

 

Therefore, I would like to ask whether it is possible to use this flag to set this parameter, and if so, how should I write the cmdline?

Thank you very much for your support.

 

Cheers

Jianjie Lin

 


Re: How to enable/disable Intel turbo boost technology

Jianjie Lin
 

Hi Victor,

 

Thank you very much for your quick response.

To avoid misunderstanding, I would like to confirm the meaning of pCPU in your response: do you mean a core or a whole physical CPU?

 

On the Whiskey Lake board we have only one CPU, but it contains 4 cores.

In my scenario, I assign Core1 and Core2 to UOS1 and Core3 and Core4 to UOS2.

 

Now, what I want is to enable turbo boost on Core1 and Core2 for UOS1, but disable this feature on Core3 and Core4 for UOS2.

Can this be achieved by using “cpufreq.off=1” in the bootargs setting?

Thank you again for your support.

BR,

Jianjie

 

 

From: acrn-users@... [mailto:acrn-users@...] On Behalf Of Victor Sun
Sent: Tuesday, August 3, 2021 15:50
To: acrn-users@...
Subject: Re: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hi Jianjie,

CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. there is no CPU sharing between VMs.

ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states via either the acpi-cpufreq driver or the intel_pstate driver if the pCPU supports intel_pstate.

For pre-launched VMs, only intel_pstate is supported.

 

Yes, we could use a device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement that.

For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.

 

BR,

Victor

 

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 3, 2021 5:00 PM
To: acrn-users@...
Subject: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hello acrn project teams,

 

I would like to ask about enabling and disabling Intel Turbo Boost technology.

As far as I know, on a PC, if I want to disable Intel Turbo Boost I need to change the BIOS setting before booting the system.

But for a type-1 hypervisor, I would like to ask whether it is possible to set Turbo Boost for each UOS individually.

 

For example, I have two UOSs: UOS1 and UOS2.

I want to enable Turbo Boost in UOS1 but disable this feature in UOS2.

 

I found that the device model offers a flag:

-B, --bootargs <bootargs>

    Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline.

    Example:

    -B "loglevel=7"

    specifies the kernel log level at 7

 

Therefore, I would like to ask whether it is possible to use this flag to set this parameter, and if so, how should I write the cmdline?

Thank you very much for your support.

 

Cheers

Jianjie Lin

 


Re: How to enable/disable Intel turbo boost technology

Victor Sun
 

Hi Jianjie,

CPU turbo can be enabled per VM only when the pinned pCPUs are wholly assigned to one VM, i.e. there is no CPU sharing between VMs.

ACRN provides an ACPI P-state table for post-launched VMs, so the UOS kernel can enable P-states via either the acpi-cpufreq driver or the intel_pstate driver if the pCPU supports intel_pstate.

For pre-launched VMs, only intel_pstate is supported.

 

Yes, we could use a device model parameter to enable/disable turbo by returning a modified vCPUID result, but currently we have no plan to implement that.

For a Linux guest, you can simply add the kernel parameter “cpufreq.off=1” to disable turbo.
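
For illustration, here is a minimal sketch of how this parameter could be appended through the device model's -B option when launching the user VM. The memory size, virtio-blk image path, device slots and the rest of the bootargs are placeholders, not taken from any real launch script; only the -B flag and cpufreq.off=1 come from this thread:

# Hypothetical launch-script excerpt for the UOS that should NOT use turbo:
acrn-dm -A -m 2048M \
   -s 0:0,hostbridge \
   -s 3,virtio-blk,/home/acrn/uos2.img \
   -s 31:0,lpc -l com1,stdio \
   -B "root=/dev/vda1 rw rootwait console=tty0 cpufreq.off=1" \
   uos2-no-turbo

The other UOS would be launched from its own script without cpufreq.off=1 in its -B string.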

 

BR,

Victor

 

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, August 3, 2021 5:00 PM
To: acrn-users@...
Subject: [acrn-users] How to enable/disable Intel turbo boost technology

 

Hello acrn project teams,

 

I would like to ask about enabling and disabling Intel Turbo Boost technology.

As far as I know, on a PC, if I want to disable Intel Turbo Boost I need to change the BIOS setting before booting the system.

But for a type-1 hypervisor, I would like to ask whether it is possible to set Turbo Boost for each UOS individually.

 

For example, I have two UOSs: UOS1 and UOS2.

I want to enable Turbo Boost in UOS1 but disable this feature in UOS2.

 

I found that the device model offers a flag:

-B, --bootargs <bootargs>

    Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline.

    Example:

    -B "loglevel=7"

    specifies the kernel log level at 7

 

Therefore, I would like to ask whether it is possible to use this flag to set this parameter, and if so, how should I write the cmdline?

Thank you very much for your support.

 

Cheers

Jianjie Lin

 


Acrn configures some particular security and performance features.

Jianjie Lin
 

Hello acrn hypervisor team,

 

After looking into the details of the hardware, in my case a Whiskey Lake board, many of its security features caught my eye.

I'd appreciate it if you could let me know whether I can set the following features in the ACRN hypervisor, or more specifically, set those parameters for individual UOSs.

 

1. Advanced Vector Extensions (AVX) (https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)

2. Execute Disable Bit (https://www.intel.com/content/www/us/en/support/articles/000005771/processors.html)

3. Dynamic Acceleration Technology (https://www.techarp.com/bios-guide/intel-dynamic-acceleration/)

4. Performance and Energy Bias Hint support (EPB) (https://www.kernel.org/doc/html/latest/admin-guide/pm/intel_epb.html)

 

Thank you very much in advance for your reply and support.

 

Kind regards

Jianjie Lin

------------------------------------------------------------------

Jianjie Lin

Technical University of Munich

Department of Informatics

Chair of Robotics, Artificial Intelligence, and Real-Time Systems

Boltzmannstr. 3

85748 Garching bei München

 


How to enable/disable Intel turbo boost technology

Jianjie Lin
 

Hello acrn project teams,

 

I would like to ask about enabling and disabling Intel Turbo Boost technology.

As far as I know, on a PC, if I want to disable Intel Turbo Boost I need to change the BIOS setting before booting the system.

But for a type-1 hypervisor, I would like to ask whether it is possible to set Turbo Boost for each UOS individually.

 

For example, I have two UOSs: UOS1 and UOS2.

I want to enable Turbo Boost in UOS1 but disable this feature in UOS2.

 

I found that the device model offers a flag:

-B, --bootargs <bootargs>

    Set the User VM kernel command-line arguments. The maximum length is 1023. The bootargs string will be passed to the kernel as its cmdline.

    Example:

    -B "loglevel=7"

    specifies the kernel log level at 7

 

Therefore, I would like to ask whether it is possible to use this flag to set this parameter, and if so, how should I write the cmdline?

Thank you very much for your support.

 

Cheers

Jianjie Lin

 


2021 ACRN Project Technical Community Meeting Minutes - WW31'21

Zou, Terry
 

ACRN Project TCM - 28th July 2021
Location
  1. Online by Zoom: https://zoom.com.cn/j/320664063   
Attendees (Total 10, 28/07)
 
Note: If you need to edit this document, please ask for access. We disabled anonymous editing to keep track of changes and identify who are the owners of the opens and agenda items.
Opens  Note: When adding opens or agenda items, please provide details (not only links), add your name next to the item you have added and specify your expectation from the TCM 
Agenda
  1. ACRN project update:
       1. Version 2.5 release: https://projectacrn.github.io/latest/release_notes/release_notes_2.5.html
       2. TCM agenda update, both in the TCM meeting invitation to the mailing list and on the ACRN home page: https://github.com/projectacrn/acrn-hypervisor/wiki/ACRN-Committee-and-Working-Group-Meetings#technical-community-meetings
  2. “WW31’21 ACRN Config Tool 2.0 Introduction”, Xie Nanlin
Download foil from ACRN Presentation -> WW31’21
Description: We will introduce the background and concept of the Config Tool 2.0 design, then some typical examples of how to use Config Tool 2.0 for real scenario configuration.
  3. All: Community open discussion.
Q&A:
  1. Is the ACRN config tool based on the chipset or the board?
A: The ACRN config tool is targeted at generating real user scenarios on top of ACRN, so not only the SoC capability but also the board info needs to be collected and configured in the config tool, e.g. BIOS and PCI devices that may be passed through or shared to different VMs.
  4. Next meeting agenda proposal:
WW Topic Presenter Status
WW04 ACRN PCI based vUART introduction Tao Yuhong 1/20/2021
Chinese New Year Break
WW13 ACRN  Real-Time Enhancement Huang Yonghua 3/24/2021
WW17 Enable ACRN on TGL NUC11 Liu Fuzhong 4/21/2021
WW21 ACRN Memory Layout Related Boot Issue Diagnosis Sun Victor 5/19/2021
WW31 ACRN Config Tool 2.0 Introduction Xie Nanlin 7/28/2021
WW34 ACRN RTVM  Performance of Sharing Storage Cao Minggui 8/18/2021
WW39 ACRN Software SRAM Introduction Huang Yonghua 9/15/2021
WW43 ACRN Nested Virtualization Introduction Shen Fangfang 10/20/2021
Marketing/Events   N/A
Resources     Project URL: 
  1. Portal: https://projectacrn.org   
  2. Source code: https://github.com/projectacrn   
  3. email: info@... 
  4. Technical Mailing list: acrn-dev@... 
 
 


2021 ACRN Project Technical Community Meeting (2021/1~2021/12): @ Monthly 3rd Wednesday 4PM (China-Shanghai), Wednesday 10AM (Europe-Munich), Tuesday 1AM (US-West Coast)

Zou, Terry
 

The ACRN Config Tool 2.0 Introduction session is rescheduled to WW31: 7/28/2021. Thanks.
 
Special Notes: If you have Zoom connection issues when using the web browser, please install and launch the Zoom application and manually input the meeting ID (320664063) to join the Zoom meeting.
 
Agenda & Archives:
WW Topic Presenter Status
WW04 ACRN PCI based vUART introduction Tao Yuhong 1/20/2021
Chinese New Year Break
WW13 ACRN Real-Time Enhancement Introduction Huang Yonghua 3/24/2021
WW17 Enable ACRN on TGL NUC11 Liu Fuzhong 4/21/2021
WW21 ACRN Memory Layout Related Boot Issue Diagnosis Sun Victor 5/19/2021
WW31 ACRN Config Tool 2.0 Introduction Xie Nanlin 7/28/2021
WW34 ACRN RTVM  Performance of Sharing Storage Cao Minggui 8/18/2021
WW39 ACRN Software SRAM Introduction Huang Yonghua 9/15/2021
WW43 ACRN Nested Virtualization Introduction Shen Fangfang 10/20/2021
 
Project ACRN: A flexible, light-weight, open source reference hypervisor for IoT devices
We invite you to attend a monthly "Technical Community" meeting where we'll meet community members and talk about the ACRN project and plans.
As we explore community interest and involvement opportunities, we'll (re)schedule these meetings at a time convenient to most attendees:
  • Meets every 3rd Wednesday, Starting Jan 20, 2021: 4-5:00 PM (China-Shanghai), Wednesday 10-11:00 AM (Europe-Munich), Tuesday 1-2:00 AM (US-West Coast)
  • Chairperson: Terry ZOU, terry.zou@... (Intel)
  • Online conference link: https://zoom.com.cn/j/320664063
  • Zoom Meeting ID: 320 664 063
  • Special Notes: If you have Zoom connection issues when using the web browser, please launch the Zoom application and manually input the meeting ID (320664063) to join the Zoom meeting.
  • Online conference phone:
  • China: +86 010 87833177  or 400 669 9381 (Toll Free)
  • Germany: +49 (0) 30 3080 6188  or +49 800 724 3138 (Toll Free)
  • US: +1 669 900 6833  or +1 646 558 8656   or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
  • Additional international phone numbers
  • Meeting Notes:
 


Re: The number of Post-launch RTVM

Zou, Terry
 

Hi Jianjie,

 

For graphics, the default config is actually ‘GVT-d’ in the latest release, so we suggest you directly pass through the graphics device to the Windows guest on your Whiskey Lake board.

For the RTVM, we noticed some impact on RT performance if GVT-g is shared with other VMs. So you can run the control application in the RTVM and report status to / get commands from WaaG, which has the UI/graphics.

 

Here is WaaG ‘sample launch script’ with GVT-d:    -s 2,passthru,0/2/0,gpu  \

https://github.com/projectacrn/acrn-hypervisor/blob/master/misc/config_tools/data/sample_launch_scripts/nuc/launch_win.sh#L100

 

But if you still want to try GVT-g on WHL with the legacy v2.0 release, you can also find the WaaG sample launch script:  -s 2,pci-gvt -G "$2" \  — in launch_win 1 "64 448 8", the "64 448 8" is that $2 argument, i.e. the default config for low_gm_sz (64 MB), high_gm_sz (448 MB) and fence_sz (8).

https://github.com/projectacrn/acrn-hypervisor/blob/release_2.0/devicemodel/samples/nuc/launch_win.sh#L21
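
To make the shape of such a call concrete, here is a hedged sketch of a GVT-g launch invocation. Apart from the -s 2,pci-gvt -G "64 448 8" part discussed above, every option (memory size, image path, extra slots, VM name) is an illustrative placeholder rather than the content of the real launch_win.sh:

# Hypothetical v2.0-era launch excerpt; "64 448 8" = low_gm_sz high_gm_sz fence_sz
acrn-dm -A -m 4096M \
   -s 0:0,hostbridge \
   -s 2,pci-gvt -G "64 448 8" \
   -s 3,virtio-blk,./uos.img \
   -s 31:0,lpc -l com1,stdio \
   -B "root=/dev/vda1 rw rootwait console=tty0" \
   uos-gvtg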

 

The memory allocation in the WaaG launch script is dedicated memory assigned to that VM; you cannot accurately allocate a GPU resource percentage to each VM. Instead, a workload queue is maintained with tasks from each vGPU, scheduled by the ‘workload scheduler’, and then submitted to the i915 driver. More details in: https://projectacrn.github.io/2.0/developer-guides/hld/hld-APL_GVT-g.html#workload-scheduler

 

BTW, there are also some other memory settings in the latest GVT GitHub wiki, but we didn’t try them with ACRN-GT; just for your reference: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide#53-create-vgpu-kvmgt-only

y    aperture memory size    hidden memory size    fence size    weight
------------------------------------------------------------------------
8    64                      384                   4             2
4    128                     512                   4             4
2    256                     1024                  4             8
1    512                     2048                  4             16

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Tuesday, July 27, 2021 9:29 PM
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Terry, hi Geoffroy,

 

Both of your responses mean a lot to me. We are trying to find ways to use as many RTVMs as possible for our applications. A pre-launched RTVM is better for me; a post-launched RTVM is also possible.

 

 

Besides the RTVM, I have a new configuration question regarding GVT-g.

We chose the Whiskey Lake board as the first test board, which is suggested by the ACRN project and supports GVT-g.

 

Based on the “acrn-dm” command or the launch XML, the only way to set the GVT args is:

-G, --gvtargs <GVT_args>

ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This option allows you to set some of its parameters.

GVT_args format: low_gm_sz high_gm_sz fence_sz

Where:

·       low_gm_sz: GVT-g aperture size, unit is MB

·       high_gm_sz: GVT-g hidden gfx memory size, unit is MB

·       fence_sz: the number of fence registers

Now, for example, suppose we have two UOSs (whether RTVM or standard UOS).

How can I use these args to set the parameters for sharing the iGPU, for instance 40% to UOS1 and 60% to UOS2?

 

I do not know how to set those low_gm_sz, high_gm_sz, and fence_sz to achieve this goal.

 

Sorry, I have so many questions.

Thank you very much again for your response.

 

Cheers,

Jianjie Lin

 

 

From: acrn-users@... [mailto:acrn-users@...] On Behalf Of Zou, Terry
Sent: Tuesday, July 27, 2021 05:30
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. Actually, the max number of pre-launched RTVMs is limited by your platform's resource capability: pre-launched VMs only support partitioned/pass-through CPU cores, Ethernet and storage. With post-launched RTVMs, however, you still have the possibility to share some resources (those not critical to RT performance) between RTVMs, e.g. storage, and the PM/VM management can also stay in the Service OS.

So only if you have high isolation, safety and RT performance concerns, e.g. keeping the RTVM alive even when the Service OS reboots, would I recommend pre-launched RTVMs; for multi-RTVM setups I would suggest launching more post-launched RTVMs to get both resource-sharing flexibility and device pass-through performance.

Anyway, @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failure steps or file a git issue if anything happens; we will analyze it and help you move forward 😊

 

Best & Regards

Terry

From: acrn-users@... <acrn-users@...> On Behalf Of Geoffroy Van Cutsem
Sent: Monday, July 26, 2021 11:25 PM
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie Lin, Terry,

 

@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put in multiple pre-launched RTVMs?

 

The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉

 

A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.

 

As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated!

Cheers,

Geoffroy

 

From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry
Sent: Monday, July 26, 2021 4:56 am
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie,

Thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios in: https://projectacrn.github.io/latest/introduction/index.html#id3; the most flexible is the ‘industry’ scenario, which runs up to 8 VMs including the Service VM, so 7 User VMs in this config.

Specific to RTVMs, the max number is also limited by your target platform capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT), pass-through Ethernet, and even a dedicated disk.

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?

//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario

  2. Does ACRN have any limitation on the number of post-launched RTVMs?

//Terry: Depending on your platform's CPU core count and pass-through devices, usually up to 3~4 post-launched RTVMs, with 8 cores and 4 Ethernet ports.

  3. Does ACRN have any limitation on the number of Kata Containers?

//Terry: There is one Kata Container VM in the ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.

  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?

//Terry: From the pass-through device perspective, they are equally isolated in pre- and post-launched RTVMs. From the Service OS dependency perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can still be alive and working even when the Service OS reboots due to abnormal failures.

  5. Can a Kata Container also be used for real time?

//Terry: Basically, a Kata Container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g. AI computing workloads, are recommended in containers.

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Friday, July 23, 2021 10:06 PM
To: acrn-users@...
Subject: [acrn-users] The number of Post-launch RTVM

 

Hello ACRN community,

 

We have a project involving the hypervisor, and we require some real-time VMs. After reading the documentation from the ACRN project, I have some questions:

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?
  2. Does ACRN have any limitation on the number of post-launched RTVMs?
  3. Does ACRN have any limitation on the number of Kata Containers?
  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?
  5. Can a Kata Container also be used for real time?

 

Thank you very much for your reply and explanations.

 

Kind regards

Jianjie Lin

 


Re: Android on ACRN on QEMU

Zou, Terry
 

Hi Kishore, regarding ‘AaaG on ACRN on QEMU’: do you mean first running ACRN on QEMU/KVM, then launching AaaG/Celadon on ACRN, right?

Actually there may be some version mismatch in this scenario, especially for AaaG/Celadon: we only verified it with ACRN 1.0/1.5, while ACRN on QEMU/KVM was introduced with ACRN 2.0.

 

You can find reference documents below. You may find some bugs (please report a git issue) if you try with the legacy/latest release; especially for Celadon, there is not much effort/support in ACRN now.

 

1. Run ACRN on QEMU/KVM: https://projectacrn.github.io/2.0/tutorials/acrn_on_qemu.html?highlight=qemu

2. Launch AaaG/Celadon on ACRN: https://projectacrn.github.io/1.5/tutorials/using_celadon_as_uos.html

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Kishore Kanala
Sent: Tuesday, July 27, 2021 7:57 PM
To: acrn-users@...
Subject: Re: [acrn-users] Android on ACRN on QEMU

 

Hi,

 

    Did anyone try AaaG on ACRN on QEMU? We badly need this.

 

Regards,

Kishore

 

On Sat, Jul 3, 2021, 19:49 Kishore Kanala via lists.projectacrn.org <kishorekanala=gmail.com@...> wrote:

Thanks David and Terry.

It would be very helpful if a document were available for AaaG on ACRN on QEMU. Is it possible for someone to help me with this?

 

On Sat, Jul 3, 2021 at 7:51 AM Zou, Terry <terry.zou@...> wrote:

Thanks David, 'Using Celadon as UOS' is the right AaaG doc in my mind :)  https://projectacrn.github.io/1.5/tutorials/using_celadon_as_uos.html

Best & Regards
Terry


Re: The number of Post-launch RTVM

Jianjie Lin
 

Hi Terry, hi Geoffroy,

 

Both of your responses mean a lot to me. We are trying to find ways to use as many RTVMs as possible for our applications. A pre-launched RTVM is better for me; a post-launched RTVM is also possible.

 

 

Besides the RTVM, I have a new configuration question regarding GVT-g.

We chose the Whiskey Lake board as the first test board, which is suggested by the ACRN project and supports GVT-g.

Based on the “acrn-dm” command or the launch XML, the only way to set the GVT args is:

-G, --gvtargs <GVT_args>

ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This option allows you to set some of its parameters.

GVT_args format: low_gm_sz high_gm_sz fence_sz

Where:

·         low_gm_sz: GVT-g aperture size, unit is MB

·         high_gm_sz: GVT-g hidden gfx memory size, unit is MB

·         fence_sz: the number of fence registers

Now, for example, suppose we have two UOSs (whether RTVM or standard UOS).

How can I use these args to set the parameters for sharing the iGPU, for instance 40% to UOS1 and 60% to UOS2?

 

I do not know how to set those low_gm_sz, high_gm_sz, and fence_sz to achieve this goal.

 

Sorry, I have so many questions.

Thank you very much again for your response.

 

Cheers,

Jianjie Lin

 

 

From: acrn-users@... [mailto:acrn-users@...] On Behalf Of Zou, Terry
Sent: Tuesday, July 27, 2021 05:30
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. Actually, the max number of pre-launched RTVMs is limited by your platform's resource capability: pre-launched VMs only support partitioned/pass-through CPU cores, Ethernet and storage. With post-launched RTVMs, however, you still have the possibility to share some resources (those not critical to RT performance) between RTVMs, e.g. storage, and the PM/VM management can also stay in the Service OS.

So only if you have high isolation, safety and RT performance concerns, e.g. keeping the RTVM alive even when the Service OS reboots, would I recommend pre-launched RTVMs; for multi-RTVM setups I would suggest launching more post-launched RTVMs to get both resource-sharing flexibility and device pass-through performance.

Anyway, @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failure steps or file a git issue if anything happens; we will analyze it and help you move forward 😊

 

Best & Regards

Terry

From: acrn-users@... <acrn-users@...> On Behalf Of Geoffroy Van Cutsem
Sent: Monday, July 26, 2021 11:25 PM
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie Lin, Terry,

 

@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put in multiple pre-launched RTVMs?

 

The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easily doable to change that. It comes down to how UUIDs are handled in the ACRN – Kata code, it’s essentially using only one UUID for Kata but I couldn’t find any reason why the code couldn’t be improved to essentially use any other UUID. There is, in fact, the list of all ACRN predefined UUIDs in the Kata Container ACRN driver code – see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, this would be very welcome 😉

 

A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run an RTVM. I guess it would, and in that case, it would also be useful to have some sort of control for the user to state whether he wants a Kata Container to run as a standard or RTVM. When I was thinking about that previously, I was missing the use-case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me this is the biggest advantage of using Kata: it’s a different way of starting a VM and one that allows tools that are typically used for containers such as Docker, Kubernetes, etc.

 

As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated!

Cheers,

Geoffroy

 

From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry
Sent: Monday, July 26, 2021 4:56 am
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie,

Thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios in: https://projectacrn.github.io/latest/introduction/index.html#id3; the most flexible is the ‘industry’ scenario, which runs up to 8 VMs including the Service VM, so 7 User VMs in this config.

Specific to RTVMs, the max number is also limited by your target platform capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for Preempt-RT), pass-through Ethernet, and even a dedicated disk.

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?

//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario

  2. Does ACRN have any limitation on the number of post-launched RTVMs?

//Terry: Depending on your platform's CPU core count and pass-through devices, usually up to 3~4 post-launched RTVMs, with 8 cores and 4 Ethernet ports.

  3. Does ACRN have any limitation on the number of Kata Containers?

//Terry: There is one Kata Container VM in the ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.

  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?

//Terry: From the pass-through device perspective, they are equally isolated in pre- and post-launched RTVMs. From the Service OS dependency perspective, a post-launched VM is launched/managed by the Service OS, but a pre-launched RTVM can still be alive and working even when the Service OS reboots due to abnormal failures.

  5. Can a Kata Container also be used for real time?

//Terry: Basically, a Kata Container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g. AI computing workloads, are recommended in containers.

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Friday, July 23, 2021 10:06 PM
To: acrn-users@...
Subject: [acrn-users] The number of Post-launch RTVM

 

Hello ACRN community,

 

We have a project involving the hypervisor, and we require some real-time VMs. After reading the documentation from the ACRN project, I have some questions:

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?
  2. Does ACRN have any limitation on the number of post-launched RTVMs?
  3. Does ACRN have any limitation on the number of Kata Containers?
  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?
  5. Can a Kata Container also be used for real time?

 

Thank you very much for your reply and explanations.

 

Kind regards

Jianjie Lin

 


Re: Android on ACRN on QEMU

Kishore Kanala
 

Hi,

    Has anyone tried AaaG on ACRN on QEMU? We are in urgent need of this.

Regards,
Kishore

On Sat, Jul 3, 2021, 19:49 Kishore Kanala via lists.projectacrn.org <kishorekanala=gmail.com@...> wrote:
Thanks David and Terry.
It would be very helpful if documentation were available for AaaG on ACRN on QEMU. Is it possible for someone to help me with this?

On Sat, Jul 3, 2021 at 7:51 AM Zou, Terry <terry.zou@...> wrote:
Thanks David, 'Using Celadon as UOS' is the right AaaG doc in my mind :)  https://projectacrn.github.io/1.5/tutorials/using_celadon_as_uos.html

Best & Regards
Terry


Re: The number of Post-launch RTVM

Zou, Terry
 

Yes, it is doable to modify the default ‘Hybrid RT scenario’ and add more pre-launched RTVMs. Actually, the maximum number of pre-launched RTVMs is limited by your platform’s resource capability: CPU cores, Ethernet, and storage can only be partitioned/passed through to each pre-launched VM... But with post-launched RTVMs, you technically still have the possibility to share some resources (those not critical to RT performance) between RTVMs, e.g., storage, and PM/VM management can also be handled by the Service OS.

So only if you have high isolation, safety, and RT performance concerns (e.g., keeping an RTVM alive even when the Service OS reboots) would you need multiple pre-launched RTVMs; otherwise, for multi-RTVM setups, I would suggest launching more post-launched RTVMs so you get both resource-sharing flexibility and device pass-through performance.
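
For reference, a rough sketch of how one could count the VM mix declared in a scenario configuration. It assumes a scenario XML whose <vm> elements carry a <vm_type> child with values such as PRE_RT_VM and POST_RT_VM; element names and values vary between ACRN releases, so treat this as illustrative only:

package main

import (
    "encoding/xml"
    "fmt"
    "os"
)

// scenario mirrors only the parts of the file we care about here.
type scenario struct {
    VMs []struct {
        ID     string `xml:"id,attr"`
        VMType string `xml:"vm_type"`
    } `xml:"vm"`
}

func main() {
    data, err := os.ReadFile(os.Args[1]) // path to the scenario XML
    if err != nil {
        panic(err)
    }
    var s scenario
    if err := xml.Unmarshal(data, &s); err != nil {
        panic(err)
    }
    counts := map[string]int{}
    for _, vm := range s.VMs {
        counts[vm.VMType]++
    }
    fmt.Println("pre-launched RTVMs: ", counts["PRE_RT_VM"])
    fmt.Println("post-launched RTVMs:", counts["POST_RT_VM"])
}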

Anyway, @Jianjie, if you want to try multiple pre-launched RTVMs, just let us know the detailed failure steps, or file a Git issue if anything goes wrong, and we will analyze it and help you move forward 😊

 

Best & Regards

Terry

From: acrn-users@... <acrn-users@...> On Behalf Of Geoffroy Van Cutsem
Sent: Monday, July 26, 2021 11:25 PM
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie Lin, Terry,

 

@Terry, I’m not quite sure how to interpret your response regarding the max number of pre-launched RTVMs. You say we have one predefined in one of our scenarios, but I think what Jianjie wanted to know is whether he could modify it, or create a new scenario, and put multiple pre-launched RTVMs in it.

 

The code at the moment only supports one Kata Container. But when I dug into why, it appears that it should be fairly easy to change that. It comes down to how UUIDs are handled in the ACRN/Kata code: it essentially uses only one UUID for Kata, but I couldn’t find any reason why the code couldn’t be improved to use any of the other UUIDs. There is, in fact, a list of all the ACRN predefined UUIDs in the Kata Containers ACRN driver code, see https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/acrn.go#L58-L80. I filed an issue/feature request for this a while ago (back in April), see https://github.com/projectacrn/acrn-hypervisor/issues/5966. If you’re interested in digging into this to improve the situation, that would be very welcome 😉
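
To illustrate the kind of change I mean (a sketch only, not a patch against the actual kata-containers code, and the UUID strings below are placeholders rather than the real values in acrn.go), the driver could hand out UUIDs from the predefined pool instead of always using the first one:

package main

import (
    "errors"
    "fmt"
    "sync"
)

// uuidPool hands out predefined ACRN UUIDs to Kata VMs, one per running VM.
type uuidPool struct {
    mu    sync.Mutex
    uuids []string        // the predefined UUID list (placeholders here)
    inUse map[string]bool // UUIDs currently assigned to a VM
}

func newUUIDPool(uuids []string) *uuidPool {
    return &uuidPool{uuids: uuids, inUse: make(map[string]bool)}
}

// Acquire returns the first UUID not yet assigned to a VM.
func (p *uuidPool) Acquire() (string, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    for _, u := range p.uuids {
        if !p.inUse[u] {
            p.inUse[u] = true
            return u, nil
        }
    }
    return "", errors.New("no free ACRN UUID left in the pool")
}

// Release frees a UUID when its VM is torn down.
func (p *uuidPool) Release(u string) {
    p.mu.Lock()
    defer p.mu.Unlock()
    delete(p.inUse, u)
}

func main() {
    pool := newUUIDPool([]string{
        "00000000-0000-0000-0000-000000000001", // placeholder, not a real ACRN UUID
        "00000000-0000-0000-0000-000000000002", // placeholder, not a real ACRN UUID
    })
    u, _ := pool.Acquire()
    fmt.Println("assigned UUID:", u)
    pool.Release(u)
}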

 

A Kata Container VM today will start as a standard post-launched VM. I do not know if it would be possible to improve the code to make it run as an RTVM. I guess it would, and in that case it would also be useful to have some sort of control for the user to state whether a Kata Container should run as a standard VM or an RTVM. When I was thinking about that previously, I was missing the use case behind it. What I mean is that people who want to run RTVMs do not typically want to orchestrate them, and to me that is the biggest advantage of using Kata: it’s a different way of starting a VM, one that lets you use the tools typically associated with containers, such as Docker, Kubernetes, etc.

 

As a word of caution: I do not believe there are many eyes looking at the Kata implementation in ACRN so you may come across some bugs if you try with the latest releases. If that happens, please report those so they can be investigated!

Cheers,

Geoffroy

 

From: acrn-users@... <acrn-users@...> On Behalf Of Zou, Terry
Sent: Monday, July 26, 2021 4:56 am
To: acrn-users@...
Subject: Re: [acrn-users] The number of Post-launch RTVM

 

Hi Jianjie,

Thanks for your interest in the ACRN hypervisor. For VM configuration, we have several pre-defined scenarios in: https://projectacrn.github.io/latest/introduction/index.html#id3. The most flexible is the ‘industry’ scenario: it runs up to 8 VMs including the Service VM, so 7 User VMs in this config.

Specific to RTVMs, the maximum number is also limited by your target platform’s capability: to secure RT performance, an RTVM may need dedicated CPU cores (2 cores for a Preempt-RT guest), pass-through Ethernet, and possibly a dedicated disk.

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?

//Terry: Usually only one pre-launched RTVM; refer to the ‘Hybrid RT scenario’ in: https://projectacrn.github.io/latest/introduction/index.html#hybrid-real-time-rt-scenario

  2. Does ACRN have any limitation on the number of post-launched RTVMs?

//Terry: Depending on your platform’s CPU core count and pass-through devices, usually up to 3~4 post-launched RTVMs, e.g., with 8 cores and 4 Ethernet ports.

  3. Does ACRN have any limitation on the number of Kata Containers?

//Terry: There is one Kata Container VM in the ‘industry scenario’: https://projectacrn.github.io/latest/tutorials/run_kata_containers.html#run-kata-containers.

  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?

//Terry: From the pass-through device perspective, devices are isolated in both pre- and post-launched RTVMs. From the OS dependency perspective, a post-launched VM is launched/managed by the Service OS, while a pre-launched RTVM can stay alive and keep working even when the Service OS reboots due to abnormal failures.

  5. Can a Kata Container also be used for real time?

//Terry: Basically a Kata Container may not guarantee the determinism/latency of RT tasks, so only user-level applications, e.g., AI computing workloads, are recommended in containers.

 

Best & Regards

Terry

 

From: acrn-users@... <acrn-users@...> On Behalf Of Jianjie Lin
Sent: Friday, July 23, 2021 10:06 PM
To: acrn-users@...
Subject: [acrn-users] The number of Post-launch RTVM

 

Hello ACRN community,

 

We have a project involving a hypervisor, and we require some real-time VMs. After reading the documentation from the ACRN project, I have a few questions:

 

  1. Does ACRN have any limitation on the number of pre-launched RTVMs?
  2. Does ACRN have any limitation on the number of post-launched RTVMs?
  3. Does ACRN have any limitation on the number of Kata Containers?
  4. What’s the difference between a pre-launched RTVM and a post-launched RTVM, since they both can access the hardware directly?
  5. Can a Kata Container also be used for real time?

 

Thank you very much for your reply and explanations.

 

Kind regards

Jianjie Lin

 


Re: Cache Allocation Technology support for whiskey lake board

Wang, Hongbo
 

Hi Feng,

You can also reach out to your Intel sales representative to get information about CAT support on WHL.

As far as I know, there are at least two silicon families with official CAT support: code name Elkhart Lake (EHL) for Atom and Tiger Lake (TGL) for Core.

Be aware that not all SKUs of EHL/TGL CPUs support CAT; you need to check their specifications first.

 

Note: SKU = Stock Keeping Unit

 

 

Best regards.

Hongbo

Tel: +86-21-6116 7445

MP: +86-1364 1793 689

Mail: hongbo.wang@...

 

 

From: acrn-users@... <acrn-users@...> On Behalf Of Geoffroy Van Cutsem
Sent: Monday, July 26, 2021 11:54 PM
To: acrn-users@...
Subject: Re: [acrn-users] Cache Allocation Technology support for whiskey lake board

 

Hi Feng,

 

I’m not 100% sure about this. Do you have a system available to you already? If so, the easiest way would be to check by following the procedure described here: https://projectacrn.github.io/latest/tutorials/rdt_configuration.html#rdt-detection-capabilities
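
If you only need a quick sanity check from a booted Linux system before going through that procedure, something along these lines works too (a sketch, not the official detection method; it relies on the kernel exposing the cat_l2/cat_l3 CPU flags and the resctrl filesystem):

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    info, err := os.ReadFile("/proc/cpuinfo")
    if err != nil {
        panic(err)
    }
    flags := string(info)
    fmt.Println("cat_l3 flag present:", strings.Contains(flags, "cat_l3"))
    fmt.Println("cat_l2 flag present:", strings.Contains(flags, "cat_l2"))

    // /sys/fs/resctrl exists when the kernel has resctrl support compiled in;
    // mount it ("mount -t resctrl resctrl /sys/fs/resctrl") to inspect details.
    if _, err := os.Stat("/sys/fs/resctrl"); err == nil {
        fmt.Println("resctrl filesystem available: yes")
    } else {
        fmt.Println("resctrl filesystem available: no")
    }
}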

 

Thanks!
Geoffroy

 

From: acrn-users@... <acrn-users@...> On Behalf Of Pan, Fengjunjie
Sent: Thursday, July 22, 2021 5:44 pm
To: acrn-users@...
Subject: [acrn-users] Cache Allocation Technology support for whiskey lake board

 

Hello everyone,

 

Does anyone know if Cache Allocation Technology (CAT) is supported by ACRN on the Whiskey Lake board (Board: WHL-IPC-I7)?

 

Thanks and best regards,

Feng

 
