
[PATCH v1 0/3] Guess presence of L3 CAT in the board inspector

Junjie Mao
 

Some Intel platforms are known to be equipped with L3 CAT but do not
report its presence or parameters via the architectural CPUID
interface. The public real-time tuning guide for 11th gen Core(TM)
processors suggests detecting the availability of L3 CAT by trying to
access the CAT MSRs directly.

This patch series implements such logic in the board inspector so that users
can leverage the hardware feature without having to guess its presence
themselves. It includes three patches: the first two refactor and fix the
current MSR parsers, and the last does the actual guesswork. The detection
idea is sketched below.
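
As a minimal sketch of the probing idea (this is not code from the series):
try to read the first L3 CAT mask MSR and treat a faulting read as absence
of the feature. The MSR index 0xC90 (IA32_L3_MASK_0) and the /dev/cpu/*/msr
device-node interface are assumptions based on the Intel SDM and the Linux
msr kernel module, not taken from these patches.

    import os

    IA32_L3_MASK_0 = 0xC90  # first L3 CAT capacity mask MSR (per the SDM)

    def l3_cat_probably_present(cpu: int = 0) -> bool:
        try:
            fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
        except OSError:
            return False  # msr module not loaded or insufficient privilege
        try:
            os.pread(fd, 8, IA32_L3_MASK_0)  # a #GP surfaces as OSError (EIO)
            return True
        except OSError:
            return False
        finally:
            os.close(fd)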

Junjie Mao (3):
config_tools: board_inspector: refactors MSR utilities
config_tools: board_inspector: fix MSR reads and writes
config_tools: board_inspector: guess L3 CAT parameters if not reported
via CPUID

.../board_inspector/cpuparser/msr.py | 187 +++++++++++-------
.../board_inspector/cpuparser/platformbase.py | 30 ++-
.../board_inspector/extractors/20-cache.py | 113 +++++++++--
.../schema/checks/platform_capabilities.xsd | 4 +-
4 files changed, 234 insertions(+), 100 deletions(-)

--
2.30.2


Re: [PATCH] hv: sched: fix bug when reboot vm

Yu Wang
 

Hi Conghui,

On Mon, Aug 01, 2022 at 02:33:14PM +0800, Chen, Conghui wrote:
Hi Yu,


if (!list_empty(&bvt_ctl->runqueue)) {
tmp_obj = get_first_item(&bvt_ctl->runqueue, struct
thread_object, data);
obj_data = (struct sched_bvt_data *)tmp_obj->data;
svt = obj_data->avt;
I think the svt is maintained by the scheduler itself. It should not be
obtained from the scheduler objects.
Yes, the svt is a concept of the scheduler itself. Currently, the svt is
calculated each time a thread tries to wake up.

So can we update the svt in
sched_tick_handler, and have get_svt just return the svt directly?
sched_tick_handler is triggered every 1ms, but within that 1ms many wakeups
and sleeps may happen and the runqueue may become empty, so it cannot
provide a correct svt.
I just re-checked the BVT paper. Actually, the paper covers this
case; it is resolved by the scheduler virtual time (SVT).

Copied from the paper:

When thread i becomes runnable after sleeping:
Ai <- max(Ai, SVT)
where scheduler virtual time (SVT) is a scheduler variable
indicating the minimum Ai of any runnable thread. This adjustment
prevents a thread from claiming an excessive share
of the CPU after sleeping for a long time as might happen
if there was no adjustment, as illustrated in Figure 2. With
this adjustment, a thread gets the same share of the CPU on
wakeup as if it has been runnable but not dispatched to this
point because it is given the same AVT in both cases.
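
As an illustration of that rule (a toy sketch in Python, arbitrary
virtual-time units; not ACRN code):

    # Effect of the BVT wake-up adjustment Ai <- max(Ai, SVT).
    def wake(avt, svt):
        return max(avt, svt)

    runnable_avts = [100, 102]   # AVTs of the currently runnable threads
    svt = min(runnable_avts)     # SVT = minimum AVT of any runnable thread

    print(wake(5, svt))          # a long sleeper (stale avt=5) is pulled up
                                 # to 100, so it cannot monopolize the pCPU
    print(wake(105, svt))        # a short sleeper keeps its own avt=105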

So this patch should be a bug fix, as we haven't implemented the SVT
correctly.

Please add an svt member to struct sched_bvt_control. It needs to be
updated in sched_bvt_sleep and sched_bvt_wake, and the woken-up vCPU
should follow "Ai <- max(Ai, SVT)" to adjust its avt.

Thanks
Yu



Regards,
Conghui.

+ } else {
+ svt = bvt_ctl->last_avt;
}
return svt;
}
@@ -156,6 +158,7 @@ static int sched_bvt_init(struct sched_control *ctl)

ctl->priv = bvt_ctl;
INIT_LIST_HEAD(&bvt_ctl->runqueue);
+ bvt_ctl->last_avt = 0U;

/* The tick_timer is periodically */
initialize_timer(&bvt_ctl->tick_timer, sched_tick_handler, ctl,
@@ -270,9 +273,21 @@ static struct thread_object
*sched_bvt_pick_next(struct sched_control *ctl)
return next;
}

+static void update_svt(struct thread_object *obj)
+{
+ struct sched_bvt_data *data;
+ struct sched_bvt_control *bvt_ctl =
+ (struct sched_bvt_control *)obj->sched_ctl->priv;
+
+ data = (struct sched_bvt_data *)obj->data;
+ if (list_empty(&data->list))
+ bvt_ctl->last_avt = data->avt;
+}
+
static void sched_bvt_sleep(struct thread_object *obj)
{
runqueue_remove(obj);
+ update_svt(obj);
Rename the function to update_last_avt.

}


}

static void sched_bvt_wake(struct thread_object *obj)
diff --git a/hypervisor/include/common/schedule.h
b/hypervisor/include/common/schedule.h
index c4d34bbbb..8528aa4ed 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -105,6 +105,7 @@ extern struct acrn_scheduler sched_bvt;
struct sched_bvt_control {
struct list_head runqueue;
struct hv_timer tick_timer;
+ int last_avt;
};

extern struct acrn_scheduler sched_prio;
--
2.25.1









Re: [PATCH] hv: sched: fix bug when reboot vm

Conghui Chen
 

Hi Yu,


if (!list_empty(&bvt_ctl->runqueue)) {
tmp_obj = get_first_item(&bvt_ctl->runqueue, struct
thread_object, data);
obj_data = (struct sched_bvt_data *)tmp_obj->data;
svt = obj_data->avt;
I think the svt is maintained by the scheduler itself. It should not be
obtained from the scheduler objects.
Yes, the svt is a concept of the scheduler itself. Currently, the svt is
calculated each time a thread tries to wake up.

So can we update the svt in
sched_tick_handler, and have get_svt just return the svt directly?
sched_tick_handler is triggered every 1ms, but within that 1ms many wakeups
and sleeps may happen and the runqueue may become empty, so it cannot
provide a correct svt.


Regards,
Conghui.

+ } else {
+ svt = bvt_ctl->last_avt;
}
return svt;
}
@@ -156,6 +158,7 @@ static int sched_bvt_init(struct sched_control *ctl)

ctl->priv = bvt_ctl;
INIT_LIST_HEAD(&bvt_ctl->runqueue);
+ bvt_ctl->last_avt = 0U;

/* The tick_timer is periodically */
initialize_timer(&bvt_ctl->tick_timer, sched_tick_handler, ctl,
@@ -270,9 +273,21 @@ static struct thread_object
*sched_bvt_pick_next(struct sched_control *ctl)
return next;
}

+static void update_svt(struct thread_object *obj)
+{
+ struct sched_bvt_data *data;
+ struct sched_bvt_control *bvt_ctl =
+ (struct sched_bvt_control *)obj->sched_ctl->priv;
+
+ data = (struct sched_bvt_data *)obj->data;
+ if (list_empty(&data->list))
+ bvt_ctl->last_avt = data->avt;
+}
+
static void sched_bvt_sleep(struct thread_object *obj)
{
runqueue_remove(obj);
+ update_svt(obj);
Rename the function to update_last_avt.

}


}

static void sched_bvt_wake(struct thread_object *obj)
diff --git a/hypervisor/include/common/schedule.h
b/hypervisor/include/common/schedule.h
index c4d34bbbb..8528aa4ed 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -105,6 +105,7 @@ extern struct acrn_scheduler sched_bvt;
struct sched_bvt_control {
struct list_head runqueue;
struct hv_timer tick_timer;
+ int last_avt;
};

extern struct acrn_scheduler sched_prio;
--
2.25.1









Re: [PATCH] hv: sched: fix bug when reboot vm

Yu Wang
 

On Thu, Jul 28, 2022 at 05:24:00PM +0800, Yu Wang wrote:
On Wed, Jul 27, 2022 at 08:01:36PM +0800, Conghui Chen wrote:
BVT schedule rule:
When a new thread wakes up and is added to the runqueue, it takes the
smallest avt (svt) from the runqueue to initialize its avt. If the svt is
smaller than its own avt, it keeps the original avt.

For the reboot issue: when the VM is rebooted, a new vcpu thread
wakes up, but at this time the Service VM's vcpu thread is blocked
and removed from the runqueue. The runqueue is empty, so the svt is
0, and the new vcpu thread gets avt=0. avt=0 means very high priority,
so it can run for a very long time until it catches up with the other
threads' avt in the runqueue.
Later, when the Service VM's vcpu thread wakes up, it checks the
svt, but the svt is very small, so it does not update its avt according
to the rule; it thus has a very low priority and cannot be scheduled.

To fix it, add a variable to struct sched_bvt_control, which is
shared by all thread objects on a pCPU. The variable records
the avt of the last thread object removed from the runqueue and is
used as the svt when there is no object in the runqueue.
Does the BVT paper mention this case? Can you please paste the related
statements?


Signed-off-by: Conghui <conghui.chen@...>
---
hypervisor/common/sched_bvt.c | 17 ++++++++++++++++-
hypervisor/include/common/schedule.h | 1 +
2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/hypervisor/common/sched_bvt.c b/hypervisor/common/sched_bvt.c
index 8354b13d7..431a2cdf1 100644
--- a/hypervisor/common/sched_bvt.c
+++ b/hypervisor/common/sched_bvt.c
@@ -104,12 +104,14 @@ static int64_t get_svt(struct thread_object *obj)
struct sched_bvt_control *bvt_ctl = (struct sched_bvt_control *)obj->sched_ctl->priv;
struct sched_bvt_data *obj_data;
struct thread_object *tmp_obj;
- int64_t svt = 0;
+ int64_t svt;

if (!list_empty(&bvt_ctl->runqueue)) {
tmp_obj = get_first_item(&bvt_ctl->runqueue, struct thread_object, data);
obj_data = (struct sched_bvt_data *)tmp_obj->data;
svt = obj_data->avt;
I think the svt is maintained by the scheduler itself. It should not be
obtained from the scheduler objects. So can we update the svt in
sched_tick_handler, and have get_svt just return the svt directly?

+ } else {
+ svt = bvt_ctl->last_avt;
}
return svt;
}
@@ -156,6 +158,7 @@ static int sched_bvt_init(struct sched_control *ctl)

ctl->priv = bvt_ctl;
INIT_LIST_HEAD(&bvt_ctl->runqueue);
+ bvt_ctl->last_avt = 0U;

/* The tick_timer is periodically */
initialize_timer(&bvt_ctl->tick_timer, sched_tick_handler, ctl,
@@ -270,9 +273,21 @@ static struct thread_object *sched_bvt_pick_next(struct sched_control *ctl)
return next;
}

+static void update_svt(struct thread_object *obj)
+{
+ struct sched_bvt_data *data;
+ struct sched_bvt_control *bvt_ctl =
+ (struct sched_bvt_control *)obj->sched_ctl->priv;
+
+ data = (struct sched_bvt_data *)obj->data;
+ if (list_empty(&data->list))
+ bvt_ctl->last_avt = data->avt;
+}
+
static void sched_bvt_sleep(struct thread_object *obj)
{
runqueue_remove(obj);
+ update_svt(obj);
Rename the function to update_last_avt.

}


}

static void sched_bvt_wake(struct thread_object *obj)
diff --git a/hypervisor/include/common/schedule.h b/hypervisor/include/common/schedule.h
index c4d34bbbb..8528aa4ed 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -105,6 +105,7 @@ extern struct acrn_scheduler sched_bvt;
struct sched_bvt_control {
struct list_head runqueue;
struct hv_timer tick_timer;
+ int last_avt;
};

extern struct acrn_scheduler sched_prio;
--
2.25.1









Re: [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

Junjie Mao
 

"Ke, ChuangX" <chuangx.ke@...> writes:

Maybe you are right; allow me to check issue 2 again another time.
But normally, if all the functions follow the style below, we should follow the same style and put CACHE_ID first:
getCLOSMask(CACHE_ID, CACHE_LEVEL, vmID, vmName, VCPU, TYPE: PolicyType, maxLength: number)
This is the only one styled the other way (CACHE_LEVEL first). I'm not saying it's a mistake, but if bugs come out, we have to spend a lot of time on analysis just because of style...
newPolicy(CACHE_LEVEL, CACHE_ID, vmConfig, VCPU, TYPE: PolicyType, maxLength): Policy
I agree that we should always use the same order of cache_id and
cache_level in all functions. If there is already an inconsistency in the
code, feel free to fix it, as long as you state it clearly in your commit
messages.

--
Best Regards
Junjie Mao



Best Regards,
Chuang Ke

-----Original Message-----
From: Mao, Junjie <junjie.mao@...>
Sent: 2022年7月29日 23:56
To: Ke, ChuangX <chuangx.ke@...>
Cc: Xie, Nanlin <nanlin.xie@...>; acrn-dev@...
Subject: Re: [acrn-dev] [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

"Ke, ChuangX" <chuangx.ke@...> writes:

When I tried to fix the bug from the bug report description, I found two issues causing the problem:
1. Part of the code in the translate function no longer matches the
(user-provided) XML file. I had to rewrite the function that translates
the XML data so that the XML data could be imported correctly.
2. After I fixed issue 1, the import was still not working: all non-RT VMs
had their way mask set to 0x0ff because some mask functions could not get
data. I then found that, after translating the XML file, the regionData
structure has an issue: regionData.level and regionData.id are in reverse order.
I'm still confused. Why does the ordering of fields in an object affect the correctness of the logic? Even if that's the case, why does changing the order of the initializer parameters help here?

--
Best Regards
Junjie Mao

In fact, it means the config tools cannot read any regionData from the XML; that's why all non-RT VMs' masks are set to the default value 0x0ff when the data cannot be fetched.
Because level and id are in reverse order, I also believe it exports wrong data.
Right now, I think I have fixed the two problems; someone may need to check and test them.

Best Regards,
Chuang Ke

-----Original Message-----
From: acrn-dev@... <acrn-dev@...>
On Behalf Of Junjie Mao
Sent: 2022年7月27日 23:46
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [acrn-dev] [PATCH] configurator does not consider L3 CAT
config when opening an existing configuration

Chuang Ke <chuangx.ke@...> writes:

Found errors when importing an XML file through this bug report. After the fix, the CLOS mask imports correctly, the CLOS mask bar shows correctly in the view, and the CLOS mask data saves correctly.
From your description above, I do not quite get either your intention or the actual changes you have made in this patch ...


Signed-off-by: Chuang-Ke <chuangx.ke@...>
Tracked-On: #7927
---
.../packages/configurator/src/lib/acrn.ts | 86 +++++++++++--------
1 file changed, 49 insertions(+), 37 deletions(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
b/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
index 218f350b7..0467dd0ee 100644
---
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
+++ b/misc/config_tools/configurator/packages/configurator/src/lib/ac
+++ r
+++ n.ts
@@ -242,6 +242,8 @@ class CAT {
// get CAT schema && scenario && board basic data
this.schemaData = window.getSchemaData();
this.scenario = window.getCurrentScenarioData();
+ console.log('this.scenario...')
+ console.log(this.scenario)
this.CAT_REGION_INFO = window.getBoardData().CAT_INFO;

// check scenario data is empty @@ -392,7 +394,7 @@ class
CAT {
return preLaunchedVMCPUs;
}

- newPolicy(CACHE_LEVEL, CACHE_ID, vmConfig, VCPU, TYPE: PolicyType, maxLength): Policy {
+ newPolicy(CACHE_ID, CACHE_LEVEL, vmConfig, VCPU, TYPE:
+ PolicyType, maxLength): Policy {
Why do we need to change the order of the parameters?

let originPolicy = {
VM: vmConfig.name,
VCPU, TYPE,
@@ -418,6 +420,8 @@ class CAT {
CATData.META.vmid === vmID && CATData.VCPU === VCPU && CATData.TYPE === TYPE
) {
return CATData
+ console.log("catDB")
+ console.log(this.CATDB)
}
}
return false;
@@ -441,13 +445,14 @@ class CAT {

getCLOSMask(CACHE_ID, CACHE_LEVEL, vmID, vmName, VCPU, TYPE: PolicyType, maxLength: number) {
let CATData = this.selectCATData(CACHE_ID, CACHE_LEVEL,
vmID, vmName, VCPU, TYPE);
+ // console.log(CATData)
+ let CLOS_MASK
if (CATData !== false) {
let CLOS_MASK = CATData.CLOS_MASK;
// ensure CLOS_MASK length is shorter or equal to maxLength
CLOS_MASK = this.rangeToHex(this.hexToRange(CLOS_MASK, maxLength), maxLength);
return CLOS_MASK;
- }
- let CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength), 2).toString(16)
+ } else CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength),
+ 2).toString(16)
Please write the body of the else-branch on a separate line.

this.CATDB.push({
META: {vmid: vmID},
CACHE_ID, CACHE_LEVEL,
@@ -469,10 +474,10 @@ class CAT {
pcpu.real_time_vcpu === 'y'
) {
if (!this.switches.CDP_ENABLED) {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Code', regionData.capacity_mask_length))
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Data', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Code', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Data', regionData.capacity_mask_length))
}
}
}
@@ -498,10 +503,10 @@ class CAT {
(pcpu.real_time_vcpu === 'y' && vmConfig.vm_type !== 'RTVM')
) {
if (!this.switches.CDP_ENABLED) {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Code", regionData.capacity_mask_length))
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Data", regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, "Code", regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, "Data", regionData.capacity_mask_length))
}
}
}
@@ -524,10 +529,10 @@ class CAT {
this.serviceVMCPUs.map((pcpuID, index) => {
if (regionData.processors.indexOf(pcpuID) !== -1) {
if (!this.switches.CDP_ENABLED) {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Unified", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Unified",
+ regionData.capacity_mask_length))
} else {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Code", regionData.capacity_mask_length))
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Data", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id, regionData.level, this.serviceVM, index, "Code", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Data",
+ regionData.capacity_mask_length))
}
}
})
@@ -545,7 +550,7 @@ class CAT {
vmConfig.virtual_cat_support === "y"
) {
VCATData.push(
- this.newPolicy(regionData.level, regionData.id, vmConfig, 0, "Unified", vmConfig.virtual_cat_number)
+ this.newPolicy(regionData.id,
+ regionData.level, vmConfig, 0, "Unified",
+ vmConfig.virtual_cat_number)
)
}
})
@@ -580,39 +585,46 @@ class CAT {
}


- private getCATDataFromScenario() {
+ getCATDataFromScenario() {
let hv = this.scenario.hv;
- let scenarioCATData: CATDBRecord[] = []

+ let scenarioCATData: CATDBRecord[] = []
// noinspection JSUnresolvedVariable
if (
hv !== null &&
- hv.hasOwnProperty('CACHE_REGION') &&
- hv.CACHE_REGION.hasOwnProperty('CACHE_ALLOCATION') &&
- _.isArray(hv.CACHE_REGION.CACHE_ALLOCATION)
+ hv.hasOwnProperty('CACHE_REGION')
) {
- // noinspection JSUnresolvedVariable
- hv.CACHE_REGION.CACHE_ALLOCATION.map((cache_region) => {
- if (
- cache_region.hasOwnProperty('POLICY') &&
- cache_region.POLICY.length > 0
- ) {
- cache_region.POLICY.map(policy => {
- scenarioCATData.push({
- CACHE_ID: cache_region.CACHE_ID,
- CACHE_LEVEL: cache_region.CACHE_LEVEL,
- CLOS_MASK: policy.CLOS_MASK,
- META: {vmid: this.vmIDs[policy.VM]},
- TYPE: policy.TYPE,
- VCPU: policy.VCPU,
- VM: policy.VM
+ let cacheRegion = hv.CACHE_REGION
+ // console.log('cacheRegion.CACHE_ALLOCATION')
+ // console.log(cacheRegion.CACHE_ALLOCATION)
Do not check in commented-out logs.

--
Best Regards
Junjie Mao


Noted

Best Regards,
Chuang Ke



+ if (cacheRegion !== null && cacheRegion.hasOwnProperty('CACHE_ALLOCATION') &&
+ _.isArray(cacheRegion.CACHE_ALLOCATION)) {
+ // noinspection JSUnresolvedVariable
+
+ cacheRegion.CACHE_ALLOCATION.map((cache_allocation) => {
+ if (
+ cache_allocation.hasOwnProperty('POLICY') &&
+ cache_allocation.POLICY.length > 0
+ ) {
+ cache_allocation.POLICY.map(policy => {
+ scenarioCATData.push({
+ CACHE_ID: cache_allocation.CACHE_ID,
+ CACHE_LEVEL: cache_allocation.CACHE_LEVEL,
+ CLOS_MASK: policy.CLOS_MASK,
+ META: {vmid: this.vmIDs[policy.VM]},
+ TYPE: policy.TYPE,
+ VCPU: policy.VCPU,
+ VM: policy.VM
+ })
})
- })
- }
+ }

- })
- }
+ })

+ }
+ }
+ console.log('scenarioCATData')
+ console.log(scenarioCATData)
return scenarioCATData
}




Re: [PATCH] hv: sched: fix bug when reboot vm

Conghui Chen
 

Hi Yu,

To fix it, add a variable in structure sched_bvt_control, which is
shared by all thread objects in a pCPU. The variable is used to record
the avt of the last thread object removed from runqueue. This value is
used to update the svt when there is no object in runqueue.
Does the BVT paper mention this case? Can you please paste the related
statements?
NO, the paper does not mention this case.

Regards,
Conghui.


Re: [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

Chuang Ke
 

Maybe you are right; allow me to check issue 2 again another time.
But normally, if all the functions follow the style below, we should follow the same style and put CACHE_ID first:
getCLOSMask(CACHE_ID, CACHE_LEVEL, vmID, vmName, VCPU, TYPE: PolicyType, maxLength: number)
This is the only one styled the other way (CACHE_LEVEL first). I'm not saying it's a mistake, but if bugs come out, we have to spend a lot of time on analysis just because of style...
newPolicy(CACHE_LEVEL, CACHE_ID, vmConfig, VCPU, TYPE: PolicyType, maxLength): Policy

Best Regards,
Chuang Ke

-----Original Message-----
From: Mao, Junjie <junjie.mao@...>
Sent: 2022年7月29日 23:56
To: Ke, ChuangX <chuangx.ke@...>
Cc: Xie, Nanlin <nanlin.xie@...>; acrn-dev@...
Subject: Re: [acrn-dev] [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

"Ke, ChuangX" <chuangx.ke@...> writes:

When I tried to fix the bug from the bug report description, I found two issues causing the problem:
1. Part of the code in the translate function no longer matches the
(user-provided) XML file. I had to rewrite the function that translates
the XML data so that the XML data could be imported correctly.
2. After I fixed issue 1, the import was still not working: all non-RT VMs
had their way mask set to 0x0ff because some mask functions could not get
data. I then found that, after translating the XML file, the regionData
structure has an issue: regionData.level and regionData.id are in reverse order.
I'm still confused. Why does the ordering of fields in an object affect the correctness of the logic? Even if that's the case, why does changing the order of the initializer parameters help here?

--
Best Regards
Junjie Mao

In fact, it means the config tools cannot read any regionData from the XML; that's why all non-RT VMs' masks are set to the default value 0x0ff when the data cannot be fetched.
Because level and id are in reverse order, I also believe it exports wrong data.
Right now, I think I have fixed the two problems; someone may need to check and test them.

Best Regards,
Chuang Ke

-----Original Message-----
From: acrn-dev@... <acrn-dev@...>
On Behalf Of Junjie Mao
Sent: 2022年7月27日 23:46
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [acrn-dev] [PATCH] configurator does not consider L3 CAT
config when opening an existing configuration

Chuang Ke <chuangx.ke@...> writes:

Found errors when importing an XML file through this bug report. After the fix, the CLOS mask imports correctly, the CLOS mask bar shows correctly in the view, and the CLOS mask data saves correctly.
From your description above, I do not quite get either your intention or the actual changes you have made in this patch ...


Signed-off-by: Chuang-Ke <chuangx.ke@...>
Tracked-On: #7927
---
.../packages/configurator/src/lib/acrn.ts | 86 +++++++++++--------
1 file changed, 49 insertions(+), 37 deletions(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
b/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
index 218f350b7..0467dd0ee 100644
---
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.t
s
+++ b/misc/config_tools/configurator/packages/configurator/src/lib/ac
+++ r
+++ n.ts
@@ -242,6 +242,8 @@ class CAT {
// get CAT schema && scenario && board basic data
this.schemaData = window.getSchemaData();
this.scenario = window.getCurrentScenarioData();
+ console.log('this.scenario...')
+ console.log(this.scenario)
this.CAT_REGION_INFO = window.getBoardData().CAT_INFO;

// check scenario data is empty @@ -392,7 +394,7 @@ class
CAT {
return preLaunchedVMCPUs;
}

- newPolicy(CACHE_LEVEL, CACHE_ID, vmConfig, VCPU, TYPE: PolicyType, maxLength): Policy {
+ newPolicy(CACHE_ID, CACHE_LEVEL, vmConfig, VCPU, TYPE:
+ PolicyType, maxLength): Policy {
Why do we need to change the order of the parameters?

let originPolicy = {
VM: vmConfig.name,
VCPU, TYPE,
@@ -418,6 +420,8 @@ class CAT {
CATData.META.vmid === vmID && CATData.VCPU === VCPU && CATData.TYPE === TYPE
) {
return CATData
+ console.log("catDB")
+ console.log(this.CATDB)
}
}
return false;
@@ -441,13 +445,14 @@ class CAT {

getCLOSMask(CACHE_ID, CACHE_LEVEL, vmID, vmName, VCPU, TYPE: PolicyType, maxLength: number) {
let CATData = this.selectCATData(CACHE_ID, CACHE_LEVEL,
vmID, vmName, VCPU, TYPE);
+ // console.log(CATData)
+ let CLOS_MASK
if (CATData !== false) {
let CLOS_MASK = CATData.CLOS_MASK;
// ensure CLOS_MASK length is shorter or equal to maxLength
CLOS_MASK = this.rangeToHex(this.hexToRange(CLOS_MASK, maxLength), maxLength);
return CLOS_MASK;
- }
- let CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength), 2).toString(16)
+ } else CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength),
+ 2).toString(16)
Please write the body of the else-branch on a separate line.

this.CATDB.push({
META: {vmid: vmID},
CACHE_ID, CACHE_LEVEL,
@@ -469,10 +474,10 @@ class CAT {
pcpu.real_time_vcpu === 'y'
) {
if (!this.switches.CDP_ENABLED) {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Code', regionData.capacity_mask_length))
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Data', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Code', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Data', regionData.capacity_mask_length))
}
}
}
@@ -498,10 +503,10 @@ class CAT {
(pcpu.real_time_vcpu === 'y' && vmConfig.vm_type !== 'RTVM')
) {
if (!this.switches.CDP_ENABLED) {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Code", regionData.capacity_mask_length))
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Data", regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, "Code", regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, "Data", regionData.capacity_mask_length))
}
}
}
@@ -524,10 +529,10 @@ class CAT {
this.serviceVMCPUs.map((pcpuID, index) => {
if (regionData.processors.indexOf(pcpuID) !== -1) {
if (!this.switches.CDP_ENABLED) {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Unified", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Unified",
+ regionData.capacity_mask_length))
} else {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Code", regionData.capacity_mask_length))
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Data", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id, regionData.level, this.serviceVM, index, "Code", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Data",
+ regionData.capacity_mask_length))
}
}
})
@@ -545,7 +550,7 @@ class CAT {
vmConfig.virtual_cat_support === "y"
) {
VCATData.push(
- this.newPolicy(regionData.level, regionData.id, vmConfig, 0, "Unified", vmConfig.virtual_cat_number)
+ this.newPolicy(regionData.id,
+ regionData.level, vmConfig, 0, "Unified",
+ vmConfig.virtual_cat_number)
)
}
})
@@ -580,39 +585,46 @@ class CAT {
}


- private getCATDataFromScenario() {
+ getCATDataFromScenario() {
let hv = this.scenario.hv;
- let scenarioCATData: CATDBRecord[] = []

+ let scenarioCATData: CATDBRecord[] = []
// noinspection JSUnresolvedVariable
if (
hv !== null &&
- hv.hasOwnProperty('CACHE_REGION') &&
- hv.CACHE_REGION.hasOwnProperty('CACHE_ALLOCATION') &&
- _.isArray(hv.CACHE_REGION.CACHE_ALLOCATION)
+ hv.hasOwnProperty('CACHE_REGION')
) {
- // noinspection JSUnresolvedVariable
- hv.CACHE_REGION.CACHE_ALLOCATION.map((cache_region) => {
- if (
- cache_region.hasOwnProperty('POLICY') &&
- cache_region.POLICY.length > 0
- ) {
- cache_region.POLICY.map(policy => {
- scenarioCATData.push({
- CACHE_ID: cache_region.CACHE_ID,
- CACHE_LEVEL: cache_region.CACHE_LEVEL,
- CLOS_MASK: policy.CLOS_MASK,
- META: {vmid: this.vmIDs[policy.VM]},
- TYPE: policy.TYPE,
- VCPU: policy.VCPU,
- VM: policy.VM
+ let cacheRegion = hv.CACHE_REGION
+ // console.log('cacheRegion.CACHE_ALLOCATION')
+ // console.log(cacheRegion.CACHE_ALLOCATION)
Do not check in commented-out logs.

--
Best Regards
Junjie Mao


Noted

Best Regards,
Chuang Ke



+ if (cacheRegion !== null && cacheRegion.hasOwnProperty('CACHE_ALLOCATION') &&
+ _.isArray(cacheRegion.CACHE_ALLOCATION)) {
+ // noinspection JSUnresolvedVariable
+
+ cacheRegion.CACHE_ALLOCATION.map((cache_allocation) => {
+ if (
+ cache_allocation.hasOwnProperty('POLICY') &&
+ cache_allocation.POLICY.length > 0
+ ) {
+ cache_allocation.POLICY.map(policy => {
+ scenarioCATData.push({
+ CACHE_ID: cache_allocation.CACHE_ID,
+ CACHE_LEVEL: cache_allocation.CACHE_LEVEL,
+ CLOS_MASK: policy.CLOS_MASK,
+ META: {vmid: this.vmIDs[policy.VM]},
+ TYPE: policy.TYPE,
+ VCPU: policy.VCPU,
+ VM: policy.VM
+ })
})
- })
- }
+ }

- })
- }
+ })

+ }
+ }
+ console.log('scenarioCATData')
+ console.log(scenarioCATData)
return scenarioCATData
}




Re: [PATCH] configurator creates duplicate VM name

Chuang Ke
 

I'm pretty sure the new-VM generator uses MaxID to generate a new VM name,
per the addVM function in the code.
But when I reviewed again, the bug seems somehow fixed; I can't reproduce
it anymore.
Let me test again another day.

Best Regards,
Chuang Ke

-----Original Message-----
From: Mao, Junjie <junjie.mao@...>
Sent: 2022年7月29日 23:47
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [PATCH] configurator creates duplicate VM name

It looks strange to me that we are adding "MaxVMID" rather than "VMID"
in the default VM name. Shouldn't MaxVMID be a variable we track for internal operations, and VMID the one we want to use to generate a unique default VM name?

--
Best Regards
Junjie Mao

"Ke, ChuangX" <chuangx.ke@...> writes:

From my view, when the user adds a new VM, VM creation uses the VMID as
part of the name so that it can be unique.
The scenario pushes MaxVMID into the new VM as its VMID. But maxVMID is
not updated when you delete a VM, so it causes duplicate names. Adding
this condition means maxVMID always equals the current max vmID + 1,
which makes sure it does not create duplicate VM names.
+ if (vmID >= maxVMID) {
maxVMID = vmID
}
...
maxVMID++
Best Regards,
Chuang Ke

-----Original Message-----
From: Mao, Junjie <junjie.mao@...>
Sent: 2022年7月27日 23:15
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [PATCH] configurator creates duplicate VM name

Chuang Ke <chuangx.ke@...> writes:

Fix duplicate VM names when creating a new VM.
Please explain more on why this VM ID check is related to duplicate names on VM creation in the commit message.

--
Best Regards
Junjie Mao


Signed-off-by: Chuang-Ke <chuangx.ke@...>
Tracked-On: #7920
---
.../configurator/packages/configurator/src/pages/Config.vue | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/pages/Conf
i
g.vue
b/misc/config_tools/configurator/packages/configurator/src/pages/Conf
i
g.vue
index 55e5c6ed7..e8bd417b3 100644
---
a/misc/config_tools/configurator/packages/configurator/src/pages/Conf
i
g.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/
+++ C
+++ onfig.vue
@@ -228,7 +228,7 @@ export default {
let haveService = false;
this.scenario.vm.map((vmConfig) => {
let vmID = vmConfig['@id'];
- if (vmID > maxVMID) {
+ if (vmID >= maxVMID) {
maxVMID = vmID
}
if (vmConfig['load_order'] === 'SERVICE_VM') {


Re: [PATCH v2] misc: refine slot issue of launch script

Junjie Mao
 

Chenli Wei <chenli.wei@...> writes:

From: Chenli Wei <chenli.wei@...>

The current launch script allocates the BDF for ivshmem by itself and does
not get the BDF from the scenario.

This patch refines the above logic and generates the slot from user settings.

Signed-off-by: Chenli Wei <chenli.wei@...>
Reviewed-by: Junjie Mao <junjie.mao@...>

--
Best Regards
Junjie Mao


v1->v2:
1. Add a function to get the slot by vbdf
---
.../launch_config/launch_cfg_gen.py | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/misc/config_tools/launch_config/launch_cfg_gen.py b/misc/config_tools/launch_config/launch_cfg_gen.py
index 910b2e6df..38d843c8b 100755
--- a/misc/config_tools/launch_config/launch_cfg_gen.py
+++ b/misc/config_tools/launch_config/launch_cfg_gen.py
@@ -209,6 +209,11 @@ def cpu_id_to_lapic_id(board_etree, vm_name, cpus):

return ret

+def get_slot_by_vbdf(vbdf):
+ if vbdf is not None:
+ return int((vbdf.split(":")[1].split(".")[0]), 16)
+ else:
+ return None

def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id):
vm_name = eval_xpath(vm_scenario_etree, "./name/text()", f"ACRN Post-Launched VM")
@@ -263,10 +268,14 @@ def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id
#ivshmem and vuart own reserved slots which setting by user

for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Device Model' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ slot = get_slot_by_vbdf(vbdf)
+ script.add_virtual_device("ivshmem", slot, options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")

for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Hypervisor' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ slot = get_slot_by_vbdf(vbdf)
+ script.add_virtual_device("ivshmem", slot, options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")

if eval_xpath(vm_scenario_etree, ".//console_vuart/text()") == "PCI":
script.add_virtual_device("uart", options="vuart_idx:0")
@@ -274,10 +283,7 @@ def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id
for idx, conn in enumerate(eval_xpath_all(hv_scenario_etree, f"//vuart_connection[endpoint/vm_name/text() = '{vm_name}']"), start=1):
if eval_xpath(conn, f"./type/text()") == "pci":
vbdf = eval_xpath(conn, f"./endpoint[vm_name/text() = '{vm_name}']/vbdf/text()")
- if vbdf is not None:
- slot = int((vbdf.split(":")[1].split(".")[0]), 16)
- else:
- slot = None
+ slot = get_slot_by_vbdf(vbdf)
script.add_virtual_device("uart", slot, options=f"vuart_idx:{idx}")

# Mediated PCI devices, including virtio


Re: [PATCH] config-tools: solve hv and vm memory address conflict

Junjie Mao
 

"Li, Ziheng" <ziheng.li@...> writes:

From d16ae3186184d82cb286e662cd3e467ae13539c4 Mon Sep 17 00:00:00 2001
From: zihengL1 <ziheng.li@...>
Date: Fri, 29 Jul 2022 09:19:14 +0800
Subject: [PATCH] config-tools: solve hv and vm memory address conflict

Fixed the problem that ACRN can still build normally
when the memory addresses of the HV and a VM conflict,
which causes the hypervisor to hang.

Tracked-On: #7913
Signed-off-by: Ziheng Li <ziheng.li@...>
---
.../static_allocators/memory_allocator.py | 26 ++++++++++++-------
1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/misc/config_tools/static_allocators/memory_allocator.py b/misc/config_tools/static_allocators/memory_allocator.py
index eb00f2749..316a8e3ad 100644
--- a/misc/config_tools/static_allocators/memory_allocator.py
+++ b/misc/config_tools/static_allocators/memory_allocator.py
@@ -15,9 +15,15 @@ def import_memory_info(board_etree):
ram_range = {}
start = board_etree.xpath("/acrn-config/memory/range/@start")
size = board_etree.xpath("/acrn-config/memory/range/@size")
+
+ hv_start_addr = int(common.get_node("/acrn-config/hv/MEMORY/HV_RAM_START/text()", allocation_etree), 16)
+ hv_ram_size = int(common.get_node("/acrn-config/hv/MEMORY/HV_RAM_SIZE/text()", allocation_etree), 16)
Is it possible to define a class to represent a memory range so that we
can hide range manipulation details from the allocation logic?
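Something along these lines, perhaps (a rough sketch only; the class and
method names here are made up, not from the patch):

    class MemoryRange:
        def __init__(self, start: int, size: int):
            self.start = start
            self.size = size

        @property
        def end(self) -> int:
            return self.start + self.size

        def contains(self, other: "MemoryRange") -> bool:
            return self.start <= other.start and other.end <= self.end

        def subtract(self, other: "MemoryRange") -> list:
            """Carve `other` out of this range; return the remaining pieces."""
            if not self.contains(other):
                raise ValueError("subtrahend not fully inside this range")
            pieces = []
            if self.start < other.start:
                pieces.append(MemoryRange(self.start, other.start - self.start))
            if other.end < self.end:
                pieces.append(MemoryRange(other.end, self.end - other.end))
            return pieces

The allocator could then carve the HV region out of each board RAM range
with a single subtract() call instead of open-coded start/size arithmetic.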

--
Best Regards
Junjie Mao

for i in range(len(start)):
start_hex = int(start[i], 16)
size_hex = int(size[i], 10)
+ if start_hex < hv_start_addr and start_hex + size_hex > hv_start_addr + hv_ram_size:
+ ram_range[start_hex] = hv_start_addr - start_hex
+ ram_range[hv_start_addr + hv_ram_size] = start_hex + size_hex - hv_start_addr - hv_ram_size
ram_range[start_hex] = size_hex

return ram_range
@@ -79,20 +85,22 @@ def alloc_hpa_region(ram_range_info, mem_info_list, vm_node_index_list):
mem_key = sorted(ram_range_info)
for mem_start in mem_key:
mem_size = ram_range_info[mem_start]
+ mem_end = mem_start + mem_size
for hpa_start in hpa_key:
hpa_size = mem_info_list[vm_index][hpa_start]
+ hpa_end = hpa_start + hpa_size
if hpa_start != 0:
- if mem_start < hpa_start and mem_start + mem_size > hpa_start + hpa_size:
+ if mem_start < hpa_start and mem_end > hpa_end:
ram_range_info[mem_start] = hpa_start - mem_start
- ram_range_info[hpa_start + hpa_size] = mem_start + mem_size - hpa_start - hpa_size
- elif mem_start == hpa_start and mem_start + mem_size > hpa_start + hpa_size:
+ ram_range_info[hpa_end] = mem_end - hpa_end
+ elif mem_start == hpa_start and mem_start + mem_size > hpa_end:
del ram_range_info[mem_start]
- ram_range_info[hpa_start + hpa_size] = mem_start + mem_size - hpa_start - hpa_size
- elif mem_start < hpa_start and mem_start + mem_size == hpa_start + hpa_size:
+ ram_range_info[hpa_end] = mem_end - hpa_end
+ elif mem_start < hpa_start and mem_end == hpa_end:
ram_range_info[mem_start] = hpa_start - mem_start
- elif mem_start == hpa_start and mem_start + mem_size == hpa_start + hpa_size:
+ elif mem_start == hpa_start and mem_end == hpa_end:
del ram_range_info[mem_start]
- elif mem_start > hpa_start or mem_start + mem_size < hpa_start + hpa_size:
+ elif mem_start > hpa_start or mem_end < hpa_end:
raise lib.error.ResourceError(f"Start address of HPA is out of available memory range: vm id: {vm_index}, hpa_start: {hpa_start}.")
elif mem_size < hpa_size:
raise lib.error.ResourceError(f"Size of HPA is out of available memory range: vm id: {vm_index}, hpa_size: {hpa_size}.")
@@ -149,14 +157,14 @@ def write_hpa_info(allocation_etree, mem_info_list, vm_node_index_list):
region_index = region_index + 1

def alloc_vm_memory(board_etree, scenario_etree, allocation_etree):
- ram_range_info = import_memory_info(board_etree)
+ ram_range_info = import_memory_info(board_etree, allocation_etree)
ram_range_info, mem_info_list, vm_node_index_list = alloc_memory(scenario_etree, ram_range_info)
write_hpa_info(allocation_etree, mem_info_list, vm_node_index_list)

def allocate_hugepages(board_etree, scenario_etree, allocation_etree):
hugepages_1gb = 0
hugepages_2mb = 0
- ram_range_info = import_memory_info(board_etree)
+ ram_range_info = import_memory_info(board_etree, allocation_etree)
total_hugepages = int(sum(ram_range_info[i] for i in ram_range_info if i >= 0x100000000)*0.98/(1024*1024*1024) \
- sum(int(i) for i in scenario_etree.xpath("//vm[load_order = 'PRE_LAUNCHED_VM']/memory/hpa_region/size_hpa/text()"))/1024 \
- 5 - 300/1024 * len(scenario_etree.xpath("//virtio_devices/gpu")))
--
2.25.1


Re: [PATCH] config_tools: add tooltips for cpu affinity

Junjie Mao
 

Kunhui-Li <kunhuix.li@...> writes:

add tooltips for cpu affinity and tiny fix for virtio console device.

Tracked-On: #7933
Signed-off-by: Kunhui-Li <kunhuix.li@...>
For UI-only changes, you can send a PR once you have tested that the issue is properly fixed.

---
.../CustomWidget/Virtio/Console.vue | 2 +-
.../ConfigForm/CustomWidget/cpu_affinity.vue | 19 ++++++++++++++++---
2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/Virtio/Console.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/Virtio/Console.vue
index 173dd7168..17fe4b7ec 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/Virtio/Console.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/Virtio/Console.vue
@@ -131,7 +131,7 @@ export default {
},
data() {
return {
- ConsoleConfiguration: this.rootSchema.definitions['BasicVirtioConsoleBackendType'],
+ ConsoleConfiguration: this.rootSchema.definitions['VirtioConsoleConfiguration'],
enumNames: this.rootSchema.definitions['VirtioConsoleUseType']['enumNames'],
enum: this.rootSchema.definitions['VirtioConsoleUseType']['enum'],
ConsoleBackendType: this.rootSchema.definitions['BasicVirtioConsoleBackendType']['enum'],
diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..387bc08d9 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -13,8 +13,15 @@
<b-row class="align-items-center"
v-for="(cpu,index) in defaultVal.pcpu">
<b-col>
- <span v-if="index===0" style="color: red; margin-left: -10px;margin-right: 4px">*</span>
- pCPU ID
+ <label class="requiredField" v-if="index===0"></label>
+ <label>
+ <n-popover trigger="hover" placement="top-start">
+ <template #trigger>
+ <IconInfo/>
+ </template>
+ <span v-html="this.CPUAffinityConfiguration.properties.pcpu_id.description"></span>
+ </n-popover>pCPU ID
+ </label>
</b-col>
<b-col>
<b-form-select :state="validateCPUAffinity(cpu.pcpu_id)" v-model="cpu.pcpu_id"
@@ -73,10 +80,11 @@ import _ from 'lodash'
import {Icon} from "@vicons/utils";
import {Plus, Minus} from '@vicons/fa'
import {BFormInput, BRow} from "bootstrap-vue-3";
+import IconInfo from '@lljj/vjsf-utils/icons/IconInfo.vue';

export default {
name: "cpu_affinity",
- components: {BRow, BFormInput, Icon, Plus, Minus},
+ components: {BRow, BFormInput, Icon, Plus, Minus, IconInfo},
props: {
...fieldProps,
},
@@ -110,6 +118,7 @@ export default {
},
data() {
return {
+ CPUAffinityConfiguration: this.rootSchema.definitions['CPUAffinityConfiguration'],
defaultVal: vueUtils.getPathVal(this.rootFormData, this.curNodePath)
}
},
@@ -141,6 +150,10 @@ export default {
</script>

<style scoped>
+.requiredField:before {
+ content: '*';
+ color: red;
+}

.ToolSet {
display: flex;
--
Best Regards
Junjie Mao


Re: [PATCH v2] config_tools: remove invaild hugepage check

Junjie Mao
 

Kunhui-Li <kunhuix.li@...> writes:

Currently, on the whl-ipc-i5 platform, we found a warning message when
building ACRN with the shared scenario XML file from GitHub.
However, this doesn't affect any feature of ACRN according to QA's
test results.

So this patch removes the check for now, in order not to confuse users.
If necessary, we will add the check back after getting more details.

v1-->v2
degrade the log level to debug.

Tracked-On: #7926
Signed-off-by: Kunhui-Li <kunhuix.li@...>
Reviewed-by: Junjie Mao <junjie.mao@...>

--
Best Regards
Junjie Mao

---
misc/config_tools/static_allocators/memory_allocator.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/misc/config_tools/static_allocators/memory_allocator.py b/misc/config_tools/static_allocators/memory_allocator.py
index eb00f2749..14bbabf7c 100644
--- a/misc/config_tools/static_allocators/memory_allocator.py
+++ b/misc/config_tools/static_allocators/memory_allocator.py
@@ -172,7 +172,7 @@ def allocate_hugepages(board_etree, scenario_etree, allocation_etree):

post_vms_memory = sum(int(i) for i in scenario_etree.xpath("//vm[load_order = 'POST_LAUNCHED_VM']/memory/size/text()")) / 1024
if total_hugepages - post_vms_memory < 0:
- logging.warning(f"The sum {post_vms_memory} of memory configured in post launch VMs should not be larger than " \
+ logging.debug(f"The sum {post_vms_memory} of memory configured in post launch VMs should not be larger than " \
f"the calculated total hugepages {total_hugepages} of service VMs. Please update the configuration in post launch VMs")

allocation_service_vm_node = common.get_node("/acrn-config/vm[load_order = 'SERVICE_VM']", allocation_etree)


Re: [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

Junjie Mao
 

"Ke, ChuangX" <chuangx.ke@...> writes:

When I tried to fix the bug from the bug report description, I found two issues causing the problem:
1. Part of the code in the translate function no longer matches the
(user-provided) XML file. I had to rewrite the function that translates
the XML data so that the XML data could be imported correctly.
2. After I fixed issue 1, the import was still not working: all non-RT VMs
had their way mask set to 0x0ff because some mask functions could not get
data. I then found that, after translating the XML file, the regionData
structure has an issue: regionData.level and regionData.id are in reverse order.
I'm still confused. Why does the ordering of fields in an object affect the
correctness of the logic? Even if that's the case, why does changing the
order of the initializer parameters help here?

--
Best Regards
Junjie Mao

In fact, it means the config tools cannot read any regionData from the XML; that's why all non-RT VMs' masks are set to the default value 0x0ff when the data cannot be fetched.
Because level and id are in reverse order, I also believe it exports wrong data.
Right now, I think I have fixed the two problems; someone may need to check and test them.

Best Regards,
Chuang Ke

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On Behalf Of Junjie Mao
Sent: 2022年7月27日 23:46
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [acrn-dev] [PATCH] configurator does not consider L3 CAT config when opening an existing configuration

Chuang Ke <chuangx.ke@...> writes:

Found errors when importing an XML file through this bug report. After the fix, the CLOS mask imports correctly, the CLOS mask bar shows correctly in the view, and the CLOS mask data saves correctly.
From your description above, I do not quite get either your intention or the actual changes you have made in this patch ...


Signed-off-by: Chuang-Ke <chuangx.ke@...>
Tracked-On: #7927
---
.../packages/configurator/src/lib/acrn.ts | 86 +++++++++++--------
1 file changed, 49 insertions(+), 37 deletions(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.ts
b/misc/config_tools/configurator/packages/configurator/src/lib/acrn.ts
index 218f350b7..0467dd0ee 100644
---
a/misc/config_tools/configurator/packages/configurator/src/lib/acrn.ts
+++ b/misc/config_tools/configurator/packages/configurator/src/lib/acr
+++ n.ts
@@ -242,6 +242,8 @@ class CAT {
// get CAT schema && scenario && board basic data
this.schemaData = window.getSchemaData();
this.scenario = window.getCurrentScenarioData();
+ console.log('this.scenario...')
+ console.log(this.scenario)
this.CAT_REGION_INFO = window.getBoardData().CAT_INFO;

// check scenario data is empty @@ -392,7 +394,7 @@ class CAT
{
return preLaunchedVMCPUs;
}

- newPolicy(CACHE_LEVEL, CACHE_ID, vmConfig, VCPU, TYPE: PolicyType, maxLength): Policy {
+ newPolicy(CACHE_ID, CACHE_LEVEL, vmConfig, VCPU, TYPE:
+ PolicyType, maxLength): Policy {
Why do we need to change the order of the parameters?

let originPolicy = {
VM: vmConfig.name,
VCPU, TYPE,
@@ -418,6 +420,8 @@ class CAT {
CATData.META.vmid === vmID && CATData.VCPU === VCPU && CATData.TYPE === TYPE
) {
return CATData
+ console.log("catDB")
+ console.log(this.CATDB)
}
}
return false;
@@ -441,13 +445,14 @@ class CAT {

getCLOSMask(CACHE_ID, CACHE_LEVEL, vmID, vmName, VCPU, TYPE: PolicyType, maxLength: number) {
let CATData = this.selectCATData(CACHE_ID, CACHE_LEVEL, vmID,
vmName, VCPU, TYPE);
+ // console.log(CATData)
+ let CLOS_MASK
if (CATData !== false) {
let CLOS_MASK = CATData.CLOS_MASK;
// ensure CLOS_MASK length is shorter or equal to maxLength
CLOS_MASK = this.rangeToHex(this.hexToRange(CLOS_MASK, maxLength), maxLength);
return CLOS_MASK;
- }
- let CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength), 2).toString(16)
+ } else CLOS_MASK = "0x" + parseInt('1'.repeat(maxLength),
+ 2).toString(16)
Please write the body of the else-branch on a separate line.

this.CATDB.push({
META: {vmid: vmID},
CACHE_ID, CACHE_LEVEL,
@@ -469,10 +474,10 @@ class CAT {
pcpu.real_time_vcpu === 'y'
) {
if (!this.switches.CDP_ENABLED) {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Code', regionData.capacity_mask_length))
- RTCoreData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Data', regionData.capacity_mask_length))
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level, vmConfig, index, 'Code', regionData.capacity_mask_length))
+
+ RTCoreData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Data', regionData.capacity_mask_length))
}
}
}
@@ -498,10 +503,10 @@ class CAT {
(pcpu.real_time_vcpu === 'y' && vmConfig.vm_type !== 'RTVM')
) {
if (!this.switches.CDP_ENABLED) {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, 'Unified', regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, 'Unified', regionData.capacity_mask_length))
} else {
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Code", regionData.capacity_mask_length))
- StandardData.push(this.newPolicy(regionData.level, regionData.id, vmConfig, index, "Data", regionData.capacity_mask_length))
+ StandardData.push(this.newPolicy(regionData.id, regionData.level, vmConfig, index, "Code", regionData.capacity_mask_length))
+
+ StandardData.push(this.newPolicy(regionData.id, regionData.level,
+ vmConfig, index, "Data", regionData.capacity_mask_length))
}
}
}
@@ -524,10 +529,10 @@ class CAT {
this.serviceVMCPUs.map((pcpuID, index) => {
if (regionData.processors.indexOf(pcpuID) !== -1) {
if (!this.switches.CDP_ENABLED) {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Unified", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Unified",
+ regionData.capacity_mask_length))
} else {
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Code", regionData.capacity_mask_length))
- ServiceData.push(this.newPolicy(regionData.level, regionData.id, this.serviceVM, index, "Data", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id, regionData.level, this.serviceVM, index, "Code", regionData.capacity_mask_length))
+ ServiceData.push(this.newPolicy(regionData.id,
+ regionData.level, this.serviceVM, index, "Data",
+ regionData.capacity_mask_length))
}
}
})
@@ -545,7 +550,7 @@ class CAT {
vmConfig.virtual_cat_support === "y"
) {
VCATData.push(
- this.newPolicy(regionData.level, regionData.id, vmConfig, 0, "Unified", vmConfig.virtual_cat_number)
+ this.newPolicy(regionData.id,
+ regionData.level, vmConfig, 0, "Unified",
+ vmConfig.virtual_cat_number)
)
}
})
@@ -580,39 +585,46 @@ class CAT {
}


- private getCATDataFromScenario() {
+ getCATDataFromScenario() {
let hv = this.scenario.hv;
- let scenarioCATData: CATDBRecord[] = []

+ let scenarioCATData: CATDBRecord[] = []
// noinspection JSUnresolvedVariable
if (
hv !== null &&
- hv.hasOwnProperty('CACHE_REGION') &&
- hv.CACHE_REGION.hasOwnProperty('CACHE_ALLOCATION') &&
- _.isArray(hv.CACHE_REGION.CACHE_ALLOCATION)
+ hv.hasOwnProperty('CACHE_REGION')
) {
- // noinspection JSUnresolvedVariable
- hv.CACHE_REGION.CACHE_ALLOCATION.map((cache_region) => {
- if (
- cache_region.hasOwnProperty('POLICY') &&
- cache_region.POLICY.length > 0
- ) {
- cache_region.POLICY.map(policy => {
- scenarioCATData.push({
- CACHE_ID: cache_region.CACHE_ID,
- CACHE_LEVEL: cache_region.CACHE_LEVEL,
- CLOS_MASK: policy.CLOS_MASK,
- META: {vmid: this.vmIDs[policy.VM]},
- TYPE: policy.TYPE,
- VCPU: policy.VCPU,
- VM: policy.VM
+ let cacheRegion = hv.CACHE_REGION
+ // console.log('cacheRegion.CACHE_ALLOCATION')
+ // console.log(cacheRegion.CACHE_ALLOCATION)
Do not check in commented-out logs.

--
Best Regards
Junjie Mao


Noted

Best Regards,
Chuang Ke



+ if (cacheRegion !== null && cacheRegion.hasOwnProperty('CACHE_ALLOCATION') &&
+ _.isArray(cacheRegion.CACHE_ALLOCATION)) {
+ // noinspection JSUnresolvedVariable
+
+ cacheRegion.CACHE_ALLOCATION.map((cache_allocation) => {
+ if (
+ cache_allocation.hasOwnProperty('POLICY') &&
+ cache_allocation.POLICY.length > 0
+ ) {
+ cache_allocation.POLICY.map(policy => {
+ scenarioCATData.push({
+ CACHE_ID: cache_allocation.CACHE_ID,
+ CACHE_LEVEL: cache_allocation.CACHE_LEVEL,
+ CLOS_MASK: policy.CLOS_MASK,
+ META: {vmid: this.vmIDs[policy.VM]},
+ TYPE: policy.TYPE,
+ VCPU: policy.VCPU,
+ VM: policy.VM
+ })
})
- })
- }
+ }

- })
- }
+ })

+ }
+ }
+ console.log('scenarioCATData')
+ console.log(scenarioCATData)
return scenarioCATData
}




Re: [PATCH] config_tools: apply vBDF pattern check to vUART and ivshmem

Junjie Mao
 

Kunhui-Li <kunhuix.li@...> writes:

apply vBDF pattern check to vUART and ivshmem.

v1 --> v2
update the vBDF schema to use "acrn:errormsg" so the error message can be reused.

Tracked-On: #7925
Signed-off-by: Kunhui-Li <kunhuix.li@...>
Reviewed-by: Junjie Mao <junjie.mao@...>

--
Best Regards
Junjie Mao

---
.../ConfigForm/CustomWidget/IVSHMEM_REGION.vue | 9 +++++++--
.../Config/ConfigForm/CustomWidget/VUART.vue | 15 +++++++++++++--
misc/config_tools/schema/types.xsd | 2 +-
3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/IVSHMEM_REGION.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/IVSHMEM_REGION.vue
index 948f414b7..ba22f59ee 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/IVSHMEM_REGION.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/IVSHMEM_REGION.vue
@@ -87,9 +87,9 @@
</b-form-invalid-feedback>
</b-col>
<b-col sm="3">
- <b-form-input v-model="IVSHMEM_VM.VBDF" placeholder="00:[device].[function], e.g. 00:0c.0. All fields are in hexadecimal."/>
+ <b-form-input :state="validateVBDF(IVSHMEM_VM.VBDF)" v-model="IVSHMEM_VM.VBDF" placeholder="00:[device].[function], e.g. 00:0c.0. All fields are in hexadecimal."/>
<b-form-invalid-feedback>
- must have value
+ {{ this.VBDFType["err:pattern"] }}
</b-form-invalid-feedback>
</b-col>
<b-col sm="3">
@@ -171,6 +171,7 @@ export default {
IVSHMEMSize: this.rootSchema.definitions['IVSHMEMSize']['enum'],
IVSHMEMRegionType: this.rootSchema.definitions['IVSHMEMRegionType'],
IVSHMEM_VM: this.rootSchema.definitions['IVSHMEMVM'],
+ VBDFType: this.rootSchema.definitions['VBDFType'],
defaultVal: vueUtils.getPathVal(this.rootFormData, this.curNodePath)
};
},
@@ -196,6 +197,10 @@ export default {
var regexp = new RegExp(this.IVSHMEMRegionType.properties.NAME.pattern);
return (value != null) && regexp.test(value);
},
+ validateVBDF(value) {
+ var regexp = new RegExp(this.VBDFType.pattern);
+ return (value != null) && regexp.test(value);
+ },
validation(value) {
return (value != null) && (value.length !== 0);
},
diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/VUART.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/VUART.vue
index f00ccbec1..7f4ff3f0e 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/VUART.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/VUART.vue
@@ -96,14 +96,20 @@
<b-col sm="4"> Connection_{{ index + 1 }}-{{ VUARTConn.endpoint[0].vm_name }} </b-col>
<b-col sm="4">
<b-form-input v-model="VUARTConn.endpoint[0].io_port" v-if="VUARTConn.type === 'legacy'" :placeholder="vIoPortPlaceholder"/>
- <b-form-input v-model="VUARTConn.endpoint[0].vbdf" v-else-if="VUARTConn.type === 'pci'" :placeholder="vBDFPlaceholder"/>
+ <b-form-input :state="validateVBDF(VUARTConn.endpoint[0].vbdf)" v-model="VUARTConn.endpoint[0].vbdf" v-else-if="VUARTConn.type === 'pci'" :placeholder="vBDFPlaceholder"/>
+ <b-form-invalid-feedback>
+ {{ this.VBDFType["err:pattern"] }}
+ </b-form-invalid-feedback>
</b-col>
</b-row>
<b-row class="justify-content-sm-start align-items-center">
<b-col sm="4"> Connection_{{ index + 1 }}-{{ VUARTConn.endpoint[1].vm_name }} </b-col>
<b-col sm="4">
<b-form-input v-model="VUARTConn.endpoint[1].io_port" v-if="VUARTConn.type === 'legacy'" :placeholder="vIoPortPlaceholder"/>
- <b-form-input v-model="VUARTConn.endpoint[1].vbdf" v-else-if="VUARTConn.type === 'pci'" :placeholder="vBDFPlaceholder"/>
+ <b-form-input :state="validateVBDF(VUARTConn.endpoint[1].vbdf)" v-model="VUARTConn.endpoint[1].vbdf" v-else-if="VUARTConn.type === 'pci'" :placeholder="vBDFPlaceholder"/>
+ <b-form-invalid-feedback>
+ {{ this.VBDFType["err:pattern"] }}
+ </b-form-invalid-feedback>
</b-col>
</b-row>
</div>
@@ -186,6 +192,7 @@ export default {
IOPortDefault: this.rootSchema.definitions['VuartEndpointType']['properties']['io_port']['default'],
VMConfigType: this.rootSchema.definitions['VMConfigType'],
VuartConnectionType: this.rootSchema.definitions['VuartConnectionType'],
+ VBDFType: this.rootSchema.definitions['VBDFType'],
defaultVal: vueUtils.getPathVal(this.rootFormData, this.curNodePath)
};
},
@@ -210,6 +217,10 @@ export default {
validation(value) {
return (value != null) && (value.length != 0);
},
+ validateVBDF(value) {
+ var regexp = new RegExp(this.VBDFType.pattern);
+ return (value != null) && regexp.test(value);
+ },
removeVUARTConnection(index) {
this.defaultVal.splice(index, 1);
},
diff --git a/misc/config_tools/schema/types.xsd b/misc/config_tools/schema/types.xsd
index cdfffb218..4aa4c5cbc 100644
--- a/misc/config_tools/schema/types.xsd
+++ b/misc/config_tools/schema/types.xsd
@@ -178,7 +178,7 @@ Read more about the available scheduling options in :ref:`cpu_sharing`.</xs:docu
</xs:simpleType>

<xs:simpleType name="VBDFType">
- <xs:annotation>
+ <xs:annotation acrn:errormsg="'pattern': 'A string with up to two hex digits, a ``:``, two hex digits, a ``.``, and one digit between 0-7.'">
<xs:documentation>A string with up to two hex digits, a ``:``, two hex digits, a ``.``, and one digit between 0-7.</xs:documentation>
</xs:annotation>
<xs:restriction base="xs:string">


Re: [PATCH] configurator creates duplicate VM name

Junjie Mao
 

It looks strange to me that we are adding "MaxVMID" rather than "VMID"
in the default VM name. Shouldn't MaxVMID be a variable we track for
internal operations, and VMID the one we use to generate a unique
default VM name?

--
Best Regards
Junjie Mao

"Ke, ChuangX" <chuangx.ke@...> writes:

From my view, when a user adds a new VM, VM creation uses the VM ID as part of the name so that it can be unique. The scenario passes maxVMID into the new VM as its VM ID, but maxVMID is not updated when you delete a VM, which causes duplicate names.
Adding this condition keeps maxVMID equal to the current maximum VM ID plus one, so that no duplicate VM name is created.
+ if (vmID >= maxVMID) {
maxVMID = vmID
}
...
maxVMID++
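
One robust alternative, sketched here in Python for brevity (the configurator itself is JavaScript, and next_vm_id is a hypothetical helper): recompute the next free ID from the VMs currently in the scenario instead of carrying a counter across add/delete operations.

def next_vm_id(vms):
    # Derive the ID from the scenario contents each time, so deleting a VM
    # cannot leave a stale counter behind.
    return max((vm['@id'] for vm in vms), default=-1) + 1

vms = [{'@id': 0}, {'@id': 2}]
print('VM' + str(next_vm_id(vms)))  # -> VM3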
Best Regards,
Chuang Ke

-----Original Message-----
From: Mao, Junjie <junjie.mao@...>
Sent: 2022年7月27日 23:15
To: Ke, ChuangX <chuangx.ke@...>
Cc: acrn-dev@...; Xie, Nanlin <nanlin.xie@...>
Subject: Re: [PATCH] configurator creates duplicate VM name

Chuang Ke <chuangx.ke@...> writes:

fix duplicate VM names when creating a new VM
Please explain in the commit message why this VM ID check is related to duplicate names on VM creation.

--
Best Regards
Junjie Mao


Signed-off-by: Chuang-Ke <chuangx.ke@...>
Tracked-On: #7920
---
.../configurator/packages/configurator/src/pages/Config.vue | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g.vue
b/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g.vue
index 55e5c6ed7..e8bd417b3 100644
---
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/C
+++ onfig.vue
@@ -228,7 +228,7 @@ export default {
let haveService = false;
this.scenario.vm.map((vmConfig) => {
let vmID = vmConfig['@id'];
- if (vmID > maxVMID) {
+ if (vmID >= maxVMID) {
maxVMID = vmID
}
if (vmConfig['load_order'] === 'SERVICE_VM') {


[PATCH v1 4/4] config_tools: acpi_gen: generate vRTCT instead of copying a physical one

Junjie Mao
 

As the last step to simplify enabling software SRAM passthrough to a
pre-launched RT VM, this patch generates a virtual RTCT which contains only
a compatibility entry (to indicate that the format of the RTCT is v2) and a
couple of SSRAM or SSRAM waymask entries reporting the software SRAM blocks
the pre-launched VM has access to. That follows the practice of how the
ACRN device model generates virtual RTCTs for post-launched VMs today.

In case RTCT v1 is used physically, this patch still generates a v2 RTCT
for the pre-launched VM but does not add an SSRAM waymask entry there
due to lack of information.
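
For context, every ACPI table carries a checksum byte at offset 9 of its header, chosen so that all bytes of the table sum to zero modulo 256; the patch fills it in after assembling the entries. A minimal standalone sketch of that step:

def fill_acpi_checksum(table: bytearray) -> bytearray:
    table[9] = 0                                 # clear any stale checksum
    table[9] = (256 - (sum(table) % 256)) % 256  # byte sum becomes 0 mod 256
    return table

# A dummy 36-byte table whose checksummed bytes sum to a multiple of 256.
assert sum(fill_acpi_checksum(bytearray(b'RTCT' + bytes(32)))) % 256 == 0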

Signed-off-by: Junjie Mao <junjie.mao@...>
---
misc/config_tools/acpi_gen/acpi_const.py | 4 +-
misc/config_tools/acpi_gen/asl_gen.py | 91 +++++++++++++++++--
misc/config_tools/acpi_gen/bin_gen.py | 8 +-
.../board_inspector/acpiparser/rdt.py | 2 +-
4 files changed, 91 insertions(+), 14 deletions(-)

diff --git a/misc/config_tools/acpi_gen/acpi_const.py b/misc/config_tools/acpi_gen/acpi_const.py
index 024d25b7d..f7e7171a6 100644
--- a/misc/config_tools/acpi_gen/acpi_const.py
+++ b/misc/config_tools/acpi_gen/acpi_const.py
@@ -15,7 +15,7 @@ TEMPLATE_ACPI_PATH = os.path.join(VM_CONFIGS_PATH, 'acpi_template', 'template')

ACPI_TABLE_LIST = [('rsdp.asl', 'rsdp.aml'), ('xsdt.asl', 'xsdt.aml'), ('facp.asl', 'facp.aml'),
('mcfg.asl', 'mcfg.aml'), ('apic.asl', 'apic.aml'), ('tpm2.asl', 'tpm2.aml'),
- ('dsdt.aml', 'dsdt.aml'), ('PTCT', 'ptct.aml'), ('RTCT', 'rtct.aml')]
+ ('dsdt.aml', 'dsdt.aml'), ('ptct.aml', 'ptct.aml'), ('rtct.aml', 'rtct.aml')]

ACPI_BASE = 0x7fe00000

@@ -49,5 +49,3 @@ ACPI_MADT_TYPE_LOCAL_APIC_NMI = 4
TSN_DEVICE_LIST = ['8086:4ba0',
'8086:4bb0',
'8086:4b32']
-
-RTCT = ['RTCT', 'PTCT']
diff --git a/misc/config_tools/acpi_gen/asl_gen.py b/misc/config_tools/acpi_gen/asl_gen.py
index 5a9748a67..4693f0680 100644
--- a/misc/config_tools/acpi_gen/asl_gen.py
+++ b/misc/config_tools/acpi_gen/asl_gen.py
@@ -12,6 +12,8 @@ import collections
import lxml.etree

sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'board_inspector'))
+from acpiparser._utils import TableHeader
+from acpiparser import rtct
from acpiparser import rdt
from acpiparser.dsdt import parse_tree
from acpiparser.aml import builder
@@ -764,6 +766,87 @@ def gen_dsdt(board_etree, scenario_etree, allocation_etree, vm_id, dest_path):

dest.write(binary)

+def gen_rtct(board_etree, scenario_etree, allocation_etree, vm_id, dest_path):
+ def cpu_id_to_lapic_id(cpu_id):
+ return common.get_node(f"//thread[cpu_id = '{cpu_id}']/apic_id/text()", board_etree)
+
+ vm_node = common.get_node(f"//vm[@id='{vm_id}']", scenario_etree)
+ if vm_node is None:
+ return False
+
+ vcpus = ",".join(map(cpu_id_to_lapic_id, vm_node.xpath("cpu_affinity//pcpu_id/text()")))
+ rtct_entries = []
+
+ # ACPI table header
+
+ common_header = create_object(
+ TableHeader,
+ signature = b'RTCT',
+ revision = 1,
+ oemid = b'ACRN ',
+ oemtableid = b'ACRNRTCT',
+ oemrevision = 5,
+ creatorid = b'INTL',
+ creatorrevision = 0x100000d
+ )
+ rtct_entries.append(common_header)
+
+ # Compatibility entry
+
+ compat_entry = create_object(
+ rtct.RTCTSubtableCompatibility,
+ subtable_size = ctypes.sizeof(rtct.RTCTSubtableCompatibility),
+ format_or_version = 1,
+ type = rtct.ACPI_RTCT_TYPE_COMPATIBILITY,
+ rtct_version_major = 2,
+ rtct_version_minor = 0,
+ rtcd_version_major = 0,
+ rtcd_version_minor = 0
+ )
+ rtct_entries.append(compat_entry)
+
+ # SSRAM entries
+
+ # Look for the cache blocks that are visible to this VM and have software SRAM in them. That software SRAM will be
+ # exposed to the VM in the RTCT.
+ for cache in board_etree.xpath(f"//caches/cache[count(processors/processor[contains('{vcpus}', .)]) and capability[@id = 'Software SRAM']]"):
+ ssram_cap = common.get_node("capability[@id = 'Software SRAM']", cache)
+
+ ssram_entry = create_object(
+ rtct.RTCTSubtableSoftwareSRAM_v2,
+ subtable_size = ctypes.sizeof(rtct.RTCTSubtableSoftwareSRAM_v2),
+ format_or_version = 2,
+ type = rtct.ACPI_RTCT_V2_TYPE_SoftwareSRAM,
+ level = int(cache.get("level")),
+ cache_id = int(cache.get("id"), base=16),
+ base = int(ssram_cap.find("start").text, base=16),
+ size = int(ssram_cap.find("size").text),
+ shared = 0
+ )
+ rtct_entries.append(ssram_entry)
+
+ if ssram_cap.find("waymask") is not None:
+ ssram_waymask_entry = create_object(
+ rtct.RTCTSubtableSSRAMWayMask,
+ subtable_size = ctypes.sizeof(rtct.RTCTSubtableSSRAMWayMask),
+ format_or_version = 1,
+ type = rtct.ACPI_RTCT_V2_TYPE_SSRAM_WayMask,
+ level = int(cache.get("level")),
+ cache_id = int(cache.get("id"), base=16),
+ waymask = int(ssram_cap.find("waymask").text, base=16)
+ )
+ rtct_entries.append(ssram_waymask_entry)
+
+ with open(dest_path, "wb") as dest:
+ length = sum(map(ctypes.sizeof, rtct_entries))
+ common_header.length = length
+ binary = bytearray().join(map(bytearray, rtct_entries))
+
+ checksum = (256 - (sum(binary) % 256)) % 256
+ binary[9] = checksum
+
+ dest.write(binary)
+
def main(args):

err_dic = {}
@@ -846,12 +929,8 @@ def main(args):
if not os.path.isdir(dest_vm_acpi_path):
os.makedirs(dest_vm_acpi_path)
if PASSTHROUGH_RTCT is True and vm_id == PRELAUNCHED_RTVM_ID:
- for f in RTCT:
- if os.path.isfile(os.path.join(VM_CONFIGS_PATH, 'acpi_template', board_type, f)):
- passthru_devices.append(f)
- shutil.copy(os.path.join(VM_CONFIGS_PATH, 'acpi_template', board_type, f),
- dest_vm_acpi_path)
- break
+ passthru_devices.append("RTCT")
+ gen_rtct(board_etree, scenario_etree, allocation_etree, vm_id, os.path.join(dest_vm_acpi_path, "rtct.aml"))
gen_rsdp(dest_vm_acpi_path)
gen_xsdt(dest_vm_acpi_path, passthru_devices)
gen_fadt(dest_vm_acpi_path, board_root)
diff --git a/misc/config_tools/acpi_gen/bin_gen.py b/misc/config_tools/acpi_gen/bin_gen.py
index 57ef01ad5..8d8cec462 100644
--- a/misc/config_tools/acpi_gen/bin_gen.py
+++ b/misc/config_tools/acpi_gen/bin_gen.py
@@ -70,7 +70,7 @@ def asl_to_aml(dest_vm_acpi_path, dest_vm_acpi_bin_path, scenario_etree, allocat
os.remove(os.path.join(dest_vm_acpi_path, acpi_table[1]))
rmsg = 'failed to compile {}'.format(acpi_table[0])
break
- elif acpi_table[0] in ['PTCT', 'RTCT']:
+ elif acpi_table[0] in ['ptct.aml', 'rtct.aml']:
if acpi_table[0] in os.listdir(dest_vm_acpi_path):
rtct = acpiparser.rtct.RTCT(os.path.join(dest_vm_acpi_path, acpi_table[0]))
outfile = os.path.join(dest_vm_acpi_bin_path, acpi_table[1])
@@ -95,7 +95,7 @@ def asl_to_aml(dest_vm_acpi_path, dest_vm_acpi_bin_path, scenario_etree, allocat
os.remove(os.path.join(dest_vm_acpi_path, acpi_table[1]))
rmsg = 'failed to compile {}'.format(acpi_table[0])
break
- elif acpi_table[0].endswith(".aml"):
+ elif acpi_table[0].endswith(".aml") and acpi_table[0] in os.listdir(dest_vm_acpi_path):
shutil.copy(os.path.join(dest_vm_acpi_path, acpi_table[0]),
os.path.join(dest_vm_acpi_bin_path, acpi_table[1]))

@@ -171,11 +171,11 @@ def aml_to_bin(dest_vm_acpi_path, dest_vm_acpi_bin_path, acpi_bin_name, board_et
with open(os.path.join(dest_vm_acpi_bin_path, ACPI_TABLE_LIST[6][1]), 'rb') as asl:
acpi_bin.write(asl.read())

- if 'PTCT' in os.listdir(dest_vm_acpi_path):
+ if ACPI_TABLE_LIST[7][1] in os.listdir(dest_vm_acpi_path):
acpi_bin.seek(ACPI_RTCT_ADDR_OFFSET)
with open(os.path.join(dest_vm_acpi_bin_path, ACPI_TABLE_LIST[7][1]), 'rb') as asl:
acpi_bin.write(asl.read())
- elif 'RTCT' in os.listdir(dest_vm_acpi_path):
+ elif ACPI_TABLE_LIST[8][1] in os.listdir(dest_vm_acpi_path):
acpi_bin.seek(ACPI_RTCT_ADDR_OFFSET)
with open(os.path.join(dest_vm_acpi_bin_path, ACPI_TABLE_LIST[8][1]), 'rb') as asl:
acpi_bin.write(asl.read())
diff --git a/misc/config_tools/board_inspector/acpiparser/rdt.py b/misc/config_tools/board_inspector/acpiparser/rdt.py
index 308c76e66..c4fc5e35b 100644
--- a/misc/config_tools/board_inspector/acpiparser/rdt.py
+++ b/misc/config_tools/board_inspector/acpiparser/rdt.py
@@ -30,7 +30,7 @@ SMALL_RESOURCE_ITEM_END_TAG = 0x0F

# 6.4.2.1 IRQ Descriptor

-def SmallResourceItemIRQ_factory(_len):
+def SmallResourceItemIRQ_factory(_len=2):
class SmallResourceItemIRQ(cdata.Struct):
_pack_ = 1
_fields_ = SmallResourceDataTag._fields_ + [
--
2.30.2


[PATCH v1 3/4] config_tools: acpi_gen: refactor ACPI table generation logic

Junjie Mao
 

While functionally correct, the ACPI table (mostly DSDT) generation logic
in asl_gen.py contains multiple occurrences of the same code structure:

cls = <class of the table>
length = ctypes.sizeof(cls)
data = bytearray(length)
res = cls.from_buffer(data)
<setting multiple fields in res>

To minimize code duplication, this patch refactors the logic by abstracting
the creation of an ACPI table object into a helper function which returns a
newly created object of the given class after setting the specified fields.
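
A self-contained sketch of such a helper, shown with a hypothetical Demo structure standing in for the real rdt classes:

import ctypes

def create_object(cls, **kwargs):
    # Allocate a zero-filled buffer of the structure's size, view it as the
    # structure, and set the requested fields.
    data = bytearray(ctypes.sizeof(cls))
    obj = cls.from_buffer(data)
    for key, value in kwargs.items():
        setattr(obj, key, value)
    return obj

class Demo(ctypes.Structure):   # stand-in for an rdt resource descriptor
    _pack_ = 1
    _fields_ = [('type', ctypes.c_uint8), ('length', ctypes.c_uint16)]

res = create_object(Demo, type=1, length=ctypes.sizeof(Demo) - 1)
print(bytes(res).hex())  # 010200: type=1, length=2 (little-endian)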

Signed-off-by: Junjie Mao <junjie.mao@...>
---
misc/config_tools/acpi_gen/asl_gen.py | 231 +++++++++++++-------------
1 file changed, 118 insertions(+), 113 deletions(-)

diff --git a/misc/config_tools/acpi_gen/asl_gen.py b/misc/config_tools/acpi_gen/asl_gen.py
index b23eecfc9..5a9748a67 100644
--- a/misc/config_tools/acpi_gen/asl_gen.py
+++ b/misc/config_tools/acpi_gen/asl_gen.py
@@ -344,82 +344,89 @@ def encode_eisa_id(s):
]
return int.from_bytes(bytes(encoded), sys.byteorder)

+def create_object(cls, **kwargs):
+ length = ctypes.sizeof(cls)
+ data = bytearray(length)
+ obj = cls.from_buffer(data)
+ for key, value in kwargs.items():
+ setattr(obj, key, value)
+ return obj
+
def gen_root_pci_bus(path, prt_packages):
resources = []

# Bus number
- cls = rdt.LargeResourceItemWordAddressSpace_factory()
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 1 # Large type
- res.name = rdt.LARGE_RESOURCE_ITEM_WORD_ADDRESS_SPACE
- res.length = length - 3
- res._TYP = 2 # Bus number range
- res._DEC = 0 # Positive decoding
- res._MIF = 1 # Minimum address fixed
- res._MAF = 1 # Maximum address fixed
- res.flags = 0
- res._MAX = 0xff
- res._LEN = 0x100
- resources.append(data)
+ word_address_space_cls = rdt.LargeResourceItemWordAddressSpace_factory()
+ res = create_object(
+ word_address_space_cls,
+ type = 1, # Large type
+ name = rdt.LARGE_RESOURCE_ITEM_WORD_ADDRESS_SPACE,
+ length = ctypes.sizeof(word_address_space_cls) - 3,
+ _TYP = 2, # Bus number range
+ _DEC = 0, # Positive decoding
+ _MIF = 1, # Minimum address fixed
+ _MAF = 1, # Maximum address fixed
+ flags = 0,
+ _MAX = 0xff,
+ _LEN = 0x100
+ )
+ resources.append(res)

# The PCI hole below 4G
- cls = rdt.LargeResourceItemDWordAddressSpace_factory()
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 1 # Large type
- res.name = rdt.LARGE_RESOURCE_ITEM_ADDRESS_SPACE_RESOURCE
- res.length = length - 3
- res._TYP = 0 # Memory range
- res._DEC = 0 # Positive decoding
- res._MIF = 1 # Minimum address fixed
- res._MAF = 1 # Maximum address fixed
- res.flags = 1 # read-write, non-cachable, TypeStatic
- res._MIN = 0x80000000
- res._MAX = 0xdfffffff
- res._LEN = 0x60000000
- resources.append(data)
+ dword_address_space_cls = rdt.LargeResourceItemDWordAddressSpace_factory()
+ res = create_object(
+ dword_address_space_cls,
+ type = 1, # Large type
+ name = rdt.LARGE_RESOURCE_ITEM_ADDRESS_SPACE_RESOURCE,
+ length = ctypes.sizeof(dword_address_space_cls) - 3,
+ _TYP = 0, # Memory range
+ _DEC = 0, # Positive decoding
+ _MIF = 1, # Minimum address fixed
+ _MAF = 1, # Maximum address fixed
+ flags = 1, # read-write, non-cachable, TypeStatic
+ _MIN = 0x80000000,
+ _MAX = 0xdfffffff,
+ _LEN = 0x60000000
+ )
+ resources.append(res)

# The PCI hole above 4G
- cls = rdt.LargeResourceItemQWordAddressSpace_factory()
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 1 # Large type
- res.name = rdt.LARGE_RESOURCE_ITEM_QWORD_ADDRESS_SPACE
- res.length = length - 3
- res._TYP = 0 # Memory range
- res._DEC = 0 # Positive decoding
- res._MIF = 1 # Minimum address fixed
- res._MAF = 1 # Maximum address fixed
- res.flags = 1 # read-write, non-cachable, TypeStatic
- res._MIN = 0x4000000000
- res._MAX = 0x7fffffffff
- res._LEN = 0x4000000000
- resources.append(data)
-
- cls = rdt.LargeResourceItemDWordAddressSpace_factory()
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 1 # Large type
- res.name = rdt.LARGE_RESOURCE_ITEM_ADDRESS_SPACE_RESOURCE
- res.length = length - 3
- res._TYP = 1 # Memory range
- res._DEC = 0 # Positive decoding
- res._MIF = 1 # Minimum address fixed
- res._MAF = 1 # Maximum address fixed
- res.flags = 3 # Entire range, TypeStatic
- res._MIN = 0x0
- res._MAX = 0xffff
- res._LEN = 0x10000
- resources.append(data)
+ res = create_object(
+ dword_address_space_cls,
+ type = 1, # Large type
+ name = rdt.LARGE_RESOURCE_ITEM_ADDRESS_SPACE_RESOURCE,
+ length = ctypes.sizeof(dword_address_space_cls) - 3,
+ _TYP = 0, # Memory range
+ _DEC = 0, # Positive decoding
+ _MIF = 1, # Minimum address fixed
+ _MAF = 1, # Maximum address fixed
+ flags = 1, # read-write, non-cachable, TypeStatic
+ _MIN = 0x4000000000,
+ _MAX = 0x7fffffffff,
+ _LEN = 0x4000000000
+ )
+ resources.append(res)
+
+ # The PCI hole for I/O ports
+ res = create_object(
+ dword_address_space_cls,
+ type = 1, # Large type
+ name = rdt.LARGE_RESOURCE_ITEM_ADDRESS_SPACE_RESOURCE,
+ length = ctypes.sizeof(dword_address_space_cls) - 3,
+ _TYP = 1, # I/O range
+ _DEC = 0, # Positive decoding
+ _MIF = 1, # Minimum address fixed
+ _MAF = 1, # Maximum address fixed
+ flags = 3, # Entire range, TypeStatic
+ _MIN = 0x0,
+ _MAX = 0xffff,
+ _LEN = 0x10000
+ )
+ resources.append(res)

# End tag
resources.append(bytes([0x79, 0]))
- resource_buf = bytearray().join(resources)
+ resource_buf = bytearray().join(map(bytearray, resources))
checksum = (256 - (sum(resource_buf) % 256)) % 256
resource_buf[-1] = checksum

@@ -458,33 +465,32 @@ def gen_root_pci_bus(path, prt_packages):
def pnp_uart(path, uid, ddn, port, irq):
resources = []

- cls = rdt.SmallResourceItemIOPort
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 0
- res.name = rdt.SMALL_RESOURCE_ITEM_IO_PORT
- res.length = 7
- res._DEC = 1
- res._MIN = port
- res._MAX = port
- res._ALN = 1
- res._LEN = 8
- resources.append(data)
-
- cls = rdt.SmallResourceItemIRQ_factory(2)
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 0
- res.name = rdt.SMALL_RESOURCE_ITEM_IRQ_FORMAT
- res.length = 2
- res._INT = 1 << irq
- resources.append(data)
+ res = create_object(
+ rdt.SmallResourceItemIOPort,
+ type = 0,
+ name = rdt.SMALL_RESOURCE_ITEM_IO_PORT,
+ length = ctypes.sizeof(rdt.SmallResourceItemIOPort) - 1,
+ _DEC = 1,
+ _MIN = port,
+ _MAX = port,
+ _ALN = 1,
+ _LEN = 8
+ )
+ resources.append(res)
+
+ cls = rdt.SmallResourceItemIRQ_factory()
+ res = create_object(
+ cls,
+ type = 0,
+ name = rdt.SMALL_RESOURCE_ITEM_IRQ_FORMAT,
+ length = ctypes.sizeof(cls) - 1,
+ _INT = 1 << irq
+ )
+ resources.append(res)

resources.append(bytes([0x79, 0]))

- resource_buf = bytearray().join(resources)
+ resource_buf = bytearray().join(map(bytearray, resources))
checksum = (256 - (sum(resource_buf) % 256)) % 256
resource_buf[-1] = checksum
uart = builder.DefDevice(
@@ -512,33 +518,32 @@ def pnp_uart(path, uid, ddn, port, irq):
def pnp_rtc(path):
resources = []

- cls = rdt.SmallResourceItemIOPort
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 0
- res.name = rdt.SMALL_RESOURCE_ITEM_IO_PORT
- res.length = 7
- res._DEC = 1
- res._MIN = 0x70
- res._MAX = 0x70
- res._ALN = 1
- res._LEN = 8
- resources.append(data)
-
- cls = rdt.SmallResourceItemIRQ_factory(2)
- length = ctypes.sizeof(cls)
- data = bytearray(length)
- res = cls.from_buffer(data)
- res.type = 0
- res.name = rdt.SMALL_RESOURCE_ITEM_IRQ_FORMAT
- res.length = 2
- res._INT = 1 << 8
- resources.append(data)
+ res = create_object(
+ rdt.SmallResourceItemIOPort,
+ type = 0,
+ name = rdt.SMALL_RESOURCE_ITEM_IO_PORT,
+ length = ctypes.sizeof(rdt.SmallResourceItemIOPort) - 1,
+ _DEC = 1,
+ _MIN = 0x70,
+ _MAX = 0x70,
+ _ALN = 1,
+ _LEN = 8
+ )
+ resources.append(res)
+
+ cls = rdt.SmallResourceItemIRQ_factory()
+ res = create_object(
+ cls,
+ type = 0,
+ name = rdt.SMALL_RESOURCE_ITEM_IRQ_FORMAT,
+ length = ctypes.sizeof(cls) - 1,
+ _INT = 1 << 8
+ )
+ resources.append(res)

resources.append(bytes([0x79, 0]))

- resource_buf = bytearray().join(resources)
+ resource_buf = bytearray().join(map(bytearray, resources))
checksum = (256 - (sum(resource_buf) % 256)) % 256
resource_buf[-1] = checksum
rtc = builder.DefDevice(
--
2.30.2


[PATCH v1 2/4] config_tools: board_inspector: record all details from RTCT in board XML

Junjie Mao
 

Today users still need to manually copy the RTCT binary file when they want
to pass through software SRAM to a pre-launched RTVM, which is far from
user friendly.

To get rid of that step, this patch extracts all information from the RTCT
table and formats it in the board XML, which is the only file users need to
copy from their target platform to build the hypervisor. The patch that
immediately follows will then use that information to generate the vRTCT for
the pre-launched VM.

A side effect of this change is that more ranges, representing regions
reported by the RTCT such as the CRL binary or the error log area, will be
added to the `memory` section of the board XML. The `id` attribute of each
such range identifies what the range is for. As a result, getting the RAM of
the physical platform from the board XML requires additional conditions on
the `id` attributes to avoid counting non-RAM regions unintentionally.
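
A minimal sketch of that filtering, with a made-up board XML fragment and the XPath expression used in memory_allocator.py below:

import lxml.etree as etree

# Three made-up ranges: one legacy range without an id, one explicit RAM
# range, and one non-RAM range recorded from the RTCT.
board = etree.fromstring(
    "<acrn-config><memory>"
    "<range start='0x100000' size='1048576'/>"
    "<range id='RAM' start='0x200000' size='2097152'/>"
    "<range id='CRL Binary' start='0x300000' size='4096'/>"
    "</memory></acrn-config>")

ram_range = {int(r.get("start"), base=16): int(r.get("size"), base=10)
             for r in board.xpath("/acrn-config/memory/range[not(@id) or @id = 'RAM']")}
print(ram_range)  # {1048576: 1048576, 2097152: 2097152}; the CRL range is excluded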

Signed-off-by: Junjie Mao <junjie.mao@...>
---
.../board_inspector/extractors/20-cache.py | 29 -----
.../extractors/40-acpi-tables.py | 105 +++++++++++++++++-
.../board_inspector/legacy/acpi.py | 9 --
.../static_allocators/memory_allocator.py | 10 +-
4 files changed, 106 insertions(+), 47 deletions(-)

diff --git a/misc/config_tools/board_inspector/extractors/20-cache.py b/misc/config_tools/board_inspector/extractors/20-cache.py
index 511db81d8..7f92327f0 100644
--- a/misc/config_tools/board_inspector/extractors/20-cache.py
+++ b/misc/config_tools/board_inspector/extractors/20-cache.py
@@ -67,39 +67,10 @@ def extract_topology(root_node, caches_node):
return (level, id, type)
caches_node[:] = sorted(caches_node, key=getkey)

-def extract_tcc_capabilities(caches_node):
- try:
- rtct = parse_rtct()
- if rtct.version == 1:
- for entry in rtct.entries:
- if entry.type == acpiparser.rtct.ACPI_RTCT_V1_TYPE_SoftwareSRAM:
- cache_node = get_node(caches_node, f"cache[@level='{entry.cache_level}' and processors/processor='{hex(entry.apic_id_tbl[0])}']")
- if cache_node is None:
- logging.debug(f"Cannot find the level {entry.cache_level} cache of physical processor with apic ID {entry.apic_id_tbl[0]}")
- continue
- cap = add_child(cache_node, "capability", None, id="Software SRAM")
- add_child(cap, "start", "0x{:08x}".format(entry.base))
- add_child(cap, "end", "0x{:08x}".format(entry.base + entry.size - 1))
- add_child(cap, "size", str(entry.size))
- elif rtct.version == 2:
- for entry in rtct.entries:
- if entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_SoftwareSRAM:
- cache_node = get_node(caches_node, f"cache[@level='{entry.level}' and @id='{hex(entry.cache_id)}']")
- if cache_node is None:
- logging.debug(f"Cannot find the level {entry.level} cache with cache ID {entry.cache_id}")
- continue
- cap = add_child(cache_node, "capability", None, id="Software SRAM")
- add_child(cap, "start", "0x{:08x}".format(entry.base))
- add_child(cap, "end", "0x{:08x}".format(entry.base + entry.size - 1))
- add_child(cap, "size", str(entry.size))
- except FileNotFoundError:
- pass
-
def extract(args, board_etree):
root_node = board_etree.getroot()
caches_node = get_node(board_etree, "//caches")
extract_topology(root_node, caches_node)
- extract_tcc_capabilities(caches_node)

# Inject the explicitly specified CAT capability if exists
if args.add_llc_cat:
diff --git a/misc/config_tools/board_inspector/extractors/40-acpi-tables.py b/misc/config_tools/board_inspector/extractors/40-acpi-tables.py
index 0e2de3e9b..8e74e8619 100644
--- a/misc/config_tools/board_inspector/extractors/40-acpi-tables.py
+++ b/misc/config_tools/board_inspector/extractors/40-acpi-tables.py
@@ -9,7 +9,7 @@ import re, os, fcntl, errno
import lxml.etree
from extractors.helpers import add_child, get_node

-from acpiparser import parse_apic, apic
+import acpiparser

DEFAULT_MAX_IOAPIC_LINES = 240

@@ -40,16 +40,113 @@ def extract_gsi_number(ioapic_node, apic_id):

def extract_topology(ioapics_node, tables):
for subtable in tables.interrupt_controller_structures:
- if subtable.subtype == apic.MADT_TYPE_IO_APIC:
+ if subtable.subtype == acpiparser.apic.MADT_TYPE_IO_APIC:
apic_id = subtable.io_apic_id
ioapic_node = add_child(ioapics_node, "ioapic", None, id=hex(apic_id))
add_child(ioapic_node, "address", hex(subtable.io_apic_addr))
add_child(ioapic_node, "gsi_base", hex(subtable.global_sys_int_base))
extract_gsi_number(ioapic_node, apic_id)

+def extract_tcc_capabilities_from_rtct_v1(board_etree, rtct):
+ caches_node = get_node(board_etree, "//caches")
+
+ for entry in rtct.entries:
+ if entry.type == acpiparser.rtct.ACPI_RTCT_V1_TYPE_SoftwareSRAM:
+ cache_node = get_node(caches_node, f"cache[@level='{entry.cache_level}' and processors/processor='{hex(entry.apic_id_tbl[0])}']")
+ if cache_node is None:
+ logging.debug(f"Cannot find the level {entry.cache_level} cache of physical processor with apic ID {entry.apic_id_tbl[0]}")
+ continue
+ cap = add_child(cache_node, "capability", None, id="Software SRAM")
+ add_child(cap, "start", "0x{:08x}".format(entry.base))
+ add_child(cap, "end", "0x{:08x}".format(entry.base + entry.size - 1))
+ add_child(cap, "size", str(entry.size))
+
+def extract_tcc_capabilities_from_rtct_v2(board_etree, rtct):
+ def get_or_create_ssram_node(cache_node):
+ ssram_node = get_node(cache_node, "capability[@id = 'Software SRAM']")
+ if ssram_node is None:
+ ssram_node = add_child(cache_node, "capability", None, id="Software SRAM")
+ return ssram_node
+
+ caches_node = get_node(board_etree, "//caches")
+ memory_node = get_node(board_etree, "//memory")
+ devices_node = get_node(board_etree, "//devices")
+
+ for entry in rtct.entries:
+ cache_node = None
+ if hasattr(entry, "level") and hasattr(entry, "cache_id"):
+ cache_node = get_node(caches_node, f"cache[@level='{entry.level}' and @id='{hex(entry.cache_id)}']")
+ if cache_node is None:
+ logging.debug(f"Cannot find the level {entry.level} cache with cache ID {entry.cache_id}")
+ continue
+
+ if entry.type == acpiparser.rtct.ACPI_RTCT_TYPE_COMPATIBILITY:
+ rtct_version_node = add_child(caches_node, "parameter", None, id="RTCT Version")
+ add_child(rtct_version_node, "major", str(entry.rtct_version_major))
+ add_child(rtct_version_node, "minor", str(entry.rtct_version_minor))
+ rtcd_version_node = add_child(caches_node, "parameter", None, id="RTCD Version")
+ add_child(rtcd_version_node, "major", str(entry.rtcd_version_major))
+ add_child(rtcd_version_node, "minor", str(entry.rtcd_version_minor))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_RTCD_Limits:
+ add_child(caches_node, "parameter", str(entry.total_ia_l2_clos), id="Total IA L2 CLOS")
+ add_child(caches_node, "parameter", str(entry.total_ia_l3_clos), id="Total IA L3 CLOS")
+ add_child(caches_node, "parameter", str(entry.total_l2_instances), id="Total IA L2 Instances")
+ add_child(caches_node, "parameter", str(entry.total_l3_instances), id="Total IA L3 Instances")
+ add_child(caches_node, "parameter", str(entry.total_gt_clos), id="Total GT CLOS")
+ add_child(caches_node, "parameter", str(entry.total_wrc_clos), id="Total WRC CLOS")
+ add_child(devices_node, "parameter", str(entry.max_tcc_streams), id="Maximum TCC Streams")
+ add_child(devices_node, "parameter", str(entry.max_tcc_registers), id="Maximum TCC Registers")
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_CRL_Binary:
+ start = "0x{:016x}".format(entry.address)
+ end = "0x{:016x}".format(entry.address + entry.size - 1)
+ size = str(entry.size)
+ region = add_child(memory_node, "range", None, id="CRL Binary", start=start, end=end, size=size)
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_IA_WayMasks:
+ cap = add_child(cache_node, "parameter", None, id="IA Waymask")
+ for waymask in entry.waymask:
+ add_child(cap, "waymask", hex(waymask))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_WRC_WayMasks:
+ cap = add_child(cache_node, "parameter", None, id="WRC Waymask")
+ add_child(cap, "waymask", hex(entry.waymask))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_GT_WayMasks:
+ cap = add_child(cache_node, "parameter", None, id="GT Waymask")
+ for waymask in entry.waymask:
+ add_child(cap, "waymask", hex(waymask))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_SSRAM_WayMask:
+ ssram_node = get_or_create_ssram_node(cache_node)
+ add_child(ssram_node, "waymask", hex(entry.waymask))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_SoftwareSRAM:
+ ssram_node = get_or_create_ssram_node(cache_node)
+ add_child(ssram_node, "start", "0x{:08x}".format(entry.base))
+ add_child(ssram_node, "end", "0x{:08x}".format(entry.base + entry.size - 1))
+ add_child(ssram_node, "size", str(entry.size))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_MemoryHierarchyLatency:
+ for cache_id in entry.cache_id:
+ cache_node = get_node(caches_node, f"cache[@level='{entry.hierarchy}' and @id='{hex(cache_id)}']")
+ if cache_node is None:
+ logging.debug(f"Cannot find the level {entry.hierarchy} cache with cache ID {cache_id}")
+ continue
+ param = add_child(cache_node, "parameter", None, id="Worst Case Access Latency")
+ add_child(param, "native", str(entry.clock_cycles))
+ elif entry.type == acpiparser.rtct.ACPI_RTCT_V2_TYPE_ErrorLogAddress:
+ start = "0x{:016x}".format(entry.address)
+ end = "0x{:016x}".format(entry.address + entry.size - 1)
+ size = str(entry.size)
+ region = add_child(memory_node, "range", None, id="TCC Error Log", start=start, end=end, size=size)
+
+def extract_tcc_capabilities(board_etree):
+ try:
+ rtct = acpiparser.parse_rtct()
+ if rtct.version == 1:
+ extract_tcc_capabilities_from_rtct_v1(board_etree, rtct)
+ elif rtct.version == 2:
+ extract_tcc_capabilities_from_rtct_v2(board_etree, rtct)
+ except FileNotFoundError:
+ pass
+
def extract(args, board_etree):
try:
- tables = parse_apic()
+ tables = acpiparser.parse_apic()
except Exception as e:
logging.warning(f"Parse ACPI tables failed: {str(e)}")
logging.warning(f"Will not extract information from ACPI tables")
@@ -57,3 +154,5 @@ def extract(args, board_etree):

ioapics_node = get_node(board_etree, "//ioapics")
extract_topology(ioapics_node, tables)
+
+ extract_tcc_capabilities(board_etree)
diff --git a/misc/config_tools/board_inspector/legacy/acpi.py b/misc/config_tools/board_inspector/legacy/acpi.py
index 39c5dd3a8..3633645c4 100644
--- a/misc/config_tools/board_inspector/legacy/acpi.py
+++ b/misc/config_tools/board_inspector/legacy/acpi.py
@@ -656,12 +656,3 @@ def generate_info(board_file):
# Generate board info
with open(board_file, 'a+') as config:
gen_acpi_info(config)
-
- # get the PTCT/RTCT table from native environment
- out_dir = os.path.dirname(board_file)
- if os.path.isfile(SYS_PATH[1] + 'PTCT'):
- shutil.copy(SYS_PATH[1] + 'PTCT', out_dir if out_dir != "" else "./")
- logging.info("PTCT table has been saved to {} successfully!".format(os.path.join(out_dir, 'PTCT')))
- if os.path.isfile(SYS_PATH[1] + 'RTCT'):
- shutil.copy(SYS_PATH[1] + 'RTCT', out_dir if out_dir != "" else "./")
- logging.info("RTCT table has been saved to {} successfully!".format(os.path.join(out_dir, 'RTCT')))
diff --git a/misc/config_tools/static_allocators/memory_allocator.py b/misc/config_tools/static_allocators/memory_allocator.py
index eb00f2749..0f1f39955 100644
--- a/misc/config_tools/static_allocators/memory_allocator.py
+++ b/misc/config_tools/static_allocators/memory_allocator.py
@@ -13,12 +13,10 @@ import common, math, logging

def import_memory_info(board_etree):
ram_range = {}
- start = board_etree.xpath("/acrn-config/memory/range/@start")
- size = board_etree.xpath("/acrn-config/memory/range/@size")
- for i in range(len(start)):
- start_hex = int(start[i], 16)
- size_hex = int(size[i], 10)
- ram_range[start_hex] = size_hex
+ for memory_range in board_etree.xpath("/acrn-config/memory/range[not(@id) or @id = 'RAM']"):
+ start = int(memory_range.get("start"), base=16)
+ size = int(memory_range.get("size"), base=10)
+ ram_range[start] = size

return ram_range

--
2.30.2


[PATCH v1 1/4] config_tools: board_inspector: refactor ACPI RTCT parser

Junjie Mao
 

This patch refactors and fixes the following in the ACPI RTCT parser of the
board inspector.

1. Refactor to expose the RTCTSubtableSoftwareSRAM_v2 class directly, as
it is a fixed-size entry. There is no need to create a dynamic class,
a pattern that is mostly for variable-length entries (see the sketch
after this list).

2. Rename the "format" field in RTCT entry header to "format_or_version",
as that field actually means "version" in RTCT v2.

3. Properly parse the RTCT compatibility entry which is currently parsed
as an unknown entry with raw data.
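
For contrast, a simplified sketch of the factory pattern that remains for variable-length entries (field names follow the memory hierarchy latency entry; the size arithmetic here is an assumption for illustration):

import ctypes, copy

class RTCTSubtable(ctypes.Structure):
    _pack_ = 1
    _fields_ = [('subtable_size', ctypes.c_uint16),
                ('format_or_version', ctypes.c_uint16),
                ('type', ctypes.c_uint32)]

def RTCTSubtableMemoryHierarchyLatency_v2_factory(data_len):
    # The number of trailing cache IDs is only known at parse time, so a
    # class is built per subtable: whatever is left after the two fixed
    # u32 fields is treated as an array of u32 cache IDs.
    nr_ids = (data_len - 8) // 4
    class RTCTSubtableMemoryHierarchyLatency_v2(ctypes.Structure):
        _pack_ = 1
        _fields_ = copy.copy(RTCTSubtable._fields_) + [
            ('hierarchy', ctypes.c_uint32),
            ('clock_cycles', ctypes.c_uint32),
            ('cache_id', ctypes.c_uint32 * nr_ids)]
    return RTCTSubtableMemoryHierarchyLatency_v2

cls = RTCTSubtableMemoryHierarchyLatency_v2_factory(16)  # room for 2 cache IDs
print(ctypes.sizeof(cls))  # 8-byte header + 16 data bytes = 24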

Signed-off-by: Junjie Mao <junjie.mao@...>
---
.../board_inspector/acpiparser/rtct.py | 28 +++++++++----------
1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/misc/config_tools/board_inspector/acpiparser/rtct.py b/misc/config_tools/board_inspector/acpiparser/rtct.py
index 46b3097a4..6bfce3cfa 100644
--- a/misc/config_tools/board_inspector/acpiparser/rtct.py
+++ b/misc/config_tools/board_inspector/acpiparser/rtct.py
@@ -15,7 +15,7 @@ class RTCTSubtable(cdata.Struct):
_pack_ = 1
_fields_ = [
('subtable_size', ctypes.c_uint16),
- ('format', ctypes.c_uint16),
+ ('format_or_version', ctypes.c_uint16),
('type', ctypes.c_uint32),
]

@@ -152,17 +152,15 @@ class RTCTSubtableSSRAMWayMask(cdata.Struct):
('waymask', ctypes.c_uint32),
]

-def RTCTSubtableSoftwareSRAM_v2_factory(data_len):
- class RTCTSubtableSoftwareSRAM_v2(cdata.Struct):
- _pack_ = 1
- _fields_ = copy.copy(RTCTSubtable._fields_) + [
- ('level', ctypes.c_uint32),
- ('cache_id', ctypes.c_uint32),
- ('base', ctypes.c_uint64),
- ('size', ctypes.c_uint32),
- ('shared', ctypes.c_uint32),
- ]
- return RTCTSubtableSoftwareSRAM_v2
+class RTCTSubtableSoftwareSRAM_v2(cdata.Struct):
+ _pack_ = 1
+ _fields_ = copy.copy(RTCTSubtable._fields_) + [
+ ('level', ctypes.c_uint32),
+ ('cache_id', ctypes.c_uint32),
+ ('base', ctypes.c_uint64),
+ ('size', ctypes.c_uint32),
+ ('shared', ctypes.c_uint32),
+ ]

def RTCTSubtableMemoryHierarchyLatency_v2_factory(data_len):
class RTCTSubtableMemoryHierarchyLatency_v2(cdata.Struct):
@@ -226,7 +224,9 @@ def rtct_v2_subtable_list(addr, length):
subtable_num += 1
subtable = RTCTSubtable.from_address(addr)
data_len = subtable.subtable_size - ctypes.sizeof(RTCTSubtable)
- if subtable.type == ACPI_RTCT_V2_TYPE_RTCD_Limits:
+ if subtable.type == ACPI_RTCT_TYPE_COMPATIBILITY:
+ cls = RTCTSubtableCompatibility
+ elif subtable.type == ACPI_RTCT_V2_TYPE_RTCD_Limits:
cls = RTCTSubtableRTCDLimits
elif subtable.type == ACPI_RTCT_V2_TYPE_CRL_Binary:
cls = RTCTSubtableRTCMBinary
@@ -239,7 +239,7 @@ def rtct_v2_subtable_list(addr, length):
elif subtable.type == ACPI_RTCT_V2_TYPE_SSRAM_WayMask:
cls = RTCTSubtableSSRAMWayMask
elif subtable.type == ACPI_RTCT_V2_TYPE_SoftwareSRAM:
- cls = RTCTSubtableSoftwareSRAM_v2_factory(data_len)
+ cls = RTCTSubtableSoftwareSRAM_v2
elif subtable.type == ACPI_RTCT_V2_TYPE_MemoryHierarchyLatency:
cls = RTCTSubtableMemoryHierarchyLatency_v2_factory(data_len)
elif subtable.type == ACPI_RTCT_V2_TYPE_ErrorLogAddress:
--
2.30.2


[PATCH v1 0/4] Generate vRTCT for pre-launched RT VMs

Junjie Mao
 

Today the ACRN build system reuses the physical RTCT as the vRTCT for a
pre-launched RT VM which has access to software SRAM. This results in two
issues:

1. Users have to manually copy the physical RTCT to their build machine.
That violates the ACRN development framework, which defines the board XML
as the only file users need to copy.

2. All information in the physical RTCT is visible to that
pre-launched RT VM, including L2 software SRAM it does not have access
to, GT way masks even when graphics is not passed through to it, and
the addresses of the CRL binary and the TCC error log.

To resolve the developer experience and functionality concerns above, this
patch series puts the information extracted from the RTCT into the board XML
file and generates a virtual RTCT table (with only the information
meaningful to that VM) when needed.

The series contains four patches, with 1/4 and 3/4 being refactoring, 2/4
changing the board inspector and 4/4 adding the vRTCT generation logic.

Junjie Mao (4):
config_tools: board_inspector: refactor ACPI RTCT parser
config_tools: board_inspector: record all details from RTCT in board
XML
config_tools: acpi_gen: refactor ACPI table generation logic
config_tools: acpi_gen: generate vRTCT instead of copying a physical
one

misc/config_tools/acpi_gen/acpi_const.py | 4 +-
misc/config_tools/acpi_gen/asl_gen.py | 322 +++++++++++-------
misc/config_tools/acpi_gen/bin_gen.py | 8 +-
.../board_inspector/acpiparser/rdt.py | 2 +-
.../board_inspector/acpiparser/rtct.py | 28 +-
.../board_inspector/extractors/20-cache.py | 29 --
.../extractors/40-acpi-tables.py | 105 +++++-
.../board_inspector/legacy/acpi.py | 9 -
.../static_allocators/memory_allocator.py | 10 +-
9 files changed, 329 insertions(+), 188 deletions(-)

--
2.30.2
