
Re: [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

Eddie Dong
 

Please PR with my ack.

-----Original Message-----
From: Liu, Yifan1 <yifan1.liu@...>
Sent: Monday, July 11, 2022 5:32 PM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Subject: RE: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

Yes, and also partially revert commit
10963b04d127c3a321b30467b986f13cbd09db66 (the vcpu_try_cancel_request
function), which is no longer needed if we revert the
d575edf79a9b8dcde16574e28c213be7013470c7.

-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, July 12, 2022 8:02 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: RE: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

This is to revert the commit d575edf79a9b8dcde16574e28c213be7013470c7,
right?

-----Original Message-----
From: acrn-dev@...
<acrn-dev@...> On Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

From: Yifan Liu <yifan1.liu@...>

Commit d575edf79a9b8dcde16574e28c213be7013470c7 changed the internal
implementation of wait_event and signal_event to use a counter instead
of a boolean value.

The background: ACRN uses the vcpu_make_request and signal_event pair
to shoot down other vcpus and make them wait for signals.
vcpu_make_request eventually leads to the target vcpu calling
wait_event.

However, the vcpu_make_request/signal_event pair was not thread-safe,
and concurrent calls to this pair of APIs could lead to problems.
One such example is concurrent wbinvd emulation, where vcpus may
concurrently issue vcpu_make_request/signal_event to synchronize
wbinvd emulation.

Commit d575edf uses a counter in the internal implementation of
wait_event/signal_event to avoid these data races.

However, by using a counter, the wait/signal pair now carries the
semantics of a semaphore rather than an event. A semaphore requires
callers to balance signals and waits exactly, instead of allowing the
same event to be signaled any number of times, which deviates from the
original "event" semantics.

This patch changes the API implementation back to boolean-based; the
next patch re-resolves the issue of concurrent wbinvd.

This also partially reverts commit
10963b04d127c3a321b30467b986f13cbd09db66,
which was introduced because of d575edf.

Signed-off-by: Yifan Liu <yifan1.liu@...>
---
 hypervisor/arch/x86/guest/lock_instr_emul.c  | 20 +-------------------
 hypervisor/arch/x86/guest/virq.c             |  6 ------
 hypervisor/common/event.c                    | 10 +++++-----
 hypervisor/include/arch/x86/asm/guest/virq.h |  1 -
 hypervisor/include/common/event.h            |  2 +-
 5 files changed, 7 insertions(+), 32 deletions(-)

diff --git a/hypervisor/arch/x86/guest/lock_instr_emul.c b/hypervisor/arch/x86/guest/lock_instr_emul.c
index 8729e3369..10947a424 100644
--- a/hypervisor/arch/x86/guest/lock_instr_emul.c
+++ b/hypervisor/arch/x86/guest/lock_instr_emul.c
@@ -60,25 +60,7 @@ void vcpu_complete_lock_instr_emulation(struct acrn_vcpu *cur_vcpu)
 	if (cur_vcpu->vm->hw.created_vcpus > 1U) {
 		foreach_vcpu(i, cur_vcpu->vm, other) {
 			if ((other != cur_vcpu) && (other->state == VCPU_RUNNING)) {
-				/*
-				 * When we vcpu_make_request above, there is a time window between kick_vcpu and
-				 * the target vcpu actually does acrn_handle_pending_request (and eventually wait_event).
-				 * It is possible that the issuing vcpu has already completed lock emulation and
-				 * calls signal_event, while the target vcpu has not yet reaching acrn_handle_pending_request.
-				 *
-				 * This causes problem: Say we have vcpuA make request to vcpuB.
-				 * During the above said time window, if A does kick_lock/complete_lock pair multiple times,
-				 * or some other vcpu C makes request to B after A releases the vm lock below, then the bit
-				 * in B's pending_req will be cleared only once, and B will call wait_event only once,
-				 * while other vcpu may call signal_event many times to B. I.e., one bit is not enough
-				 * to cache multiple requests.
-				 *
-				 * To work this around, we try to cancel the request (clear the bit in pending_req) if it
-				 * is unhandled, and signal_event only when the request was already handled.
-				 */
-				if (!vcpu_try_cancel_request(other, ACRN_REQUEST_SPLIT_LOCK)) {
-					signal_event(&other->events[VCPU_EVENT_SPLIT_LOCK]);
-				}
+				signal_event(&other->events[VCPU_EVENT_SPLIT_LOCK]);
 			}
 		}

diff --git a/hypervisor/arch/x86/guest/virq.c b/hypervisor/arch/x86/guest/virq.c
index 02c3c5dd9..014094014 100644
--- a/hypervisor/arch/x86/guest/virq.c
+++ b/hypervisor/arch/x86/guest/virq.c
@@ -133,12 +133,6 @@ void vcpu_make_request(struct acrn_vcpu *vcpu, uint16_t eventid)
 	kick_vcpu(vcpu);
 }
 
-/* Return true if an unhandled request is cancelled, false otherwise. */
-bool vcpu_try_cancel_request(struct acrn_vcpu *vcpu, uint16_t eventid)
-{
-	return bitmap_test_and_clear_lock(eventid, &vcpu->arch.pending_req);
-}
-
 /*
  * @retval true when INT is injected to guest.
  * @retval false when otherwise
diff --git a/hypervisor/common/event.c b/hypervisor/common/event.c
index cf3ce7099..eb1fc88a0 100644
--- a/hypervisor/common/event.c
+++ b/hypervisor/common/event.c
@@ -11,7 +11,7 @@
 void init_event(struct sched_event *event)
 {
 	spinlock_init(&event->lock);
-	event->nqueued = 0;
+	event->set = false;
 	event->waiting_thread = NULL;
 }
 
@@ -20,7 +20,7 @@ void reset_event(struct sched_event *event)
 	uint64_t rflag;
 
 	spinlock_irqsave_obtain(&event->lock, &rflag);
-	event->nqueued = 0;
+	event->set = false;
 	event->waiting_thread = NULL;
 	spinlock_irqrestore_release(&event->lock, rflag);
 }
@@ -33,13 +33,13 @@ void wait_event(struct sched_event *event)
 	spinlock_irqsave_obtain(&event->lock, &rflag);
 	ASSERT((event->waiting_thread == NULL), "only support exclusive waiting");
 	event->waiting_thread = sched_get_current(get_pcpu_id());
-	event->nqueued++;
-	while ((event->nqueued > 0) && (event->waiting_thread != NULL)) {
+	while (!event->set && (event->waiting_thread != NULL)) {
 		sleep_thread(event->waiting_thread);
 		spinlock_irqrestore_release(&event->lock, rflag);
 		schedule();
 		spinlock_irqsave_obtain(&event->lock, &rflag);
 	}
+	event->set = false;
 	event->waiting_thread = NULL;
 	spinlock_irqrestore_release(&event->lock, rflag);
 }
@@ -49,7 +49,7 @@ void signal_event(struct sched_event *event)
 	uint64_t rflag;
 
 	spinlock_irqsave_obtain(&event->lock, &rflag);
-	event->nqueued--;
+	event->set = true;
 	if (event->waiting_thread != NULL) {
 		wake_thread(event->waiting_thread);
 	}
diff --git a/hypervisor/include/arch/x86/asm/guest/virq.h b/hypervisor/include/arch/x86/asm/guest/virq.h
index 4e962e455..00cace991 100644
--- a/hypervisor/include/arch/x86/asm/guest/virq.h
+++ b/hypervisor/include/arch/x86/asm/guest/virq.h
@@ -103,7 +103,6 @@ void vcpu_inject_ud(struct acrn_vcpu *vcpu);
  */
 void vcpu_inject_ss(struct acrn_vcpu *vcpu);
 void vcpu_make_request(struct acrn_vcpu *vcpu, uint16_t eventid);
-bool vcpu_try_cancel_request(struct acrn_vcpu *vcpu, uint16_t eventid);
 
 /*
  * @pre vcpu != NULL
diff --git a/hypervisor/include/common/event.h b/hypervisor/include/common/event.h
index 2aba27b32..78430b21e 100644
--- a/hypervisor/include/common/event.h
+++ b/hypervisor/include/common/event.h
@@ -4,7 +4,7 @@
 
 struct sched_event {
 	spinlock_t lock;
-	int8_t nqueued;
+	bool set;
 	struct thread_object* waiting_thread;
 };
 
--
2.25.1





Re: [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

Eddie Dong
 

OK. This is probably for Windows guest where timeout may be a problem.

Please PR with my ack.

-----Original Message-----
From: Liu, Yifan1 <yifan1.liu@...>
Sent: Monday, July 11, 2022 5:32 PM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Subject: RE: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock



-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, July 12, 2022 8:02 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: RE: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using
wbinvd_lock



-----Original Message-----
From: acrn-dev@...
<acrn-dev@...> On Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

From: Yifan Liu <yifan1.liu@...>

As mentioned in the previous patch, wbinvd emulation uses the
vcpu_make_request and signal_event call pair to stall other vcpus.
Because these two calls are not thread-safe, concurrent calls to this
API pair must be avoided.

This patch adds a wbinvd lock to serialize wbinvd emulation.

Signed-off-by: Yifan Liu <yifan1.liu@...>
---
hypervisor/arch/x86/guest/vmexit.c | 2 ++
hypervisor/include/arch/x86/asm/guest/vm.h | 1 +
2 files changed, 3 insertions(+)

diff --git a/hypervisor/arch/x86/guest/vmexit.c
b/hypervisor/arch/x86/guest/vmexit.c
index ca4bdbf2a..fad807ba8 100644
--- a/hypervisor/arch/x86/guest/vmexit.c
+++ b/hypervisor/arch/x86/guest/vmexit.c
@@ -438,6 +438,7 @@ static int32_t wbinvd_vmexit_handler(struct
acrn_vcpu *vcpu)
if (is_rt_vm(vcpu->vm)) {
I am curious why we need to handle differently for RT VM and non-RT VM.
Do you happen to know?
[Liu, Yifan1] Handling differently was introduced in commit
9a15ea82ee659b52ff82a76c51768675b6690ec3.
Before this commit, the code looks like this:

if (has_rt_vm() == false) {
cache_flush_invalidate_all(); } else {
walk_ept_table(vcpu->vm, ept_flush_leaf_page); }

Quoting commit message of 9a15ea82:

hv: pause all other vCPUs in same VM when do wbinvd emulation

Invalidate cache by scanning and flushing the whole guest memory is
inefficient which might cause long execution time for WBINVD emulation.
A long execution in hypervisor might cause a vCPU stuck phenomenon what
impact Windows Guest booting.

This patch introduce a workaround method that pausing all other vCPUs in
the same VM when do wbinvd emulation.

Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@...>
Acked-by: Eddie Dong <eddie.dong@...>


walk_ept_table(vcpu->vm, ept_flush_leaf_page);
} else {
+ spinlock_obtain(&vcpu->vm->wbinvd_lock);
/* Pause other vcpus and let them wait for the wbinvd
completion
*/
foreach_vcpu(i, vcpu->vm, other) {
if (other != vcpu) {
@@ -452,6 +453,7 @@ static int32_t wbinvd_vmexit_handler(struct
acrn_vcpu *vcpu)
signal_event(&other-
events[VCPU_EVENT_SYNC_WBINVD]);
}
}
+ spinlock_release(&vcpu->vm->wbinvd_lock);
}
}

diff --git a/hypervisor/include/arch/x86/asm/guest/vm.h
b/hypervisor/include/arch/x86/asm/guest/vm.h
index c05b9b212..d06300600 100644
--- a/hypervisor/include/arch/x86/asm/guest/vm.h
+++ b/hypervisor/include/arch/x86/asm/guest/vm.h
@@ -149,6 +149,7 @@ struct acrn_vm {
* the initialization depends on the clear BSS section
*/
spinlock_t vm_state_lock;
+ spinlock_t wbinvd_lock; /* Spin-lock used to serialize wbinvd
emulation */
spinlock_t vlapic_mode_lock; /* Spin-lock used to protect
vlapic_mode modifications for a VM */
spinlock_t ept_lock; /* Spin-lock used to protect ept
add/modify/remove for a VM */
spinlock_t emul_mmio_lock; /* Used to protect emulation
mmio_node concurrent access for a VM */
--
2.25.1





Re: [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

Liu, Yifan1
 

-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, July 12, 2022 8:02 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: RE: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock



-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

From: Yifan Liu <yifan1.liu@...>

[...]

@@ -438,6 +438,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
if (is_rt_vm(vcpu->vm)) {
I am curious why we need to handle differently for RT VM and non-RT VM.
Do you happen to know?
[Liu, Yifan1] Handling differently was introduced in commit 9a15ea82ee659b52ff82a76c51768675b6690ec3.
Before this commit, the code looks like this:

if (has_rt_vm() == false) {
cache_flush_invalidate_all();
} else {
walk_ept_table(vcpu->vm, ept_flush_leaf_page);
}

Quoting commit message of 9a15ea82:

hv: pause all other vCPUs in same VM when do wbinvd emulation

Invalidate cache by scanning and flushing the whole guest memory is
inefficient which might cause long execution time for WBINVD emulation.
A long execution in hypervisor might cause a vCPU stuck phenomenon what
impact Windows Guest booting.

This patch introduce a workaround method that pausing all other vCPUs in
the same VM when do wbinvd emulation.

Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@...>
Acked-by: Eddie Dong <eddie.dong@...>


[...]





Re: [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

Liu, Yifan1
 

Yes, and also partially revert commit 10963b04d127c3a321b30467b986f13cbd09db66 (the vcpu_try_cancel_request function), which is no longer needed if we revert the d575edf79a9b8dcde16574e28c213be7013470c7.

-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, July 12, 2022 8:02 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: RE: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

This is to revert the commit d575edf79a9b8dcde16574e28c213be7013470c7, right?

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

From: Yifan Liu <yifan1.liu@...>

[...]





Re: [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

Eddie Dong
 

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

From: Yifan Liu <yifan1.liu@...>

[...]

@@ -438,6 +438,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
if (is_rt_vm(vcpu->vm)) {
I am curious why we need to handle differently for RT VM and non-RT VM.
Do you happen to know?

[...]





Re: [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

Eddie Dong
 

This is to revert the commit d575edf79a9b8dcde16574e28c213be7013470c7, right?

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 1/2] hv: Change sched_event back to boolean-based implementation

From: Yifan Liu <yifan1.liu@...>

[...]





[PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

Liu, Yifan1
 

From: Yifan Liu <yifan1.liu@...>

As mentioned in the previous patch, wbinvd emulation uses the
vcpu_make_request and signal_event call pair to stall other vcpus.
Because these two calls are not thread-safe, concurrent calls to this
API pair must be avoided.

This patch adds a wbinvd lock to serialize wbinvd emulation.

Signed-off-by: Yifan Liu <yifan1.liu@...>
---
hypervisor/arch/x86/guest/vmexit.c | 2 ++
hypervisor/include/arch/x86/asm/guest/vm.h | 1 +
2 files changed, 3 insertions(+)

diff --git a/hypervisor/arch/x86/guest/vmexit.c b/hypervisor/arch/x86/guest/vmexit.c
index ca4bdbf2a..fad807ba8 100644
--- a/hypervisor/arch/x86/guest/vmexit.c
+++ b/hypervisor/arch/x86/guest/vmexit.c
@@ -438,6 +438,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
if (is_rt_vm(vcpu->vm)) {
walk_ept_table(vcpu->vm, ept_flush_leaf_page);
} else {
+ spinlock_obtain(&vcpu->vm->wbinvd_lock);
/* Pause other vcpus and let them wait for the wbinvd completion */
foreach_vcpu(i, vcpu->vm, other) {
if (other != vcpu) {
@@ -452,6 +453,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
signal_event(&other->events[VCPU_EVENT_SYNC_WBINVD]);
}
}
+ spinlock_release(&vcpu->vm->wbinvd_lock);
}
}

diff --git a/hypervisor/include/arch/x86/asm/guest/vm.h b/hypervisor/include/arch/x86/asm/guest/vm.h
index c05b9b212..d06300600 100644
--- a/hypervisor/include/arch/x86/asm/guest/vm.h
+++ b/hypervisor/include/arch/x86/asm/guest/vm.h
@@ -149,6 +149,7 @@ struct acrn_vm {
* the initialization depends on the clear BSS section
*/
spinlock_t vm_state_lock;
+ spinlock_t wbinvd_lock; /* Spin-lock used to serialize wbinvd emulation */
spinlock_t vlapic_mode_lock; /* Spin-lock used to protect vlapic_mode modifications for a VM */
spinlock_t ept_lock; /* Spin-lock used to protect ept add/modify/remove for a VM */
spinlock_t emul_mmio_lock; /* Used to protect emulation mmio_node concurrent access for a VM */
--
2.25.1


[PATCH 1/2] hv: Change sched_event back to boolean-based implementation

Liu, Yifan1
 

From: Yifan Liu <yifan1.liu@...>

Commit d575edf79a9b8dcde16574e28c213be7013470c7 changes the internal
implementation of wait_event and signal_event to use a counter instead
of a boolean value.

The background was:
ACRN uses the vcpu_make_request and signal_event pair to shoot down
other vcpus and make them wait for signals. vcpu_make_request eventually
leads to the target vcpu calling wait_event.

However, the vcpu_make_request/signal_event pair was not thread-safe,
and concurrent calls to this pair of APIs could lead to problems.
One such example is the concurrent wbinvd emulation, where vcpus may
concurrently issue vcpu_make_request/signal_event to synchronize wbinvd
emulation.

Commit d575edf uses a counter in the internal implementation of
wait_event/signal_event to avoid such data races.

However, by using a counter, the wait/signal pair now carries the
semantics of a semaphore rather than an event. A semaphore requires
callers to carefully balance their wait and signal calls, instead of
allowing the same event to be signaled any number of times, which
deviates from the original "event" semantics.
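The boolean-based semantics can be illustrated with a minimal Python model (an illustration, not ACRN code): any number of signals delivered before a wait collapse into a single wakeup, and the wait auto-resets the flag, leaving no surplus count behind as a counter-based semaphore would.

```python
import threading

class BoolEvent:
    """Minimal model of the boolean-based sched_event."""
    def __init__(self):
        self._cond = threading.Condition()
        self._set = False

    def signal(self):
        with self._cond:
            self._set = True          # idempotent: extra signals are no-ops
            self._cond.notify()

    def wait(self):
        with self._cond:
            while not self._set:
                self._cond.wait()
            self._set = False         # auto-reset, like ACRN's wait_event

ev = BoolEvent()
for _ in range(3):
    ev.signal()   # three signals...
ev.wait()         # ...satisfy exactly one wait; no surplus count remains
```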

This patch changes the API implementation back to boolean-based; the
issue of concurrent wbinvd emulation is re-resolved in the next patch.

This also partially reverts commit 10963b04d127c3a321b30467b986f13cbd09db66,
which was introduced because of d575edf.

Signed-off-by: Yifan Liu <yifan1.liu@...>
---
hypervisor/arch/x86/guest/lock_instr_emul.c | 20 +-------------------
hypervisor/arch/x86/guest/virq.c | 6 ------
hypervisor/common/event.c | 10 +++++-----
hypervisor/include/arch/x86/asm/guest/virq.h | 1 -
hypervisor/include/common/event.h | 2 +-
5 files changed, 7 insertions(+), 32 deletions(-)

diff --git a/hypervisor/arch/x86/guest/lock_instr_emul.c b/hypervisor/arch/x86/guest/lock_instr_emul.c
index 8729e3369..10947a424 100644
--- a/hypervisor/arch/x86/guest/lock_instr_emul.c
+++ b/hypervisor/arch/x86/guest/lock_instr_emul.c
@@ -60,25 +60,7 @@ void vcpu_complete_lock_instr_emulation(struct acrn_vcpu *cur_vcpu)
if (cur_vcpu->vm->hw.created_vcpus > 1U) {
foreach_vcpu(i, cur_vcpu->vm, other) {
if ((other != cur_vcpu) && (other->state == VCPU_RUNNING)) {
- /*
- * When we vcpu_make_request above, there is a time window between kick_vcpu and
- * the target vcpu actually does acrn_handle_pending_request (and eventually wait_event).
- * It is possible that the issuing vcpu has already completed lock emulation and
- * calls signal_event, while the target vcpu has not yet reaching acrn_handle_pending_request.
- *
- * This causes problem: Say we have vcpuA make request to vcpuB.
- * During the above said time window, if A does kick_lock/complete_lock pair multiple times,
- * or some other vcpu C makes request to B after A releases the vm lock below, then the bit
- * in B's pending_req will be cleared only once, and B will call wait_event only once,
- * while other vcpu may call signal_event many times to B. I.e., one bit is not enough
- * to cache multiple requests.
- *
- * To work this around, we try to cancel the request (clear the bit in pending_req) if it
- * is unhandled, and signal_event only when the request was already handled.
- */
- if (!vcpu_try_cancel_request(other, ACRN_REQUEST_SPLIT_LOCK)) {
- signal_event(&other->events[VCPU_EVENT_SPLIT_LOCK]);
- }
+ signal_event(&other->events[VCPU_EVENT_SPLIT_LOCK]);
}
}

diff --git a/hypervisor/arch/x86/guest/virq.c b/hypervisor/arch/x86/guest/virq.c
index 02c3c5dd9..014094014 100644
--- a/hypervisor/arch/x86/guest/virq.c
+++ b/hypervisor/arch/x86/guest/virq.c
@@ -133,12 +133,6 @@ void vcpu_make_request(struct acrn_vcpu *vcpu, uint16_t eventid)
kick_vcpu(vcpu);
}

-/* Return true if an unhandled request is cancelled, false otherwise. */
-bool vcpu_try_cancel_request(struct acrn_vcpu *vcpu, uint16_t eventid)
-{
- return bitmap_test_and_clear_lock(eventid, &vcpu->arch.pending_req);
-}
-
/*
* @retval true when INT is injected to guest.
* @retval false when otherwise
diff --git a/hypervisor/common/event.c b/hypervisor/common/event.c
index cf3ce7099..eb1fc88a0 100644
--- a/hypervisor/common/event.c
+++ b/hypervisor/common/event.c
@@ -11,7 +11,7 @@
void init_event(struct sched_event *event)
{
spinlock_init(&event->lock);
- event->nqueued = 0;
+ event->set = false;
event->waiting_thread = NULL;
}

@@ -20,7 +20,7 @@ void reset_event(struct sched_event *event)
uint64_t rflag;

spinlock_irqsave_obtain(&event->lock, &rflag);
- event->nqueued = 0;
+ event->set = false;
event->waiting_thread = NULL;
spinlock_irqrestore_release(&event->lock, rflag);
}
@@ -33,13 +33,13 @@ void wait_event(struct sched_event *event)
spinlock_irqsave_obtain(&event->lock, &rflag);
ASSERT((event->waiting_thread == NULL), "only support exclusive waiting");
event->waiting_thread = sched_get_current(get_pcpu_id());
- event->nqueued++;
- while ((event->nqueued > 0) && (event->waiting_thread != NULL)) {
+ while (!event->set && (event->waiting_thread != NULL)) {
sleep_thread(event->waiting_thread);
spinlock_irqrestore_release(&event->lock, rflag);
schedule();
spinlock_irqsave_obtain(&event->lock, &rflag);
}
+ event->set = false;
event->waiting_thread = NULL;
spinlock_irqrestore_release(&event->lock, rflag);
}
@@ -49,7 +49,7 @@ void signal_event(struct sched_event *event)
uint64_t rflag;

spinlock_irqsave_obtain(&event->lock, &rflag);
- event->nqueued--;
+ event->set = true;
if (event->waiting_thread != NULL) {
wake_thread(event->waiting_thread);
}
diff --git a/hypervisor/include/arch/x86/asm/guest/virq.h b/hypervisor/include/arch/x86/asm/guest/virq.h
index 4e962e455..00cace991 100644
--- a/hypervisor/include/arch/x86/asm/guest/virq.h
+++ b/hypervisor/include/arch/x86/asm/guest/virq.h
@@ -103,7 +103,6 @@ void vcpu_inject_ud(struct acrn_vcpu *vcpu);
*/
void vcpu_inject_ss(struct acrn_vcpu *vcpu);
void vcpu_make_request(struct acrn_vcpu *vcpu, uint16_t eventid);
-bool vcpu_try_cancel_request(struct acrn_vcpu *vcpu, uint16_t eventid);

/*
* @pre vcpu != NULL
diff --git a/hypervisor/include/common/event.h b/hypervisor/include/common/event.h
index 2aba27b32..78430b21e 100644
--- a/hypervisor/include/common/event.h
+++ b/hypervisor/include/common/event.h
@@ -4,7 +4,7 @@

struct sched_event {
spinlock_t lock;
- int8_t nqueued;
+ bool set;
struct thread_object* waiting_thread;
};

--
2.25.1


[PATCH] config_tools: resolve incompatibility with elementpath 2.5.3

Junjie Mao
 

This patch adds more robust checks to the customized function
`number-of-clos-id-needed`, ensuring that a given node is a concrete
element before the function passes it to `get_policy_list`. This resolves
the incompatibility with elementpath 2.5.3 reported in v3.0.

Tracked-On: #7867
Signed-off-by: Junjie Mao <junjie.mao@...>
---
.../scenario_config/elementpath_overlay.py | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/misc/config_tools/scenario_config/elementpath_overlay.py b/misc/config_tools/scenario_config/elementpath_overlay.py
index 3e7081896..3454d8146 100644
--- a/misc/config_tools/scenario_config/elementpath_overlay.py
+++ b/misc/config_tools/scenario_config/elementpath_overlay.py
@@ -96,10 +96,17 @@ def select_duplicate_values_function(self, context=None):
@method(function('number-of-clos-id-needed', nargs=1))
def evaluate_number_of_clos_id_needed(self, context=None):
op = self.get_argument(context, index=0)
- try:
- return len(rdt.get_policy_list(op.elem)) if op else 0
- except AttributeError as err:
- raise self.error('XPTY0004', err)
+ if op is not None:
+ if isinstance(op, elementpath.TypedElement):
+ op = op.elem
+
+ # This function may be invoked when the xmlschema library parses the data check schemas, in which case `op` will
+ # be an object of class Xsd11Element. Only attempt to calculate the needed CLOS IDs when a real acrn-config node
+ # is given.
+ if hasattr(op, "xpath"):
+ return len(rdt.get_policy_list(op))
+
+ return 0

###
# Collection of counter examples
--
2.30.2


Re: [PATCH] misc: refine slot issue of launch script

chenli.wei
 

On 7/8/2022 10:52 PM, Junjie Mao wrote:
Chenli Wei <chenli.wei@...> writes:

From: Chenli Wei <chenli.wei@...>

The current launch script allocates the BDF for ivshmem by itself and
does not get the BDF from the scenario.

This patch refines the above logic and generates the slot from user settings.

Signed-off-by: Chenli Wei <chenli.wei@...>
---
misc/config_tools/launch_config/launch_cfg_gen.py | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/misc/config_tools/launch_config/launch_cfg_gen.py b/misc/config_tools/launch_config/launch_cfg_gen.py
index 910b2e6df..b8950ce08 100755
--- a/misc/config_tools/launch_config/launch_cfg_gen.py
+++ b/misc/config_tools/launch_config/launch_cfg_gen.py
@@ -263,10 +263,20 @@ def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id
#ivshmem and vuart own reserved slots which setting by user
for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Device Model' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ if vbdf is not None:
+ slot = int((vbdf.split(":")[1].split(".")[0]), 16)
+ else:
+ slot = None
This piece of logic which extracts the slot number from a BDF specified
in the scenario XML appears three times now. Please abstract it as a
helper method to avoid duplicated code.
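The suggested helper might look like this (a hypothetical sketch; the name `get_slot` and its placement are assumptions, not part of the patch):

```python
def get_slot(vbdf):
    # Extract the device/slot number from a "bus:device.function" string
    # such as "00:08.0"; return None when no VBDF is configured.
    if vbdf is None:
        return None
    return int(vbdf.split(":")[1].split(".")[0], 16)

# e.g. get_slot("00:08.0") returns 8, get_slot(None) returns None
```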

---
Best Regards
Junjie Mao
OK, I will refine it.
+ script.add_virtual_device("ivshmem", slot, options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Hypervisor' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ if vbdf is not None:
+ slot = int((vbdf.split(":")[1].split(".")[0]), 16)
+ else:
+ slot = None
+ script.add_virtual_device("ivshmem", slot, options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
if eval_xpath(vm_scenario_etree, ".//console_vuart/text()") == "PCI":
script.add_virtual_device("uart", options="vuart_idx:0")


Re: [PATCH] config_tools: Add core type infomation to pCPU

Junjie Mao
 

"Zhao, Yuanyuan" <yuanyuan.zhao@...> writes:

On 2022/7/8 1:06, Junjie Mao wrote:
Yuanyuan Zhao <yuanyuan.zhao@...> writes:

Prompt user wheter this pCPU is P-core or E-core when
set the CPU affinity for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8 ++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'" :uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' && getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none': (this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[index].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):
return form_schemas
+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text: thread.core_type.string for thread in threads}
+ return core_type
"core_type" may be empty on old platforms that do not support CPUID leaf
0x1a (in which case the CPU is unlikely to have hybrid cores). Is that
OK from UI point of view?
It will be masked as 'P-core'.
That is where things get incorrect. Cores of an Atom-based platform will
be reported as "P-core" as well.
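One way to avoid mislabeling is an explicit fallback (a sketch; the `"Core"`/`"Atom"` strings follow the board XML's core_type values quoted earlier in the thread, while the `"N/A"` label is an assumption):

```python
def core_type_label(core_type):
    # Map the board XML's core_type string to a UI label. Platforms
    # without CPUID leaf 0x1A report no core_type at all, so an explicit
    # fallback avoids tagging Atom-only parts as "P-core".
    if core_type == "Core":
        return "P-core"
    if core_type == "Atom":
        return "E-core"
    return "N/A"  # unknown / non-hybrid platform
```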

--
Best Regards
Junjie Mao


Re: [PATCH] config_tools: Add core type infomation to pCPU

Junjie Mao
 

"Geoffroy Van Cutsem" <geoffroy.vancutsem@...> writes:

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Junjie Mao
Sent: Thursday, July 7, 2022 7:07 pm
To: Yuanyuan Zhao <yuanyuan.zhao@...>
Cc: acrn-dev@...; Zhao, Yuanyuan
<yuanyuan.zhao@...>
Subject: Re: [acrn-dev] [PATCH] config_tools: Add core type infomation to
pCPU

Yuanyuan Zhao <yuanyuan.zhao@...> writes:

Prompt user wheter this pCPU is P-core or E-core when set the CPU
affinity for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8
++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
b/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
---
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/C
+++ onfig/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'"
:uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' &&
getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none':
(this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return
+
window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[
+ index].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py
b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):

return form_schemas

+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text: thread.core_type.string
for thread in threads}
+ return core_type
"core_type" may be empty on old platforms that do not support CPUID leaf
0x1a (in which case the CPU is unlikely to have hybrid cores). Is that OK from
UI point of view?
Thinking about that some more... and I am not sure I understand the purpose of this new UI element. Are we asking the user if the pCPU he wants to assign to a particular VM is an Atom or Core processor?

The way this looks today is that you select from a drop-down menu what pCPU ID
you want to assign to a VM. If that's still the case on master (which I haven't
checked), it does not make sense to ask the user which type of processor it is,
it should be known as it's part of the platform. What we should/could add though
is a hint next to the pCPU ID (in brackets for example) if that particular pCPU
ID is the one of an Atom or Core core.
Makes sense to me.

@Yuanyuan Can we indicate core types by changing the names of choices in
the pCPU ID widget?

--
Best Regards
Junjie Mao


--
Best Regards
Junjie Mao


def get_cat_info(soup):
threads = soup.select('core thread')
@@ -126,7 +130,8 @@ def get_board_info(board, path):
'content': board,
'CAT_INFO': get_cat_info(soup),
'BIOS_INFO': soup.select_one('BIOS_INFO').text,
- 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text
+ 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text,
+ 'CORE_TYPE': get_core_type(soup)
}
return result







Re: [PATCH] config_tools: Add core type infomation to pCPU

Zhao, Yuanyuan
 

On 2022/7/8 1:06, Junjie Mao wrote:
Yuanyuan Zhao <yuanyuan.zhao@...> writes:

Prompt user wheter this pCPU is P-core or E-core when
set the CPU affinity for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8 ++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'" :uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' && getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none': (this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[index].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):
return form_schemas
+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text: thread.core_type.string for thread in threads}
+ return core_type
"core_type" may be empty on old platforms that do not support CPUID leaf
0x1a (in which case the CPU is unlikely to have hybrid cores). Is that
OK from UI point of view?
It will be masked as 'P-core'.


Re: [PATCH] misc: refine slot issue of launch script

Junjie Mao
 

Chenli Wei <chenli.wei@...> writes:

From: Chenli Wei <chenli.wei@...>

The current launch script allocates the BDF for ivshmem by itself and
does not get the BDF from the scenario.

This patch refines the above logic and generates the slot from user settings.

Signed-off-by: Chenli Wei <chenli.wei@...>
---
misc/config_tools/launch_config/launch_cfg_gen.py | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/misc/config_tools/launch_config/launch_cfg_gen.py b/misc/config_tools/launch_config/launch_cfg_gen.py
index 910b2e6df..b8950ce08 100755
--- a/misc/config_tools/launch_config/launch_cfg_gen.py
+++ b/misc/config_tools/launch_config/launch_cfg_gen.py
@@ -263,10 +263,20 @@ def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id
#ivshmem and vuart own reserved slots which setting by user

for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Device Model' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ if vbdf is not None:
+ slot = int((vbdf.split(":")[1].split(".")[0]), 16)
+ else:
+ slot = None
This piece of logic which extracts the slot number from a BDF specified
in the scenario XML appears three times now. Please abstract it as a
helper method to avoid duplicated code.

---
Best Regards
Junjie Mao

+ script.add_virtual_device("ivshmem", slot, options=f"dm:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")

for ivshmem in eval_xpath_all(vm_scenario_etree, f"//IVSHMEM_REGION[PROVIDED_BY = 'Hypervisor' and .//VM_NAME = '{vm_name}']"):
- script.add_virtual_device("ivshmem", options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")
+ vbdf = eval_xpath(ivshmem, f".//VBDF/text()")
+ if vbdf is not None:
+ slot = int((vbdf.split(":")[1].split(".")[0]), 16)
+ else:
+ slot = None
+ script.add_virtual_device("ivshmem", slot, options=f"hv:/{ivshmem.find('NAME').text},{ivshmem.find('IVSHMEM_SIZE').text}")

if eval_xpath(vm_scenario_etree, ".//console_vuart/text()") == "PCI":
script.add_virtual_device("uart", options="vuart_idx:0")


Re: [PATCH] config_tools: Add core type infomation to pCPU

Geoffroy Van Cutsem
 

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Junjie Mao
Sent: Thursday, July 7, 2022 7:07 pm
To: Yuanyuan Zhao <yuanyuan.zhao@...>
Cc: acrn-dev@...; Zhao, Yuanyuan
<yuanyuan.zhao@...>
Subject: Re: [acrn-dev] [PATCH] config_tools: Add core type infomation to
pCPU

Yuanyuan Zhao <yuanyuan.zhao@...> writes:

Prompt user wheter this pCPU is P-core or E-core when set the CPU
affinity for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8
++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
b/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
---
a/misc/config_tools/configurator/packages/configurator/src/pages/Confi
g/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/C
+++ onfig/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'"
:uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' &&
getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none':
(this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return
+
window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[
+ index].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py
b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):

return form_schemas

+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text: thread.core_type.string
for thread in threads}
+ return core_type
"core_type" may be empty on old platforms that do not support CPUID leaf
0x1a (in which case the CPU is unlikely to have hybrid cores). Is that OK from
UI point of view?
Thinking about that some more... and I am not sure I understand the purpose of this new UI element. Are we asking the user if the pCPU he wants to assign to a particular VM is an Atom or Core processor?

The way this looks today is that you select from a drop-down menu what pCPU ID you want to assign to a VM. If that's still the case on master (which I haven't checked), it does not make sense to ask the user which type of processor it is, it should be known as it's part of the platform. What we should/could add though is a hint next to the pCPU ID (in brackets for example) if that particular pCPU ID is the one of an Atom or Core core.


--
Best Regards
Junjie Mao


def get_cat_info(soup):
threads = soup.select('core thread')
@@ -126,7 +130,8 @@ def get_board_info(board, path):
'content': board,
'CAT_INFO': get_cat_info(soup),
'BIOS_INFO': soup.select_one('BIOS_INFO').text,
- 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text
+ 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text,
+ 'CORE_TYPE': get_core_type(soup)
}
return result



Re: [PATCH] config_tools: Add core type infomation to pCPU

Geoffroy Van Cutsem
 

-----Original Message-----
From: Zhao, Yuanyuan <yuanyuan.zhao@...>
Sent: Thursday, July 7, 2022 3:40 am
To: VanCutsem, Geoffroy <geoffroy.vancutsem@...>; acrn-
dev@...; Mao, Junjie <junjie.mao@...>
Cc: Zhao, Yuanyuan <yuanyuan.zhao@...>
Subject: Re: [acrn-dev] [PATCH] config_tools: Add core type infomation to
pCPU

On 2022/7/6 17:49, VanCutsem, Geoffroy wrote:
What are those P-core and E-core?
P-core for performance core.
E-core for efficiency core such as Atom.
Or do you have a more popular expression?
I don't. Is that an expression that is already in use?

I looked up the Intel SDM and found this (which I think is what this is about):
Hybrid Information Enumeration Leaf (EAX = 1AH, ECX = 0)
1AH EAX Enumerates the native model ID and core type.
Bits 31-24: Core type
10H: Reserved
20H: Intel Atom®
30H: Reserved
40H: Intel® Core™
Bits 23-0: Native model ID of the core. The core-type and native mode ID can be used to uniquely identify
the microarchitecture of the core. This native model ID is not unique across core types, and not related to
the model ID reported in CPUID leaf 01H, and does not identify the SOC

Should we rather call these by their official names, i.e. Atom and Core?
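Decoding the leaf described above is mechanical; a Python sketch of the bit layout (the EAX values in the examples are hypothetical):

```python
# Decode the core type from CPUID leaf 0x1A EAX, per the SDM excerpt above:
# bits 31-24 hold the core type, bits 23-0 the native model ID.
CORE_TYPES = {0x20: "Intel Atom", 0x40: "Intel Core"}

def decode_leaf_1a(eax):
    core_type = (eax >> 24) & 0xFF
    native_model_id = eax & 0xFFFFFF
    return CORE_TYPES.get(core_type, "Reserved"), native_model_id

# e.g. decode_leaf_1a(0x40000012) returns ("Intel Core", 0x12)
```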


-----Original Message-----
From: acrn-dev@... <acrn-dev@...>
On Behalf Of Zhao, Yuanyuan
Sent: Wednesday, July 6, 2022 3:06 am
To: Mao, Junjie <junjie.mao@...>;
acrn-dev@...
Cc: Zhao, Yuanyuan <yuanyuan.zhao@...>; Yuanyuan Zhao
<yuanyuan.zhao@...>
Subject: [acrn-dev] [PATCH] config_tools: Add core type infomation to
pCPU

Prompt user wheter this pCPU is P-core or E-core when set the CPU
affinity
wheter -> whether
OK.

for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8
++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git
a/misc/config_tools/configurator/packages/configurator/src/pages/Conf
ig/C onfigForm/CustomWidget/cpu_affinity.vue
b/misc/config_tools/configurator/packages/configurator/src/pages/Conf
ig/C onfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
---
a/misc/config_tools/configurator/packages/configurator/src/pages/Conf
ig/C onfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/
+++ Con fig/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'"
:uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' &&
getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none':
(this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return
+
window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[i
n
+ dex].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py
b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):

return form_schemas

+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text:
+thread.core_type.string
for thread in threads}
+ return core_type

def get_cat_info(soup):
threads = soup.select('core thread')
@@ -126,7 +130,8 @@ def get_board_info(board, path):
'content': board,
'CAT_INFO': get_cat_info(soup),
'BIOS_INFO': soup.select_one('BIOS_INFO').text,
- 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text
+ 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text,
+ 'CORE_TYPE': get_core_type(soup)
}
return result

--
2.25.1





Re: [PATCH v3 1/4] dm: refine the detection of "iasl" utility

Yu Wang
 

Acked-by: Wang, Yu1 <yu1.wang@...>

On Thu, Jul 07, 2022 at 02:19:23PM +0800, Shiqing Gao wrote:
At run time (on the *target* machine), acrn-dm depends on "iasl" to build
the ACPI tables for post-launched VMs.

This patch does:
- remove the dependency on ASL_COMPILER, which would only be used at build time
- add a new acrn-dm parameter "--iasl <iasl_compiler_path>"
If "--iasl <iasl_compiler_path>" is specified as the acrn-dm parameter,
acrn-dm uses <iasl_compiler_path> as the path to the "iasl" compiler;
otherwise, "which iasl" is used to detect the "iasl" compiler.

If "iasl" is not found at run time, refuse to launch the post-launched VM
and exit directly.
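The lookup order can be sketched in Python (`shutil.which` is the analogue of the `which iasl` fallback used by the C code; the function name here is an assumption for illustration):

```python
import shutil

def resolve_iasl(cli_path=None):
    # Prefer an explicitly given --iasl path; otherwise search PATH.
    # If neither yields a compiler, refuse to launch the VM.
    if cli_path:
        return cli_path
    found = shutil.which("iasl")
    if found is None:
        raise SystemExit("iasl not found; refusing to launch the post-launched VM")
    return found
```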

v2 -> v3:
- use "strlen" rather than "strncmp" to check whether asl_compiler
has been set or not

v1 -> v2:
- remove "iasl_param" and "with_iasl_param" to simplify the logic

Signed-off-by: Victor Sun <victor.sun@...>
Signed-off-by: Shiqing Gao <shiqing.gao@...>
---
Makefile | 2 +-
devicemodel/Makefile | 4 ---
devicemodel/core/main.c | 16 +++++++++-
devicemodel/hw/platform/acpi/acpi.c | 64 +++++++++++++++++++++++++++++++++----
devicemodel/include/acpi.h | 3 ++
5 files changed, 77 insertions(+), 12 deletions(-)

diff --git a/Makefile b/Makefile
index 3872b2c..b0cdf47 100644
--- a/Makefile
+++ b/Makefile
@@ -132,7 +132,7 @@ hvapplydiffconfig:
@$(MAKE) applydiffconfig $(HV_MAKEOPTS) PATCH=$(abspath $(PATCH))

devicemodel: tools
- $(MAKE) -C $(T)/devicemodel DM_OBJDIR=$(DM_OUT) DM_BUILD_VERSION=$(BUILD_VERSION) DM_BUILD_TAG=$(BUILD_TAG) DM_ASL_COMPILER=$(ASL_COMPILER) TOOLS_OUT=$(TOOLS_OUT) RELEASE=$(RELEASE)
+ $(MAKE) -C $(T)/devicemodel DM_OBJDIR=$(DM_OUT) DM_BUILD_VERSION=$(BUILD_VERSION) DM_BUILD_TAG=$(BUILD_TAG) TOOLS_OUT=$(TOOLS_OUT) RELEASE=$(RELEASE)

tools:
mkdir -p $(TOOLS_OUT)
diff --git a/devicemodel/Makefile b/devicemodel/Makefile
index dd28db6..fc69e1d 100644
--- a/devicemodel/Makefile
+++ b/devicemodel/Makefile
@@ -45,10 +45,6 @@ CFLAGS += -I$(SYSROOT)/usr/include/SDL2
CFLAGS += -I$(SYSROOT)/usr/include/EGL
CFLAGS += -I$(SYSROOT)/usr/include/GLES2

-ifneq (, $(DM_ASL_COMPILER))
-CFLAGS += -DASL_COMPILER=\"$(DM_ASL_COMPILER)\"
-endif
-
GCC_MAJOR=$(shell echo __GNUC__ | $(CC) -E -x c - | tail -n 1)
GCC_MINOR=$(shell echo __GNUC_MINOR__ | $(CC) -E -x c - | tail -n 1)

diff --git a/devicemodel/core/main.c b/devicemodel/core/main.c
index 0e3d77b..7a3b46e 100644
--- a/devicemodel/core/main.c
+++ b/devicemodel/core/main.c
@@ -146,6 +146,7 @@ usage(int code)
" %*s [-k kernel_image_path]\n"
" %*s [-l lpc] [-m mem] [-r ramdisk_image_path]\n"
" %*s [-s pci] [--ovmf ovmf_file_path]\n"
+ " %*s [--iasl iasl_compiler_path]\n"
" %*s [--enable_trusty] [--intr_monitor param_setting]\n"
" %*s [--acpidev_pt HID] [--mmiodev_pt MMIO_Regions]\n"
" %*s [--vtpm2 sock_path] [--virtio_poll interval]\n"
@@ -162,6 +163,7 @@ usage(int code)
" -s: <slot,driver,configinfo> PCI slot config\n"
" -v: version\n"
" --ovmf: ovmf file path\n"
+ " --iasl: iasl compiler path\n"
" --ssram: Configure Software SRAM parameters\n"
" --cpu_affinity: list of Service VM vCPUs assigned to this User VM, the vCPUs are"
" identified by their local APIC IDs.\n"
@@ -185,7 +187,7 @@ usage(int code)
(int)strnlen(progname, PATH_MAX), "", (int)strnlen(progname, PATH_MAX), "",
(int)strnlen(progname, PATH_MAX), "", (int)strnlen(progname, PATH_MAX), "",
(int)strnlen(progname, PATH_MAX), "", (int)strnlen(progname, PATH_MAX), "",
- (int)strnlen(progname, PATH_MAX), "");
+ (int)strnlen(progname, PATH_MAX), "", (int)strnlen(progname, PATH_MAX), "");

exit(code);
}
@@ -762,6 +764,7 @@ sig_handler_term(int signo)
enum {
CMD_OPT_VSBL = 1000,
CMD_OPT_OVMF,
+ CMD_OPT_IASL,
CMD_OPT_CPU_AFFINITY,
CMD_OPT_PART_INFO,
CMD_OPT_TRUSTY_ENABLE,
@@ -806,6 +809,7 @@ static struct option long_options[] = {
#endif
{"vsbl", required_argument, 0, CMD_OPT_VSBL},
{"ovmf", required_argument, 0, CMD_OPT_OVMF},
+ {"iasl", required_argument, 0, CMD_OPT_IASL},
{"cpu_affinity", required_argument, 0, CMD_OPT_CPU_AFFINITY},
{"part_info", required_argument, 0, CMD_OPT_PART_INFO},
{"enable_trusty", no_argument, 0,
@@ -926,6 +930,10 @@ main(int argc, char *argv[])
errx(EX_USAGE, "invalid ovmf param %s", optarg);
skip_pci_mem64bar_workaround = true;
break;
+ case CMD_OPT_IASL:
+ if (acrn_parse_iasl(optarg) != 0)
+ errx(EX_USAGE, "invalid iasl param %s", optarg);
+ break;
case CMD_OPT_CPU_AFFINITY:
if (acrn_parse_cpu_affinity(optarg) != 0)
errx(EX_USAGE, "invalid pcpu param %s", optarg);
@@ -1020,6 +1028,12 @@ main(int argc, char *argv[])
usage(1);
}
}
+
+ if (get_iasl_compiler() != 0) {
+ pr_err("Cannot find Intel ACPI ASL compiler tool \"iasl\".\n");
+ exit(1);
+ }
+
argc -= optind;
argv += optind;

diff --git a/devicemodel/hw/platform/acpi/acpi.c b/devicemodel/hw/platform/acpi/acpi.c
index 0ba25e8..a0a5fe5 100644
--- a/devicemodel/hw/platform/acpi/acpi.c
+++ b/devicemodel/hw/platform/acpi/acpi.c
@@ -95,9 +95,8 @@

#define ASL_TEMPLATE "dm.XXXXXXX"
#define ASL_SUFFIX ".aml"
-#ifndef ASL_COMPILER
-#define ASL_COMPILER "/usr/sbin/iasl"
-#endif
+
+static char asl_compiler[MAXPATHLEN] = {0};

uint64_t audio_nhlt_len = 0;

@@ -972,7 +971,7 @@ basl_compile(struct vmctx *ctx,
uint64_t offset)
{
struct basl_fio io[2];
- static char iaslbuf[3*MAXPATHLEN + 10];
+ static char iaslbuf[4*MAXPATHLEN + 10];
int err;

err = basl_start(&io[0], &io[1]);
@@ -990,12 +989,12 @@ basl_compile(struct vmctx *ctx,
if (basl_verbose_iasl)
snprintf(iaslbuf, sizeof(iaslbuf),
"%s -p %s %s",
- ASL_COMPILER,
+ asl_compiler,
io[1].f_name, io[0].f_name);
else
snprintf(iaslbuf, sizeof(iaslbuf),
"/bin/sh -c \"%s -p %s %s\" 1> /dev/null",
- ASL_COMPILER,
+ asl_compiler,
io[1].f_name, io[0].f_name);

err = system(iaslbuf);
@@ -1111,6 +1110,59 @@ get_acpi_table_length(void)
}

int
+get_default_iasl_compiler(void)
+{
+ int ret = -1;
+ char c;
+ int i = 0;
+ FILE *fd_iasl = popen("which iasl", "r");
+
+ if (fd_iasl != NULL)
+ {
+ while (i < (MAXPATHLEN - 1)) {
+
+ c = fgetc(fd_iasl);
+ if ((c == EOF) || (c == '\n') || (c == '\r') || (c == 0)) {
+ break;
+ }
+
+ asl_compiler[i++] = c;
+ }
+ if (strlen(asl_compiler) > 0) {
+ pr_info("Found default iasl path: %s\n", asl_compiler);
+ ret = 0;
+ }
+ pclose(fd_iasl);
+ }
+ return ret;
+}
+
+int
+acrn_parse_iasl(char *arg)
+{
+ size_t len = strnlen(arg, MAXPATHLEN);
+
+ if (len < MAXPATHLEN) {
+ strncpy(asl_compiler, arg, len + 1);
+ pr_info("iasl path is given by --iasl at run time: %s\n", asl_compiler);
+ return 0;
+ } else
+ return -1;
+}
+
+int
+get_iasl_compiler(void)
+{
+ int ret = 0;
+
+ if(strlen(asl_compiler) == 0) {
+ ret = get_default_iasl_compiler();
+ }
+
+ return ret;
+}
+
+int
acpi_build(struct vmctx *ctx, int ncpu)
{
#define RTCT_BUF_LEN 0x200
diff --git a/devicemodel/include/acpi.h b/devicemodel/include/acpi.h
index 52130d2..0f46e38 100644
--- a/devicemodel/include/acpi.h
+++ b/devicemodel/include/acpi.h
@@ -121,4 +121,7 @@ int lapicid_from_pcpuid(int pcpu_id);
int lapic_to_pcpu(int lapic);

int parse_madt(void);
+int acrn_parse_iasl(char *arg);
+int get_iasl_compiler(void);
+
#endif /* _ACPI_H_ */
--
2.7.4






Re: [PATCH] config_tools: Add core type information to pCPU

Junjie Mao
 

Yuanyuan Zhao <yuanyuan.zhao@...> writes:

Prompt the user whether this pCPU is a P-core or an E-core when
setting the CPU affinity for VMs.

Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@...>
---
.../pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue | 8 ++++++++
misc/config_tools/configurator/pyodide/loadBoard.py | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
index c796a2d56..b14e65c85 100644
--- a/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
+++ b/misc/config_tools/configurator/packages/configurator/src/pages/Config/ConfigForm/CustomWidget/cpu_affinity.vue
@@ -8,6 +8,7 @@
<b-col></b-col>
<b-col>Virtual CPU ID</b-col>
<b-col class="ps-5">Real-time vCPU:</b-col>
+ <b-col class="ps-5">Core type:</b-col>
<b-col></b-col>
</b-row>
<b-row class="align-items-center"
@@ -31,6 +32,10 @@
<b-col class="p-3">
<b-form-checkbox v-model="cpu.real_time_vcpu" :value="'y'" :uncheckedValue="'n'" :disabled="!isRTVM"/>
</b-col>
+ <b-col class="p-3">
+ <h2 v-if="getPCPUtype(index)!=='Core' && getPCPUtype(index)!=='Reserved'"> E-core </h2>
+ <h1 v-else> P-core </h1>
+ </b-col>
<b-col>
<div class="ToolSet">
<div @click="addPCPU(index)" :class="{'d-none': (this.defaultVal.pcpu.length-1)!==index}">
@@ -135,6 +140,9 @@ export default {
return
}
this.defaultVal.pcpu.splice(index, 1)
+ },
+ getPCPUtype(index){
+ return window.getBoardData().CORE_TYPE[this.rootFormData.cpu_affinity.pcpu[index].pcpu_id]
}
}
}
diff --git a/misc/config_tools/configurator/pyodide/loadBoard.py b/misc/config_tools/configurator/pyodide/loadBoard.py
index 3843381c0..4d150dc1f 100644
--- a/misc/config_tools/configurator/pyodide/loadBoard.py
+++ b/misc/config_tools/configurator/pyodide/loadBoard.py
@@ -77,6 +77,10 @@ def get_dynamic_scenario(board):

return form_schemas

+def get_core_type(soup):
+ threads = soup.select('core thread')
+ core_type = {thread.select_one('cpu_id').text: thread.core_type.string for thread in threads}
+ return core_type
"core_type" may be empty on old platforms that do not support CPUID leaf
0x1a (in which case the CPU is unlikely to have hybrid cores). Is that
OK from a UI point of view?
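One defensive variant of the lookup can be sketched with the standard-library ElementTree instead of BeautifulSoup, purely to keep the example self-contained; the element names follow the board XML the patch consumes, and the `None` fallback is an illustrative choice, not the patch's current behavior:

```python
import xml.etree.ElementTree as ET

# a minimal board-XML fragment in the shape loadBoard.py consumes;
# the last thread models an old platform with an empty core_type
BOARD_XML = """\
<board><core>
  <thread><cpu_id>0</cpu_id><core_type>Core</core_type></thread>
  <thread><cpu_id>4</cpu_id><core_type>Atom</core_type></thread>
  <thread><cpu_id>8</cpu_id><core_type/></thread>
</core></board>"""

def get_core_type(root):
    # cpu_id -> core_type; None when the board reports no hybrid info
    return {t.findtext("cpu_id"): (t.findtext("core_type") or None)
            for t in root.iter("thread")}
```

The UI side would then need to render the `None` case as something other than P-core or E-core.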

--
Best Regards
Junjie Mao


def get_cat_info(soup):
threads = soup.select('core thread')
@@ -126,7 +130,8 @@ def get_board_info(board, path):
'content': board,
'CAT_INFO': get_cat_info(soup),
'BIOS_INFO': soup.select_one('BIOS_INFO').text,
- 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text
+ 'BASE_BOARD_INFO': soup.select_one('BASE_BOARD_INFO').text,
+ 'CORE_TYPE': get_core_type(soup)
}
return result


[PATCH] hv: tlfs: add tlfs TSC freq MSR support

Jian Jun Chen
 

The TLFS defines two vMSRs that a Windows guest can use to get the
TSC/APIC frequencies from the hypervisor. This patch adds support
for the HV_X64_MSR_TSC_FREQUENCY/HV_X64_MSR_APIC_FREQUENCY vMSRs,
whose availability is exposed by CPUID.0x40000003:EAX[bit11] and EDX[bit8].

vTSC support for Linux will be implemented in the Linux kernel as a
separate patch.

Tracked-On: #7876
Signed-off-by: Jian Jun Chen <jian.jun.chen@...>
---
hypervisor/arch/x86/guest/hyperv.c | 18 ++++++++++++++++--
hypervisor/arch/x86/guest/vmsr.c | 4 ++++
hypervisor/include/arch/x86/asm/guest/hyperv.h | 2 ++
3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/x86/guest/hyperv.c b/hypervisor/arch/x86/guest/hyperv.c
index fc68e5398..3022317fa 100644
--- a/hypervisor/arch/x86/guest/hyperv.c
+++ b/hypervisor/arch/x86/guest/hyperv.c
@@ -24,6 +24,10 @@
#define CPUID3A_VP_INDEX_MSR (1U << 6U)
/* Partition reference TSC MSR (HV_X64_MSR_REFERENCE_TSC) */
#define CPUID3A_REFERENCE_TSC_MSR (1U << 9U)
+/* Partition local APIC and TSC frequency registers (HV_X64_MSR_TSC_FREQUENCY/HV_X64_MSR_APIC_FREQUENCY) */
+#define CPUID3A_ACCESS_FREQUENCY_MSRS (1U << 11U)
+/* Frequency MSRs available */
+#define CPUID3D_FREQ_MSRS_AVAILABLE (1U << 8U)

struct HV_REFERENCE_TSC_PAGE {
uint32_t tsc_sequence;
@@ -167,6 +171,8 @@ hyperv_wrmsr(struct acrn_vcpu *vcpu, uint32_t msr, uint64_t wval)
break;
case HV_X64_MSR_VP_INDEX:
case HV_X64_MSR_TIME_REF_COUNT:
+ case HV_X64_MSR_TSC_FREQUENCY:
+ case HV_X64_MSR_APIC_FREQUENCY:
/* read only */
/* fallthrough */
default:
@@ -202,6 +208,13 @@ hyperv_rdmsr(struct acrn_vcpu *vcpu, uint32_t msr, uint64_t *rval)
case HV_X64_MSR_REFERENCE_TSC:
*rval = vcpu->vm->arch_vm.hyperv.ref_tsc_page.val64;
break;
+ case HV_X64_MSR_TSC_FREQUENCY:
+ *rval = get_tsc_khz() * 1000UL;
+ break;
+ case HV_X64_MSR_APIC_FREQUENCY:
+ /* both KVM and XEN hardcode the APIC freq as 1GHz ... */
+ *rval = 1000000000UL;
+ break;
default:
pr_err("hv: %s: unexpected MSR[0x%x] read", __func__, msr);
ret = -1;
@@ -264,10 +277,11 @@ hyperv_init_vcpuid_entry(uint32_t leaf, uint32_t subleaf, uint32_t flags,
break;
case 0x40000003U: /* HV supported feature */
entry->eax = CPUID3A_HYPERCALL_MSR | CPUID3A_VP_INDEX_MSR |
- CPUID3A_TIME_REF_COUNT_MSR | CPUID3A_REFERENCE_TSC_MSR;
+ CPUID3A_TIME_REF_COUNT_MSR | CPUID3A_REFERENCE_TSC_MSR |
+ CPUID3A_ACCESS_FREQUENCY_MSRS;
entry->ebx = 0U;
entry->ecx = 0U;
- entry->edx = 0U;
+ entry->edx = CPUID3D_FREQ_MSRS_AVAILABLE;
break;
case 0x40000004U: /* HV Recommended hypercall usage */
entry->eax = 0U;
diff --git a/hypervisor/arch/x86/guest/vmsr.c b/hypervisor/arch/x86/guest/vmsr.c
index 92dfc9a26..48e04b900 100644
--- a/hypervisor/arch/x86/guest/vmsr.c
+++ b/hypervisor/arch/x86/guest/vmsr.c
@@ -581,6 +581,8 @@ int32_t rdmsr_vmexit_handler(struct acrn_vcpu *vcpu)
case HV_X64_MSR_VP_INDEX:
case HV_X64_MSR_REFERENCE_TSC:
case HV_X64_MSR_TIME_REF_COUNT:
+ case HV_X64_MSR_TSC_FREQUENCY:
+ case HV_X64_MSR_APIC_FREQUENCY:
{
err = hyperv_rdmsr(vcpu, msr, &v);
break;
@@ -943,6 +945,8 @@ int32_t wrmsr_vmexit_handler(struct acrn_vcpu *vcpu)
case HV_X64_MSR_VP_INDEX:
case HV_X64_MSR_REFERENCE_TSC:
case HV_X64_MSR_TIME_REF_COUNT:
+ case HV_X64_MSR_TSC_FREQUENCY:
+ case HV_X64_MSR_APIC_FREQUENCY:
{
err = hyperv_wrmsr(vcpu, msr, v);
break;
diff --git a/hypervisor/include/arch/x86/asm/guest/hyperv.h b/hypervisor/include/arch/x86/asm/guest/hyperv.h
index ae8055178..5cac0c713 100644
--- a/hypervisor/include/arch/x86/asm/guest/hyperv.h
+++ b/hypervisor/include/arch/x86/asm/guest/hyperv.h
@@ -16,6 +16,8 @@

#define HV_X64_MSR_TIME_REF_COUNT 0x40000020U
#define HV_X64_MSR_REFERENCE_TSC 0x40000021U
+#define HV_X64_MSR_TSC_FREQUENCY 0x40000022U
+#define HV_X64_MSR_APIC_FREQUENCY 0x40000023U

union hyperv_ref_tsc_page_msr {
uint64_t val64;
--
2.35.1
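The read-only values the patch returns can be modeled in a short Python sketch; the MSR numbers and feature bits are taken from the patch, while the function name and its `tsc_khz` parameter are illustrative:

```python
# MSR numbers and CPUID.0x40000003 feature bits from the patch
HV_X64_MSR_TSC_FREQUENCY = 0x40000022
HV_X64_MSR_APIC_FREQUENCY = 0x40000023
CPUID3A_ACCESS_FREQUENCY_MSRS = 1 << 11   # EAX bit 11
CPUID3D_FREQ_MSRS_AVAILABLE = 1 << 8      # EDX bit 8

def hyperv_freq_rdmsr(msr, tsc_khz):
    """Mirror the values hyperv_rdmsr returns for the two new vMSRs."""
    if msr == HV_X64_MSR_TSC_FREQUENCY:
        return tsc_khz * 1000            # TSC frequency in Hz
    if msr == HV_X64_MSR_APIC_FREQUENCY:
        return 1_000_000_000             # KVM and Xen also hardcode 1 GHz
    raise ValueError(f"unexpected MSR 0x{msr:x}")
```

Writes to either MSR fall through to the read-only error path, as in the wrmsr side of the patch.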


[PATCH] hv: tsc: calibrate TSC by HPET

Jian Jun Chen
 

On some platforms CPUID.0x15:ECX is zero and CPUID.0x16 can
only return the TSC frequency in MHz, which is not accurate enough.
This patch adds support for calibrating the TSC against the HPET
when one is available, which is more accurate than the PIT.

Tracked-On: #7876
Signed-off-by: Jian Jun Chen <jian.jun.chen@...>
---
hypervisor/arch/x86/tsc.c | 119 +++++++++++++++++++++++++++++++--
hypervisor/boot/acpi_base.c | 12 ++++
hypervisor/boot/include/acpi.h | 11 +++
3 files changed, 136 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/x86/tsc.c b/hypervisor/arch/x86/tsc.c
index 4f3aaec74..ef794ffc7 100644
--- a/hypervisor/arch/x86/tsc.c
+++ b/hypervisor/arch/x86/tsc.c
@@ -10,10 +10,16 @@
#include <asm/cpu_caps.h>
#include <asm/io.h>
#include <asm/tsc.h>
+#include <asm/cpu.h>
+#include <acpi.h>

#define CAL_MS 10U

+#define HPET_PERIOD 0x004U
+#define HPET_COUNTER 0x0F0U
+
static uint32_t tsc_khz;
+static void *hpet_hva;

static uint64_t pit_calibrate_tsc(uint32_t cal_ms_arg)
{
@@ -65,10 +71,94 @@ static uint64_t pit_calibrate_tsc(uint32_t cal_ms_arg)
return (current_tsc / cal_ms) * 1000U;
}

+static inline void hpet_init(void)
+{
+ if (hpet_hva == NULL) {
+ hpet_hva = parse_hpet();
+ }
+}
+
+static inline bool is_hpet_enabled(void)
+{
+ return (hpet_hva != NULL);
+}
+
+static inline uint32_t hpet_read(uint32_t offset)
+{
+ return mmio_read32(hpet_hva + offset);
+}
+
+static inline uint64_t tsc_read_hpet(uint64_t *p)
+{
+ uint64_t current_tsc;
+
+ /* read hpet first */
+ *p = hpet_read(HPET_COUNTER) & 0xFFFFFFFFU;
+ current_tsc = rdtsc();
+
+ return current_tsc;
+}
+
+static uint64_t calc_tsc_by_hpet(uint64_t tsc1, uint64_t hpet1,
+ uint64_t tsc2, uint64_t hpet2)
+{
+ uint64_t delta_tsc, delta_fs, tsc_khz;
+
+ if (hpet2 < hpet1) {
+ hpet2 += 0x100000000ULL;
+ }
+ delta_fs = (hpet2 - hpet1) * hpet_read(HPET_PERIOD);
+ delta_tsc = tsc2 - tsc1;
+ /*
+ * FS_PER_S = 10 ^ 15
+ *
+ * tsc_khz = delta_tsc / (delta_fs / FS_PER_S) / 1000UL;
+ * = delta_tsc / delta_fs * (10 ^ 12)
+ * = (delta_tsc * (10 ^ 6)) / (delta_fs / (10 ^ 6))
+ */
+ tsc_khz = (delta_tsc * 1000000UL) / (delta_fs / 1000000UL);
+ return tsc_khz * 1000U;
+}
+
+static uint64_t hpet_calibrate_tsc(uint32_t cal_ms_arg, uint64_t tsc_ref_hz)
+{
+ uint64_t tsc1, tsc2, hpet1, hpet2, delta;
+ uint64_t tsc_pit_hz = 0UL, tsc_hpet_hz, ret_tsc_hz = 0UL;
+ uint64_t rflags;
+ uint32_t i;
+
+ for (i = 0U; i < 3U; i++) {
+ CPU_INT_ALL_DISABLE(&rflags);
+ tsc1 = tsc_read_hpet(&hpet1);
+ tsc_pit_hz = pit_calibrate_tsc(cal_ms_arg);
+ tsc2 = tsc_read_hpet(&hpet2);
+ CPU_INT_ALL_RESTORE(rflags);
+
+ tsc_hpet_hz = calc_tsc_by_hpet(tsc1, hpet1, tsc2, hpet2);
+ if (tsc_ref_hz != 0UL) {
+ delta = (tsc_hpet_hz * 100UL) / tsc_ref_hz;
+ if (delta >= 95UL && delta <= 105UL) {
+ ret_tsc_hz = tsc_hpet_hz;
+ break;
+ }
+ } else {
+ ret_tsc_hz = tsc_hpet_hz;
+ break;
+ }
+ }
+
+ /* if hpet calibration failed, use tsc_pit_hz */
+ if (ret_tsc_hz == 0UL) {
+ ret_tsc_hz = tsc_pit_hz;
+ }
+
+ return ret_tsc_hz;
+}
+
/*
- * Determine TSC frequency via CPUID 0x15 and 0x16.
+ * Determine TSC frequency via CPUID 0x15.
*/
-static uint64_t native_calibrate_tsc(void)
+static uint64_t native_calibrate_tsc_cpuid_0x15(void)
{
uint64_t tsc_hz = 0UL;
const struct cpuinfo_x86 *cpu_info = get_pcpu_info();
@@ -85,7 +175,18 @@ static uint64_t native_calibrate_tsc(void)
}
}

- if ((tsc_hz == 0UL) && (cpu_info->cpuid_level >= 0x16U)) {
+ return tsc_hz;
+}
+
+/*
+ * Determine TSC frequency via CPUID 0x16.
+ */
+static uint64_t native_calibrate_tsc_cpuid_0x16(void)
+{
+ uint64_t tsc_hz = 0UL;
+ const struct cpuinfo_x86 *cpu_info = get_pcpu_info();
+
+ if (cpu_info->cpuid_level >= 0x16U) {
uint32_t eax_base_mhz, ebx_max_mhz, ecx_bus_mhz, edx;

cpuid_subleaf(0x16U, 0x0U, &eax_base_mhz, &ebx_max_mhz, &ecx_bus_mhz, &edx);
@@ -99,9 +200,15 @@ void calibrate_tsc(void)
{
uint64_t tsc_hz;

- tsc_hz = native_calibrate_tsc();
- if (tsc_hz == 0U) {
- tsc_hz = pit_calibrate_tsc(CAL_MS);
+ hpet_init();
+ tsc_hz = native_calibrate_tsc_cpuid_0x15();
+ if (tsc_hz == 0UL) {
+ tsc_hz = native_calibrate_tsc_cpuid_0x16();
+ if (is_hpet_enabled()) {
+ tsc_hz = hpet_calibrate_tsc(CAL_MS, tsc_hz);
+ } else {
+ tsc_hz = pit_calibrate_tsc(CAL_MS);
+ }
}
tsc_khz = (uint32_t)(tsc_hz / 1000UL);
}
diff --git a/hypervisor/boot/acpi_base.c b/hypervisor/boot/acpi_base.c
index 75d2c105a..83b104074 100644
--- a/hypervisor/boot/acpi_base.c
+++ b/hypervisor/boot/acpi_base.c
@@ -244,3 +244,15 @@ uint8_t parse_madt_ioapic(struct ioapic_info *ioapic_id_array)

return ioapic_idx;
}
+
+void *parse_hpet(void)
+{
+ const struct acpi_table_hpet *hpet = (const struct acpi_table_hpet *)get_acpi_tbl(ACPI_SIG_HPET);
+ uint64_t addr = 0UL;
+
+ if (hpet != NULL) {
+ addr = hpet->address.address;
+ }
+
+ return hpa2hva(addr);
+}
diff --git a/hypervisor/boot/include/acpi.h b/hypervisor/boot/include/acpi.h
index 7028e0ece..e7f10c39c 100644
--- a/hypervisor/boot/include/acpi.h
+++ b/hypervisor/boot/include/acpi.h
@@ -59,6 +59,7 @@
#define ACPI_SIG_TPM2 "TPM2" /* Trusted Platform Module hardware interface table */
#define ACPI_SIG_RTCT "PTCT" /* Platform Tuning Configuration Table (Real-Time Configuration Table) */
#define ACPI_SIG_RTCT_V2 "RTCT" /* Platform Tuning Configuration Table (Real-Time Configuration Table) V2 */
+#define ACPI_SIG_HPET "HPET" /* High Precision Event Timer table */

struct packed_gas {
uint8_t space_id;
@@ -242,12 +243,22 @@ struct acpi_table_tpm2 {
#endif
} __packed;

+struct acpi_table_hpet {
+ struct acpi_table_header header;
+ uint32_t id;
+ struct packed_gas address;
+ uint8_t sequence;
+ uint16_t minimum_tick;
+ uint8_t flags;
+} __packed;
+
void init_acpi(void);
void *get_acpi_tbl(const char *signature);

struct ioapic_info;
uint16_t parse_madt(uint32_t lapic_id_array[MAX_PCPU_NUM]);
uint8_t parse_madt_ioapic(struct ioapic_info *ioapic_id_array);
+void *parse_hpet(void);

#ifdef CONFIG_ACPI_PARSE_ENABLED
int32_t acpi_fixup(void);
--
2.35.1
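The arithmetic in `calc_tsc_by_hpet` can be checked with a small Python sketch; the 10 MHz HPET period used in the comments below is an assumed round number for illustration (real HPETs commonly tick at 14.318 MHz):

```python
HPET_WRAP = 1 << 32   # the main counter read here is treated as 32 bits wide
FS_PER_S = 10**15     # the HPET period register is in femtoseconds

def tsc_hz_from_hpet(tsc1, hpet1, tsc2, hpet2, period_fs):
    """Mirror calc_tsc_by_hpet: TSC ticks per second from two paired reads."""
    if hpet2 < hpet1:                        # tolerate one counter wraparound
        hpet2 += HPET_WRAP
    delta_fs = (hpet2 - hpet1) * period_fs   # elapsed time in femtoseconds
    delta_tsc = tsc2 - tsc1
    # tsc_hz = delta_tsc / (delta_fs / FS_PER_S)
    return delta_tsc * FS_PER_S // delta_fs
```

The patch additionally retries up to three times and cross-checks the HPET result against the CPUID 0x16 estimate (accepting it only within 95-105%), falling back to the PIT value when all attempts fail.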
