Re: [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

Eddie Dong

OK. This is probably for Windows guests, where timeouts may be a problem.

Please PR with my ack.

-----Original Message-----
From: Liu, Yifan1 <yifan1.liu@...>
Sent: Monday, July 11, 2022 5:32 PM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Subject: RE: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, July 12, 2022 8:02 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: RE: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

-----Original Message-----
From: acrn-dev@...
<acrn-dev@...> On Behalf Of Liu, Yifan1
Sent: Tuesday, July 12, 2022 12:35 AM
To: acrn-dev@...
Cc: Liu, Yifan1 <yifan1.liu@...>
Subject: [acrn-dev] [PATCH 2/2] hv: Serialize WBINVD using wbinvd_lock

From: Yifan Liu <yifan1.liu@...>

As mentioned in the previous patch, wbinvd emulation uses the
vcpu_make_request and signal_event call pair to stall other vcpus.
Because these two calls are not thread-safe, concurrent calls to this
API pair must be avoided.

This patch adds a wbinvd lock to serialize wbinvd emulation.

Signed-off-by: Yifan Liu <yifan1.liu@...>
hypervisor/arch/x86/guest/vmexit.c | 2 ++
hypervisor/include/arch/x86/asm/guest/vm.h | 1 +
2 files changed, 3 insertions(+)

diff --git a/hypervisor/arch/x86/guest/vmexit.c b/hypervisor/arch/x86/guest/vmexit.c
index ca4bdbf2a..fad807ba8 100644
--- a/hypervisor/arch/x86/guest/vmexit.c
+++ b/hypervisor/arch/x86/guest/vmexit.c
@@ -438,6 +438,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
if (is_rt_vm(vcpu->vm)) {
I am curious why we need to handle RT VMs and non-RT VMs differently here.
Do you happen to know?
[Liu, Yifan1] The different handling was introduced in commit 9a15ea82.
Before that commit, the code looked like this:

if (has_rt_vm() == false) {
	cache_flush_invalidate_all();
} else {
	walk_ept_table(vcpu->vm, ept_flush_leaf_page);
}

Quoting commit message of 9a15ea82:

hv: pause all other vCPUs in same VM when do wbinvd emulation

Invalidating the cache by scanning and flushing the whole guest memory is
inefficient and might cause a long execution time for WBINVD emulation.
A long execution in the hypervisor might cause a vCPU-stuck phenomenon
that impacts Windows guest booting.

This patch introduces a workaround: pause all other vCPUs in the same VM
when doing wbinvd emulation.

Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@...>
Acked-by: Eddie Dong <eddie.dong@...>

walk_ept_table(vcpu->vm, ept_flush_leaf_page);
} else {
+ spinlock_obtain(&vcpu->vm->wbinvd_lock);
/* Pause other vcpus and let them wait for the wbinvd
foreach_vcpu(i, vcpu->vm, other) {
if (other != vcpu) {
@@ -452,6 +453,7 @@ static int32_t wbinvd_vmexit_handler(struct acrn_vcpu *vcpu)
+ spinlock_release(&vcpu->vm->wbinvd_lock);

diff --git a/hypervisor/include/arch/x86/asm/guest/vm.h b/hypervisor/include/arch/x86/asm/guest/vm.h
index c05b9b212..d06300600 100644
--- a/hypervisor/include/arch/x86/asm/guest/vm.h
+++ b/hypervisor/include/arch/x86/asm/guest/vm.h
@@ -149,6 +149,7 @@ struct acrn_vm {
* the initialization depends on the clear BSS section
spinlock_t vm_state_lock;
+	spinlock_t wbinvd_lock; /* Spin-lock used to serialize wbinvd emulation */
	spinlock_t vlapic_mode_lock; /* Spin-lock used to protect vlapic_mode modifications for a VM */
	spinlock_t ept_lock; /* Spin-lock used to protect ept add/modify/remove for a VM */
	spinlock_t emul_mmio_lock; /* Used to protect emulation mmio_node concurrent access for a VM */
