Re: [PATCH V2 3/6] HV: re-enable #AC for splitlock at every VM-Exit

Tao, Yuhong

Hi, Eddie

I have discussed this with Jason & Like. When a split lock access happens, there are only two choices: (1) let it run with the LOCK# signal, or (2) terminate it.
The design now becomes:

Always enable #AC on split lock on the HV side; if a split lock is detected in the HV, report it and continue.

The VM handles its own #AC, with no VM-Exit.
A high severity VM takes control of this feature (it may allow split lock accesses to run with LOCK#).
For a low severity VM, this feature is forced on (the guest hangs or oopses in the kernel; the offending task is terminated in userspace).
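The per-VM policy above could be sketched as follows. This is a minimal user-space model, not ACRN code: the MSR number and bit position follow Intel's TEST_CTL layout, but the function names, the severity enum, and the `fake_test_ctl` variable standing in for the physical MSR are all hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define MSR_TEST_CTL            0x33U
#define TEST_CTL_SPLIT_LOCK_AC  (1ULL << 29)  /* #AC-on-split-lock enable bit */

/* Hypothetical severity levels; ACRN's real constants differ. */
enum vm_severity { SEVERITY_LOW, SEVERITY_HIGH };

/* Stand-in for the physical MSR so the sketch runs in user space.
 * The HV keeps the split lock bit set by default. */
static uint64_t fake_test_ctl = TEST_CTL_SPLIT_LOCK_AC;

/* Policy sketch: only the highest severity VM may change the split
 * lock bit. For any other VM, a guest write to that bit is silently
 * dropped, so #AC on split lock stays forced on. */
static void handle_guest_test_ctl_write(enum vm_severity sev, uint64_t val)
{
    if (sev == SEVERITY_HIGH) {
        fake_test_ctl = val;  /* pass the write through unmodified */
    } else {
        /* keep the HV-owned split lock bit, accept the rest */
        fake_test_ctl = (fake_test_ctl & TEST_CTL_SPLIT_LOCK_AC) |
                        (val & ~TEST_CTL_SPLIT_LOCK_AC);
    }
}
```

A low severity VM that tries to clear the bit sees no effect, which is exactly the "forced to be enabled" behavior described above.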


-----Original Message-----
From: Dong, Eddie <eddie.dong@...>
Sent: Tuesday, March 24, 2020 10:07 AM
To: acrn-dev@...
Cc: Tao, Yuhong <yuhong.tao@...>; Dong, Eddie <eddie.dong@...>
Subject: RE: [acrn-dev] [PATCH V2 3/6] HV: re-enable #AC for splitlock at every VM-Exit

This policy is not the initial design. I copied the design here:

Always enable #AC on split lock on the HV side.

Present the virtual split lock capability to the guest, and directly deliver #AC to the guest; but
only the highest severity VM can override the setting of #AC on split lock.

If a split lock happens in ACRN itself, that is a software bug we must fix.

Assumption: the highest severity VM can gracefully handle #AC on split lock, or
never triggers #AC on split lock. Other VMs: if a VM has a handler for #AC on
split lock, it can handle it gracefully; otherwise it dies in the guest.
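"Directly deliver #AC to guest" with "no VM-Exit" amounts to not intercepting vector 17 in the VMCS exception bitmap. A minimal sketch of that configuration step, with the bitmap modeled as a plain variable (the vector number is architectural; the function and variable names are assumptions, not the ACRN API):

```c
#include <stdint.h>
#include <assert.h>

#define IDT_AC  17U  /* #AC, Alignment Check exception vector */

/* Stand-in for the VMCS exception bitmap so the sketch runs in
 * user space; in a real VMM this field is written via VMWRITE. */
static uint32_t exception_bitmap;

/* Design sketch: clear the #AC bit so a guest split lock #AC is
 * delivered straight to the guest's own handler with no VM-Exit. */
static void setup_exception_bitmap(void)
{
    exception_bitmap &= ~(1U << IDT_AC);
}
```

With the bit clear, the guest either handles the fault or dies in the guest, matching the per-VM expectations stated above.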

-----Original Message-----
From: acrn-dev@... <acrn-dev@...>
On Behalf Of Tao, Yuhong
Sent: Monday, March 23, 2020 9:59 PM
To: acrn-dev@...
Cc: Tao, Yuhong <yuhong.tao@...>
Subject: [acrn-dev] [PATCH V2 3/6] HV: re-enable #AC for splitlock at
every VM-Exit

Once a split lock access happens in the HV, the alignment check exception
handler disables split lock detection so the HV can keep running. We
re-enable it at every VM-Exit, to make sure #AC for split lock is
always enabled in VMX root mode.

Split lock accesses should rarely happen in the HV, and TEST_CTL is
per core (simultaneous detection on both logical threads is not
guaranteed). So just report the first split lock found in each VM-Exit
iteration, wait for it to be fixed, and repeat.

Tracked-On: #4496
Signed-off-by: Tao Yuhong <yuhong.tao@...>
---
 hypervisor/arch/x86/guest/vcpu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/hypervisor/arch/x86/guest/vcpu.c b/hypervisor/arch/x86/guest/vcpu.c
index aee87f1..7c93ea5 100644
--- a/hypervisor/arch/x86/guest/vcpu.c
+++ b/hypervisor/arch/x86/guest/vcpu.c
@@ -571,6 +571,10 @@ int32_t run_vcpu(struct acrn_vcpu *vcpu)
 		/* Launch the VM */
 		status = vmx_vmrun(ctx, VM_LAUNCH, ibrs_type);
 
+		if (has_ac_for_splitlock() == true) {
+			enable_ac_for_splitlock();
+		}
+
 		/* See if VM launched successfully */
 		if (status == 0) {
 			if (is_vcpu_bsp(vcpu)) {
@@ -595,6 +599,10 @@ int32_t run_vcpu(struct acrn_vcpu *vcpu)
 
 		/* Resume the VM */
 		status = vmx_vmrun(ctx, VM_RESUME, ibrs_type);
+
+		if (has_ac_for_splitlock() == true) {
+			enable_ac_for_splitlock();
+		}
 
 		vcpu->reg_cached = 0UL;
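The patch shows only the re-enable half; the other half is the #AC handler the commit message refers to. A sketch of that handler side, as a runnable user-space model (the bit position is Intel's TEST_CTL layout; the function name, the `fake_test_ctl` stand-in for the MSR, and the reporting flag are hypothetical, not ACRN code):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define TEST_CTL_SPLIT_LOCK_AC  (1ULL << 29)  /* #AC-on-split-lock enable bit */

/* Stand-in for the physical TEST_CTL MSR; detection starts enabled. */
static uint64_t fake_test_ctl = TEST_CTL_SPLIT_LOCK_AC;
static bool splitlock_reported;

/* Sketch of the #AC handler behavior the commit message describes:
 * when a split lock #AC is raised in VMX root mode, report it and
 * clear the detect bit so the hypervisor can keep running. The
 * run_vcpu() hunks above then set the bit again on the next VM-Exit,
 * so only the first split lock per iteration is reported. */
static void hv_ac_splitlock_handler(void)
{
    if ((fake_test_ctl & TEST_CTL_SPLIT_LOCK_AC) != 0U) {
        splitlock_reported = true;  /* pr_err(...) in a real HV */
        fake_test_ctl &= ~TEST_CTL_SPLIT_LOCK_AC;
    }
}
```

Disabling here and re-enabling in `run_vcpu()` is what keeps the HV alive while still catching the next split lock, which is the iteration the commit message describes.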
