
[PATCH 4/7] HV: ignore the NMI injection request for lapic-pt vCPUs

Kaige Fu
 

NMI will be used as the notification signal for lapic-pt vCPUs, and
we don't support vNMI yet. So, this patch ignores the pending NMI
request and any EXCP request with vector 2.

Signed-off-by: Kaige Fu <kaige.fu@...>
---
hypervisor/arch/x86/guest/virq.c | 55 ++++++++++++++++++++------------
1 file changed, 35 insertions(+), 20 deletions(-)

diff --git a/hypervisor/arch/x86/guest/virq.c b/hypervisor/arch/x86/guest/virq.c
index 60358534..1c9904c4 100644
--- a/hypervisor/arch/x86/guest/virq.c
+++ b/hypervisor/arch/x86/guest/virq.c
@@ -211,26 +211,36 @@ int32_t vcpu_queue_exception(struct acrn_vcpu *vcpu, uint32_t vector_arg, uint32
static void vcpu_inject_exception(struct acrn_vcpu *vcpu, uint32_t vector)
{
if (bitmap_test_and_clear_lock(ACRN_REQUEST_EXCP, &vcpu->arch.pending_req)) {
-
- if ((exception_type[vector] & EXCEPTION_ERROR_CODE_VALID) != 0U) {
- exec_vmwrite32(VMX_ENTRY_EXCEPTION_ERROR_CODE,
- vcpu->arch.exception_info.error);
- }
+ if (is_lapic_pt_enabled(vcpu) && (vector == IDT_NMI)) {
+ /*
+ * NMI will be used as notification signal for lapic-pt vCPUs and we
+ * don't support vNMI yet. So, here we just ignore the NMI injection
+ * request.
+ */
+ pr_warn("Don't allow to inject NMI to lapic-pt vCPU%u. Ignore this request.", vcpu->vcpu_id);
+ vcpu->arch.exception_info.exception = VECTOR_INVALID;
+ } else {
+ if ((exception_type[vector] & EXCEPTION_ERROR_CODE_VALID) != 0U) {
+ exec_vmwrite32(VMX_ENTRY_EXCEPTION_ERROR_CODE,
+ vcpu->arch.exception_info.error);
+ }

- exec_vmwrite32(VMX_ENTRY_INT_INFO_FIELD, VMX_INT_INFO_VALID |
- (exception_type[vector] << 8U) | (vector & 0xFFU));
+ exec_vmwrite32(VMX_ENTRY_INT_INFO_FIELD, VMX_INT_INFO_VALID |
+ (exception_type[vector] << 8U) | (vector & 0xFFU));

- vcpu->arch.exception_info.exception = VECTOR_INVALID;
+ vcpu->arch.exception_info.exception = VECTOR_INVALID;

- /* retain rip for exception injection */
- vcpu_retain_rip(vcpu);
+ /* retain rip for exception injection */
+ vcpu_retain_rip(vcpu);

- /* SDM 17.3.1.1 For any fault-class exception except a debug exception generated in response to an
- * instruction breakpoint, the value pushed for RF is 1.
- * #DB is treated as Trap in get_exception_type, so RF will not be set for instruction breakpoint.
- */
- if (get_exception_type(vector) == EXCEPTION_FAULT) {
- vcpu_set_rflags(vcpu, vcpu_get_rflags(vcpu) | HV_ARCH_VCPU_RFLAGS_RF);
+ /* SDM 17.3.1.1 For any fault-class exception except a debug exception generated
+ * in response to an instruction breakpoint, the value pushed for RF is 1.
+ * #DB is treated as Trap in get_exception_type, so RF will not be set for
+ * instruction breakpoint.
+ */
+ if (get_exception_type(vector) == EXCEPTION_FAULT) {
+ vcpu_set_rflags(vcpu, vcpu_get_rflags(vcpu) | HV_ARCH_VCPU_RFLAGS_RF);
+ }
}
}
}
@@ -383,10 +393,15 @@ int32_t acrn_handle_pending_request(struct acrn_vcpu *vcpu)
if (!injected) {
/* inject NMI before maskable hardware interrupt */
if (bitmap_test_and_clear_lock(ACRN_REQUEST_NMI, pending_req_bits)) {
- /* Inject NMI vector = 2 */
- exec_vmwrite32(VMX_ENTRY_INT_INFO_FIELD,
- VMX_INT_INFO_VALID | (VMX_INT_TYPE_NMI << 8U) | IDT_NMI);
- injected = true;
+ if (!is_lapic_pt_enabled(vcpu)) {
+ /* Inject NMI vector = 2 */
+ exec_vmwrite32(VMX_ENTRY_INT_INFO_FIELD,
+ VMX_INT_INFO_VALID | (VMX_INT_TYPE_NMI << 8U) | IDT_NMI);
+ injected = true;
+ } else {
+ pr_warn("Don't allow to inject NMI to lapic-pt vCPU%u. Ignore this request.",
+ vcpu->vcpu_id);
+ }
} else {
/* handling pending vector injection:
* there are many reason inject failed, we need re-inject again
--
2.20.0


[PATCH 3/7] HV: Use NMI to kick lapic-pt vCPU's thread

Kaige Fu
 

ACRN hypervisor needs to kick a vCPU out of VMX non-root mode to do some
operations in the hypervisor, such as interrupt/exception injection, EPT
flush, etc. For non-lapic-pt vCPUs, we can use an IPI to do so. But it
doesn't work for lapic-pt vCPUs, as the IPI would be injected into the VM
directly without a vmexit.

Otherwise, fatal errors may be triggered: 1) Certain operations may not be
carried out on time, which may further lead to fatal errors. Taking the
EPT flush request as an example, once we don't flush the EPT on time and
the guest accesses the out-of-date EPT, a fatal error happens. 2) The IPI
vector will be delivered to the VM directly. If the guest can't handle it
properly, further interrupts might be blocked, which will cause the VM to
hang.

The NMI can be used as the notification signal to kick the vCPU out of VMX
non-root mode for lapic-pt vCPUs. This patch does so by enabling NMI-exiting
after passing the LAPIC through to the vCPU.

Signed-off-by: Kaige Fu <kaige.fu@...>
---
hypervisor/arch/x86/guest/vmcs.c | 11 +++++++++++
hypervisor/arch/x86/irq.c | 20 ++++++++++++--------
hypervisor/common/schedule.c | 19 ++++++++++++++++---
hypervisor/include/common/schedule.h | 1 +
4 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vmcs.c b/hypervisor/arch/x86/guest/vmcs.c
index 2bad083e..adf8d7cd 100644
--- a/hypervisor/arch/x86/guest/vmcs.c
+++ b/hypervisor/arch/x86/guest/vmcs.c
@@ -568,6 +568,7 @@ void switch_apicv_mode_x2apic(struct acrn_vcpu *vcpu)
* Disable posted interrupt processing
* update x2apic msr bitmap for pass-thru
* enable inteception only for ICR
+ * enable NMI exit as we will use NMI to kick vCPU thread
* disable pre-emption for TSC DEADLINE MSR
* Disable Register Virtualization and virtual interrupt delivery
* Disable "use TPR shadow"
@@ -578,6 +579,16 @@ void switch_apicv_mode_x2apic(struct acrn_vcpu *vcpu)
if (is_apicv_advanced_feature_supported()) {
value32 &= ~VMX_PINBASED_CTLS_POST_IRQ;
}
+
+ /*
+ * ACRN hypervisor needs to kick vCPU off VMX non-root mode to do some
+ * operations in hypervisor, such as interrupt/exception injection, EPT
+ * flush etc. For non lapic-pt vCPUs, we can use IPI to do so. But, it
+ * doesn't work for lapic-pt vCPUs as the IPI will be injected to VMs
+ * directly without vmexit. So, here we enable NMI-exiting and use NMI
+ * as notification signal after passthroughing the lapic to vCPU.
+ */
+ value32 |= VMX_PINBASED_CTLS_NMI_EXIT;
exec_vmwrite32(VMX_PIN_VM_EXEC_CONTROLS, value32);

value32 = exec_vmread32(VMX_EXIT_CONTROLS);
diff --git a/hypervisor/arch/x86/irq.c b/hypervisor/arch/x86/irq.c
index 6a55418c..4460b44d 100644
--- a/hypervisor/arch/x86/irq.c
+++ b/hypervisor/arch/x86/irq.c
@@ -369,17 +369,21 @@ void dispatch_exception(struct intr_excp_ctx *ctx)
{
uint16_t pcpu_id = get_pcpu_id();

- /* Obtain lock to ensure exception dump doesn't get corrupted */
- spinlock_obtain(&exception_spinlock);
+ if (ctx->vector == IDT_NMI) {
+ /* TODO: we will handle it later */
+ } else {
+ /* Obtain lock to ensure exception dump doesn't get corrupted */
+ spinlock_obtain(&exception_spinlock);

- /* Dump exception context */
- dump_exception(ctx, pcpu_id);
+ /* Dump exception context */
+ dump_exception(ctx, pcpu_id);

- /* Release lock to let other CPUs handle exception */
- spinlock_release(&exception_spinlock);
+ /* Release lock to let other CPUs handle exception */
+ spinlock_release(&exception_spinlock);

- /* Halt the CPU */
- cpu_dead();
+ /* Halt the CPU */
+ cpu_dead();
+ }
}

static void init_irq_descs(void)
diff --git a/hypervisor/common/schedule.c b/hypervisor/common/schedule.c
index 847bafd0..327d4cc6 100644
--- a/hypervisor/common/schedule.c
+++ b/hypervisor/common/schedule.c
@@ -118,7 +118,7 @@ struct thread_object *sched_get_current(uint16_t pcpu_id)
}

/**
- * @pre delmode == DEL_MODE_IPI || delmode == DEL_MODE_INIT
+ * @pre delmode == DEL_MODE_IPI || delmode == DEL_MODE_INIT || delmode == DEL_MODE_NMI
*/
void make_reschedule_request(uint16_t pcpu_id, uint16_t delmode)
{
@@ -133,6 +133,9 @@ void make_reschedule_request(uint16_t pcpu_id, uint16_t delmode)
case DEL_MODE_INIT:
send_single_init(pcpu_id);
break;
+ case DEL_MODE_NMI:
+ send_single_nmi(pcpu_id);
+ break;
default:
ASSERT(false, "Unknown delivery mode %u for pCPU%u", delmode, pcpu_id);
break;
@@ -230,10 +233,20 @@ void kick_thread(const struct thread_object *obj)
obtain_schedule_lock(pcpu_id, &rflag);
if (is_running(obj)) {
if (get_pcpu_id() != pcpu_id) {
- send_single_ipi(pcpu_id, VECTOR_NOTIFY_VCPU);
+ if (obj->notify_mode == SCHED_NOTIFY_IPI) {
+ send_single_ipi(pcpu_id, VECTOR_NOTIFY_VCPU);
+ } else {
+ /* For lapic-pt vCPUs */
+ send_single_nmi(pcpu_id);
+ }
}
} else if (is_runnable(obj)) {
- make_reschedule_request(pcpu_id, DEL_MODE_IPI);
+ if (obj->notify_mode == SCHED_NOTIFY_IPI) {
+ make_reschedule_request(pcpu_id, DEL_MODE_IPI);
+ } else {
+ /* For lapic-pt vCPUs */
+ make_reschedule_request(pcpu_id, DEL_MODE_NMI);
+ }
} else {
/* do nothing */
}
diff --git a/hypervisor/include/common/schedule.h b/hypervisor/include/common/schedule.h
index 808beacc..0a407fb1 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -12,6 +12,7 @@

#define DEL_MODE_INIT (1U)
#define DEL_MODE_IPI (2U)
+#define DEL_MODE_NMI (3U)

#define THREAD_DATA_SIZE (256U)

--
2.20.0


[PATCH 2/7] HV: Add helper function send_single_nmi

Kaige Fu
 

This patch adds a helper function, send_single_nmi. The first caller
comes with a following patch.
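For reference, the x2APIC ICR value that send_single_nmi composes can be sketched as follows. Bit positions are per the Intel SDM; the constant names here are illustrative, not ACRN's headers.

```c
#include <assert.h>
#include <stdint.h>

/* x2APIC ICR layout (Intel SDM): destination x2APIC ID in bits 63:32,
 * destination mode in bit 11 (0 = physical), delivery mode in bits 10:8
 * (100b = NMI). With delivery mode NMI, the vector bits 7:0 are ignored. */
#define ICR_DEST_MODE_PHYSICAL 0x0ULL
#define ICR_DELMODE_NMI        0x4ULL

static uint64_t make_nmi_icr(uint32_t dest_lapic_id)
{
	return ((uint64_t)dest_lapic_id << 32U) |
	       (ICR_DEST_MODE_PHYSICAL << 11U) |
	       (ICR_DELMODE_NMI << 8U);
}
```

Writing such a value to the x2APIC ICR MSR delivers an NMI to the pCPU whose LAPIC ID is in the high 32 bits.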

Signed-off-by: Kaige Fu <kaige.fu@...>
---
hypervisor/arch/x86/lapic.c | 15 +++++++++++++++
hypervisor/include/arch/x86/lapic.h | 9 +++++++++
2 files changed, 24 insertions(+)

diff --git a/hypervisor/arch/x86/lapic.c b/hypervisor/arch/x86/lapic.c
index c06daaab..a54cb067 100644
--- a/hypervisor/arch/x86/lapic.c
+++ b/hypervisor/arch/x86/lapic.c
@@ -288,3 +288,18 @@ void send_single_init(uint16_t pcpu_id)

msr_write(MSR_IA32_EXT_APIC_ICR, icr.value);
}
+
+/**
+ * @pre pcpu_id < CONFIG_MAX_PCPU_NUM
+ *
+ * @return None
+ */
+void send_single_nmi(uint16_t pcpu_id)
+{
+ union apic_icr icr;
+
+ icr.value_32.hi_32 = per_cpu(lapic_id, pcpu_id);
+ icr.value_32.lo_32 = (INTR_LAPIC_ICR_PHYSICAL << 11U) | (INTR_LAPIC_ICR_NMI << 8U);
+
+ msr_write(MSR_IA32_EXT_APIC_ICR, icr.value);
+}
diff --git a/hypervisor/include/arch/x86/lapic.h b/hypervisor/include/arch/x86/lapic.h
index 5e490b8c..9b59c21d 100644
--- a/hypervisor/include/arch/x86/lapic.h
+++ b/hypervisor/include/arch/x86/lapic.h
@@ -183,4 +183,13 @@ void send_single_ipi(uint16_t pcpu_id, uint32_t vector);
*/
void send_single_init(uint16_t pcpu_id);

+/**
+ * @brief Send an NMI signal to a single pCPU
+ *
+ * @param[in] pcpu_id The id of destination physical cpu
+ *
+ * @return None
+ */
+void send_single_nmi(uint16_t pcpu_id);
+
#endif /* INTR_LAPIC_H */
--
2.20.0


[PATCH 1/7] HV: Push NMI vector on to the exception stack

Kaige Fu
 

This patch pushes the NMI vector (2) onto the exception stack so that
we can get the right vector in dispatch_exception.
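With the two pushes in place, the head of the NMI exception frame matches what the common save path expects for every vector. A toy model (struct and function names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

#define IDT_NMI 2U

/* The stub pushes a pseudo error code first and the vector second, so the
 * vector sits on top of the stack, exactly like exceptions for which the
 * CPU pushes a real error code. dispatch_exception() can then read the
 * vector uniformly. */
struct excp_frame_head {
	uint64_t vector;     /* pushed last, top of stack */
	uint64_t error_code; /* 0: NMI has no real error code */
};

static struct excp_frame_head nmi_frame_head(void)
{
	struct excp_frame_head head = { .vector = IDT_NMI, .error_code = 0U };
	return head;
}
```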

Signed-off-by: Kaige Fu <kaige.fu@...>
---
hypervisor/arch/x86/idt.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/x86/idt.S b/hypervisor/arch/x86/idt.S
index cb42ea9e..bd6d7c47 100644
--- a/hypervisor/arch/x86/idt.S
+++ b/hypervisor/arch/x86/idt.S
@@ -110,9 +110,9 @@ excp_debug:

.align 8
excp_nmi:
-
-
-
+ pushq $0x0 /* pseudo error code */
+ pushq $0x02
+ jmp excp_save_frame

.align 8
excp_breakpoint:
--
2.20.0


[PATCH 0/7] Use NMI to notify vCPUs with lapic-pt

Kaige Fu
 

ACRN hypervisor needs to kick a vCPU out of VMX non-root mode to do some
operations in the hypervisor, such as interrupt/exception injection, EPT
flush, etc. For non-lapic-pt vCPUs, we can use an IPI to do so. But it
doesn't work for lapic-pt vCPUs, as the IPI would be injected into the VM
directly without a vmexit.

Consequently, fatal errors may be triggered: 1) Certain operations may not be
carried out on time, which may further lead to fatal errors. Taking the
EPT flush request as an example, once we don't flush the EPT on time and
the guest accesses the out-of-date EPT, a fatal error happens. 2) The IPI
vector will be delivered to the VM directly. If the guest can't handle it
properly, further interrupts might be blocked, which will cause the VM to
hang.

The NMI can be used as the notification signal to kick the vCPU out of VMX
non-root mode for lapic-pt vCPUs. This patchset does so by enabling NMI-exiting
after passing the LAPIC through to the vCPU.

TODOs:
- Filter out all NMI sources:
* Writes to the ICR with delivery mode NMI
* MSI data programmed with delivery mode NMI
* LVTs programmed with delivery mode NMI
- Implement smp_call for lapic-pt VMs to facilitate debugging of lapic-pt VMs.

Kaige Fu (7):
HV: Push NMI vector on to the exception stack
HV: Add helper function send_single_nmi
HV: Use NMI to kick lapic-pt vCPU's thread
HV: ignore the NMI injection request for lapic-pt vCPUs
HV: Use NMI-window exiting to address req missing issue
HV: Use NMI to replace INIT signal for lapic-pt VMs S5
HV: Remove INIT signal notification related code

hypervisor/arch/x86/guest/virq.c | 76 ++++++++++++++++++++--------
hypervisor/arch/x86/guest/vmcs.c | 18 +++++--
hypervisor/arch/x86/guest/vmexit.c | 23 +--------
hypervisor/arch/x86/idt.S | 6 +--
hypervisor/arch/x86/irq.c | 55 ++++++++++++++++----
hypervisor/arch/x86/lapic.c | 9 +---
hypervisor/common/schedule.c | 24 ++++++---
hypervisor/include/arch/x86/irq.h | 1 +
hypervisor/include/arch/x86/lapic.h | 4 +-
hypervisor/include/common/schedule.h | 4 +-
10 files changed, 146 insertions(+), 74 deletions(-)

--
2.20.0


Re: [PATCH v2 0/9] Add one scheduler which support cpu sharing: sched_iorr

Eddie Dong
 

Let us merge 1/2/3/4/5/8/9 first, after fixing the comments.
6/7 could be later.

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Shuo A Liu
Sent: Thursday, December 5, 2019 5:15 PM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jason CJ
<jason.cj.chen@...>; Chen, Conghui <conghui.chen@...>; Liu,
Shuo A <shuo.a.liu@...>
Subject: [acrn-dev] [PATCH v2 0/9] Add one scheduler which support cpu
sharing: sched_iorr

This patchset adds one scheduler named sched_iorr, which supports CPU
sharing.
sched_iorr is short for IO-sensitive Round-Robin scheduler. It aims to schedule
threads with round-robin as the basic policy, enhanced with some
fairness configuration: preemption occurs on running out of the timeslice,
and threads with pending I/O requests are scheduled at high priority.
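The timeslice accounting behind this policy can be sketched as a tiny model. The constant and function names below are made up for illustration; the real code keeps this state in struct sched_iorr_data.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative slice length; the real slice is CONFIG_SLICE_MS * CYCLES_PER_MS. */
#define SLICE_CYCLES 1000

/* Subtract the cycles a thread just ran from what is left of its slice;
 * on exhaustion the slice is replenished and (in the real scheduler) the
 * thread is moved to the tail of the runqueue, yielding round-robin. */
static int64_t account_slice(int64_t left_cycles, int64_t ran_cycles)
{
	left_cycles -= ran_cycles;
	if (left_cycles <= 0) {
		left_cycles += SLICE_CYCLES;
	}
	return left_cycles;
}
```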

The patchset also adds PAUSE-loop exiting and HLT emulation support. The HLT
emulation is a simple version aimed at improving CPU-sharing performance for
now; it will be improved later.

v2: Add @pre and assert for sched_iorr_init

Shuo A Liu (9):
hv: sched_iorr: Add IO sensitive Round-robin scheduler
hv: sched_iorr: add init functions of sched_iorr
hv: sched_iorr: add tick handler and runqueue operations
hv: sched_iorr: add some interfaces implementation of sched_iorr
hv: sched: add yield support
hv: sched: PAUSE-loop exiting support in hypervisor
hv: sched: simple HLT emulation in hypervisor
hv: sched: suspend/resume scheduling in suspend-to-ram path
hv: sched: use hypervisor configuration to choose scheduler

hypervisor/Makefile | 5 +
hypervisor/arch/x86/Kconfig | 21 ++++
hypervisor/arch/x86/guest/vmcs.c | 9 +-
hypervisor/arch/x86/guest/vmexit.c | 19 +++-
hypervisor/arch/x86/pm.c | 2 +
hypervisor/common/sched_iorr.c | 179 ++++++++++++++++++++++++++++++++++
hypervisor/common/schedule.c | 28 ++++++
hypervisor/include/arch/x86/per_cpu.h | 1 +
hypervisor/include/common/schedule.h | 11 +++
9 files changed, 271 insertions(+), 4 deletions(-)
create mode 100644 hypervisor/common/sched_iorr.c

--
2.8.3



Re: [PATCH v2 8/9] hv: sched: suspend/resume scheduling in suspend-to-ram path

Eddie Dong
 

Suspend/resume needs to be done for each AP...
BTW, if we don't do suspend/resume, it seems it should be fine as long as the timer is restored (physical TSC is restored).

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Shuo A Liu
Sent: Thursday, December 5, 2019 5:15 PM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jason CJ
<jason.cj.chen@...>; Chen, Conghui <conghui.chen@...>; Liu,
Shuo A <shuo.a.liu@...>
Subject: [acrn-dev] [PATCH v2 8/9] hv: sched: suspend/resume scheduling in
suspend-to-ram path

Signed-off-by: Shuo A Liu <shuo.a.liu@...>
---
hypervisor/arch/x86/pm.c | 2 ++
hypervisor/common/schedule.c | 18 ++++++++++++++++++
hypervisor/include/common/schedule.h | 2 ++
3 files changed, 22 insertions(+)

diff --git a/hypervisor/arch/x86/pm.c b/hypervisor/arch/x86/pm.c
index bbce3c8..07b712e 100644
--- a/hypervisor/arch/x86/pm.c
+++ b/hypervisor/arch/x86/pm.c
@@ -194,6 +194,7 @@ void host_enter_s3(const struct pm_s_state_data *sstate_data, uint32_t pm1a_cnt_
CPU_IRQ_DISABLE();
vmx_off();

+ suspend_sched();
suspend_console();
suspend_ioapic();
suspend_iommu();
@@ -225,6 +226,7 @@ void host_enter_s3(const struct pm_s_state_data *sstate_data, uint32_t pm1a_cnt_

/* console must be resumed after TSC restored since it will setup timer
base on TSC */
resume_console();
+ resume_sched();
}

void reset_host(void)
diff --git a/hypervisor/common/schedule.c b/hypervisor/common/schedule.c
index 681c17a..32a60bd 100644
--- a/hypervisor/common/schedule.c
+++ b/hypervisor/common/schedule.c
@@ -240,6 +240,24 @@ void kick_thread(const struct thread_object *obj)
release_schedule_lock(pcpu_id, rflag);
}

+void suspend_sched(void)
+{
+ struct sched_control *ctl = &per_cpu(sched_ctl, get_pcpu_id());
+
+ if (ctl->scheduler->deinit != NULL) {
+ ctl->scheduler->deinit(ctl);
+ }
+}
+
+void resume_sched(void)
+{
+ struct sched_control *ctl = &per_cpu(sched_ctl, get_pcpu_id());
+
+ if (ctl->scheduler->init != NULL) {
+ ctl->scheduler->init(ctl);
+ }
+}
+
void yield(void)
{
make_reschedule_request(get_pcpu_id(), DEL_MODE_IPI);
}

diff --git a/hypervisor/include/common/schedule.h b/hypervisor/include/common/schedule.h
index a237cdb..2753ef7 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -111,6 +111,8 @@ void wake_thread(struct thread_object *obj);
void kick_thread(const struct thread_object *obj);
void yield(void);
void schedule(void);
+void suspend_sched(void);
+void resume_sched(void);

void arch_switch_to(void *prev_sp, void *next_sp); void
run_idle_thread(void);
--
2.8.3



Re: [PATCH v2 5/9] hv: sched: add yield support

Eddie Dong
 

Acked-by: Eddie Dong <eddie.dong@...>

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Shuo A Liu
Sent: Thursday, December 5, 2019 5:15 PM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jason CJ
<jason.cj.chen@...>; Chen, Conghui <conghui.chen@...>; Liu,
Shuo A <shuo.a.liu@...>
Subject: [acrn-dev] [PATCH v2 5/9] hv: sched: add yield support

Add yield support to the scheduler, so a thread can give up its pCPU proactively.

Signed-off-by: Jason Chen CJ <jason.cj.chen@...>
Signed-off-by: Yu Wang <yu1.wang@...>
Signed-off-by: Shuo A Liu <shuo.a.liu@...>
---
hypervisor/common/schedule.c | 5 +++++
hypervisor/include/common/schedule.h | 1 +
2 files changed, 6 insertions(+)

diff --git a/hypervisor/common/schedule.c b/hypervisor/common/schedule.c
index 847bafd..681c17a 100644
--- a/hypervisor/common/schedule.c
+++ b/hypervisor/common/schedule.c
@@ -240,6 +240,11 @@ void kick_thread(const struct thread_object *obj)
release_schedule_lock(pcpu_id, rflag);
}

+void yield(void)
+{
+ make_reschedule_request(get_pcpu_id(), DEL_MODE_IPI);
+}
+
void run_thread(struct thread_object *obj)
{
uint64_t rflag;
diff --git a/hypervisor/include/common/schedule.h b/hypervisor/include/common/schedule.h
index 5179b52..a237cdb 100644
--- a/hypervisor/include/common/schedule.h
+++ b/hypervisor/include/common/schedule.h
@@ -109,6 +109,7 @@ void run_thread(struct thread_object *obj);
void sleep_thread(struct thread_object *obj);
void wake_thread(struct thread_object *obj);
void kick_thread(const struct thread_object *obj);
+void yield(void);
void schedule(void);

void arch_switch_to(void *prev_sp, void *next_sp);
--
2.8.3



Re: [PATCH v2 9/9] hv: sched: use hypervisor configuration to choose scheduler

Eddie Dong
 

Acked-by: Eddie Dong <eddie.dong@...>

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Shuo A Liu
Sent: Thursday, December 5, 2019 5:15 PM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jason CJ
<jason.cj.chen@...>; Chen, Conghui <conghui.chen@...>; Liu,
Shuo A <shuo.a.liu@...>
Subject: [acrn-dev] [PATCH v2 9/9] hv: sched: use hypervisor configuration to
choose scheduler

For now, we set the NOOP scheduler as the default. Users can choose the IORR
scheduler as needed.

Signed-off-by: Shuo A Liu <shuo.a.liu@...>
---
hypervisor/Makefile | 4 ++++
hypervisor/arch/x86/Kconfig | 21 +++++++++++++++++++++
hypervisor/common/schedule.c | 5 +++++
3 files changed, 30 insertions(+)

diff --git a/hypervisor/Makefile b/hypervisor/Makefile
index fc19bb8..4a6569d 100644
--- a/hypervisor/Makefile
+++ b/hypervisor/Makefile
@@ -211,8 +211,12 @@ HW_C_SRCS += arch/x86/cat.c
HW_C_SRCS += arch/x86/sgx.c
HW_C_SRCS += common/softirq.c
HW_C_SRCS += common/schedule.c
+ifeq ($(CONFIG_SCHED_NOOP),y)
HW_C_SRCS += common/sched_noop.c
+endif
+ifeq ($(CONFIG_SCHED_IORR),y)
HW_C_SRCS += common/sched_iorr.c
+endif
HW_C_SRCS += hw/pci.c
HW_C_SRCS += arch/x86/configs/vm_config.c
HW_C_SRCS += arch/x86/configs/$(CONFIG_BOARD)/board.c
diff --git a/hypervisor/arch/x86/Kconfig b/hypervisor/arch/x86/Kconfig
index 84bc7a0..12adced 100644
--- a/hypervisor/arch/x86/Kconfig
+++ b/hypervisor/arch/x86/Kconfig
@@ -37,6 +37,27 @@ config HYBRID

endchoice

+choice
+ prompt "ACRN Scheduler"
+ default SCHED_NOOP
+ help
+ Select the NOOP scheduler for hypervisor.
+ one vCPU running on one pCPU.
+
+config SCHED_NOOP
+ bool "NOOP scheduler"
+ help
+ The NOOP(No-Operation) scheduler only supports one vCPU running
on one pCPU.
+
+config SCHED_IORR
+ bool "IORR scheduler"
+ help
+ IORR (IO sensitive Round Robin) scheduler supports multiple vCPUs running
+ on one pCPU, and they will be scheduled by an IO sensitive round robin
+ policy.
+
+endchoice
+
+
config BOARD
string "Target board"
help
diff --git a/hypervisor/common/schedule.c b/hypervisor/common/schedule.c
index 32a60bd..71414e5 100644
--- a/hypervisor/common/schedule.c
+++ b/hypervisor/common/schedule.c
@@ -73,7 +73,12 @@ void init_sched(uint16_t pcpu_id)
ctl->flags = 0UL;
ctl->curr_obj = NULL;
ctl->pcpu_id = pcpu_id;
+#ifdef CONFIG_SCHED_NOOP
ctl->scheduler = &sched_noop;
+#endif
+#ifdef CONFIG_SCHED_IORR
+ ctl->scheduler = &sched_iorr;
+#endif
if (ctl->scheduler->init != NULL) {
ctl->scheduler->init(ctl);
}
--
2.8.3



Re: [PATCH v2 4/9] hv: sched_iorr: add some interfaces implementation of sched_iorr

Eddie Dong
 

-----Original Message-----
From: acrn-dev@... <acrn-dev@...> On
Behalf Of Shuo A Liu
Sent: Thursday, December 5, 2019 5:15 PM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jason CJ
<jason.cj.chen@...>; Chen, Conghui <conghui.chen@...>; Liu,
Shuo A <shuo.a.liu@...>
Subject: [acrn-dev] [PATCH v2 4/9] hv: sched_iorr: add some interfaces
implementation of sched_iorr

Implement .sleep/.wake/.pick_next of sched_iorr.
In .pick_next, we account the current object's timeslice and pick the next
available one. The policy is:
1) get the first item in the runqueue
2) if the object picked has no time_cycles left, replenish it and pick it
3) take the idle sched object if we have no runnable object
after steps 1) and 2)
In .wake, we start the tick if we have more than one active thread_object in
the runqueue. In .sleep, we stop the tick timer if necessary.

Signed-off-by: Jason Chen CJ <jason.cj.chen@...>
Signed-off-by: Yu Wang <yu1.wang@...>
Signed-off-by: Shuo A Liu <shuo.a.liu@...>
---
hypervisor/common/sched_iorr.c | 47 ++++++++++++++++++++++++++++++++++++++----
1 file changed, 43 insertions(+), 4 deletions(-)

diff --git a/hypervisor/common/sched_iorr.c b/hypervisor/common/sched_iorr.c
index d6d0ad9..6b12d01 100644
--- a/hypervisor/common/sched_iorr.c
+++ b/hypervisor/common/sched_iorr.c
@@ -121,17 +121,56 @@ void sched_iorr_init_data(struct thread_object *obj)
data->left_cycles = data->slice_cycles = CONFIG_SLICE_MS * CYCLES_PER_MS;
}

-static struct thread_object *sched_iorr_pick_next(__unused struct sched_control *ctl)
+static struct thread_object *sched_iorr_pick_next(struct sched_control *ctl)
{
- return NULL;
+ struct sched_iorr_control *iorr_ctl = (struct sched_iorr_control *)ctl->priv;
+ struct thread_object *next = NULL;
+ struct thread_object *current = NULL;
+ struct sched_iorr_data *data;
+ uint64_t now = rdtsc();
+
+ current = ctl->curr_obj;
+ data = (struct sched_iorr_data *)current->data;
+ /* Ignore the idle object, inactive objects */
+ if (!is_idle_thread(current) && is_inqueue(current)) {
OK, we may have a different understanding of the idle thread... IMO, the idle thread is a normal thread, inside the runqueue, but with the least priority. It seems it is not in the runqueue in your proposal.
What is the tradeoff here?

+ data->left_cycles -= now - data->last_cycles;
+ if (data->left_cycles <= 0) {
+ /* replenish thread_object with slice_cycles */
+ data->left_cycles += data->slice_cycles;
+ }
+ /* move the thread_object to tail */
+ runqueue_remove(current);
+ runqueue_add_tail(current);
+ }
+
+ /*
+ * Pick the next runnable sched object
+ * 1) get the first item in runqueue firstly
+ * 2) if object picked has no time_cycles, replenish it pick this one
+ * 3) At least take one idle sched object if we have no runnable one after steps 1) and 2)
+ */
+ if (!list_empty(&iorr_ctl->runqueue)) {
+ next = get_first_item(&iorr_ctl->runqueue, struct thread_object, data);
+ data = (struct sched_iorr_data *)next->data;
+ data->last_cycles = now;
+ while (data->left_cycles <= 0) {
+ data->left_cycles += data->slice_cycles;
+ }
I can understand we may have to add one slice. But why do we add slices repeatedly until the balance becomes positive?
In my understanding, a thread runs with a time slice all the way until it drops below 0. Once it becomes < 0, we should trigger a scheduling decision (and hence it is only a little below 0).

+ } else {
+ next = &get_cpu_var(idle);
+ }
+
+ return next;
}

-static void sched_iorr_sleep(__unused struct thread_object *obj)
+static void sched_iorr_sleep(struct thread_object *obj)
{
+ runqueue_remove(obj);
If thread A wants to put thread B to sleep, this API is fine.

If thread A wants to put itself to sleep, this API may be bogus, right? We need to pick the next thread and trigger a switch to it...
}

-static void sched_iorr_wake(__unused struct thread_object *obj)
+static void sched_iorr_wake(struct thread_object *obj)
{
+ runqueue_add_head(obj);
}

struct acrn_scheduler sched_iorr = {
--
2.8.3



Re: [PATCH] acrn-config: Fix target xml generation issue when no P-state scaling driver is present

Vijay Dhanraj
 

Hi Victor,

Sure will fix it with the PR.

Thanks,
-Vijay

On Dec 5, 2019, at 6:19 PM, Sun, Victor <victor.sun@...> wrote:

hi Vijay,

Thanks for your patch. store_cx_data() also has the same risk; could you please fix it in the same PR?

BR,

Victor

On 12/6/2019 10:09 AM, Victor Sun wrote:
Acked-by: Victor Sun <victor.sun@...>

On 12/6/2019 9:44 AM, Vijay Dhanraj wrote:
There seems to be a corner case where the target XML file
fails to be generated if no P-state scaling driver is
present. This patch addresses it by handling the file-not-present
exception, printing a warning, and writing a "not available" string,
so that the target XML file is still generated successfully.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4199
---
misc/acrn-config/target/acpi.py | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/misc/acrn-config/target/acpi.py b/misc/acrn-config/target/acpi.py
index bc9fcb72..3cd09e17 100644
--- a/misc/acrn-config/target/acpi.py
+++ b/misc/acrn-config/target/acpi.py
@@ -501,18 +501,24 @@ def store_px_data(sysnode, config):
"""
px_tmp = PxPkg()
px_data = {}
- with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
- freq_driver = f_node.read()
- if freq_driver.find("acpi-cpufreq") == -1:
- parser_lib.print_yel("The Px data for ACRN relies on " +\
- "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
- if freq_driver.find("intel_pstate") == 0:
- print("please add intel_pstate=disable in kernel " +\
- "cmdline to fall back to acpi-cpufreq driver")
- else:
- parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
- print("\t/* Px data is not available */", file=config)
- return
+
+ try:
+ with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
+ freq_driver = f_node.read()
+ if freq_driver.find("acpi-cpufreq") == -1:
+ parser_lib.print_yel("The Px data for ACRN relies on " +\
+ "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
+ if freq_driver.find("intel_pstate") == 0:
+ print("please add intel_pstate=disable in kernel " +\
+ "cmdline to fall back to acpi-cpufreq driver")
+ else:
+ parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return
+ except IOError:
+ parser_lib.print_yel("No scaling_driver found.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return
try:
with open(sysnode+'cpufreq/boost', 'r') as f_node:



Re: [PATCH] acrn-config: Fix target xml generation issue when no P-state scaling driver is present

Victor Sun
 

hi Vijay,

Thanks for your patch. store_cx_data() also has the same risk; could you please fix it in the same PR?

BR,

Victor

On 12/6/2019 10:09 AM, Victor Sun wrote:
Acked-by: Victor Sun <victor.sun@...>

On 12/6/2019 9:44 AM, Vijay Dhanraj wrote:
There seems to be a corner case where the target XML file
fails to be generated if no P-state scaling driver is
present. This patch addresses it by handling the file-not-present
exception, printing a warning, and writing a "not available" string,
so that the target XML file is still generated successfully.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4199
---
  misc/acrn-config/target/acpi.py | 30 ++++++++++++++++++------------
  1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/misc/acrn-config/target/acpi.py b/misc/acrn-config/target/acpi.py
index bc9fcb72..3cd09e17 100644
--- a/misc/acrn-config/target/acpi.py
+++ b/misc/acrn-config/target/acpi.py
@@ -501,18 +501,24 @@ def store_px_data(sysnode, config):
      """
      px_tmp = PxPkg()
      px_data = {}
-    with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
-        freq_driver = f_node.read()
-        if freq_driver.find("acpi-cpufreq") == -1:
-            parser_lib.print_yel("The Px data for ACRN relies on " +\
-                 "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
-            if freq_driver.find("intel_pstate") == 0:
-                print("please add intel_pstate=disable in kernel " +\
-                    "cmdline to fall back to acpi-cpufreq driver")
-            else:
-                parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
-            print("\t/* Px data is not available */", file=config)
-            return
+
+    try:
+        with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
+            freq_driver = f_node.read()
+            if freq_driver.find("acpi-cpufreq") == -1:
+                parser_lib.print_yel("The Px data for ACRN relies on " +\
+                     "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
+                if freq_driver.find("intel_pstate") == 0:
+                    print("please add intel_pstate=disable in kernel " +\
+                        "cmdline to fall back to acpi-cpufreq driver")
+                else:
+                    parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
+                print("\t/* Px data is not available */", file=config)
+                return
+    except IOError:
+        parser_lib.print_yel("No scaling_driver found.", warn=True)
+        print("\t/* Px data is not available */", file=config)
+        return
        try:
          with open(sysnode+'cpufreq/boost', 'r') as f_node:


Re: [PATCH] acrn-config: Fix target xml generation issue when no P-state scaling driver is present

Victor Sun
 

Acked-by: Victor Sun <victor.sun@...>

On 12/6/2019 9:44 AM, Vijay Dhanraj wrote:
There is a corner case where the target xml file
fails to be generated if no P-state scaling driver is
present. This patch addresses it by handling the file-not-present
exception and emitting a warning as well as a "not available" string
so that the target xml file is still generated successfully.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4199
---
misc/acrn-config/target/acpi.py | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/misc/acrn-config/target/acpi.py b/misc/acrn-config/target/acpi.py
index bc9fcb72..3cd09e17 100644
--- a/misc/acrn-config/target/acpi.py
+++ b/misc/acrn-config/target/acpi.py
@@ -501,18 +501,24 @@ def store_px_data(sysnode, config):
"""
px_tmp = PxPkg()
px_data = {}
- with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
- freq_driver = f_node.read()
- if freq_driver.find("acpi-cpufreq") == -1:
- parser_lib.print_yel("The Px data for ACRN relies on " +\
- "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
- if freq_driver.find("intel_pstate") == 0:
- print("please add intel_pstate=disable in kernel " +\
- "cmdline to fall back to acpi-cpufreq driver")
- else:
- parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
- print("\t/* Px data is not available */", file=config)
- return
+
+ try:
+ with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
+ freq_driver = f_node.read()
+ if freq_driver.find("acpi-cpufreq") == -1:
+ parser_lib.print_yel("The Px data for ACRN relies on " +\
+ "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
+ if freq_driver.find("intel_pstate") == 0:
+ print("please add intel_pstate=disable in kernel " +\
+ "cmdline to fall back to acpi-cpufreq driver")
+ else:
+ parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return
+ except IOError:
+ parser_lib.print_yel("No scaling_driver found.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return
try:
with open(sysnode+'cpufreq/boost', 'r') as f_node:


[PATCH] acrn-config: Fix target xml generation issue when no P-state scaling driver is present

Vijay Dhanraj
 

There is a corner case where the target xml file
fails to be generated if no P-state scaling driver is
present. This patch addresses it by handling the file-not-present
exception and emitting a warning as well as a "not available" string
so that the target xml file is still generated successfully.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4199
---
misc/acrn-config/target/acpi.py | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/misc/acrn-config/target/acpi.py b/misc/acrn-config/target/acpi.py
index bc9fcb72..3cd09e17 100644
--- a/misc/acrn-config/target/acpi.py
+++ b/misc/acrn-config/target/acpi.py
@@ -501,18 +501,24 @@ def store_px_data(sysnode, config):
"""
px_tmp = PxPkg()
px_data = {}
- with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
- freq_driver = f_node.read()
- if freq_driver.find("acpi-cpufreq") == -1:
- parser_lib.print_yel("The Px data for ACRN relies on " +\
- "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
- if freq_driver.find("intel_pstate") == 0:
- print("please add intel_pstate=disable in kernel " +\
- "cmdline to fall back to acpi-cpufreq driver")
- else:
- parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
- print("\t/* Px data is not available */", file=config)
- return
+
+ try:
+ with open(sysnode+'cpu0/cpufreq/scaling_driver', 'r') as f_node:
+ freq_driver = f_node.read()
+ if freq_driver.find("acpi-cpufreq") == -1:
+ parser_lib.print_yel("The Px data for ACRN relies on " +\
+ "acpi-cpufreq driver but it is not found, ", warn=True, end=False)
+ if freq_driver.find("intel_pstate") == 0:
+ print("please add intel_pstate=disable in kernel " +\
+ "cmdline to fall back to acpi-cpufreq driver")
+ else:
+ parser_lib.print_yel("please make sure ACPI Pstate is enabled in BIOS.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return
+ except IOError:
+ parser_lib.print_yel("No scaling_driver found.", warn=True)
+ print("\t/* Px data is not available */", file=config)
+ return

try:
with open(sysnode+'cpufreq/boost', 'r') as f_node:
--
2.17.2


[PATCH] HV: Kconfig changes to support server platform.

Vijay Dhanraj
 

This patch updates Kconfig to support server platforms by increasing
the maximum number of VCPUs per VM and the maximum number of PT IRQ entries.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4196
---
hypervisor/arch/x86/Kconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/x86/Kconfig b/hypervisor/arch/x86/Kconfig
index 84bc7a0e..e5d93322 100644
--- a/hypervisor/arch/x86/Kconfig
+++ b/hypervisor/arch/x86/Kconfig
@@ -57,7 +57,7 @@ config RELEASE

config MAX_VCPUS_PER_VM
int "Maximum number of VCPUs per VM"
- range 1 8
+ range 1 48
default 8
help
The maximum number of virtual CPUs the hypervisor can support in a
@@ -70,7 +70,7 @@ config MAX_EMULATED_MMIO_REGIONS

config MAX_PT_IRQ_ENTRIES
int "Maximum number of interrupt source for PT devices"
- range 0 128
+ range 0 256
default 64

config STACK_SIZE
--
2.17.2


[PATCH] hypervisor: handle reboot from non-privileged pre-launched guests

Chen, Zide
 

To handle reboot requests from pre-launched VMs that don't have
GUEST_FLAG_HIGHEST_SEVERITY, we shut down the target VM explicitly
rather than ignoring them.

Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@...>
---
hypervisor/arch/x86/guest/vm_reset.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/x86/guest/vm_reset.c b/hypervisor/arch/x86/guest/vm_reset.c
index 87304197a8a5..cb7b314f57ee 100644
--- a/hypervisor/arch/x86/guest/vm_reset.c
+++ b/hypervisor/arch/x86/guest/vm_reset.c
@@ -94,9 +94,16 @@ static bool handle_common_reset_reg_write(struct acrn_vm *vm, bool reset)
if (reset && is_rt_vm(vm)) {
vm->state = VM_POWERING_OFF;
}
+ } else if (is_prelaunched_vm(vm)) {
+ /* Don't support re-launch for now, just shutdown the guest */
+ pause_vm(vm);
+
+ struct acrn_vcpu *bsp = vcpu_from_vid(vm, BOOT_CPU_ID);
+ per_cpu(shutdown_vm_id, pcpuid_from_vcpu(bsp)) = vm->vm_id;
+ make_shutdown_vm_request(pcpuid_from_vcpu(bsp));
} else {
/*
- * ignore writes from SOS or pre-launched VMs.
+ * ignore writes from SOS.
* equivalent to hide this port from guests.
*/
}
--
2.17.1


[PATCH 2/2] acrn-config: support non-contiguous HPA for pre-launched VM

Vijay Dhanraj
 

This patch modifies the python scripts in scenario,
board and vm-configuration to support:
1. Generation of a separate ve820 for each VM.
2. Ability to configure each VM's HPA from the
acrn-config UI tool.
3. Non-contiguous HPAs for pre-launched VMs.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4195
---
misc/acrn-config/board_config/ve820_c.py | 191 ++++++++++++------
.../scenario_config/scenario_item.py | 8 +
.../scenario_config/vm_configurations_c.py | 4 +
.../scenario_config/vm_configurations_h.py | 11 +-
.../config-xmls/generic/logical_partition.xml | 6 +-
5 files changed, 154 insertions(+), 66 deletions(-)

diff --git a/misc/acrn-config/board_config/ve820_c.py b/misc/acrn-config/board_config/ve820_c.py
index 5e2bfd63..17349d9f 100644
--- a/misc/acrn-config/board_config/ve820_c.py
+++ b/misc/acrn-config/board_config/ve820_c.py
@@ -6,12 +6,11 @@
import sys
import board_cfg_lib

-
FOUR_GBYTE = 4 * 1024 * 1024 * 1024
LOW_MEM_TO_PCI_HOLE = 0x20000000


-def ve820_per_launch(config, hpa_size):
+def ve820_per_launch(config, hpa_size, hpa2_size):
"""
Start to generate board.c
:param config: it is a file pointer of board information for writing to
@@ -22,88 +21,137 @@ def ve820_per_launch(config, hpa_size):

board_name = board_cfg_lib.undline_name(board_name)

- high_mem_hpa_len = 0x0
low_mem_to_pci_hole_len = '0xA0000000'
low_mem_to_pci_hole = '0x20000000'
pci_hole_addr = '0xe0000000'
pci_hole_len = '0x20000000'
start_low_hpa = 0x100000
- if (int(hpa_size, 16) <= 512 * 1024 * 1024):
- low_mem_hpa_len = int(hpa_size, 16) - 1 * 1024 * 1024
- else:
- low_mem_hpa_len = 511 * 1024 * 1024
- high_mem_hpa_len = int(hpa_size, 16) - 512 * 1024 * 1024
+ low_mem_hpa_len = []
+ high_mem_hpa_len = []
+ high_mem_hpa2_len = []
+ high_mem_hpa2_addr = []

# pre_launch memroy: mem_size is the ve820 length
print("#include <e820.h>", file=config)
print("#include <vm.h>", file=config)
print("", file=config)
- if (high_mem_hpa_len == 0):
- print("#define VE820_ENTRIES_{}\t{}U".format(board_name, 5), file=config)
- else:
- print("#define VE820_ENTRIES_{}\t{}U".format(board_name, 6), file=config)
-
- print("static const struct e820_entry ve820_entry[{}] = {{".format(
- "VE820_ENTRIES_{}".format(board_name)), file=config)
- print("\t{\t/* usable RAM under 1MB */", file=config)
- print("\t\t.baseaddr = 0x0UL,", file=config)
- print("\t\t.length = 0xF0000UL,\t\t/* 960KB */", file=config)
- print("\t\t.type = E820_TYPE_RAM", file=config)
- print("\t},", file=config)
- print("", file=config)
- print("\t{\t/* mptable */", file=config)
- print("\t\t.baseaddr = 0xF0000UL,\t\t/* 960KB */", file=config)
- print("\t\t.length = 0x10000UL,\t\t/* 16KB */", file=config)
- print("\t\t.type = E820_TYPE_RESERVED", file=config)
- print("\t},", file=config)
- print("", file=config)
-
- print("\t{\t/* lowmem */", file=config)

- print("\t\t.baseaddr = {}UL,\t\t/* 1MB */".format(
- hex(start_low_hpa)), file=config)
- print("\t\t.length = {}UL,\t/* {}MB */".format(
- hex(low_mem_hpa_len), low_mem_hpa_len / 1024 / 1024), file=config)
+ for i in range(board_cfg_lib.VM_COUNT):
+ if (int(hpa_size[i], 16) <= 512 * 1024 * 1024):
+ low_mem_hpa_len.append(int(hpa_size[i], 16) - 1 * 1024 * 1024)
+ high_mem_hpa_len.append(0)
+ else:
+ low_mem_hpa_len.append(511 * 1024 * 1024)
+ high_mem_hpa_len.append(int(hpa_size[i], 16) - 512 * 1024 * 1024)
+
+ #HPA2 is always allocated in >4G space.
+ high_mem_hpa2_len.append(int(hpa2_size[i], 16))
+ if (high_mem_hpa_len[i] != 0) and (high_mem_hpa2_len[i] != 0):
+ high_mem_hpa2_addr.append(hex(FOUR_GBYTE + high_mem_hpa_len[i]))
+ else:
+ high_mem_hpa2_addr.append(hex(FOUR_GBYTE))
+
+ if (high_mem_hpa_len[i] != 0) and (high_mem_hpa2_len[i] != 0):
+ print("#define VM{}_VE820_ENTRIES_{}\t{}U".format(i, board_name, 7), file=config)
+ elif (high_mem_hpa_len[i] != 0) or (high_mem_hpa2_len[i] != 0):
+ print("#define VM{}_VE820_ENTRIES_{}\t{}U".format(i, board_name, 6), file=config)
+ else:
+ print("#define VM{}_VE820_ENTRIES_{}\t{}U".format(i, board_name, 5), file=config)
+
+ for i in range(board_cfg_lib.VM_COUNT):
+ print("static const struct e820_entry vm{}_ve820_entry[{}] = {{".format(
+ i, "VM{}_VE820_ENTRIES_{}".format(i, board_name)), file=config)
+ print("\t{\t/* usable RAM under 1MB */", file=config)
+ print("\t\t.baseaddr = 0x0UL,", file=config)
+ print("\t\t.length = 0xF0000UL,\t\t/* 960KB */", file=config)
+ print("\t\t.type = E820_TYPE_RAM", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ print("\t{\t/* mptable */", file=config)
+ print("\t\t.baseaddr = 0xF0000UL,\t\t/* 960KB */", file=config)
+ print("\t\t.length = 0x10000UL,\t\t/* 64KB */", file=config)
+ print("\t\t.type = E820_TYPE_RESERVED", file=config)
+ print("\t},", file=config)
+ print("", file=config)

- print("\t\t.type = E820_TYPE_RAM", file=config)
- print("\t},", file=config)
- print("", file=config)
+ print("\t{\t/* lowmem */", file=config)

- print("\t{\t/* between lowmem and PCI hole */", file=config)
- print("\t\t.baseaddr = {}UL,\t/* {}MB */".format(
- low_mem_to_pci_hole, int(low_mem_to_pci_hole, 16) / 1024 / 1024), file=config)
- print("\t\t.length = {}UL,\t/* {}MB */".format(
- low_mem_to_pci_hole_len, int(low_mem_to_pci_hole_len, 16) / 1024 / 1024), file=config)
- print("\t\t.type = E820_TYPE_RESERVED", file=config)
- print("\t},", file=config)
- print("", file=config)
- print("\t{\t/* between PCI hole and 4 GB */", file=config)
- print("\t\t.baseaddr = {}UL,\t/* {}GB */".format(
- hex(int(pci_hole_addr, 16)), int(pci_hole_addr, 16) / 1024 / 1024 / 1024), file=config)
- print("\t\t.length = {}UL,\t/* {}MB */".format(
- hex(int(pci_hole_len, 16)), int(pci_hole_len, 16) / 1024 / 1024), file=config)
- print("\t\t.type = E820_TYPE_RESERVED", file=config)
- print("\t},", file=config)
- print("", file=config)
- if (high_mem_hpa_len != 0):
- print("\t{\t/* high mem after 4GB*/", file=config)
- print("\t\t.baseaddr = {}UL,\t/* 4 GB */".format(
- hex(FOUR_GBYTE)), file=config)
+ print("\t\t.baseaddr = {}UL,\t\t/* 1MB */".format(
+ hex(start_low_hpa)), file=config)
print("\t\t.length = {}UL,\t/* {}MB */".format(
- hex(high_mem_hpa_len), high_mem_hpa_len / 1024 / 1024), file=config)
+ hex(low_mem_hpa_len[i]), low_mem_hpa_len[i] / 1024 / 1024), file=config)
+
print("\t\t.type = E820_TYPE_RAM", file=config)
print("\t},", file=config)
print("", file=config)

- print("};", file=config)
- print("", file=config)
+ print("\t{\t/* between lowmem and PCI hole */", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* {}MB */".format(
+ low_mem_to_pci_hole, int(low_mem_to_pci_hole, 16) / 1024 / 1024), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ low_mem_to_pci_hole_len, int(low_mem_to_pci_hole_len, 16) / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RESERVED", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ print("\t{\t/* between PCI hole and 4 GB */", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* {}GB */".format(
+ hex(int(pci_hole_addr, 16)), int(pci_hole_addr, 16) / 1024 / 1024 / 1024), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ hex(int(pci_hole_len, 16)), int(pci_hole_len, 16) / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RESERVED", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ if (high_mem_hpa_len[i] != 0) and (high_mem_hpa2_len[i] != 0):
+ print("\t{\t/* high mem after 4GB*/", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* 4 GB */".format(
+ hex(FOUR_GBYTE)), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ hex(high_mem_hpa_len[i]), high_mem_hpa_len[i] / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RAM", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ print("\t{\t/* HPA2 after high mem*/", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* {}GB */".format(
+ high_mem_hpa2_addr[i], int(high_mem_hpa2_addr[i], 16) / 1024 / 1024 / 1024), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ hex(high_mem_hpa2_len[i]), high_mem_hpa2_len[i] / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RAM", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ elif (high_mem_hpa_len[i] != 0):
+ print("\t{\t/* high mem after 4GB*/", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* 4 GB */".format(
+ hex(FOUR_GBYTE)), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ hex(high_mem_hpa_len[i]), high_mem_hpa_len[i] / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RAM", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ elif(high_mem_hpa2_len[i] != 0):
+ print("\t{\t/* HPA2 after 4GB*/", file=config)
+ print("\t\t.baseaddr = {}UL,\t/* 4 GB */".format(
+ hex(FOUR_GBYTE)), file=config)
+ print("\t\t.length = {}UL,\t/* {}MB */".format(
+ hex(high_mem_hpa2_len[i]), high_mem_hpa2_len[i] / 1024 / 1024), file=config)
+ print("\t\t.type = E820_TYPE_RAM", file=config)
+ print("\t},", file=config)
+ print("", file=config)
+ print("};", file=config)
+ print("", file=config)
+
print("/**", file=config)
print(" * @pre vm != NULL", file=config)
print("*/", file=config)
print("void create_prelaunched_vm_e820(struct acrn_vm *vm)", file=config)
print("{", file=config)
- print("\tvm->e820_entry_num = VE820_ENTRIES_{};".format(board_name), file=config)
- print("\tvm->e820_entries = ve820_entry;", file=config)
+ for i in range(board_cfg_lib.VM_COUNT):
+ print("\tif (vm->vm_id == {}U)".format(hex(i)), file=config)
+ print("\t{", file=config)
+ print("\t\tvm->e820_entry_num = VM{}_VE820_ENTRIES_{};".format(i, board_name), file=config)
+ print("\t\tvm->e820_entries = vm{}_ve820_entry;".format(i), file=config)
+ print("\t}", file=config)
+ print("", file=config)
+
print("}", file=config)

return err_dic
@@ -144,9 +192,24 @@ def generate_file(config):
err_dic['board config: generate ve820.c failed'] = "Unknow type of host physical address size"
return err_dic

- hpa_size = hpa_size_list[0]
- if pre_vm_cnt != 0 and ('0x' in hpa_size or '0X' in hpa_size):
- err_dic = ve820_per_launch(config, hpa_size)
+ # read HPA2 mem size from scenario.xml
+ hpa2_size_list = board_cfg_lib.get_sub_leaf_tag(
+ board_cfg_lib.SCENARIO_INFO_FILE, "memory", "size_hpa2")
+ ret = board_cfg_lib.is_hpa_size(hpa2_size_list)
+ if not ret:
+ board_cfg_lib.print_red("Unknow type of second host physical address size", err=True)
+ err_dic['board config: generate ve820.c failed'] = "Unknow type of second host physical address size"
+ return err_dic
+
+ # The HPA size for each VM should have a valid length.
+ for i in range(board_cfg_lib.VM_COUNT):
+ if hpa_size_list[i] == '0x0' or hpa_size_list[i] == '0X0':
+ board_cfg_lib.print_red("HPA size should not be zero", err=True)
+ err_dic['board config: generate ve820.c failed'] = "HPA size should not be zero"
+ return err_dic
+
+ if pre_vm_cnt != 0:
+ err_dic = ve820_per_launch(config, hpa_size_list, hpa2_size_list)
else:
non_ve820_pre_launch(config)

diff --git a/misc/acrn-config/scenario_config/scenario_item.py b/misc/acrn-config/scenario_config/scenario_item.py
index 7dcde5f5..bef8aeed 100644
--- a/misc/acrn-config/scenario_config/scenario_item.py
+++ b/misc/acrn-config/scenario_config/scenario_item.py
@@ -176,6 +176,8 @@ class MemInfo:
""" This is Abstract of class of memory setting information """
mem_start_hpa = {}
mem_size = {}
+ mem_start_hpa2 = {}
+ mem_size_hpa2 = {}

def __init__(self, scenario_file):
self.scenario_info = scenario_file
@@ -189,6 +191,10 @@ class MemInfo:
self.scenario_info, "memory", "start_hpa")
self.mem_size = scenario_cfg_lib.get_leaf_tag_map(
self.scenario_info, "memory", "size")
+ self.mem_start_hpa2 = scenario_cfg_lib.get_leaf_tag_map(
+ self.scenario_info, "memory", "start_hpa2")
+ self.mem_size_hpa2 = scenario_cfg_lib.get_leaf_tag_map(
+ self.scenario_info, "memory", "size_hpa2")

def check_item(self):
"""
@@ -197,6 +203,8 @@ class MemInfo:
"""
scenario_cfg_lib.mem_start_hpa_check(self.mem_start_hpa, "memory", "start_hpa")
scenario_cfg_lib.mem_size_check(self.mem_size, "memory", "size")
+ scenario_cfg_lib.mem_start_hpa_check(self.mem_start_hpa2, "memory", "start_hpa2")
+ scenario_cfg_lib.mem_size_check(self.mem_size_hpa2, "memory", "size_hpa2")


class CfgPci:
diff --git a/misc/acrn-config/scenario_config/vm_configurations_c.py b/misc/acrn-config/scenario_config/vm_configurations_c.py
index 80ee6678..c323c487 100644
--- a/misc/acrn-config/scenario_config/vm_configurations_c.py
+++ b/misc/acrn-config/scenario_config/vm_configurations_c.py
@@ -457,6 +457,8 @@ def gen_logical_partition_source(vm_info, config):
print("\t\t.memory = {", file=config)
print("\t\t\t.start_hpa = VM{0}_CONFIG_MEM_START_HPA,".format(i), file=config)
print("\t\t\t.size = VM{0}_CONFIG_MEM_SIZE,".format(i), file=config)
+ print("\t\t\t.start_hpa2 = VM{0}_CONFIG_MEM_START_HPA2,".format(i), file=config)
+ print("\t\t\t.size_hpa2 = VM{0}_CONFIG_MEM_SIZE_HPA2,".format(i), file=config)
print("\t\t},", file=config)
is_need_epc(vm_info.epc_section, i, config)
print("\t\t.os_config = {", file=config)
@@ -588,6 +590,8 @@ def gen_hybrid_source(vm_info, config):
if i == 0:
print("\t\t\t.start_hpa = VM0_CONFIG_MEM_START_HPA,", file=config)
print("\t\t\t.size = VM0_CONFIG_MEM_SIZE,", file=config)
+ print("\t\t\t.start_hpa2 = VM0_CONFIG_MEM_START_HPA2,", file=config)
+ print("\t\t\t.size_hpa2 = VM0_CONFIG_MEM_SIZE_HPA2,", file=config)
elif i == 1:
print("\t\t\t.start_hpa = 0UL,", file=config)
print("\t\t\t.size = CONFIG_SOS_RAM_SIZE,", file=config)
diff --git a/misc/acrn-config/scenario_config/vm_configurations_h.py b/misc/acrn-config/scenario_config/vm_configurations_h.py
index d564e744..05ce22aa 100644
--- a/misc/acrn-config/scenario_config/vm_configurations_h.py
+++ b/misc/acrn-config/scenario_config/vm_configurations_h.py
@@ -138,6 +138,8 @@ def logic_max_vm_num(config):
print(" *\tVMX_CONFIG_VCPU_AFFINITY", file=config)
print(" *\tVMX_CONFIG_MEM_START_HPA", file=config)
print(" *\tVMX_CONFIG_MEM_SIZE", file=config)
+ print(" *\tVMX_CONFIG_MEM_START_HPA2", file=config)
+ print(" *\tVMX_CONFIG_MEM_SIZE_HPA2", file=config)
print(" *\tVMX_CONFIG_OS_BOOTARG_ROOT", file=config)
print(" *\tVMX_CONFIG_OS_BOOTARG_MAX_CPUS", file=config)
print(" *\tVMX_CONFIG_OS_BOOTARG_CONSOLE", file=config)
@@ -171,7 +173,11 @@ def gen_logical_partition_header(vm_info, config):
print("#define VM{0}_CONFIG_MEM_START_HPA\t\t{1}UL".format(
i, vm_info.mem_info.mem_start_hpa[i]), file=config)
print("#define VM{0}_CONFIG_MEM_SIZE\t\t\t{1}UL".format(
- i, vm_info.mem_info.mem_size[0]), file=config)
+ i, vm_info.mem_info.mem_size[i]), file=config)
+ print("#define VM{0}_CONFIG_MEM_START_HPA2\t\t{1}UL".format(
+ i, vm_info.mem_info.mem_start_hpa2[i]), file=config)
+ print("#define VM{0}_CONFIG_MEM_SIZE_HPA2\t\t\t{1}UL".format(
+ i, vm_info.mem_info.mem_size_hpa2[i]), file=config)
print('#define VM{0}_CONFIG_OS_BOOTARG_ROOT\t\t"root={1} "'.format(
i, vm_info.os_cfg.kern_root_dev[i]), file=config)
print('#define VM{0}_CONFIG_OS_BOOTARG_MAXCPUS\t\t"maxcpus={1} "'.format(
@@ -271,6 +277,9 @@ def gen_hybrid_header(vm_info, config):
print("#define VM0_CONFIG_MEM_START_HPA\t{0}UL".format(
vm_info.mem_info.mem_start_hpa[0]), file=config)
print("#define VM0_CONFIG_MEM_SIZE\t\t{0}UL".format(vm_info.mem_info.mem_size[0]), file=config)
+ print("#define VM0_CONFIG_MEM_START_HPA2\t{0}UL".format(
+ vm_info.mem_info.mem_start_hpa2[0]), file=config)
+ print("#define VM0_CONFIG_MEM_SIZE_HPA2\t\t{0}UL".format(vm_info.mem_info.mem_size_hpa2[0]), file=config)
print("", file=config)
print("#define SOS_VM_BOOTARGS\t\t\tSOS_ROOTFS\t\\", file=config)
print('\t\t\t\t\t"rw rootwait "\t\\', file=config)
diff --git a/misc/acrn-config/xmls/config-xmls/generic/logical_partition.xml b/misc/acrn-config/xmls/config-xmls/generic/logical_partition.xml
index d23c7085..8e7c995a 100644
--- a/misc/acrn-config/xmls/config-xmls/generic/logical_partition.xml
+++ b/misc/acrn-config/xmls/config-xmls/generic/logical_partition.xml
@@ -19,6 +19,8 @@
<memory>
<start_hpa desc="The start physical address in host for the VM">0x100000000</start_hpa>
<size desc="The memory size in Bytes for the VM">0x20000000</size>
+ <start_hpa2 desc="The start physical address in host for the VM">0x0</start_hpa2>
+ <size_hpa2 desc="The memory size in Bytes for the VM">0x0</size_hpa2>
</memory>
<os_config>
<name desc="Specify the OS name of VM, currently it is not referenced by hypervisor code.">ClearLinux</name>
@@ -64,7 +66,9 @@
</epc_section>
<memory>
<start_hpa desc="The start physical address in host for the VM">0x120000000</start_hpa>
- <size desc="The memory size in Bytes for the VM" readonly="true">0x20000000</size>
+ <size desc="The memory size in Bytes for the VM">0x20000000</size>
+ <start_hpa2 desc="The start physical address in host for the VM">0x0</start_hpa2>
+ <size_hpa2 desc="The memory size in Bytes for the VM">0x0</size_hpa2>
</memory>
<os_config>
<name desc="Specify the OS name of VM, currently it is not referenced by hypervisor code.">ClearLinux</name>
--
2.17.2


[PATCH 1/2] HV: Add support to assign non-contiguous HPA regions for pre-launched VM

Vijay Dhanraj
 

On some platforms, the HPA regions for a Virtual Machine cannot be
contiguous because of E820 reserved regions or the PCI hole. In such
cases, pre-launched VMs need to be assigned non-contiguous memory
regions, and this patch addresses that.

To keep things simple, the current design makes the following assumptions:
1. HPA2 is always placed after HPA1.
2. HPA1 and HPA2 don't share a single ve820 entry.
(Create multiple entries if needed, but never share one.)
3. Only 2 non-contiguous HPA regions are supported (this can be
extended later to support multiple non-contiguous HPAs).
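Under these assumptions, the per-VM ve820 layout reduces to a small piece of arithmetic: low memory starts at 1MB and is capped at 511MB, any HPA1 remainder goes above 4GB, and HPA2 lands immediately after it. A minimal Python sketch (illustrative names; the actual generation lives in ve820_per_launch in ve820_c.py):

```python
FOUR_GBYTE = 4 * 1024 * 1024 * 1024
LOW_MEM_MAX = 512 * 1024 * 1024
ONE_MBYTE = 1 * 1024 * 1024

def split_hpa(hpa_size, hpa2_size):
    """Split a VM's HPA1/HPA2 sizes into ve820 regions.

    Returns (low_mem_len, high_mem_len, hpa2_addr, entry_count):
    low memory is capped at 511MB (the first MB is reserved), the
    HPA1 remainder goes above 4GB, and HPA2 is placed right after it.
    """
    if hpa_size <= LOW_MEM_MAX:
        low_mem_len = hpa_size - ONE_MBYTE
        high_mem_len = 0
    else:
        low_mem_len = LOW_MEM_MAX - ONE_MBYTE   # 511MB
        high_mem_len = hpa_size - LOW_MEM_MAX
    # HPA2 is always allocated in >4G space, after the HPA1 high region
    hpa2_addr = FOUR_GBYTE + high_mem_len
    # 5 fixed entries, plus one each for the optional high regions
    entry_count = 5 + (1 if high_mem_len else 0) + (1 if hpa2_size else 0)
    return low_mem_len, high_mem_len, hpa2_addr, entry_count
```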

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@...>
Tracked-On: #4195
---
hypervisor/arch/x86/guest/vm.c | 18 ++++++++++++++++--
hypervisor/include/arch/x86/vm_config.h | 2 ++
2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index e0a4487c..cbdd1bc7 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -188,7 +188,9 @@ static inline uint16_t get_vm_bsp_pcpu_id(const struct acrn_vm_config *vm_config
*/
static void prepare_prelaunched_vm_memmap(struct acrn_vm *vm, const struct acrn_vm_config *vm_config)
{
+ bool is_hpa1 = true;
uint64_t base_hpa = vm_config->memory.start_hpa;
+ uint64_t remaining_hpa_size = vm_config->memory.size;
uint32_t i;

for (i = 0U; i < vm->e820_entry_num; i++) {
@@ -199,19 +201,31 @@ static void prepare_prelaunched_vm_memmap(struct acrn_vm *vm, const struct acrn_
}

/* Do EPT mapping for GPAs that are backed by physical memory */
- if (entry->type == E820_TYPE_RAM) {
+ if ((entry->type == E820_TYPE_RAM) && (remaining_hpa_size >= entry->length)) {
ept_add_mr(vm, (uint64_t *)vm->arch_vm.nworld_eptp, base_hpa, entry->baseaddr,
entry->length, EPT_RWX | EPT_WB);

base_hpa += entry->length;
+ remaining_hpa_size -= entry->length;
+ } else if ((entry->type == E820_TYPE_RAM) && (remaining_hpa_size < entry->length)) {
+ pr_warn("%s: HPA size incorrectly configured in ve820\n", __func__);
}

+
/* GPAs under 1MB are always backed by physical memory */
- if ((entry->type != E820_TYPE_RAM) && (entry->baseaddr < (uint64_t)MEM_1M)) {
+ if ((entry->type != E820_TYPE_RAM) && (entry->baseaddr < (uint64_t)MEM_1M) &&
+ (remaining_hpa_size >= entry->length)) {
ept_add_mr(vm, (uint64_t *)vm->arch_vm.nworld_eptp, base_hpa, entry->baseaddr,
entry->length, EPT_RWX | EPT_UNCACHED);

base_hpa += entry->length;
+ remaining_hpa_size -= entry->length;
+ }
+
+ if (remaining_hpa_size == 0 && is_hpa1) {
+ is_hpa1 = false;
+ base_hpa = vm_config->memory.start_hpa2;
+ remaining_hpa_size = vm_config->memory.size_hpa2;
}
}
}
diff --git a/hypervisor/include/arch/x86/vm_config.h b/hypervisor/include/arch/x86/vm_config.h
index e0a0ce09..f4a0672e 100644
--- a/hypervisor/include/arch/x86/vm_config.h
+++ b/hypervisor/include/arch/x86/vm_config.h
@@ -37,6 +37,8 @@ enum acrn_vm_load_order {
struct acrn_vm_mem_config {
uint64_t start_hpa; /* the start HPA of VM memory configuration, for pre-launched VMs only */
uint64_t size; /* VM memory size configuration */
+ uint64_t start_hpa2; /* the start HPA of the second VM memory region, for pre-launched VMs only */
+ uint64_t size_hpa2; /* the size of the second VM memory region */
};

struct target_vuart {
--
2.17.2
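The bookkeeping in prepare_prelaunched_vm_memmap above can be sketched as: walk the ve820, consume the HPA1 chunk for RAM-backed entries, and switch to HPA2 once HPA1 is exhausted. Names and the tuple-based e820 representation below are illustrative (the real code is C in vm.c):

```python
def map_prelaunched_memmap(e820, hpa1, hpa2):
    """Pair each RAM-backed ve820 entry with host physical memory.

    e820 is a list of (gpa, length, type) tuples; hpa1 and hpa2 are
    (base, size) chunks. Mapping starts in HPA1 and switches to HPA2
    once HPA1 is exhausted. (The uncached mapping of sub-1MB reserved
    regions done by the real C code is omitted for brevity.)
    """
    mappings = []
    base_hpa, remaining = hpa1
    is_hpa1 = True
    for gpa, length, etype in e820:
        if etype == 'RAM':
            if remaining < length:
                raise ValueError('HPA size incorrectly configured in ve820')
            mappings.append((gpa, base_hpa, length))
            base_hpa += length
            remaining -= length
        if remaining == 0 and is_hpa1:
            # HPA1 fully consumed: continue with the second, non-contiguous chunk
            is_hpa1 = False
            base_hpa, remaining = hpa2
    return mappings
```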


[PATCH 0/2] Non-contiguous HPA support for logical

Vijay Dhanraj
 

On some platforms, the HPA regions for a Virtual Machine cannot be
contiguous because of E820 reserved regions or the PCI hole. In such
cases, pre-launched VMs need to be assigned non-contiguous memory
regions, and this patch series addresses that.

Vijay Dhanraj (2):
HV: Add support to assign non-contiguous HPA regions for pre-launched
VM
acrn-config: support non-contiguous HPA for pre-launched VM

hypervisor/arch/x86/guest/vm.c | 18 +-
hypervisor/include/arch/x86/vm_config.h | 2 +
misc/acrn-config/board_config/ve820_c.py | 191 ++++++++++++------
.../scenario_config/scenario_item.py | 8 +
.../scenario_config/vm_configurations_c.py | 4 +
.../scenario_config/vm_configurations_h.py | 11 +-
.../config-xmls/generic/logical_partition.xml | 6 +-
7 files changed, 173 insertions(+), 67 deletions(-)

--
2.17.2


[PATCH] Makefile: fix make failure for logical_partition or hybrid scenario

Chen, Zide
 

If "make menuconfig" chooses hybrid or logical_partition scenario,
currently the build fails with "SCENARIO <$(SCENARIO)> is not supported.".

- CONFIG_LOGICAL_PARTITION: "awk -F "_|=" '{print $$2}'" yields the second
field (LOGICAL) only. The new "cut -d '_' -f 2-" keeps all fields after
"CONFIG_".

- Add the missing HYBRID scenario.

Tracked-On: #4067
Signed-off-by: Zide Chen <zide.chen@...>
---
Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index a4de4f6665d1..febb59015c67 100644
--- a/Makefile
+++ b/Makefile
@@ -59,8 +59,8 @@ include $(T)/hypervisor/scripts/makefile/cfg_update.mk

ifeq ($(DEFAULT_MENU_CONFIG_FILE), $(wildcard $(DEFAULT_MENU_CONFIG_FILE)))
BOARD_IN_MENUCONFIG := $(shell grep CONFIG_BOARD= $(DEFAULT_MENU_CONFIG_FILE) | awk -F '"' '{print $$2}')
- SCENARIO_IN_MENUCONFIG := $(shell grep -E "SDC=y|SDC2=y|INDUSTRY=y|LOGICAL_PARTITION=y" \
- $(DEFAULT_MENU_CONFIG_FILE) | awk -F "_|=" '{print $$2}' | tr A-Z a-z)
+ SCENARIO_IN_MENUCONFIG := $(shell grep -E "SDC=y|SDC2=y|INDUSTRY=y|LOGICAL_PARTITION=y|HYBRID=y" \
+ $(DEFAULT_MENU_CONFIG_FILE) | awk -F "=" '{print $$1}' | cut -d '_' -f 2- | tr A-Z a-z)

RELEASE := $(shell grep CONFIG_RELEASE=y $(DEFAULT_MENU_CONFIG_FILE))
ifneq ($(RELEASE),)
--
2.17.1
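The pipeline change is easy to check outside make: the old `awk -F "_|="` kept only the field between the first and second underscore, while the new `awk -F "=" '{print $1}' | cut -d '_' -f 2- | tr A-Z a-z` keeps everything after `CONFIG_`. The same extraction in Python (hypothetical helper name, for illustration only):

```python
def scenario_from_config_line(line):
    """Extract the scenario name from a menuconfig line such as
    'CONFIG_LOGICAL_PARTITION=y', mirroring the fixed shell pipeline
    awk -F "=" '{print $1}' | cut -d '_' -f 2- | tr A-Z a-z.
    """
    key = line.split('=')[0]         # e.g. CONFIG_LOGICAL_PARTITION
    fields = key.split('_')[1:]      # drop only the leading CONFIG field
    return '_'.join(fields).lower()  # e.g. logical_partition
```

The old extraction would yield only `logical` for CONFIG_LOGICAL_PARTITION, which matches no supported SCENARIO and triggers the "is not supported" error.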
