Re: [PATCH v3 2/5] HV: vuart: enable vuart console for VM

Eddie Dong
 


Yes, *vuart is better, thanks.
I find that idx is used to register the PIO handler, as we need it to get the proper
PIO index (UART_PIO_IDX0 or UART_PIO_IDX1).
I see. That is fine then.

If I move vuart_register_io_handler out of vuart_setup, I have to add the
condition "if (vu_config->type == VUART_LEGACY_PIO)" before calling it.
So it seems idx cannot be removed, but I can rename idx to vuart_idx to be
more readable.

static void vuart_setup(struct acrn_vm *vm,
        struct vuart_config *vu_config, uint16_t idx)
{
    uint32_t divisor;
    /* Set baud rate */
    divisor = (UART_CLOCK_RATE / BAUD_115200) >> 4U;
    vm->vuart[idx].dll = (uint8_t)divisor;
    vm->vuart[idx].dlh = (uint8_t)(divisor >> 8U);
    vm->vuart[idx].vm = vm;
    vuart_fifo_init(&vm->vuart[idx]);
    vuart_lock_init(&vm->vuart[idx]);
    if (vu_config->type == VUART_LEGACY_PIO) {
        vm->vuart[idx].base = vu_config->addr.port_base;
        vm->vuart[idx].irq = vu_config->irq;
        if (vuart_register_io_handler(vm, vu_config->addr.port_base, idx)) { /* <---------- idx is used here */
            vm->vuart[idx].active = true;
        }
    } else {
        /* TODO: add pci vuart support here */
        printf("PCI vuart is not support\n");
    }
}
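For comparison, a rough sketch of the alternative mentioned above (moving the registration out of vuart_setup); the caller-side placement is illustrative only, not taken from the patch:

    /* Sketch only: registration hoisted to the caller, which then has to
     * repeat the same type check that vuart_setup() performs internally. */
    vuart_setup(vm, vu_config, idx);
    if (vu_config->type == VUART_LEGACY_PIO) {
        if (vuart_register_io_handler(vm, vu_config->addr.port_base, idx)) {
            vm->vuart[idx].active = true;
        }
    }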


Regards,
Conghui.


Re: [PATCH v7 3/5] HV: Reshuffle start_cpus and start_cpu

Yan, Like
 

On Fri, Apr 19, 2019 at 03:49:34AM +0000, Eddie Dong wrote:
-
- /* Error condition - loop endlessly for now */
- do {
- } while (1);
+ pr_fatal("Secondary CPU%hu failed to come up", pcpu_id);
+ cpu_set_current_state(pcpu_id, PCPU_STATE_INVALID);
I am questioning the need of the flag.

If one PCPU fails to start, resume fails or boot fails. We should explicitly
do this. This is what panic does now.
Do we need to handle this additional state with this assumption?
Yes. As we talked about in the previous version of the patchset, we will reuse
start_cpus (calling start_cpu) when resetting the CPU of a lapic_pt VM. If the pcpu
fails to start, we should mark it as INVALID and return an error to the caller.
Marking it INVALID here aims at preventing a later-launched VM from using it.

This is a question of what we want to do if shutdown of an RTVM fails or bootup fails.

For power-on bootup, we clearly define our policy as PANIC. This doesn't require any additional logic for further processing.
For a shutdown failure of an RTVM, do we want to power off/on the entire platform, or do we want to try to restart the RTVM? I don't think the latter is feasible.
If this is correct, we just PANIC here and need no further logic.
I thought we had made the decision in the comments to v3: the principle is not to impact the safety VM/SOS VM/other RTVMs, so we just "mark the PCPU as a dead/bad CPU" if shutdown failed.
Please correct us if we misunderstood anything.
https://lists.projectacrn.org/g/acrn-dev/topic/patch_v3_2_2_hv_reset/31028081?p=Created,,,20,1,20,0::recentpostdate%2Fsticky,,,20,2,120,31028081&jump=1


In case of FuSa, the PANIC will trigger certain action to show MESSAGE to users. This is another story.




}
}

-void start_cpus(void)
+
+/**
+ * @brief Start all cpus if the bit is set in mask except itself
+ *
+ * @param[in] mask mask bits of cpus which should be started
+ *
+ * @return true if all cpus set in mask are started
+ * @return false if there are any cpus set in mask aren't started
+ */
+bool start_cpus(uint64_t mask)
{
uint16_t i;
+ uint16_t pcpu_id = get_cpu_id();
+ uint64_t need_start_mask = mask;
+ uint64_t expected_start_mask = mask;

/* secondary cpu start up will wait for pcpu_sync -> 0UL */
atomic_store64(&pcpu_sync, 1UL);

- for (i = 0U; i < phys_cpu_num; i++) {
- if (get_cpu_id() == i) {
- continue;
+ i = ffs64(need_start_mask);
+ while (i != INVALID_BIT_INDEX) {
This style is fine too. But I am wondering if the following style is better:

for (i = ffs64(need_start_mask); i != INVALID_BIT_INDEX; i =
ffs64(need_start_mask)) {

Please check from MISRAC p.o.v.
Checked with Huihuang. From MISRAC p.o.v, both while and for are fine.
BTW, all current code related to ffs64(mask) uses while.

Do you prefer we use for? If yes, I will do it in the next version.
I am fine now. If we do, we can do it later to cover all the code.



+ bitmap_clear_nolock(i, &need_start_mask);
+
+ if (pcpu_id == i) {
+ continue; /* Avoid start itself */
}

start_cpu(i);
+ i = ffs64(need_start_mask);
}

/* Trigger event to allow secondary CPUs to continue */
atomic_store64(&pcpu_sync, 0UL);
+
+ return ((pcpu_active_bitmap & expected_start_mask) ==
+expected_start_mask);
Here we can directly use mask, since it is read-only. This way, we can avoid
using both need_start_mask and expected_start_mask.
Yes, we can use mask directly and will use it in the next version.
But need_start_mask is still needed, as we change it at runtime.
We save one, and you can use a more readable name. I really couldn't distinguish expected_start_mask from need_start_mask. (A consolidated sketch of both suggestions follows the function below.)


}
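Putting the two suggestions above together, the body of start_cpus() would look roughly like this; a sketch only, restating what the patch already does with the agreed cleanups:

    bool start_cpus(uint64_t mask)
    {
        uint16_t i;
        uint16_t pcpu_id = get_cpu_id();
        uint64_t need_start_mask = mask;

        /* secondary cpu start up will wait for pcpu_sync -> 0UL */
        atomic_store64(&pcpu_sync, 1UL);

        /* for-loop form of the mask walk discussed above */
        for (i = ffs64(need_start_mask); i != INVALID_BIT_INDEX; i = ffs64(need_start_mask)) {
            bitmap_clear_nolock(i, &need_start_mask);
            if (pcpu_id == i) {
                continue; /* do not start the caller itself */
            }
            start_cpu(i);
        }

        /* Trigger event to allow secondary CPUs to continue */
        atomic_store64(&pcpu_sync, 0UL);

        /* mask is never modified, so it already is the expected set */
        return ((pcpu_active_bitmap & mask) == mask);
    }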

void stop_cpus(void)
diff --git a/hypervisor/arch/x86/pm.c b/hypervisor/arch/x86/pm.c
index 3a0d1b2c..2f27565a 100644
--- a/hypervisor/arch/x86/pm.c
+++ b/hypervisor/arch/x86/pm.c
@@ -4,6 +4,7 @@
*/

#include <types.h>
+ #include <bits.h>
#include <acrn_common.h>
#include <default_acpi_info.h>
#include <platform_acpi_info.h>
@@ -140,7 +141,10 @@ void do_acpi_s3(struct pm_s_state_data
*sstate_data, uint32_t pm1a_cnt_val, uint

void host_enter_s3(struct pm_s_state_data *sstate_data, uint32_t
pm1a_cnt_val, uint32_t pm1b_cnt_val) {
+ uint16_t pcpu_nums = get_pcpu_nums();
uint64_t pmain_entry_saved;
+ uint64_t mask;
+ uint16_t i;

stac();

@@ -185,6 +189,13 @@ void host_enter_s3(struct pm_s_state_data
*sstate_data, uint32_t pm1a_cnt_val, u
write_trampoline_sym(main_entry, pmain_entry_saved);
clac();

+ /* Skip BSP */
+ for (i = 1U; i < pcpu_nums; i++) {
+ bitmap_set_nolock(i, &mask);
+ }
+
Maybe not necessary. See above comments.
BTW, we may have a bug here: if one PCPU is already offline, this code
seems incorrect.
Anyway, this is not an issue with this patch; we can decouple it from this one. (One possible direction is sketched after this hunk.)
Please check after this patch.
Will check it.
No need to block this patch for now.


/* online all APs again */
- start_cpus();
+ if (!start_cpus(mask)) {
+ panic("Failed to start all APs!");
+ }
}
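One possible direction for the follow-up check mentioned above, as a sketch only; it assumes the set of online APs can be snapshotted before they are parked for S3, and it starts from an explicitly initialized mask:

    /* Sketch, not part of this patch: expect only the APs that were actually
     * online before suspend to come back, and exclude the BSP itself. */
    uint64_t ap_mask = pcpu_active_bitmap; /* snapshot taken before APs are parked */

    bitmap_clear_nolock(get_cpu_id(), &ap_mask);

    /* ... suspend/resume sequence unchanged ... */

    if (!start_cpus(ap_mask)) {
        panic("Failed to start all APs!");
    }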
diff --git a/hypervisor/include/arch/x86/cpu.h
b/hypervisor/include/arch/x86/cpu.h
index 13c2fd9e..8f25ca5d 100644
--- a/hypervisor/include/arch/x86/cpu.h
+++ b/hypervisor/include/arch/x86/cpu.h
@@ -250,6 +250,7 @@ enum pcpu_boot_state {
PCPU_STATE_RUNNING,
PCPU_STATE_HALTED,
PCPU_STATE_DEAD,
+ PCPU_STATE_INVALID,
};

/* Function prototypes */
@@ -259,7 +260,7 @@ void trampoline_start16(void); void
load_cpu_state_data(void); void init_cpu_pre(uint16_t
pcpu_id_args); void init_cpu_post(uint16_t pcpu_id); -void
start_cpus(void);
+bool start_cpus(uint64_t mask);
void stop_cpus(void);
void wait_sync_change(uint64_t *sync, uint64_t wake_sync);

--
2.20.0







Re: [PATCH v3 2/5] HV: vuart: enable vuart console for VM

Conghui Chen
 

Hi Eddie,


- if (vu->active) {
- return vu;
+static void vuart_setup(struct acrn_vm *vm,
+ struct vuart_config *vu_config, uint16_t idx) {
Will it be more readable to pass "*vuart", rather than idx? Idx is too
implementation specific.
Yes, *vuart is better, thanks.
I find that idx is used to register the PIO handler, as we need it to get the proper PIO index (UART_PIO_IDX0 or UART_PIO_IDX1).
If I move vuart_register_io_handler out of vuart_setup, I have to add the condition "if (vu_config->type == VUART_LEGACY_PIO)" before calling it.
So it seems idx cannot be removed, but I can rename idx to vuart_idx to be more readable.

static void vuart_setup(struct acrn_vm *vm,
        struct vuart_config *vu_config, uint16_t idx)
{
    uint32_t divisor;
    /* Set baud rate */
    divisor = (UART_CLOCK_RATE / BAUD_115200) >> 4U;
    vm->vuart[idx].dll = (uint8_t)divisor;
    vm->vuart[idx].dlh = (uint8_t)(divisor >> 8U);
    vm->vuart[idx].vm = vm;
    vuart_fifo_init(&vm->vuart[idx]);
    vuart_lock_init(&vm->vuart[idx]);
    if (vu_config->type == VUART_LEGACY_PIO) {
        vm->vuart[idx].base = vu_config->addr.port_base;
        vm->vuart[idx].irq = vu_config->irq;
        if (vuart_register_io_handler(vm, vu_config->addr.port_base, idx)) { /* <---------- idx is used here */
            vm->vuart[idx].active = true;
        }
    } else {
        /* TODO: add pci vuart support here */
        printf("PCI vuart is not support\n");
    }
}


Regards,
Conghui.


Re: [PATCH] hv: instr_emul: check the bit 0(w bit) of opcode when necessary

Li, Fei1
 

On Thu, Apr 18, 2019 at 09:13:22PM +0800, Yin, Fengwei wrote:
Hi Fei,

On 4/18/2019 8:25 AM, Li, Fei1 wrote:
Not every instruction supports the operand-size bit (w). This patch tries to correct
what was done in commit 9df8790 by setting a flag, VIE_OP_F_BYTE_OP, to indicate which
instructions support the operand-size bit (w).

This bug was found by removing the VMX_PROCBASED_CTLS2_VAPIC_REGS VMCS setting when the
physical platform doesn't support this APICv feature. However, when this is emulated on the MRB board,
Android can't boot, because when switching to the trusty world it first checks
"Delivery Status" in the ICR. It turns out that this Bit Test instruction is not
emulated correctly.

Signed-off-by: Qi Yadong <yadong.qi@...>
Signed-off-by: Li, Fei1 <fei1.li@...>
---
hypervisor/arch/x86/guest/instr_emul.c | 28 +++++++++++++++++---------
1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/hypervisor/arch/x86/guest/instr_emul.c b/hypervisor/arch/x86/guest/instr_emul.c
index d11f7bc9e712..8a392967b3bd 100644
--- a/hypervisor/arch/x86/guest/instr_emul.c
+++ b/hypervisor/arch/x86/guest/instr_emul.c
@@ -84,15 +84,17 @@
#define VIE_OP_TYPE_TEST 15U
/* struct vie_op.op_flags */
-#define VIE_OP_F_IMM (1U << 0U) /* 16/32-bit immediate operand */
-#define VIE_OP_F_IMM8 (1U << 1U) /* 8-bit immediate operand */
-#define VIE_OP_F_MOFFSET (1U << 2U) /* 16/32/64-bit immediate moffset */
-#define VIE_OP_F_NO_MODRM (1U << 3U)
-#define VIE_OP_F_CHECK_GVA_DI (1U << 4U) /* for movs, need to check DI */
+#define VIE_OP_F_IMM (1U << 0U) /* 16/32-bit immediate operand */
+#define VIE_OP_F_IMM8 (1U << 1U) /* 8-bit immediate operand */
+#define VIE_OP_F_MOFFSET (1U << 2U) /* 16/32/64-bit immediate moffset */
+#define VIE_OP_F_NO_MODRM (1U << 3U)
+#define VIE_OP_F_CHECK_GVA_DI (1U << 4U) /* for movs, need to check DI */
+#define VIE_OP_F_BYTE_OP (1U << 5U) /* 8-bit operands. */
static const struct instr_emul_vie_op two_byte_opcodes[256] = {
[0xB6] = {
.op_type = VIE_OP_TYPE_MOVZX,
+ .op_flags = VIE_OP_F_BYTE_OP,
}
Rethinking the movzx case again, this patch can't cover it.

movzx needs two operand sizes: one for the MMIO size and another for the
register size. Both should be returned from the instruction decoder.

The patch itself is fine to merge first. We need an extra patch
to handle the movzx case (see the sketch after this diff).
Got it.
Will try to fix it in another patch.

Thanks.


Regards
Yin, Fengwei
,
[0xB7] = {
.op_type = VIE_OP_TYPE_MOVZX,
@@ -103,6 +105,7 @@ static const struct instr_emul_vie_op two_byte_opcodes[256] = {
},
[0xBE] = {
.op_type = VIE_OP_TYPE_MOVSX,
+ .op_flags = VIE_OP_F_BYTE_OP,
},
};
@@ -121,12 +124,14 @@ static const struct instr_emul_vie_op one_byte_opcodes[256] = {
},
[0x88] = {
.op_type = VIE_OP_TYPE_MOV,
+ .op_flags = VIE_OP_F_BYTE_OP,
},
[0x89] = {
.op_type = VIE_OP_TYPE_MOV,
},
[0x8A] = {
.op_type = VIE_OP_TYPE_MOV,
+ .op_flags = VIE_OP_F_BYTE_OP,
},
[0x8B] = {
.op_type = VIE_OP_TYPE_MOV,
@@ -141,15 +146,15 @@ static const struct instr_emul_vie_op one_byte_opcodes[256] = {
},
[0xA4] = {
.op_type = VIE_OP_TYPE_MOVS,
- .op_flags = VIE_OP_F_NO_MODRM | VIE_OP_F_CHECK_GVA_DI
+ .op_flags = VIE_OP_F_NO_MODRM | VIE_OP_F_CHECK_GVA_DI | VIE_OP_F_BYTE_OP,
},
[0xA5] = {
.op_type = VIE_OP_TYPE_MOVS,
- .op_flags = VIE_OP_F_NO_MODRM | VIE_OP_F_CHECK_GVA_DI
+ .op_flags = VIE_OP_F_NO_MODRM | VIE_OP_F_CHECK_GVA_DI,
},
[0xAA] = {
.op_type = VIE_OP_TYPE_STOS,
- .op_flags = VIE_OP_F_NO_MODRM
+ .op_flags = VIE_OP_F_NO_MODRM | VIE_OP_F_BYTE_OP,
},
[0xAB] = {
.op_type = VIE_OP_TYPE_STOS,
@@ -158,7 +163,7 @@ static const struct instr_emul_vie_op one_byte_opcodes[256] = {
[0xC6] = {
/* XXX Group 11 extended opcode - not just MOV */
.op_type = VIE_OP_TYPE_MOV,
- .op_flags = VIE_OP_F_IMM8,
+ .op_flags = VIE_OP_F_IMM8 | VIE_OP_F_BYTE_OP,
},
[0xC7] = {
.op_type = VIE_OP_TYPE_MOV,
@@ -184,12 +189,14 @@ static const struct instr_emul_vie_op one_byte_opcodes[256] = {
},
[0x84] = {
.op_type = VIE_OP_TYPE_TEST,
+ .op_flags = VIE_OP_F_BYTE_OP,
},
[0x85] = {
.op_type = VIE_OP_TYPE_TEST,
},
[0x08] = {
.op_type = VIE_OP_TYPE_OR,
+ .op_flags = VIE_OP_F_BYTE_OP,
},
[0x09] = {
.op_type = VIE_OP_TYPE_OR,
@@ -1893,7 +1900,8 @@ static int32_t decode_opcode(struct instr_emul_vie *vie)
* by prefix and default operand size attribute (handled
* in decode_prefixes).
*/
- if ((ret == 0) && ((vie->opcode & 0x1U) == 0U)) {
+ if ((ret == 0) && ((vie->op.op_flags & VIE_OP_F_BYTE_OP) != 0U &&
+ ((vie->opcode & 0x1U) == 0U))) {
vie->opsize = 1U;
}
}
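A rough sketch of the follow-up direction for the movzx/movsx case mentioned in the review above; the read_size field is hypothetical and is only meant to illustrate that two sizes are involved:

    /* Hypothetical sketch, not actual ACRN code: for movzx/movsx the source
     * memory access is narrower than the destination register (one byte for
     * 0F B6/BE; 0F B7 would need analogous word handling), while the register
     * keeps the operand size derived from the prefixes.  The decoder would
     * therefore have to report both sizes instead of a single vie->opsize. */
    if (((vie->op.op_type == VIE_OP_TYPE_MOVZX) || (vie->op.op_type == VIE_OP_TYPE_MOVSX)) &&
            ((vie->op.op_flags & VIE_OP_F_BYTE_OP) != 0U)) {
        vie->read_size = 1U;            /* hypothetical field: MMIO access size */
    } else {
        vie->read_size = vie->opsize;   /* otherwise one size serves both */
    }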



Re: [PATCH] hv: free ptdev device IRQs when shutting down VM

Eddie Dong
 

Acked-by: Eddie Dong <eddie.dong@...>

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Friday, April 19, 2019 8:13 AM
To: acrn-dev@...
Cc: Chen, Zide <zide.chen@...>; Grandhi, Sainath
<sainath.grandhi@...>
Subject: [acrn-dev] [PATCH] hv: free ptdev device IRQs when shutting down
VM

Currently ptdev_release_all_entries() calls ptirq_release_entry() to release
ptdev IRQs, which is not sufficient because free_irq() is not included in
ptirq_release_entry().

Furthermore, ptirq_deactivate_entry() and ptirq_release_entry() have a lot of
overlap, which makes the differences between them unclear.

This patch does:

- Adds a ptirq_deactivate_entry() call in ptdev_release_all_entries() if the
IRQ has not already been freed.

- Removes unnecessary code from ptirq_deactivate_entry() to make it do the
exact opposite of ptirq_activate_entry(), except that
entry->allocated_pirq is not reset, which is not necessary.

- Adds the missing del_timer() to ptirq_release_entry() to make it almost the
opposite of ptirq_alloc_entry(); adds memset() to clear the whole entry,
which doesn't affect functionality but keeps the data structure
clean.

Tracked-On: #2700
Signed-off-by: Sainath Grandhi <sainath.grandhi@...>
Signed-off-by: Zide Chen <zide.chen@...>
---
hypervisor/common/ptdev.c | 22 ++++++----------------
1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/hypervisor/common/ptdev.c b/hypervisor/common/ptdev.c index
e67d57b19f41..6f5418b25ee9 100644
--- a/hypervisor/common/ptdev.c
+++ b/hypervisor/common/ptdev.c
@@ -112,16 +112,15 @@ void ptirq_release_entry(struct
ptirq_remapping_info *entry) {
uint64_t rflags;

- /*
- * remove entry from softirq list
- * is required before calling ptirq_release_entry.
- */
spinlock_irqsave_obtain(&entry->vm->softirq_dev_lock, &rflags);
list_del_init(&entry->softirq_node);
+ del_timer(&entry->intr_delay_timer);
spinlock_irqrestore_release(&entry->vm->softirq_dev_lock, rflags);
- atomic_clear32(&entry->active, ACTIVE_FLAG);
+
bitmap_clear_nolock((entry->ptdev_entry_id) & 0x3FU,
&ptirq_entry_bitmaps[((entry->ptdev_entry_id) & 0x3FU) >> 6U]);
+
+ (void)memset((void *)entry, 0U, sizeof(struct ptirq_remapping_info));
}

/* interrupt context */
@@ -176,18 +175,8 @@ int32_t ptirq_activate_entry(struct
ptirq_remapping_info *entry, uint32_t phys_i

void ptirq_deactivate_entry(struct ptirq_remapping_info *entry) {
- uint64_t rflags;
-
atomic_clear32(&entry->active, ACTIVE_FLAG);
-
free_irq(entry->allocated_pirq);
- entry->allocated_pirq = IRQ_INVALID;
-
- /* remove from softirq list if added */
- spinlock_irqsave_obtain(&entry->vm->softirq_dev_lock, &rflags);
- list_del_init(&entry->softirq_node);
- del_timer(&entry->intr_delay_timer);
- spinlock_irqrestore_release(&entry->vm->softirq_dev_lock, rflags);
}

void ptdev_init(void)
@@ -207,8 +196,9 @@ void ptdev_release_all_entries(const struct acrn_vm
*vm)
/* VM already down */
for (idx = 0U; idx < CONFIG_MAX_PT_IRQ_ENTRIES; idx++) {
entry = &ptirq_entries[idx];
- if (entry->vm == vm) {
+ if ((entry->vm == vm) && is_entry_active(entry)) {
spinlock_obtain(&ptdev_lock);
+ ptirq_deactivate_entry(entry);
ptirq_release_entry(entry);
spinlock_release(&ptdev_lock);
}
--
2.17.1
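For clarity, the per-entry release flow that results from this patch, restated from the diff above as a sketch:

    /* For each remapping entry owned by the VM that is still active: */
    spinlock_obtain(&ptdev_lock);
    ptirq_deactivate_entry(entry);  /* clears ACTIVE_FLAG and frees the physical IRQ */
    ptirq_release_entry(entry);     /* drops the softirq node, del_timer, clears the
                                     * allocation bitmap and memsets the whole entry */
    spinlock_release(&ptdev_lock);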



Re: [PATCH v7 5/5] HV: Reset physical core of lapic_pt vm when shutdown

Kaige Fu
 

On 04-19 Fri 03:40, Eddie Dong wrote:


I got your comment. But here is what I think: I refined stop_cpus to take one
more parameter, 'mask'. So we can reuse stop_cpus here directly instead
of doing something like the following:

1. make_pcpu_offline()
2. wait for the pcpus to go offline.

The above logic is already in stop_cpus. IMHO, it is better to reuse stop_cpus;
otherwise, we have duplicated code here.

If you prefer make_pcpu_offline and waiting here, I will do it in the next version. Sorry
for the confusion.
We don't need to call stop_cpus again. It is just one line of code that can do what stop_cpus did.
Waiting for the pcpus to go offline can be a separate API that can be shared.
Got it. Will do in next version.
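A rough sketch of the direction agreed on above; only the make_pcpu_offline call is taken from the discussion, and the wait helper name is hypothetical:

    /* Sketch only: reset path for the lapic_pt VM's pcpu without calling
     * stop_cpus(); the offline wait becomes a small shared helper. */
    make_pcpu_offline(pcpu_id);
    wait_pcpus_offline(1UL << pcpu_id); /* hypothetical shared API */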




Re: [PATCH] dm: uart: fix windows booting failure issue

Eddie Dong
 

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Liu Yuan
Sent: Friday, April 19, 2019 9:30 AM
To: acrn-dev@...
Cc: Wang, Yu1 <yu1.wang@...>; Chen, Jian Jun
<jian.jun.chen@...>; Liu, Yuan1 <yuan1.liu@...>
Subject: [acrn-dev] [PATCH] dm: uart: fix windows booting failure issue

Set thre_int_pending whenever IER_ETXRDY gets set in the IER; there is no longer a
need to always set thre_int_pending when the IIR is read.

Signed-off-by: Yuan Liu <yuan1.liu@...>
---
devicemodel/hw/uart_core.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/devicemodel/hw/uart_core.c b/devicemodel/hw/uart_core.c index
6a56dbe..dc51d28 100644
--- a/devicemodel/hw/uart_core.c
+++ b/devicemodel/hw/uart_core.c
@@ -434,6 +434,9 @@ uart_write(struct uart_vdev *uart, int offset, uint8_t
value)
uart->thre_int_pending = true;
break;
case REG_IER:
+ if (((uart->ier & IER_ETXRDY) == 0) &&
+ ((value & IER_ETXRDY) != 0))
+ uart->thre_int_pending = true;
/*
* Apply mask so that bits 4-7 are 0
* Also enables bits 0-3 only if they're 1 @@ -553,12 +556,8 @@
uart_read(struct uart_vdev *uart, int offset)
/*
* Reading the IIR register clears the THRE INT.
*/
- if (intr_reason == IIR_TXRDY) {
+ if (intr_reason == IIR_TXRDY)
uart->thre_int_pending = false;
- uart_toggle_intr(uart);
- }
- /* THRE INT is re-generated since the THR register is always empty in
here */
- uart->thre_int_pending = true;
However, the state change of thre_int_pending is invisible to the OS. This is why we need uart_toggle_intr(uart) to make it visible, e.g. through the IRQ signal (see the sketch after this patch).
If you do so, do you have any endorsement from the spec?


iir |= intr_reason;

--
2.7.4
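A minimal sketch of the point above, assuming the interrupt line should be re-evaluated right where the pending flag changes; illustrative only, not the submitted patch:

    case REG_IER:
        if (((uart->ier & IER_ETXRDY) == 0) &&
            ((value & IER_ETXRDY) != 0)) {
            uart->thre_int_pending = true;
            /* Re-evaluate the IRQ line so this state change is
             * actually visible to the guest. */
            uart_toggle_intr(uart);
        }
        /* ... existing IER masking and assignment continue as before ... */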



Re: [PATCH v2 1/2] HV: check vm id param when dispatching hypercall

Eddie Dong
 

Need Yong hua to review, since he ever fixed some of the similar issues.

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Victor Sun
Sent: Friday, April 19, 2019 10:49 AM
To: acrn-dev@...
Subject: [acrn-dev] [PATCH v2 1/2] HV: check vm id param when dispatching
hypercall

If the vmcall param passed from the guest represents a vmid, we should make
sure it is a valid one, because that is a precondition of the following
get_vm_from_vmid(). Then we don't need to do a NULL VM pointer check in
is_valid_vm(), because get_vm_from_vmid() never returns NULL.

Signed-off-by: Victor Sun <victor.sun@...>
---
hypervisor/arch/x86/guest/vm.c | 5 +-
hypervisor/arch/x86/guest/vmcall.c | 107
+++++++++++++++++++++++++------------
2 files changed, 77 insertions(+), 35 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index 28f3122..242e716 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -47,9 +47,12 @@ uint16_t get_vmid_by_uuid(const uint8_t *uuid)
return vm_id;
}

+/**
+ * @pre vm != NULL
+ */
bool is_valid_vm(const struct acrn_vm *vm) {
- return (vm != NULL) && (vm->state != VM_STATE_INVALID);
+ return (vm->state != VM_STATE_INVALID);
}

bool is_sos_vm(const struct acrn_vm *vm) diff --git
a/hypervisor/arch/x86/guest/vmcall.c b/hypervisor/arch/x86/guest/vmcall.c
index cee4797..783a31a 100644
--- a/hypervisor/arch/x86/guest/vmcall.c
+++ b/hypervisor/arch/x86/guest/vmcall.c
@@ -28,7 +28,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu
*vcpu)
uint64_t param1 = vcpu_get_gpreg(vcpu, CPU_REG_RDI);
/* hypercall param2 from guest*/
uint64_t param2 = vcpu_get_gpreg(vcpu, CPU_REG_RSI);
- int32_t ret;
+ /* in case hypercall param1 is a vm id */
+ uint16_t vm_id = (uint16_t)param1;
+ bool vmid_is_valid = (vm_id < CONFIG_MAX_VM_NUM) ? true : false;
+ int32_t ret = -1;

switch (hypcall_id) {
case HC_SOS_OFFLINE_CPU:
@@ -57,69 +60,89 @@ static int32_t dispatch_hypercall(struct acrn_vcpu
*vcpu)

case HC_DESTROY_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_destroy_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_destroy_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_START_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_start_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_start_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_RESET_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_reset_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_reset_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_PAUSE_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_pause_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_pause_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_CREATE_VCPU:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_create_vcpu(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_create_vcpu(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_SET_VCPU_REGS:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_set_vcpu_regs(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_set_vcpu_regs(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_SET_IRQLINE:
/* param1: vmid */
- ret = hcall_set_irqline(vm, (uint16_t)param1,
- (struct acrn_irqline_ops *)&param2);
+ if (vmid_is_valid) {
+ ret = hcall_set_irqline(vm, vm_id,
+ (struct acrn_irqline_ops *)&param2);
+ }
break;

case HC_INJECT_MSI:
/* param1: vmid */
- ret = hcall_inject_msi(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_inject_msi(vm, vm_id, param2);
+ }
break;

case HC_SET_IOREQ_BUFFER:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_set_ioreq_buffer(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_set_ioreq_buffer(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_NOTIFY_REQUEST_FINISH:
/* param1: vmid
* param2: vcpu_id */
- ret = hcall_notify_ioreq_finish((uint16_t)param1,
- (uint16_t)param2);
+ if (vmid_is_valid) {
+ ret = hcall_notify_ioreq_finish(vm_id,
+ (uint16_t)param2);
+ }
break;

case HC_VM_SET_MEMORY_REGIONS:
@@ -127,7 +150,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu
*vcpu)
break;

case HC_VM_WRITE_PROTECT_PAGE:
- ret = hcall_write_protect_page(vm, (uint16_t)param1, param2);
+ /* param1: vmid */
+ if (vmid_is_valid) {
+ ret = hcall_write_protect_page(vm, vm_id, param2);
+ }
break;

/*
@@ -140,27 +166,37 @@ static int32_t dispatch_hypercall(struct acrn_vcpu
*vcpu)

case HC_VM_GPA2HPA:
/* param1: vmid */
- ret = hcall_gpa_to_hpa(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_gpa_to_hpa(vm, vm_id, param2);
+ }
break;

case HC_ASSIGN_PTDEV:
/* param1: vmid */
- ret = hcall_assign_ptdev(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_assign_ptdev(vm, vm_id, param2);
+ }
break;

case HC_DEASSIGN_PTDEV:
/* param1: vmid */
- ret = hcall_deassign_ptdev(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_deassign_ptdev(vm, vm_id, param2);
+ }
break;

case HC_SET_PTDEV_INTR_INFO:
/* param1: vmid */
- ret = hcall_set_ptdev_intr_info(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_set_ptdev_intr_info(vm, vm_id, param2);
+ }
break;

case HC_RESET_PTDEV_INTR_INFO:
/* param1: vmid */
- ret = hcall_reset_ptdev_intr_info(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_reset_ptdev_intr_info(vm, vm_id, param2);
+ }
break;

case HC_WORLD_SWITCH:
@@ -180,7 +216,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu
*vcpu)
break;

case HC_VM_INTR_MONITOR:
- ret = hcall_vm_intr_monitor(vm, (uint16_t)param1, param2);
+ /* param1: vmid */
+ if (vmid_is_valid) {
+ ret = hcall_vm_intr_monitor(vm, vm_id, param2);
+ }
break;

default:
--
2.7.4



Re: [PATCH v2 2/2] HV: validate target vm in hypercall

Eddie Dong
 

Acked-by: Eddie Dong <eddie.dong@...>

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Victor Sun
Sent: Friday, April 19, 2019 10:49 AM
To: acrn-dev@...
Subject: [acrn-dev] [PATCH v2 2/2] HV: validate target vm in hypercall

- The target vm in most hypercalls should be a NORMAL_VM; in some
exceptions it might be a SOS_VM. We should validate them.

- Please be aware that some hypercalls might have limitations on a specific target
vm like RT or SAFETY VMs; this is left as a "TODO" for the future.

- Unify the coding style:

int32_t hcall_foo(vm, target_vm_id, ...)
{
int32_t ret = -1;
...

if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
ret = ....
}

return ret;
}

Signed-off-by: Victor Sun <victor.sun@...>
---
hypervisor/arch/x86/guest/vm.c | 9 ++
hypervisor/common/hypercall.c | 256
+++++++++++++++------------------
hypervisor/include/arch/x86/guest/vm.h | 1 +
3 files changed, 124 insertions(+), 142 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index 242e716..3dd9a02 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -64,6 +64,15 @@ bool is_sos_vm(const struct acrn_vm *vm)
* @pre vm != NULL
* @pre vm->vmid < CONFIG_MAX_VM_NUM
*/
+bool is_normal_vm(const struct acrn_vm *vm) {
+ return (get_vm_config(vm->vm_id)->type == NORMAL_VM); }
+
+/**
+ * @pre vm != NULL
+ * @pre vm->vmid < CONFIG_MAX_VM_NUM
+ */
bool is_prelaunched_vm(const struct acrn_vm *vm) {
struct acrn_vm_config *vm_config;
diff --git a/hypervisor/common/hypercall.c b/hypervisor/common/hypercall.c
index 697bcda..bc2cdcb 100644
--- a/hypervisor/common/hypercall.c
+++ b/hypervisor/common/hypercall.c
@@ -200,12 +200,11 @@ int32_t hcall_create_vm(struct acrn_vm *vm,
uint64_t param)
*/
int32_t hcall_destroy_vm(uint16_t vmid) {
- int32_t ret;
+ int32_t ret = -1;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
ret = shutdown_vm(target_vm);
}

@@ -225,15 +224,13 @@ int32_t hcall_destroy_vm(uint16_t vmid)
*/
int32_t hcall_start_vm(uint16_t vmid)
{
- int32_t ret = 0;
+ int32_t ret = -1;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else if (target_vm->sw.io_shared_page == NULL) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) &&
(target_vm->sw.io_shared_page != NULL)) {
+ /* TODO: check target_vm guest_flags */
start_vm(target_vm);
+ ret = 0;
}

return ret;
@@ -253,11 +250,10 @@ int32_t hcall_start_vm(uint16_t vmid) int32_t
hcall_pause_vm(uint16_t vmid) {
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
pause_vm(target_vm);
ret = 0;
}
@@ -282,23 +278,21 @@ int32_t hcall_pause_vm(uint16_t vmid)
*/
int32_t hcall_create_vcpu(struct acrn_vm *vm, uint16_t vmid, uint64_t
param) {
- int32_t ret;
+ int32_t ret = -1;
uint16_t pcpu_id;
struct acrn_create_vcpu cv;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm) || (param == 0U)) {
- ret = -1;
- } else if (copy_from_gpa(vm, &cv, param, sizeof(cv)) != 0) {
- pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
- } else {
- pcpu_id = allocate_pcpu();
- if (pcpu_id == INVALID_CPU_ID) {
- pr_err("%s: No physical available\n", __func__);
- ret = -1;
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) && (param !=
0U)) {
+ if (copy_from_gpa(vm, &cv, param, sizeof(cv)) != 0) {
+ pr_err("%s: Unable copy param to vm\n", __func__);
} else {
- ret = prepare_vcpu(target_vm, pcpu_id);
+ pcpu_id = allocate_pcpu();
+ if (pcpu_id == INVALID_CPU_ID) {
+ pr_err("%s: No physical available\n", __func__);
+ } else {
+ ret = prepare_vcpu(target_vm, pcpu_id);
+ }
}
}

@@ -320,11 +314,10 @@ int32_t hcall_create_vcpu(struct acrn_vm *vm,
uint16_t vmid, uint64_t param) int32_t hcall_reset_vm(uint16_t vmid) {
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm) || is_sos_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
ret = reset_vm(target_vm);
}
return ret;
@@ -352,7 +345,7 @@ int32_t hcall_set_vcpu_regs(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
int32_t ret = -1;

/* Only allow setup init ctx while target_vm is inactive */
- if (is_valid_vm(target_vm) && (param != 0U) && (!is_sos_vm(target_vm))
&& (target_vm->state != VM_STARTED)) {
+ if (is_valid_vm(target_vm) && (param != 0U) &&
is_normal_vm(target_vm)
+&& (target_vm->state != VM_STARTED)) {
if (copy_from_gpa(vm, &vcpu_regs, param, sizeof(vcpu_regs)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
} else if (vcpu_regs.vcpu_id >= CONFIG_MAX_VCPUS_PER_VM) { @@
-388,26 +381,24 @@ int32_t hcall_set_irqline(const struct acrn_vm *vm,
uint16_t vmid, {
uint32_t irq_pic;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm)) {
- ret = -EINVAL;
- } else if (ops->gsi >= vioapic_pincount(vm)) {
- ret = -EINVAL;
- } else {
- if (ops->gsi < vpic_pincount()) {
- /*
- * IRQ line for 8254 timer is connected to
- * I/O APIC pin #2 but PIC pin #0,route GSI
- * number #2 to PIC IRQ #0.
- */
- irq_pic = (ops->gsi == 2U) ? 0U : ops->gsi;
- vpic_set_irqline(target_vm, irq_pic, ops->op);
- }
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (ops->gsi < vioapic_pincount(vm)) {
+ if (ops->gsi < vpic_pincount()) {
+ /*
+ * IRQ line for 8254 timer is connected to
+ * I/O APIC pin #2 but PIC pin #0,route GSI
+ * number #2 to PIC IRQ #0.
+ */
+ irq_pic = (ops->gsi == 2U) ? 0U : ops->gsi;
+ vpic_set_irqline(target_vm, irq_pic, ops->op);
+ }

- /* handle IOAPIC irqline */
- vioapic_set_irqline_lock(target_vm, ops->gsi, ops->op);
- ret = 0;
+ /* handle IOAPIC irqline */
+ vioapic_set_irqline_lock(target_vm, ops->gsi, ops->op);
+ ret = 0;
+ }
}

return ret;
@@ -479,11 +470,10 @@ int32_t hcall_inject_msi(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
struct acrn_msi_entry msi;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&msi, 0U, sizeof(msi));
if (copy_from_gpa(vm, &msi, param, sizeof(msi)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else {
/* For target cpu with lapic pt, send ipi instead of injection via
vlapic */
if (is_lapic_pt(target_vm)) {
@@ -518,30 +508,29 @@ int32_t hcall_set_ioreq_buffer(struct acrn_vm *vm,
uint16_t vmid, uint64_t param
struct acrn_set_ioreq_buffer iobuf;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
uint16_t i;
- int32_t ret;
+ int32_t ret = -1;

(void)memset((void *)&iobuf, 0U, sizeof(iobuf));
- if (!is_valid_vm(target_vm) || (copy_from_gpa(vm, &iobuf, param,
sizeof(iobuf)) != 0)) {
- pr_err("%p %s: target_vm is not valid or Unable copy param to
vm\n", target_vm, __func__);
- ret = -1;
- } else {
- dev_dbg(ACRN_DBG_HYCALL, "[%d] SET BUFFER=0x%p",
- vmid, iobuf.req_buf);
-
- hpa = gpa2hpa(vm, iobuf.req_buf);
- if (hpa == INVALID_HPA) {
- pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
- __func__, vm->vm_id, iobuf.req_buf);
- target_vm->sw.io_shared_page = NULL;
- ret = -EINVAL;
- } else {
- target_vm->sw.io_shared_page = hpa2hva(hpa);
- for (i = 0U; i < VHM_REQUEST_MAX; i++) {
- set_vhm_req_state(target_vm, i, REQ_STATE_FREE);
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (copy_from_gpa(vm, &iobuf, param, sizeof(iobuf)) != 0) {
+ pr_err("%p %s: Unable copy param to vm\n", target_vm,
__func__);
+ } else {
+ dev_dbg(ACRN_DBG_HYCALL, "[%d] SET BUFFER=0x%p",
+ vmid, iobuf.req_buf);
+
+ hpa = gpa2hpa(vm, iobuf.req_buf);
+ if (hpa == INVALID_HPA) {
+ pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+ __func__, vm->vm_id, iobuf.req_buf);
+ target_vm->sw.io_shared_page = NULL;
+ } else {
+ target_vm->sw.io_shared_page = hpa2hva(hpa);
+ for (i = 0U; i < VHM_REQUEST_MAX; i++) {
+ set_vhm_req_state(target_vm, i, REQ_STATE_FREE);
+ }
+ ret = 0;
}
-
- ret = 0;
- }
+ }
}

return ret;
@@ -562,10 +551,10 @@ int32_t hcall_notify_ioreq_finish(uint16_t vmid,
uint16_t vcpu_id) {
struct acrn_vcpu *vcpu;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret = -EINVAL;
+ int32_t ret = -1;

/* make sure we have set req_buf */
- if (is_valid_vm(target_vm) && (target_vm->sw.io_shared_page != NULL)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) &&
+(target_vm->sw.io_shared_page != NULL)) {
dev_dbg(ACRN_DBG_HYCALL, "[%d] NOTIFY_FINISH for vcpu %d",
vmid, vcpu_id);

@@ -693,26 +682,21 @@ int32_t hcall_set_vm_memory_regions(struct
acrn_vm *vm, uint64_t param) {
struct set_regions regions;
struct vm_memory_region mr;
- struct acrn_vm *target_vm;
- uint16_t target_vm_id;
+ struct acrn_vm *target_vm = NULL;
uint32_t idx;
- int32_t ret = -EFAULT;
-
+ int32_t ret = -1;

(void)memset((void *)&regions, 0U, sizeof(regions));

if (copy_from_gpa(vm, &regions, param, sizeof(regions)) == 0) {
- target_vm = get_vm_from_vmid(regions.vmid);
- target_vm_id = target_vm->vm_id;
- if ((target_vm_id >= CONFIG_MAX_VM_NUM) ||
(get_vm_config(target_vm_id)->type != NORMAL_VM)
- || (target_vm->state == VM_STATE_INVALID)) {
- pr_err("%p %s:target_vm is invalid or Targeting to service vm",
target_vm, __func__);
- } else {
+ if (regions.vmid < CONFIG_MAX_VM_NUM) {
+ target_vm = get_vm_from_vmid(regions.vmid);
+ }
+ if ((target_vm != NULL) && is_valid_vm(target_vm) &&
+is_normal_vm(target_vm)) {
idx = 0U;
while (idx < regions.mr_num) {
if (copy_from_gpa(vm, &mr, regions.regions_gpa + idx *
sizeof(mr), sizeof(mr)) != 0) {
pr_err("%s: Copy mr entry fail from vm\n", __func__);
- ret = -EFAULT;
break;
}

@@ -722,6 +706,8 @@ int32_t hcall_set_vm_memory_regions(struct
acrn_vm *vm, uint64_t param)
}
idx++;
}
+ } else {
+ pr_err("%p %s:target_vm is invalid or Targeting to service vm",
+target_vm, __func__);
}
}

@@ -781,20 +767,18 @@ int32_t hcall_write_protect_page(struct acrn_vm
*vm, uint16_t vmid, uint64_t wp_ {
struct wp_data wp;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm) || is_sos_vm(target_vm)) {
- pr_err("%p %s: target_vm is invalid or Targeting to service vm",
target_vm, __func__);
- ret = -EINVAL;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&wp, 0U, sizeof(wp));

if (copy_from_gpa(vm, &wp, wp_gpa, sizeof(wp)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -EFAULT;
} else {
ret = write_protect_page(target_vm, &wp);
}
+ } else {
+ pr_err("%p %s: target_vm is invalid", target_vm, __func__);
}

return ret;
@@ -815,26 +799,24 @@ int32_t hcall_write_protect_page(struct acrn_vm
*vm, uint16_t vmid, uint64_t wp_
*/
int32_t hcall_gpa_to_hpa(struct acrn_vm *vm, uint16_t vmid, uint64_t
param) {
- int32_t ret;
+ int32_t ret = -1;
struct vm_gpa2hpa v_gpa2hpa;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

(void)memset((void *)&v_gpa2hpa, 0U, sizeof(v_gpa2hpa));
- if (!is_valid_vm(target_vm) || (copy_from_gpa(vm, &v_gpa2hpa, param,
sizeof(v_gpa2hpa)) != 0)) {
- pr_err("target_vm is invalid or HCALL gpa2hpa: Unable copy param
from vm\n");
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && (!is_prelaunched_vm(target_vm))
+ && (copy_from_gpa(vm, &v_gpa2hpa, param,
sizeof(v_gpa2hpa)) == 0)) {
v_gpa2hpa.hpa = gpa2hpa(target_vm, v_gpa2hpa.gpa);
if (v_gpa2hpa.hpa == INVALID_HPA) {
pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
__func__, target_vm->vm_id, v_gpa2hpa.gpa);
- ret = -EINVAL;
} else if (copy_to_gpa(vm, &v_gpa2hpa, param,
sizeof(v_gpa2hpa)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else {
ret = 0;
}
+ } else {
+ pr_err("target_vm is invalid or HCALL gpa2hpa: Unable copy param
from
+vm\n");
}

return ret;
@@ -860,7 +842,7 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
bool bdf_valid = true;
bool iommu_valid = true;

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
if (param < 0x10000UL) {
bdf = (uint16_t) param;
} else {
@@ -894,7 +876,7 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
(uint8_t)(bdf >> 8U), (uint8_t)(bdf & 0xffU));
}
} else {
- pr_err("%s, vm is null\n", __func__);
+ pr_err("%s, target vm is invalid\n", __func__);
ret = -EINVAL;
}

@@ -915,21 +897,18 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
*/
int32_t hcall_deassign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t
param) {
- int32_t ret = 0;
+ int32_t ret = -1;
uint16_t bdf;
bool bdf_valid = true;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
if (param < 0x10000UL) {
bdf = (uint16_t) param;
} else {
if (copy_from_gpa(vm, &bdf, param, sizeof(bdf)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
bdf_valid = false;
- ret = -1;
}
}

@@ -955,35 +934,32 @@ int32_t hcall_deassign_ptdev(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
*/
int32_t hcall_set_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid,
uint64_t param) {
- int32_t ret;
+ int32_t ret = -1;
struct hc_ptdev_irq irq;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

(void)memset((void *)&irq, 0U, sizeof(irq));
- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
- pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
- } else {
- /* Inform vPCI about the interupt info changes */
- vpci_set_ptdev_intr_info(target_vm, irq.virt_bdf, irq.phys_bdf);
-
- if (irq.type == IRQ_INTX) {
- ret = ptirq_add_intx_remapping(target_vm, irq.is.intx.virt_pin,
- irq.is.intx.phys_pin, irq.is.intx.pic_pin);
- } else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
- (irq.is.msix.vector_cnt <=
CONFIG_MAX_MSIX_TABLE_NUM)) {
- ret = ptirq_add_msix_remapping(target_vm,
- irq.virt_bdf, irq.phys_bdf,
- irq.is.msix.vector_cnt);
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
+ pr_err("%s: Unable copy param to vm\n", __func__);
} else {
- pr_err("%s: Invalid irq type: %u or MSIX vector count: %u\n",
- __func__, irq.type, irq.is.msix.vector_cnt);
- ret = -1;
+ /* Inform vPCI about the interupt info changes */
+ vpci_set_ptdev_intr_info(target_vm, irq.virt_bdf, irq.phys_bdf);
+
+ if (irq.type == IRQ_INTX) {
+ ret = ptirq_add_intx_remapping(target_vm,
irq.is.intx.virt_pin,
+ irq.is.intx.phys_pin, irq.is.intx.pic_pin);
+ } else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
+ (irq.is.msix.vector_cnt <=
CONFIG_MAX_MSIX_TABLE_NUM)) {
+ ret = ptirq_add_msix_remapping(target_vm,
+ irq.virt_bdf, irq.phys_bdf,
+ irq.is.msix.vector_cnt);
+ } else {
+ pr_err("%s: Invalid irq type: %u or MSIX vector
count: %u\n",
+ __func__, irq.type, irq.is.msix.vector_cnt);
+ }
}
}
-
return ret;
}

@@ -1001,20 +977,20 @@ int32_t hcall_set_ptdev_intr_info(struct acrn_vm
*vm, uint16_t vmid, uint64_t pa int32_t hcall_reset_ptdev_intr_info(struct
acrn_vm *vm, uint16_t vmid, uint64_t param) {
- int32_t ret = 0;
+ int32_t ret = -1;
struct hc_ptdev_irq irq;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&irq, 0U, sizeof(irq));

if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else if (irq.type == IRQ_INTX) {
ptirq_remove_intx_remapping(target_vm,
irq.is.intx.virt_pin,
irq.is.intx.pic_pin);
+ ret = 0;
} else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
(irq.is.msix.vector_cnt <=
CONFIG_MAX_MSIX_TABLE_NUM)) {

@@ -1027,13 +1003,11 @@ hcall_reset_ptdev_intr_info(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
ptirq_remove_msix_remapping(target_vm,
irq.virt_bdf,
irq.is.msix.vector_cnt);
+ ret = 0;
} else {
pr_err("%s: Invalid irq type: %u or MSIX vector count: %u\n",
__func__, irq.type, irq.is.msix.vector_cnt);
- ret = -1;
}
- } else {
- ret = -1;
}

return ret;
@@ -1052,16 +1026,14 @@ hcall_reset_ptdev_intr_info(struct acrn_vm *vm,
uint16_t vmid, uint64_t param) int32_t hcall_get_cpu_pm_state(struct
acrn_vm *vm, uint64_t cmd, uint64_t param) {
uint16_t target_vm_id;
- struct acrn_vm *target_vm;
- int32_t ret;
+ struct acrn_vm *target_vm = NULL;
+ int32_t ret = -1;

target_vm_id = (uint16_t)((cmd & PMCMD_VMID_MASK) >>
PMCMD_VMID_SHIFT);
- target_vm = get_vm_from_vmid(target_vm_id);
-
- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
-
+ if (target_vm_id < CONFIG_MAX_VM_NUM) {
+ target_vm = get_vm_from_vmid(target_vm_id);
+ }
+ if ((target_vm != NULL) && is_valid_vm(target_vm) &&
+is_normal_vm(target_vm)) {
switch (cmd & PMCMD_TYPE_MASK) {
case PMCMD_GET_PX_CNT: {

@@ -1175,7 +1147,7 @@ int32_t hcall_vm_intr_monitor(struct acrn_vm *vm,
uint16_t vmid, uint64_t param)
uint64_t hpa;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
/* the param for this hypercall is page aligned */
hpa = gpa2hpa(vm, param);
if (hpa != INVALID_HPA) {
diff --git a/hypervisor/include/arch/x86/guest/vm.h
b/hypervisor/include/arch/x86/guest/vm.h
index b9762f5..20b6687 100644
--- a/hypervisor/include/arch/x86/guest/vm.h
+++ b/hypervisor/include/arch/x86/guest/vm.h
@@ -205,6 +205,7 @@ void prepare_vm(uint16_t vm_id, struct
acrn_vm_config *vm_config); void launch_vms(uint16_t pcpu_id); bool
is_valid_vm(const struct acrn_vm *vm); bool is_sos_vm(const struct
acrn_vm *vm);
+bool is_normal_vm(const struct acrn_vm *vm);
bool is_prelaunched_vm(const struct acrn_vm *vm); uint16_t
get_vmid_by_uuid(const uint8_t *uuid); struct acrn_vm
*get_vm_from_vmid(uint16_t vm_id);
--
2.7.4



Re: [PATCH] hv: release vpid from reset_vcpu()

Eddie Dong
 

If " vpid = vm_id * CONFIG_MAX_VCPUS_PER_VM + vcpu_id + 1;" works, it is much simpler than a pre-assigned value in vm_config.

Comments from others?
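For reference, a minimal sketch of the statically derived VPID suggested above; the helper name and placement are illustrative only (VPID 0 is reserved by hardware, hence the "+ 1"):

    /* Sketch only: with a fixed vm_id/vcpu_id -> VPID mapping there is no
     * allocation or release bookkeeping to get wrong. */
    static inline uint16_t vcpu_vpid(uint16_t vm_id, uint16_t vcpu_id)
    {
        return (uint16_t)((vm_id * CONFIG_MAX_VCPUS_PER_VM) + vcpu_id + 1U);
    }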

-----Original Message-----
From: Grandhi, Sainath
Sent: Friday, April 19, 2019 11:24 AM
To: acrn-dev@...; Dong, Eddie <eddie.dong@...>
Subject: RE: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

How about setting a static array in vm_config?

-----Original Message-----
From: acrn-dev@...
[mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Thursday, April 18, 2019 5:35 PM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Cc: Grandhi, Sainath <sainath.grandhi@...>
Subject: Re: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

How about this simple way:

/*
* + 1, because 0 is invalid for vpid
* vpid is 16 bits which should be large enough for ACRN
*/
vpid = vm_id * CONFIG_MAX_VCPUS_PER_VM + vcpu_id + 1;

Thanks,
Zide

On 4/18/19 5:20 PM, Dong, Eddie wrote:
Good catch !

However, moving forward, we are adopting the style of using statically allocated
resources
rather than dynamic resources, for FuSa reasons. Can we change the
design to use a static vpid as well?
We may be able to map vm:vcpu to a dedicated vpid.

Comments from others?

Thx Eddie

-----Original Message-----
From: acrn-dev@...
[mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Friday, April 19, 2019 6:53 AM
To: acrn-dev@...
Cc: Chen, Zide <zide.chen@...>; Grandhi, Sainath
<sainath.grandhi@...>
Subject: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

Currently the vpid is not released in reset_vcpu(), hence the vpid
resource could easily be exhausted if any VMs are re-launched.

This patch implements the vpid resource as a bitmap rather than the
simple incremental allocation, so that it can be freed.

Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@...>
Signed-off-by: Sainath Grandhi <sainath.grandhi@...>
---
hypervisor/arch/x86/guest/vcpu.c | 2 ++
hypervisor/arch/x86/mmu.c | 25 +++++++++++++++----------
hypervisor/include/arch/x86/mmu.h | 8 ++++++++
hypervisor/include/arch/x86/vmx.h | 3 ---
4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vcpu.c
b/hypervisor/arch/x86/guest/vcpu.c
index 81bcc1a75a08..f48ca50d9d12 100644
--- a/hypervisor/arch/x86/guest/vcpu.c
+++ b/hypervisor/arch/x86/guest/vcpu.c
@@ -579,6 +579,8 @@ void reset_vcpu(struct acrn_vcpu *vcpu)
}
vcpu->arch.cur_context = NORMAL_WORLD;

+ release_vpid(vcpu->arch.vpid);
+
vlapic = vcpu_vlapic(vcpu);
vlapic_reset(vlapic);

diff --git a/hypervisor/arch/x86/mmu.c b/hypervisor/arch/x86/mmu.c
index 9e3e4701e586..651bc51d472a 100644
--- a/hypervisor/arch/x86/mmu.c
+++ b/hypervisor/arch/x86/mmu.c
@@ -29,6 +29,7 @@

#include <types.h>
#include <atomic.h>
+#include <bits.h>
#include <page.h>
#include <pgtable.h>
#include <cpu_caps.h>
@@ -49,7 +50,9 @@ static uint8_t sanitized_page[PAGE_SIZE]
__aligned(PAGE_SIZE);
* is the value of the VPID VM-execution control field in the VMCS.
* (VM entry ensures that this value is never 0000H).
*/
-static uint16_t vmx_vpid_nr = VMX_MIN_NR_VPID;
+#define VMX_MAX_NR_VPID (CONFIG_MAX_VM_NUM *
CONFIG_MAX_VCPUS_PER_VM)
+#define VPID_BITMAP_ARRAY_SIZE
INT_DIV_ROUNDUP(VMX_MAX_NR_VPID,
64U)
+static uint64_t vmx_vpid_bitmaps[VPID_BITMAP_ARRAY_SIZE];

#define INVEPT_TYPE_SINGLE_CONTEXT 1UL
#define INVEPT_TYPE_ALL_CONTEXTS 2UL
@@ -121,23 +124,25 @@ static inline void local_invept(uint64_t
type, struct invept_desc desc)

uint16_t allocate_vpid(void)
{
- uint16_t vpid = atomic_xadd16(&vmx_vpid_nr, 1U);
+ uint16_t vpid = (uint16_t)ffz64_ex(vmx_vpid_bitmaps,
VMX_MAX_NR_VPID);

- /* TODO: vpid overflow */
if (vpid >= VMX_MAX_NR_VPID) {
- pr_err("%s, vpid overflow\n", __func__);
- /*
- * set vmx_vpid_nr to VMX_MAX_NR_VPID to disable vpid
- * since next atomic_xadd16 will always large than
- * VMX_MAX_NR_VPID.
- */
- vmx_vpid_nr = VMX_MAX_NR_VPID;
+ pr_fatal("%s, vpid overflow: 0x%x\n", __func__, vpid);
+
+ /* Should not happen. */
vpid = 0U;
+ } else {
+ bitmap_set_lock((uint16_t)(vpid & 0x3FU), vmx_vpid_bitmaps
+
(vpid >>
+6U));
}

return vpid;
}

+void release_vpid(uint16_t vpid)
+{
+ bitmap_clear_lock((vpid & 0x3FU), vmx_vpid_bitmaps + (vpid >>
+6U)); }
+
void flush_vpid_single(uint16_t vpid)
{
if (vpid != 0U) {
diff --git a/hypervisor/include/arch/x86/mmu.h
b/hypervisor/include/arch/x86/mmu.h
index b307e03b274d..0edb205fb1b5 100644
--- a/hypervisor/include/arch/x86/mmu.h
+++ b/hypervisor/include/arch/x86/mmu.h
@@ -143,6 +143,14 @@ void
hv_access_memory_region_update(uint64_t
base, uint64_t size);
* @retval >0 the valid VPID
*/
uint16_t allocate_vpid(void);
+/**
+ * @brief VPID release
+ *
+ * @param[in] vpid the VPID to be released
+ *
+ * @return None
+ */
+void release_vpid(uint16_t vpid);
/**
* @brief Specified signle VPID flush
*
diff --git a/hypervisor/include/arch/x86/vmx.h
b/hypervisor/include/arch/x86/vmx.h
index ffb7bb9734ef..f41ec9e9391e 100644
--- a/hypervisor/include/arch/x86/vmx.h
+++ b/hypervisor/include/arch/x86/vmx.h
@@ -325,9 +325,6 @@
#define VMX_EPT_INVEPT_SINGLE_CONTEXT (1U << 25U)
#define VMX_EPT_INVEPT_GLOBAL_CONTEXT (1U << 26U)

-#define VMX_MIN_NR_VPID 1U
-#define VMX_MAX_NR_VPID (1U << 5U)
-
#define VMX_VPID_TYPE_INDIVIDUAL_ADDR 0UL
#define VMX_VPID_TYPE_SINGLE_CONTEXT 1UL
#define VMX_VPID_TYPE_ALL_CONTEXT 2UL
--
2.17.1



Re: [PATCH v7 3/5] HV: Reshuffle start_cpus and start_cpu

Eddie Dong
 

-
- /* Error condition - loop endlessly for now */
- do {
- } while (1);
+ pr_fatal("Secondary CPU%hu failed to come up", pcpu_id);
+ cpu_set_current_state(pcpu_id, PCPU_STATE_INVALID);
I am questioning the need of the flag.

If one PCPU fails to start, resume fails or boot fails. We should explicitly
do this. This is what panic does now.
Do we need to handle this additional state with this assumption?
Yes. As we talked about in the previous version of the patchset, we will reuse
start_cpus (calling start_cpu) when resetting the CPU of a lapic_pt VM. If the pcpu
fails to start, we should mark it as INVALID and return an error to the caller.
Marking it INVALID here aims at preventing a later-launched VM from using it.

This is a question of what we want to do if shutdown of an RTVM fails or bootup fails.

For power-on bootup, we clearly define our policy as PANIC. This doesn't require any additional logic for further processing.
For a shutdown failure of an RTVM, do we want to power off/on the entire platform, or do we want to try to restart the RTVM? I don't think the latter is feasible.
If this is correct, we just PANIC here and need no further logic.

In case of FuSa, the PANIC will trigger certain action to show MESSAGE to users. This is another story.




}
}

-void start_cpus(void)
+
+/**
+ * @brief Start all cpus if the bit is set in mask except itself
+ *
+ * @param[in] mask mask bits of cpus which should be started
+ *
+ * @return true if all cpus set in mask are started
+ * @return false if there are any cpus set in mask aren't started
+ */
+bool start_cpus(uint64_t mask)
{
uint16_t i;
+ uint16_t pcpu_id = get_cpu_id();
+ uint64_t need_start_mask = mask;
+ uint64_t expected_start_mask = mask;

/* secondary cpu start up will wait for pcpu_sync -> 0UL */
atomic_store64(&pcpu_sync, 1UL);

- for (i = 0U; i < phys_cpu_num; i++) {
- if (get_cpu_id() == i) {
- continue;
+ i = ffs64(need_start_mask);
+ while (i != INVALID_BIT_INDEX) {
This style is fine too. But I am wondering if the following style is better:

for (i = ffs64(need_start_mask); i != INVALID_BIT_INDEX; i =
ffs64(need_start_mask)) {

Please check from MISRAC p.o.v.
Checked with Huihuang. From MISRAC p.o.v, both while and for are fine.
BTW, all current code related to ffs64(mask) uses while.

Do you prefer we use for? If yes, I will do it in the next version.
I am fine now. If we do, we can do it later to cover all the code.



+ bitmap_clear_nolock(i, &need_start_mask);
+
+ if (pcpu_id == i) {
+ continue; /* Avoid start itself */
}

start_cpu(i);
+ i = ffs64(need_start_mask);
}

/* Trigger event to allow secondary CPUs to continue */
atomic_store64(&pcpu_sync, 0UL);
+
+ return ((pcpu_active_bitmap & expected_start_mask) ==
+expected_start_mask);
Here we can directly use mask, since it is read-only. This way, we can avoid
using both need_start_mask and expected_start_mask.
Yes, we can use mask directly and will use it in the next version.
But need_start_mask is still needed, as we change it at runtime.
We save one, and you can use a more readable name. I really couldn't distinguish expected_start_mask from need_start_mask.


}

void stop_cpus(void)
diff --git a/hypervisor/arch/x86/pm.c b/hypervisor/arch/x86/pm.c
index 3a0d1b2c..2f27565a 100644
--- a/hypervisor/arch/x86/pm.c
+++ b/hypervisor/arch/x86/pm.c
@@ -4,6 +4,7 @@
*/

#include <types.h>
+ #include <bits.h>
#include <acrn_common.h>
#include <default_acpi_info.h>
#include <platform_acpi_info.h>
@@ -140,7 +141,10 @@ void do_acpi_s3(struct pm_s_state_data
*sstate_data, uint32_t pm1a_cnt_val, uint

void host_enter_s3(struct pm_s_state_data *sstate_data, uint32_t
pm1a_cnt_val, uint32_t pm1b_cnt_val) {
+ uint16_t pcpu_nums = get_pcpu_nums();
uint64_t pmain_entry_saved;
+ uint64_t mask;
+ uint16_t i;

stac();

@@ -185,6 +189,13 @@ void host_enter_s3(struct pm_s_state_data
*sstate_data, uint32_t pm1a_cnt_val, u
write_trampoline_sym(main_entry, pmain_entry_saved);
clac();

+ /* Skip BSP */
+ for (i = 1U; i < pcpu_nums; i++) {
+ bitmap_set_nolock(i, &mask);
+ }
+
Maybe not necessary. See above comments.
BTW, we may have a bug here: if one PCPU is already offline, this code
seems incorrect.
Anyway, this is not an issue with this patch; we can decouple it from this one.
Please check after this patch.
Will check it.
No need to block this patch for now.


/* online all APs again */
- start_cpus();
+ if (!start_cpus(mask)) {
+ panic("Failed to start all APs!");
+ }
}
diff --git a/hypervisor/include/arch/x86/cpu.h
b/hypervisor/include/arch/x86/cpu.h
index 13c2fd9e..8f25ca5d 100644
--- a/hypervisor/include/arch/x86/cpu.h
+++ b/hypervisor/include/arch/x86/cpu.h
@@ -250,6 +250,7 @@ enum pcpu_boot_state {
PCPU_STATE_RUNNING,
PCPU_STATE_HALTED,
PCPU_STATE_DEAD,
+ PCPU_STATE_INVALID,
};

/* Function prototypes */
@@ -259,7 +260,7 @@ void trampoline_start16(void); void
load_cpu_state_data(void); void init_cpu_pre(uint16_t
pcpu_id_args); void init_cpu_post(uint16_t pcpu_id); -void
start_cpus(void);
+bool start_cpus(uint64_t mask);
void stop_cpus(void);
void wait_sync_change(uint64_t *sync, uint64_t wake_sync);

--
2.20.0





Re: [PATCH v7 5/5] HV: Reset physical core of lapic_pt vm when shutdown

Eddie Dong
 


I got your comment. But here is what I think: I refined stop_cpus to take one
more parameter, 'mask'. So we can reuse stop_cpus here directly instead
of doing something like the following:

1. make_pcpu_offline()
2. wait for the pcpus to go offline.

The above logic is already in stop_cpus. IMHO, it is better to reuse stop_cpus;
otherwise, we have duplicated code here.

If you prefer make_pcpu_offline and waiting here, I will do it in the next version. Sorry
for the confusion.
We don't need to call stop_cpus again. It is just one line of code that can do what stop_cpus did.
Waiting for the pcpus to go offline can be a separate API that can be shared.


Re: [PATCH] hv: release vpid from reset_vcpu()

Eddie Dong
 

I am fine.

-----Original Message-----
From: Chen, Zide
Sent: Friday, April 19, 2019 8:35 AM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Cc: Grandhi, Sainath <sainath.grandhi@...>
Subject: Re: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

How about this simple way:

/*
* + 1, because 0 is invalid for vpid
* vpid is 16 bits which should be large enough for ACRN
*/
vpid = vm_id * CONFIG_MAX_VCPUS_PER_VM + vcpu_id + 1;

Thanks,
Zide

On 4/18/19 5:20 PM, Dong, Eddie wrote:
Good catch !

However, moving forward, we are adopting the style of using statically allocated resources,
rather than dynamic resources, for FuSa reasons. Can we change the design
to use a static vpid as well?
We may be able to map vm:vcpu to a dedicated vpid.

Comments from others?

Thx Eddie

-----Original Message-----
From: acrn-dev@...
[mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Friday, April 19, 2019 6:53 AM
To: acrn-dev@...
Cc: Chen, Zide <zide.chen@...>; Grandhi, Sainath
<sainath.grandhi@...>
Subject: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

Currently the vpid is not released in reset_vcpu(), hence the vpid
resource could easily be exhausted if any VMs are re-launched.

This patch implements the vpid resource as a bitmap rather than the
simple incremental allocation, so that it can be freed.

Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@...>
Signed-off-by: Sainath Grandhi <sainath.grandhi@...>
---
hypervisor/arch/x86/guest/vcpu.c | 2 ++
hypervisor/arch/x86/mmu.c | 25 +++++++++++++++----------
hypervisor/include/arch/x86/mmu.h | 8 ++++++++
hypervisor/include/arch/x86/vmx.h | 3 ---
4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vcpu.c
b/hypervisor/arch/x86/guest/vcpu.c
index 81bcc1a75a08..f48ca50d9d12 100644
--- a/hypervisor/arch/x86/guest/vcpu.c
+++ b/hypervisor/arch/x86/guest/vcpu.c
@@ -579,6 +579,8 @@ void reset_vcpu(struct acrn_vcpu *vcpu)
}
vcpu->arch.cur_context = NORMAL_WORLD;

+ release_vpid(vcpu->arch.vpid);
+
vlapic = vcpu_vlapic(vcpu);
vlapic_reset(vlapic);

diff --git a/hypervisor/arch/x86/mmu.c b/hypervisor/arch/x86/mmu.c
index 9e3e4701e586..651bc51d472a 100644
--- a/hypervisor/arch/x86/mmu.c
+++ b/hypervisor/arch/x86/mmu.c
@@ -29,6 +29,7 @@

#include <types.h>
#include <atomic.h>
+#include <bits.h>
#include <page.h>
#include <pgtable.h>
#include <cpu_caps.h>
@@ -49,7 +50,9 @@ static uint8_t sanitized_page[PAGE_SIZE] __aligned(PAGE_SIZE);
* is the value of the VPID VM-execution control field in the VMCS.
* (VM entry ensures that this value is never 0000H).
*/
-static uint16_t vmx_vpid_nr = VMX_MIN_NR_VPID;
+#define VMX_MAX_NR_VPID (CONFIG_MAX_VM_NUM * CONFIG_MAX_VCPUS_PER_VM)
+#define VPID_BITMAP_ARRAY_SIZE INT_DIV_ROUNDUP(VMX_MAX_NR_VPID, 64U)
+static uint64_t vmx_vpid_bitmaps[VPID_BITMAP_ARRAY_SIZE];

#define INVEPT_TYPE_SINGLE_CONTEXT 1UL
#define INVEPT_TYPE_ALL_CONTEXTS 2UL
@@ -121,23 +124,25 @@ static inline void local_invept(uint64_t type, struct invept_desc desc)

uint16_t allocate_vpid(void)
{
- uint16_t vpid = atomic_xadd16(&vmx_vpid_nr, 1U);
+ uint16_t vpid = (uint16_t)ffz64_ex(vmx_vpid_bitmaps, VMX_MAX_NR_VPID);

- /* TODO: vpid overflow */
if (vpid >= VMX_MAX_NR_VPID) {
- pr_err("%s, vpid overflow\n", __func__);
- /*
- * set vmx_vpid_nr to VMX_MAX_NR_VPID to disable vpid
- * since next atomic_xadd16 will always large than
- * VMX_MAX_NR_VPID.
- */
- vmx_vpid_nr = VMX_MAX_NR_VPID;
+ pr_fatal("%s, vpid overflow: 0x%x\n", __func__, vpid);
+
+ /* Should not happen. */
vpid = 0U;
+ } else {
+ bitmap_set_lock((uint16_t)(vpid & 0x3FU), vmx_vpid_bitmaps + (vpid >> 6U));
}

return vpid;
}

+void release_vpid(uint16_t vpid)
+{
+ bitmap_clear_lock((vpid & 0x3FU), vmx_vpid_bitmaps + (vpid >> 6U));
+}
+
void flush_vpid_single(uint16_t vpid)
{
if (vpid != 0U) {
diff --git a/hypervisor/include/arch/x86/mmu.h b/hypervisor/include/arch/x86/mmu.h
index b307e03b274d..0edb205fb1b5 100644
--- a/hypervisor/include/arch/x86/mmu.h
+++ b/hypervisor/include/arch/x86/mmu.h
@@ -143,6 +143,14 @@ void hv_access_memory_region_update(uint64_t base, uint64_t size);
* @retval >0 the valid VPID
*/
uint16_t allocate_vpid(void);
+/**
+ * @brief VPID release
+ *
+ * @param[in] vpid the VPID to be released
+ *
+ * @return None
+ */
+void release_vpid(uint16_t vpid);
/**
* @brief Specified signle VPID flush
*
diff --git a/hypervisor/include/arch/x86/vmx.h b/hypervisor/include/arch/x86/vmx.h
index ffb7bb9734ef..f41ec9e9391e 100644
--- a/hypervisor/include/arch/x86/vmx.h
+++ b/hypervisor/include/arch/x86/vmx.h
@@ -325,9 +325,6 @@
#define VMX_EPT_INVEPT_SINGLE_CONTEXT (1U << 25U)
#define VMX_EPT_INVEPT_GLOBAL_CONTEXT (1U << 26U)

-#define VMX_MIN_NR_VPID 1U
-#define VMX_MAX_NR_VPID (1U << 5U)
-
#define VMX_VPID_TYPE_INDIVIDUAL_ADDR 0UL
#define VMX_VPID_TYPE_SINGLE_CONTEXT 1UL
#define VMX_VPID_TYPE_ALL_CONTEXT 2UL
--
2.17.1
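As a side note on the static-allocation direction Eddie and Zide discuss above, a minimal sketch of the vm:vcpu-to-vpid mapping could look like the following. This is not part of the posted patch; the function name is made up for illustration, and it assumes vm_id * CONFIG_MAX_VCPUS_PER_VM + vcpu_id + 1 always fits in 16 bits.

/* Sketch only: a static vpid derived from vm_id/vcpu_id.
 * With this scheme nothing needs to be released in reset_vcpu().
 */
static inline uint16_t vpid_from_vm_vcpu(uint16_t vm_id, uint16_t vcpu_id)
{
	/* + 1 because vpid 0 is invalid (VM entry requires a non-zero VPID) */
	return (uint16_t)((vm_id * CONFIG_MAX_VCPUS_PER_VM) + vcpu_id + 1U);
}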



Re: [PATCH] hv: release vpid from reset_vcpu()

Grandhi, Sainath
 

How about setting a static array in vm_config?

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Thursday, April 18, 2019 5:35 PM
To: Dong, Eddie <eddie.dong@...>; acrn-dev@...
Cc: Grandhi, Sainath <sainath.grandhi@...>
Subject: Re: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

How about this simple way:

/*
* + 1, because 0 is invalid for vpid
* vpid is 16 bits which should be large enough for ACRN
*/
vpid = vm_id * CONFIG_MAX_VCPUS_PER_VM + vcpu_id + 1;

Thanks,
Zide

On 4/18/19 5:20 PM, Dong, Eddie wrote:
Good catch !

However, moving forward, we prefer statically allocated resources rather than
dynamic resources, for FuSa reasons. Can we change the design to use a static
vpid as well?
We may be able to map vm:vcpu to a dedicated vpid.

Comments from others?

Thx Eddie

-----Original Message-----
From: acrn-dev@...
[mailto:acrn-dev@...]
On Behalf Of Chen, Zide
Sent: Friday, April 19, 2019 6:53 AM
To: acrn-dev@...
Cc: Chen, Zide <zide.chen@...>; Grandhi, Sainath
<sainath.grandhi@...>
Subject: [acrn-dev] [PATCH] hv: release vpid from reset_vcpu()

Currently the vpid is not released in reset_vcpu(), hence the vpid
resource could easily be exhausted if any VMs are re-launched.

This patch implements the vpid resource as a bitmap instead of the
simple incremental allocation, so that it can be freed.

Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@...>
Signed-off-by: Sainath Grandhi <sainath.grandhi@...>
---
hypervisor/arch/x86/guest/vcpu.c | 2 ++
hypervisor/arch/x86/mmu.c | 25 +++++++++++++++----------
hypervisor/include/arch/x86/mmu.h | 8 ++++++++
hypervisor/include/arch/x86/vmx.h | 3 ---
4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vcpu.c
b/hypervisor/arch/x86/guest/vcpu.c
index 81bcc1a75a08..f48ca50d9d12 100644
--- a/hypervisor/arch/x86/guest/vcpu.c
+++ b/hypervisor/arch/x86/guest/vcpu.c
@@ -579,6 +579,8 @@ void reset_vcpu(struct acrn_vcpu *vcpu)
}
vcpu->arch.cur_context = NORMAL_WORLD;

+ release_vpid(vcpu->arch.vpid);
+
vlapic = vcpu_vlapic(vcpu);
vlapic_reset(vlapic);

diff --git a/hypervisor/arch/x86/mmu.c b/hypervisor/arch/x86/mmu.c
index 9e3e4701e586..651bc51d472a 100644
--- a/hypervisor/arch/x86/mmu.c
+++ b/hypervisor/arch/x86/mmu.c
@@ -29,6 +29,7 @@

#include <types.h>
#include <atomic.h>
+#include <bits.h>
#include <page.h>
#include <pgtable.h>
#include <cpu_caps.h>
@@ -49,7 +50,9 @@ static uint8_t sanitized_page[PAGE_SIZE]
__aligned(PAGE_SIZE);
* is the value of the VPID VM-execution control field in the VMCS.
* (VM entry ensures that this value is never 0000H).
*/
-static uint16_t vmx_vpid_nr = VMX_MIN_NR_VPID;
+#define VMX_MAX_NR_VPID (CONFIG_MAX_VM_NUM *
CONFIG_MAX_VCPUS_PER_VM)
+#define VPID_BITMAP_ARRAY_SIZE
INT_DIV_ROUNDUP(VMX_MAX_NR_VPID,
64U)
+static uint64_t vmx_vpid_bitmaps[VPID_BITMAP_ARRAY_SIZE];

#define INVEPT_TYPE_SINGLE_CONTEXT 1UL
#define INVEPT_TYPE_ALL_CONTEXTS 2UL
@@ -121,23 +124,25 @@ static inline void local_invept(uint64_t type,
struct invept_desc desc)

uint16_t allocate_vpid(void)
{
- uint16_t vpid = atomic_xadd16(&vmx_vpid_nr, 1U);
+ uint16_t vpid = (uint16_t)ffz64_ex(vmx_vpid_bitmaps,
VMX_MAX_NR_VPID);

- /* TODO: vpid overflow */
if (vpid >= VMX_MAX_NR_VPID) {
- pr_err("%s, vpid overflow\n", __func__);
- /*
- * set vmx_vpid_nr to VMX_MAX_NR_VPID to disable vpid
- * since next atomic_xadd16 will always large than
- * VMX_MAX_NR_VPID.
- */
- vmx_vpid_nr = VMX_MAX_NR_VPID;
+ pr_fatal("%s, vpid overflow: 0x%x\n", __func__, vpid);
+
+ /* Should not happen. */
vpid = 0U;
+ } else {
+ bitmap_set_lock((uint16_t)(vpid & 0x3FU), vmx_vpid_bitmaps
+
(vpid >>
+6U));
}

return vpid;
}

+void release_vpid(uint16_t vpid)
+{
+ bitmap_clear_lock((vpid & 0x3FU), vmx_vpid_bitmaps + (vpid >> 6U));
+}
+
void flush_vpid_single(uint16_t vpid)
{
if (vpid != 0U) {
diff --git a/hypervisor/include/arch/x86/mmu.h
b/hypervisor/include/arch/x86/mmu.h
index b307e03b274d..0edb205fb1b5 100644
--- a/hypervisor/include/arch/x86/mmu.h
+++ b/hypervisor/include/arch/x86/mmu.h
@@ -143,6 +143,14 @@ void hv_access_memory_region_update(uint64_t
base, uint64_t size);
* @retval >0 the valid VPID
*/
uint16_t allocate_vpid(void);
+/**
+ * @brief VPID release
+ *
+ * @param[in] vpid the VPID to be released
+ *
+ * @return None
+ */
+void release_vpid(uint16_t vpid);
/**
* @brief Specified signle VPID flush
*
diff --git a/hypervisor/include/arch/x86/vmx.h
b/hypervisor/include/arch/x86/vmx.h
index ffb7bb9734ef..f41ec9e9391e 100644
--- a/hypervisor/include/arch/x86/vmx.h
+++ b/hypervisor/include/arch/x86/vmx.h
@@ -325,9 +325,6 @@
#define VMX_EPT_INVEPT_SINGLE_CONTEXT (1U << 25U)
#define VMX_EPT_INVEPT_GLOBAL_CONTEXT (1U << 26U)

-#define VMX_MIN_NR_VPID 1U
-#define VMX_MAX_NR_VPID (1U << 5U)
-
#define VMX_VPID_TYPE_INDIVIDUAL_ADDR 0UL
#define VMX_VPID_TYPE_SINGLE_CONTEXT 1UL
#define VMX_VPID_TYPE_ALL_CONTEXT 2UL
--
2.17.1



[PATCH v2 2/2] HV: validate target vm in hypercall

Victor Sun
 

- The target vm of most hypercalls should be a NORMAL_VM; in a few
exceptions it might be a SOS_VM. We should validate this.

- Please be aware that some hypercalls might have limitations on specific
target vms such as RT or SAFETY VMs; this is left as a "TODO" for the future.

- Unify the coding style:

int32_t hcall_foo(vm, target_vm_id, ...)
{
	int32_t ret = -1;
	...

	if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
		ret = ....
	}

	return ret;
}

Signed-off-by: Victor Sun <victor.sun@...>
---
hypervisor/arch/x86/guest/vm.c | 9 ++
hypervisor/common/hypercall.c | 256 +++++++++++++++------------------
hypervisor/include/arch/x86/guest/vm.h | 1 +
3 files changed, 124 insertions(+), 142 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index 242e716..3dd9a02 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -64,6 +64,15 @@ bool is_sos_vm(const struct acrn_vm *vm)
* @pre vm != NULL
* @pre vm->vmid < CONFIG_MAX_VM_NUM
*/
+bool is_normal_vm(const struct acrn_vm *vm)
+{
+ return (get_vm_config(vm->vm_id)->type == NORMAL_VM);
+}
+
+/**
+ * @pre vm != NULL
+ * @pre vm->vmid < CONFIG_MAX_VM_NUM
+ */
bool is_prelaunched_vm(const struct acrn_vm *vm)
{
struct acrn_vm_config *vm_config;
diff --git a/hypervisor/common/hypercall.c b/hypervisor/common/hypercall.c
index 697bcda..bc2cdcb 100644
--- a/hypervisor/common/hypercall.c
+++ b/hypervisor/common/hypercall.c
@@ -200,12 +200,11 @@ int32_t hcall_create_vm(struct acrn_vm *vm, uint64_t param)
*/
int32_t hcall_destroy_vm(uint16_t vmid)
{
- int32_t ret;
+ int32_t ret = -1;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
ret = shutdown_vm(target_vm);
}

@@ -225,15 +224,13 @@ int32_t hcall_destroy_vm(uint16_t vmid)
*/
int32_t hcall_start_vm(uint16_t vmid)
{
- int32_t ret = 0;
+ int32_t ret = -1;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else if (target_vm->sw.io_shared_page == NULL) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) && (target_vm->sw.io_shared_page != NULL)) {
+ /* TODO: check target_vm guest_flags */
start_vm(target_vm);
+ ret = 0;
}

return ret;
@@ -253,11 +250,10 @@ int32_t hcall_start_vm(uint16_t vmid)
int32_t hcall_pause_vm(uint16_t vmid)
{
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
pause_vm(target_vm);
ret = 0;
}
@@ -282,23 +278,21 @@ int32_t hcall_pause_vm(uint16_t vmid)
*/
int32_t hcall_create_vcpu(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
{
- int32_t ret;
+ int32_t ret = -1;
uint16_t pcpu_id;
struct acrn_create_vcpu cv;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm) || (param == 0U)) {
- ret = -1;
- } else if (copy_from_gpa(vm, &cv, param, sizeof(cv)) != 0) {
- pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
- } else {
- pcpu_id = allocate_pcpu();
- if (pcpu_id == INVALID_CPU_ID) {
- pr_err("%s: No physical available\n", __func__);
- ret = -1;
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) && (param != 0U)) {
+ if (copy_from_gpa(vm, &cv, param, sizeof(cv)) != 0) {
+ pr_err("%s: Unable copy param to vm\n", __func__);
} else {
- ret = prepare_vcpu(target_vm, pcpu_id);
+ pcpu_id = allocate_pcpu();
+ if (pcpu_id == INVALID_CPU_ID) {
+ pr_err("%s: No physical available\n", __func__);
+ } else {
+ ret = prepare_vcpu(target_vm, pcpu_id);
+ }
}
}

@@ -320,11 +314,10 @@ int32_t hcall_create_vcpu(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
int32_t hcall_reset_vm(uint16_t vmid)
{
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm) || is_sos_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ /* TODO: check target_vm guest_flags */
ret = reset_vm(target_vm);
}
return ret;
@@ -352,7 +345,7 @@ int32_t hcall_set_vcpu_regs(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
int32_t ret = -1;

/* Only allow setup init ctx while target_vm is inactive */
- if (is_valid_vm(target_vm) && (param != 0U) && (!is_sos_vm(target_vm)) && (target_vm->state != VM_STARTED)) {
+ if (is_valid_vm(target_vm) && (param != 0U) && is_normal_vm(target_vm) && (target_vm->state != VM_STARTED)) {
if (copy_from_gpa(vm, &vcpu_regs, param, sizeof(vcpu_regs)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
} else if (vcpu_regs.vcpu_id >= CONFIG_MAX_VCPUS_PER_VM) {
@@ -388,26 +381,24 @@ int32_t hcall_set_irqline(const struct acrn_vm *vm, uint16_t vmid,
{
uint32_t irq_pic;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm)) {
- ret = -EINVAL;
- } else if (ops->gsi >= vioapic_pincount(vm)) {
- ret = -EINVAL;
- } else {
- if (ops->gsi < vpic_pincount()) {
- /*
- * IRQ line for 8254 timer is connected to
- * I/O APIC pin #2 but PIC pin #0,route GSI
- * number #2 to PIC IRQ #0.
- */
- irq_pic = (ops->gsi == 2U) ? 0U : ops->gsi;
- vpic_set_irqline(target_vm, irq_pic, ops->op);
- }
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (ops->gsi < vioapic_pincount(vm)) {
+ if (ops->gsi < vpic_pincount()) {
+ /*
+ * IRQ line for 8254 timer is connected to
+ * I/O APIC pin #2 but PIC pin #0,route GSI
+ * number #2 to PIC IRQ #0.
+ */
+ irq_pic = (ops->gsi == 2U) ? 0U : ops->gsi;
+ vpic_set_irqline(target_vm, irq_pic, ops->op);
+ }

- /* handle IOAPIC irqline */
- vioapic_set_irqline_lock(target_vm, ops->gsi, ops->op);
- ret = 0;
+ /* handle IOAPIC irqline */
+ vioapic_set_irqline_lock(target_vm, ops->gsi, ops->op);
+ ret = 0;
+ }
}

return ret;
@@ -479,11 +470,10 @@ int32_t hcall_inject_msi(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
struct acrn_msi_entry msi;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&msi, 0U, sizeof(msi));
if (copy_from_gpa(vm, &msi, param, sizeof(msi)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else {
/* For target cpu with lapic pt, send ipi instead of injection via vlapic */
if (is_lapic_pt(target_vm)) {
@@ -518,30 +508,29 @@ int32_t hcall_set_ioreq_buffer(struct acrn_vm *vm, uint16_t vmid, uint64_t param
struct acrn_set_ioreq_buffer iobuf;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
uint16_t i;
- int32_t ret;
+ int32_t ret = -1;

(void)memset((void *)&iobuf, 0U, sizeof(iobuf));
- if (!is_valid_vm(target_vm) || (copy_from_gpa(vm, &iobuf, param, sizeof(iobuf)) != 0)) {
- pr_err("%p %s: target_vm is not valid or Unable copy param to vm\n", target_vm, __func__);
- ret = -1;
- } else {
- dev_dbg(ACRN_DBG_HYCALL, "[%d] SET BUFFER=0x%p",
- vmid, iobuf.req_buf);
-
- hpa = gpa2hpa(vm, iobuf.req_buf);
- if (hpa == INVALID_HPA) {
- pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
- __func__, vm->vm_id, iobuf.req_buf);
- target_vm->sw.io_shared_page = NULL;
- ret = -EINVAL;
- } else {
- target_vm->sw.io_shared_page = hpa2hva(hpa);
- for (i = 0U; i < VHM_REQUEST_MAX; i++) {
- set_vhm_req_state(target_vm, i, REQ_STATE_FREE);
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (copy_from_gpa(vm, &iobuf, param, sizeof(iobuf)) != 0) {
+ pr_err("%p %s: Unable copy param to vm\n", target_vm, __func__);
+ } else {
+ dev_dbg(ACRN_DBG_HYCALL, "[%d] SET BUFFER=0x%p",
+ vmid, iobuf.req_buf);
+
+ hpa = gpa2hpa(vm, iobuf.req_buf);
+ if (hpa == INVALID_HPA) {
+ pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+ __func__, vm->vm_id, iobuf.req_buf);
+ target_vm->sw.io_shared_page = NULL;
+ } else {
+ target_vm->sw.io_shared_page = hpa2hva(hpa);
+ for (i = 0U; i < VHM_REQUEST_MAX; i++) {
+ set_vhm_req_state(target_vm, i, REQ_STATE_FREE);
+ }
+ ret = 0;
}
-
- ret = 0;
- }
+ }
}

return ret;
@@ -562,10 +551,10 @@ int32_t hcall_notify_ioreq_finish(uint16_t vmid, uint16_t vcpu_id)
{
struct acrn_vcpu *vcpu;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret = -EINVAL;
+ int32_t ret = -1;

/* make sure we have set req_buf */
- if (is_valid_vm(target_vm) && (target_vm->sw.io_shared_page != NULL)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm) && (target_vm->sw.io_shared_page != NULL)) {
dev_dbg(ACRN_DBG_HYCALL, "[%d] NOTIFY_FINISH for vcpu %d",
vmid, vcpu_id);

@@ -693,26 +682,21 @@ int32_t hcall_set_vm_memory_regions(struct acrn_vm *vm, uint64_t param)
{
struct set_regions regions;
struct vm_memory_region mr;
- struct acrn_vm *target_vm;
- uint16_t target_vm_id;
+ struct acrn_vm *target_vm = NULL;
uint32_t idx;
- int32_t ret = -EFAULT;
-
+ int32_t ret = -1;

(void)memset((void *)&regions, 0U, sizeof(regions));

if (copy_from_gpa(vm, &regions, param, sizeof(regions)) == 0) {
- target_vm = get_vm_from_vmid(regions.vmid);
- target_vm_id = target_vm->vm_id;
- if ((target_vm_id >= CONFIG_MAX_VM_NUM) || (get_vm_config(target_vm_id)->type != NORMAL_VM)
- || (target_vm->state == VM_STATE_INVALID)) {
- pr_err("%p %s:target_vm is invalid or Targeting to service vm", target_vm, __func__);
- } else {
+ if (regions.vmid < CONFIG_MAX_VM_NUM) {
+ target_vm = get_vm_from_vmid(regions.vmid);
+ }
+ if ((target_vm != NULL) && is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
idx = 0U;
while (idx < regions.mr_num) {
if (copy_from_gpa(vm, &mr, regions.regions_gpa + idx * sizeof(mr), sizeof(mr)) != 0) {
pr_err("%s: Copy mr entry fail from vm\n", __func__);
- ret = -EFAULT;
break;
}

@@ -722,6 +706,8 @@ int32_t hcall_set_vm_memory_regions(struct acrn_vm *vm, uint64_t param)
}
idx++;
}
+ } else {
+ pr_err("%p %s:target_vm is invalid or Targeting to service vm", target_vm, __func__);
}
}

@@ -781,20 +767,18 @@ int32_t hcall_write_protect_page(struct acrn_vm *vm, uint16_t vmid, uint64_t wp_
{
struct wp_data wp;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);
- int32_t ret;
+ int32_t ret = -1;

- if (!is_valid_vm(target_vm) || is_sos_vm(target_vm)) {
- pr_err("%p %s: target_vm is invalid or Targeting to service vm", target_vm, __func__);
- ret = -EINVAL;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&wp, 0U, sizeof(wp));

if (copy_from_gpa(vm, &wp, wp_gpa, sizeof(wp)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -EFAULT;
} else {
ret = write_protect_page(target_vm, &wp);
}
+ } else {
+ pr_err("%p %s: target_vm is invalid", target_vm, __func__);
}

return ret;
@@ -815,26 +799,24 @@ int32_t hcall_write_protect_page(struct acrn_vm *vm, uint16_t vmid, uint64_t wp_
*/
int32_t hcall_gpa_to_hpa(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
{
- int32_t ret;
+ int32_t ret = -1;
struct vm_gpa2hpa v_gpa2hpa;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

(void)memset((void *)&v_gpa2hpa, 0U, sizeof(v_gpa2hpa));
- if (!is_valid_vm(target_vm) || (copy_from_gpa(vm, &v_gpa2hpa, param, sizeof(v_gpa2hpa)) != 0)) {
- pr_err("target_vm is invalid or HCALL gpa2hpa: Unable copy param from vm\n");
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && (!is_prelaunched_vm(target_vm))
+ && (copy_from_gpa(vm, &v_gpa2hpa, param, sizeof(v_gpa2hpa)) == 0)) {
v_gpa2hpa.hpa = gpa2hpa(target_vm, v_gpa2hpa.gpa);
if (v_gpa2hpa.hpa == INVALID_HPA) {
pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
__func__, target_vm->vm_id, v_gpa2hpa.gpa);
- ret = -EINVAL;
} else if (copy_to_gpa(vm, &v_gpa2hpa, param, sizeof(v_gpa2hpa)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else {
ret = 0;
}
+ } else {
+ pr_err("target_vm is invalid or HCALL gpa2hpa: Unable copy param from vm\n");
}

return ret;
@@ -860,7 +842,7 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
bool bdf_valid = true;
bool iommu_valid = true;

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
if (param < 0x10000UL) {
bdf = (uint16_t) param;
} else {
@@ -894,7 +876,7 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
(uint8_t)(bdf >> 8U), (uint8_t)(bdf & 0xffU));
}
} else {
- pr_err("%s, vm is null\n", __func__);
+ pr_err("%s, target vm is invalid\n", __func__);
ret = -EINVAL;
}

@@ -915,21 +897,18 @@ int32_t hcall_assign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
*/
int32_t hcall_deassign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
{
- int32_t ret = 0;
+ int32_t ret = -1;
uint16_t bdf;
bool bdf_valid = true;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
if (param < 0x10000UL) {
bdf = (uint16_t) param;
} else {
if (copy_from_gpa(vm, &bdf, param, sizeof(bdf)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
bdf_valid = false;
- ret = -1;
}
}

@@ -955,35 +934,32 @@ int32_t hcall_deassign_ptdev(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
*/
int32_t hcall_set_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
{
- int32_t ret;
+ int32_t ret = -1;
struct hc_ptdev_irq irq;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

(void)memset((void *)&irq, 0U, sizeof(irq));
- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
- pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
- } else {
- /* Inform vPCI about the interupt info changes */
- vpci_set_ptdev_intr_info(target_vm, irq.virt_bdf, irq.phys_bdf);
-
- if (irq.type == IRQ_INTX) {
- ret = ptirq_add_intx_remapping(target_vm, irq.is.intx.virt_pin,
- irq.is.intx.phys_pin, irq.is.intx.pic_pin);
- } else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
- (irq.is.msix.vector_cnt <= CONFIG_MAX_MSIX_TABLE_NUM)) {
- ret = ptirq_add_msix_remapping(target_vm,
- irq.virt_bdf, irq.phys_bdf,
- irq.is.msix.vector_cnt);
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
+ if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
+ pr_err("%s: Unable copy param to vm\n", __func__);
} else {
- pr_err("%s: Invalid irq type: %u or MSIX vector count: %u\n",
- __func__, irq.type, irq.is.msix.vector_cnt);
- ret = -1;
+ /* Inform vPCI about the interupt info changes */
+ vpci_set_ptdev_intr_info(target_vm, irq.virt_bdf, irq.phys_bdf);
+
+ if (irq.type == IRQ_INTX) {
+ ret = ptirq_add_intx_remapping(target_vm, irq.is.intx.virt_pin,
+ irq.is.intx.phys_pin, irq.is.intx.pic_pin);
+ } else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
+ (irq.is.msix.vector_cnt <= CONFIG_MAX_MSIX_TABLE_NUM)) {
+ ret = ptirq_add_msix_remapping(target_vm,
+ irq.virt_bdf, irq.phys_bdf,
+ irq.is.msix.vector_cnt);
+ } else {
+ pr_err("%s: Invalid irq type: %u or MSIX vector count: %u\n",
+ __func__, irq.type, irq.is.msix.vector_cnt);
+ }
}
}
-
return ret;
}

@@ -1001,20 +977,20 @@ int32_t hcall_set_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid, uint64_t pa
int32_t
hcall_reset_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
{
- int32_t ret = 0;
+ int32_t ret = -1;
struct hc_ptdev_irq irq;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
(void)memset((void *)&irq, 0U, sizeof(irq));

if (copy_from_gpa(vm, &irq, param, sizeof(irq)) != 0) {
pr_err("%s: Unable copy param to vm\n", __func__);
- ret = -1;
} else if (irq.type == IRQ_INTX) {
ptirq_remove_intx_remapping(target_vm,
irq.is.intx.virt_pin,
irq.is.intx.pic_pin);
+ ret = 0;
} else if (((irq.type == IRQ_MSI) || (irq.type == IRQ_MSIX)) &&
(irq.is.msix.vector_cnt <= CONFIG_MAX_MSIX_TABLE_NUM)) {

@@ -1027,13 +1003,11 @@ hcall_reset_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
ptirq_remove_msix_remapping(target_vm,
irq.virt_bdf,
irq.is.msix.vector_cnt);
+ ret = 0;
} else {
pr_err("%s: Invalid irq type: %u or MSIX vector count: %u\n",
__func__, irq.type, irq.is.msix.vector_cnt);
- ret = -1;
}
- } else {
- ret = -1;
}

return ret;
@@ -1052,16 +1026,14 @@ hcall_reset_ptdev_intr_info(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
int32_t hcall_get_cpu_pm_state(struct acrn_vm *vm, uint64_t cmd, uint64_t param)
{
uint16_t target_vm_id;
- struct acrn_vm *target_vm;
- int32_t ret;
+ struct acrn_vm *target_vm = NULL;
+ int32_t ret = -1;

target_vm_id = (uint16_t)((cmd & PMCMD_VMID_MASK) >> PMCMD_VMID_SHIFT);
- target_vm = get_vm_from_vmid(target_vm_id);
-
- if (!is_valid_vm(target_vm)) {
- ret = -1;
- } else {
-
+ if (target_vm_id < CONFIG_MAX_VM_NUM) {
+ target_vm = get_vm_from_vmid(target_vm_id);
+ }
+ if ((target_vm != NULL) && is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
switch (cmd & PMCMD_TYPE_MASK) {
case PMCMD_GET_PX_CNT: {

@@ -1175,7 +1147,7 @@ int32_t hcall_vm_intr_monitor(struct acrn_vm *vm, uint16_t vmid, uint64_t param)
uint64_t hpa;
struct acrn_vm *target_vm = get_vm_from_vmid(vmid);

- if (is_valid_vm(target_vm)) {
+ if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
/* the param for this hypercall is page aligned */
hpa = gpa2hpa(vm, param);
if (hpa != INVALID_HPA) {
diff --git a/hypervisor/include/arch/x86/guest/vm.h b/hypervisor/include/arch/x86/guest/vm.h
index b9762f5..20b6687 100644
--- a/hypervisor/include/arch/x86/guest/vm.h
+++ b/hypervisor/include/arch/x86/guest/vm.h
@@ -205,6 +205,7 @@ void prepare_vm(uint16_t vm_id, struct acrn_vm_config *vm_config);
void launch_vms(uint16_t pcpu_id);
bool is_valid_vm(const struct acrn_vm *vm);
bool is_sos_vm(const struct acrn_vm *vm);
+bool is_normal_vm(const struct acrn_vm *vm);
bool is_prelaunched_vm(const struct acrn_vm *vm);
uint16_t get_vmid_by_uuid(const uint8_t *uuid);
struct acrn_vm *get_vm_from_vmid(uint16_t vm_id);
--
2.7.4


[PATCH v2 1/2] HV: check vm id param when dispatching hypercall

Victor Sun
 

If the vmcall param passed from the guest represents a vmid, we should
make sure it is a valid one, because that is a pre-condition of the following
get_vm_from_vmid(). Then we don't need the NULL VM pointer check
in is_valid_vm(), because get_vm_from_vmid() would never return NULL.

Signed-off-by: Victor Sun <victor.sun@...>
---
hypervisor/arch/x86/guest/vm.c | 5 +-
hypervisor/arch/x86/guest/vmcall.c | 107 +++++++++++++++++++++++++------------
2 files changed, 77 insertions(+), 35 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index 28f3122..242e716 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -47,9 +47,12 @@ uint16_t get_vmid_by_uuid(const uint8_t *uuid)
return vm_id;
}

+/**
+ * @pre vm != NULL
+ */
bool is_valid_vm(const struct acrn_vm *vm)
{
- return (vm != NULL) && (vm->state != VM_STATE_INVALID);
+ return (vm->state != VM_STATE_INVALID);
}

bool is_sos_vm(const struct acrn_vm *vm)
diff --git a/hypervisor/arch/x86/guest/vmcall.c b/hypervisor/arch/x86/guest/vmcall.c
index cee4797..783a31a 100644
--- a/hypervisor/arch/x86/guest/vmcall.c
+++ b/hypervisor/arch/x86/guest/vmcall.c
@@ -28,7 +28,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu *vcpu)
uint64_t param1 = vcpu_get_gpreg(vcpu, CPU_REG_RDI);
/* hypercall param2 from guest*/
uint64_t param2 = vcpu_get_gpreg(vcpu, CPU_REG_RSI);
- int32_t ret;
+ /* in case hypercall param1 is a vm id */
+ uint16_t vm_id = (uint16_t)param1;
+ bool vmid_is_valid = (vm_id < CONFIG_MAX_VM_NUM) ? true : false;
+ int32_t ret = -1;

switch (hypcall_id) {
case HC_SOS_OFFLINE_CPU:
@@ -57,69 +60,89 @@ static int32_t dispatch_hypercall(struct acrn_vcpu *vcpu)

case HC_DESTROY_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_destroy_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_destroy_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_START_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_start_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_start_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_RESET_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_reset_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_reset_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_PAUSE_VM:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_pause_vm((uint16_t)param1);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_pause_vm(vm_id);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_CREATE_VCPU:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_create_vcpu(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_create_vcpu(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_SET_VCPU_REGS:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_set_vcpu_regs(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_set_vcpu_regs(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_SET_IRQLINE:
/* param1: vmid */
- ret = hcall_set_irqline(vm, (uint16_t)param1,
- (struct acrn_irqline_ops *)&param2);
+ if (vmid_is_valid) {
+ ret = hcall_set_irqline(vm, vm_id,
+ (struct acrn_irqline_ops *)&param2);
+ }
break;

case HC_INJECT_MSI:
/* param1: vmid */
- ret = hcall_inject_msi(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_inject_msi(vm, vm_id, param2);
+ }
break;

case HC_SET_IOREQ_BUFFER:
/* param1: vmid */
- spinlock_obtain(&vmm_hypercall_lock);
- ret = hcall_set_ioreq_buffer(vm, (uint16_t)param1, param2);
- spinlock_release(&vmm_hypercall_lock);
+ if (vmid_is_valid) {
+ spinlock_obtain(&vmm_hypercall_lock);
+ ret = hcall_set_ioreq_buffer(vm, vm_id, param2);
+ spinlock_release(&vmm_hypercall_lock);
+ }
break;

case HC_NOTIFY_REQUEST_FINISH:
/* param1: vmid
* param2: vcpu_id */
- ret = hcall_notify_ioreq_finish((uint16_t)param1,
- (uint16_t)param2);
+ if (vmid_is_valid) {
+ ret = hcall_notify_ioreq_finish(vm_id,
+ (uint16_t)param2);
+ }
break;

case HC_VM_SET_MEMORY_REGIONS:
@@ -127,7 +150,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu *vcpu)
break;

case HC_VM_WRITE_PROTECT_PAGE:
- ret = hcall_write_protect_page(vm, (uint16_t)param1, param2);
+ /* param1: vmid */
+ if (vmid_is_valid) {
+ ret = hcall_write_protect_page(vm, vm_id, param2);
+ }
break;

/*
@@ -140,27 +166,37 @@ static int32_t dispatch_hypercall(struct acrn_vcpu *vcpu)

case HC_VM_GPA2HPA:
/* param1: vmid */
- ret = hcall_gpa_to_hpa(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_gpa_to_hpa(vm, vm_id, param2);
+ }
break;

case HC_ASSIGN_PTDEV:
/* param1: vmid */
- ret = hcall_assign_ptdev(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_assign_ptdev(vm, vm_id, param2);
+ }
break;

case HC_DEASSIGN_PTDEV:
/* param1: vmid */
- ret = hcall_deassign_ptdev(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_deassign_ptdev(vm, vm_id, param2);
+ }
break;

case HC_SET_PTDEV_INTR_INFO:
/* param1: vmid */
- ret = hcall_set_ptdev_intr_info(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_set_ptdev_intr_info(vm, vm_id, param2);
+ }
break;

case HC_RESET_PTDEV_INTR_INFO:
/* param1: vmid */
- ret = hcall_reset_ptdev_intr_info(vm, (uint16_t)param1, param2);
+ if (vmid_is_valid) {
+ ret = hcall_reset_ptdev_intr_info(vm, vm_id, param2);
+ }
break;

case HC_WORLD_SWITCH:
@@ -180,7 +216,10 @@ static int32_t dispatch_hypercall(struct acrn_vcpu *vcpu)
break;

case HC_VM_INTR_MONITOR:
- ret = hcall_vm_intr_monitor(vm, (uint16_t)param1, param2);
+ /* param1: vmid */
+ if (vmid_is_valid) {
+ ret = hcall_vm_intr_monitor(vm, vm_id, param2);
+ }
break;

default:
--
2.7.4


Re: [PATCH] hv: seed: fix potential NULL pointer dereferencing

Zhu, Bing
 

ACK, it is fine to me.

-----Original Message-----
From: Qi, Yadong
Sent: Friday, April 19, 2019 10:16 AM
To: acrn-dev@...
Cc: Zhu, Bing <bing.zhu@...>; Dong, Eddie <eddie.dong@...>; Huang,
Yonghua <yonghua.huang@...>; Qi, Yadong <yadong.qi@...>
Subject: [PATCH] hv: seed: fix potential NULL pointer dereferencing

The 'boot_params' and 'entry' pointers might be dereferenced without being
properly checked for NULL. Refine the checking logic to fix the issue.

Signed-off-by: Qi Yadong <yadong.qi@...>


[PATCH] hv: seed: fix potential NULL pointer dereferencing

Qi, Yadong <yadong.qi@...>
 

The 'boot_params' and 'entry' pointers might be dereferenced without being
properly checked for NULL. Refine the checking logic to fix the issue.

Signed-off-by: Qi Yadong <yadong.qi@...>
---
hypervisor/arch/x86/seed/seed_sbl.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/hypervisor/arch/x86/seed/seed_sbl.c b/hypervisor/arch/x86/seed/seed_sbl.c
index ff3347f..1309c1b 100644
--- a/hypervisor/arch/x86/seed/seed_sbl.c
+++ b/hypervisor/arch/x86/seed/seed_sbl.c
@@ -62,29 +62,29 @@ bool parse_seed_sbl(uint64_t addr, struct physical_seed *phy_seed)
{
uint8_t i;
uint8_t dseed_index = 0U;
- struct image_boot_params *boot_params;
+ struct image_boot_params *boot_params = NULL;
struct seed_list_hob *seed_hob = NULL;
- struct seed_entry *entry;
- struct seed_info *seed_list;
+ struct seed_entry *entry = NULL;
+ struct seed_info *seed_list = NULL;
bool status = false;

stac();

boot_params = (struct image_boot_params *)hpa2hva(addr);

- if ((boot_params != NULL) || (phy_seed != NULL)) {
+ if (boot_params != NULL) {
seed_hob = (struct seed_list_hob *)hpa2hva(boot_params->p_seed_list);
}

- if (seed_hob != NULL) {
- status = true;
-
+ if ((seed_hob != NULL) && (phy_seed != NULL)) {
seed_list = phy_seed->seed_list;

entry = (struct seed_entry *)((uint8_t *)seed_hob + sizeof(struct seed_list_hob));

- for (i = 0U; i < seed_hob->total_seed_count; i++) {
- if (entry != NULL) {
+ if (entry != NULL) {
+ status = true;
+
+ for (i = 0U; i < seed_hob->total_seed_count; i++) {
/* retrieve dseed */
if ((SEED_ENTRY_TYPE_SVNSEED == entry->type) &&
(SEED_ENTRY_USAGE_DSEED == entry->usage)) {
@@ -106,9 +106,9 @@ bool parse_seed_sbl(uint64_t addr, struct physical_seed *phy_seed)
/* erase original seed in seed entry */
(void)memset((void *)&entry->seed[0U], 0U, sizeof(struct seed_info));
}
- }

- entry = (struct seed_entry *)((uint8_t *)entry + entry->seed_entry_size);
+ entry = (struct seed_entry *)((uint8_t *)entry + entry->seed_entry_size);
+ }
}

if (status) {
--
2.7.4


Re: [PATCH v7 5/5] HV: Reset physical core of lapic_pt vm when shutdown

Kaige Fu
 

Hi Eddie,

In the current stop_cpus, it does the following:

1) make_pcpu_offline (all APs)
2) Wait for all APs to go offline
3) If a timeout occurs, loop forever (dead loop)

For reset_cpus in shutdown_vm, you commented that we should put the make_pcpu_offline and wait actions in shutdown_vm:

1) make_pcpu_offline (all cpus of the lapic_pt vm)
2) Wait for all of those cpus to go offline
3) If a timeout occurs, mark the cpu as INVALID
4) start_cpus

Steps 1/2/3 of stop_cpus and reset_cpus are similar. So what I think is: refine stop_cpus to take one parameter, 'mask',
like start_cpus. Then we can reuse stop_cpus when resetting the cpus of a lapic_pt vm.

What is your opinion?

BTW, sorry for not making myself clear and leaving you with the impression that I simply ignored your comment.

On 04-19 Fri 16:44, Kaige Fu wrote:
Hi Eddie,

On 04-18 Thu 13:23, Eddie Dong wrote:


-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...]
On Behalf Of Kaige Fu
Sent: Friday, April 19, 2019 2:11 AM
To: acrn-dev@...
Subject: [acrn-dev] [PATCH v7 5/5] HV: Reset physical core of lapic_pt vm
when shutdown

The physical cores of a lapic_pt vm should be reset for security and correctness
when shutting down the vm.

Signed-off-by: Kaige Fu <kaige.fu@...>
---
hypervisor/arch/x86/guest/vm.c | 20 +++++++++++++++++++-
hypervisor/include/lib/errno.h | 2 ++
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index cebbc9cc..8a66973c 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -446,9 +446,10 @@ int32_t create_vm(uint16_t vm_id, struct acrn_vm_config *vm_config, struct acrn_
int32_t shutdown_vm(struct acrn_vm *vm)
{
uint16_t i;
+ uint64_t mask = 0UL;
struct acrn_vcpu *vcpu = NULL;
struct acrn_vm_config *vm_config = NULL;
- int32_t ret;
+ int32_t ret = 0;

pause_vm(vm);

@@ -459,6 +460,23 @@ int32_t shutdown_vm(struct acrn_vm *vm)
foreach_vcpu(i, vm, vcpu) {
reset_vcpu(vcpu);
offline_vcpu(vcpu);
+
+ if (is_lapic_pt(vm)) {
+ bitmap_set_nolock(vcpu->pcpu_id, &mask);
3rd time comment: We should call make_pcpu_offline(..) here.
If the comments are simply ignored, I don't need to review further.
I got your comment, but here is what I think: I refined stop_cpus to take one more parameter,
'mask', so we can reuse stop_cpus here directly instead of doing the following:

1. make_pcpu_offline()
2. wait for the pcpus to go offline

The above logic is already in stop_cpus. IMHO, it is better to reuse stop_cpus; otherwise we end up with duplicated code here.

If you prefer make_pcpu_offline and a wait here, I will do it in the next version. Sorry for the confusion.


+ }
+ }
+
+ if (is_lapic_pt(vm) && !stop_cpus(mask)) {
+ pr_fatal("Failed to stop all cpus in mask(0x%llx)", mask);
+ }
+
+ /*
+ * No matter whether there are cpus failed to offline, try to start all
+ * cpus in mask anyway.
+ */
+ if (is_lapic_pt(vm) && !start_cpus(mask)) {
+ pr_fatal("Failed to start all cpus in mask(0x%llx)", mask);
+ ret = -ETIMEDOUT;
}

vm_config = get_vm_config(vm->vm_id);
diff --git a/hypervisor/include/lib/errno.h b/hypervisor/include/lib/errno.h
index b97112f5..bc8c0db7 100644
--- a/hypervisor/include/lib/errno.h
+++ b/hypervisor/include/lib/errno.h
@@ -23,5 +23,7 @@
#define ENODEV 19
/** Indicates that argument is not valid. */
#define EINVAL 22
+/** Indicates that timeout occurs. */
+#define ETIMEDOUT 110

#endif /* ERRNO_H */
--
2.20.0






[PATCH] dm: not register/unregister gvt bar memory

Xinyun Liu
 

For AcrnGT, the memory access is trapped in the SOS kernel and the acrngt
driver intercepts it, so there is no need to register the memory block in
acrn-dm when allocating the virtual BAR.

v2: removed unnecessary braces and use printf for logging

Tracked-On: #2976
Signed-off-by: Liu Xinyun <xinyun.liu@...>
---
devicemodel/hw/pci/core.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/devicemodel/hw/pci/core.c b/devicemodel/hw/pci/core.c
index ce35230d..35d9c901 100644
--- a/devicemodel/hw/pci/core.c
+++ b/devicemodel/hw/pci/core.c
@@ -127,6 +127,15 @@ CFGREAD(struct pci_vdev *dev, int coff, int bytes)
return pci_get_cfgdata32(dev, coff);
}

+static inline int
+is_pci_gvt(struct pci_vdev *dev)
+{
+ if (dev == NULL || strncmp(dev->dev_ops->class_name, "pci-gvt",7))
+ return 0;
+ else
+ return 1;
+}
+
/*
* I/O access
*/
@@ -493,6 +502,10 @@ modify_bar_registration(struct pci_vdev *dev, int idx, int registration)
struct inout_port iop;
struct mem_range mr;

+ if (is_pci_gvt(dev)) {
+ printf("modify_bar_registration: bypass for pci-gvt\n");
+ return;
+ }
switch (dev->bar[idx].type) {
case PCIBAR_IO:
bzero(&iop, sizeof(struct inout_port));
--
2.17.1
