Re: [PATCH] HV: io: drop REQ_STATE_FAILED


Acked-by: Eddie Dong <eddie.dong@...>

-----Original Message-----
From: acrn-dev@... [mailto:acrn-dev@...] On Behalf Of Junjie Mao
Sent: Tuesday, August 14, 2018 7:23 PM
To: acrn-dev@...
Subject: [acrn-dev] [PATCH] HV: io: drop REQ_STATE_FAILED

Now that the DM has adopted the new VHM request state transitions,
REQ_STATE_FAILED is obsolete: neither VHM nor the kernel mediators will
ever set the state to FAILED.

This patch drops the definition of REQ_STATE_FAILED in the hypervisor,
makes "processed" unsigned to keep the compiler happy about typing, and
simplifies error handling in the following ways.

* (dm_)emulate_(pio|mmio)_post no longer returns an error code, by
introducing a constraint that these functions must be called only after an
I/O request completes (which is the case in the current design), and by
assuming that handlers/VHM/DM will always give a value for reads
(typically all 1's if the requested address is invalid).

* emulate_io() now returns the positive value IOREQ_PENDING to indicate
that the request has been sent to VHM (see the sketch after this list).
This mitigates a potential race between dm_emulate_pio() and
pio_instr_vmexit_handler() which could cause emulate_pio_post() to be
called twice for the same request.

* Remove the "processed" member in io_request. Previously this member
mirrored the state of the VHM request, which terminated at either COMPLETE
or FAILED. With the FAILED state removed, the terminal state is always
COMPLETE, so the mirrored "processed" member is no longer useful.
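
For illustration, the calling convention established by the first two
points looks like the following sketch. It simply mirrors the
vmexit-handler hunks in the diff below and is not an additional change:

    status = emulate_io(vcpu, io_req);
    if (status == 0) {
            /* Completed synchronously by a hypervisor handler; reads
             * already carry a value, so the post hook cannot fail. */
            emulate_pio_post(vcpu, io_req);
    } else if (status == IOREQ_PENDING) {
            /* Delivered to VHM; dm_emulate_pio_post() runs later on
             * vcpu resume, so the post hook must not be called here. */
            status = 0;
    }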

Note that emulate_instruction() will always succeed after an upcoming
reshuffle, and this patch takes that assumption in advance. This does no
harm, as the returned value is not currently handled anyway.

This patch makes it explicit that I/O emulation is not expected to fail.
One issue remains, though, which occurs when a non-aligned access crosses
a boundary. Currently the hypervisor, VHM and DM adopt different policies:

* Hypervisor: inject #GP if it detects that the access crosses a boundary

* VHM: deliver the access to DM if it does not fall completely within the
range of a client

* DM: a handler covering only part of the to-be-accessed region may be
picked, and an assertion failure can be triggered

A high-level design covering all these components (in addition to
instruction emulation) is needed to resolve this, so this patch does not
yet cover the issue.
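
For reference, the hypervisor-side detection amounts to a plain
range-containment test. The helper below is an illustrative restatement
only (the function name is hypothetical); its condition matches the one
kept in hv_emulate_mmio() in the diff:

    /* Illustrative only -- not part of this patch. True when the access
     * overlaps a handler's [base, end) range without fitting entirely
     * inside it. */
    static bool crosses_handler_boundary(uint64_t address, uint64_t size,
                    uint64_t base, uint64_t end)
    {
            return ((address < end) && ((address + size) > base)) &&
                    !((address >= base) && ((address + size) <= end));
    }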

Signed-off-by: Junjie Mao <junjie.mao@...>
---
hypervisor/arch/x86/ept.c               | 12 ++---
hypervisor/arch/x86/guest/vlapic.c      |  1 -
hypervisor/arch/x86/io.c                | 84 ++++++++++++++-------------------
hypervisor/common/io_request.c          |  6 +--
hypervisor/dm/vioapic.c                 |  1 -
hypervisor/include/arch/x86/ioreq.h     | 12 ++---
hypervisor/include/public/acrn_common.h | 11 +----
7 files changed, 49 insertions(+), 78 deletions(-)

diff --git a/hypervisor/arch/x86/ept.c b/hypervisor/arch/x86/ept.c
index ed1e245..01cf6f2 100644
--- a/hypervisor/arch/x86/ept.c
+++ b/hypervisor/arch/x86/ept.c
@@ -156,7 +156,6 @@ int ept_violation_vmexit_handler(struct vcpu *vcpu)
exit_qual = vcpu->arch_vcpu.exit_qualification;

io_req->type = REQ_MMIO;
- io_req->processed = REQ_STATE_PENDING;

/* Specify if read or write operation */
if ((exit_qual & 0x2UL) != 0UL) {
@@ -214,14 +213,11 @@ int ept_violation_vmexit_handler(struct vcpu *vcpu)

status = emulate_io(vcpu, io_req);

- /* io_req is hypervisor-private. For requests sent to VHM,
- * io_req->processed will be PENDING till dm_emulate_mmio_post() is
- * called on vcpu resume. */
if (status == 0) {
- if (io_req->processed != REQ_STATE_PENDING) {
- status = emulate_mmio_post(vcpu, io_req);
- }
- }
+ emulate_mmio_post(vcpu, io_req);
+ } else if (status == IOREQ_PENDING) {
+ status = 0;
+ }

return status;

diff --git a/hypervisor/arch/x86/guest/vlapic.c b/hypervisor/arch/x86/guest/vlapic.c
index 35be79c..5229443 100644
--- a/hypervisor/arch/x86/guest/vlapic.c
+++ b/hypervisor/arch/x86/guest/vlapic.c
@@ -2094,7 +2094,6 @@ int vlapic_mmio_access_handler(struct vcpu *vcpu, struct io_request *io_req,
/* Can never happen due to the range of mmio_req->direction. */
}

- io_req->processed = REQ_STATE_COMPLETE;
return ret;
}

diff --git a/hypervisor/arch/x86/io.c b/hypervisor/arch/x86/io.c
index f61b778..e968d2b 100644
--- a/hypervisor/arch/x86/io.c
+++ b/hypervisor/arch/x86/io.c
@@ -16,35 +16,33 @@ static void complete_ioreq(struct vhm_request *vhm_req)

/**
* @pre io_req->type == REQ_PORTIO
+ *
+ * @remark This function must be called when \p io_req is completed, after
+ * either a previous call to emulate_io() returning 0 or the corresponding VHM
+ * request having transferred to the COMPLETE state.
*/
-static int32_t
+static void
emulate_pio_post(struct vcpu *vcpu, struct io_request *io_req)
{
- int32_t status;
struct pio_request *pio_req = &io_req->reqs.pio;
uint64_t mask = 0xFFFFFFFFUL >> (32UL - 8UL * pio_req->size);

- if (io_req->processed == REQ_STATE_COMPLETE) {
- if (pio_req->direction == REQUEST_READ) {
- uint64_t value = (uint64_t)pio_req->value;
- int32_t context_idx = vcpu->arch_vcpu.cur_context;
- uint64_t rax = vcpu_get_gpreg(vcpu, CPU_REG_RAX);
+ if (pio_req->direction == REQUEST_READ) {
+ uint64_t value = (uint64_t)pio_req->value;
+ uint64_t rax = vcpu_get_gpreg(vcpu, CPU_REG_RAX);

- rax = ((rax) & ~mask) | (value & mask);
- vcpu_set_gpreg(vcpu, CPU_REG_RAX, rax);
- }
- status = 0;
- } else {
- status = -1;
+ rax = ((rax) & ~mask) | (value & mask);
+ vcpu_set_gpreg(vcpu, CPU_REG_RAX, rax);
}
-
- return status;
}

/**
* @pre vcpu->req.type == REQ_PORTIO
+ *
+ * @remark This function must be called after the VHM request corresponding to
+ * \p vcpu being transferred to the COMPLETE state.
*/
-int32_t dm_emulate_pio_post(struct vcpu *vcpu)
+void dm_emulate_pio_post(struct vcpu *vcpu)
{
uint16_t cur = vcpu->vcpu_id;
union vhm_request_buffer *req_buf = NULL;
@@ -60,35 +58,33 @@ int32_t dm_emulate_pio_post(struct vcpu *vcpu)
/* VHM emulation data already copy to req, mark to free slot now */
complete_ioreq(vhm_req);

- return emulate_pio_post(vcpu, io_req);
+ emulate_pio_post(vcpu, io_req);
}

/**
* @pre vcpu->req.type == REQ_MMIO
+ *
+ * @remark This function must be called when \p io_req is completed, after
+ * either a previous call to emulate_io() returning 0 or the corresponding VHM
+ * request having transferred to the COMPLETE state.
*/
-int32_t emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req)
+void emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req)
{
- int32_t ret;
struct mmio_request *mmio_req = &io_req->reqs.mmio;

- if (io_req->processed == REQ_STATE_COMPLETE) {
- if (mmio_req->direction == REQUEST_READ) {
- /* Emulate instruction and update vcpu register set */
- ret = emulate_instruction(vcpu);
- } else {
- ret = 0;
- }
- } else {
- ret = 0;
+ if (mmio_req->direction == REQUEST_READ) {
+ /* Emulate instruction and update vcpu register set */
+ emulate_instruction(vcpu);
}
-
- return ret;
}

/**
* @pre vcpu->req.type == REQ_MMIO
+ *
+ * @remark This function must be called when \p io_req is completed, after the
+ * corresponding VHM request having transferred to the COMPLETE state.
*/
-int32_t dm_emulate_mmio_post(struct vcpu *vcpu)
+void dm_emulate_mmio_post(struct vcpu *vcpu)
{
uint16_t cur = vcpu->vcpu_id;
struct io_request *io_req = &vcpu->req;
@@ -104,7 +100,7 @@ int32_t dm_emulate_mmio_post(struct vcpu *vcpu)
/* VHM emulation data already copy to req, mark to free slot now */
complete_ioreq(vhm_req);

- return emulate_mmio_post(vcpu, io_req);
+ emulate_mmio_post(vcpu, io_req);
}

#ifdef CONFIG_PARTITION_MODE
@@ -123,15 +119,12 @@ void emulate_io_post(struct vcpu *vcpu)
{
union vhm_request_buffer *req_buf;
struct vhm_request *vhm_req;
- struct io_request *io_req = &vcpu->req;

req_buf = (union vhm_request_buffer *)vcpu->vm->sw.io_shared_page;
vhm_req = &req_buf->req_queue[vcpu->vcpu_id];
- io_req->processed = atomic_load32(&vhm_req->processed);

if ((vhm_req->valid == 0) ||
- ((io_req->processed != REQ_STATE_COMPLETE) &&
- (io_req->processed != REQ_STATE_FAILED))) {
+ (atomic_load32(&vhm_req->processed) != REQ_STATE_COMPLETE)) {
return;
}

@@ -205,7 +198,6 @@ hv_emulate_pio(struct vcpu *vcpu, struct io_request *io_req)
pr_fatal("Err:IO, port 0x%04x, size=%hu spans devices",
port, size);
status = -EIO;
- io_req->processed = REQ_STATE_FAILED;
break;
} else {
if (pio_req->direction == REQUEST_WRITE) {
@@ -221,9 +213,6 @@ hv_emulate_pio(struct vcpu *vcpu, struct io_request *io_req)
pr_dbg("IO read on port %04x, data %08x",
port, pio_req->value);
}
- /* TODO: failures in the handlers should be reflected
- * here. */
- io_req->processed = REQ_STATE_COMPLETE;
status = 0;
break;
}
@@ -266,7 +255,6 @@ hv_emulate_mmio(struct vcpu *vcpu, struct io_request *io_req)
} else if (!((address >= base) && (address + size <= end))) {
pr_fatal("Err MMIO, address:0x%llx, size:%x",
address, size);
- io_req->processed = REQ_STATE_FAILED;
return -EIO;
} else {
/* Handle this MMIO operation */
@@ -284,6 +272,7 @@ hv_emulate_mmio(struct vcpu *vcpu, struct io_request *io_req)
* deliver to VHM.
*
* @return 0 - Successfully emulated by registered handlers.
+ * @return IOREQ_PENDING - The I/O request is delivered to VHM.
* @return -EIO - The request spans multiple devices and cannot be emulated.
* @return Negative on other errors during emulation.
*/
@@ -303,7 +292,6 @@ emulate_io(struct vcpu *vcpu, struct io_request *io_req)
default:
/* Unknown I/O request type */
status = -EINVAL;
- io_req->processed = REQ_STATE_FAILED;
break;
}

@@ -329,6 +317,8 @@ emulate_io(struct vcpu *vcpu, struct io_request *io_req)
pr_fatal("Err:IO %s access to port 0x%04lx, size=%lu",
(pio_req->direction != REQUEST_READ) ? "read" : "write",
pio_req->address, pio_req->size);
+ } else {
+ status = IOREQ_PENDING;
}
#endif
}
@@ -347,7 +337,6 @@ int32_t pio_instr_vmexit_handler(struct vcpu *vcpu)
exit_qual = vcpu->arch_vcpu.exit_qualification;

io_req->type = REQ_PORTIO;
- io_req->processed = REQ_STATE_PENDING;
pio_req->size = VM_EXIT_IO_INSTRUCTION_SIZE(exit_qual) + 1UL;
pio_req->address = VM_EXIT_IO_INSTRUCTION_PORT_NUMBER(exit_qual);
if (VM_EXIT_IO_INSTRUCTION_ACCESS_DIRECTION(exit_qual) == 0UL) {
@@ -365,13 +354,10 @@ int32_t pio_instr_vmexit_handler(struct vcpu *vcpu)

status = emulate_io(vcpu, io_req);

- /* io_req is hypervisor-private. For requests sent to VHM,
- * io_req->processed will be PENDING till dm_emulate_pio_post() is
- * called on vcpu resume. */
if (status == 0) {
- if (io_req->processed != REQ_STATE_PENDING) {
- status = emulate_pio_post(vcpu, io_req);
- }
+ emulate_pio_post(vcpu, io_req);
+ } else if (status == IOREQ_PENDING) {
+ status = 0;
}

return status;
diff --git a/hypervisor/common/io_request.c b/hypervisor/common/io_request.c
index 7897d6c..a4ca77e 100644
--- a/hypervisor/common/io_request.c
+++ b/hypervisor/common/io_request.c
@@ -143,7 +143,7 @@ static void local_get_req_info_(struct vhm_request *req, int *id, char *type,

switch (req->processed) {
case REQ_STATE_COMPLETE:
- (void)strcpy_s(state, 16U, "SUCCESS");
+ (void)strcpy_s(state, 16U, "COMPLETE");
break;
case REQ_STATE_PENDING:
(void)strcpy_s(state, 16U, "PENDING"); @@ -151,8 +151,8 @@
static void local_get_req_info_(struct vhm_request *req, int *id, char *type,
case REQ_STATE_PROCESSING:
(void)strcpy_s(state, 16U, "PROCESS");
break;
- case REQ_STATE_FAILED:
- (void)strcpy_s(state, 16U, "FAILED");
+ case REQ_STATE_FREE:
+ (void)strcpy_s(state, 16U, "FREE");
break;
default:
(void)strcpy_s(state, 16U, "UNKNOWN"); diff --git
a/hypervisor/dm/vioapic.c b/hypervisor/dm/vioapic.c index
4692094..d1658ae 100644
--- a/hypervisor/dm/vioapic.c
+++ b/hypervisor/dm/vioapic.c
@@ -597,7 +597,6 @@ int vioapic_mmio_access_handler(struct vcpu *vcpu, struct io_request *io_req,
ret = -EINVAL;
}

- io_req->processed = REQ_STATE_COMPLETE;
return ret;
}

diff --git a/hypervisor/include/arch/x86/ioreq.h b/hypervisor/include/arch/x86/ioreq.h
index b0b5891..23f5cc2 100644
--- a/hypervisor/include/arch/x86/ioreq.h
+++ b/hypervisor/include/arch/x86/ioreq.h
@@ -10,15 +10,15 @@
#include <types.h>
#include <acrn_common.h>

+/* The return value of emulate_io() indicating the I/O request is delivered to
+ * VHM but not finished yet. */
+#define IOREQ_PENDING 1
+
/* Internal representation of a I/O request. */
struct io_request {
/** Type of the request (PIO, MMIO, etc). Refer to vhm_request. */
uint32_t type;

- /** Status of request handling. Written by request handlers and read by
- * the I/O emulation framework. Refer to vhm_request. */
- int32_t processed;
-
/** Details of this request in the same format as vhm_request. */
union vhm_io_request reqs;
};
@@ -122,8 +122,8 @@ int register_mmio_emulation_handler(struct vm *vm,
uint64_t end, void *handler_private_data);
void unregister_mmio_emulation_handler(struct vm *vm, uint64_t start,
uint64_t end);
-int32_t emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req);
-int32_t dm_emulate_mmio_post(struct vcpu *vcpu);
+void emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req);
+void dm_emulate_mmio_post(struct vcpu *vcpu);

int32_t emulate_io(struct vcpu *vcpu, struct io_request *io_req);
void emulate_io_post(struct vcpu *vcpu);
diff --git a/hypervisor/include/public/acrn_common.h b/hypervisor/include/public/acrn_common.h
index df00de8..8e24602 100644
--- a/hypervisor/include/public/acrn_common.h
+++ b/hypervisor/include/public/acrn_common.h
@@ -30,7 +30,6 @@
#define REQ_STATE_PENDING 0
#define REQ_STATE_COMPLETE 1
#define REQ_STATE_PROCESSING 2
-#define REQ_STATE_FAILED -1

#define REQ_PORTIO 0U
#define REQ_MMIO 1U
@@ -97,8 +96,6 @@ union vhm_io_request {
* The state transitions of a VHM request are:
*
* FREE -> PENDING -> PROCESSING -> COMPLETE -> FREE -> ...
- *                       \              /
- *                        +--> FAILED -+
*
* When a request is in COMPLETE or FREE state, the request is owned by the
* hypervisor. SOS (VHM or DM) shall not read or write the internals of the
@@ -154,12 +151,6 @@ union vhm_io_request {
*
* 2. Due to similar reasons, setting state to COMPLETE is the last operation
* of request handling in VHM or clients in SOS.
- *
- * The state FAILED is an obsolete state to indicate that the I/O request cannot
- * be handled. In such cases the mediators and DM should switch the state to
- * COMPLETE with the value set to all 1s for read, and skip the request for
- * writes. This state WILL BE REMOVED after the mediators and DM are updated to
- * follow this rule.
*/
struct vhm_request {
/**
@@ -208,7 +199,7 @@ struct vhm_request {
*
* Byte offset: 136.
*/
- int32_t processed;
+ uint32_t processed;
} __aligned(256);

union vhm_request_buffer {
--
2.7.4

