Re: [PATCH 4/6] DM USB: xHCI: refine port assignment logic
Wu, Xiaoguang
On 18-08-13 21:40:58, Yu Wang wrote:
On 18-08-13 16:48:33, Wu, Xiaoguang wrote:
> The variable native_assign_ports in struct pci_xhci_vdev is used

For "= 0" and "< 0", please define some macros...

xgwu: will use a macro to express it, like the following:

#define NATIVE_PORT_ASSIGNED(bus, port) (native_assign_ports[(bus)][(port)] != 0)

ok, will change. thanks

> + * virtual device attached to virtual port whose number
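For illustration, a minimal sketch of how such a predicate macro might read at a call site; the array bounds and the pci_xhci_vdev layout here are assumptions, not the actual DM code:

/* Sketch only: sizes and fields are illustrative assumptions. */
#define XHCI_MAX_BUS	4
#define XHCI_MAX_PORT	16

struct pci_xhci_vdev {
	/* non-zero: the native port is already assigned to a virtual port */
	int native_assign_ports[XHCI_MAX_BUS][XHCI_MAX_PORT];
};

#define NATIVE_PORT_ASSIGNED(vdev, bus, port) \
	((vdev)->native_assign_ports[(bus)][(port)] != 0)

static int assign_native_port(struct pci_xhci_vdev *xdev,
			      int bus, int port, int vport)
{
	if (NATIVE_PORT_ASSIGNED(xdev, bus, port))
		return -1;	/* already taken */

	xdev->native_assign_ports[bus][port] = vport;
	return 0;
}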
|
|
Re: [PATCH 3/6] DM USB: introduce struct usb_native_devinfo
Wu, Xiaoguang
On 18-08-13 21:37:26, Yu Wang wrote:
On 18-08-13 16:48:32, Wu, Xiaoguang wrote:
> Current design cannot get physical USB device information without

dificulties?

xgwu: will change, thanks.

> certain situations, hence struct usb_native_devinfo is introduced

Then usb_dev_info is not useful now? Should we clear it?

xgwu: try to change, thanks.

> UPRINTF(LDBG, "%04x:%04x %d-%d connecting.\r\n",

This new structure is confusing to me. There is already one structure...

xgwu: try to merge them as one. thanks.

> +
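For context, a hypothetical sketch of what such a native-device descriptor could look like; the field names are guesses based on the bus/port/VID/PID values in the quoted trace line, not the actual patch:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the proposed descriptor; not the real struct. */
struct usb_native_devinfo {
	int      bus;	/* native bus number  */
	int      port;	/* native port number */
	uint16_t vid;	/* vendor id  */
	uint16_t pid;	/* product id */
};

static void report_connect(const struct usb_native_devinfo *di)
{
	/* mirrors the quoted trace format: vid:pid bus-port */
	printf("%04x:%04x %d-%d connecting.\r\n",
	       (unsigned)di->vid, (unsigned)di->pid, di->bus, di->port);
}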
|
|
Re: [PATCH 2/6] DM USB: xHCI: refine xHCI PORTSC Register related functions
Wu, Xiaoguang
On 18-08-13 21:19:37, Yu Wang wrote:
On 18-08-13 16:48:31, Wu, Xiaoguang wrote:
> PORTSC (Port Status and Control Register) register plays a very

Can we remove the intr variable? It hasn't changed in this function and pass...

xgwu: i will remove intr and add it in a later patch as 'need_intr', including function parameters. Thx.

xgwu: Ditto. change to need_intr, Thx

> +

Let's keep pci_xhci_init_port...

xgwu: ok, will change. Thx

> {
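A sketch of the direction agreed above, with names and signature assumed rather than taken from the patch: the interrupt decision moves out of the register helper into an output flag the caller acts on.

#include <stdbool.h>
#include <stdint.h>

static uint32_t portsc_regs[16];	/* illustrative backing store */

/* Update PORTSC; tell the caller whether to raise a port event
 * instead of firing the interrupt inline. */
static void pci_xhci_set_portsc(int port, uint32_t val, bool *need_intr)
{
	uint32_t old = portsc_regs[port];

	portsc_regs[port] = val;

	/* Connect Status Change (PORTSC bit 17) newly toggling is the
	 * illustrative trigger here; the caller sends the interrupt. */
	*need_intr = ((old ^ val) & 0x00020000U) != 0U;
}

void example_caller(void)
{
	bool need_intr;

	pci_xhci_set_portsc(3, 0x00020203U, &need_intr);
	if (need_intr) {
		/* pci_xhci_assert_interrupt(...) in the real DM */
	}
}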
|
|
Re: [PATCH] hv: treewide: fix 'Shifting value too far'
Acked-by: Eddie Dong <eddie.dong@...>
-----Original Message-----
|
|
Re: [PATCH v2] HV: Enclose debug specific code with #ifdef HV_DEBUG
-----Original Message-----

We also need to do this compile option protection on the vmcall.c side for HC_SETUP_SBUF. The rest is fine.

Fixed it and PR'd with my ack.
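A condensed sketch of the kind of guard being asked for on the dispatch side; the hypercall id value and handler signature are assumptions, and the stub only keeps the sketch self-contained:

#include <stdint.h>

#define EINVAL		22
#define HC_SETUP_SBUF	0x60UL	/* illustrative id, not the real one */

struct vm;

#ifdef HV_DEBUG
static int64_t hcall_setup_sbuf(struct vm *vm, uint64_t param)
{
	(void)vm; (void)param;
	return 0;	/* stub standing in for the real handler */
}
#endif

static int64_t dispatch_hypercall(struct vm *vm, uint64_t id, uint64_t param)
{
	int64_t ret;

	switch (id) {
#ifdef HV_DEBUG
	case HC_SETUP_SBUF:
		ret = hcall_setup_sbuf(vm, param);
		break;
#endif
	default:
		/* release builds reject debug-only hypercalls here */
		ret = -EINVAL;
		break;
	}
	return ret;
}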
|
|
Re: [PATCH] HV: io: drop REQ_STATE_FAILED
Acked-by: Eddie Dong <eddie.dong@...>
-----Original Message-----
|
|
[PATCH] HV: io: drop REQ_STATE_FAILED
Junjie Mao
Now the DM has adopted the new VHM request state transitions and
REQ_STATE_FAILED is obsolete since neither VHM nor kernel mediators will set the state to FAILED. This patch drops the definition of REQ_STATE_FAILED in the hypervisor, makes 'processed' unsigned to make the compiler happy about typing, and simplifies error handling in the following ways.

* (dm_)emulate_(pio|mmio)_post no longer returns an error code, by introducing a constraint that these functions must be called after an I/O request completes (which is the case in the current design) and assuming handlers/VHM/DM will always give a value for reads (typically all 1's if the requested address is invalid).

* emulate_io() now returns a positive value, IOREQ_PENDING, to indicate that the request is sent to VHM. This mitigates a potential race between dm_emulate_pio() and pio_instr_vmexit_handler() which can cause emulate_pio_post() to be called twice for the same request.

* Remove the 'processed' member in io_request. Previously this mirrored the state of the VHM request, which terminates at either COMPLETE or FAILED. After the FAILED state is removed, the terminal state is always COMPLETE, so the mirrored 'processed' member is no longer useful.

Note that emulate_instruction() will always succeed after a reshuffle, and this patch takes that assumption in advance. This does not hurt as that return value is not currently handled.

This patch makes it explicit that I/O emulation is not expected to fail. One issue remains, though, which occurs when a non-aligned cross-boundary access happens. Currently the hypervisor, VHM and DM adopt different policies:

* Hypervisor: inject #GP if it detects that the access crossed a boundary
* VHM: deliver to DM if the access does not completely fall in the range of a client
* DM: a handler covering part of the to-be-accessed region is picked and an assertion failure can be triggered.

A high-level design covering all these components (in addition to instruction emulation) is needed for this. Thus this patch does not yet cover the issue.

Signed-off-by: Junjie Mao <junjie.mao@...>
---
 hypervisor/arch/x86/ept.c               | 12 ++---
 hypervisor/arch/x86/guest/vlapic.c      |  1 -
 hypervisor/arch/x86/io.c                | 84 ++++++++++++++-------------------
 hypervisor/common/io_request.c          |  6 +--
 hypervisor/dm/vioapic.c                 |  1 -
 hypervisor/include/arch/x86/ioreq.h     | 12 ++---
 hypervisor/include/public/acrn_common.h | 11 +----
 7 files changed, 49 insertions(+), 78 deletions(-)

diff --git a/hypervisor/arch/x86/ept.c b/hypervisor/arch/x86/ept.c
index ed1e245..01cf6f2 100644
--- a/hypervisor/arch/x86/ept.c
+++ b/hypervisor/arch/x86/ept.c
@@ -156,7 +156,6 @@ int ept_violation_vmexit_handler(struct vcpu *vcpu)
 	exit_qual = vcpu->arch_vcpu.exit_qualification;
 
 	io_req->type = REQ_MMIO;
-	io_req->processed = REQ_STATE_PENDING;
 
 	/* Specify if read or write operation */
 	if ((exit_qual & 0x2UL) != 0UL) {
@@ -214,14 +213,11 @@ int ept_violation_vmexit_handler(struct vcpu *vcpu)
 
 	status = emulate_io(vcpu, io_req);
 
-	/* io_req is hypervisor-private. For requests sent to VHM,
-	 * io_req->processed will be PENDING till dm_emulate_mmio_post() is
-	 * called on vcpu resume.
-	 */
 	if (status == 0) {
-		if (io_req->processed != REQ_STATE_PENDING) {
-			status = emulate_mmio_post(vcpu, io_req);
-		}
-	}
+		emulate_mmio_post(vcpu, io_req);
+	} else if (status == IOREQ_PENDING) {
+		status = 0;
+	}
 
 	return status;
diff --git a/hypervisor/arch/x86/guest/vlapic.c b/hypervisor/arch/x86/guest/vlapic.c
index 35be79c..5229443 100644
--- a/hypervisor/arch/x86/guest/vlapic.c
+++ b/hypervisor/arch/x86/guest/vlapic.c
@@ -2094,7 +2094,6 @@ int vlapic_mmio_access_handler(struct vcpu *vcpu, struct io_request *io_req,
 		/* Can never happen due to the range of mmio_req->direction. */
 	}
 
-	io_req->processed = REQ_STATE_COMPLETE;
 	return ret;
 }
diff --git a/hypervisor/arch/x86/io.c b/hypervisor/arch/x86/io.c
index f61b778..e968d2b 100644
--- a/hypervisor/arch/x86/io.c
+++ b/hypervisor/arch/x86/io.c
@@ -16,35 +16,33 @@ static void complete_ioreq(struct vhm_request *vhm_req)
 
 /**
  * @pre io_req->type == REQ_PORTIO
+ *
+ * @remark This function must be called when \p io_req is completed, after
+ * either a previous call to emulate_io() returning 0 or the corresponding VHM
+ * request having transferred to the COMPLETE state.
  */
-static int32_t
+static void
 emulate_pio_post(struct vcpu *vcpu, struct io_request *io_req)
 {
-	int32_t status;
 	struct pio_request *pio_req = &io_req->reqs.pio;
 	uint64_t mask = 0xFFFFFFFFUL >> (32UL - 8UL * pio_req->size);
 
-	if (io_req->processed == REQ_STATE_COMPLETE) {
-		if (pio_req->direction == REQUEST_READ) {
-			uint64_t value = (uint64_t)pio_req->value;
-			int32_t context_idx = vcpu->arch_vcpu.cur_context;
-			uint64_t rax = vcpu_get_gpreg(vcpu, CPU_REG_RAX);
+	if (pio_req->direction == REQUEST_READ) {
+		uint64_t value = (uint64_t)pio_req->value;
+		uint64_t rax = vcpu_get_gpreg(vcpu, CPU_REG_RAX);
 
-			rax = ((rax) & ~mask) | (value & mask);
-			vcpu_set_gpreg(vcpu, CPU_REG_RAX, rax);
-		}
-		status = 0;
-	} else {
-		status = -1;
+		rax = ((rax) & ~mask) | (value & mask);
+		vcpu_set_gpreg(vcpu, CPU_REG_RAX, rax);
 	}
-
-	return status;
 }
 
 /**
  * @pre vcpu->req.type == REQ_PORTIO
+ *
+ * @remark This function must be called after the VHM request corresponding to
+ * \p vcpu being transferred to the COMPLETE state.
  */
-int32_t dm_emulate_pio_post(struct vcpu *vcpu)
+void dm_emulate_pio_post(struct vcpu *vcpu)
 {
 	uint16_t cur = vcpu->vcpu_id;
 	union vhm_request_buffer *req_buf = NULL;
@@ -60,35 +58,33 @@ int32_t dm_emulate_pio_post(struct vcpu *vcpu)
 	/* VHM emulation data already copy to req, mark to free slot now */
 	complete_ioreq(vhm_req);
 
-	return emulate_pio_post(vcpu, io_req);
+	emulate_pio_post(vcpu, io_req);
 }
 
 /**
  * @pre vcpu->req.type == REQ_MMIO
+ *
+ * @remark This function must be called when \p io_req is completed, after
+ * either a previous call to emulate_io() returning 0 or the corresponding VHM
+ * request having transferred to the COMPLETE state.
  */
-int32_t emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req)
+void emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req)
 {
-	int32_t ret;
 	struct mmio_request *mmio_req = &io_req->reqs.mmio;
 
-	if (io_req->processed == REQ_STATE_COMPLETE) {
-		if (mmio_req->direction == REQUEST_READ) {
-			/* Emulate instruction and update vcpu register set */
-			ret = emulate_instruction(vcpu);
-		} else {
-			ret = 0;
-		}
-	} else {
-		ret = 0;
+	if (mmio_req->direction == REQUEST_READ) {
+		/* Emulate instruction and update vcpu register set */
+		emulate_instruction(vcpu);
 	}
-
-	return ret;
 }
 
 /**
  * @pre vcpu->req.type == REQ_MMIO
+ *
+ * @remark This function must be called when \p io_req is completed, after the
+ * corresponding VHM request having transferred to the COMPLETE state.
  */
-int32_t dm_emulate_mmio_post(struct vcpu *vcpu)
+void dm_emulate_mmio_post(struct vcpu *vcpu)
 {
 	uint16_t cur = vcpu->vcpu_id;
 	struct io_request *io_req = &vcpu->req;
@@ -104,7 +100,7 @@ int32_t dm_emulate_mmio_post(struct vcpu *vcpu)
 	/* VHM emulation data already copy to req, mark to free slot now */
 	complete_ioreq(vhm_req);
 
-	return emulate_mmio_post(vcpu, io_req);
+	emulate_mmio_post(vcpu, io_req);
 }
 
 #ifdef CONFIG_PARTITION_MODE
@@ -123,15 +119,12 @@ void emulate_io_post(struct vcpu *vcpu)
 {
 	union vhm_request_buffer *req_buf;
 	struct vhm_request *vhm_req;
-	struct io_request *io_req = &vcpu->req;
 
 	req_buf = (union vhm_request_buffer *)vcpu->vm->sw.io_shared_page;
 	vhm_req = &req_buf->req_queue[vcpu->vcpu_id];
-	io_req->processed = atomic_load32(&vhm_req->processed);
 
 	if ((vhm_req->valid == 0) ||
-		((io_req->processed != REQ_STATE_COMPLETE) &&
-		 (io_req->processed != REQ_STATE_FAILED))) {
+		(atomic_load32(&vhm_req->processed) != REQ_STATE_COMPLETE)) {
 		return;
 	}
@@ -205,7 +198,6 @@ hv_emulate_pio(struct vcpu *vcpu, struct io_request *io_req)
 			pr_fatal("Err:IO, port 0x%04x, size=%hu spans devices",
 					port, size);
 			status = -EIO;
-			io_req->processed = REQ_STATE_FAILED;
 			break;
 		} else {
 			if (pio_req->direction == REQUEST_WRITE) {
@@ -221,9 +213,6 @@ hv_emulate_pio(struct vcpu *vcpu, struct io_request *io_req)
 				pr_dbg("IO read on port %04x, data %08x",
 						port, pio_req->value);
 			}
-			/* TODO: failures in the handlers should be reflected
-			 * here. */
-			io_req->processed = REQ_STATE_COMPLETE;
 			status = 0;
 			break;
 		}
@@ -266,7 +255,6 @@ hv_emulate_mmio(struct vcpu *vcpu, struct io_request *io_req)
 		} else if (!((address >= base) && (address + size <= end))) {
 			pr_fatal("Err MMIO, address:0x%llx, size:%x",
 					address, size);
-			io_req->processed = REQ_STATE_FAILED;
 			return -EIO;
 		} else {
 			/* Handle this MMIO operation */
@@ -284,6 +272,7 @@
  * deliver to VHM.
  *
  * @return 0 - Successfully emulated by registered handlers.
+ * @return IOREQ_PENDING - The I/O request is delivered to VHM.
  * @return -EIO - The request spans multiple devices and cannot be emulated.
  * @return Negative on other errors during emulation.
  */
@@ -303,7 +292,6 @@ emulate_io(struct vcpu *vcpu, struct io_request *io_req)
 	default:
 		/* Unknown I/O request type */
 		status = -EINVAL;
-		io_req->processed = REQ_STATE_FAILED;
 		break;
 	}
@@ -329,6 +317,8 @@ emulate_io(struct vcpu *vcpu, struct io_request *io_req)
 			pr_fatal("Err:IO %s access to port 0x%04lx, size=%lu",
 				(pio_req->direction != REQUEST_READ) ? "read" : "write",
 				pio_req->address, pio_req->size);
+		} else {
+			status = IOREQ_PENDING;
 		}
 #endif
 	}
@@ -347,7 +337,6 @@ int32_t pio_instr_vmexit_handler(struct vcpu *vcpu)
 	exit_qual = vcpu->arch_vcpu.exit_qualification;
 
 	io_req->type = REQ_PORTIO;
-	io_req->processed = REQ_STATE_PENDING;
 	pio_req->size = VM_EXIT_IO_INSTRUCTION_SIZE(exit_qual) + 1UL;
 	pio_req->address = VM_EXIT_IO_INSTRUCTION_PORT_NUMBER(exit_qual);
 	if (VM_EXIT_IO_INSTRUCTION_ACCESS_DIRECTION(exit_qual) == 0UL) {
@@ -365,13 +354,10 @@ int32_t pio_instr_vmexit_handler(struct vcpu *vcpu)
 
 	status = emulate_io(vcpu, io_req);
 
-	/* io_req is hypervisor-private. For requests sent to VHM,
-	 * io_req->processed will be PENDING till dm_emulate_pio_post() is
-	 * called on vcpu resume. */
 	if (status == 0) {
-		if (io_req->processed != REQ_STATE_PENDING) {
-			status = emulate_pio_post(vcpu, io_req);
-		}
+		emulate_pio_post(vcpu, io_req);
+	} else if (status == IOREQ_PENDING) {
+		status = 0;
 	}
 
 	return status;
diff --git a/hypervisor/common/io_request.c b/hypervisor/common/io_request.c
index 7897d6c..a4ca77e 100644
--- a/hypervisor/common/io_request.c
+++ b/hypervisor/common/io_request.c
@@ -143,7 +143,7 @@ static void local_get_req_info_(struct vhm_request *req, int *id, char *type,
 
 	switch (req->processed) {
 	case REQ_STATE_COMPLETE:
-		(void)strcpy_s(state, 16U, "SUCCESS");
+		(void)strcpy_s(state, 16U, "COMPLETE");
 		break;
 	case REQ_STATE_PENDING:
 		(void)strcpy_s(state, 16U, "PENDING");
@@ -151,8 +151,8 @@ static void local_get_req_info_(struct vhm_request *req, int *id, char *type,
 	case REQ_STATE_PROCESSING:
 		(void)strcpy_s(state, 16U, "PROCESS");
 		break;
-	case REQ_STATE_FAILED:
-		(void)strcpy_s(state, 16U, "FAILED");
+	case REQ_STATE_FREE:
+		(void)strcpy_s(state, 16U, "FREE");
 		break;
 	default:
 		(void)strcpy_s(state, 16U, "UNKNOWN");
diff --git a/hypervisor/dm/vioapic.c b/hypervisor/dm/vioapic.c
index 4692094..d1658ae 100644
--- a/hypervisor/dm/vioapic.c
+++ b/hypervisor/dm/vioapic.c
@@ -597,7 +597,6 @@ int vioapic_mmio_access_handler(struct vcpu *vcpu, struct io_request *io_req,
 		ret = -EINVAL;
 	}
 
-	io_req->processed = REQ_STATE_COMPLETE;
 	return ret;
 }
diff --git a/hypervisor/include/arch/x86/ioreq.h b/hypervisor/include/arch/x86/ioreq.h
index b0b5891..23f5cc2 100644
--- a/hypervisor/include/arch/x86/ioreq.h
+++ b/hypervisor/include/arch/x86/ioreq.h
@@ -10,15 +10,15 @@
 #include <types.h>
 #include <acrn_common.h>
 
+/* The return value of emulate_io() indicating the I/O request is delivered to
+ * VHM but not finished yet. */
+#define IOREQ_PENDING	1
+
 /* Internal representation of a I/O request. */
 struct io_request {
 	/** Type of the request (PIO, MMIO, etc). Refer to vhm_request. */
 	uint32_t type;
 
-	/** Status of request handling. Written by request handlers and read by
-	 * the I/O emulation framework. Refer to vhm_request. */
-	int32_t processed;
-
 	/** Details of this request in the same format as vhm_request. */
 	union vhm_io_request reqs;
 };
@@ -122,8 +122,8 @@ int register_mmio_emulation_handler(struct vm *vm,
 	uint64_t end, void *handler_private_data);
 void unregister_mmio_emulation_handler(struct vm *vm, uint64_t start,
 	uint64_t end);
-int32_t emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req);
-int32_t dm_emulate_mmio_post(struct vcpu *vcpu);
+void emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req);
+void dm_emulate_mmio_post(struct vcpu *vcpu);
 int32_t emulate_io(struct vcpu *vcpu, struct io_request *io_req);
 void emulate_io_post(struct vcpu *vcpu);
diff --git a/hypervisor/include/public/acrn_common.h b/hypervisor/include/public/acrn_common.h
index df00de8..8e24602 100644
--- a/hypervisor/include/public/acrn_common.h
+++ b/hypervisor/include/public/acrn_common.h
@@ -30,7 +30,6 @@
 #define REQ_STATE_PENDING	0
 #define REQ_STATE_COMPLETE	1
 #define REQ_STATE_PROCESSING	2
-#define REQ_STATE_FAILED	-1
 
 #define REQ_PORTIO	0U
 #define REQ_MMIO	1U
@@ -97,8 +96,6 @@ union vhm_io_request {
 * The state transitions of a VHM request are:
 *
 *    FREE -> PENDING -> PROCESSING -> COMPLETE -> FREE -> ...
- *              \                         /
- *               +-------> FAILED -------+
 *
 * When a request is in COMPLETE or FREE state, the request is owned by the
 * hypervisor. SOS (VHM or DM) shall not read or write the internals of the
@@ -154,12 +151,6 @@ union vhm_io_request {
 *
 * 2. Due to similar reasons, setting state to COMPLETE is the last operation
 *    of request handling in VHM or clients in SOS.
- *
- * The state FAILED is an obsolete state to indicate that the I/O request cannot
- * be handled. In such cases the mediators and DM should switch the state to
- * COMPLETE with the value set to all 1s for read, and skip the request for
- * writes. This state WILL BE REMOVED after the mediators and DM are updated to
- * follow this rule.
 */
struct vhm_request {
@@ -208,7 +199,7 @@ struct vhm_request {
 	 *
 	 * Byte offset: 136.
 	 */
-	int32_t processed;
+	uint32_t processed;
 } __aligned(256);
 
 union vhm_request_buffer {
-- 
2.7.4
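The new caller contract, restated as a condensed sketch (not the literal hypervisor code; the stubs only keep it self-contained):

#include <stdint.h>

#define IOREQ_PENDING	1

struct vcpu;
struct io_request;

/* Stubs standing in for the real routines; only the control flow
 * in handle_pio_exit() is the point of this sketch. */
static int32_t emulate_io(struct vcpu *vcpu, struct io_request *io_req)
{
	(void)vcpu; (void)io_req;
	return IOREQ_PENDING;	/* pretend the request went to VHM */
}

static void emulate_pio_post(struct vcpu *vcpu, struct io_request *io_req)
{
	(void)vcpu; (void)io_req;
}

static int32_t handle_pio_exit(struct vcpu *vcpu, struct io_request *io_req)
{
	int32_t status = emulate_io(vcpu, io_req);

	if (status == 0) {
		/* completed inside the hypervisor: post-work runs now */
		emulate_pio_post(vcpu, io_req);
	} else if (status == IOREQ_PENDING) {
		/* sent to VHM: dm_emulate_pio_post() runs on vcpu resume,
		 * so do not post here; this is the race being closed */
		status = 0;
	}
	return status;
}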
|
|
[PATCH v4 1/1] vbs: fix virtio_vq_index_get func handling of multi VQ concurrent request.
Ong Hock Yu <ong.hock.yu@...>
Under a multiple-VQ use case, it is possible to have concurrent requests.
Added support to return multiple vq indexes from all vcpus.

Change-Id: I558941fe9e1f524b91358fb9bec4c96277fd5681
Signed-off-by: Ong Hock Yu <ong.hock.yu@...>
---
 drivers/vbs/vbs.c       | 20 ++++++++++++++------
 drivers/vbs/vbs_rng.c   |  2 +-
 include/linux/vbs/vbs.h |  8 ++++++--
 3 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/drivers/vbs/vbs.c b/drivers/vbs/vbs.c
index 65b8a0bbe3d3..d3dcbe77ebee 100644
--- a/drivers/vbs/vbs.c
+++ b/drivers/vbs/vbs.c
@@ -148,9 +148,11 @@ long virtio_dev_deregister(struct virtio_dev_info *dev)
 	return 0;
 }
 
-int virtio_vq_index_get(struct virtio_dev_info *dev, unsigned long *ioreqs_map)
+int virtio_vqs_index_get(struct virtio_dev_info *dev,
+			 unsigned long *ioreqs_map,
+			 int *vqs_index, int max_vqs_index)
 {
-	int val = -1;
+	int idx = 0;
 	struct vhm_request *req;
 	int vcpu;
 
@@ -159,7 +161,13 @@ int virtio_vqs_index_get(struct virtio_dev_info *dev,
 		return -EINVAL;
 	}
 
-	while (1) {
+	if (max_vqs_index < dev->_ctx.max_vcpu)
+		pr_info("%s: The allocated vqs size (%d) is smaller than the "
+			"number of vcpus (%d)! This might cause the handling "
+			"of some requests to be delayed.", __func__,
+			max_vqs_index, dev->_ctx.max_vcpu);
+
+	while (idx < max_vqs_index) {
 		vcpu = find_first_bit(ioreqs_map, dev->_ctx.max_vcpu);
 		if (vcpu == dev->_ctx.max_vcpu)
 			break;
@@ -179,9 +187,9 @@ int virtio_vqs_index_get(struct virtio_dev_info *dev,
 			pr_debug("%s: write request! type %d\n",
 				 __func__, req->type);
 			if (dev->io_range_type == PIO_RANGE)
-				val = req->reqs.pio_request.value;
+				vqs_index[idx++] = req->reqs.pio_request.value;
 			else
-				val = req->reqs.mmio_request.value;
+				vqs_index[idx++] = req->reqs.mmio_request.value;
 		}
 		smp_mb();
 		atomic_set(&req->processed, REQ_STATE_COMPLETE);
@@ -189,7 +197,7 @@ int virtio_vqs_index_get(struct virtio_dev_info *dev,
 		}
 	}
 
-	return val;
+	return idx;
 }
 
 static long virtio_vqs_info_set(struct virtio_dev_info *dev,
diff --git a/drivers/vbs/vbs_rng.c b/drivers/vbs/vbs_rng.c
index fd2bb27af66e..c5e28cc12c55 100644
--- a/drivers/vbs/vbs_rng.c
+++ b/drivers/vbs/vbs_rng.c
@@ -268,7 +268,7 @@ static int handle_kick(int client_id, unsigned long *ioreqs_map)
 		return -EINVAL;
 	}
 
-	val = virtio_vq_index_get(&rng->dev, ioreqs_map);
+	virtio_vqs_index_get(&rng->dev, ioreqs_map, &val, 1);
 
 	if (val >= 0)
 		handle_vq_kick(rng, val);
diff --git a/include/linux/vbs/vbs.h b/include/linux/vbs/vbs.h
index 30df8ebf68a0..964bacac865c 100644
--- a/include/linux/vbs/vbs.h
+++ b/include/linux/vbs/vbs.h
@@ -262,7 +262,7 @@ long virtio_dev_register(struct virtio_dev_info *dev);
 long virtio_dev_deregister(struct virtio_dev_info *dev);
 
 /**
- * virtio_vq_index_get - get virtqueue index that frontend kicks
+ * virtio_vqs_index_get - get virtqueue indexes that frontend kicks
 *
 * This API is normally called in the VBS-K device's callback
 * function, to get value write to the "kick" register from
@@ -270,10 +270,14 @@ long virtio_dev_deregister(struct virtio_dev_info *dev);
 *
 * @dev: Pointer to VBS-K device data struct
 * @ioreqs_map: requests bitmap need to handle, provided by VHM
+ * @vqs_index: array to store the vq indexes
+ * @max_vqs_index: size of vqs_index array
 *
 * Return: >=0 on virtqueue index, <0 on error
 */
-int virtio_vq_index_get(struct virtio_dev_info *dev, unsigned long *ioreqs_map);
+int virtio_vqs_index_get(struct virtio_dev_info *dev,
+			 unsigned long *ioreqs_map,
+			 int *vqs_index, int max_vqs_index);
 
 /**
 * virtio_dev_reset - reset a VBS-K device
-- 
2.17.0
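A usage sketch for the reworked API (kernel context assumed); the MAX_VCPU bound and the handle_one_vq() consumer are illustrative, not part of the patch:

#define MAX_VCPU 8

extern void handle_one_vq(int vq_idx);	/* illustrative consumer */

static void drain_kicks(struct virtio_dev_info *dev,
			unsigned long *ioreqs_map)
{
	int vqs_index[MAX_VCPU];
	int i, n;

	/* one call now drains every vcpu's pending kick, instead of
	 * returning a single index per call */
	n = virtio_vqs_index_get(dev, ioreqs_map, vqs_index, MAX_VCPU);
	for (i = 0; i < n; i++)
		handle_one_vq(vqs_index[i]);
}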
|
|
[PATCH v2] HV: Enclose debug specific code with #ifdef HV_DEBUG
There is some debug-specific code which doesn't run on the release version, such as vmexit_time,
vmexit_cnt, sbuf related code, etc. This patch encloses this code with #ifdef HV_DEBUG.

---
v1 -> v2:
- Enclose sbuf related code with #ifdef HV_DEBUG

Signed-off-by: Kaige Fu <kaige.fu@...>
---
 hypervisor/common/hv_main.c           | 12 +++++++++---
 hypervisor/common/hypercall.c         |  7 +++++++
 hypervisor/include/arch/x86/per_cpu.h |  4 +++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/hypervisor/common/hv_main.c b/hypervisor/common/hv_main.c
index 15f1d9d..e3657f9 100644
--- a/hypervisor/common/hv_main.c
+++ b/hypervisor/common/hv_main.c
@@ -59,11 +59,13 @@ void vcpu_thread(struct vcpu *vcpu)
 			continue;
 		}
 
+#ifdef HV_DEBUG
 		vmexit_end = rdtsc();
 		if (vmexit_begin != 0UL) {
 			per_cpu(vmexit_time, vcpu->pcpu_id)[basic_exit_reason]
 				+= (vmexit_end - vmexit_begin);
 		}
+#endif
 		TRACE_2L(TRACE_VM_ENTER, 0UL, 0UL);
 
 		/* Restore guest TSC_AUX */
@@ -79,7 +81,9 @@ void vcpu_thread(struct vcpu *vcpu)
 			continue;
 		}
 
+#ifdef HV_DEBUG
 		vmexit_begin = rdtsc();
+#endif
 		vcpu->arch_vcpu.nrexits++;
 
 		/* Save guest TSC_AUX */
@@ -90,16 +94,18 @@ void vcpu_thread(struct vcpu *vcpu)
 		CPU_IRQ_ENABLE();
 		/* Dispatch handler */
 		ret = vmexit_handler(vcpu);
+		basic_exit_reason = vcpu->arch_vcpu.exit_reason & 0xFFFFU;
 		if (ret < 0) {
 			pr_fatal("dispatch VM exit handler failed for reason"
-				" %d, ret = %d!",
-				vcpu->arch_vcpu.exit_reason & 0xFFFFU, ret);
+				" %d, ret = %d!", basic_exit_reason, ret);
 			vcpu_inject_gp(vcpu, 0U);
 			continue;
 		}
 
-		basic_exit_reason = vcpu->arch_vcpu.exit_reason & 0xFFFFU;
+#ifdef HV_DEBUG
 		per_cpu(vmexit_cnt, vcpu->pcpu_id)[basic_exit_reason]++;
+#endif
+
 		TRACE_2L(TRACE_VM_EXIT, basic_exit_reason, vcpu_get_rip(vcpu));
 	} while (1);
 }
diff --git a/hypervisor/common/hypercall.c b/hypervisor/common/hypercall.c
index f8efb35..4d9a23b 100644
--- a/hypervisor/common/hypercall.c
+++ b/hypervisor/common/hypercall.c
@@ -788,6 +788,7 @@ hcall_reset_ptdev_intr_info(struct vm *vm, uint16_t vmid, uint64_t param)
 	return ret;
 }
 
+#ifdef HV_DEBUG
 int32_t hcall_setup_sbuf(struct vm *vm, uint64_t param)
 {
 	struct sbuf_setup_param ssp;
@@ -808,6 +809,12 @@ int32_t hcall_setup_sbuf(struct vm *vm, uint64_t param)
 
 	return sbuf_share_setup(ssp.pcpu_id, ssp.sbuf_id, hva);
 }
+#else
+int32_t hcall_setup_sbuf(__unused struct vm *vm, __unused uint64_t param)
+{
+	return -ENODEV;
+}
+#endif
 
 int32_t hcall_get_cpu_pm_state(struct vm *vm, uint64_t cmd, uint64_t param)
 {
diff --git a/hypervisor/include/arch/x86/per_cpu.h b/hypervisor/include/arch/x86/per_cpu.h
index 4076e27..758f9c2 100644
--- a/hypervisor/include/arch/x86/per_cpu.h
+++ b/hypervisor/include/arch/x86/per_cpu.h
@@ -19,10 +19,12 @@
 #include "arch/x86/guest/instr_emul.h"
 
 struct per_cpu_region {
+#ifdef HV_DEBUG
 	uint64_t *sbuf[ACRN_SBUF_ID_MAX];
-	uint64_t irq_count[NR_IRQS];
 	uint64_t vmexit_cnt[64];
 	uint64_t vmexit_time[64];
+#endif
+	uint64_t irq_count[NR_IRQS];
 	uint64_t softirq_pending;
 	uint64_t spurious;
 	uint64_t vmxon_region_pa;
-- 
2.7.4
|
|
[PATCH] hv: treewide: fix 'Shifting value too far'
Shiqing Gao
MISRA C requires that a shift operation not exceed the word length of the operand.

What this patch does:
- Add the pre condition for 'init_lapic' regarding 'pcpu_id'.
  Currently, max 8 physical cpus are supported. A re-design will be
  required if we would like to support more physical cpus. So, add the
  pre condition here to avoid unintentional shift operation mistakes.
- Replace the id type with uint8_t in 'vlapic_build_id':
  - For VM0, it uses 'lapic_id' as its id, which is uint8_t.
  - For non-VM0, it uses 'vcpu_id' as its id, which is uint16_t. Cast
    this id to uint8_t to make sure there is no loss of data after left
    shifting by 24U.

Signed-off-by: Shiqing Gao <shiqing.gao@...>
---
 hypervisor/arch/x86/guest/vlapic.c | 14 ++++++++------
 hypervisor/arch/x86/lapic.c        |  3 +++
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/x86/guest/vlapic.c b/hypervisor/arch/x86/guest/vlapic.c
index 567270f..6cc1617 100644
--- a/hypervisor/arch/x86/guest/vlapic.c
+++ b/hypervisor/arch/x86/guest/vlapic.c
@@ -169,19 +169,21 @@ static inline uint32_t
 vlapic_build_id(struct acrn_vlapic *vlapic)
 {
 	struct vcpu *vcpu = vlapic->vcpu;
-	uint16_t id;
+	uint8_t vlapic_id;
+	uint32_t lapic_regs_id;
 
 	if (is_vm0(vcpu->vm)) {
 		/* Get APIC ID sequence format from cpu_storage */
-		id = per_cpu(lapic_id, vcpu->vcpu_id);
+		vlapic_id = per_cpu(lapic_id, vcpu->vcpu_id);
 	} else {
-		id = vcpu->vcpu_id;
+		vlapic_id = (uint8_t)vcpu->vcpu_id;
 	}
 
-	dev_dbg(ACRN_DBG_LAPIC, "vlapic APIC PAGE ID : 0x%08x",
-		(id << APIC_ID_SHIFT));
+	lapic_regs_id = vlapic_id << APIC_ID_SHIFT;
 
-	return (id << APIC_ID_SHIFT);
+	dev_dbg(ACRN_DBG_LAPIC, "vlapic APIC PAGE ID : 0x%08x", lapic_regs_id);
+
+	return lapic_regs_id;
 }
 
 static void
diff --git a/hypervisor/arch/x86/lapic.c b/hypervisor/arch/x86/lapic.c
index aa35e0a..6c7752c 100644
--- a/hypervisor/arch/x86/lapic.c
+++ b/hypervisor/arch/x86/lapic.c
@@ -209,6 +209,9 @@ void early_init_lapic(void)
 	}
 }
 
+/**
+ * @pre pcpu_id < 8U
+ */
 void init_lapic(uint16_t pcpu_id)
 {
 	/* Set the Logical Destination Register */
-- 
1.9.1
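A minimal standalone illustration (not project code) of why the narrowing cast matters: a 16-bit id shifted left by 24 pushes bits past bit 31 and loses data, while an explicit uint8_t keeps the value within the top byte.

#include <stdint.h>
#include <stdio.h>

#define APIC_ID_SHIFT 24U

int main(void)
{
	uint16_t vcpu_id = 0x0102U;            /* wider than one byte */
	uint8_t  vlapic_id = (uint8_t)vcpu_id; /* explicit narrowing: 0x02 */
	uint32_t lapic_regs_id = (uint32_t)vlapic_id << APIC_ID_SHIFT;

	/* Without the cast, 0x0102 << 24 would shift bits past bit 31,
	 * silently losing data. */
	printf("0x%08x\n", lapic_regs_id);     /* prints 0x02000000 */
	return 0;
}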
|
|
[PATCH] drm/i915: fix a kernel panic issue of plane restriction
He, Min <min.he@...>
When the plane restriction feature is enabled in the service OS, there could be the
case that some CRTCs have no primary plane. If we don't assign any plane of pipe A to the SOS, e.g. i915.avail_planes_per_pipe=0x000F00, it will cause a kernel panic at boot because intel_find_initial_plane_obj() assumes a primary plane exists.

Added a check for the primary plane in the CRTC to avoid this kind of issue.

Signed-off-by: Min He <min.he@...>
---
 drivers/gpu/drm/i915/intel_display.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 01b432438bac..6aecd4c150d1 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -15058,7 +15058,7 @@ int intel_modeset_init(struct drm_device *dev)
 	for_each_intel_crtc(dev, crtc) {
 		struct intel_initial_plane_config plane_config = {};
 
-		if (!crtc->active)
+		if (!crtc->active || !crtc->base.primary)
 			continue;
 
 		/*
-- 
2.17.0
|
|
[RFC PATCH] hv: vioapic: simple vioapic set rte
Li, Fei1
1. do nothing if nothing changed.
2. update ptdev native ioapic rte when (a) something changed and
   (b) the old rte interrupt mask is not masked or the new rte
   interrupt mask is not masked.

Signed-off-by: Li, Fei1 <fei1.li@...>
---
 hypervisor/dm/vioapic.c | 28 ++++++++++------------------
 1 file changed, 10 insertions(+), 18 deletions(-)

diff --git a/hypervisor/dm/vioapic.c b/hypervisor/dm/vioapic.c
index 119eabe..7c1d95d 100644
--- a/hypervisor/dm/vioapic.c
+++ b/hypervisor/dm/vioapic.c
@@ -313,12 +313,12 @@ vioapic_indirect_write(struct vioapic *vioapic, uint32_t addr, uint32_t data)
 	if ((regnum >= IOAPIC_REDTBL) &&
 	    (regnum < (IOAPIC_REDTBL + (pincount * 2U)))) {
 		uint32_t addr_offset = regnum - IOAPIC_REDTBL;
-		uint32_t rte_offset = addr_offset / 2U;
+		uint32_t rte_offset = addr_offset >> 1U;
 		pin = rte_offset;
 
 		last = vioapic->rtbl[pin];
 		new = last;
-		if ((addr_offset % 2U) != 0U) {
+		if ((addr_offset & 1U) != 0U) {
 			new.u.hi_32 = data;
 		} else {
 			new.u.lo_32 &= RTBL_RO_BITS;
@@ -336,11 +336,13 @@ vioapic_indirect_write(struct vioapic *vioapic, uint32_t addr, uint32_t data)
 		}
 
 		changed = last.full ^ new.full;
+		if (changed == 0UL) {
+			return;
+		}
 		/* pin0 from vpic mask/unmask */
 		if ((pin == 0U) && ((changed & IOAPIC_RTE_INTMASK) != 0UL)) {
 			/* mask -> umask */
-			if (((last.full & IOAPIC_RTE_INTMASK) != 0UL) &&
-			    ((new.full & IOAPIC_RTE_INTMASK) == 0UL)) {
+			if ((last.full & IOAPIC_RTE_INTMASK) != 0UL) {
 				if ((vioapic->vm->wire_mode ==
 				    VPIC_WIRE_NULL) ||
 				    (vioapic->vm->wire_mode ==
@@ -354,8 +356,7 @@ vioapic_indirect_write(struct vioapic *vioapic, uint32_t addr, uint32_t data)
 					return;
 				}
 			/* unmask -> mask */
-			} else if (((last.full & IOAPIC_RTE_INTMASK) == 0UL) &&
-				   ((new.full & IOAPIC_RTE_INTMASK) != 0UL)) {
+			} else {
 				if (vioapic->vm->wire_mode ==
 				    VPIC_WIRE_IOAPIC) {
 					vioapic->vm->wire_mode =
@@ -363,9 +364,6 @@ vioapic_indirect_write(struct vioapic *vioapic, uint32_t addr, uint32_t data)
 					dev_dbg(ACRN_DBG_IOAPIC,
 						"vpic wire mode -> INTR");
 				}
-			} else {
-				/* Can never happen since IOAPIC_RTE_INTMASK
-				 * is changed. */
 			}
 		}
 		vioapic->rtbl[pin] = new;
@@ -405,15 +403,9 @@ vioapic_indirect_write(struct vioapic *vioapic, uint32_t addr, uint32_t data)
 			vioapic_send_intr(vioapic, pin);
 		}
 
-		/* remap for active: interrupt mask -> unmask
-		 * remap for deactive: interrupt mask & vector set to 0
-		 * remap for trigger mode change
-		 * remap for polarity change
-		 */
-		if ( ((changed & IOAPIC_RTE_INTMASK) != 0UL) ||
-		     ((changed & IOAPIC_RTE_TRGRMOD) != 0UL) ||
-		     ((changed & IOAPIC_RTE_INTPOL ) != 0UL) ) {
-
+		/* remap for ptdev */
+		if (((new.full & IOAPIC_RTE_INTMASK) == 0UL) ||
+		    ((last.full & IOAPIC_RTE_INTMASK) == 0U)) {
 			/* VM enable intr */
 			struct ptdev_intx_info intx;
-- 
2.7.4
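The remap condition, restated as a condensed standalone sketch (not the literal patch): remap whenever the RTE changed and at least one side of the transition is unmasked.

#include <stdbool.h>
#include <stdint.h>

#define IOAPIC_RTE_INTMASK 0x0000000000010000UL	/* bit 16: mask */

/* Remap the pass-through RTE only when something changed and either
 * the old or the new RTE is unmasked; a masked->masked edit can wait. */
static bool need_ptdev_remap(uint64_t last, uint64_t new)
{
	uint64_t changed = last ^ new;

	if (changed == 0UL)
		return false;
	return ((new & IOAPIC_RTE_INTMASK) == 0UL) ||
	       ((last & IOAPIC_RTE_INTMASK) == 0UL);
}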
|
|
[RFC PATCH] Simple vioapic set rte
Li, Fei1
Now the guest may change "Destination Field", "Trigger Mode", "Interrupt Input Pin Polarity"
even "Interrupt Vector" when "Interrupt Mask" not masked. So we should update the pass through device interrupt pin rte in this situation. The old logic would update it only when "Interrupt Mask" or "Trigger Mode" or "Interrupt Input Pin Polarity" was changed. And a monir modify: return earlier if nothing changed. Li, Fei1 (1): hv: vioapic: simple vioapic set rte hypervisor/dm/vioapic.c | 28 ++++++++++------------------ 1 file changed, 10 insertions(+), 18 deletions(-) -- 2.7.4
|
|
Re: [PATCH v2] tools: acrntrace: Add ring buffer mode
Hi Eddie,
On 08-13 Mon 15:49, Eddie Dong wrote:
> We don't want to complicate the debug tools too much. I remember we can
> already set the size of the log file. That is enough.

Per the current implementation, we have removed the size limitation of the log file and store the trace data in the current dir instead of /temp/. It is the user's responsibility to avoid running out of disk storage.

Thx Eddie

-----Original Message-----
|
|
Re: [PATCH v2] tools: acrntrace: Add ring buffer mode
On 08-13 Mon 14:12, Geoffroy Van Cutsem wrote:
-----Original Message-----
> Suggestion: s/when full/when reaching the end of the buffer

This paragraph has been removed. In the current implementation, we store the trace data in the current directory with no size limitation. Now, it's the user's responsibility to avoid running out of disk storage.

Thanks,
|
|
Re: [PATCH 1/2] vhm: add ioeventfd support for ACRN hypervisor service module
Shuo A Liu
On Mon 13.Aug'18 at 17:03:44 +0800, Jie Deng wrote:
Sure. I will change it to (p->wildcard || p->data == data). Thanks
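For context, a sketch of the match predicate under discussion; the struct fields are assumptions modeled on the usual ioeventfd shape, not the actual ACRN patch:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical ioeventfd entry; field names are illustrative. */
struct ioeventfd {
	uint64_t addr;     /* guest address of the doorbell */
	uint64_t data;     /* value to match when !wildcard */
	bool     wildcard; /* true: signal on any written value */
};

static bool ioeventfd_match(const struct ioeventfd *p,
			    uint64_t addr, uint64_t data)
{
	/* A wildcard entry matches any data written to its address. */
	return p->addr == addr && (p->wildcard || p->data == data);
}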
|
|
[PATCH v2] HV: Add const qualifiers where required
Arindam Roy <arindam.roy@...>
V1:
In order to better comply with MISRA C, add const qualifiers wherever required. In the patch, these are being added to pointers which are normally used in "get" functions.

V2:
Corrected the issues in the patch pointed out by Junjie in his review comments. Moved the const qualifiers to the correct places. Removed some changes which are not needed.

Signed-off-by: Arindam Roy <arindam.roy@...>
---
 hypervisor/arch/x86/cpu_state_tbl.c       |  2 +-
 hypervisor/arch/x86/cpuid.c               |  6 +++---
 hypervisor/arch/x86/ept.c                 | 14 +++++++-------
 hypervisor/arch/x86/guest/guest.c         | 10 +++++-----
 hypervisor/include/arch/x86/cpuid.h       |  2 +-
 hypervisor/include/arch/x86/guest/guest.h |  6 +++---
 hypervisor/include/arch/x86/io.h          | 18 +++++++++---------
 hypervisor/include/arch/x86/mmu.h         | 22 +++++++++++-----------
 hypervisor/include/arch/x86/pgtable.h     |  8 ++++----
 hypervisor/include/arch/x86/vmx.h         |  2 +-
 10 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/hypervisor/arch/x86/cpu_state_tbl.c b/hypervisor/arch/x86/cpu_state_tbl.c
index 6f192dad..80093c96 100644
--- a/hypervisor/arch/x86/cpu_state_tbl.c
+++ b/hypervisor/arch/x86/cpu_state_tbl.c
@@ -89,7 +89,7 @@ static const struct cpu_state_table {
 	}
 };
 
-static int get_state_tbl_idx(char *cpuname)
+static int get_state_tbl_idx(const char *cpuname)
 {
 	int i;
 	int count = ARRAY_SIZE(cpu_state_tbl);
diff --git a/hypervisor/arch/x86/cpuid.c b/hypervisor/arch/x86/cpuid.c
index 18ebfcc7..215bdcce 100644
--- a/hypervisor/arch/x86/cpuid.c
+++ b/hypervisor/arch/x86/cpuid.c
@@ -8,7 +8,7 @@
 
 extern bool x2apic_enabled;
 
-static inline struct vcpuid_entry *find_vcpuid_entry(struct vcpu *vcpu,
+static inline struct vcpuid_entry *find_vcpuid_entry(const struct vcpu *vcpu,
 					uint32_t leaf_arg, uint32_t subleaf)
 {
 	uint32_t i = 0U, nr, half;
@@ -66,7 +66,7 @@ static inline struct vcpuid_entry *find_vcpuid_entry(struct vcpu *vcpu,
 }
 
 static inline int set_vcpuid_entry(struct vm *vm,
-				struct vcpuid_entry *entry)
+				const struct vcpuid_entry *entry)
 {
 	struct vcpuid_entry *tmp;
 	size_t entry_size = sizeof(struct vcpuid_entry);
@@ -273,7 +273,7 @@ int set_vcpuid_entries(struct vm *vm)
 	return 0;
 }
 
-void guest_cpuid(struct vcpu *vcpu,
+void guest_cpuid(const struct vcpu *vcpu,
 		uint32_t *eax, uint32_t *ebx,
 		uint32_t *ecx, uint32_t *edx)
 {
diff --git a/hypervisor/arch/x86/ept.c b/hypervisor/arch/x86/ept.c
index a7bcb43e..6a327574 100644
--- a/hypervisor/arch/x86/ept.c
+++ b/hypervisor/arch/x86/ept.c
@@ -10,7 +10,7 @@
 
 #define ACRN_DBG_EPT	6U
 
-static uint64_t find_next_table(uint32_t table_offset, void *table_base)
+static uint64_t find_next_table(uint32_t table_offset, const void *table_base)
 {
 	uint64_t table_entry;
 	uint64_t table_present;
@@ -100,7 +100,7 @@ void destroy_ept(struct vm *vm)
 	free_ept_mem(vm->arch_vm.m2p);
 }
 
-uint64_t local_gpa2hpa(struct vm *vm, uint64_t gpa, uint32_t *size)
+uint64_t local_gpa2hpa(const struct vm *vm, uint64_t gpa, uint32_t *size)
 {
 	uint64_t hpa = 0UL;
 	uint64_t *pgentry, pg_size = 0UL;
@@ -124,12 +124,12 @@ uint64_t local_gpa2hpa(struct vm *vm, uint64_t gpa, uint32_t *size)
 }
 
 /* using return value 0 as failure, make sure guest will not use hpa 0 */
-uint64_t gpa2hpa(struct vm *vm, uint64_t gpa)
+uint64_t gpa2hpa(const struct vm *vm, uint64_t gpa)
 {
 	return local_gpa2hpa(vm, gpa, NULL);
 }
 
-uint64_t hpa2gpa(struct vm *vm, uint64_t hpa)
+uint64_t hpa2gpa(const struct vm *vm, uint64_t hpa)
 {
 	uint64_t *pgentry, pg_size = 0UL;
 
@@ -284,7 +284,7 @@ int ept_misconfig_vmexit_handler(__unused struct vcpu *vcpu)
 	return status;
 }
 
-int ept_mr_add(struct vm *vm, uint64_t hpa_arg,
+int ept_mr_add(const struct vm *vm, uint64_t hpa_arg,
 	uint64_t gpa_arg, uint64_t size, uint32_t prot_arg)
 {
 	struct mem_map_params map_params;
@@ -323,7 +323,7 @@ int ept_mr_add(struct vm *vm, uint64_t hpa_arg,
 	return 0;
 }
 
-int ept_mr_modify(struct vm *vm, uint64_t *pml4_page,
+int ept_mr_modify(const struct vm *vm, uint64_t *pml4_page,
 		uint64_t gpa, uint64_t size,
 		uint64_t prot_set, uint64_t prot_clr)
 {
@@ -341,7 +341,7 @@ int ept_mr_modify(struct vm *vm, uint64_t *pml4_page,
 	return ret;
 }
 
-int ept_mr_del(struct vm *vm, uint64_t *pml4_page,
+int ept_mr_del(const struct vm *vm, uint64_t *pml4_page,
 		uint64_t gpa, uint64_t size)
 {
 	struct vcpu *vcpu;
diff --git a/hypervisor/arch/x86/guest/guest.c b/hypervisor/arch/x86/guest/guest.c
index dfb33bca..47692e30 100644
--- a/hypervisor/arch/x86/guest/guest.c
+++ b/hypervisor/arch/x86/guest/guest.c
@@ -46,7 +46,7 @@ uint64_t vcpumask2pcpumask(struct vm *vm, uint64_t vdmask)
 	return dmask;
 }
 
-bool vm_lapic_disabled(struct vm *vm)
+bool vm_lapic_disabled(const struct vm *vm)
 {
 	uint16_t i;
 	struct vcpu *vcpu;
@@ -276,7 +276,7 @@ int gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa,
 	return ret;
 }
 
-static inline uint32_t local_copy_gpa(struct vm *vm, void *h_ptr, uint64_t gpa,
+static inline uint32_t local_copy_gpa(const struct vm *vm, void *h_ptr, uint64_t gpa,
 	uint32_t size, uint32_t fix_pg_size, bool cp_from_vm)
 {
 	uint64_t hpa;
@@ -308,7 +308,7 @@ static inline uint32_t local_copy_gpa(struct vm *vm, void *h_ptr, uint64_t gpa,
 	return len;
 }
 
-static inline int copy_gpa(struct vm *vm, void *h_ptr_arg, uint64_t gpa_arg,
+static inline int copy_gpa(const struct vm *vm, void *h_ptr_arg, uint64_t gpa_arg,
 	uint32_t size_arg, bool cp_from_vm)
 {
 	void *h_ptr = h_ptr_arg;
@@ -385,12 +385,12 @@ static inline int copy_gva(struct vcpu *vcpu, void *h_ptr_arg, uint64_t gva_arg,
  * - some other gpa from hypercall parameters, VHM should make sure it's
  * continuous
  */
-int copy_from_gpa(struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size)
+int copy_from_gpa(const struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size)
 {
 	return copy_gpa(vm, h_ptr, gpa, size, 1);
 }
 
-int copy_to_gpa(struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size)
+int copy_to_gpa(const struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size)
 {
 	return copy_gpa(vm, h_ptr, gpa, size, 0);
 }
diff --git a/hypervisor/include/arch/x86/cpuid.h b/hypervisor/include/arch/x86/cpuid.h
index 8fef50fc..35003928 100644
--- a/hypervisor/include/arch/x86/cpuid.h
+++ b/hypervisor/include/arch/x86/cpuid.h
@@ -132,7 +132,7 @@ static inline void cpuid_subleaf(uint32_t leaf, uint32_t subleaf,
 }
 
 int set_vcpuid_entries(struct vm *vm);
-void guest_cpuid(struct vcpu *vcpu,
+void guest_cpuid(const struct vcpu *vcpu,
 		uint32_t *eax, uint32_t *ebx,
 		uint32_t *ecx, uint32_t *edx);
diff --git a/hypervisor/include/arch/x86/guest/guest.h b/hypervisor/include/arch/x86/guest/guest.h
index 0e291a75..46ef2143 100644
--- a/hypervisor/include/arch/x86/guest/guest.h
+++ b/hypervisor/include/arch/x86/guest/guest.h
@@ -93,7 +93,7 @@ enum vm_paging_mode {
 /*
 * VM related APIs
 */
-bool vm_lapic_disabled(struct vm *vm);
+bool vm_lapic_disabled(const struct vm *vm);
 uint64_t vcpumask2pcpumask(struct vm *vm, uint64_t vdmask);
 int gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa,
 	uint32_t *err_code);
@@ -121,8 +121,8 @@ int general_sw_loader(struct vm *vm, struct vcpu *vcpu);
 typedef int (*vm_sw_loader_t)(struct vm *vm, struct vcpu *vcpu);
 extern vm_sw_loader_t vm_sw_loader;
 
-int copy_from_gpa(struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size);
-int copy_to_gpa(struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size);
+int copy_from_gpa(const struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size);
+int copy_to_gpa(const struct vm *vm, void *h_ptr, uint64_t gpa, uint32_t size);
 int copy_from_gva(struct vcpu *vcpu, void *h_ptr, uint64_t gva,
 	uint32_t size, uint32_t *err_code, uint64_t *fault_addr);
 int copy_to_gva(struct vcpu *vcpu, void *h_ptr, uint64_t gva,
diff --git a/hypervisor/include/arch/x86/io.h b/hypervisor/include/arch/x86/io.h
index e2f4a8ec..e13dcb34 100644
--- a/hypervisor/include/arch/x86/io.h
+++ b/hypervisor/include/arch/x86/io.h
@@ -81,7 +81,7 @@ static inline uint32_t pio_read(uint16_t addr, size_t sz)
 * @param value The 32 bit value to write.
 * @param addr The memory address to write to.
 */
-static inline void mmio_write32(uint32_t value, void *addr)
+static inline void mmio_write32(uint32_t value, const void *addr)
 {
 	volatile uint32_t *addr32 = (volatile uint32_t *)addr;
 	*addr32 = value;
@@ -92,7 +92,7 @@ static inline void mmio_write32(uint32_t value, void *addr)
 * @param value The 16 bit value to write.
 * @param addr The memory address to write to.
 */
-static inline void mmio_write16(uint16_t value, void *addr)
+static inline void mmio_write16(uint16_t value, const void *addr)
 {
 	volatile uint16_t *addr16 = (volatile uint16_t *)addr;
 	*addr16 = value;
@@ -103,7 +103,7 @@ static inline void mmio_write16(uint16_t value, void *addr)
 * @param value The 8 bit value to write.
 * @param addr The memory address to write to.
 */
-static inline void mmio_write8(uint8_t value, void *addr)
+static inline void mmio_write8(uint8_t value, const void *addr)
 {
 	volatile uint8_t *addr8 = (volatile uint8_t *)addr;
 	*addr8 = value;
@@ -115,7 +115,7 @@ static inline void mmio_write8(uint8_t value, void *addr)
 *
 * @return The 32 bit value read from the given address.
 */
-static inline uint32_t mmio_read32(void *addr)
+static inline uint32_t mmio_read32(const void *addr)
 {
 	return *((volatile uint32_t *)addr);
 }
@@ -126,7 +126,7 @@ static inline uint32_t mmio_read32(void *addr)
 *
 * @return The 16 bit value read from the given address.
 */
-static inline uint16_t mmio_read16(void *addr)
+static inline uint16_t mmio_read16(const void *addr)
 {
 	return *((volatile uint16_t *)addr);
 }
@@ -137,7 +137,7 @@ static inline uint16_t mmio_read16(void *addr)
 *
 * @return The 8 bit value read from the given address.
 */
-static inline uint8_t mmio_read8(void *addr)
+static inline uint8_t mmio_read8(const void *addr)
 {
 	return *((volatile uint8_t *)addr);
 }
@@ -150,7 +150,7 @@ static inline uint8_t mmio_read8(void *addr)
 * @param mask The mask to apply to the value read.
 * @param value The 32 bit value to write.
 */
-static inline void set32(void *addr, uint32_t mask, uint32_t value)
+static inline void set32(const void *addr, uint32_t mask, uint32_t value)
 {
 	uint32_t temp_val;
 
@@ -165,7 +165,7 @@ static inline void set32(void *addr, uint32_t mask, uint32_t value)
 * @param mask The mask to apply to the value read.
 * @param value The 16 bit value to write.
 */
-static inline void set16(void *addr, uint16_t mask, uint16_t value)
+static inline void set16(const void *addr, uint16_t mask, uint16_t value)
 {
 	uint16_t temp_val;
 
@@ -180,7 +180,7 @@ static inline void set16(void *addr, uint16_t mask, uint16_t value)
 * @param mask The mask to apply to the value read.
 * @param value The 8 bit value to write.
 */
-static inline void set8(void *addr, uint8_t mask, uint8_t value)
+static inline void set8(const void *addr, uint8_t mask, uint8_t value)
 {
 	uint8_t temp_val;
diff --git a/hypervisor/include/arch/x86/mmu.h b/hypervisor/include/arch/x86/mmu.h
index 27699035..9918efe0 100644
--- a/hypervisor/include/arch/x86/mmu.h
+++ b/hypervisor/include/arch/x86/mmu.h
@@ -259,27 +259,27 @@ enum _page_table_present {
 #define PAGE_SIZE_1G	MEM_1G
 
 /* Inline functions for reading/writing memory */
-static inline uint8_t mem_read8(void *addr)
+static inline uint8_t mem_read8(const void *addr)
 {
 	return *(volatile uint8_t *)(addr);
 }
 
-static inline uint16_t mem_read16(void *addr)
+static inline uint16_t mem_read16(const void *addr)
 {
 	return *(volatile uint16_t *)(addr);
 }
 
-static inline uint32_t mem_read32(void *addr)
+static inline uint32_t mem_read32(const void *addr)
 {
 	return *(volatile uint32_t *)(addr);
 }
 
-static inline uint64_t mem_read64(void *addr)
+static inline uint64_t mem_read64(const void *addr)
 {
 	return *(volatile uint64_t *)(addr);
}
 
-static inline void mem_write8(void *addr, uint8_t data)
+static inline void mem_write8(const void *addr, uint8_t data)
 {
 	volatile uint8_t *addr8 = (volatile uint8_t *)addr;
 	*addr8 = data;
@@ -379,15 +379,15 @@ static inline void clflush(volatile void *p)
 bool is_ept_supported(void);
 uint64_t create_guest_initial_paging(struct vm *vm);
 void destroy_ept(struct vm *vm);
-uint64_t gpa2hpa(struct vm *vm, uint64_t gpa);
-uint64_t local_gpa2hpa(struct vm *vm, uint64_t gpa, uint32_t *size);
-uint64_t hpa2gpa(struct vm *vm, uint64_t hpa);
-int ept_mr_add(struct vm *vm, uint64_t hpa_arg,
+uint64_t gpa2hpa(const struct vm *vm, uint64_t gpa);
+uint64_t local_gpa2hpa(const struct vm *vm, uint64_t gpa, uint32_t *size);
+uint64_t hpa2gpa(const struct vm *vm, uint64_t hpa);
+int ept_mr_add(const struct vm *vm, uint64_t hpa_arg,
 	uint64_t gpa_arg, uint64_t size, uint32_t prot_arg);
-int ept_mr_modify(struct vm *vm, uint64_t *pml4_page,
+int ept_mr_modify(const struct vm *vm, uint64_t *pml4_page,
 	uint64_t gpa, uint64_t size,
 	uint64_t prot_set, uint64_t prot_clr);
-int ept_mr_del(struct vm *vm, uint64_t *pml4_page,
+int ept_mr_del(const struct vm *vm, uint64_t *pml4_page,
 	uint64_t gpa, uint64_t size);
 void free_ept_mem(void *pml4_addr);
 int ept_violation_vmexit_handler(struct vcpu *vcpu);
diff --git a/hypervisor/include/arch/x86/pgtable.h b/hypervisor/include/arch/x86/pgtable.h
index 51733611..ffa3b179 100644
--- a/hypervisor/include/arch/x86/pgtable.h
+++ b/hypervisor/include/arch/x86/pgtable.h
@@ -53,17 +53,17 @@ static inline uint64_t *pml4e_offset(uint64_t *pml4_page, uint64_t addr)
 	return pml4_page + pml4e_index(addr);
 }
 
-static inline uint64_t *pdpte_offset(uint64_t *pml4e, uint64_t addr)
+static inline uint64_t *pdpte_offset(const uint64_t *pml4e, uint64_t addr)
 {
 	return pml4e_page_vaddr(*pml4e) + pdpte_index(addr);
 }
 
-static inline uint64_t *pde_offset(uint64_t *pdpte, uint64_t addr)
+static inline uint64_t *pde_offset(const uint64_t *pdpte, uint64_t addr)
 {
 	return pdpte_page_vaddr(*pdpte) + pde_index(addr);
 }
 
-static inline uint64_t *pte_offset(uint64_t *pde, uint64_t addr)
+static inline uint64_t *pte_offset(const uint64_t *pde, uint64_t addr)
 {
 	return pde_page_vaddr(*pde) + pte_index(addr);
 }
@@ -71,7 +71,7 @@ static inline uint64_t *pte_offset(uint64_t *pde, uint64_t addr)
 /*
 * pgentry may means pml4e/pdpte/pde/pte
 */
-static inline uint64_t get_pgentry(uint64_t *pte)
+static inline uint64_t get_pgentry(const uint64_t *pte)
 {
 	return *pte;
 }
diff --git a/hypervisor/include/arch/x86/vmx.h b/hypervisor/include/arch/x86/vmx.h
index 938a2017..ede9cf7b 100644
--- a/hypervisor/include/arch/x86/vmx.h
+++ b/hypervisor/include/arch/x86/vmx.h
@@ -449,7 +449,7 @@ int vmx_wrmsr_pat(struct vcpu *vcpu, uint64_t value);
 int vmx_write_cr0(struct vcpu *vcpu, uint64_t cr0);
 int vmx_write_cr4(struct vcpu *vcpu, uint64_t cr4);
 
-static inline enum vm_cpu_mode get_vcpu_mode(struct vcpu *vcpu)
+static inline enum vm_cpu_mode get_vcpu_mode(const struct vcpu *vcpu)
 {
 	return vcpu->arch_vcpu.cpu_mode;
 }
-- 
2.17.1
|
|
Re: [PATCH 0/3] add hkdf key derivation support based on mbedtls
Zhu, Bing
The LOC of code is 1731 (most of the ~5900 LOC are comments).
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                                5            323            155           1359
C/C++ Header                     5            288           3428            372
-------------------------------------------------------------------------------
SUM:                            10            611           3583           1731
-------------------------------------------------------------------------------
-----Original Message-----
|
|
Re: [RFC PATCH 2/2] hv: add a dummy file to do compile time assert
Xu, Anthony
Can we generate the OFFSET MACRO at build time?
Anthony
-----Original Message-----
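One known technique for this, sketched under assumptions (the struct, symbol names and build wiring are illustrative, not ACRN's): compile a file that folds offsetof() values into the generated assembly, then have a build step scrape the markers into a header. This is the classic kbuild "asm-offsets" trick.

/* gen_offsets.c -- compiled with -S, never linked; a script greps the
 * "->" markers out of the assembly to emit a generated header. */
#include <stddef.h>

struct cpu_context {
	unsigned long rax;
	unsigned long rbx;
	unsigned long rsp;
};

#define ASM_OFFSET(sym, type, member) \
	__asm__ volatile("\n->" #sym " %0 " #member \
			 : : "i" (offsetof(struct type, member)))

void make_offsets(void)
{
	ASM_OFFSET(CTX_RAX, cpu_context, rax);
	ASM_OFFSET(CTX_RSP, cpu_context, rsp);
}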
|
|
Re: [PATCH 0/3] add hkdf key derivation support based on mbedtls
Xu, Anthony
15 files changed, 5975 insertions(+), 70 deletions(-)
This patch adds ~5900 LOC. Way too big; it increases the ACRN hypervisor code size by more than 20%. That's my concern. Any other solutions?

SOS is a special VM. If we let SOS know the physical platform, what's the real impact?

Anthony
-----Original Message-----
|
|