Re: [PATCH] HV: hv_main: Add #ifdef HV_DEBUG before debug specific code
On 08-13 Mon 07:59, Eddie Dong wrote:
> This is part of the SBUF component. What kind of compile option do we use? Let us use the same one.

The shell command "vmexit" is the only user of vmexit_time/cnt, and we use get_vmexit_profile, enclosed by HV_DEBUG, to access these variables.

> BTW, hcall side also needs this kind of compile option.

I can't follow here. Can you make it clearer?

--
Thanks
Kaige Fu

-----Original Message-----
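For reference, the compile-time guard under discussion follows this general pattern; the sketch below is illustrative only (the symbol and function names are placeholders, not the actual ACRN sources):

#include <stdint.h>

#ifdef HV_DEBUG
static uint64_t vmexit_cnt;	/* debug-only counter */
static uint64_t vmexit_time;	/* debug-only accumulated cycles */

static inline void profile_vmexit(uint64_t cycles)
{
	vmexit_cnt++;
	vmexit_time += cycles;
}
#else
/* With HV_DEBUG undefined, the profiling code compiles away entirely. */
static inline void profile_vmexit(uint64_t cycles)
{
	(void)cycles;
}
#endif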
|
|
Re: [PATCH] HV: handle trusty on vm reset
Victor Sun
On 8/13/2018 3:55 PM, Mingqiang Chi wrote:
Thanks Mingqiang, I will update it soon.

BR,
Victor

-----Original Message-----
	(void)memset(&vcpu->arch_vcpu.contexts[i], 0U,
		sizeof(struct run_context));
+ }
|
|
Re: [PATCH v2 1/3] HV: Add the emulation of CPUID with 0x16 leaf
Hi Yakui,

If it is physically out of range, returning what the hardware returns is better than a fixed zero in guest_cpuid(). We want the code to be slim to save FuSa effort.

Thx Eddie
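To illustrate the point being argued, a sketch only (not ACRN's actual guest_cpuid(); the helper name and the limit parameter are assumptions): for a leaf beyond the emulated range, reflect the physical CPUID values instead of a fixed-zero entry.

#include <cpuid.h>

static void emulate_cpuid(unsigned int leaf, unsigned int max_emulated_leaf,
		unsigned int *eax, unsigned int *ebx,
		unsigned int *ecx, unsigned int *edx)
{
	if (leaf > max_emulated_leaf) {
		/* Out of the emulated range: return what the hardware returns. */
		__cpuid(leaf, *eax, *ebx, *ecx, *edx);
	} else {
		/* Emulated leaves are handled elsewhere; zero is a placeholder. */
		*eax = *ebx = *ecx = *edx = 0U;
	}
}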
|
|
Re: [PATCH 3/4] HV: Merge hypervisor debug header files
The series is fine.
Please go PR with my ACK.
-----Original Message-----
|
|
Re: [PATCH] HV: hv_main: Add #ifdef HV_DEBUG before debug specific code
This is part of the SBUF component. What kind of compile option do we use? Let us use the same one.
BTW, hcall side also needs this kind of compile option.
-----Original Message-----
|
|
Re: [PATCH] HV: handle trusty on vm reset
Mingqiang Chi
-----Original Message-----
	(void)memset(&vcpu->arch_vcpu.contexts[i], 0U,
		sizeof(struct run_context));
+ }
|
|
[PATCH] HV: handle trusty on vm reset
Victor Sun
- clear run context when reset vcpu;
- destroy trusty without erase trusty memory when reset vm;

Signed-off-by: Sun Victor <victor.sun@...>
Signed-off-by: Yin Fengwei <fengwei.yin@...>
---
 hypervisor/arch/x86/guest/vcpu.c | 7 +++++++
 hypervisor/arch/x86/guest/vm.c   | 2 ++
 2 files changed, 9 insertions(+)

diff --git a/hypervisor/arch/x86/guest/vcpu.c b/hypervisor/arch/x86/guest/vcpu.c
index 9dc0241..a8a9f0e 100644
--- a/hypervisor/arch/x86/guest/vcpu.c
+++ b/hypervisor/arch/x86/guest/vcpu.c
@@ -396,6 +396,7 @@ void destroy_vcpu(struct vcpu *vcpu)
  */
 void reset_vcpu(struct vcpu *vcpu)
 {
+	int i;
 	struct acrn_vlapic *vlapic;
 
 	pr_dbg("vcpu%hu reset", vcpu->vcpu_id);
@@ -419,6 +420,12 @@ void reset_vcpu(struct vcpu *vcpu)
 	vcpu->arch_vcpu.inject_event_pending = false;
 	(void)memset(vcpu->arch_vcpu.vmcs, 0U, CPU_PAGE_SIZE);
 
+	for (i = 0; i < NR_WORLD; i++) {
+		memset(&vcpu->arch_vcpu.contexts[i], 0,
+			sizeof(struct run_context));
+	}
+	vcpu->arch_vcpu.cur_context = NORMAL_WORLD;
+
 	vlapic = vcpu->arch_vcpu.vlapic;
 	vlapic_reset(vlapic);
 }
diff --git a/hypervisor/arch/x86/guest/vm.c b/hypervisor/arch/x86/guest/vm.c
index 4bfda8e..f2efbf9 100644
--- a/hypervisor/arch/x86/guest/vm.c
+++ b/hypervisor/arch/x86/guest/vm.c
@@ -366,6 +366,8 @@ int reset_vm(struct vm *vm)
 	}
 
 	vioapic_reset(vm->arch_vm.virt_ioapic);
+	destroy_secure_world(vm, false);
+	vm->sworld_control.flag.active = 0UL;
 
 	start_vm(vm);
 	return 0;
-- 
2.7.4
|
|
[PATCH] hv:Fix misra-c return value violations for vioapic APIs
Mingqiang Chi
From: Mingqiang Chi <mingqiang.chi@...>
--Changed 4 APIs to void type:
   vioapic_set_irqstate
   vioapic_assert_irq
   vioapic_deassert_irq
   vioapic_pulse_irq

Signed-off-by: Mingqiang Chi <mingqiang.chi@...>
---
 hypervisor/common/hypercall.c               |  6 +++---
 hypervisor/dm/vioapic.c                     | 20 +++++++++-----------
 hypervisor/include/arch/x86/guest/vioapic.h |  6 +++---
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/hypervisor/common/hypercall.c b/hypervisor/common/hypercall.c
index f8efb35..110b3b1 100644
--- a/hypervisor/common/hypercall.c
+++ b/hypervisor/common/hypercall.c
@@ -109,13 +109,13 @@ handle_vioapic_irqline(struct vm *vm, uint32_t irq, enum irq_mode mode)
 	switch (mode) {
 	case IRQ_ASSERT:
-		ret = vioapic_assert_irq(vm, irq);
+		vioapic_assert_irq(vm, irq);
 		break;
 	case IRQ_DEASSERT:
-		ret = vioapic_deassert_irq(vm, irq);
+		vioapic_deassert_irq(vm, irq);
 		break;
 	case IRQ_PULSE:
-		ret = vioapic_pulse_irq(vm, irq);
+		vioapic_pulse_irq(vm, irq);
 		break;
 	default:
 		/*
diff --git a/hypervisor/dm/vioapic.c b/hypervisor/dm/vioapic.c
index 4692094..3724be0 100644
--- a/hypervisor/dm/vioapic.c
+++ b/hypervisor/dm/vioapic.c
@@ -150,14 +150,15 @@ enum irqstate {
 	IRQSTATE_PULSE
 };
 
-static int
+static void
 vioapic_set_irqstate(struct vm *vm, uint32_t irq, enum irqstate irqstate)
 {
 	struct vioapic *vioapic;
 	uint32_t pin = irq;
 
 	if (pin >= vioapic_pincount(vm)) {
-		return -EINVAL;
+		pr_err("invalid vioapic pin number %hhu", pin);
+		return;
 	}
 
 	vioapic = vm_ioapic(vm);
@@ -178,26 +179,23 @@ vioapic_set_irqstate(struct vm *vm, uint32_t irq, enum irqstate irqstate)
 		panic("vioapic_set_irqstate: invalid irqstate %d", irqstate);
 	}
 	VIOAPIC_UNLOCK(vioapic);
-
-	return 0;
 }
 
-int
+void
 vioapic_assert_irq(struct vm *vm, uint32_t irq)
 {
-	return vioapic_set_irqstate(vm, irq, IRQSTATE_ASSERT);
+	vioapic_set_irqstate(vm, irq, IRQSTATE_ASSERT);
 }
 
-int
-vioapic_deassert_irq(struct vm *vm, uint32_t irq)
+void vioapic_deassert_irq(struct vm *vm, uint32_t irq)
 {
-	return vioapic_set_irqstate(vm, irq, IRQSTATE_DEASSERT);
+	vioapic_set_irqstate(vm, irq, IRQSTATE_DEASSERT);
 }
 
-int
+void
 vioapic_pulse_irq(struct vm *vm, uint32_t irq)
 {
-	return vioapic_set_irqstate(vm, irq, IRQSTATE_PULSE);
+	vioapic_set_irqstate(vm, irq, IRQSTATE_PULSE);
 }
 
 /*
diff --git a/hypervisor/include/arch/x86/guest/vioapic.h b/hypervisor/include/arch/x86/guest/vioapic.h
index 98ffc9e..29bb540 100644
--- a/hypervisor/include/arch/x86/guest/vioapic.h
+++ b/hypervisor/include/arch/x86/guest/vioapic.h
@@ -41,9 +41,9 @@ struct vioapic *vioapic_init(struct vm *vm);
 void vioapic_cleanup(struct vioapic *vioapic);
 void vioapic_reset(struct vioapic *vioapic);
 
-int vioapic_assert_irq(struct vm *vm, uint32_t irq);
-int vioapic_deassert_irq(struct vm *vm, uint32_t irq);
-int vioapic_pulse_irq(struct vm *vm, uint32_t irq);
+void vioapic_assert_irq(struct vm *vm, uint32_t irq);
+void vioapic_deassert_irq(struct vm *vm, uint32_t irq);
+void vioapic_pulse_irq(struct vm *vm, uint32_t irq);
 
 void vioapic_update_tmr(struct vcpu *vcpu);
 void vioapic_mmio_write(struct vm *vm, uint64_t gpa, uint32_t wval);
-- 
2.7.4
|
|
Re: [PATCH V3 0/3] Tools: fixed issues for S3 feature
Yan, Like
Reviewed-by: Yan, Like <like.yan@...>
On Mon, Aug 13, 2018 at 10:28:47PM +0800, Tao, Yuhong wrote:
With these changes, S3 feature works.
|
|
Re: [PATCH] drm/i915/gvt: fix display messy issue which is introduced by plane pvmmio
Got it. Will simplify the description when submitting the PR.
-----Original Message-----
From: He, Min
Sent: Monday, August 13, 2018 3:13 PM
To: acrn-dev@...
Cc: Jiang, Fei <fei.jiang@...>
Subject: RE: [acrn-dev] [PATCH] drm/i915/gvt: fix display messy issue which is introduced by plane pvmmio

LGTM. But I don't think we need to talk about the reason why it was missed in the previous patch.
Reviewed-by: He, Min <min.he@...>

-----Original Message-----
|
|
Re: [PATCH] drm/i915/gvt: fix display messy issue which is introduced by plane pvmmio
He, Min <min.he@...>
LGTM. But I don't think we need to talk about the reason why it was missed in the previous patch.

Reviewed-by: He, Min <min.he@...>
-----Original Message-----
|
|
Re: [PATCH] drm/i915/gvt: fix display messy issue which is introduced by plane pvmmio
Zhao, Yakui
-----Original Message-----

LGTM. Add: Reviewed-by: Zhao Yakui <yakui.zhao@...>
|
|
Re: [PATCH v2 2/3] HV: Use the pre-defined value to calculate tsc when cpuid(0x15) returns zero ecx
Zhao, Yakui
-----Original Message-----

As the pre-defined value is too hacky, this patch is removed. Instead, after CPUID 0x15 fails, it will try CPUID 0x16 to calculate the TSC frequency.
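For reference, the fallback described here follows the usual CPUID-based calibration; a minimal sketch (illustrative only, not the ACRN implementation):

#include <stdint.h>
#include <cpuid.h>

static uint64_t calibrate_tsc_khz(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.15H: EBX/EAX is the TSC/crystal clock ratio, ECX the crystal Hz. */
	if (__get_cpuid(0x15U, &eax, &ebx, &ecx, &edx) != 0 &&
			eax != 0U && ebx != 0U && ecx != 0U)
		return ((uint64_t)ecx * ebx) / eax / 1000U;

	/* CPUID.16H: EAX is the processor base frequency in MHz (0 if unknown). */
	if (__get_cpuid(0x16U, &eax, &ebx, &ecx, &edx) != 0 && eax != 0U)
		return (uint64_t)eax * 1000U;

	return 0U;	/* caller must fall back to another calibration method */
}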
|
|
Re: [PATCH v2 1/3] HV: Add the emulation of CPUID with 0x16 leaf
Zhao, Yakui
-----Original Message-----

After Eddie's suggestion to add the limit check (CPUID leaf >= 0x15) is followed, the mentioned check is not needed any more. I keep it so that init_vcpuid_entry() can still return the zero entry for out-of-range CPUID input. If you think the check is redundant, I can remove it.
|
|
[PATCH 2/2] vhm: add irqfd support for ACRN hypervisor service module
Shuo A Liu
irqfd which is based on eventfd, provides a pipe for injecting guest
interrupt through a file description writing operation. Each irqfd registered by userspace can map a interrupt of the guest to eventfd, and a writing operation on one side of the eventfd will trigger the interrupt injection on vhm side. Signed-off-by: Shuo Liu <shuo.a.liu@...> --- drivers/char/vhm/vhm_dev.c | 11 ++ drivers/vhm/Makefile | 2 +- drivers/vhm/vhm_irqfd.c | 320 +++++++++++++++++++++++++++++++++++++ include/linux/vhm/vhm_eventfd.h | 6 + include/linux/vhm/vhm_ioctl_defs.h | 8 + 5 files changed, 346 insertions(+), 1 deletion(-) create mode 100644 drivers/vhm/vhm_irqfd.c diff --git a/drivers/char/vhm/vhm_dev.c b/drivers/char/vhm/vhm_dev.c index 0ffd3b9..9e0a6e4 100644 --- a/drivers/char/vhm/vhm_dev.c +++ b/drivers/char/vhm/vhm_dev.c @@ -235,6 +235,7 @@ static long vhm_dev_ioctl(struct file *filep, } acrn_ioeventfd_init(vm->vmid); + acrn_irqfd_init(vm->vmid); pr_info("vhm: VM %d created\n", created_vm.vmid); break; @@ -276,6 +277,7 @@ static long vhm_dev_ioctl(struct file *filep, case IC_DESTROY_VM: { acrn_ioeventfd_deinit(vm->vmid); + acrn_irqfd_deinit(vm->vmid); if (vm->trusty_host_gpa) deinit_trusty(vm); ret = hcall_destroy_vm(vm->vmid); @@ -634,6 +636,15 @@ static long vhm_dev_ioctl(struct file *filep, break; } + case IC_EVENT_IRQFD: { + struct acrn_irqfd args; + + if (copy_from_user(&args, (void *)ioctl_param, sizeof(args))) + return -EFAULT; + ret = acrn_irqfd(vm->vmid, &args); + break; + } + default: pr_warn("Unknown IOCTL 0x%x\n", ioctl_num); ret = 0; diff --git a/drivers/vhm/Makefile b/drivers/vhm/Makefile index a25c65d..67a2243 100644 --- a/drivers/vhm/Makefile +++ b/drivers/vhm/Makefile @@ -1 +1 @@ -obj-y += vhm_mm.o vhm_hugetlb.o vhm_ioreq.o vhm_vm_mngt.o vhm_msi.o vhm_hypercall.o vhm_ioeventfd.o +obj-y += vhm_mm.o vhm_hugetlb.o vhm_ioreq.o vhm_vm_mngt.o vhm_msi.o vhm_hypercall.o vhm_ioeventfd.o vhm_irqfd.o diff --git a/drivers/vhm/vhm_irqfd.c b/drivers/vhm/vhm_irqfd.c new file mode 100644 index 0000000..9795893 --- /dev/null +++ b/drivers/vhm/vhm_irqfd.c @@ -0,0 +1,320 @@ +#include <linux/syscalls.h> +#include <linux/wait.h> +#include <linux/poll.h> +#include <linux/file.h> +#include <linux/list.h> +#include <linux/eventfd.h> +#include <linux/kernel.h> +#include <linux/async.h> +#include <linux/slab.h> + +#include <linux/vhm/acrn_common.h> +#include <linux/vhm/acrn_vhm_ioreq.h> +#include <linux/vhm/vhm_vm_mngt.h> +#include <linux/vhm/vhm_ioctl_defs.h> +#include <linux/vhm/vhm_hypercall.h> + +static LIST_HEAD(vhm_irqfd_clients); +static DEFINE_MUTEX(vhm_irqfds_mutex); +static ASYNC_DOMAIN_EXCLUSIVE(irqfd_domain); + +struct acrn_vhm_irqfd { + struct vhm_irqfd_info *info; + wait_queue_entry_t wait; + struct work_struct shutdown; + struct eventfd_ctx *eventfd; + struct list_head list; + poll_table pt; + struct acrn_msi_entry msi; +}; + +struct vhm_irqfd_info { + struct list_head list; + int refcnt; + uint16_t vmid; + struct workqueue_struct *wq; + + struct list_head irqfds; + spinlock_t irqfds_lock; +}; + +static struct vhm_irqfd_info *find_get_irqfd_info_by_vm(uint16_t vmid) +{ + struct vhm_irqfd_info *info = NULL; + + mutex_lock(&vhm_irqfds_mutex); + list_for_each_entry(info, &vhm_irqfd_clients, list) { + if (info->vmid == vmid) { + info->refcnt++; + mutex_unlock(&vhm_irqfds_mutex); + return info; + } + } + mutex_unlock(&vhm_irqfds_mutex); + return NULL; +} + +static void put_irqfd_info(struct vhm_irqfd_info *info) +{ + mutex_lock(&vhm_irqfds_mutex); + info->refcnt--; + if (info->refcnt == 0) { + list_del(&info->list); + kfree(info); + } + mutex_unlock(&vhm_irqfds_mutex); 
+} + +static void vhm_irqfd_inject(struct acrn_vhm_irqfd *irqfd) +{ + struct vhm_irqfd_info *info = irqfd->info; + + vhm_inject_msi(info->vmid, irqfd->msi.msi_addr, + irqfd->msi.msi_data); +} + +/* + * Try to find if the irqfd still in list info->irqfds + * + * assumes info->irqfds_lock is held + */ +static bool vhm_irqfd_is_active(struct vhm_irqfd_info *info, + struct acrn_vhm_irqfd *irqfd) +{ + struct acrn_vhm_irqfd *_irqfd; + + list_for_each_entry(_irqfd, &info->irqfds, list) + if (_irqfd == irqfd) + return true; + + return false; +} + +/* + * Remove irqfd and free it. + * + * assumes info->irqfds_lock is held + */ +static void vhm_irqfd_shutdown(struct acrn_vhm_irqfd *irqfd) +{ + u64 cnt; + + /* remove from wait queue */ + list_del_init(&irqfd->list); + eventfd_ctx_remove_wait_queue(irqfd->eventfd, &irqfd->wait, &cnt); + eventfd_ctx_put(irqfd->eventfd); + kfree(irqfd); +} + +static void vhm_irqfd_shutdown_work(struct work_struct *work) +{ + unsigned long flags; + struct acrn_vhm_irqfd *irqfd = + container_of(work, struct acrn_vhm_irqfd, shutdown); + struct vhm_irqfd_info *info = irqfd->info; + + spin_lock_irqsave(&info->irqfds_lock, flags); + if (vhm_irqfd_is_active(info, irqfd)) + vhm_irqfd_shutdown(irqfd); + spin_unlock_irqrestore(&info->irqfds_lock, flags); +} + +/* + * Called with wqh->lock held and interrupts disabled + */ +static int vhm_irqfd_wakeup(wait_queue_entry_t *wait, unsigned mode, + int sync, void *key) +{ + struct acrn_vhm_irqfd *irqfd = + container_of(wait, struct acrn_vhm_irqfd, wait); + unsigned long poll_bits = (unsigned long)key; + struct vhm_irqfd_info *info = irqfd->info; + + if (poll_bits & POLLIN) + /* An event has been signaled, inject an interrupt */ + vhm_irqfd_inject(irqfd); + + if (poll_bits & POLLHUP) + /* async close eventfd as shutdown need hold wqh->lock */ + queue_work(info->wq, &irqfd->shutdown); + + return 0; +} + +static void vhm_irqfd_poll_func(struct file *file, + wait_queue_head_t *wqh, poll_table *pt) +{ + struct acrn_vhm_irqfd *irqfd = + container_of(pt, struct acrn_vhm_irqfd, pt); + add_wait_queue(wqh, &irqfd->wait); +} + +static int acrn_irqfd_assign(struct vhm_irqfd_info *info, + struct acrn_irqfd *args) +{ + struct acrn_vhm_irqfd *irqfd, *tmp; + struct fd f; + struct eventfd_ctx *eventfd = NULL; + int ret = 0; + unsigned int events; + + irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL); + if (!irqfd) + return -ENOMEM; + + irqfd->info = info; + memcpy(&irqfd->msi, &args->msi, sizeof(args->msi)); + INIT_LIST_HEAD(&irqfd->list); + INIT_WORK(&irqfd->shutdown, vhm_irqfd_shutdown_work); + + f = fdget(args->fd); + if (!f.file) { + ret = -EBADF; + goto out; + } + + eventfd = eventfd_ctx_fileget(f.file); + if (IS_ERR(eventfd)) { + ret = PTR_ERR(eventfd); + goto fail; + } + + irqfd->eventfd = eventfd; + + /* + * Install our own custom wake-up handling so we are notified via + * a callback whenever someone signals the underlying eventfd + */ + init_waitqueue_func_entry(&irqfd->wait, vhm_irqfd_wakeup); + init_poll_funcptr(&irqfd->pt, vhm_irqfd_poll_func); + + spin_lock_irq(&info->irqfds_lock); + + list_for_each_entry(tmp, &info->irqfds, list) { + if (irqfd->eventfd != tmp->eventfd) + continue; + /* This fd is used for another irq already. 
*/ + ret = -EBUSY; + spin_unlock_irq(&info->irqfds_lock); + goto fail; + } + list_add_tail(&irqfd->list, &info->irqfds); + + spin_unlock_irq(&info->irqfds_lock); + + /* Check the pending event in this stage */ + events = f.file->f_op->poll(f.file, &irqfd->pt); + + if (events & POLLIN) + vhm_irqfd_inject(irqfd); + + fdput(f); + + return 0; +fail: + if (eventfd && !IS_ERR(eventfd)) + eventfd_ctx_put(eventfd); + + fdput(f); +out: + kfree(irqfd); + return ret; +} + +static int acrn_irqfd_deassign(struct vhm_irqfd_info *info, + struct acrn_irqfd *args) +{ + struct acrn_vhm_irqfd *irqfd, *tmp; + struct eventfd_ctx *eventfd; + + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) + return PTR_ERR(eventfd); + + spin_lock_irq(&info->irqfds_lock); + + list_for_each_entry_safe(irqfd, tmp, &info->irqfds, list) { + if (irqfd->eventfd == eventfd) { + vhm_irqfd_shutdown(irqfd); + break; + } + } + + spin_unlock_irq(&info->irqfds_lock); + eventfd_ctx_put(eventfd); + + return 0; +} + +int acrn_irqfd(uint16_t vmid, struct acrn_irqfd *args) +{ + struct vhm_irqfd_info *info; + int ret; + + info = find_get_irqfd_info_by_vm(vmid); + if (!info) + return -ENOENT; + + if (args->flags & ACRN_IRQFD_FLAG_DEASSIGN) + ret = acrn_irqfd_deassign(info, args); + else + ret = acrn_irqfd_assign(info, args); + + put_irqfd_info(info); + return ret; +} + +int acrn_irqfd_init(uint16_t vmid) +{ + struct vhm_irqfd_info *info; + + info = find_get_irqfd_info_by_vm(vmid); + if (info) { + put_irqfd_info(info); + return -EEXIST; + } + + info = kzalloc(sizeof(struct vhm_irqfd_info), GFP_KERNEL); + if (!info) { + pr_err("ACRN vhm irqfd init fail!\n"); + return -ENOMEM; + } + info->vmid = vmid; + info->refcnt = 1; + INIT_LIST_HEAD(&info->irqfds); + spin_lock_init(&info->irqfds_lock); + + info->wq = alloc_workqueue("acrn_irqfd-%d", 0, 0, vmid); + if (!info->wq) { + kfree(info); + return -ENOMEM; + } + + mutex_lock(&vhm_irqfds_mutex); + list_add(&info->list, &vhm_irqfd_clients); + mutex_unlock(&vhm_irqfds_mutex); + + pr_info("ACRN vhm irqfd init done!\n"); + return 0; +} + +void acrn_irqfd_deinit(uint16_t vmid) +{ + struct acrn_vhm_irqfd *irqfd, *tmp; + struct vhm_irqfd_info *info; + + info = find_get_irqfd_info_by_vm(vmid); + if (!info) + return; + put_irqfd_info(info); + + destroy_workqueue(info->wq); + + spin_lock_irq(&info->irqfds_lock); + list_for_each_entry_safe(irqfd, tmp, &info->irqfds, list) + vhm_irqfd_shutdown(irqfd); + spin_unlock_irq(&info->irqfds_lock); + + /* put one more to release it */ + put_irqfd_info(info); +} diff --git a/include/linux/vhm/vhm_eventfd.h b/include/linux/vhm/vhm_eventfd.h index 3c73194..7ee5843 100644 --- a/include/linux/vhm/vhm_eventfd.h +++ b/include/linux/vhm/vhm_eventfd.h @@ -7,4 +7,10 @@ int acrn_ioeventfd_init(int vmid); int acrn_ioeventfd(int vmid, struct acrn_ioeventfd *args); void acrn_ioeventfd_deinit(int vmid); +/* irqfd APIs */ +struct acrn_irqfd; +int acrn_irqfd_init(uint16_t vmid); +int acrn_irqfd(uint16_t vmid, struct acrn_irqfd *args); +void acrn_irqfd_deinit(uint16_t vmid); + #endif diff --git a/include/linux/vhm/vhm_ioctl_defs.h b/include/linux/vhm/vhm_ioctl_defs.h index ac62f1a..3cd0a00 100644 --- a/include/linux/vhm/vhm_ioctl_defs.h +++ b/include/linux/vhm/vhm_ioctl_defs.h @@ -111,6 +111,7 @@ /* VHM eventfd */ #define IC_ID_EVENT_BASE 0x70UL #define IC_EVENT_IOEVENTFD _IC_ID(IC_ID, IC_ID_EVENT_BASE + 0x00) +#define IC_EVENT_IRQFD _IC_ID(IC_ID, IC_ID_EVENT_BASE + 0x01) /** * struct vm_memseg - memory segment info for guest @@ -228,4 +229,11 @@ struct acrn_ioeventfd { uint32_t 
len; uint64_t data; }; + +#define ACRN_IRQFD_FLAG_DEASSIGN 0x01 +struct acrn_irqfd { + int32_t fd; + uint32_t flags; + struct acrn_msi_entry msi; +}; #endif /* __VHM_IOCTL_DEFS_H__ */ -- 2.8.3
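As a usage illustration for the irqfd patch above (not part of the series): a userspace VMM creates an eventfd, registers it with the VHM device through IC_EVENT_IRQFD, and later writes the eventfd to inject the mapped MSI. The sketch assumes the VHM device node is /dev/acrn_vhm and that the UAPI definitions from this series (IC_EVENT_IRQFD, struct acrn_irqfd, struct acrn_msi_entry) are visible to userspace; the MSI address/data values are illustrative and error handling is omitted.

#include <fcntl.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhm/vhm_ioctl_defs.h>	/* IC_EVENT_IRQFD, struct acrn_irqfd */

int main(void)
{
	int vhm_fd = open("/dev/acrn_vhm", O_RDWR);	/* assumed device node */
	int efd = eventfd(0, 0);			/* one side of the "pipe" */
	struct acrn_irqfd args = { .fd = efd, .flags = 0 };
	uint64_t one = 1;

	args.msi.msi_addr = 0xfee00000UL;	/* example MSI address (illustrative) */
	args.msi.msi_data = 0x30;		/* example MSI data/vector (illustrative) */

	/* Map the eventfd to the guest MSI; the VHM driver polls the eventfd. */
	ioctl(vhm_fd, IC_EVENT_IRQFD, &args);

	/* Each write on efd now triggers the MSI injection on the VHM side. */
	write(efd, &one, sizeof(one));

	close(efd);
	close(vhm_fd);
	return 0;
}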
|
|
[PATCH 1/2] vhm: add ioeventfd support for ACRN hypervisor service module
Shuo A Liu
ioeventfd which is based on eventfd, intends to glue vhm module and
other modules who are interested in guest IOs. Each ioeventfd registered by userspace can map a PIO/MMIO range of the guest to eventfd, and response to signal the eventfd when get the in-range IO write from guest. Then the other side of eventfd can be notified to process the IO request. Signed-off-by: Shuo Liu <shuo.a.liu@...> --- drivers/char/vhm/vhm_dev.c | 15 ++ drivers/vhm/Makefile | 2 +- drivers/vhm/vhm_ioeventfd.c | 412 +++++++++++++++++++++++++++++++++++++ include/linux/vhm/vhm_eventfd.h | 10 + include/linux/vhm/vhm_ioctl_defs.h | 14 ++ 5 files changed, 452 insertions(+), 1 deletion(-) create mode 100644 drivers/vhm/vhm_ioeventfd.c create mode 100644 include/linux/vhm/vhm_eventfd.h diff --git a/drivers/char/vhm/vhm_dev.c b/drivers/char/vhm/vhm_dev.c index 34b498b..0ffd3b9 100644 --- a/drivers/char/vhm/vhm_dev.c +++ b/drivers/char/vhm/vhm_dev.c @@ -83,6 +83,7 @@ #include <linux/vhm/acrn_vhm_mm.h> #include <linux/vhm/vhm_vm_mngt.h> #include <linux/vhm/vhm_hypercall.h> +#include <linux/vhm/vhm_eventfd.h> #include <asm/hypervisor.h> @@ -233,6 +234,8 @@ static long vhm_dev_ioctl(struct file *filep, goto ioreq_buf_fail; } + acrn_ioeventfd_init(vm->vmid); + pr_info("vhm: VM %d created\n", created_vm.vmid); break; ioreq_buf_fail: @@ -249,6 +252,7 @@ static long vhm_dev_ioctl(struct file *filep, pr_err("vhm: failed to start VM %ld!\n", vm->vmid); return -EFAULT; } + break; } @@ -271,6 +275,7 @@ static long vhm_dev_ioctl(struct file *filep, } case IC_DESTROY_VM: { + acrn_ioeventfd_deinit(vm->vmid); if (vm->trusty_host_gpa) deinit_trusty(vm); ret = hcall_destroy_vm(vm->vmid); @@ -279,6 +284,7 @@ static long vhm_dev_ioctl(struct file *filep, return -EFAULT; } vm->vmid = ACRN_INVALID_VMID; + break; } @@ -619,6 +625,15 @@ static long vhm_dev_ioctl(struct file *filep, break; } + case IC_EVENT_IOEVENTFD: { + struct acrn_ioeventfd args; + + if (copy_from_user(&args, (void *)ioctl_param, sizeof(args))) + return -EFAULT; + ret = acrn_ioeventfd(vm->vmid, &args); + break; + } + default: pr_warn("Unknown IOCTL 0x%x\n", ioctl_num); ret = 0; diff --git a/drivers/vhm/Makefile b/drivers/vhm/Makefile index 23f17ae..a25c65d 100644 --- a/drivers/vhm/Makefile +++ b/drivers/vhm/Makefile @@ -1 +1 @@ -obj-y += vhm_mm.o vhm_hugetlb.o vhm_ioreq.o vhm_vm_mngt.o vhm_msi.o vhm_hypercall.o +obj-y += vhm_mm.o vhm_hugetlb.o vhm_ioreq.o vhm_vm_mngt.o vhm_msi.o vhm_hypercall.o vhm_ioeventfd.o diff --git a/drivers/vhm/vhm_ioeventfd.c b/drivers/vhm/vhm_ioeventfd.c new file mode 100644 index 0000000..acde4f7 --- /dev/null +++ b/drivers/vhm/vhm_ioeventfd.c @@ -0,0 +1,412 @@ +#include <linux/syscalls.h> +#include <linux/wait.h> +#include <linux/poll.h> +#include <linux/file.h> +#include <linux/list.h> +#include <linux/eventfd.h> +#include <linux/kernel.h> +#include <linux/slab.h> + +#include <linux/vhm/acrn_common.h> +#include <linux/vhm/acrn_vhm_ioreq.h> +#include <linux/vhm/vhm_ioctl_defs.h> +#include <linux/vhm/vhm_hypercall.h> + +static LIST_HEAD(vhm_ioeventfd_clients); +static DEFINE_MUTEX(vhm_ioeventfds_mutex); + +struct acrn_vhm_ioeventfd { + struct list_head list; + struct eventfd_ctx *eventfd; + u64 addr; + u64 data; + int length; + int type; + bool wildcard; +}; + +struct vhm_ioeventfd_info { + struct list_head list; + int refcnt; + uint16_t vmid; + int vhm_client_id; + int vcpu_num; + struct vhm_request *req_buf; + + struct list_head ioeventfds; + struct mutex ioeventfds_lock; +}; + +static struct vhm_ioeventfd_info *find_get_ioeventfd_info_by_client( + int client_id) +{ + struct vhm_ioeventfd_info *info = NULL; + + 
mutex_lock(&vhm_ioeventfds_mutex); + list_for_each_entry(info, &vhm_ioeventfd_clients, list) { + if (info->vhm_client_id == client_id) { + info->refcnt++; + mutex_unlock(&vhm_ioeventfds_mutex); + return info; + } + } + mutex_unlock(&vhm_ioeventfds_mutex); + return NULL; +} + +static struct vhm_ioeventfd_info *find_get_ioeventfd_info_by_vm(uint16_t vmid) +{ + struct vhm_ioeventfd_info *info = NULL; + + mutex_lock(&vhm_ioeventfds_mutex); + list_for_each_entry(info, &vhm_ioeventfd_clients, list) { + if (info->vmid == vmid) { + info->refcnt++; + mutex_unlock(&vhm_ioeventfds_mutex); + return info; + } + } + mutex_unlock(&vhm_ioeventfds_mutex); + return NULL; +} + +static void put_ioeventfd_info(struct vhm_ioeventfd_info *info) +{ + mutex_lock(&vhm_ioeventfds_mutex); + info->refcnt--; + if (info->refcnt == 0) { + list_del(&info->list); + acrn_ioreq_destroy_client(info->vhm_client_id); + kfree(info); + } + mutex_unlock(&vhm_ioeventfds_mutex); +} + +/* assumes info->ioeventfds_lock held */ +static void vhm_ioeventfd_shutdown(struct acrn_vhm_ioeventfd *p) +{ + eventfd_ctx_put(p->eventfd); + list_del(&p->list); + kfree(p); +} + +static inline int ioreq_type_from_flags(int flags) +{ + return flags & ACRN_IOEVENTFD_FLAG_PIO ? + REQ_PORTIO : REQ_MMIO; +} + +/* assumes info->ioeventfds_lock held */ +static bool vhm_ioeventfd_is_duplicated(struct vhm_ioeventfd_info *info, + struct acrn_vhm_ioeventfd *ioeventfd) +{ + struct acrn_vhm_ioeventfd *p; + + list_for_each_entry(p, &info->ioeventfds, list) + if (p->addr == ioeventfd->addr && + p->type == ioeventfd->type && + (!p->length || !ioeventfd->length || + (p->length == ioeventfd->length && + (p->wildcard || ioeventfd->wildcard || + p->data == ioeventfd->data)))) + return true; + + return false; +} + +static int acrn_assign_ioeventfd(struct vhm_ioeventfd_info *info, + struct acrn_ioeventfd *args) +{ + struct eventfd_ctx *eventfd; + struct acrn_vhm_ioeventfd *p; + int ret = -ENOENT; + + /* check for range overflow */ + if (args->addr + args->len < args->addr) + return -EINVAL; + + /* no length is not supported */ + if (!args->len) + return -EINVAL; + + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) + return PTR_ERR(eventfd); + + p = kzalloc(sizeof(*p), GFP_KERNEL); + if (!p) { + ret = -ENOMEM; + goto fail; + } + + INIT_LIST_HEAD(&p->list); + p->addr = args->addr; + p->length = args->len; + p->eventfd = eventfd; + p->type = ioreq_type_from_flags(args->flags); + + /* If datamatch enabled, we compare the data + * otherwise this is a wildcard + */ + if (args->flags & ACRN_IOEVENTFD_FLAG_DATAMATCH) + p->data = args->data; + else + p->wildcard = true; + + mutex_lock(&info->ioeventfds_lock); + + /* Verify that there isn't a match already */ + if (vhm_ioeventfd_is_duplicated(info, p)) { + ret = -EEXIST; + goto unlock_fail; + } + + /* register the IO range into vhm */ + ret = acrn_ioreq_add_iorange(info->vhm_client_id, p->type, + p->addr, p->addr + p->length - 1); + if (ret < 0) + goto unlock_fail; + + list_add_tail(&p->list, &info->ioeventfds); + mutex_unlock(&info->ioeventfds_lock); + + return 0; + +unlock_fail: + mutex_unlock(&info->ioeventfds_lock); +fail: + kfree(p); + eventfd_ctx_put(eventfd); + return ret; +} + +static int acrn_deassign_ioeventfd(struct vhm_ioeventfd_info *info, + struct acrn_ioeventfd *args) +{ + struct acrn_vhm_ioeventfd *p, *tmp; + struct eventfd_ctx *eventfd; + int ret = -ENOENT; + + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) + return PTR_ERR(eventfd); + + mutex_lock(&info->ioeventfds_lock); + + 
list_for_each_entry_safe(p, tmp, &info->ioeventfds, list) { + bool wildcard = !(args->flags & ACRN_IOEVENTFD_FLAG_DATAMATCH); + + if (p->eventfd != eventfd || + p->addr != args->addr || + p->length != args->len || + p->type != ioreq_type_from_flags(args->flags) || + p->wildcard != wildcard) + continue; + + if (!p->wildcard && p->data != args->data) + continue; + + ret = acrn_ioreq_del_iorange(info->vhm_client_id, p->type, + p->addr, p->addr + p->length - 1); + if (ret) + break; + vhm_ioeventfd_shutdown(p); + ret = 0; + break; + } + + mutex_unlock(&info->ioeventfds_lock); + + eventfd_ctx_put(eventfd); + + return ret; +} + +int acrn_ioeventfd(uint16_t vmid, struct acrn_ioeventfd *args) +{ + struct vhm_ioeventfd_info *info = NULL; + int ret; + + info = find_get_ioeventfd_info_by_vm(vmid); + if (!info) + return -ENOENT; + + if (args->flags & ACRN_IOEVENTFD_FLAG_DEASSIGN) + ret = acrn_deassign_ioeventfd(info, args); + else + ret = acrn_assign_ioeventfd(info, args); + + put_ioeventfd_info(info); + return ret; +} + +static struct acrn_vhm_ioeventfd *vhm_ioeventfd_match( + struct vhm_ioeventfd_info *info, + u64 addr, u64 data, int length, int type) +{ + struct acrn_vhm_ioeventfd *p = NULL; + + list_for_each_entry(p, &info->ioeventfds, list) { + if (p->type == type && + p->addr == addr && + p->length == length && + (p->data == data || p->wildcard)) + return p; + } + + return NULL; +} + +static int acrn_ioeventfd_dispatch_ioreq(int client_id, + unsigned long *ioreqs_map) +{ + int ret = 0; + struct vhm_request *req; + struct acrn_vhm_ioeventfd *p; + struct vhm_ioeventfd_info *info; + u64 addr; + u64 val; + int size; + int vcpu; + + info = find_get_ioeventfd_info_by_client(client_id); + if (!info) + return -EINVAL; + + /* get req buf */ + if (!info->req_buf) { + info->req_buf = acrn_ioreq_get_reqbuf(info->vhm_client_id); + if (!info->req_buf) { + pr_err("Failed to get req_buf for client %d\n", + info->vhm_client_id); + put_ioeventfd_info(info); + return -EINVAL; + } + } + + while (1) { + vcpu = find_first_bit(ioreqs_map, info->vcpu_num); + if (vcpu == info->vcpu_num) + break; + req = &info->req_buf[vcpu]; + if (req->valid && + req->processed == REQ_STATE_PROCESSING && + req->client == client_id) { + if (req->type == REQ_MMIO) { + if (req->reqs.mmio_request.direction == + REQUEST_READ) { + ret = -EOPNOTSUPP; + break; + } + addr = req->reqs.mmio_request.address; + size = req->reqs.mmio_request.size; + val = req->reqs.mmio_request.value; + } else { + if (req->reqs.pio_request.direction == + REQUEST_READ) { + ret = -EOPNOTSUPP; + break; + } + addr = req->reqs.pio_request.address; + size = req->reqs.pio_request.size; + val = req->reqs.pio_request.value; + } + + mutex_lock(&info->ioeventfds_lock); + p = vhm_ioeventfd_match(info, addr, val, size, req->type); + if (p) { + eventfd_signal(p->eventfd, 1); + mutex_unlock(&info->ioeventfds_lock); + req->processed = REQ_STATE_SUCCESS; + acrn_ioreq_complete_request(client_id, vcpu); + ret = 0; + } else { + mutex_unlock(&info->ioeventfds_lock); + ret = -EOPNOTSUPP; + break; + } + } + } + + put_ioeventfd_info(info); + return ret; +} + +int acrn_ioeventfd_init(uint16_t vmid) +{ + int ret = 0; + char name[16]; + struct vm_info vm_info; + struct vhm_ioeventfd_info *info; + + info = find_get_ioeventfd_info_by_vm(vmid); + if (info) { + put_ioeventfd_info(info); + return -EEXIST; + } + + info = kzalloc(sizeof(struct vhm_ioeventfd_info), GFP_KERNEL); + if (!info) { + pr_err("ACRN vhm ioeventfd init fail!\n"); + return -ENOMEM; + } + snprintf(name, sizeof(name), 
"ioeventfd-%d", vmid); + info->vhm_client_id = acrn_ioreq_create_client(vmid, + acrn_ioeventfd_dispatch_ioreq, name); + if (info->vhm_client_id < 0) { + pr_err("Failed to create ioeventfd client for ioreq!\n"); + ret = -EINVAL; + goto fail; + } + + ret = vhm_get_vm_info(vmid, &vm_info); + if (ret < 0) { + pr_err("Failed to get vm info for vmid %d\n", vmid); + goto client_fail; + } + + info->vcpu_num = vm_info.max_vcpu; + + ret = acrn_ioreq_attach_client(info->vhm_client_id, 0); + if (ret < 0) { + pr_err("Failed to attach vhm client %d!\n", + info->vhm_client_id); + goto client_fail; + } + mutex_init(&info->ioeventfds_lock); + info->vmid = vmid; + info->refcnt = 1; + INIT_LIST_HEAD(&info->ioeventfds); + + mutex_lock(&vhm_ioeventfds_mutex); + list_add(&info->list, &vhm_ioeventfd_clients); + mutex_unlock(&vhm_ioeventfds_mutex); + + pr_info("ACRN vhm ioeventfd init done!\n"); + return 0; +client_fail: + acrn_ioreq_destroy_client(info->vhm_client_id); +fail: + kfree(info); + return ret; +} + +void acrn_ioeventfd_deinit(uint16_t vmid) +{ + struct acrn_vhm_ioeventfd *p, *tmp; + struct vhm_ioeventfd_info *info = NULL; + + info = find_get_ioeventfd_info_by_vm(vmid); + if (!info) + return; + + put_ioeventfd_info(info); + + mutex_lock(&info->ioeventfds_lock); + list_for_each_entry_safe(p, tmp, &info->ioeventfds, list) + vhm_ioeventfd_shutdown(p); + mutex_unlock(&info->ioeventfds_lock); + + /* put one more as we count it in finding */ + put_ioeventfd_info(info); +} diff --git a/include/linux/vhm/vhm_eventfd.h b/include/linux/vhm/vhm_eventfd.h new file mode 100644 index 0000000..3c73194 --- /dev/null +++ b/include/linux/vhm/vhm_eventfd.h @@ -0,0 +1,10 @@ +#ifndef __ACRN_VHM_EVENTFD_H__ +#define __ACRN_VHM_EVENTFD_H__ + +/* ioeventfd APIs */ +struct acrn_ioeventfd; +int acrn_ioeventfd_init(int vmid); +int acrn_ioeventfd(int vmid, struct acrn_ioeventfd *args); +void acrn_ioeventfd_deinit(int vmid); + +#endif diff --git a/include/linux/vhm/vhm_ioctl_defs.h b/include/linux/vhm/vhm_ioctl_defs.h index 32e081a..ac62f1a 100644 --- a/include/linux/vhm/vhm_ioctl_defs.h +++ b/include/linux/vhm/vhm_ioctl_defs.h @@ -108,6 +108,10 @@ #define IC_PM_GET_CPU_STATE _IC_ID(IC_ID, IC_ID_PM_BASE + 0x00) #define IC_PM_SET_SSTATE_DATA _IC_ID(IC_ID, IC_ID_PM_BASE + 0x01) +/* VHM eventfd */ +#define IC_ID_EVENT_BASE 0x70UL +#define IC_EVENT_IOEVENTFD _IC_ID(IC_ID, IC_ID_EVENT_BASE + 0x00) + /** * struct vm_memseg - memory segment info for guest * @@ -214,4 +218,14 @@ struct api_version { uint32_t minor_version; }; +#define ACRN_IOEVENTFD_FLAG_PIO 0x01 +#define ACRN_IOEVENTFD_FLAG_DATAMATCH 0x02 +#define ACRN_IOEVENTFD_FLAG_DEASSIGN 0x04 +struct acrn_ioeventfd { + int32_t fd; + uint32_t flags; + uint64_t addr; + uint32_t len; + uint64_t data; +}; #endif /* __VHM_IOCTL_DEFS_H__ */ -- 2.8.3
|
|
[PATCH 0/2] Add ioeventfd and irqfd support for vhm
Shuo A Liu
To support vhost, we need to implement call-eventfd and notify-eventfd in the
kernel for it. For ACRN vhm, we implement ioeventfd and irqfd for that.

ioeventfd, which is based on eventfd, intends to glue the vhm module and other
modules that are interested in guest IOs. Each ioeventfd registered by
userspace can map a PIO/MMIO range of the guest to an eventfd, and the eventfd
is signaled when an in-range IO write is received from the guest. The other
side of the eventfd can then be notified to process the IO request.

irqfd is based on eventfd too, and provides a pipe for injecting a guest
interrupt through a file descriptor write operation. Each irqfd registered by
userspace can map an interrupt of the guest to an eventfd, and a write on one
side of the eventfd will trigger the interrupt injection on the vhm side.

Shuo Liu (2):
  vhm: add ioeventfd support for ACRN hypervisor service module
  vhm: add irqfd support for ACRN hypervisor service module

 drivers/char/vhm/vhm_dev.c         |  26 +++
 drivers/vhm/Makefile               |   2 +-
 drivers/vhm/vhm_ioeventfd.c        | 412 +++++++++++++++++++++++++++++++++++++
 drivers/vhm/vhm_irqfd.c            | 320 ++++++++++++++++++++++++++++
 include/linux/vhm/vhm_eventfd.h    |  16 ++
 include/linux/vhm/vhm_ioctl_defs.h |  22 ++
 6 files changed, 797 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vhm/vhm_ioeventfd.c
 create mode 100644 drivers/vhm/vhm_irqfd.c
 create mode 100644 include/linux/vhm/vhm_eventfd.h

-- 
2.8.3
|
|
[PATCH] drm/i915/gvt: fix display messy issue which is introduced by plane pvmmio
For the plane pvmmio optimization, we need to cache all plane-related
registers. In the previous commit 9c3e8b1a3f15 ("drm/i915/gvt: handling
pvmmio update of plane registers in GVT-g"), PLANE_AUX_DIST and
PLANE_AUX_OFFSET are trapped via _REG_701C0 and _REG_701C4, so they were
missed when only PLANE_AUX_DIST and PLANE_AUX_OFFSET were checked.

Signed-off-by: Fei Jiang <fei.jiang@...>
---
 drivers/gpu/drm/i915/gvt/handlers.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index c33c771..6001127 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -777,6 +777,12 @@ static void pvmmio_update_plane_register(struct intel_vgpu *vgpu,
 	skl_plane_mmio_write(vgpu,
 		i915_mmio_reg_offset(PLANE_SIZE(pipe, plane)),
 		&pv_plane->plane_size, 4);
+	skl_plane_mmio_write(vgpu,
+		i915_mmio_reg_offset(PLANE_AUX_DIST(pipe, plane)),
+		&pv_plane.plane_aux_dist, 4);
+	skl_plane_mmio_write(vgpu,
+		i915_mmio_reg_offset(PLANE_AUX_OFFSET(pipe, plane)),
+		&pv_plane.plane_aux_offset, 4);
 
 	if (pv_plane->flags & PLANE_SCALER_BIT) {
 		skl_ps_mmio_write(vgpu,
-- 
2.7.4
|
|
[PATCH V3 3/3] dm: monitor: bugfix: update wakeup reason before calling resume() callback
In handle_resume(), wakeup_reason is updated before calling
ops->ops->resume(), because ops->ops->resume() needs to know the latest
wakeup reason.

Signed-off-by: Tao Yuhong <yuhong.tao@...>
---
 devicemodel/core/monitor.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/devicemodel/core/monitor.c b/devicemodel/core/monitor.c
index c5b41f1..4b97c5a 100644
--- a/devicemodel/core/monitor.c
+++ b/devicemodel/core/monitor.c
@@ -168,6 +168,8 @@ static void handle_resume(struct mngr_msg *msg, int client_fd, void *param)
 	ack.msgid = msg->msgid;
 	ack.timestamp = msg->timestamp;
 
+	wakeup_reason = msg->data.reason;
+
 	LIST_FOREACH(ops, &vm_ops_head, list) {
 		if (ops->ops->resume) {
 			ret += ops->ops->resume(ops->arg);
@@ -181,8 +183,6 @@ static void handle_resume(struct mngr_msg *msg, int client_fd, void *param)
 	} else
 		ack.data.err = ret;
 
-	wakeup_reason = msg->data.reason;
-
 	mngr_send_msg(client_fd, &ack, NULL, ACK_TIMEOUT);
 }
-- 
2.7.4
|
|
[PATCH V3 2/3] tools: acrnd: Fixed get_sos_wakeup_reason()
get_sos_wakeup_reason() runs into the error branch without any error, so
no wakeup reason will be returned.

Signed-off-by: Tao Yuhong <yuhong.tao@...>
---
 tools/acrn-manager/acrnd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/acrn-manager/acrnd.c b/tools/acrn-manager/acrnd.c
index 3b47bdd..f1eb338 100644
--- a/tools/acrn-manager/acrnd.c
+++ b/tools/acrn-manager/acrnd.c
@@ -284,7 +284,7 @@ unsigned get_sos_wakeup_reason(void)
 	req.msgid = WAKEUP_REASON;
 	req.timestamp = time(NULL);
 
-	if (mngr_send_msg(client_fd, &req, &ack, DEFAULT_TIMEOUT))
+	if (mngr_send_msg(client_fd, &req, &ack, DEFAULT_TIMEOUT) <= 0)
 		fprintf(stderr, "Failed to get wakeup_reason from SOS, err(%d)\n",
 			ret);
 	else
 		ret = ack.data.reason;
-- 
2.7.4
|
|