Re: [PATCH 1/2] ACRN:DM:VGPU: Fix the incorrect offset for EDID detailed_timing_block
Yu Wang
Acked-by: Wang, Yu1 <yu1.wang@...>
On Fri, May 06, 2022 at 11:39:24AM +0800, Zhao Yakui wrote:
The EDID uses bytes 54-125 to define the four detailed_timing_blocks.
Re: [PATCH] hv: fix the issue of "right shift count"
-----Original Message-----

Didn't quite understand. BTW, the current logic of powerof2_roundup() is very ugly, and I am wondering why we need the following logic:

#if (CONFIG_MAX_PT_IRQ_ENTRIES <= 256)
#define MAX_IR_ENTRIES 256
#else
#define MAX_IR_ENTRIES powerof2_roundup(CONFIG_MAX_PT_IRQ_ENTRIES)
#endif

Can we assume CONFIG_MAX_PT_IRQ_ENTRIES is a power of 2? If yes, we can define both PT_IRQ_ENTRY_BITS and MAX_PT_IRQ_ENTRIES = (1U << PT_IRQ_ENTRY_BITS), and refuse to boot if it is not. This way, it can be simplified.
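The reviewer's suggestion — require the Kconfig value to be a power of two and derive it from a bit width — can be sketched as follows (the names mirror the proposal and are illustrative, not actual ACRN symbols):

```python
def is_power_of_two(x: int) -> bool:
    # A positive power of two has exactly one bit set, so clearing the
    # lowest set bit (x & (x - 1)) must leave zero.
    return x > 0 and (x & (x - 1)) == 0

# Deriving the count from a bit width makes it a power of two by
# construction, which is what the review proposes; a non-power-of-two
# config value would instead be rejected at boot.
PT_IRQ_ENTRY_BITS = 8                        # illustrative value
MAX_PT_IRQ_ENTRIES = 1 << PT_IRQ_ENTRY_BITS

print(is_power_of_two(MAX_PT_IRQ_ENTRIES))  # -> True
print(is_power_of_two(300))                 # -> False
```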
[PATCH v2 2/2] misc: modify the generate logic of vcat config
chenli.wei
The current code generates the vCAT info by counting the CLOS masks, so the scenario XML has to set them multiple times. That is not an ideal solution, so add a "virtual_cat_number" element to set the number of virtual CLOS IDs and get the max pCBM directly.

Signed-off-by: Chenli Wei <chenli.wei@...>
---
 misc/config_tools/schema/config.xsd           |  5 ++++
 .../xforms/vm_configurations.c.xsl            | 26 ++++++++++++-------
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/misc/config_tools/schema/config.xsd b/misc/config_tools/schema/config.xsd
index 68cc575c6..4182586cf 100644
--- a/misc/config_tools/schema/config.xsd
+++ b/misc/config_tools/schema/config.xsd
@@ -336,6 +336,11 @@ Refer to :ref:`vuart_config` for detailed vUART settings.</xs:documentation>
         <xs:documentation>Enable nested virtualization for KVM.</xs:documentation>
       </xs:annotation>
     </xs:element>
+    <xs:element name="virtual_cat_number" type="xs:integer" default="0" minOccurs="0">
+      <xs:annotation acrn:title="Maximum virtual CLOS" acrn:applicable-vms="pre-launched, post-launched" acrn:views="advanced">
+        <xs:documentation>Max number of virtual CLOS MASK</xs:documentation>
+      </xs:annotation>
+    </xs:element>
     <xs:element name="virtual_cat_support" type="Boolean" default="n" minOccurs="0">
       <xs:annotation acrn:title="VM Virtual Cache Allocation Tech" acrn:applicable-vms="pre-launched, post-launched" acrn:views="advanced">
         <xs:documentation>Enable virtualization of the Cache Allocation Technology (CAT) feature in RDT. CAT enables you to allocate cache to VMs, providing isolation to avoid performance interference from other VMs.</xs:documentation>
diff --git a/misc/config_tools/xforms/vm_configurations.c.xsl b/misc/config_tools/xforms/vm_configurations.c.xsl
index b16565961..b2741741d 100644
--- a/misc/config_tools/xforms/vm_configurations.c.xsl
+++ b/misc/config_tools/xforms/vm_configurations.c.xsl
@@ -181,18 +181,24 @@
     <xsl:value-of select="acrn:initializer('pclosids', concat('vm', ../@id, '_vcpu_clos'))" />
     <xsl:variable name="vm_id" select="../@id" />
-    <xsl:value-of select="acrn:initializer('num_pclosids', concat(count(//allocation-data/acrn-config/vm[@id=$vm_id]/clos/vcpu_clos), 'U'))" />
-    <xsl:if test="acrn:is-vcat-enabled() and ../virtual_cat_support[text() = 'y']">
-      <xsl:variable name="rdt_res_str" select="acrn:get-normalized-closinfo-rdt-res-str()" />
+    <xsl:variable name="vm_name" select="../name/text()" />
+    <xsl:choose>
+      <xsl:when test="acrn:is-vcat-enabled() and ../virtual_cat_support[text() = 'y']">
+        <xsl:value-of select="acrn:initializer('num_pclosids', concat(count(//vm[@id=$vm_id]/virtual_cat_number), 'U'))" />
+        <xsl:variable name="rdt_res_str" select="acrn:get-normalized-closinfo-rdt-res-str()" />
-      <xsl:if test="contains($rdt_res_str, 'L2')">
-        <xsl:value-of select="acrn:initializer('max_l2_pcbm', concat(math:max(//allocation-data/acrn-config/vm[@id=$vm_id]/clos/vcpu_clos), 'U'))" />
-      </xsl:if>
+        <xsl:if test="contains($rdt_res_str, 'L2')">
+          <xsl:value-of select="acrn:initializer('max_l2_pcbm', //CACHE_ALLOCATION[CACHE_LEVEL='2']/POLICY[VM=$vm_name]/CLOS_MASK)" />
+        </xsl:if>
-      <xsl:if test="contains($rdt_res_str, 'L3')">
-        <xsl:value-of select="acrn:initializer('max_l3_pcbm', concat(math:max(//allocation-data/acrn-config/vm[@id=$vm_id]/clos/vcpu_clos), 'U'))" />
-      </xsl:if>
-    </xsl:if>
+        <xsl:if test="contains($rdt_res_str, 'L3')">
+          <xsl:value-of select="acrn:initializer('max_l3_pcbm', //CACHE_ALLOCATION[CACHE_LEVEL='3']/POLICY[VM=$vm_name]/CLOS_MASK)" />
+        </xsl:if>
+      </xsl:when>
+      <xsl:otherwise>
+        <xsl:value-of select="acrn:initializer('num_pclosids', concat(count(//allocation-data/acrn-config/vm[@id=$vm_id]/clos/vcpu_clos), 'U'))" />
+      </xsl:otherwise>
+    </xsl:choose>
     <xsl:value-of select="$endif" />
   </xsl:template>
-- 
2.17.1
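What the new `<xsl:choose>` decides can be sketched as a tiny function (the names are illustrative, not the config tool's actual API): a vCAT-enabled VM sizes its CLOS array from the user-set virtual CLOS number, while other VMs keep counting the allocated per-vCPU CLOS IDs.

```python
def num_pclosids(vcat_enabled: bool, virtual_cat_number: int,
                 vcpu_clos_count: int) -> int:
    # vCAT VM: the guest sees virtual_cat_number virtual CLOS IDs.
    # Non-vCAT VM: one physical CLOS ID per configured vcpu_clos entry.
    return virtual_cat_number if vcat_enabled else vcpu_clos_count

print(num_pclosids(True, 4, 2))   # -> 4
print(num_pclosids(False, 4, 2))  # -> 2
```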
[PATCH v2 1/2] misc: refine CLOS module for expansibility
chenli.wei
This patch refines the CLOS module in the following aspects:

1. Unify the CACHE_ID type to hex format
2. Rewrite policy merging with an RDT class
3. Modify the logic of generating the CPU mask

v1-->v2:
1. code format
2. add comment for RdtPolicy

Signed-off-by: Chenli Wei <chenli.wei@...>
---
 misc/config_tools/board_config/board_c.py   |   8 +-
 misc/config_tools/schema/types.xsd          |   2 +-
 misc/config_tools/static_allocators/clos.py | 233 ++++++++++++--------
 3 files changed, 144 insertions(+), 99 deletions(-)

diff --git a/misc/config_tools/board_config/board_c.py b/misc/config_tools/board_config/board_c.py
index 8082c4387..4be3ea4ec 100644
--- a/misc/config_tools/board_config/board_c.py
+++ b/misc/config_tools/board_config/board_c.py
@@ -173,7 +173,7 @@ def gen_rdt_str(cache, config):
         return err_dic

     cdp_enable = get_cdp_enabled()
-    cat_mask_list = get_mask_list(cache_level, int(cache_id, 16))
+    cat_mask_list = get_mask_list(cache_level, cache_id)
     if len(cat_mask_list) > int(clos_number):
         err_dic['board config: Failed to generate board.c'] = "CLOS Mask Number too bigger then the supported of L2/L3 cache"
         return err_dic;
@@ -207,7 +207,7 @@ def gen_rdt_str(cache, config):
         cpu_mask = 0
         for processor in processor_list:
-            core_id = common.get_node(f"//core[@id = '{processor}']/thread/cpu_id/text()", board_etree)
+            core_id = common.get_node(f"//thread[apic_id = '{processor}']/cpu_id/text()", board_etree)
             if core_id is None:
                 continue
             else:
@@ -240,7 +240,7 @@ def gen_clos_array(cache_list, config):
         clos_number = common.get_node(f"./capability/clos_number/text()", cache)
         if cache_level == "2":
-            cat_mask_list = get_mask_list(cache_level, int(cache_id, 16))
+            cat_mask_list = get_mask_list(cache_level, cache_id)
             array_size = len(cat_mask_list)
             print("union clos_config platform_l2_clos_array_{0}[{1}] = {{".format(int(cache_id, 16), clos_number), file=config)
@@ -250,7 +250,7 @@ def gen_clos_array(cache_list, config):
             print("};\n", file=config)
             res_present[RDT.L2.value] += 1
         elif cache_level == "3":
-            cat_mask_list = get_mask_list(cache_level, int(cache_id, 16))
+            cat_mask_list = get_mask_list(cache_level, cache_id)
             print("union clos_config platform_l3_clos_array_{0}[{1}] = {{".format(int(cache_id, 16), clos_number), file=config)
diff --git a/misc/config_tools/schema/types.xsd b/misc/config_tools/schema/types.xsd
index 36a4aae09..cec4b84a5 100644
--- a/misc/config_tools/schema/types.xsd
+++ b/misc/config_tools/schema/types.xsd
@@ -355,7 +355,7 @@ RDT, setting this option to ``y`` is ignored.</xs:documentation>
   <xs:complexType name="CacheAllocationType">
     <xs:sequence>
-      <xs:element name="CACHE_ID" type="xs:integer"/>
+      <xs:element name="CACHE_ID" type="HexFormat"/>
       <xs:element name="CACHE_LEVEL" type="xs:integer"/>
       <xs:element name="POLICY" type="CachePolicyType" minOccurs="1" maxOccurs="unbounded"/>
     </xs:sequence>
diff --git a/misc/config_tools/static_allocators/clos.py b/misc/config_tools/static_allocators/clos.py
index 1f0476422..80b59435d 100644
--- a/misc/config_tools/static_allocators/clos.py
+++ b/misc/config_tools/static_allocators/clos.py
@@ -12,114 +12,159 @@ import re
 from collections import defaultdict
 from itertools import combinations

-def create_clos_node(etree, vm_id, index_list):
-    allocation_vm_node = common.get_node(f"/acrn-config/vm[@id = '{vm_id}']", etree)
+class Identifier:
+    vm_name = ""
+    vcpu = ""
+    cache_type = ""
+
+    def __init__(self, vm_name, vcpu, cache_type):
+        self.vm_name = vm_name
+        self.vcpu = vcpu
+        self.cache_type = cache_type
+
+    def __eq__(self, other):
+        return (self.vm_name == other.vm_name) and (self.vcpu == other.vcpu) and (self.cache_type == other.cache_type)
+
+class RdtPolicy:
+
+# a dictionary to save the CLOS policy from user setting
+    policy_dict = {}
+
+# a list stored the vCPU or VM info
+    policy_owner = []
+
+# a list stored the L2 cache IDs
+    cache2_id_list = []
+
+#L2 cache have more then one section, this function find which one have been set
+    def find_cache2_id(self, mask):
+        for cache2 in self.cache2_id_list:
+            if mask[cache2] != None:
+                return cache2
+        return None
+
+    def __init__(self, policy_dict, owner):
+        self.policy_dict = policy_dict
+        self.policy_owner = [owner]
+
+#check whether the src could be merged, if yes, add the src owner to policy_owner list and return True
+    def merge_policy(self, src):
+        if self.policy_dict["l3"] == src.policy_dict["l3"]:
+            cache2_id = self.find_cache2_id(src.policy_dict)
+            if (cache2_id == None) or (self.policy_dict[cache2_id] == src.policy_dict[cache2_id]):
+                self.policy_owner.append(src.policy_owner[0])
+                return True
+            elif self.policy_dict[cache2_id] == None:
+                self.policy_dict[cache2_id] = src.policy_dict[cache2_id]
+                return True
+        return False
+
+#check whether a VM/vCPU could adapt this policy
+    def adapt_policy(self, policy_identifier):
+        for owner in self.policy_owner:
+            if owner == policy_identifier:
+                return True
+        return False
+
+class vCatPolicy(RdtPolicy):
+    def merge_policy(self, src):
+        return False
+
+def create_clos_node(scenario_etree, vm_id, index_list):
+    allocation_vm_node = common.get_node(f"/acrn-config/vm[@id = '{vm_id}']", scenario_etree)
     if allocation_vm_node is None:
-        allocation_vm_node = common.append_node("/acrn-config/vm", None, etree, id = vm_id)
+        allocation_vm_node = common.append_node("/acrn-config/vm", None, scenario_etree, id = vm_id)
     if common.get_node("./clos", allocation_vm_node) is None:
         clos_node = common.append_node("./clos", None, allocation_vm_node)
     for index in index_list:
         common.append_node(f"./vcpu_clos", str(index), clos_node)

-def find_cache2_id(mask, cache2_id_list):
-    for cache2 in cache2_id_list:
-        if mask[cache2] != "None":
-            return cache2
-    return "None"
-
-def merge_policy_list(mask_list, cache2_id_list):
-    index = 0
+def merge_policy_list(policy_list):
     result_list = []
-    for index,mask in enumerate(mask_list):
-        merged = 0
+    for index,p in enumerate(policy_list):
+        merged = False
         if index == 0:
-            result_list.append(mask)
+            result_list.append(p)
             continue
         for result in result_list:
-            if result["l3"] != mask["l3"]:
-                continue
-            else:
-                cache2_id = find_cache2_id(mask, cache2_id_list)
-                if cache2_id == "None" or result[cache2_id] == mask[cache2_id]:
-                    merged = 1
-                    break
-                if result[cache2_id] == "None":
-                    merged = 1
-                    result[cache2_id] = mask[cache2_id]
-                    break
-        if merged == 0:
-            result_list.append(mask)
+            if result.merge_policy(p):
+                merged = True
+                break;
+        if merged == False:
+            result_list.append(p)
     return result_list

-def gen_all_clos_index(board_etree, scenario_etree, allocation_etree):
-    policy_list = []
-    allocation_list = scenario_etree.xpath(f"//POLICY")
+def gen_identifier_list(scenario_etree):
+    identifier_list = []
+    vm_list = scenario_etree.xpath("//POLICY/VM")
+    for vm in vm_list:
+        vm_name = common.get_node("./text()", vm)
+        vcpu = common.get_node("../VCPU/text()", vm)
+        cache_type = common.get_node("../TYPE/text()", vm)
+        identifier_list.append(Identifier(vm_name, vcpu, cache_type))
+    return identifier_list
+
+def vm_vcat_enable(scenario_etree, vm_name):
+    vcat_enable = common.get_node(f"//VCAT_ENABLED/text()", scenario_etree)
+    virtual_cat_support = common.get_node(f"//vm[name = '{vm_name}']/virtual_cat_support/text()", scenario_etree)
+    return (vcat_enable == "y") and (virtual_cat_support == "y")
+
+def get_policy_list(board_etree, scenario_etree, allocation_etree):
     cache2_id_list = scenario_etree.xpath("//CACHE_ALLOCATION[CACHE_LEVEL = 2]/CACHE_ID/text()")
     cache2_id_list.sort()
-    for policy in allocation_list:
-        cache_level = common.get_node("../CACHE_LEVEL/text()", policy)
-        cache_id = common.get_node("../CACHE_ID/text()", policy)
-        vcpu = common.get_node("./VCPU/text()", policy)
-        mask = common.get_node("./CLOS_MASK/text()", policy)
-        tmp = (cache_level, cache_id, vcpu, mask)
-        policy_list.append(tmp)
-
-    vCPU_list = scenario_etree.xpath(f"//POLICY/VCPU/text()")
-    l3_mask_list = scenario_etree.xpath(f"//CACHE_ALLOCATION[CACHE_LEVEL = 3]/POLICY/CLOS_MASK")
-    mask_list = []
-    for vCPU in vCPU_list:
+    RdtPolicy.cache2_id_list = cache2_id_list
+    identifier_list = gen_identifier_list(scenario_etree)
+
+    result_list = []
+    for ident in identifier_list:
         dict_tmp = {}
-        l3_mask = l2_mask = "None"
-        l3_mask_list = scenario_etree.xpath(f"//CACHE_ALLOCATION[CACHE_LEVEL = 3]/POLICY[VCPU = '{vCPU}']/CLOS_MASK/text()")
-        if len(l3_mask_list) > 0:
-            l3_mask = l3_mask_list[0]
+        policy_list = scenario_etree.xpath(f"//POLICY[VM = '{ident.vm_name}' and VCPU = '{ident.vcpu}' and TYPE = '{ident.cache_type}']")
+        l3_mask = l2_mask = None
+        cache2_id = None
+        for policy in policy_list:
+            cache_level = common.get_node("../CACHE_LEVEL/text()", policy)
+            cache_id = common.get_node("../CACHE_ID/text()", policy)
+            clos_mask = common.get_node("./CLOS_MASK/text()", policy)
+            if cache_level == "2":
+                l2_mask = clos_mask
+                cache2_id = cache_id
+            else:
+                l3_mask = clos_mask
         dict_tmp["l3"] = l3_mask
-
-        l2_mask_list = scenario_etree.xpath(f"//CACHE_ALLOCATION[CACHE_LEVEL = 2]/POLICY[VCPU = '{vCPU}']/CLOS_MASK")
-        if len(l2_mask_list) > 0:
-            l2_mask = l2_mask_list[0].text
-            cache_id = scenario_etree.xpath(f"//CACHE_ALLOCATION[CACHE_LEVEL = 2 and POLICY/VCPU = '{vCPU}']/CACHE_ID/text()")[0]
         for cache2 in cache2_id_list:
-            if cache2 == cache_id:
+            if cache2 == cache2_id:
                 dict_tmp[cache_id] = l2_mask
             else:
-                dict_tmp[cache2] = "None"
-        mask_list.append(dict_tmp)
-    mask_list = merge_policy_list(mask_list, cache2_id_list)
-    return mask_list
-
-def get_clos_index(cache_level, cache_id, clos_mask):
-    mask_list = common.get_mask_list(cache_level, cache_id)
-    idx = 0
-    for mask in mask_list:
-        idx += 1
-        if mask == clos_mask:
-            break
-    return idx
-def get_clos_id(mask_list, l2_id, l2_mask, l3_mask):
-    for mask in mask_list:
-        if mask[l2_id] == l2_mask and mask["l3"] == l3_mask:
-            return mask_list.index(mask)
+                dict_tmp[cache2] = None
+        if vm_vcat_enable(scenario_etree, ident.vm_name):
+            result_list.append(vCatPolicy(dict_tmp, ident))
+        else:
+            result_list.append(RdtPolicy(dict_tmp, ident))
+    return merge_policy_list(result_list)
+
+def get_clos_id(rdt_list, mask_identifier):
+    for index,rdt in enumerate(rdt_list):
+        if rdt.adapt_policy(mask_identifier):
+            return index
     return 0

 def alloc_clos_index(board_etree, scenario_etree, allocation_etree, mask_list):
     vm_node_list = scenario_etree.xpath("//vm")
     for vm_node in vm_node_list:
-        vmname = common.get_node("./name/text()", vm_node)
-        allocation_list = scenario_etree.xpath(f"//CACHE_ALLOCATION[POLICY/VM = '{vmname}']")
-        for allocation in allocation_list:
-            index_list = []
-            cache_level = common.get_node("./CACHE_LEVEL/text()", allocation)
-            cache_id = common.get_node("./CACHE_ID/text()", allocation)
-            clos_mask_list = allocation.xpath(f".//POLICY[VM = '{vmname}']/CLOS_MASK/text()")
-
-            for clos_mask in clos_mask_list:
-                index = get_clos_id(mask_list, cache_id, clos_mask, "None")
+        vm_name = common.get_node("./name/text()", vm_node)
+        vcpu_list = scenario_etree.xpath(f"//POLICY[VM = '{vm_name}']/VCPU/text()")
+        index_list = []
+        for vcpu in sorted(list(set(vcpu_list))):
+            type_list = scenario_etree.xpath(f"//POLICY[VM = '{vm_name}' and VCPU = '{vcpu}']/TYPE/text()")
+            for cache_type in sorted(list(set(type_list))):
+                if cache_type == "Data":
+                    continue
+                index = get_clos_id(mask_list, Identifier(vm_name, vcpu, cache_type))
                 index_list.append(index)
-            create_clos_node(allocation_etree, common.get_node("./@id", vm_node), index_list)
+        create_clos_node(allocation_etree, common.get_node("./@id", vm_node), index_list)

-def creat_mask_list_node(board_etree, scenario_etree, allocation_etree, mask_list):
+def create_mask_list_node(board_etree, scenario_etree, allocation_etree, rdt_policy_list):
     allocation_hv_node = common.get_node(f"//hv", allocation_etree)
     if allocation_hv_node is None:
         allocation_hv_node = common.append_node(f"/acrn-config/hv", None, allocation_etree)
@@ -127,24 +172,24 @@ def create_mask_list_node(board_etree, scenario_etree, allocation_etree, rdt_policy_list):
     cache2_id_list.sort()
     if common.get_node("./clos_mask[@id = l3]", allocation_hv_node) is None:
         clos_mask = common.append_node("./clos_mask", None, allocation_hv_node, id="l3")
-    for i in range(0, len(mask_list)):
-        if mask_list[i]["l3"] == "None":
-            value = "0xffff"
-        else:
-            value = str(mask_list[i]["l3"])
+    length = common.get_node(f"//cache[@level='3']/capability/capacity_mask_length/text()", board_etree)
+    value = hex((1 << int(length)) - 1)
+    for i in range(0, len(rdt_policy_list)):
+        if rdt_policy_list[i].policy_dict["l3"] is not None:
+            value = str(rdt_policy_list[i].policy_dict["l3"])
         common.append_node(f"./clos", value, clos_mask)
     for cache2 in cache2_id_list:
+        length = common.get_node(f"//cache[@level='2' and @id = '{cache2}']/capability/capacity_mask_length/text()", board_etree)
+        value = hex((1 << int(length)) - 1)
         if common.get_node("./clos_mask[@id = '{cache2}']", allocation_hv_node) is None:
             clos_mask = common.append_node("./clos_mask", None, allocation_hv_node, id=cache2)
-        for i in range(0, len(mask_list)):
-            if mask_list[i][cache2] == "None":
-                value = "0xffff"
-            else:
-                value = str(mask_list[i][cache2] )
+        for i in range(0, len(rdt_policy_list)):
+            if rdt_policy_list[i].policy_dict[cache2] is not None:
+                value = str(rdt_policy_list[i].policy_dict[cache2] )
             common.append_node(f"./clos", value, clos_mask)

 def fn(board_etree, scenario_etree, allocation_etree):
-    mask_list = gen_all_clos_index(board_etree, scenario_etree, allocation_etree)
-    creat_mask_list_node(board_etree, scenario_etree, allocation_etree, mask_list)
-    alloc_clos_index(board_etree, scenario_etree, allocation_etree, mask_list)
+    policy_list = get_policy_list(board_etree, scenario_etree, allocation_etree)
+    create_mask_list_node(board_etree, scenario_etree, allocation_etree, policy_list)
+    alloc_clos_index(board_etree, scenario_etree, allocation_etree, policy_list)
-- 
2.17.1
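The merge rule this patch implements — two policies can share one CLOS entry when their L3 masks match and their L2 masks do not conflict (equal, or one side unset) — can be sketched independently of the config-tool classes. This toy `Policy` is illustrative, not the actual `RdtPolicy`:

```python
class Policy:
    """Toy model of RdtPolicy: masks maps "l3" and each L2 cache id to a
    CLOS mask string (or None when unset); owners lists who uses it."""
    def __init__(self, masks, owner):
        self.masks = dict(masks)
        self.owners = [owner]

    def merge(self, other):
        # Policies conflict outright if their L3 masks differ.
        if self.masks["l3"] != other.masks["l3"]:
            return False
        # An L2 entry conflicts only when both sides set different masks.
        for cid, mask in other.masks.items():
            if cid != "l3" and mask is not None \
                    and self.masks.get(cid) not in (None, mask):
                return False
        # Compatible: absorb the other policy's set entries and owners.
        for cid, mask in other.masks.items():
            if cid != "l3" and mask is not None:
                self.masks[cid] = mask
        self.owners += other.owners
        return True

a = Policy({"l3": "0xff", "0x8": None, "0x9": "0x3"}, "vm0.vcpu0")
b = Policy({"l3": "0xff", "0x8": "0xf", "0x9": None}, "vm0.vcpu1")
print(a.merge(b), a.masks["0x8"])  # -> True 0xf
```

Fewer distinct policies after merging means fewer physical CLOS IDs consumed, which is the point of the rewrite.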
[PATCH v2 0/2] Refine for RDT
chenli.wei
1. Unify CACHE_ID type to hex format
2. Rewrite policy merging with an RDT class
3. Modify the logic of generating the CPU mask
4. Modify the generation logic of the vCAT config

*** BLURB HERE ***

Chenli Wei (2):
  misc: refine CLOS module for expansibility
  misc: modify the generate logic of vcat config

 misc/config_tools/board_config/board_c.py   |   8 +-
 misc/config_tools/schema/config.xsd         |   5 +
 misc/config_tools/schema/types.xsd          |   2 +-
 misc/config_tools/static_allocators/clos.py | 233 +++++++++++-------
 .../xforms/vm_configurations.c.xsl          |  26 +-
 5 files changed, 165 insertions(+), 109 deletions(-)

-- 
2.17.1
Re: [PATCH] misc: refine CLOS module for expansibility
chenli.wei
On 5/6/2022 4:08 PM, Junjie Mao wrote:
> Chenli Wei <chenli.wei@...> writes:
>> This patch refine CLOS module by the following aspects:
> typo

> What is 'policy dict'?
policy Dictionaries. OK, I will add comments for these members.

>> + policy_owner = []
> What does an object of `RdtPolicy` represent? The code is not self-explanatory.
Yes, I will rename it.

> Do you mean "adapt policy"?
OK.

>> + for owner in self.policy_owner:
> `merged` should be a Boolean rather than integer.
Done.

>> if index == 0:
>> + merged = 1
> Simply `if results.merge_policy(p):`
[PATCH] hv: fix the issue of "right shift count"
chenli.wei
The current code uses powerof2_roundup to round a value up to 2^n. There is
an issue when it is used on a macro, because macros have no data type. So
this patch splits powerof2_roundup into powerof2_roundup32 and
powerof2_roundup64 for the different cases.

Signed-off-by: Chenli Wei <chenli.wei@...>
---
 hypervisor/include/arch/x86/asm/vtd.h | 2 +-
 hypervisor/include/lib/util.h         | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hypervisor/include/arch/x86/asm/vtd.h b/hypervisor/include/arch/x86/asm/vtd.h
index 4d3518286..4df5b264b 100644
--- a/hypervisor/include/arch/x86/asm/vtd.h
+++ b/hypervisor/include/arch/x86/asm/vtd.h
@@ -46,7 +46,7 @@
 #if (CONFIG_MAX_PT_IRQ_ENTRIES <= 256)
 #define MAX_IR_ENTRIES 256
 #else
-#define MAX_IR_ENTRIES powerof2_roundup(CONFIG_MAX_PT_IRQ_ENTRIES)
+#define MAX_IR_ENTRIES powerof2_roundup32(CONFIG_MAX_PT_IRQ_ENTRIES)
 #endif

 /* Values for entry_type in ACPI_DMAR_DEVICE_SCOPE - device types */
diff --git a/hypervisor/include/lib/util.h b/hypervisor/include/lib/util.h
index ca3296556..2e2a6c6df 100644
--- a/hypervisor/include/lib/util.h
+++ b/hypervisor/include/lib/util.h
@@ -44,7 +44,8 @@
 #define ROUND8(x) (ROUND4(x) |(ROUND4(x)>>8))
 #define ROUND16(x) (ROUND8(x) |(ROUND8(x)>>16))
 #define ROUND32(x) (ROUND16(x) |(ROUND16(x)>>32))
-#define powerof2_roundup(x) ((ROUND32(x) == (~0UL)) ? ~0UL : (ROUND32(x) +1))
+#define powerof2_roundup32(x) ((ROUND16(x) == (~0UL)) ? ~0UL : (ROUND16(x) +1))
+#define powerof2_roundup64(x) ((ROUND32(x) == (~0UL)) ? ~0UL : (ROUND32(x) +1))

 /* Macro used to check if a value is aligned to the required boundary.
  * Returns TRUE if aligned; FALSE if not aligned
-- 
2.25.1
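The ROUNDn chain works by smearing the highest set bit into every lower bit, so adding one yields a power of two. A sketch mirroring the macros (not the ACRN headers themselves):

```python
MASK64 = (1 << 64) - 1

def smear(x: int) -> int:
    # Equivalent of the ROUND1..ROUND32 chain: after these ORs, every bit
    # below the highest set bit is also set, e.g. 0b100101100 -> 0b111111111.
    for shift in (1, 2, 4, 8, 16, 32):
        x |= x >> shift
    return x

def powerof2_roundup64(x: int) -> int:
    x = smear(x) & MASK64
    # The all-ones case cannot be incremented without wrapping, matching
    # the (~0UL) guard in the macro.
    return MASK64 if x == MASK64 else x + 1

print(powerof2_roundup64(300))  # -> 512
print(powerof2_roundup64(511))  # -> 512
```

Note that, like the C macro, an exact power of two is smeared up to the next one (512 becomes 1024), which may be one reason the `<= 256` case is handled by a separate `#if` branch.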
Re: [PATCH] misc: refine CLOS module for expansibility
Junjie Mao
Chenli Wei <chenli.wei@...> writes:
> This patch refine CLOS module by the following aspects:
What is 'policy dict'?

> + policy_owner = []
What does an object of `RdtPolicy` represent? The code is not self-explanatory. At least you need a comment on what each of its members represents.

Do you mean "adapt policy"?

> + for owner in self.policy_owner:
`merged` should be a Boolean rather than integer.

> if index == 0:
> + merged = 1
Simply `if results.merge_policy(p):`

--
Best Regards
Junjie Mao
[PATCH] misc: configurator: Config usb mediator devices in dropdown list
Calvin Zhang <calvinzhang.cool@...>
Support detecting connected USB devices in board_inspector and list them in the USB mediator configuration menu.

Tracked-On: #7424
Signed-off-by: Calvin Zhang <calvinzhang.cool@...>
---
 .../board_inspector/extractors/65-usb.py      | 33 +++++++++++++++++++
 .../launch_config/launch_cfg_gen.py           |  5 +--
 misc/config_tools/schema/VMtypes.xsd          | 12 +++++++
 misc/config_tools/schema/config.xsd           | 10 +-----
 4 files changed, 49 insertions(+), 11 deletions(-)
 create mode 100644 misc/config_tools/board_inspector/extractors/65-usb.py

diff --git a/misc/config_tools/board_inspector/extractors/65-usb.py b/misc/config_tools/board_inspector/extractors/65-usb.py
new file mode 100644
index 000000000..080a99ae6
--- /dev/null
+++ b/misc/config_tools/board_inspector/extractors/65-usb.py
@@ -0,0 +1,33 @@
+# Copyright (C) 2022 Intel Corporation. All rights reserved.
+#
+# SPDX-License-Identifier: BSD-3-Clause
+#
+
+import os, re
+
+from extractors.helpers import add_child, get_node
+
+USB_DEVICES_PATH = "/sys/bus/usb/devices"
+USB_DEVICES_REGEX = r"^\d-\d$" # only devices connecting to root hub
+
+def extract(args, board_etree):
+    usb_devices_node = get_node(board_etree, f"//usb_devices")
+    if not usb_devices_node:
+        devices_node = get_node(board_etree, f"//devices")
+        usb_devices_node = add_child(devices_node, "usb_devices")
+
+    dev_regex = re.compile(USB_DEVICES_REGEX)
+    for dev in os.listdir(USB_DEVICES_PATH):
+        m = dev_regex.match(dev)
+        if m:
+            d = m.group(0)
+            devpath = os.path.join(USB_DEVICES_PATH, d)
+            with open(os.path.join(devpath, 'devnum'), 'r') as f:
+                devnum = f.read().strip()
+            with open(os.path.join(devpath, 'busnum'), 'r') as f:
+                busnum = f.read().strip()
+            cmd_out = os.popen('lsusb -s {b}:{d}'.format(b=busnum, d=devnum)).read()
+            desc = cmd_out.split(':', maxsplit=1)[1].strip('\n')
+            add_child(usb_devices_node, "device", location=d,
+                      description=d + desc)
+
diff --git a/misc/config_tools/launch_config/launch_cfg_gen.py b/misc/config_tools/launch_config/launch_cfg_gen.py
index 76c9fd683..f007b85f9 100755
--- a/misc/config_tools/launch_config/launch_cfg_gen.py
+++ b/misc/config_tools/launch_config/launch_cfg_gen.py
@@ -259,8 +259,9 @@ def generate_for_one_vm(board_etree, hv_scenario_etree, vm_scenario_etree, vm_id
         script.add_virtual_device("uart", options="vuart_idx:{idx}")

     # Mediated PCI devices, including virtio
-    for usb_xhci in eval_xpath_all(vm_scenario_etree, ".//usb_xhci[text() != '']/text()"):
-        script.add_virtual_device("xhci", options=usb_xhci)
+    for usb_xhci in eval_xpath_all(vm_scenario_etree, ".//usb_xhci/usb_dev[text() != '']/text()"):
+        bus_port = usb_xhci.split(' ')[0]
+        script.add_virtual_device("xhci", options=bus_port)

     for virtio_input_etree in eval_xpath_all(vm_scenario_etree, ".//virtio_devices/input"):
         backend_device_file = eval_xpath(virtio_input_etree, "./backend_device_file[text() != '']/text()")
diff --git a/misc/config_tools/schema/VMtypes.xsd b/misc/config_tools/schema/VMtypes.xsd
index 5deea6479..6f5668fc3 100644
--- a/misc/config_tools/schema/VMtypes.xsd
+++ b/misc/config_tools/schema/VMtypes.xsd
@@ -276,6 +276,18 @@ The size is a subset of the VM's total memory size specified on the Basic tab.</
   </xs:sequence>
 </xs:complexType>

+<xs:complexType name="USBDevsConfiguration">
+  <xs:sequence>
+    <xs:element name="usb_dev" type="xs:string" minOccurs="0" maxOccurs="unbounded">
+      <xs:annotation acrn:title="USB device assignment"
+                     acrn:options="//usb_devices/device/@description" acrn:options-sorted-by="lambda s: (s, s.split(' ')[0])">
+        <xs:documentation>Select the USB devices you want to assign to this virtual machine.</xs:documentation>
+      </xs:annotation>
+    </xs:element>
+  </xs:sequence>
+</xs:complexType>
+
+
 <xs:simpleType name="VirtioNetworkFrameworkType">
   <xs:annotation>
     <xs:documentation>A string with value: ``Kernel based (Virtual Host)`` or ``User space based (VBSU)``.</xs:documentation>
diff --git a/misc/config_tools/schema/config.xsd b/misc/config_tools/schema/config.xsd
index 3139814fb..549e80740 100644
--- a/misc/config_tools/schema/config.xsd
+++ b/misc/config_tools/schema/config.xsd
@@ -424,18 +424,10 @@ argument and memory.</xs:documentation>
         <xs:documentation>Enable the ACRN Device Model to emulate COM1 as a User VM stdio I/O. Hypervisor global emulation will take priority over this VM setting.</xs:documentation>
       </xs:annotation>
     </xs:element>
-    <xs:element name="usb_xhci" minOccurs="0">
+    <xs:element name="usb_xhci" type="USBDevsConfiguration" minOccurs="0">
       <xs:annotation acrn:title="Virtual USB HCI" acrn:views="basic" acrn:applicable-vms="post-launched">
         <xs:documentation>Select the USB physical bus and port number that will be emulated by the ACRN Device Model for this VM. USB 3.0, 2.0, and 1.0 are supported.</xs:documentation>
       </xs:annotation>
-      <xs:simpleType>
-        <xs:annotation>
-          <xs:documentation>Input format: ``bus#-port#[:bus#-port#: ...]``, for example, ``1-2:2-4``.</xs:documentation>
-        </xs:annotation>
-        <xs:restriction base="xs:string">
-          <xs:pattern value="([\d]+-[\d]+){0,1}(:[\d]+-[\d]+)*" />
-        </xs:restriction>
-      </xs:simpleType>
     </xs:element>
     <xs:element name="virtio_devices">
       <xs:annotation acrn:title="Virt-IO devices" acrn:views="basic" acrn:applicable-vms="post-launched">
-- 
2.30.2
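The `^\d-\d$` filter in the new extractor keeps only devices attached directly to a root hub; deeper sysfs names such as `1-2.1` (behind a hub) or `1-2:1.0` (an interface node) are skipped. A quick illustration:

```python
import re

ROOT_HUB_DEV = re.compile(r"^\d-\d$")  # same pattern as USB_DEVICES_REGEX

names = ["1-2", "1-2.1", "1-2:1.0", "usb1", "2-4"]
print([n for n in names if ROOT_HUB_DEV.match(n)])  # -> ['1-2', '2-4']
```

One limitation worth noting in review: a single `\d` does not match two-digit bus or port numbers, so a name like `3-10` would also be skipped.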
Re: [PATCH v2 2/2] hv: move the MAX_VUART_NUM_PER_VM to offline tool
chenli.wei
On 5/3/2022 2:25 AM, Eddie Dong wrote:
> SOS VM may need more vUARTs, but other VMs doesn't.
Yes, this patch does not split the vUART number for each VM. The vuart config is a very small struct, and I have tried different proposals to split the vUART number per VM, but they all make the current code too complex:

1. The "struct vuart_config vuart[MAX_VUART_NUM_PER_VM]" is defined in VM_Config, which is a unified struct for all VMs.

2. More than one #define depends on MAX_VUART_NUM_PER_VM. If we change MAX_VUART_NUM_PER_VM for each VM, or use a dynamic parameter like "get_vuart_num()", all of these values have to become dynamic:

#define PM1A_EVT_PIO_IDX          (UART_PIO_IDX0 + MAX_VUART_NUM_PER_VM)
#define PM1A_CNT_PIO_IDX          (PM1A_EVT_PIO_IDX + 1U)
#define PM1B_EVT_PIO_IDX          (PM1A_CNT_PIO_IDX + 1U)
#define PM1B_CNT_PIO_IDX          (PM1B_EVT_PIO_IDX + 1U)
#define RTC_PIO_IDX               (PM1B_CNT_PIO_IDX + 1U)
#define VIRTUAL_PM1A_CNT_PIO_IDX  (RTC_PIO_IDX + 1U)
#define KB_PIO_IDX                (VIRTUAL_PM1A_CNT_PIO_IDX + 1U)
#define CF9_PIO_IDX               (KB_PIO_IDX + 1U)
#define PIO_RESET_REG_IDX         (CF9_PIO_IDX + 1U)
#define SLEEP_CTL_PIO_IDX         (PIO_RESET_REG_IDX + 1U)
#define EMUL_PIO_IDX_MAX          (SLEEP_CTL_PIO_IDX + 1U)

-----Original Message-----
Re: [PATCH] misc: deb: Fix the ServiceVM kernel line in ACRN grub entry
Calvin Zhang <calvinzhang.cool@...>
Ping...
On Mon, May 02, 2022 at 10:14:28AM +0800, Calvin Zhang via lists.projectacrn.org wrote:
After replacing partitioned ACRN deb with shared deb, the stale
[PATCH 2/2] ACRN:DM:VGPU: Add more standard modes for EDID block
Zhao, Yakui
EDID uses bytes 38-53 to support up to 8 standard modes, based on this
mechanism: byte 0 encodes xres / 8 - 31 and byte 1 encodes the
aspect_ratio/refresh_rate. But the current logic for adding a standard mode
is incorrect, which causes no standard mode to be added to the EDID block.
Fix it and add another two standard modes so that the guest VM can parse
more modes from EDID.

Signed-off-by: Zhao Yakui <yakui.zhao@...>
---
 devicemodel/hw/vdisplay_sdl.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/devicemodel/hw/vdisplay_sdl.c b/devicemodel/hw/vdisplay_sdl.c
index 02b8a6de1..eeefec17b 100644
--- a/devicemodel/hw/vdisplay_sdl.c
+++ b/devicemodel/hw/vdisplay_sdl.c
@@ -105,6 +105,7 @@ static const struct timing_entry {
 	uint32_t byte_t3;// byte idx in the Established Timings III Descriptor
 	uint32_t bit;	// bit idx
 	uint8_t hz;	// frequency
+	bool is_std;	// the flag of standard mode
 } timings[] = {
 	/* Established Timings I & II (all @ 60Hz) */
 	{ .hpixel = 1024, .vpixel = 768, .byte = 36, .bit = 3, .hz = 60},
@@ -112,8 +113,10 @@
 	{ .hpixel =  640, .vpixel = 480, .byte = 35, .bit = 5, .hz = 60 },

 	/* Standard Timings */
-	{ .hpixel = 1920, .vpixel = 1080, .hz = 60 },
-	{ .hpixel = 1280, .vpixel =  720, .hz = 60 },
+	{ .hpixel = 1920, .vpixel = 1080, .hz = 60, .is_std = true },
+	{ .hpixel = 1280, .vpixel =  720, .hz = 60, .is_std = true },
+	{ .hpixel = 1440, .vpixel =  900, .hz = 60, .is_std = true },
+	{ .hpixel = 1680, .vpixel = 1050, .hz = 60, .is_std = true },
 };

 typedef struct frame_param{
@@ -269,7 +272,7 @@ vdpy_edid_set_timing(uint8_t *addr, const base_param *b_param, TIMING_MODE mode)
 			return;
 		}
 	case STDT: // Standard Timings
-		if(stdcnt < 8 && (timing->hpixel == b_param->h_pixel)) {
+		if(stdcnt < 8 && timing->is_std) {
 			hpixel = (timing->hpixel >> 3) - 31;
 			if (timing->hpixel == 0 || timing->vpixel == 0) {
@@ -304,7 +307,7 @@
 			}
 			break;
 		} else {
-			return;
+			continue;
 		}
 	default:
-- 
2.25.1
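The byte encoding described in the commit message — byte 0 is xres/8 - 31, byte 1 packs the aspect ratio with the refresh rate — can be checked with a small sketch (aspect-ratio codes per EDID 1.4: 00b=16:10, 01b=4:3, 10b=5:4, 11b=16:9):

```python
from math import gcd

def aspect_code(hpixel: int, vpixel: int) -> int:
    # Compare the ratio itself (h * av == v * ah) rather than reduced
    # fractions, so 1440x900 correctly maps to 16:10.
    for (ah, av), code in {(16, 10): 0, (4, 3): 1, (5, 4): 2, (16, 9): 3}.items():
        if hpixel * av == vpixel * ah:
            return code
    raise ValueError("not a standard EDID aspect ratio")

def std_timing_bytes(hpixel: int, vpixel: int, hz: int) -> tuple:
    byte0 = (hpixel >> 3) - 31                       # xres / 8 - 31
    byte1 = (aspect_code(hpixel, vpixel) << 6) | (hz - 60)
    return byte0, byte1

print(std_timing_bytes(1920, 1080, 60))  # -> (209, 192)
print(std_timing_bytes(1440, 900, 60))   # -> (149, 0)
```

This matches the four modes the patch marks with `.is_std = true`; all four fall on one of the four encodable aspect ratios.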
[PATCH 1/2] ACRN:DM:VGPU: Fix the incorrect offset for EDID detailed_timing_block
Zhao, Yakui
The EDID uses bytes 54-125 to define the four detailed_timing_blocks.
But a static offset is used to update the DTB, so an incorrect offset
is used for the DTB if edid_generate is called several times.

Signed-off-by: Zhao Yakui <yakui.zhao@...>
---
 devicemodel/hw/vdisplay_sdl.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/devicemodel/hw/vdisplay_sdl.c b/devicemodel/hw/vdisplay_sdl.c
index c29596788..02b8a6de1 100644
--- a/devicemodel/hw/vdisplay_sdl.c
+++ b/devicemodel/hw/vdisplay_sdl.c
@@ -353,22 +353,17 @@ vdpy_edid_set_dtd(uint8_t *dtd, const frame_param *frame)
 }
 
 static void
-vdpy_edid_set_descripter(uint8_t *edid, uint8_t is_dtd,
+vdpy_edid_set_descripter(uint8_t *desc, uint8_t is_dtd,
 			 uint8_t tag, const base_param *b_param)
 {
-	static uint8_t offset;
-	uint8_t *desc;
 	frame_param frame;
 	const char* text;
 	uint16_t len;
 
-	offset = 54;
-	desc = edid + offset;
 	if (is_dtd) {
 		vdpy_edid_set_frame(&frame, b_param);
 		vdpy_edid_set_dtd(desc, &frame);
-		offset += 18;
 		return;
 	}
 	desc[3] = tag;
@@ -408,7 +403,6 @@ vdpy_edid_set_descripter(uint8_t *edid, uint8_t is_dtd,
 	default:
 		break;
 	}
-	offset += 18;
 }
 
 static uint8_t
@@ -431,6 +425,7 @@ vdpy_edid_generate(uint8_t *edid, size_t size, struct edid_info *info)
 	uint16_t id_manuf;
 	uint16_t id_product;
 	uint32_t serial;
+	uint8_t *desc;
 	base_param b_param, c_param;
 
 	vdpy_edid_set_baseparam(&b_param, info->prefx, info->prefy);
@@ -496,15 +491,19 @@ vdpy_edid_generate(uint8_t *edid, size_t size, struct edid_info *info)
 	/* edid[125:54], Detailed Timing Descriptor - 18 bytes x 4 */
 	// Detailed Timing Descriptor 1
+	desc = edid + 54;
 	vdpy_edid_set_baseparam(&c_param, VDPY_MAX_WIDTH, VDPY_MAX_HEIGHT);
-	vdpy_edid_set_descripter(edid, 0x1, 0, &c_param);
+	vdpy_edid_set_descripter(desc, 0x1, 0, &c_param);
 	// Detailed Timing Descriptor 2
+	desc += 18;
 	vdpy_edid_set_baseparam(&c_param, VDPY_DEFAULT_WIDTH, VDPY_DEFAULT_HEIGHT);
-	vdpy_edid_set_descripter(edid, 0x1, 0, &c_param);
+	vdpy_edid_set_descripter(desc, 0x1, 0, &c_param);
 	// Display Product Name (ASCII) String Descriptor (tag #FCh)
-	vdpy_edid_set_descripter(edid, 0, 0xfc, &b_param);
+	desc += 18;
+	vdpy_edid_set_descripter(desc, 0, 0xfc, &b_param);
 	// Display Product Serial Number Descriptor (tag #FFh)
-	vdpy_edid_set_descripter(edid, 0, 0xff, &b_param);
+	desc += 18;
+	vdpy_edid_set_descripter(desc, 0, 0xff, &b_param);
 
 	/* EDID[126], Extension Block Count */
 	edid[126] = 0;  // no Extension Block
-- 
2.25.1
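The fix replaces the static offset with destinations computed by the caller, which keeps the function re-entrant. A minimal sketch of the descriptor layout the patch relies on (the helper name is ours, not from the patch): the four 18-byte descriptors occupy bytes 54-125 of the 128-byte EDID base block, so each destination is a fixed function of the descriptor index.

```c
#include <stddef.h>

/* Hypothetical helper, not ACRN code: offset of the idx-th 18-byte
 * descriptor within the 128-byte EDID base block. Computing this per
 * call avoids the bug where a static offset kept growing across
 * repeated invocations of the generator. */
#define EDID_DESC_BASE 54
#define EDID_DESC_SIZE 18

static size_t edid_desc_offset(int idx)
{
	return EDID_DESC_BASE + (size_t)idx * EDID_DESC_SIZE;
}
```

The last descriptor ends at offset 108 + 18 = 126, matching the Extension Block Count byte at edid[126] in the diff above.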
[PATCH 0/2] ACRN:DM:VGPU: Fix the issue that too few modes are parsed from EDID
Zhao, Yakui
The function edid_generate has some problems in constructing the
detailed_timing_block and standard mode blocks, which causes too few
modes to be parsed from EDID in the guest_vm. This patch set fixes the
incorrect logic in the EDID detailed_timing_block and standard mode
blocks so that the guest_vm can parse the correct modes from EDID.

Zhao Yakui (2):
  ACRN:DM:VGPU: Fix the incorrect offset for EDID detailed_timing_block
  ACRN:DM:VGPU: Add more standard modes for EDID block

 devicemodel/hw/vdisplay_sdl.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

-- 
2.25.1
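As background for the "standard modes" part of the series, here is a hedged sketch of how one 2-byte EDID Standard Timing entry is encoded (this follows the VESA EDID 1.3 layout, where bytes 38-53 of the base block hold eight such entries; the function name and parameters are ours, not from the patch):

```c
#include <stdint.h>

/* Sketch of EDID 1.3 Standard Timing encoding, not ACRN code:
 * byte 0 = horizontal resolution / 8 - 31
 * byte 1 = aspect-ratio code in bits 7:6, (refresh rate - 60) in bits 5:0
 * Aspect codes: 00b = 16:10, 01b = 4:3, 10b = 5:4, 11b = 16:9. */
static void edid_std_timing(uint8_t st[2], unsigned hres,
			    unsigned aspect_bits, unsigned refresh_hz)
{
	st[0] = (uint8_t)(hres / 8 - 31);
	st[1] = (uint8_t)((aspect_bits << 6) | (refresh_hz - 60));
}
```

For example, 1024x768 at 60 Hz with a 4:3 aspect (code 01b) encodes as 0x61 0x40.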
Re: [PATCH v4 6/8] hv: add time modification of vrtc
Zhao, Yuanyuan
On 5/6/2022 9:32 AM, Junjie Mao wrote:
> Yuanyuan Zhao <yuanyuan.zhao@...> writes:
>> VRTC for hv used to ignore writes to vRTC register.
>> + /*
>
> What does "0" mean here? If the switch statement is checking rtc->addr,

OK.
Re: [PATCH v4 5/8] hv: add Service OS write physical RTC register
Zhao, Yuanyuan
On 5/6/2022 9:27 AM, Junjie Mao wrote:
> Yuanyuan Zhao <yuanyuan.zhao@...> writes:
>> Service OS write physical RTC register.
>
> Reviewed-by: Junjie Mao <junjie.mao@...>

OK
Re: [PATCH v4 5/8] hv: add Service OS write physical RTC register
Zhao, Yuanyuan
On 5/5/2022 10:31 PM, VanCutsem, Geoffroy wrote:
OK

-----Original Message-----
> Service OS -> Service VM (or Service VM OS) (and update the subject line as well please)
>> Both RTC time modify and Configuration register is available.
Re: [PATCH v4 4/8] hv: add vRTC reg_b and reg_c support
Zhao, Yuanyuan
On 5/6/2022 9:24 AM, Junjie Mao wrote:
> Yuanyuan Zhao <yuanyuan.zhao@...> writes:
>> During setting RTC time, driver will halt RTC update. So support bit 7
>> of reg_b. When it set to 0, time will be updated. And when it's set to
>> 1, rtc will keep the second it was set.
>
> Passive voices are typically not a good choice for justification as they
>
>> In the process of getting RTC time, driver sets alarm interrupt, waits
>> 1s, and get alarm interrupt flag. So support alarm interrupt flag
>> update. If alarm interrupt is enabled (bit 5, reg_b set to 1),
>> interrupt flag register will be set (bit 7 & bit 5 of reg_c) at
>> appropriate time.
>
> It is highly recommended to describe the background of this operation,
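The reg_b/reg_c interaction under review can be sketched as follows. This is a hedged illustration, not the ACRN patch itself; the bit masks follow the common MC146818 RTC layout and the names are ours:

```c
#include <stdint.h>

#define RTCSB_HALT  0x80u /* reg_b bit 7: halt time updates while set */
#define RTCSB_AINTR 0x20u /* reg_b bit 5: alarm interrupt enable */
#define RTCIR_ALARM 0x20u /* reg_c bit 5: alarm occurred flag */
#define RTCIR_INT   0x80u /* reg_c bit 7: interrupt request flag */

/* Sketch: when the alarm condition fires, the alarm flag is always
 * latched in reg_c; the interrupt request bit is set only if the
 * alarm interrupt is enabled in reg_b. */
static uint8_t vrtc_alarm_fired(uint8_t reg_b, uint8_t reg_c)
{
	reg_c |= RTCIR_ALARM;
	if (reg_b & RTCSB_AINTR)
		reg_c |= RTCIR_INT;
	return reg_c;
}
```

With the alarm interrupt enabled, a fired alarm thus reports both bit 5 and bit 7 of reg_c, matching the "bit 7 & bit 5 of reg_c" wording in the commit message.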
Re: [PATCH v4 3/8] hv: support 12 hour mode
Zhao, Yuanyuan
On 5/6/2022 9:19 AM, Junjie Mao wrote:
> Yuanyuan Zhao <yuanyuan.zhao@...> writes:
>> Add support for hour format.
>
> Why does the double white spaces still exist here?!

Sorry, it rolled back when resolving the conflict.
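The 12-hour mode being added can be sketched like this. This is a hedged example, not the ACRN code: it assumes the common MC146818 convention where, in 12-hour mode, bit 7 of the hour byte marks PM and hour 0 is stored as 12:

```c
#include <stdint.h>

#define RTC_HOUR_PM 0x80u /* PM flag in the hour register, 12-hour mode */

/* Sketch: convert a 24-hour binary hour (0..23) to the 12-hour
 * register encoding: 12, 1..11 with the PM bit for 12:00-23:59. */
static uint8_t hour24_to_12(uint8_t hour24)
{
	uint8_t pm = (hour24 >= 12) ? RTC_HOUR_PM : 0;
	uint8_t h = hour24 % 12;

	if (h == 0)
		h = 12; /* midnight -> 12 AM, noon -> 12 PM */
	return pm | h;
}
```

So 00:xx reads back as 12 with PM clear, and 23:xx as 11 with PM set.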
Re: [PATCH v4 7/8] hv: vrtc calibrate function
Zhao, Yuanyuan
On 5/6/2022 10:23 AM, Junjie Mao wrote:
> "Zhao, Yuanyuan" <yuanyuan.zhao@...> writes:
>> On 5/6/2022 9:46 AM, Junjie Mao wrote:
>>> Yuanyuan Zhao <yuanyuan.zhao@...> writes:
>>>> For TSC's precision (20~100ppm) is lower than physical RTC (less than 20ppm),
>>> `Calibrate` is a verb and thus makes `calibrate lock` misleading
>> Monotonicity ensured by calibration period:
>
> How is monotonicity derived from the method above?

system time: 12:30:59 ----------------> get rtc time 12:30:58
  ---> wait alarm interrupt 12:30:59 ----> system time 10:30.59
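One way the monotonicity question above is commonly answered can be sketched in code. This is a hedged illustration of the general clamping technique, not the ACRN implementation: each newly calibrated vRTC reading is clamped to the last value handed to the guest, so a small backward correction is absorbed rather than exposed.

```c
#include <time.h>

static time_t vrtc_last_reported;

/* Sketch: never let a calibrated reading go backwards; hold the
 * reported time until the calibrated time catches up again. */
static time_t vrtc_clamp_monotonic(time_t calibrated)
{
	if (calibrated < vrtc_last_reported)
		calibrated = vrtc_last_reported;
	vrtc_last_reported = calibrated;
	return calibrated;
}
```

A guest reading the vRTC through this clamp sees time stand still briefly after a backward calibration instead of jumping back.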