
Re: [Xen-devel] [PATCH RFC 18/20] libxc/acpi: Build ACPI tables for HVMlite guests



On 06/06/2016 08:03 AM, Wei Liu wrote:
>
>> diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
>> index 608404f..9569e54 100644
>> --- a/tools/libxc/Makefile
>> +++ b/tools/libxc/Makefile
>> @@ -79,6 +79,26 @@ GUEST_SRCS-y += $(ELF_SRCS-y)
>>  $(patsubst %.c,%.o,$(ELF_SRCS-y)): CFLAGS += -Wno-pointer-sign
>>  $(patsubst %.c,%.opic,$(ELF_SRCS-y)): CFLAGS += -Wno-pointer-sign
>>  
>> +ACPI_PATH = $(XEN_ROOT)/xen/common/libacpi
>> +vpath %.c $(ACPI_PATH)
>> +ACPI_FILES  = dsdt_anycpu.c dsdt_15cpu.c static_tables.c
>> +ACPI_FILES += dsdt_anycpu_qemu_xen.c dsdt_empty.c build.c
>> +ACPI_SRCS = $(patsubst %.c,$(ACPI_PATH)/%.c,$(ACPI_FILES))
>> +
>> +.NOTPARALLEL: $(ACPI_SRCS)
> Why is this needed?


To prevent the intermediate *.c files from being built in parallel from
two places (libxc and hvmloader). But this will be removed, following
Jan's suggestion to not generate those files in libacpi and to keep
them in libxc and hvmloader instead.


>
> +static int init_acpi_config(struct xc_dom_image *dom,
> +                            struct acpi_config *config)
> +{
> +    xc_interface *xch = dom->xch;
> +    uint32_t domid = dom->guest_domid;
> +    xc_dominfo_t info;
> +    int i, rc;
> +
> +    memset(config, 0, sizeof(*config));
> +
> +    config->dsdt_anycpu = config->dsdt_15cpu = dsdt_empty;
> +    config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_empty_len;
> +
> +    rc = xc_domain_getinfo(xch, domid, 1, &info);
> +    if ( rc < 0 )
> +    {
> +        DOMPRINTF("%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
> +        return rc;
> +    }
> +
> +    config->apic_mode = 1;
> +
> +    if ( dom->nr_vnodes )
> +    {
> +        struct acpi_numa *numa = &config->numa;
> +
> +        numa->vmemrange = calloc(dom->nr_vmemranges,
> +                                 sizeof(*numa->vmemrange));
> +        numa->vdistance = calloc(dom->nr_vnodes,
> +                                 sizeof(*numa->vdistance));
> +        numa->vcpu_to_vnode = calloc(config->nr_vcpus,
> +                                     sizeof(*numa->vcpu_to_vnode));
> +        if ( !numa->vmemrange || !numa->vdistance || !numa->vcpu_to_vnode )
> +        {
> +            DOMPRINTF("%s: Out of memory", __FUNCTION__);
> +            free(numa->vmemrange);
> +            free(numa->vdistance);
> +            free(numa->vcpu_to_vnode);
> +            return -ENOMEM;
> +        }
> +
> +        rc = xc_domain_getvnuma(xch, domid, &numa->nr_vnodes,
> +                                &numa->nr_vmemranges,
> +                                &config->nr_vcpus, numa->vmemrange,
> +                                numa->vdistance, numa->vcpu_to_vnode);
> +
> +        if ( rc )
> +        {
> +            DOMPRINTF("%s: xc_domain_getvnuma failed (rc=%d)",
> +                      __FUNCTION__, rc);
> +            return rc;
> +        }
> +    }
> +    else
> +        config->nr_vcpus = info.max_vcpu_id + 1;
> This looks wrong, at least it is not immediately clear why you would
> want to do this.


Why is this wrong? If we have one VCPU, max_vcpu_id will be zero, won't it?
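
To illustrate the assumption (the helper and the asserts below are
purely illustrative, not part of the patch): vCPU ids are 0-based, so a
domain with N vCPUs reports max_vcpu_id == N - 1.

#include <assert.h>
#include <stdint.h>

/* Illustrative only: derive the vCPU count from the highest 0-based
 * vCPU id, as the quoted hunk does. */
static uint32_t vcpus_from_max_id(uint32_t max_vcpu_id)
{
    return max_vcpu_id + 1;
}

int main(void)
{
    assert(vcpus_from_max_id(0) == 1);  /* single-vCPU domain */
    assert(vcpus_from_max_id(3) == 4);  /* ids 0..3 => 4 vCPUs */
    return 0;
}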


> +}
> +
> +int xc_dom_build_acpi(struct xc_dom_image *dom)
> +{
> +    struct acpi_config config;
> +    uint32_t domid = dom->guest_domid;
> +    xc_interface *xch = dom->xch;
> +    int rc, i, acpi_pages_num;
> +    xen_pfn_t extent, *extents;
> +    void *acpi_pages, *acpi_physical;
> +    void *guest_info_page, *guest_acpi_pages;
> +
> Need to initialise these to NULL and 0 in case you take the exit path
> earlier.

Right. I think Roger pointed out that clang doesn't like this either.
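
Something along these lines, I think (a sketch only: the out: label and
the exact cleanup calls are illustrative, and I've dropped the
variables that aren't needed to show the pattern):

int xc_dom_build_acpi(struct xc_dom_image *dom)
{
    struct acpi_config config;
    int rc, acpi_pages_num = 0;
    xen_pfn_t *extents = NULL;
    void *acpi_pages = NULL, *acpi_physical = NULL;
    void *guest_info_page = NULL, *guest_acpi_pages = NULL;

    rc = init_acpi_config(dom, &config);
    if ( rc )
        goto out;   /* early exit: everything above is still NULL/0 */

    /* ... build and map the tables, filling in the pointers above ... */

out:
    /* free(NULL) is a no-op, so unconditional cleanup is safe. */
    free(extents);
    /* unmap/free the other buffers here, guarded the same way */
    return rc;
}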

-boris


