Re: [Xen-devel] [PATCH v4 08/15] pvh/acpi: Handle ACPI accesses for PVH guests
>>> On 29.11.16 at 16:33, <boris.ostrovsky@xxxxxxxxxx> wrote:
> +static int acpi_access_common(struct domain *d,
> + int dir, unsigned int port,
> + unsigned int bytes, uint32_t *val)
> +{
Why is this a separate function instead of the body of
acpi_guest_access()? Do you mean to later use this for the
domctl handling (as the use of XEN_DOMCTL_ACPI_* suggests)?
Such things may be worthwhile mentioning at least after the first
--- marker.
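If reuse from the domctl path is the plan, a thin wrapper keeping the
port-I/O specifics out of the common helper would make that obvious -
something along these lines (just a sketch; the wrapper's exact signature
and the IOREQ_READ check are assumptions on my part, not taken from the
patch):

    /*
     * Port I/O intercept entry point; translates the emulator's
     * direction into the XEN_DOMCTL_ACPI_* encoding used by the
     * common helper.
     */
    static int acpi_guest_access(int dir, unsigned int port,
                                 unsigned int bytes, uint32_t *val)
    {
        return acpi_access_common(current->domain,
                                  (dir == IOREQ_READ) ? XEN_DOMCTL_ACPI_READ
                                                      : XEN_DOMCTL_ACPI_WRITE,
                                  port, bytes, val);
    }

Otherwise folding the body into acpi_guest_access() would seem preferable.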
> + uint16_t *sts = NULL, *en = NULL;
> + const uint16_t *mask_sts = NULL, *mask_en = NULL;
> + static const uint16_t pm1a_sts_mask = ACPI_BITMASK_GLOBAL_LOCK_STATUS;
> + static const uint16_t pm1a_en_mask = ACPI_BITMASK_GLOBAL_LOCK_ENABLE;
> + static const uint16_t gpe0_sts_mask = 1U << XEN_GPE0_CPUHP_BIT;
> + static const uint16_t gpe0_en_mask = 1U << XEN_GPE0_CPUHP_BIT;
> +
> + BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
> + >= ACPI_GPE0_BLK_ADDRESS_V1);
> +
> + ASSERT(!has_acpi_dm_ff(d));
> +
> + switch ( port )
> + {
> + case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> + ACPI_PM1A_EVT_BLK_ADDRESS_V1 +
> + sizeof (d->arch.hvm_domain.acpi.pm1a_sts) +
> + sizeof (d->arch.hvm_domain.acpi.pm1a_en):
Same remark as for an earlier patch regarding the blanks here.
> + sts = &d->arch.hvm_domain.acpi.pm1a_sts;
> + en = &d->arch.hvm_domain.acpi.pm1a_en;
> + mask_sts = &pm1a_sts_mask;
> + mask_en = &pm1a_en_mask;
> + break;
> +
> + case ACPI_GPE0_BLK_ADDRESS_V1 ...
> + ACPI_GPE0_BLK_ADDRESS_V1 +
> + sizeof (d->arch.hvm_domain.acpi.gpe0_sts) +
> + sizeof (d->arch.hvm_domain.acpi.gpe0_en):
> +
> + sts = &d->arch.hvm_domain.acpi.gpe0_sts;
> + en = &d->arch.hvm_domain.acpi.gpe0_en;
> + mask_sts = &gpe0_sts_mask;
> + mask_en = &gpe0_en_mask;
> + break;
> +
> + case XEN_ACPI_CPU_MAP ...
> + XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN - 1:
> + break;
> +
> + default:
> + return X86EMUL_UNHANDLEABLE;
> + }
> +
> + if ( dir == XEN_DOMCTL_ACPI_READ )
> + {
> + uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
> +
> + if ( !mask_sts )
> + {
> + unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
> +
> + /*
> + * Clear bits that we are about to read to in case we
> + * copy fewer than @bytes.
> + */
> + *val &= mask;
> +
> + if ( ((d->max_vcpus + 7) / 8) > first_byte )
> + {
> + memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
> + min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
> + }
Unnecessary braces.
> + }
> + else
> + {
> + uint32_t data = (((uint32_t)*en) << 16) | *sts;
> + data >>= 8 * (port & 3);
Blank line between declaration and statement(s) please.
> + *val = (*val & mask) | (data & ~mask);
> + }
> + }
> + else
> + {
> + /* Guests do not write CPU map */
> + if ( !mask_sts )
> + return X86EMUL_UNHANDLEABLE;
> + else if ( mask_sts )
> + {
> + uint32_t v = *val;
> +
> + /* Status register is write-1-to-clear by guests */
> + switch ( port & 3 )
> + {
> + case 0:
> + *sts &= ~(v & 0xff);
> + *sts &= *mask_sts;
> + if ( !--bytes )
> + break;
> + v >>= 8;
> +
> + case 1:
> + *sts &= ~((v & 0xff) << 8);
> + *sts &= *mask_sts;
> + if ( !--bytes )
> + break;
> + v >>= 8;
> +
> + case 2:
> + *en = ((*en & 0xff00) | (v & 0xff)) & *mask_en;
> + if ( !--bytes )
> + break;
> + v >>= 8;
> +
> + case 3:
> + *en = (((v & 0xff) << 8) | (*en & 0xff)) & *mask_en;
> + }
Please annotate the intended fall-throughs with comments, to silence
Coverity. Also the last case would better end with a break.
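I.e. something like this (only re-using the code quoted above, adding the
annotations and the final break):

    case 0:
        *sts &= ~(v & 0xff);
        *sts &= *mask_sts;
        if ( !--bytes )
            break;
        v >>= 8;
        /* fall through */

    case 1:
        *sts &= ~((v & 0xff) << 8);
        *sts &= *mask_sts;
        if ( !--bytes )
            break;
        v >>= 8;
        /* fall through */

    case 2:
        *en = ((*en & 0xff00) | (v & 0xff)) & *mask_en;
        if ( !--bytes )
            break;
        v >>= 8;
        /* fall through */

    case 3:
        *en = (((v & 0xff) << 8) | (*en & 0xff)) & *mask_en;
        break;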
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -651,6 +651,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t)
> u_domctl)
> goto maxvcpu_out;
> }
>
> + d->avail_vcpus = xzalloc_array(unsigned long,
> + BITS_TO_LONGS(d->max_vcpus));
> + if ( !d->avail_vcpus )
> + goto maxvcpu_out;
Considering this array isn't being touched outside of
acpi_access_common(), how do you get away without setting the bits
for the vCPU-s that are online when the guest starts?
Also you appear to leak this array when the domain gets destroyed.
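I.e. I'd expect something like this here (just a sketch, assuming all
vCPU-s created by this domctl are meant to start out online; "max" stands
for the requested vCPU count in this handler, "i" for a suitable loop
variable, and the destruction-side placement is left open):

    d->avail_vcpus = xzalloc_array(unsigned long,
                                   BITS_TO_LONGS(d->max_vcpus));
    if ( !d->avail_vcpus )
        goto maxvcpu_out;

    /* Mark the initially online vCPU-s as available. */
    for ( i = 0; i < max; i++ )
        __set_bit(i, d->avail_vcpus);

together with a matching

    xfree(d->avail_vcpus);

on the domain destruction path.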
Jan