
Re: [Xen-devel] [PATCH 08/10] pvh/acpi: Handle ACPI accesses for PVH guests



On Sun, Nov 06, 2016 at 04:42:41PM -0500, Boris Ostrovsky wrote:
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/ioreq.c | 66 
> ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 171ea82..ced7c92 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1392,6 +1392,72 @@ void hvm_ioreq_init(struct domain *d)
>  static int acpi_ioaccess(
>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> +    unsigned int i;
> +    unsigned int bits = bytes * 8;
> +    uint8_t *reg = NULL;
> +    unsigned idx = port & 3;
> +    bool is_cpu_map = 0;
> +    struct domain *currd = current->domain;
> +
> +    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> +                 (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +    switch (port)
> +    {
> +    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +        (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1):
> +        reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +        break;
> +    case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +        (ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1):
> +        reg = currd->arch.hvm_domain.acpi_io.gpe;
> +        break;
> +    case 0xaf00 ... (0xaf00 + HVM_MAX_VCPUS/8 - 1):

That may need some documentation, or perhaps a #define?
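
For example (the name here is only a suggestion):

  #define ACPI_CPU_MAP_ADDRESS 0xaf00

  case ACPI_CPU_MAP_ADDRESS ...
      (ACPI_CPU_MAP_ADDRESS + HVM_MAX_VCPUS/8 - 1):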

Also, just in case somebody decides it would be funny to compile Xen with
HVM_MAX_VCPUS set to, say, 4: HVM_MAX_VCPUS/8 then evaluates to 0, so the
end of this case range becomes 0xaeff, below its start. Won't that break
the range?

You may want to add a BUILD_BUG_ON for HVM_MAX_VCPUS, like:

BUILD_BUG_ON(HVM_MAX_VCPUS < 8);

> +        is_cpu_map = 1;
> +        break;
> +    default:
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
> +    if ( bytes == 0 )
> +        return X86EMUL_OKAY;

Should you also check for other odd sizes? Say 3?
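
Something like this, perhaps (a minimal sketch that would accept only 1-,
2- and 4-byte accesses):

  if ( (bytes != 1) && (bytes != 2) && (bytes != 4) )
      return X86EMUL_UNHANDLEABLE;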

> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        *val &= ~((1U << bits) - 1);
> +
> +        if ( is_cpu_map )
> +        {
> +            unsigned first_bit, last_bit;
> +
> +            first_bit = (port - 0xaf00) * 8;

Oh, that needs a #define.
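E.g., reusing the hypothetical ACPI_CPU_MAP_ADDRESS from above:

  first_bit = (port - ACPI_CPU_MAP_ADDRESS) * 8;
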
> +            last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
> +            for (i = first_bit; i < last_bit; i++)

I think you need spaces here.
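I.e., per Xen coding style:

  for ( i = first_bit; i < last_bit; i++ )
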
> +                *val |= (1U << (i - first_bit));
> +        }
> +        else
> +            memcpy(val, &reg[idx], bytes);
> +    }
> +    else
> +    {
> +        if ( is_cpu_map )
> +            /* CPU map should not be written. */

It shouldn't? Then who updates this? Oh, we do it via the hypercall. You
may want to mention in this function how this is all supposed to work.
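
Maybe something along these lines, above the check (wording is only a
suggestion):

  /*
   * The CPU map is read-only from the guest's perspective: it is
   * updated only by the toolstack, via hypercall, when vCPUs are
   * brought online or offline.
   */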

> +            return X86EMUL_UNHANDLEABLE;
> +
> +        /* Write either status or enable register. */
> +        if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        if ( idx < 2 ) /* status, write 1 to clear. */
> +        {
> +            reg[idx] &= ~(*val & 0xff);
> +            if ( bytes == 2 )
> +                reg[idx + 1] &= ~((*val >> 8) & 0xff);
> +        }
> +        else           /* enable */
> +            memcpy(&reg[idx], val, bytes);
> +    }
> +
>      return X86EMUL_OKAY;
>  }
>  
> -- 
> 2.7.4
> 
> 