
Re: [Xen-devel] [PATCH v5 12/13] pvh/acpi: Save ACPI registers for PVH guests



>>> On 17.12.16 at 00:18, <boris.ostrovsky@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/pmtimer.c
> +++ b/xen/arch/x86/hvm/pmtimer.c
> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>      int rc;
>  
>      if ( !has_vpm(d) )
> +    {
> +        if ( !has_acpi_dm_ff(d) )
> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>          return 0;
> +    }
>  
>      spin_lock(&s->lock);
>  
> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>  
>      if ( !has_vpm(d) )
> +    {
> +        if ( !has_acpi_dm_ff(d) )
> +            return hvm_load_entry(PMTIMER, h, acpi);
>          return -ENODEV;
> +    }
>  
>      spin_lock(&s->lock);

Seeing this, I first of all wonder: would there be any harm in simply
having PVH take (almost) the same route as HVM here? In particular,
there's a pmt_update_sci() call, an equivalent of which would seem
to be needed for PVH too.

Which in turn gets me to wonder whether some of the code that
is already there couldn't be re-used (handle_evt_io(), for example).
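
To make that concrete, a rough sketch of the load side (the placement of
the register block and the pvh_acpi_update_sci() helper are assumptions
for illustration only, not something the series defines):

    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;  /* assumed location */

    if ( !has_vpm(d) )
    {
        int rc;

        if ( has_acpi_dm_ff(d) )
            return -ENODEV;

        if ( (rc = hvm_load_entry(PMTIMER, h, acpi)) != 0 )
            return rc;

        /*
         * Counterpart of pmt_update_sci(): re-evaluate, from the restored
         * PM1a_STS/PM1a_EN pair, whether an SCI ought to be pending.  The
         * helper name is invented here; the logic behind handle_evt_io()
         * may well be what gets re-used for it.
         */
        pvh_acpi_update_sci(d, acpi);

        return 0;
    }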

And then, seeing the locking here: don't you need some locking
in the earlier patches too, both to serialize accesses from multiple
guest vCPUs and to arbitrate between Dom0 and the guest?
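
As a minimal illustration of the kind of serialization I mean (all names
below are made up for the example; in practice the lock would presumably
live in struct hvm_domain next to the register block):

    spinlock_t acpi_lock;  /* hypothetical per-domain lock for the ACPI block */

    /* Guest vCPU side: port I/O handler writing PM1a_EN. */
    void guest_write_pm1a_en(struct domain *d, uint16_t val)
    {
        spin_lock(&d->arch.hvm_domain.acpi_lock);
        d->arch.hvm_domain.acpi.pm1a_en = val;
        /* SCI re-evaluation would happen here, under the same lock. */
        spin_unlock(&d->arch.hvm_domain.acpi_lock);
    }

    /* Dom0 side: hypercall path setting status bits (e.g. for vCPU hotplug). */
    void dom0_set_pm1a_sts(struct domain *d, uint16_t bits)
    {
        spin_lock(&d->arch.hvm_domain.acpi_lock);
        d->arch.hvm_domain.acpi.pm1a_sts |= bits;
        spin_unlock(&d->arch.hvm_domain.acpi_lock);
    }

With both paths taking the same lock, a guest vCPU poking the event
registers and Dom0 updating them via hypercall can't interleave half-way.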

Jan

