
Re: [Xen-devel] [PATCH v5 12/13] pvh/acpi: Save ACPI registers for PVH guests



>>> On 20.12.16 at 16:09, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 12/20/2016 08:57 AM, Jan Beulich wrote:
>>>>> On 17.12.16 at 00:18, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/pmtimer.c
>>> +++ b/xen/arch/x86/hvm/pmtimer.c
>>> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>>>      int rc;
>>>  
>>>      if ( !has_vpm(d) )
>>> +    {
>>> +        if ( !has_acpi_dm_ff(d) )
>>> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>>>          return 0;
>>> +    }
>>>  
>>>      spin_lock(&s->lock);
>>>  
>>> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>>>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>>>  
>>>      if ( !has_vpm(d) )
>>> +    {
>>> +        if ( !has_acpi_dm_ff(d) )
>>> +            return hvm_load_entry(PMTIMER, h, acpi);
>>>          return -ENODEV;
>>> +    }
>>>  
>>>      spin_lock(&s->lock);
>> Seeing this, I first of all wonder: would there be any harm in simply
>> having PVH take (almost) the same route as HVM here? In particular,
>> there's a pmt_update_sci() call, an equivalent of which would seem
>> to be needed for PVH too.
> 
> It probably is harmless, but it isn't very useful either, since we'd be
> saving PMTState, which is not in use by PVH.
> 
> As for pmt_update_sci() --- the only case in which we generate an SCI
> is a CPU map update, and that interrupt is made pending to the guest
> immediately. So if we resend it during restore, won't we be sending it
> twice?

Well, that depends on how far things got prior to the (racing) migration.
I think it's better to send one interrupt too many than to risk the guest
not seeing one at all.
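
To make that concrete, here is a minimal sketch of what the PVH branch of
acpi_load() could look like if it re-evaluated the SCI after restoring the
register block. update_sci() is the hypothetical helper sketched above,
and has_acpi_dm_ff() / d->arch.hvm_domain.acpi follow the naming in the
patch under review; this is illustrative only, not a proposed hunk.

static int acpi_load(struct domain *d, hvm_domain_context_t *h)
{
    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;

    if ( !has_vpm(d) )
    {
        int rc;

        if ( has_acpi_dm_ff(d) )
            return -ENODEV;

        rc = hvm_load_entry(PMTIMER, h, acpi);
        if ( rc == 0 )
            /*
             * Re-assert if an enabled status bit is still set: a possible
             * duplicate SCI is preferable to the guest never seeing it.
             */
            update_sci(d, acpi);

        return rc;
    }

    /* ... existing HVM path, including pmt_update_sci(), unchanged ... */
}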

Jan

