
Re: [Xen-devel] [PATCH v6] Sanity check xsave area when migrating or restoring from older Xen versions



>>> On 23.10.14 at 16:20, <dkoch@xxxxxxxxxxx> wrote:
> On Thu, 23 Oct 2014 08:38:12 +0100
> Jan Beulich <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 22.10.14 at 16:53, <dkoch@xxxxxxxxxxx> wrote:
>> > @@ -2011,15 +2012,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
>> >                          save_area) + XSTATE_AREA_MIN_SIZE);
>> >          return -EINVAL;
>> >      }
>> > -    size = HVM_CPU_XSAVE_SIZE(xfeature_mask);
>> > -    if ( desc->length > size )
>> > -    {
>> > -        printk(XENLOG_G_WARNING
>> > -               "HVM%d.%d restore mismatch: xsave length %u > %u\n",
>> > -               d->domain_id, vcpuid, desc->length, size);
>> > -        return -EOPNOTSUPP;
>> > -    }
>> >      h->cur += sizeof (*desc);
>> > +    overflow_start = h->cur;
>> 
>> This variable is badly named: what it points to is the payload, not the
>> excess data.
> 
> Wasn't fond of the name, anyway. (I'm horrid at picking variable names.)
> Propose changing it to desc_start.

Or data_start or payload_start, or just start.

>> > @@ -2038,10 +2032,23 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
>> >      size = HVM_CPU_XSAVE_SIZE(ctxt->xcr0_accum);
>> >      if ( desc->length > size )
>> >      {
>> > +        /*
>> > +         * Xen-4.3 and older used to send longer-than-needed xsave regions.
>> 
>> 4.3.0 please (also in the patch description), since from 4.3.1
>> onwards this isn't the case anymore.
> 
> OK. I was unaware this had been ported to 4.3.1. Will change.
> 
> Are there any versions of 4.2.x that said patch has been backported to?

I think so; just go check.

Jan
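
For context, here is a minimal stand-alone sketch of the tolerance being
discussed above. It is not the actual Xen code: the names
(xsave_excess_is_zero, blob, blob_len, expected_size) are illustrative, and
the assumption that an oversized record is acceptable only when every byte
beyond the expected size is zero reflects how the quoted comment about
Xen 4.3.0 reads, not the patch text itself.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Return 1 if a saved xsave blob may be accepted even though its recorded
 * length exceeds the size we expect (e.g. computed from the guest's
 * xcr0_accum), i.e. every byte past expected_size is zero padding of the
 * kind Xen 4.3.0 and older could produce. */
static int xsave_excess_is_zero(const uint8_t *blob, size_t blob_len,
                                size_t expected_size)
{
    size_t i;

    if (blob_len <= expected_size)
        return 1;                      /* nothing beyond the expected area */

    for (i = expected_size; i < blob_len; i++)
        if (blob[i] != 0)
            return 0;                  /* real data in the excess: reject */

    return 1;                          /* zero padding only: tolerate it */
}

int main(void)
{
    uint8_t blob[16] = { 0x01, 0x02 }; /* remaining bytes stay zero-initialized */

    /* Pretend the sender wrote 16 bytes but we only expected 8. */
    printf("excess is zero padding? %d\n",
           xsave_excess_is_zero(blob, sizeof(blob), 8));
    return 0;
}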


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

