
Re: [Xen-devel] [PATCH] xen/hvm: fix hypervisor crash with hvm_save_one()



On 02/05/17 15:13, Razvan Cojocaru wrote:
> On 05/02/17 17:09, Jan Beulich wrote:
>>>>> On 02.05.17 at 15:54, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 05/02/17 16:48, Jan Beulich wrote:
>>>>>>> On 02.05.17 at 15:25, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>> --- a/xen/common/hvm/save.c
>>>>> +++ b/xen/common/hvm/save.c
>>>>> @@ -113,7 +113,7 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
>>>>>          const struct hvm_save_descriptor *desc;
>>>>>  
>>>>>          rv = -ENOENT;
>>>>> -        for ( off = 0; off < (ctxt.cur - sizeof(*desc)); off += desc->length )
>>>>> +        for ( off = 0; (off + sizeof(*desc)) < ctxt.cur; off += desc->length )
>>>>>          {
>>>>>              desc = (void *)(ctxt.data + off);
>>>>>              /* Move past header */
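
For illustration, a minimal standalone sketch (not Xen code; struct fake_desc
merely stands in for hvm_save_descriptor, and cur for the unsigned ctxt.cur
field) of why the old loop bound misbehaves when the handler produces less
than one descriptor's worth of data:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for struct hvm_save_descriptor (illustrative layout only). */
    struct fake_desc { uint16_t typecode; uint16_t instance; uint32_t length; };

    int main(void)
    {
        uint32_t cur = 0;   /* handler wrote no data at all */
        size_t off = 0;

        /* Old bound: the subtraction is done in size_t, so a cur smaller
         * than sizeof(*desc) wraps to a huge value and the condition stays
         * true far past the end of the (empty) buffer. */
        printf("cur - sizeof(desc) = %zu\n",
               (size_t)cur - sizeof(struct fake_desc));

        /* Patched bound: no subtraction, so the loop is simply not entered
         * when there is not even one whole descriptor in the buffer. */
        printf("enter loop? %s\n",
               (off + sizeof(struct fake_desc)) < cur ? "yes" : "no");

        return 0;
    }
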
>>>> I don't think this is an appropriate fix. Instead, I think the function
>>>> should check whether it got back any data at all prior to entering
>>>> the loop. Furthermore, it might be worth considering (also) refusing
>>>> to do anything here if the domain's is_dying marker has already been
>>>> set.
>>> hvm_save_one() already checks is_dying:
>>>
>>> /* Extract a single instance of a save record, by marshalling all
>>>  * records of that type and copying out the one we need. */
>>> int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
>>>                  XEN_GUEST_HANDLE_64(uint8) handle)
>>> {
>>>     int rv = 0;
>>>     size_t sz = 0;
>>>     struct vcpu *v;
>>>     hvm_domain_context_t ctxt = { 0, };
>>>
>>>     if ( d->is_dying
>>>          || typecode > HVM_SAVE_CODE_MAX
>>>          || hvm_sr_handlers[typecode].size < sizeof(struct hvm_save_descriptor)
>>>          || hvm_sr_handlers[typecode].save == NULL )
>>>         return -EINVAL;
>> Hmm, interesting. The timing window to see is_dying clear here,
>> but no vCPU-s left there, should be pretty small, so I wonder how
>> you've managed to hit it. But anyway ...
>>
>>> As for checking whether the handler wrote any data, I believe that
>>> Andrew has checked, and none of the handlers report anything when no
>>> data is being passed on.
>> ... that's not what I've read out of his replies. I don't think the
>> handlers need to report anything special. It is the caller which
>> should check whether, despite having got back "success", there's
>> no data in the buffer.
> So you would prefer something like this?
>
> diff --git a/xen/common/hvm/save.c b/xen/common/hvm/save.c
> index 78706f5..d4c8d84 100644
> --- a/xen/common/hvm/save.c
> +++ b/xen/common/hvm/save.c
> @@ -113,6 +113,10 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
>          const struct hvm_save_descriptor *desc;
>
>          rv = -ENOENT;
> +
> +        if ( !ctxt.cur )
> +            goto out;
> +
>          for ( off = 0; off < (ctxt.cur - sizeof(*desc)); off += desc->length )
>          {
>              desc = (void *)(ctxt.data + off);
> @@ -132,6 +136,7 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
>          }
>      }
>
> +out:
>      xfree(ctxt.data);
>      return rv;
> }

For the record, I am -1 for this, because it does not fix the problem
when ctxt.cur has a value between 1 and sizeof(*desc).

~Andrew
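
To make that failure mode concrete, here is a hedged standalone sketch (not
the eventual Xen fix; find_record, fake_desc and the exact walk are
illustrative stand-ins) of a record walk whose up-front check also rejects a
buffer holding between 1 and sizeof(*desc) - 1 bytes, the case a bare
!ctxt.cur test misses:

    #include <stdint.h>
    #include <stddef.h>

    /* Stand-in for struct hvm_save_descriptor (illustrative layout only). */
    struct fake_desc { uint16_t typecode; uint16_t instance; uint32_t length; };

    /* Return the offset of the record with the wanted instance, or -1.
     * The guard up front covers both cur == 0 and 0 < cur < sizeof(*desc),
     * so the loop bound can never underflow.  (Payload bounds checks are
     * elided for brevity.) */
    static long find_record(const uint8_t *data, uint32_t cur, uint16_t instance)
    {
        const struct fake_desc *desc;
        size_t off;

        if ( cur < sizeof(*desc) )
            return -1;                   /* not even one whole descriptor */

        for ( off = 0; off + sizeof(*desc) <= cur;
              off += sizeof(*desc) + desc->length )
        {
            desc = (const void *)(data + off);
            if ( desc->instance == instance )
                return (long)off;        /* header of the matching record */
        }

        return -1;
    }

Any variant works, as long as the loop bound is never evaluated for a buffer
too small to hold one whole descriptor.
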

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

