
Re: [Xen-devel] [PATCH 16/27] tools/libxl: Infrastructure for reading a libxl migration v2 stream



On 16/06/15 16:35, Ian Campbell wrote:
>
>>>> +    if (dc->writefd == -1) {
>>>> +        ret = ERROR_FAIL;
>>>> +        LOGE(ERROR, "Unable to open '%s'", path);
>>>> +        goto err;
>>>> +    }
>>>> +    dc->maxsz = dc->bytes_to_read = rec_hdr->length - sizeof(*emu_hdr);
>>>> +    stream->expected_len = dc->used = 0;
>>> Expecting 0? This differs from the pattern used everywhere else, and
>>> I'm not sure why.
>> The datacopier has been overloaded so many times that it is messy to
>> use.
>>
>> In this case, we are splicing from a read fd to a write fd, rather
>> than into a local buffer.
>>
>> Therefore, when the IO is complete, we expect 0 bytes in the local
>> buffer, as all of the data should have ended up in the write fd.
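
To illustrate, the splice configuration boils down to something like
this (a rough sketch, not the literal patch; emu_fd stands in for the
file descriptor opened above, and stream->fd for the stream's read fd):

    /* Splice rec_hdr->length - sizeof(*emu_hdr) bytes from the
     * migration stream straight into the file, bypassing the dc's
     * local buffer. */
    dc->readfd  = stream->fd;   /* the single fd the stream reads from */
    dc->writefd = emu_fd;       /* destination file for qemu's record  */
    dc->maxsz   = dc->bytes_to_read = rec_hdr->length - sizeof(*emu_hdr);
    dc->used    = 0;            /* stays 0: no local buffering         */

    rc = libxl__datacopier_start(dc);
    /* On completion, dc->used == 0 confirms all bytes went to writefd. */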
> I think using 2 or more datacopiers to cover the different
> configurations might help?  You can still reuse one for the normal
> record processing, but a separate, dedicated one for writing the emu
> blob to a file might iron out a wrinkle.

I specifically do not want to risk having two dcs running at the same
time on the same readfd.

As all of this code is reading from a single readfd, I have specifically
avoided having multiple dc structures lying around.

>
>>> And given that, why not handle this in some central place rather
>>> than only in the emulator path?
>> Experimentally, some versions of qemu barf if there are trailing
>> zeros in the save file.  I think they expect to find EOF on a qemu
>> record boundary.
> What I was suggesting was to do the padding in the core, where it
> would often be a zero-length no-op, but it would save mistakes (or
> duplication) if some other record also needed such handling in the
> future.

I don't see an easy way of doing that, given how much the dc setup
already diverges between the different paths.
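
For what it's worth, the padding arithmetic itself would be trivial in
a central helper; the problem is wiring it through the divergent dc
setups.  Assuming the ROUNDUP macro and REC_ALIGN_ORDER (v2 records are
aligned to 8-byte boundaries) from the libxl stream headers, it would
amount to something like:

    /* Hypothetical central helper, not part of this series: the number
     * of zero bytes padding a record body out to the next 8-byte
     * boundary. */
    static uint64_t record_pad(const libxl__sr_rec_hdr *rec_hdr)
    {
        return ROUNDUP(rec_hdr->length, REC_ALIGN_ORDER) - rec_hdr->length;
    }

    /* The emulator path would splice rec_hdr->length bytes to the file
     * and separately read-and-discard record_pad(rec_hdr) bytes, so
     * qemu never sees the trailing zeros. */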

~Andrew
