
Re: [Xen-devel] [PATCH] xen: arm: add missing flushing dcache to the copy to/clean guest functions



Hi, Ian.

The issue is reproduced when we try to boot the uncompressed kernel Image instead of the zImage as Dom0.

We've written a patch for the hypervisor. With this patch the hypervisor can boot not only a zImage but also a uImage. A uImage consists of the input image prefixed with a 64-byte header; we use the 'mkimage' utility to generate it. The hypervisor now parses the uImage header, strips it, copies the rest of the image to the destination, and simply jumps to the start of the copied image.
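For reference, the 64-byte header mentioned above is the legacy U-Boot image header produced by 'mkimage'. A sketch of its layout (field names follow U-Boot's struct image_header; all fields are stored big-endian, and the payload starts immediately after the header at offset 64):

```c
#include <stdint.h>

#define UIMAGE_MAGIC 0x27051956u  /* legacy U-Boot image magic number */

/* Legacy 64-byte uImage header, all multi-byte fields big-endian. */
struct uimage_header {
    uint32_t magic;    /* must be UIMAGE_MAGIC */
    uint32_t hcrc;     /* CRC32 of the header (computed with this field zeroed) */
    uint32_t time;     /* image creation timestamp */
    uint32_t size;     /* payload size in bytes */
    uint32_t load;     /* load address */
    uint32_t ep;       /* entry point address */
    uint32_t dcrc;     /* CRC32 of the payload */
    uint8_t  os;       /* operating system, e.g. Linux */
    uint8_t  arch;     /* architecture, e.g. ARM */
    uint8_t  type;     /* image type, e.g. kernel */
    uint8_t  comp;     /* compression type (none, gzip, ...) */
    uint8_t  name[32]; /* image name, NUL-padded */
};
```

The hypervisor only needs to validate the magic, read the payload size and load/entry addresses, and then copy from byte 64 onward.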

Normal case: we used a uImage generated from the kernel zImage. The Dom0 kernel boots normally.
Bad case: the same setup, but with a uImage generated from the uncompressed kernel Image (using the same mkimage parameters as for the zImage). In this case the kernel frequently hangs.

I'll rework the patch and post a new version. It will introduce a new dcache-flushing function based on the copy_to_user code.
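The idea of a flushing variant sharing code with the normal copy path could be sketched as below. This is a host-side illustration only: copy_to_guest_common and the no-op flush stub are hypothetical names, not Xen's actual API, and the real flush_xen_dcache_va_range issues ARM cache-maintenance instructions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stub standing in for Xen's flush_xen_dcache_va_range(); the real
   version cleans the dcache by virtual address range. */
static void flush_dcache_va_range(const void *p, size_t size)
{
    (void)p;
    (void)size; /* no-op in this host-side sketch */
}

/* Common internal helper: one copy path, with flushing made optional
   so the hypercall fast path pays no cost. */
static unsigned long copy_to_guest_common(void *to, const void *from,
                                          size_t len, bool flush)
{
    memcpy(to, from, len);
    if (flush)
        flush_dcache_va_range(to, len);
    return 0; /* 0 = all bytes copied, mirroring copy_to_user semantics */
}

/* Existing behaviour: no flush, used on the hypercall path. */
static unsigned long raw_copy_to_guest(void *to, const void *from,
                                       size_t len)
{
    return copy_to_guest_common(to, from, len, false);
}

/* New variant: flushes the dcache, for use when building a domain
   whose caches may not yet be enabled. */
static unsigned long raw_copy_to_guest_flush_dcache(void *to,
                                                    const void *from,
                                                    size_t len)
{
    return copy_to_guest_common(to, from, len, true);
}
```

The real implementation would additionally walk the guest physical address range page by page with map_domain_page/unmap_domain_page, as the existing raw_copy_to_guest does.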

Oleksandr Dmytryshyn | Product Engineering and Development
GlobalLogic
P x3657  M +38.067.382.2525
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt


On Mon, Nov 25, 2013 at 6:33 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Mon, 2013-11-25 at 18:21 +0200, Oleksandr Dmytryshyn wrote:
>
> Thanks for the patch.
>
>> Without flushing the dcache, the hypervisor cannot copy the device
>> tree correctly when booting the dom0 kernel Image (the memory holding
>> the device tree gets corrupted). As a result, when we try to load the
>> dom0 kernel Image, dom0 frequently hangs. The issue is not reproduced
>> with the dom0 kernel zImage because the zImage decompressor code
>> flushes the whole dcache before starting the decompressed kernel
>> Image. When the hypervisor loads the kernel uImage or initrd, that
>> memory region isn't corrupted because the hypervisor code flushes the
>> dcache.
>
> So if not then when/how is this reproduced?
>
> In general I would like to try and keep flushes out of this code path
> because it is used on the hypercall path, and we have decreed that
> guests must have caching enabled to make hypercalls (at least those
> which take in-memory arguments).
>
> I think the right fix is to do the flushes in domain_build.c, similar
> to how kernel_zimage_load does it. This might need an open-coded
> version of copy_to_user. Or better, introduce a flushing variant which
> shares most code with the normal one via a common internal function.
>
> Or perhaps we should flush all of the new guest's RAM after building. I
> think Julien was looking at doing something along those lines for the
> domU building case.
>
> Ian.
>
>>
>> Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@xxxxxxxxxxxxxxx>
>> ---
>>  xen/arch/arm/guestcopy.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>> index d146cd6..28d3151 100644
>> --- a/xen/arch/arm/guestcopy.c
>> +++ b/xen/arch/arm/guestcopy.c
>> @@ -24,6 +24,7 @@ unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>>          p = map_domain_page(g>>PAGE_SHIFT);
>>          p += offset;
>>          memcpy(p, from, size);
>> +        flush_xen_dcache_va_range(p, size);
>>
>>          unmap_domain_page(p - offset);
>>          len -= size;
>> @@ -54,6 +55,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>>          p = map_domain_page(g>>PAGE_SHIFT);
>>          p += offset;
>>          memset(p, 0x00, size);
>> +        flush_xen_dcache_va_range(p, size);
>>
>>          unmap_domain_page(p - offset);
>>          len -= size;
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
