
Re: [Xen-devel] [Qemu-devel] [PATCH V2 5/5] vga-cirrus: Workaround during restore when using Xen.



On Thu, 5 Jan 2012, Avi Kivity wrote:
> On 01/05/2012 02:30 PM, Stefano Stabellini wrote:
> > > >
> > > > I cannot see how this is going to fix the save/restore issue we are
> > > > trying to solve.
> > > > The problem, unfortunately very complex, is that at restore time the
> > > > videoram is already allocated at the physical address it was mapped
> > > > before the save operation. If it was not mapped, it is at the end of the
> > > > physical memory of the guest (where qemu_ram_alloc_from_ptr decides to
> > > > allocate it).
> > > 
> > > Sorry, I don't follow, please be specific as to which type of address
> > > you're referring to:
> > > 
> > > ram_addr?
> > > physical address (as seen by guest - but if it is not mapped, what does
> > > your last sentence mean?)
> > > something else?
> >
> > ram_addr_t as returned by qemu_ram_alloc_from_ptr.
> >
> > In fact on xen qemu_ram_alloc_from_ptr asks the hypervisor to add
> > the specified amount of memory to the guest physmap at
> > new_block->offset. So in a way the videoram is always visible to the
> > guest, initially at new_block->offset, chosen by find_ram_offset, then
> > at cirrus_bank_base, when map_linear_vram_bank is called.
> 
> Okay.  So we will need to hook this differently from the memory API.
> 
> There are two places we can hook:
> - memory_region_init_ram() - similar to qemu_ram_alloc() - at region
> construction time
> - MemoryListener::region_add() - called the first time the region is
> made visible, probably not what we want

memory_region_init_ram seems to be the right place to me.
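
For context, this is roughly what the Xen allocation hook looks like
today when called from qemu_ram_alloc_from_ptr (a from-memory sketch of
xen-all.c's xen_ram_alloc, minus tracing and cleanup details); hooking
memory_region_init_ram would essentially mean invoking it at region
construction time instead:

void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size)
{
    unsigned long nr_pfn = size >> TARGET_PAGE_BITS;
    xen_pfn_t *pfn_list = g_malloc(sizeof(*pfn_list) * nr_pfn);
    unsigned long i;

    /* Back the new RAM block with real memory in the guest physmap,
     * starting at the ram_addr_t chosen by find_ram_offset(). */
    for (i = 0; i < nr_pfn; i++) {
        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
    }

    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn,
                                         0, 0, pfn_list)) {
        hw_error("xen: failed to populate ram at " RAM_ADDR_FMT, ram_addr);
    }

    g_free(pfn_list);
}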


> > > > So the issue is that the videoram appears to qemu as part of the
> > > > physical memory of the guest at an unknown address.
> > > >
> > > > The proposal of introducing early_savevm would easily solve this last
> > > > problem: letting us know where the videoram is. The other problem, the
> > > > fact that under Xen the videoram would be already allocated while under
> > > > native it would not, remains unsolved. 
> > > > We cannot simply allocate the videoram twice because the operation
> > > > would fail (Xen would realize that we are trying to allocate more memory
> > > > than we are supposed to, returning an error).
> > > > However, once we know where the videoram is, we could probably figure out
> > > > a smart (read: hacky) way to avoid allocating it twice without changes to
> > > > the cirrus code.
> > > 
> > > I'm missing some context.  Can you please explain in more detail?
> > > Note that with the memory API changes, ram addresses are going away. 
> > > There will not be a linear space for guest RAM.  We'll have
> > > (MemoryRegion *, offset) pairs that will be mapped into discontiguous
> > > guest physical address ranges (perhaps with overlaps).
> >
> >
> > This is how memory is currently allocated and mapped in qemu on xen:
> >
> > - qemu_ram_alloc_from_ptr asks the hypervisor to allocate memory for
> > the guest, the memory is added to the guest p2m (physical to machine
> > translation table) at the given guest physical address
> > (new_block->offset, as chosen by find_ram_offset);
> >
> > - qemu_get_ram_ptr uses the xen mapcache to map guest physical address
> > ranges into qemu's address space, so that qemu can read/write guest
> > memory;
> >
> > - xen_set_memory, called by the memory_listener interface, effectively
> > moves a guest physical memory address range from one address to another.
> > So the memory that was initially allocated at new_block->offset, as
> > chosen by find_ram_offset, is going to be moved to a new destination,
> > section->offset_within_address_space.
> 
> So, where qemu has two different address spaces (ram_addr_t and guest
> physical addresses), Xen has just one, and any time the translation
> between the two changes, you have to move memory around.

Yes
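
To make the "move" step concrete: the remapping done from
xen_set_memory boils down to a per-frame XENMEM_add_to_physmap with
XENMAPSPACE_gmfn, roughly like the sketch below (xen_remap_range is a
made-up name; the real code also has to invalidate the mapcache and
record the new location):

/* Sketch: relocate nr_pages guest frames from old_gpfn to new_gpfn
 * by rewriting the guest's p2m, one frame at a time. */
static void xen_remap_range(xen_pfn_t old_gpfn, xen_pfn_t new_gpfn,
                            unsigned long nr_pages)
{
    unsigned long i;

    for (i = 0; i < nr_pages; i++) {
        /* Move the machine frame currently at old_gpfn + i so that
         * the guest sees it at new_gpfn + i from now on. */
        if (xc_domain_add_to_physmap(xen_xc, xen_domid, XENMAPSPACE_gmfn,
                                     old_gpfn + i, new_gpfn + i)) {
            hw_error("xen: failed to remap frame %lx -> %lx",
                     (unsigned long)(old_gpfn + i),
                     (unsigned long)(new_gpfn + i));
        }
    }
}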


> > So the videoram lifecycle is the following:
> >
> > - qemu_ram_alloc_from_ptr allocates the videoram and adds it to the end
> >   of the physmap;
> >
> > - qemu_get_ram_ptr maps the videoram into qemu's address space;
> >
> > - xen_set_memory moves the videoram to cirrus_bank_base;
> >
> >
> >
> > Now let's introduce save/restore into the picture: the videoram is part
> > of the guest's memory from the hypervisor POV, so xen will take care of
> > saving and restoring it as part of the normal guest memory, out of
> > qemu's control.
> > At restore time, we know that the videoram is already allocated and
> > mapped somewhere in the guest physical address space: it could be
> > cirrus_bank_base, which we don't know yet, or the initial
> > new_block->offset.
> > A second videoram allocation by qemu_ram_alloc_from_ptr will fail
> > because of memory constraints enforced by the hypervisor. Trying to map
> > the already allocated videoram into qemu's address space is not easy
> > because we don't know where it is yet (keep in mind that machine->init
> > is called before the machine restore functions).
> >
> > The "solution" I am proposing is introducing an early_savevm set of
> > save/restore functions so that at restore time we can find out at
> > what address the videoram is mapped into the guest address space. Once we
> > know the address we can remap it into qemu's address space and/or move it
> > to another guest physical address.
> 
> Why can we not simply track it?  For every MemoryRegion, have a field
> called xen_address which tracks its location in the Xen address space
> (as determined by the last call to xen_set_memory or qemu_ram_alloc). 
> xen_address would be maintained by callbacks called from the memory API
> into xen-all.c.

Nice and simple, I like it.
However, we would still need an early_savevm mechanism to save and restore the
MemoryRegions, unless they already get saved and restored somehow?
Maybe saving and restoring the list of MemoryRegions could be useful for
the generic case too?
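
Something like a small list in xen-all.c, updated from the same
callbacks, could work; a sketch of what I have in mind (all the names
below are hypothetical):

/* Hypothetical tracking structure: where each RAM MemoryRegion
 * currently sits in the guest physical address space. */
typedef struct XenPhysmapEntry {
    MemoryRegion *mr;
    target_phys_addr_t xen_address;   /* current guest-physical base */
    QLIST_ENTRY(XenPhysmapEntry) list;
} XenPhysmapEntry;

static QLIST_HEAD(, XenPhysmapEntry) xen_physmap =
    QLIST_HEAD_INITIALIZER(xen_physmap);

/* Called from xen_ram_alloc and xen_set_memory whenever the
 * ram_addr_t <-> guest-physical translation changes. */
static void xen_track_region(MemoryRegion *mr, target_phys_addr_t addr)
{
    XenPhysmapEntry *e;

    QLIST_FOREACH(e, &xen_physmap, list) {
        if (e->mr == mr) {
            e->xen_address = addr;
            return;
        }
    }
    e = g_malloc0(sizeof(*e));
    e->mr = mr;
    e->xen_address = addr;
    QLIST_INSERT_HEAD(&xen_physmap, e, list);
}

The save/restore side could then just be a register_savevm() of that
list, provided it runs before the rest of the machine state, which is
more or less the early_savevm idea above.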


> > The problem of avoiding a second allocation remains, but could be
> > solved by passing the "name" parameter from qemu_ram_alloc_from_ptr to
> > xen_ram_alloc: xen_ram_alloc could avoid doing any work for anything
> > called "vga.vram" at restore time, and use the reference to the already
> > allocated videoram instead.
> 
> Hacky

Yes :/
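
To spell the hack out, it would be roughly the following (the extra
"name" parameter and the xen_restore flag are the invented parts):

/* Sketch of the workaround: at restore time the videoram frames have
 * already been restored by Xen as ordinary guest memory, so skip the
 * second population and reuse them. */
void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, const char *name)
{
    if (xen_restore && strcmp(name, "vga.vram") == 0) {
        /* Remember ram_addr so the block can be remapped once we
         * learn where the videoram actually sits in the physmap. */
        return;
    }

    /* ... normal path: xc_domain_populate_physmap_exact() ... */
}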


> The allocation is not driven by qemu then?

At restore time, it is not.


> For the long term I suggest making qemu control the allocations (perhaps
> by rpcing dom0); otherwise how can you do memory hotplug or PCI cards
> with RAM (like ivshmem)?

It is only the videoram (well, everything allocated with
qemu_ram_alloc_from_ptr, actually), and only at restore time, because
the memory in question is considered normal guest memory and is
therefore saved and restored by the hypervisor.
Otherwise QEMU is the one that triggers these allocations, so there are
no issues with memory hotplug or PCI passthrough.
