
Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy on multiple device model



On Fri, 2012-08-24 at 15:37 +0100, Julien Grall wrote:
> In the case of Xen, it's hard to maintain compatibility. We can
> still spawn only one QEMU, but ioreq handling will not
> send an I/O request unless a device model has registered for it.
> There is no longer a default QEMU.

This means we've broken existing qemu on a new hypervisor. Now that
we have Xen support in upstream qemu, that is something we need to
think about and decide whether we are happy with.

Perhaps it is sufficient for this to be a compile-time thing, i.e.
detect whether we are building against a disagg-capable hypervisor.

Or maybe it has to be a runtime thing, with Xen only turning off the
default QEMU when the first ioreq region is registered, or something
like that.

> >>> Isn't this baking in some implementation detail from the current qemu
> >>> version? What happens if it changes?
> >>>
> >>>        
> >> I don't have another way for the moment. I would be happy
> >> if someone had a good solution.
> >>      
> > Could we at least make the assignments of the 3 prior BDFs explicit on
> > the command line too?
> >    
> I don't understand your question. These 3 prior BDFs can't
> be modified via the QEMU command line (or I don't know how).

Could qemu be modified to allow this?

> >>>> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >>>>            abort();
> >>>>        }
> >>>>
> >>>> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
> >>>> +    // Allocate ram space of 32Mo per previous device model to store rom
> >>>>          
> >>> What is this about?
> >>>
> >>> (also that Mo looks a bit odd in among all these mb's)
> >>>
> >>>
> >>>        
> >> It's space for ROM allocation, like the vga and rtl8139 ROMs...
> >> Each QEMU can load ROMs into memory, but each memory
> >> allocator considers that it's alone: it starts to allocate
> >> ROM space from the end of RAM.
> >>
> >> It's a solution suggested by Stefano; it avoids modifications
> >> in QEMU. As we don't know the number of ROMs or their
> >> size per QEMU, we chose a space of 32 Mo to be safe, but in
> >> the end most of that memory is never allocated.
> >>      
> > "32Mo per previous device model" is the bit which struck me as odd. That
> > means the first device model uses 32Mo, the second 64Mo, the third 96Mo
> > etc?
> >    
> That means:
>      - the first QEMU can allocate ROMs starting at ram_size + 0
>      - the second at ram_size + 32 Mo
>      - ...
> 
> It's a hack to avoid modifying QEMU's memory allocator
> (find_ram_offset in QEMU's exec.c).

Why don't we enhance the memory allocator instead of adding hacks?
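For concreteness, the layout implied by the quoted scheme amounts to something like the following. This is only an illustrative sketch; `rom_region_base` and the headroom constant are invented names, not actual QEMU or libxl code:

```c
#include <stdint.h>

/* Headroom reserved per device model for its option ROMs (vga,
 * rtl8139, ...).  32MB is a "big enough" guess; most of it is
 * typically never allocated. */
#define DM_ROM_HEADROOM (32ULL << 20)

/* Base at which device model 'dm_index' starts placing ROMs:
 * the first QEMU allocates from ram_size + 0, the second from
 * ram_size + 32MB, and so on. */
static uint64_t rom_region_base(uint64_t ram_size, unsigned dm_index)
{
    return ram_size + (uint64_t)dm_index * DM_ROM_HEADROOM;
}
```

Writing it out makes the objection concrete: the guest-physical layout now depends on how many device models happened to be spawned earlier.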

> > Aren't we already modifying qemu quite substantially to implement this
> > functionality anyway? So why are we trying to avoid it in this one
> > corner? Especially at the cost of doing something which on the face of
> > it looks quite strange!
> >
> >    
> It's not possible to do this in QEMU itself; otherwise the QEMUs
> would need to be spawned one by one, since each QEMU would need to
> know the last 'address' used by the previous one.

Or each one needs to be told explicitly where to put its ROMs. Encoding
a magic 32Mo*N in the interface is just too hacky.
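One hypothetical shape for that alternative is sketched below. The `-xen-rom-base` option and the helper do not exist anywhere; they are purely illustrative of the toolstack computing and passing each window explicitly, instead of QEMU inferring ram_size + N*32Mo:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch: the toolstack owns the ROM layout and tells each device
 * model its window explicitly on the command line, so the allocator
 * never has to infer how many device models were spawned before it. */
static int format_rom_base_arg(char *buf, size_t len,
                               uint64_t ram_size, uint64_t prev_rom_end)
{
    /* Place this device model's window directly after the previous
     * one (or after RAM for the first), packing the windows instead
     * of using a fixed 32Mo stride. */
    uint64_t base = prev_rom_end ? prev_rom_end : ram_size;
    return snprintf(buf, len, "-xen-rom-base 0x%" PRIx64, base);
}
```

With this, device models no longer need to be spawned in order; the toolstack already knows the whole layout up front.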

> I made a modification along those lines, but it was abandoned
> because it required XenStore.
> 
> > Isn't space for the ROMs allocated by SeaBIOS as part of enumerating the
> > PCI bus anyway? Or is this a different per-ROM allocation?
> >    
> It's the ROM allocated via pci_add_option_rom in QEMU.
> QEMU seems to store the ROM in memory, and then SeaBIOS
> copies it into the right place.

So the ROM binary (the content of the ROM_BAR) is stored in "guest"
memory? That seems a bit odd to me; I'd have thought it would be stored
in the host and provided on demand when the ROM BAR was accessed.

Is there any scope for changing this behaviour?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

