
Re: [Xen-devel] disable qemu PCI devices in HVM domains



> > I can't see any reason why the approach we take in our closed-source
> > drivers wouldn't work here as well.  I've attached the appropriate
> > patches from our product qemu patchqueue, tidied up and stripped of
> > the most obviously XenServer-specific bits, and made to apply to
> > current ioemu-remote.
> 
> I'm just in the process of applying this and I came across this:
> 
> @@ -792,6 +793,10 @@ static void raw_close(BlockDriverState *bs)
> ...
> +#ifndef CONFIG_STUBDOM
> +        /* Invalidate buffer cache for this device. */
> +        ioctl(s->fd, BLKFLSBUF, 0);
> +#endif
> 
> Does this mean that there is currently, in the Open Source qemu-dm
> tree, a cache coherency problem between emulated and PV disk paths ?

I think for correctness it's probably sufficient to issue a flush
whenever we switch between emulated and PV mode, provided the previous
mode had issued some writes.  As far as I'm aware, all of the existing
Windows drivers boot off the emulated path and then switch to PV mode
before any writes are issued, so we should be okay.  The switch from
PV back to emulated, which happens when you reboot a guest, should be
covered by the BLKFLSBUF at the end of raw_open(), so I think we're
okay there as well.

So this hunk is probably, strictly speaking, redundant for all current
driver implementations.

Having said that, it's clearly more robust to not rely on the various
drivers being able to get in before any writes are issued, so it's
probably a good thing to have anyway.

> What about Linux platforms with existing PV drivers which do not
> engage in the blacklisting/disabling protocol ?

Yeah, things might go a bit funny if you write using the emulated
drivers and then switch to the PV ones without rebooting in between.
I think that's a fairly unusual thing to do, but it's not really
invalid.

I'm not sure what the best way of fixing this would be.  You could
conceivably have blkback tell qemu to do a flush when the frontend
connects and before blkback starts doing IO, but that's kind of ugly.
Alternatively, we could modify blkfront so that it tells qemu to flush
devices when appropriate, but that won't help existing drivers.

Steven.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

