
Re: [Xen-devel] Any plan to support disk driver domain?



On 24/07/13 18:17, G.R. wrote:
>>
>>> In your example, the driver domain is set up using a zvol. I wonder if
>>> there are any constraints prohibiting the use of a file-based backend?
>>
>> I have not tried it, but FreeBSD blkback should be able to handle raw
>> files also.
>>
>>> Finally, I saw this limitation in the wiki:
>>>>> It is not possible to use driver domains with pygrub or HVM guests yet, 
>>>>> so it will only work with PV guests that have the kernel in Dom0.
>>> While I can imagine why pygrub does not work, I don't understand why HVM
>>> is affected here. Could you explain a little bit?
>>> And what about an HVM guest with PV drivers (e.g. Windows guests)?
>>
>> If you use HVM, Qemu needs to access the block device/file used as the
>> disk, so if the disk is in another domain Qemu has no way to access it
>> (unless you plug the disk into Dom0 and then pass the created block device
>> /dev/xvd* to Qemu).
>>
> 
> I just upgraded to Xen 4.3, and here is my quick report.
> 
> After some preliminary testing, I can confirm that FreeBSD 8.3 is able to
> serve as a disk backend, exporting a file as a disk.
> I was able to block-attach it and mount it in Dom0.
> However, there are some issues with the xl block-list / block-detach
> commands: the attached disk cannot be listed, and block-detach reports a
> failure when I try to remove it.

This is probably due to the block-list/block-detach commands making
assumptions about the backend domain always being Dom0. I will take a look,
thanks for the report.
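
For reference, with a driver domain the backend nodes live under the backend
domain's xenstore directory instead of Dom0's, which is probably what the
toolstack is missing. You can see this with xenstore-ls (the domids below are
just examples):

    # frontend nodes live under the guest's domid (say 2), as usual
    xenstore-ls /local/domain/2/device/vbd
    # backend nodes live under the driver domain's domid (say 1), not 0
    xenstore-ls /local/domain/1/backend/vbd/2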

> However, it appears to have removed the data from xenstore in spite of the
> error reported by the detach,
> and I was able to re-attach the same file again later.
> 
> There is also an issue with block-attach -- it does not prevent me from
> accidentally attaching the same file twice.

This kind of check should be implemented in FreeBSD itself rather than in the
toolstack. I will look into FreeBSD blkback in order to prevent it from
attaching the same disk twice.
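
For the record, nothing currently rejects something like the following (file
and domain names are made up):

    # attaching the same backing file twice goes through unchallenged
    xl block-attach guest 'format=raw, vdev=xvdb, access=rw, target=/data/disk.img, backend=fbsd'
    xl block-attach guest 'format=raw, vdev=xvdc, access=rw, target=/data/disk.img, backend=fbsd'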

> This appeared to crash the whole system later, when I tried some further
> operations.
> 
> I believe I should be able to serve an HVM domU this way (using Dom0 as a
> proxy), but this appears to introduce some overhead.
> I wonder: would it be hard to provide HVM support for disk driver domains
> directly?

This is not possible, due to the fact that when using Qemu, the Qemu process
in Dom0 needs access to the disk you are attaching to the guest. For PVHVM
guests this is not a big deal, because the emulated device is only used
during boot, and then the OS switches to the PV path, which doesn't use Dom0
as a proxy.

The only improvement here would be to automatically attach the disk to Dom0
in order to launch Qemu, which currently has to be done manually. The same
applies to PV domains that use pygrub.
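
In other words, the manual workaround for an HVM guest today looks roughly
like this (domain names and paths are made up):

    # 1. attach the disk served by the driver domain to Dom0
    xl block-attach 0 'format=raw, vdev=xvdb, access=rw, target=/data/disk.img, backend=fbsd'
    # 2. hand the resulting Dom0 block device to the guest, so Qemu can open it
    xl block-attach hvmguest 'format=raw, vdev=hda, access=rw, target=/dev/xvdb'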

> Will it be able to provide a dual interface like hda/xvda, so the overhead
> is eliminated once the PV drivers are loaded?

Exactly.
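
For context, that's how disks already behave for plain HVM guests: assigning
an IDE vdev gives the guest both the emulated disk and a PV vbd backed by the
same storage, and the PV drivers unplug the emulated one once they come up.
A config sketch (path made up):

    # the same disk is exposed as emulated hda and as a PV vbd;
    # PVHVM (or Windows PV) drivers unplug the emulated path at boot
    disk = [ 'format=raw, vdev=hda, access=rw, target=/dev/xvdb' ]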

> It will be great if you can share your plan / schedule about further
> enhancement of this feature.

The next steps are fixing the bugs you describe and allowing driver domains
to use userspace backends (Qdisk, for instance).

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

