Re: [Xen-devel] Fatal crash on xen4.2 HVM + qemu-xen dm + NFS



On Mon, 2013-01-21 at 15:35 +0000, Alex Bligh wrote:
> Ian,
> 
> --On 21 January 2013 15:23:10 +0000 Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
> wrote:
> 
> > On Mon, 2013-01-21 at 15:15 +0000, Alex Bligh wrote:
> >> Surely before Xen removes the grant on the page, unmapping it from dom0's
> >> memory, it should check to see if there are any existing references
> >> to the page and if there are, give the kernel its own COW copy, rather
> >> than unmapping it totally, which is going to lead to problems.
> >
> > Unfortunately each page only has one reference count, so you cannot
> > distinguish references taken for this particular NFS write from other
> > references (other writes, the ref held by the process itself, etc).
> >
> > My old series added a reference count to the SKB itself exactly so that
> > it would be possible to know when the network stack was truly finished
> > with the page in the context of a specific operation.
> >
> > Unfortunately due to lack of time I've not been able to finish those
> > off.
> 
> Does that apply even when O_DIRECT is not being used (which I don't
> think it is by default for upstream qemu & xen, as it's
> cache=writeback, and cache=none produces a different failure)?
> 
> If so, I think it's the case that *ALL* NFS dom0 access by Xen domU
> VMs is unsafe in the event of a TCP retransmit (both in the sense that
> the grant can be freed up, causing a crash, and in the sense that the
> domU's data can be rewritten post-write, causing corruption).

Yes. Prior to your report this (assuming it is the same issue) had been
a very difficult issue to trigger -- I was only able to do so with
userspace firewall rules which deliberately delayed TCP ACKs.

The fact that you can reproduce so easily makes me wonder if this is
really the same issue. To trigger the issue you need this sequence of
events:
      * Send an RPC
      * RPC is encapsulated into a TCP/IP frame (or several) and sent.
      * Wait for an ACK response to the TCP/IP frame
      * Timeout.
      * Queue a retransmit of the TCP/IP frame(s)
      * Receive the ACK to the original.
      * Receive the reply to the RPC as well
      * Report success up the stack
      * Userspace gets success and unmaps the page
      * Retransmit hits the front of the queue
      * BOOM

To hit this you need to be pretty unlucky or to be retransmitting a lot
(which would usually imply something is up with either the network or the
filer).
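
To make the idea from the quoted text above concrete, here is a minimal
sketch of it in plain C -- the same idea as the per-SKB count mentioned
earlier, shown here as a per-operation count. This is not the actual patch
series and nothing below is a real kernel API; all names are illustrative.
The point is that the grant is only torn down once the network stack has
truly finished with the page, i.e. after any queued retransmit has gone
out, not merely when the RPC reply arrives:

#include <stdatomic.h>
#include <stdio.h>

struct page_op {
    atomic_int refs;                   /* one ref per in-flight user of the page */
    void (*release)(struct page_op *); /* unmaps the grant when refs hits zero */
};

static void op_get(struct page_op *op)   /* page queued again (e.g. retransmit) */
{
    atomic_fetch_add(&op->refs, 1);
}

static void op_put(struct page_op *op)   /* one user is finished with the page */
{
    if (atomic_fetch_sub(&op->refs, 1) == 1)
        op->release(op);                 /* last user gone: safe to unmap */
}

static void unmap_grant(struct page_op *op)
{
    (void)op;
    printf("grant unmapped\n");
}

int main(void)
{
    struct page_op op = { .release = unmap_grant };
    atomic_init(&op.refs, 1);  /* ref held for the original transmit */

    op_get(&op);               /* retransmit queued before the reply arrived */
    op_put(&op);               /* RPC reply received: not enough to unmap... */
    op_put(&op);               /* ...retransmit has actually gone: now unmap */
    return 0;
}

With only the single per-page count that exists today there is no way to
tell those two puts apart from all the other references to the page, which
is exactly the problem described above.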

BTW, there is also a similar situation with RPC-level retransmits, which
I think might be where the NFSv3 vs v4 difference comes from (i.e. only v3
is susceptible to that specific case). This one is very hard to reproduce
as well (although slightly easier than the TCP retransmit one, IIRC).

>  I think that would also
> apply to iSCSI over TCP, which would presumably suffer similarly.

Correct, iSCSI over TCP can also have this issue.

> Is that analysis correct?

The important thing is zero copy vs. non-zero copy. IOW it is only a
problem if the actual userspace page, which is a mapped domU page, is what
gets queued up. Whether zero copy is done depends on things like O_DIRECT,
write(2) vs. sendpage(2), and what the underlying fs implements. I thought
NFS only did it for O_DIRECT, but I may be mistaken. aio is probably a
factor too.
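
For illustration, here is a hedged sketch of that distinction (the mount
path and the 4096-byte alignment are assumptions, and exactly when NFS
goes zero copy is version/implementation dependent). With a plain buffered
write(2) the kernel copies the user buffer into the page cache before
returning, so the caller's page can safely go away afterwards; with
O_DIRECT the caller's page itself can be handed down the storage/network
path, which is where a still-pending TCP retransmit can end up referencing
an already unmapped grant:

#define _GNU_SOURCE          /* for O_DIRECT on Linux/glibc */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, 4096))   /* O_DIRECT wants aligned I/O */
        return 1;
    memset(buf, 'x', 4096);

    /* Hypothetical NFS-mounted path. With O_DIRECT the write may reference
     * 'buf' itself rather than a page-cache copy, so the page can still be
     * of interest to the stack (e.g. a retransmit) after pwrite() returns. */
    int fd = open("/mnt/nfs/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        free(buf);
        return 1;
    }

    ssize_t n = pwrite(fd, buf, 4096, 0);

    close(fd);
    free(buf);
    return n == 4096 ? 0 : 1;
}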

FWIW blktap2 always copies for pretty much this reason; I seem to recall
the maintainer saying the perf hit wasn't noticeable.
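
A rough sketch of that always-copy approach (the names below are made up,
this is not blktap2's real API): copy the granted page into a dom0-owned
bounce buffer before the request is queued, so the grant can be released
immediately and nothing the stack does later can touch guest memory:

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in for whatever actually queues the I/O in dom0. */
static int submit_io(void *buf, size_t len)
{
    (void)buf;
    (void)len;
    return 0;
}

/* Copy out of the mapped grant before queuing, so the grant can be
 * unmapped as soon as this returns, regardless of later retransmits. */
static int submit_copied(const void *granted_page)
{
    void *bounce = malloc(PAGE_SIZE);
    if (!bounce)
        return -1;
    memcpy(bounce, granted_page, PAGE_SIZE);

    int rc = submit_io(bounce, PAGE_SIZE);
    free(bounce);   /* in real code this would be freed on I/O completion */
    return rc;
}

int main(void)
{
    char page[PAGE_SIZE] = { 0 };   /* stands in for a mapped granted page */
    return submit_copied(page);
}

The memcpy costs a copy per page, but it decouples the lifetime of the
grant from the lifetime of anything queued below it, which is the
trade-off being described.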

Ian.

