
Re: Possible bug? DOM-U network stopped working after fatal error reported in DOM0


  • To: G.R. <firemeteor@xxxxxxxxxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 24 Dec 2021 12:24:58 +0100
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Fri, 24 Dec 2021 11:25:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Dec 23, 2021 at 11:49:08PM +0800, G.R. wrote:
> On Wed, Dec 22, 2021 at 3:13 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> 
> > Could you build a debug kernel with the following patch applied and
> > give me the trace when it explodes?
> 
> Please find the trace and the kernel CL below.
> Note: the domU gets stuck in a boot loop with this assertion, since the
> situation comes back after a domU restart and only a dom0 reboot
> restores normal operation.
> The trace I captured below is from within the boot loop. I suspect the
> initial trigger may look different; I will give it another try soon.
> 
> FreeBSD 12.2-RELEASE-p11 #0 c8625d629c3(truenas/12.0-stable)-dirty:
> Wed Dec 22 20:26:46 UTC 2021
> The repo is here: https://github.com/freenas/os.git
> 
> db:0:kdb.enter.default>  bt
> Tracing pid 0 tid 101637 td 0xfffff80069cc4000
> kdb_enter() at kdb_enter+0x37/frame 0xfffffe009f121460
> vpanic() at vpanic+0x197/frame 0xfffffe009f1214b0
> panic() at panic+0x43/frame 0xfffffe009f121510
> xn_txq_mq_start_locked() at xn_txq_mq_start_locked+0x4c6/frame 0xfffffe009f121580
> xn_txq_mq_start() at xn_txq_mq_start+0x84/frame 0xfffffe009f1215b0
> ether_output_frame() at ether_output_frame+0xb4/frame 0xfffffe009f1215e0
> ether_output() at ether_output+0x6a5/frame 0xfffffe009f121680
> ip_output() at ip_output+0x1319/frame 0xfffffe009f1217e0
> tcp_output() at tcp_output+0x1dbf/frame 0xfffffe009f121980
> tcp_usr_send() at tcp_usr_send+0x3c9/frame 0xfffffe009f121a40
> sosend_generic() at sosend_generic+0x440/frame 0xfffffe009f121af0
> sosend() at sosend+0x66/frame 0xfffffe009f121b20
> icl_send_thread() at icl_send_thread+0x44e/frame 0xfffffe009f121bb0
> fork_exit() at fork_exit+0x80/frame 0xfffffe009f121bf0
> fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe009f121bf0

Thanks. I've raised this on freebsd-net for advice [0]. IMO netfront
shouldn't receive an mbuf whose data crosses a page boundary, but if
that's indeed a legitimate mbuf I will figure out the best way to
handle it.

I have a clumsy patch (below) that might solve this, if you want to
give it a try.

Regards, Roger.

[0] https://lists.freebsd.org/archives/freebsd-net/2021-December/001179.html
---
diff --git a/sys/dev/xen/netfront/netfront.c b/sys/dev/xen/netfront/netfront.c
index 87bc3ecfc4dd..c8f807778b75 100644
--- a/sys/dev/xen/netfront/netfront.c
+++ b/sys/dev/xen/netfront/netfront.c
@@ -1529,6 +1529,35 @@ xn_count_frags(struct mbuf *m)
        return (nfrags);
 }
 
+static inline int fragment(struct mbuf *m)
+{
+       while (m != NULL) {
+               vm_offset_t offset = mtod(m, vm_offset_t) & PAGE_MASK;
+
+               if (offset + m->m_len > PAGE_SIZE) {
+                       /* Split mbuf because it crosses a page boundary. */
+                       struct mbuf *m_new = m_getcl(M_NOWAIT, MT_DATA, 0);
+
+                       if (m_new == NULL)
+                               return (ENOMEM);
+
+                       m_copydata(m, PAGE_SIZE - offset,
+                           m->m_len - (PAGE_SIZE - offset), mtod(m_new, caddr_t));
+
+                       /* Set adjusted mbuf sizes. */
+                       m_new->m_len = m->m_len - (PAGE_SIZE - offset);
+                       m->m_len = PAGE_SIZE - offset;
+
+                       /* Insert new mbuf into chain. */
+                       m_new->m_next = m->m_next;
+                       m->m_next = m_new;
+               }
+               m = m->m_next;
+       }
+
+       return (0);
+}
+
 /**
  * Given an mbuf chain, make sure we have enough room and then push
  * it onto the transmit ring.
@@ -1541,6 +1570,12 @@ xn_assemble_tx_request(struct netfront_txq *txq, struct mbuf *m_head)
        struct ifnet *ifp = np->xn_ifp;
        u_int nfrags;
        int otherend_id;
+       int rc;
+
+       /* Fragment if mbuf crosses a page boundary. */
+       rc = fragment(m_head);
+       if (rc != 0)
+               return (rc);
 
        /**
         * Defragment the mbuf if necessary.
