
Re: Possible bug? DOM-U network stopped working after fatal error reported in DOM0



On Wed, Jan 5, 2022 at 10:33 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Wed, Jan 05, 2022 at 12:05:39AM +0800, G.R. wrote:
> > > > > > But seems like this patch is not stable enough yet and has its own
> > > > > > issue -- memory is not properly released?
> > > > >
> > > > > I know. I've been working on improving it this morning and I'm
> > > > > attaching an updated version below.
> > > > >
> > > > Good news.
> > > > With this  new patch, the NAS domU can serve iSCSI disk without OOM
> > > > panic, at least for a little while.
> > > > I'm going to keep it up and running for a while to see if it's stable 
> > > > over time.
> > >
> > > Thanks again for all the testing. Do you see any difference
> > > performance wise?
> > I'm still on a *debug* kernel build to capture any potential panic --
> > none so far -- no performance testing yet.
> > Since I'm a home user with a relatively lightweight workload, so far I
> > didn't observe any difference in daily usage.
> >
> > I did some quick iperf3 testing just now.
>
> Thanks for doing this.
>
> > 1. between nas domU <=> Linux dom0 running on an old i7-3770 based box.
> > The peak is roughly 12 Gbits/s when domU is the server.
> > But I do see regression down to ~8.5 Gbits/s when I repeat the test in
> > a short burst.
> > The regression can recover when I leave the system idle for a while.
> >
> > When dom0 is the iperf3 server, the transfer rate is much lower, down
> > all the way to 1.x Gbits/s.
> > Sometimes, I can see the following kernel log repeats during the
> > testing, likely contributing to the slowdown.
> >              interrupt storm detected on "irq2328:"; throttling interrupt source
>
> I assume the message is in the domU, not the dom0?
Yes, in the TrueNAS domU.
BTW, after rebooting back to the stock kernel, the message is no
longer observed.
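In case it helps to quantify that: the rough kind of check I have in
mind is to poll `vmstat -i` in the domU and diff the "total" counter
for that interrupt source over time. Below is only a Python sketch of
the idea (not something I actually ran); the source name "irq2328:" is
just taken from the log message above and will differ on another boot.

import subprocess
import time

def irq_total(name: str) -> int:
    """Cumulative interrupt count for the source whose name starts with `name`."""
    out = subprocess.run(["vmstat", "-i"], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith(name):
            fields = line.split()
            return int(fields[-2])          # last two columns are: total rate
    raise KeyError(f"interrupt source {name!r} not found")

def watch(name: str = "irq2328:", interval: float = 1.0) -> None:
    prev = irq_total(name)
    while True:
        time.sleep(interval)
        cur = irq_total(name)
        print(f"{name} {(cur - prev) / interval:.0f} interrupts/s")
        prev = cur

if __name__ == "__main__":
    watch()

That would show how close the queue interrupt gets to the storm
threshold with and without the patch (the threshold should be the
hw.intr_storm_threshold sysctl, if I remember correctly).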

With the stock kernel, the transfer rate from dom0 to the NAS domU can
be as high as 30 Gbps.
The variation is still observed, sometimes dropping to ~19 Gbps. There
are no retransmissions in this direction.

In the reverse direction, the low transfer rate is still there.
It remains in the 1.x Gbps range, though somewhat better than in the
previous test.
The huge number of retransmissions is also still observed.
The same behavior shows up on a stock FreeBSD 12.2 image, so this is
not specific to TrueNAS.

According to the packet capture, the retransmissions appear to be
caused by packet reordering.
Here is one example incident:
1. dom0 sees a sequence jump in the incoming stream and begins to send out SACKs.
2. When the SACKs show up at the domU, it begins to retransmit the
   presumably lost frames (the retransmission looks weird, since it
   shows up as a mixed stream of 1448-byte and 12-byte packets instead
   of consistently 1448-byte ones).
3. Suddenly the packets that were believed lost show up, and dom0
   accepts them as if they were the retransmission.
4. The actual retransmission finally shows up at dom0...
Should we expect packet reordering on a direct virtual link? It sounds fishy to me.
Any chance we can get this retransmission issue fixed?
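For reference, this is roughly how one could flag the out-of-order
segments programmatically. It's only a sketch with scapy, not what I
actually ran against the capture, and the pcap file name is a
placeholder:

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

def find_out_of_order(pcap_path: str) -> None:
    highest_seq = defaultdict(int)          # per-flow highest sequence end seen so far
    for i, pkt in enumerate(rdpcap(pcap_path)):
        if not (IP in pkt and TCP in pkt):
            continue
        tcp = pkt[TCP]
        payload = len(tcp.payload)
        if payload == 0:
            continue                        # ignore pure ACKs/SACKs
        flow = (pkt[IP].src, tcp.sport, pkt[IP].dst, tcp.dport)
        end = tcp.seq + payload
        if end <= highest_seq[flow]:
            print(f"pkt #{i}: {flow} seq={tcp.seq} len={payload} "
                  f"arrives behind highest seen {highest_seq[flow]} "
                  "(reordered or retransmitted)")
        highest_seq[flow] = max(highest_seq[flow], end)

if __name__ == "__main__":
    find_out_of_order("domu-to-dom0.pcap")   # placeholder capture file

On its own this can't tell a reordered original apart from a genuine
retransmission; for that the IP IDs or TCP timestamp options have to
be compared as well.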

So it looks like at least the imbalance between the two directions is
not related to your patch.
The debug build is likely a bigger contributor to the perf difference
in both directions.

I also tried your patch on a release build and didn't observe any
major difference in the iperf3 numbers.
They roughly match the 30 Gbps and 1.x Gbps figures from the stock
release kernel.
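To make these kernel-to-kernel comparisons a bit more repeatable, I
could script the runs. Something like the following sketch (placeholder
server address, and it assumes the iperf3 client runs in the domU with
the server in dom0) would average a few runs per direction and also
report the retransmit counts:

import json
import statistics
import subprocess

def run_iperf3(server: str, reverse: bool = False, seconds: int = 10):
    """One iperf3 run in JSON mode; returns (Gbit/s, retransmits) for the sender."""
    cmd = ["iperf3", "-c", server, "-t", str(seconds), "--json"]
    if reverse:
        cmd.append("-R")                    # server sends, client receives
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    sent = json.loads(out)["end"]["sum_sent"]
    # "retransmits" can be absent depending on platform/direction, hence .get()
    return sent["bits_per_second"] / 1e9, sent.get("retransmits", 0)

def summarize(server: str, runs: int = 5) -> None:
    # Direction labels assume the client runs in the domU and the server in dom0.
    for reverse, label in ((False, "domU -> dom0"), (True, "dom0 -> domU")):
        rates, retrs = [], []
        for _ in range(runs):
            gbps, retr = run_iperf3(server, reverse)
            rates.append(gbps)
            retrs.append(retr)
        print(f"{label}: avg {statistics.mean(rates):.1f} Gbit/s "
              f"(min {min(rates):.1f}), total retransmits {sum(retrs)}")

if __name__ == "__main__":
    summarize("192.168.1.1")                # placeholder dom0 address

That would also make it easy to re-run the exact same matrix
(debug/release, with/without the patch) and compare like for like.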

>
> > Another thing that looks alarming is the retransmission is high:
> > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > [  5]   0.00-1.00   sec   212 MBytes  1.78 Gbits/sec  110    231 KBytes
> > [  5]   1.00-2.00   sec   230 MBytes  1.92 Gbits/sec    1    439 KBytes
> > [  5]   2.00-3.00   sec   228 MBytes  1.92 Gbits/sec    3    335 KBytes
> > [  5]   3.00-4.00   sec   204 MBytes  1.71 Gbits/sec    1    486 KBytes
> > [  5]   4.00-5.00   sec   201 MBytes  1.69 Gbits/sec  812    258 KBytes
> > [  5]   5.00-6.00   sec   179 MBytes  1.51 Gbits/sec    1    372 KBytes
> > [  5]   6.00-7.00   sec  50.5 MBytes   423 Mbits/sec    2    154 KBytes
> > [  5]   7.00-8.00   sec   194 MBytes  1.63 Gbits/sec  339    172 KBytes
> > [  5]   8.00-9.00   sec   156 MBytes  1.30 Gbits/sec  854    215 KBytes
> > [  5]   9.00-10.00  sec   143 MBytes  1.20 Gbits/sec  997   93.8 KBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval           Transfer     Bitrate         Retr
> > [  5]   0.00-10.00  sec  1.76 GBytes  1.51 Gbits/sec  3120             sender
> > [  5]   0.00-10.45  sec  1.76 GBytes  1.44 Gbits/sec                  receiver
>
> Do you see the same when running the same tests on a debug kernel
> without my patch applied? (ie: a kernel build yourself from the same
> baseline but just without my patch applied)
>
> I'm mostly interested in knowing whether the patch itself causes any
> regressions from the current state (which might not be great already).
>
> >
> > 2. between a remote box <=> nas domU, through a 1Gbps ethernet cable.
> > Roughly saturate the link when domU is the server, without obvious perf drop
> > When domU running as a client, the achieved BW is ~30Mbps lower than the 
> > peak.
> > Retransmission sometimes also shows up in this scenario, more
> > seriously when domU is the client.
> >
> > I cannot test with the stock kernel nor with your patch in release
> > mode immediately.
> >
> > But according to the observed imbalance between the incoming and
> > outgoing paths, a non-trivial penalty applies, I guess?
>
> We should get a baseline using the same sources without my patch
> applied.
>
> Thanks, Roger.



 

