[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] xen/console: do not drop serial output from the hardware domain


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 16 Jun 2022 13:31:08 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 16 Jun 2022 11:31:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
> On 14.06.2022 11:38, Roger Pau Monné wrote:
> > On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
> >> On 14.06.2022 10:32, Roger Pau Monné wrote:
> >>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> >>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since
> >>>>>>>>>>>>> it's likely important to have all the output if the boot fails
> >>>>>>>>>>>>> without having to resort to sync_console (which also affects
> >>>>>>>>>>>>> the output from other guests).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is
> >>>>>>>>>>>>> dropped.
> >>>>>>>>>>>>
> >>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively)
> >>>>>>>>>>>> more important than Xen's own one (which isn't "forced")? And
> >>>>>>>>>>>> with this aiming at boot output only, wouldn't you want to stop
> >>>>>>>>>>>> the overriding once boot has completed (of which, if I'm not
> >>>>>>>>>>>> mistaken, we don't really have any signal coming from Dom0)? And
> >>>>>>>>>>>> even during boot I'm not convinced we'd want to let through
> >>>>>>>>>>>> everything, but perhaps just Dom0's kernel messages?
> >>>>>>>>>>>
> >>>>>>>>>>> I normally use sync_console on all the boxes I do dev work on,
> >>>>>>>>>>> so this request is something that came up internally.
> >>>>>>>>>>>
> >>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have
> >>>>>>>>>>> rate limiting based on log levels, I was assuming that
> >>>>>>>>>>> non-ratelimited messages wouldn't be dropped.  But yes, I agree
> >>>>>>>>>>> that Xen (non-guest triggered) output shouldn't be rate limited
> >>>>>>>>>>> either.
> >>>>>>>>>>
> >>>>>>>>>> Which would raise the question of why we have log levels for
> >>>>>>>>>> non-guest messages.
> >>>>>>>>>
> >>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between
> >>>>>>>>> log levels and rate limiting.  If I set the log level to WARNING I
> >>>>>>>>> would expect not to lose _any_ non-guest log messages with level
> >>>>>>>>> WARNING or above.  It's still useful to have log levels for
> >>>>>>>>> non-guest messages, since a user might want to filter out DEBUG
> >>>>>>>>> non-guest messages for example.
> >>>>>>>>
> >>>>>>>> It was me who was confused, because of the two log-everything
> >>>>>>>> variants we have (console and serial).  You're right that your
> >>>>>>>> change is unrelated to log levels.  However, when there are e.g.
> >>>>>>>> many warnings or when an admin has lowered the log level, what you
> >>>>>>>> (would) do is effectively force sync_console mode transiently (for
> >>>>>>>> a subset of messages, but that's secondary, especially because the
> >>>>>>>> "forced" output would still be waiting for earlier output to make
> >>>>>>>> it out).
> >>>>>>>
> >>>>>>> Right, it would have to wait for any previous output on the buffer to
> >>>>>>> go out first.  In any case we can guarantee that no more output will
> >>>>>>> be added to the buffer while Xen waits for it to be flushed.
> >>>>>>>
> >>>>>>> So for the hardware domain it might make sense to wait for the TX
> >>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
> >>>>>>> the hypercall.  That however could cause issues if guests manage to
> >>>>>>> keep filling the buffer while the hardware domain is being preempted.
> >>>>>>>
> >>>>>>> Alternatively we could always reserve half of the buffer for the
> >>>>>>> hardware domain, and allow it to be preempted while waiting for space
> >>>>>>> (since it's guaranteed non-hardware domains won't be able to steal the
> >>>>>>> allocation from the hardware domain).
> >>>>>>
> >>>>>> Getting complicated it seems. I have to admit that I wonder whether we
> >>>>>> wouldn't be better off leaving the current logic as is.
> >>>>>
> >>>>> Another possible solution (more like a band-aid) is to increase the
> >>>>> buffer size from 4 pages to 8 or 16.  That would likely allow Xen to
> >>>>> cope with the high throughput of boot messages.
> >>>>
> >>>> You mean the buffer whose size is controlled by serial_tx_buffer?
> >>>
> >>> Yes.
> >>>
> >>>> On
> >>>> large systems one may want to simply make use of the command line
> >>>> option then; I don't think the built-in default needs changing. Or
> >>>> if so, then perhaps not statically at build time, but taking into
> >>>> account system properties (like CPU count).
> >>>
> >>> So how about we use:
> >>>
> >>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
> >>
> >> That would _reduce_ size on small systems, wouldn't it? Originally
> >> you were after increasing the default size. But if you had meant
> >> max(), then I'd fear on very large systems this may grow a little
> >> too large.
> > 
> > See previous followup about my mistake of using min() instead of
> > max().
> > 
> > On a system with 512 CPUs that would be 512KB; I don't think that's a
> > lot of memory, especially since a system with 512 CPUs can be expected
> > to have a matching amount of memory.
> > 
> > It's true however that I very much doubt we would fill a 512K buffer,
> > so limiting to 64K might be a sensible starting point?
> 
> Yeah, 64k could be a value to compromise on. What total size of
> output have you observed to trigger the making of this patch? Xen
> alone doesn't even manage to fill 16k on most of my systems ...

I've tried on one of the affected systems now: it's an 8 CPU Kaby Lake
at 3.5GHz, and it manages to fill the buffer while booting Linux.

My proposed formula won't fix this use case, so what about just
bumping the buffer to 32K by default, which does fix it?

Or alternatively, use the proposed formula but clamp the buffer size to
the [32K,64K] range.

Thanks, Roger.
