
Re: [Xen-devel] [PATCH v3 14/14] AMD/IOMMU: process softirqs while dumping IRTs


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Mon, 22 Jul 2019 08:49:32 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Brian Woods <brian.woods@xxxxxxx>, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
  • Delivery-date: Mon, 22 Jul 2019 08:53:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v3 14/14] AMD/IOMMU: process softirqs while dumping IRTs

On 19.07.2019 19:55, Andrew Cooper wrote:
> On 16/07/2019 17:41, Jan Beulich wrote:
>> When there are sufficiently many devices listed in the ACPI tables (no
>> matter if they actually exist), output may take way longer than the
>> watchdog would like.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> v3: New.
>> ---
>> TBD: Seeing the volume of output I wonder whether we should further
>>        suppress logging headers of devices which have no active entry
>>        (i.e. emit the header only upon finding the first IRTE worth
>>        logging). And while minor for the total volume of output I'm
>>        also unconvinced logging both a "per device" header line and a
>>        "shared" one makes sense, when only one of the two can actually
>>        be followed by actual contents.
> 
> I don't have a system I can access at the moment, so can't judge how bad
> it is right now.  However, I would advocate the removal of irrelevant
> information.

I'll try to find the time to put together another patch to this effect.
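Roughly the shape I have in mind - a sketch only, with a made-up
(simplified) IRTE layout and function name rather than the ones in the
tree:

#include <xen/types.h>
#include <xen/lib.h>

/* Hypothetical, simplified entry layout - for illustration only. */
struct example_irte {
    bool remap_en;
    uint8_t vector;
    uint32_t dest;
};

static void dump_one_table(const struct example_irte *tbl, unsigned int nr,
                           const char *header)
{
    bool printed = false;
    unsigned int i;

    for ( i = 0; i < nr; ++i )
    {
        if ( !tbl[i].remap_en )
            continue;                       /* nothing worth logging */

        if ( !printed )
        {
            /* Emit the header only upon finding the first active entry. */
            printk("%s\n", header);
            printed = true;
        }

        printk("  IRTE[%03x] vector %#x dest %#x\n",
               i, tbl[i].vector, tbl[i].dest);
    }
}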

> Either way, this is debugging so Acked-by: Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>

Thanks, also for all the other review of this series!

> As an observation, I wonder whether continually sprinkling
> process_pending_softirqs() is the best thing to do for keyhandlers.
> We've got a number of others which incur the wrath of the watchdog
> (grant table in particular), which in practice means they are typically
> broken when they are actually needed for debugging in production.
> 
> As these are for debugging only, might it be a better idea to stop the
> watchdog while keyhandlers are running?  The only useful thing we
> actually manage here is to stop the watchdog killing us.

Hmm, I would agree with going this route if the watchdog could be
disabled on a per-CPU basis, but right now watchdog_disable() is a
system-wide action.
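
For reference, the pattern in question is simply to yield to softirqs
every so many iterations - a rough sketch only, with the actual
printing elided:

#include <xen/softirq.h>

static void dump_many_entries(unsigned int nr)
{
    unsigned int i;

    for ( i = 0; i < nr; ++i )
    {
        /* ... format and printk() entry i here ... */

        if ( i && !(i & 0x1f) )
            /*
             * Every 32 iterations let pending softirqs run, so that the
             * periodic timer work the watchdog relies on isn't starved
             * for the whole duration of the dump.
             */
            process_pending_softirqs();
    }
}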

Jan