Re: [PATCH v6 11/13] xen/arm: Add support for system suspend triggered by hardware domain
Thanks for your detailed comments and suggestions, much appreciated.

On Tue, Sep 9, 2025 at 12:14 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 09.09.2025 10:14, Mykola Kvach wrote:
> > On Tue, Sep 9, 2025 at 9:57 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
> >> On 09.09.2025 08:29, Mykola Kvach wrote:
> >>> Then, in domain_shutdown(), we can call need_hwdom_shutdown() instead
> >>> of directly checking is_hardware_domain(d). This keeps the logic
> >>> readable and avoids code duplication.
> >>>
> >>> What do you think about this approach?
> >>
> >> Well, there's still the CONFIG_ARM check in there that I would like to
> >> see gone. (As a nit, the use of u8 would also want to go away.)
> >
> > We could combine your proposal from v5 of this patch series, i.e., use
> > the HAS_HWDOM_SUSPEND extra config together with this helper function:
> >
> > static inline bool need_hwdom_shutdown(const struct domain *d)
> > {
> >     bool is_hw_dom = is_hardware_domain(d);
> >
> >     if ( !IS_ENABLED(CONFIG_HAS_HWDOM_SUSPEND) )
> >         return is_hw_dom && d->shutdown_code != SHUTDOWN_suspend;
> >
> >     return is_hw_dom;
> > }
>
> Maybe. Yet then the next thing striking me as odd is the redundant
> checking of is_hw_dom. Why not simply
>
> {
>     if ( !is_hardware_domain(d) )
>         return false;
>
>     return IS_ENABLED(CONFIG_HAS_HWDOM_SUSPEND) ||
>            d->shutdown_code != SHUTDOWN_suspend;
> }
>
> Yet as said - my expectation is anyway that the is_hardware_domain()
> check would live in the caller.

Ack.
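To be sure we mean the same thing: with the check hoisted into the
caller, the relevant spot in domain_shutdown() could look roughly like
this (untested sketch mirroring your expression; exact placement, and
whether d->shutdown_code or the reason parameter is used, still to be
settled):

    /*
     * Take the hardware domain shutdown path unless this is a suspend
     * request on a build without hardware domain suspend support.
     */
    if ( is_hardware_domain(d) &&
         (IS_ENABLED(CONFIG_HAS_HWDOM_SUSPEND) ||
          d->shutdown_code != SHUTDOWN_suspend) )
        hwdom_shutdown(d->shutdown_code);

With that, the need_hwdom_shutdown() helper would go away entirely.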
> > As for the second argument (reason), I can extract it directly from
> > the domain structure, as is done in the function above.
>
> Looks like a misunderstanding: I don't mind the function parameter. But
> the "u8" type shouldn't be used anymore in new code; that's uint8_t now.

Oh, got it. I just used the same type as in domain_shutdown().

> >> Furthermore, with continuing to (ab)use domain_shutdown() also for the
> >> suspend case (Dom0 isn't really shut down when suspending, aiui), you
> >> retain the widening of the issue with the bogus setting of
> >> d->is_shutting_down (and hence the need for later clearing the flag
> >> again) that I mentioned elsewhere. (Yes, I remain of the opinion that
> >> you don't need to sort that as a prereq to your work, yet at the same
> >> time I think the goal should be to at least not make a bad situation
> >> worse.)
> >
> > From the perspective of ARM logic inside Xen, we perform exactly the
> > same shutdown steps as for other domains, except that at the end we
> > need to call Xen suspend.
>
> Which, as said, feels wrong. Domains to be revived after resume aren't
> really shut down, so imo they should never have ->is_shutting_down set.

I believe this is out of scope for this series; the same actually
applies to shutdown_code.

> > For a domain with a toolstack, it is possible to have a running Xen
> > watchdog service. For example, with systemd it can easily be stopped
> > from the guest, because we have hooks and can perform actions right
> > before suspend.
> >
> > The same applies to the Linux kernel: if the Xen watchdog driver has
> > PM ops installed, nothing bad happens.
> >
> > However, when using init.d, it isn't easy to stop the Xen watchdog
> > automatically right before suspend. Therefore, the Xen code has an
> > extra check (see domain_watchdog_timeout()) which tests the
> > is_shutting_down flag and does nothing if it is set.
>
> I don't understand how these watchdog considerations come into play
> here.

I'm just trying to explain why we still need to set this flag even for
the hardware domain.

> > The is_shutting_down flag is easily reset on Xen resume via a
> > domain_resume() call, so I don't see any problem with that.
>
> You did read my earlier mail though, regarding concerns towards the
> clearing of that flag once it was set? (You must have, since iirc you
> even asked [1] whether you're expected to address those issues up
> front.)

As far as I understand, this issue is relevant to x86, and I believe it
is out of scope for this series. See my previous message here:
https://lists.xen.org/archives/html/xen-devel/2025-08/msg02127.html

I will prepare a separate patch series to address it.

> Jan
>
> [1] https://lists.xen.org/archives/html/xen-devel/2025-08/msg02057.html

Best regards,
Mykola
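P.S. For reference, the watchdog check I mentioned is along these lines
(paraphrased from domain_watchdog_timeout(); details may differ from the
exact tree):

    /* Timer callback, fired when a domain's watchdog expires. */
    static void domain_watchdog_timeout(void *data)
    {
        struct domain *d = data;

        /*
         * Don't trigger a watchdog shutdown for a domain that is
         * already going down. Because suspend currently sets
         * d->is_shutting_down as well, this is also what keeps a
         * still-armed watchdog from firing across a suspend.
         */
        if ( d->is_shutting_down )
            return;

        domain_shutdown(d, SHUTDOWN_watchdog);
    }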