
Re: [Xen-devel] [Qemu-devel] [PATCH v7 3/5] shutdown: Add source information to SHUTDOWN and RESET



Eric Blake <eblake@xxxxxxxxxx> writes:

> Time to wire up all the call sites that request a shutdown or
> reset to use the enum added in the previous patch.
>
> It would have been less churn to keep the common case with no
> arguments as meaning guest-triggered, and only modify the
> host-triggered code paths via a wrapper function, but then we'd
> still have to audit that I didn't miss any host-triggered spots;
> changing the signature forces us to double-check that I correctly
> categorized all callers.
>
> Since command line options can change whether a guest reset request
> causes an actual reset vs. a shutdown, it's easy to also add the
> information to reset requests.
>
> Replay adds a FIXME to preserve the cause across the replay stream;
> that will be tackled in the next patch.
>
> Signed-off-by: Eric Blake <eblake@xxxxxxxxxx>
> Acked-by: David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> [ppc parts]
> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@xxxxxxxxxxxx> [SPARC part]
[...]
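
For readers following along, this is roughly the shape of the change (a
sketch, not the verbatim patch; ShutdownCause is the QAPI enum added in
the previous patch, abbreviated here to the values mentioned in this
thread):

    /* Sketch only: ShutdownCause abbreviated to the values seen in this
     * series; the full definition lives in the previous patch. */
    typedef enum ShutdownCause {
        SHUTDOWN_CAUSE_HOST_ERROR,      /* internal error / placeholder */
        SHUTDOWN_CAUSE_HOST_QMP,        /* quit or system_reset over QMP */
        SHUTDOWN_CAUSE_GUEST_SHUTDOWN,  /* guest-initiated, e.g. ACPI soft-off */
        SHUTDOWN_CAUSE_GUEST_RESET,     /* guest-initiated reset */
    } ShutdownCause;

    /* Every caller now has to state the trigger explicitly: */
    void qemu_system_shutdown_request(ShutdownCause reason);
    void qemu_system_reset_request(ShutdownCause reason);
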
> diff --git a/hw/acpi/core.c b/hw/acpi/core.c
> index e890a5d..95fcac9 100644
> --- a/hw/acpi/core.c
> +++ b/hw/acpi/core.c
> @@ -561,7 +561,7 @@ static void acpi_pm1_cnt_write(ACPIREGS *ar, uint16_t val)
>          uint16_t sus_typ = (val >> 10) & 7;
>          switch(sus_typ) {
>          case 0: /* soft power off */
> -            qemu_system_shutdown_request();
> +            qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
>              break;
>          case 1:
>              qemu_system_suspend_request();
> @@ -569,7 +569,7 @@ static void acpi_pm1_cnt_write(ACPIREGS *ar, uint16_t val)
>          default:
>              if (sus_typ == ar->pm1.cnt.s4_val) { /* S4 request */
>                  qapi_event_send_suspend_disk(&error_abort);
> -                qemu_system_shutdown_request();
> +                qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);

I'm fine with using SHUTDOWN_CAUSE_GUEST_SHUTDOWN for suspend, but have
you considered SHUTDOWN_CAUSE_GUEST_SUSPEND?

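Concretely, with such a value the S4 branch above could report the
suspend-to-disk case distinctly (SHUTDOWN_CAUSE_GUEST_SUSPEND is
hypothetical, it is not in the series as posted):

    if (sus_typ == ar->pm1.cnt.s4_val) { /* S4 request */
        qapi_event_send_suspend_disk(&error_abort);
        /* hypothetical constant, suggested above */
        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SUSPEND);
    }
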
>              }
>              break;
>          }
[...]
> diff --git a/qmp.c b/qmp.c
> index ab74cd7..95949d0 100644
> --- a/qmp.c
> +++ b/qmp.c
> @@ -84,7 +84,7 @@ UuidInfo *qmp_query_uuid(Error **errp)
>  void qmp_quit(Error **errp)
>  {
>      no_shutdown = 0;
> -    qemu_system_shutdown_request();
> +    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_QMP);
>  }
>
>  void qmp_stop(Error **errp)
> @@ -105,7 +105,7 @@ void qmp_stop(Error **errp)
>
>  void qmp_system_reset(Error **errp)
>  {
> -    qemu_system_reset_request();
> +    qemu_system_reset_request(SHUTDOWN_CAUSE_HOST_QMP);

This is the only place where we pass something other than
SHUTDOWN_CAUSE_GUEST_RESET.  We could avoid churn the obvious way, but I
guess having the churn eases patch review.  Okay.

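(By "the obvious way" I mean the wrapper approach the commit message
already rejects: keep the zero-argument call meaning guest-triggered and
add a cause-taking variant only for host paths, so just this QMP caller
would need touching.  Sketch with made-up names:)

    /* Hypothetical churn-avoiding alternative; names are made up. */
    void qemu_system_reset_request_with_cause(ShutdownCause reason);

    static inline void qemu_system_reset_request(void)
    {
        qemu_system_reset_request_with_cause(SHUTDOWN_CAUSE_GUEST_RESET);
    }
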
>  }
>
>  void qmp_system_powerdown(Error **erp)
> diff --git a/replay/replay.c b/replay/replay.c
> index f810628..604fa4f 100644
> --- a/replay/replay.c
> +++ b/replay/replay.c
> @@ -51,7 +51,8 @@ bool replay_next_event_is(int event)
>          switch (replay_state.data_kind) {
>          case EVENT_SHUTDOWN:
>              replay_finish_event();
> -            qemu_system_shutdown_request();
> +            /* FIXME - store actual reason */
> +            qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);

The temporary replay breakage is no big deal.  Still, can we avoid it by
extending replay first, using a dummy value like
SHUTDOWN_CAUSE_HOST_ERROR until the real cause becomes available?  Not
sure it's worth a respin, though.

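One possible shape for that follow-up, purely as a sketch (the event
range and EVENT_SHUTDOWN_LAST are assumptions, not part of this series):
encode the cause into the shutdown event code so it survives
record/replay.

    /* Hypothetical: reserve a range of event codes for shutdown, so the
     * cause rides along in the event id (EVENT_SHUTDOWN_LAST is made up). */
    case EVENT_SHUTDOWN ... EVENT_SHUTDOWN_LAST:
        replay_finish_event();
        qemu_system_shutdown_request(replay_state.data_kind - EVENT_SHUTDOWN);
        break;
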
>              break;
>          default:
>              /* clock, time_t, checkpoint and other events */
[...]

Reviewed-by: Markus Armbruster <armbru@xxxxxxxxxx>
