[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v9 10/13] xen/arm64: Save/restore CPU context across SYSTEM_SUSPEND


  • To: Mykola Kvach <xakep.amatop@xxxxxxxxx>
  • From: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Date: Thu, 14 May 2026 17:20:53 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Mykola Kvach <mykola_kvach@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 14 May 2026 17:22:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v9 10/13] xen/arm64: Save/restore CPU context across SYSTEM_SUSPEND

Hi Mykola,

> 
> +#ifdef CONFIG_SYSTEM_SUSPEND
> +/*
> + * int prepare_resume_ctx(void)
> + *
> + * CPU context saved here will be restored on resume in hyp_resume function.
> + * prepare_resume_ctx shall return a non-zero value. Upon restoring the
> + * context, hyp_resume shall instead return zero. The C code that invokes
> + * prepare_resume_ctx interprets this return value to determine whether the
> + * context was just saved (returned from prepare_resume_ctx) or restored
> + * (returned via hyp_resume).
> + */
> +FUNC(prepare_resume_ctx)
> +        ldr   x0, =resume_cpu_context
> +
> +        /* Store callee-saved registers */
> +        stp   x19, x20, [x0, #RESUME_CTX_X19]
> +        stp   x21, x22, [x0, #RESUME_CTX_X21]
> +        stp   x23, x24, [x0, #RESUME_CTX_X23]
> +        stp   x25, x26, [x0, #RESUME_CTX_X25]
> +        stp   x27, x28, [x0, #RESUME_CTX_X27]
> +        stp   x29, lr, [x0, #RESUME_CTX_X29]
> +
> +        /* Store stack-pointer */
> +        mov   x2, sp
> +        str   x2, [x0, #RESUME_CTX_SP]
> +
> +        /* Store system control registers */
> +        mrs   x2, VBAR_EL2
> +        str   x2, [x0, #RESUME_CTX_VBAR_EL2]
> +        mrs   x2, VTCR_EL2
> +        str   x2, [x0, #RESUME_CTX_VTCR_EL2]
> +        mrs   x2, VTTBR_EL2
> +        str   x2, [x0, #RESUME_CTX_VTTBR_EL2]
> +        mrs   x2, TPIDR_EL2
> +        str   x2, [x0, #RESUME_CTX_TPIDR_EL2]
> +        mrs   x2, MDCR_EL2
> +        str   x2, [x0, #RESUME_CTX_MDCR_EL2]
> +        mrs   x2, HSTR_EL2
> +        str   x2, [x0, #RESUME_CTX_HSTR_EL2]
> +        mrs   x2, CPTR_EL2
> +        str   x2, [x0, #RESUME_CTX_CPTR_EL2]
> +        mrs   x2, HCR_EL2
> +        str   x2, [x0, #RESUME_CTX_HCR_EL2]

Do you think we should also save CNTHCTL_EL2? Apologies, it escaped my
first review, but I see we program it in both the boot CPU path and the
secondary CPU path: init_timer_interrupt().

The rest looks ok.

Cheers,
Luca




 

