Re: [PATCH v3 11/13] x86/xen: use lazy_mmu_state when context-switching
On 27/10/2025 13:29, David Hildenbrand wrote:
> On 25.10.25 00:52, Demi Marie Obenour wrote:
>> On 10/24/25 10:51, David Hildenbrand wrote:
>>> On 24.10.25 16:47, David Woodhouse wrote:
>>>> On Thu, 2025-10-23 at 22:06 +0200, David Hildenbrand wrote:
>>>>> On 15.10.25 10:27, Kevin Brodsky wrote:
>>>>>> We currently set a TIF flag when scheduling out a task that is in
>>>>>> lazy MMU mode, in order to restore it when the task is scheduled
>>>>>> again.
>>>>>>
>>>>>> The generic lazy_mmu layer now tracks whether a task is in lazy MMU
>>>>>> mode in task_struct::lazy_mmu_state. We can therefore check that
>>>>>> state when switching to the new task, instead of using a separate
>>>>>> TIF flag.
>>>>>>
>>>>>> Signed-off-by: Kevin Brodsky <kevin.brodsky@xxxxxxx>
>>>>>> ---
>>>>>
>>>>> Looks ok to me, but I hope we get some confirmation from x86 / xen
>>>>> folks.
>>>>
>>>> I know tglx has shouted at me in the past for precisely this reminder,
>>>> but you know you can test Xen guests under QEMU/KVM now and don't need
>>>> to actually run Xen? Has this been boot tested?
>>>
>>> And after that, boot-testing sparc as well? :D
>>>
>>> If it's easy, why not. But other people should not suffer for all the
>>> XEN hacks we keep dragging along.
>>
>> Which hacks? Serious question. Is this just for Xen PV or is HVM
>> also affected?
>
> In the context of this series, XEN_LAZY_MMU.

FWIW in that particular case it's relatively easy to tell this is
specific to Xen PV (it is only used in mmu_pv.c and enlighten_pv.c).
Knowing what to test is certainly not obvious in general, though.

- Kevin

> Your question regarding PV/HVM emphasizes my point: how is a submitter
> supposed to know which XEN combinations to test (and how to test
> them), to not confidently break something here.
>
> We really need guidance+help from the XEN folks here.