Re: [PATCH v2 20/23] x86/pv: Exception handling in FRED mode
On 28.08.2025 17:04, Andrew Cooper wrote:
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2265,9 +2265,83 @@ void asmlinkage check_ist_exit(const struct cpu_user_regs *regs, bool ist_exit)
>
> void asmlinkage entry_from_pv(struct cpu_user_regs *regs)
> {
> + struct fred_info *fi = cpu_regs_fred_info(regs);
> + uint8_t type = regs->fred_ss.type;
> + uint8_t vec = regs->fred_ss.vector;
> +
> + /* Copy fred_ss.vector into entry_vector as IDT delivery would have done. */
> - regs->entry_vector = regs->fred_ss.vector;
> + regs->entry_vector = vec;
> +
> + if ( !IS_ENABLED(CONFIG_PV) )
> + goto fatal;
> +
> + /*
> + * First, handle the asynchronous or fatal events. These are either
> + * unrelated to the interrupted context, or may not have valid context
> + * recorded, and all have special rules on how/whether to re-enable IRQs.
> + */
> + switch ( type )
> + {
> + case X86_ET_EXT_INTR:
> + return do_IRQ(regs);
> +
> + case X86_ET_NMI:
> + return do_nmi(regs);
> +
> + case X86_ET_HW_EXC:
> + switch ( vec )
> + {
> + case X86_EXC_DF: return do_double_fault(regs);
> + case X86_EXC_MC: return do_machine_check(regs);
> + }
> + break;
> + }
This switch() is identical to entry_from_xen()'s. Fold into a helper?
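Purely as a sketch (the helper name and the bool return convention are invented here, not taken from the patch), something along these lines:

static bool handle_async_event(struct cpu_user_regs *regs,
                               uint8_t type, uint8_t vec)
{
    switch ( type )
    {
    case X86_ET_EXT_INTR:
        do_IRQ(regs);
        return true;

    case X86_ET_NMI:
        do_nmi(regs);
        return true;

    case X86_ET_HW_EXC:
        switch ( vec )
        {
        case X86_EXC_DF: do_double_fault(regs);  return true;
        case X86_EXC_MC: do_machine_check(regs); return true;
        }
        break;
    }

    /* Not an asynchronous/fatal event - caller carries on. */
    return false;
}

with both entry points then doing "if ( handle_async_event(regs, type, vec) ) return;".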
> + /*
> + * With the asynchronous events handled, what remains are the synchronous
> + * ones. Guest context always had interrupts enabled.
> + */
> + local_irq_enable();
In the comment, maybe s/Guest/PV guest/?
> + switch ( type )
> + {
> + case X86_ET_HW_EXC:
> + case X86_ET_PRIV_SW_EXC:
> + case X86_ET_SW_EXC:
> + switch ( vec )
> + {
> + case X86_EXC_PF: handle_PF(regs, fi->edata); break;
> + case X86_EXC_GP: do_general_protection(regs); break;
> + case X86_EXC_UD: do_invalid_op(regs); break;
> + case X86_EXC_NM: do_device_not_available(regs); break;
> + case X86_EXC_BP: do_int3(regs); break;
> + case X86_EXC_DB: handle_DB(regs, fi->edata); break;
> +
> + case X86_EXC_DE:
> + case X86_EXC_OF:
> + case X86_EXC_BR:
> + case X86_EXC_NP:
> + case X86_EXC_SS:
> + case X86_EXC_MF:
> + case X86_EXC_AC:
> + case X86_EXC_XM:
> + do_trap(regs);
> + break;
> +
> + case X86_EXC_CP: do_entry_CP(regs); break;
> +
> + default:
> + goto fatal;
> + }
> + break;
This again looks identical to what entry_from_xen() has. Maybe, instead of
a helper for each switch(), we could have a common always-inline function
(with all necessary parametrization) that both invoke?
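As a rough sketch only (the name, the fred_info parameter, and the false-means-fatal convention are merely illustrative; whatever actually differs between the PV and Xen variants would become further parameters):

static always_inline bool dispatch_sync_exception(struct cpu_user_regs *regs,
                                                  struct fred_info *fi,
                                                  uint8_t vec)
{
    switch ( vec )
    {
    case X86_EXC_PF: handle_PF(regs, fi->edata); break;
    case X86_EXC_GP: do_general_protection(regs); break;
    case X86_EXC_UD: do_invalid_op(regs); break;
    case X86_EXC_NM: do_device_not_available(regs); break;
    case X86_EXC_BP: do_int3(regs); break;
    case X86_EXC_DB: handle_DB(regs, fi->edata); break;

    case X86_EXC_DE:
    case X86_EXC_OF:
    case X86_EXC_BR:
    case X86_EXC_NP:
    case X86_EXC_SS:
    case X86_EXC_MF:
    case X86_EXC_AC:
    case X86_EXC_XM:
        do_trap(regs);
        break;

    case X86_EXC_CP: do_entry_CP(regs); break;

    default:
        /* Unexpected vector - let the caller take its fatal path. */
        return false;
    }

    return true;
}

Being always-inline, each caller's instance should collapse back to what the open-coded switch() would have produced, so there ought to be no generated-code difference.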
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -63,7 +63,7 @@ UNLIKELY_END(syscall_no_callback)
> /* Conditionally clear DF */
> and %esi, UREGS_eflags(%rsp)
> /* %rbx: struct vcpu */
> -test_all_events:
> +LABEL(test_all_events, 0)
> ASSERT_NOT_IN_ATOMIC
> cli # tests must not race interrupts
> /*test_softirqs:*/
> @@ -152,6 +152,8 @@ END(switch_to_kernel)
> FUNC_LOCAL(restore_all_guest)
> ASSERT_INTERRUPTS_DISABLED
>
> + ALTERNATIVE "", "jmp eretu_exit_to_guest", X86_FEATURE_XEN_FRED
> +
> /* Stash guest SPEC_CTRL value while we can read struct vcpu. */
> mov VCPU_arch_msrs(%rbx), %rdx
I assume it's deliberate that you don't "consume" this insn into the
alternative, but without the description saying anything, it's not quite
clear why.
Jan