Re: [PATCH v1 2/6] xen/riscv: introduce things necessary for p2m initialization
- To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- From: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
- Date: Mon, 12 May 2025 11:33:28 +0200
- Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
- Delivery-date: Mon, 12 May 2025 09:33:46 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 5/12/25 11:24 AM, Oleksii Kurochko wrote:
On 5/9/25 6:14 PM, Andrew Cooper wrote:
On 09/05/2025 4:57 pm, Oleksii Kurochko wrote:
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..ad4beef8f9
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,168 @@
+#include <xen/domain_page.h>
+#include <xen/iommu.h>
+#include <xen/lib.h>
+#include <xen/mm.h>
+#include <xen/pfn.h>
+#include <xen/rwlock.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+
+#include <asm/page.h>
+#include <asm/p2m.h>
+
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ *
+ * TODO: add support for flushing TLB entries associated with a given VMID.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * TODO: shouldn't this flush be done for each physical CPU?
+     *       If yes, then the SBI call sbi_remote_hfence_gvma() could
+     *       be used for that.
+     */
+#if defined(__riscv_h)
+    asm volatile ( "hfence.gvma" ::: "memory" );
+#else
+    asm volatile ( ".insn r 0x73, 0x0, 0x31, x0, x0, x0" ::: "memory" );
+#endif
TLB flushing needs to happen for each pCPU which potentially has cached
a mapping.
In other arches, this is tracked by d->dirty_cpumask which is the bitmap
of pCPUs where this domain is scheduled.
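If I follow, the helper above would eventually turn into something like the
sketch below (assuming d->dirty_cpumask gets maintained for RISC-V, that
struct p2m_domain gains a backpointer to its struct domain, and that the port
grows an sbi_remote_hfence_gvma() wrapper taking a cpumask; none of that
exists in the patch yet):

static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
{
    /* Hypothetical backpointer from the p2m to its owning domain. */
    struct domain *d = p2m->domain;

    ASSERT(p2m_is_write_locked(p2m));

    /*
     * Hypothetical SBI RFENCE wrapper: start/size of 0 asks the SBI
     * implementation to execute HFENCE.GVMA for the whole guest-physical
     * address space on every hart named in the mask.
     */
    sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
}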
I could only find usage of d->dirty_cpumask in x86 and common code (grant
tables) for flushing the TLB. However, it seems that d->dirty_cpumask is
not set anywhere for ARM. Is it sufficient to set a bit in d->dirty_cpumask
in startup_cpu_idle_loop()?
And one more thing: if d->dirty_cpumask is empty (for example, at the p2m
initialization stage), then the p2m TLB flush could be skipped entirely, right?
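I.e. something like this at the top of the helper (again assuming the p2m
can reach its struct domain):

    /*
     * No pCPU has run this domain yet (e.g. during p2m initialization),
     * so there are no stale guest-physical mappings to flush anywhere.
     */
    if ( cpumask_empty(d->dirty_cpumask) )
        return;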
~ Oleksii
In addition, it’s also necessary to set and clear bits in d->dirty_cpumask
during context_switch, correct? Set it before switching from the previous
domain, and clear it after switching to the new domain?
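To make the question more concrete, I imagine something roughly along these
lines in context_switch() (only a sketch, loosely modelled on what x86 does;
local_hfence_gvma_all() is a made-up placeholder for the local flush):

void context_switch(struct vcpu *prev, struct vcpu *next)
{
    unsigned int cpu = smp_processor_id();

    /* From now on this pCPU may cache mappings of the next domain. */
    cpumask_set_cpu(cpu, next->domain->dirty_cpumask);
    write_atomic(&next->dirty_cpu, cpu);

    /* ... actual switch of registers and VS-level state ... */

    /*
     * Flush whatever may be cached for the previous domain before this
     * pCPU drops out of that domain's dirty mask.
     */
    local_hfence_gvma_all();    /* hypothetical local flush helper */
    cpumask_clear_cpu(cpu, prev->domain->dirty_cpumask);
    write_atomic(&prev->dirty_cpu, VCPU_CPU_CLEAN);
}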
Also, when a bit is set in d->dirty_cpumask, the v->processor value is also
stored in v->dirty_cpu. Is this needed to track which processor is currently
being used for the vCPU?
CPUs need to flush their TLBs before removing themselves from
d->dirty_cpumask, which is typically done during context switch, but it
means that to flush the P2M, you only need to IPI a subset of CPUs.
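So on the flush side it would boil down to IPI-ing just that subset,
something like (sketch only; the handler body is the local flush from the
patch above, and on_selected_cpus() is the existing common helper):

static void flush_gvma_ipi(void *unused)
{
    /* Local flush of all guest-physical (G-stage) TLB entries. */
    asm volatile ( ".insn r 0x73, 0x0, 0x31, x0, x0, x0" ::: "memory" );
}

static void p2m_tlb_flush(struct domain *d)    /* hypothetical name */
{
    /* wait == 1: don't return until every selected pCPU has flushed. */
    on_selected_cpus(d->dirty_cpumask, flush_gvma_ipi, NULL, 1);
}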
I can't find where the P2M flush happens for x86/ARM. Could you please point me
to where it is handled?
Also, I found guest_flush_tlb_mask() for x86. I assume that it is
x86-specific and that, in general, flush_tlb_mask() alone is enough, right?
Thanks in advance for the answers.
~ Oleksii