
Re: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features



Hi,

On 05/08/2020 16:41, Oleksandr wrote:

On 05.08.20 12:32, Julien Grall wrote:

Hi Julien.

Hi Stefano,

On 05/08/2020 00:22, Stefano Stabellini wrote:
On Mon, 3 Aug 2020, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

This patch makes it possible to forward Guest MMIO accesses
to a device emulator on Arm and enables that support for
Arm64.

Also update XSM code a bit to let DM op be used on Arm.
New arch DM op will be introduced in the follow-up patch.

Please note that, at the moment, the build on Arm32 is broken
(see the cmpxchg usage in hvm_send_buffered_ioreq()) if someone

Speaking of buffered_ioreq, if I recall correctly, they were only used
for VGA-related things on x86. It looks like it is still true.

If so, do we need it on ARM? Note that I don't think we can get rid of
it from the interface as it is baked into ioreq, but it might be
possible to have a dummy implementation on ARM. Or maybe not: looking at
xen/common/hvm/ioreq.c it looks like it would be difficult to
disentangle bufioreq stuff from the rest of the code.

We possibly don't need it right now. However, it could be used in the future (e.g. as a virtio notification doorbell).
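If we did go the "dummy implementation" route Stefano mentions, I would imagine something as simple as the sketch below on the Arm side. This is purely illustrative: the #ifdef, the hook point and the return convention are assumptions about how the common code could be split, not actual code.

#ifdef CONFIG_ARM
static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
{
    /*
     * No user of the buffered ioreq ring on Arm so far (on x86 it is
     * mostly VGA), so report the request as unhandled and never touch
     * the ring indexes -- which also avoids the 64-bit cmpxchg that
     * currently breaks the Arm32 build. The return value here is a
     * placeholder for whatever the common code expects.
     */
    return -EOPNOTSUPP;
}
#endif

That would keep the interface (which, as Stefano says, has bufioreq baked in) while making the Arm side trivial, at the cost of having to revisit it if we ever want the ring for virtio notifications.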

@@ -2275,6 +2282,16 @@ static void check_for_vcpu_work(void)
   */
  void leave_hypervisor_to_guest(void)
  {
+#ifdef CONFIG_IOREQ_SERVER
+    /*
+     * XXX: Check the return. Shall we call that in
+     * continue_running and context_switch instead?
+     * The benefits would be to avoid calling
+     * handle_hvm_io_completion on every return.
+     */

Yeah, that could be a simple and worthwhile optimization.

Well, it is not as simple as it sounds :). handle_hvm_io_completion() is the function in charge of marking the vCPU as waiting for I/O, so we would at least need to split the function.

I wrote this TODO because I wasn't sure about the complexity of handle_hvm_io_completion(current). Looking at it again, the main complexity is the looping over the IOREQ servers.

I think it would be better to optimize handle_hvm_io_completion() rather than trying to hack the context_switch() or continue_running().
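Roughly what I have in mind is sketched below. Only a sketch: vcpu_ioreq_pending() and the _slow() split are made-up names to show the shape, not existing helpers.

bool handle_hvm_io_completion(struct vcpu *v)
{
    /*
     * Fast path: if the vCPU has no I/O waiting for completion, return
     * straight away instead of walking every IOREQ server on each
     * return to the guest.
     */
    if ( !vcpu_ioreq_pending(v) )            /* hypothetical per-vCPU hint */
        return true;

    /* Slow path: the existing loop over the domain's IOREQ servers. */
    return handle_hvm_io_completion_slow(v); /* hypothetical split */
}

The per-vCPU hint would be set when an ioreq is sent and cleared once it is completed, so the common case in leave_hypervisor_to_guest() stays cheap.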

[...]

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8..5823f11 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -385,10 +385,11 @@ static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
                                          mfn_t mfn)
  {
      /*
-     * NOTE: If this is implemented then proper reference counting of
-     *       foreign entries will need to be implemented.
+     * XXX: Handle the reference properly. It looks like the page may not always
+     * belong to d.

Just as a reference, and without taking away anything from the comment,
I think that QEMU is doing its own internal reference counting for these
mappings.

I am not sure how this matters here. We can't really trust the DM to do the right thing if it is not running in dom0.

But, IIRC, the problem is that some of the pages don't belong to a domain, so it is not possible to treat them as foreign mappings (e.g. you wouldn't be able to grab a reference). This investigation was done a couple of years ago, so this may have changed in recent Xen.
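For the pages that do belong to a domain, the kind of reference counting I am alluding to would look something like the sketch below. The extra foreign-domain parameter and the exact p2m type/helpers are assumptions for illustration, not a concrete proposal.

static inline int set_foreign_p2m_entry(struct domain *d, struct domain *fd,
                                        unsigned long gfn, mfn_t mfn)
{
    struct page_info *pg;
    int rc;

    if ( !mfn_valid(mfn) )
        return -EINVAL;

    pg = mfn_to_page(mfn);

    /*
     * Take a reference against the foreign domain so the page cannot
     * disappear while d has it mapped. This is exactly what falls apart
     * for pages that are not assigned to any domain.
     */
    if ( !get_page(pg, fd) )
        return -EINVAL;

    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
    if ( rc )
        put_page(pg);

    return rc;
}

The matching put_page() when the entry is later removed from the p2m, and what to do for the pages without an owner, are the parts that still need a proper answer.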

As a side note, I am a bit surprised to see most of my original TODOs present in the code. What is the plan to solve them?
The plan is to address the most critical TODOs in the current series, and the rest in follow-up series, if there are no objections of course. Any pointers on how to solve them properly would be much appreciated. Unfortunately, at the moment I only have a weak understanding of how they should be fixed.

AFAICT, there is already some discussion happening about those 3 major TODOs. I would suggest going through those discussions. We can clarify anything if needed.

Cheers,

--
Julien Grall



 

