
Re: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features





On 06/08/2020 14:27, Oleksandr wrote:

On 06.08.20 14:08, Julien Grall wrote:

Hi Julien


What is this function supposed to do?
Agreed, it sounds a bit confusing. I assume it is supposed to complete a guest MMIO access after emulation has finished.

Shall I rename it to something more appropriate (maybe by adding an ioreq prefix)?

How about ioreq_handle_complete_mmio()?

That sounds fine to me.




diff --git a/xen/common/memory.c b/xen/common/memory.c
index 9283e5e..0000477 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -8,6 +8,7 @@
   */
    #include <xen/domain_page.h>
+#include <xen/hvm/ioreq.h>
  #include <xen/types.h>
  #include <xen/lib.h>
  #include <xen/mm.h>
@@ -30,10 +31,6 @@
  #include <public/memory.h>
  #include <xsm/xsm.h>
  -#ifdef CONFIG_IOREQ_SERVER
-#include <xen/hvm/ioreq.h>
-#endif
-

Why do you remove something you just introduced?
The reason I guarded that header was to keep the previous patch ("xen/mm: Make x86's XENMEM_resource_ioreq_server handling common") buildable on Arm before the arch IOREQ header was added. I tried to make sure the result after each patch still builds, to retain bisectability. Since the current patch adds the Arm IOREQ-specific bits (including the header), the guard can be removed as it is no longer needed.
I agree we want to have the build bisectable. However, I am still puzzled why it is necessary to remove the #ifdef and move it earlier in the list.

Do you mind providing more details?
The previous patch "xen/mm: Make x86's XENMEM_resource_ioreq_server handling common" breaks the build on Arm because it includes xen/hvm/ioreq.h, which requires the arch header (asm/hvm/ioreq.h) to be present. That missing arch header, together with the other arch-specific bits, is only introduced here in the current patch.

I understand that both Arm and x86 now implement asm/hvm/ioreq.h.
However, please keep in mind that there might be other architectures in the future.

With your change here, you would require every new arch to implement asm/hvm/ioreq.h even if the developer has no plan to use the feature.

Probably I should have rearranged the changes so as not to introduce the #ifdef and then remove it...

Ideally we want to avoid #ifdefs in the common code. But if this can't be done in a header, then the #ifdef here would be fine.




[...]

+
+bool handle_mmio(void);
+
+static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
+{
+    /*
+     * TODO: PIO is not expected to be used on Arm, so this is left
+     * unimplemented until there is a real need for it.
+     */

Can you expand this TODO? What do you expect to do?
I didn't expect this to be called on Arm. Sorry, I am not sure I have an idea how to handle this properly. I would keep it unimplemented until there is a real need.
Will expand TODO.

Let see how the conversation on patch#1 goes about PIO vs MMIO.

ok





+    BUG();
+    return true;
+}
+
+static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
+{
+    return p->addr;
+}

I understand that the x86 version is more complex as it checks p->df. However, aside from reducing the complexity, I am not sure why we would want to diverge here.

After all, IOREQ is now meant to be a common feature.
Well, no objections at all.
Could you please clarify how 'df' (the Direction Flag?) could be handled/used on Arm?

On x86, this is used by 'rep' instruction to tell the direction to iterate (forward or backward).

On Arm, all accesses to an MMIO region are done as a single memory access. So for now, we can always safely set it to 0.

I see that try_fwd_ioserv() always sets it to 0. Or should I just reuse the x86 helpers as-is, which (together with count = df = 0) will result in what we actually have here?
AFAIU, both count and df should be 0 on Arm.

Thanks for the explanation. The only remaining question is where to put the common helpers hvm_mmio_first_byte/hvm_mmio_last_byte (the common io.h?).

It feels to me that they should be part of the common ioreq.h.

Cheers,

--
Julien Grall



 

