
Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common



On Tue, 4 Aug 2020, Julien Grall wrote:
> On 04/08/2020 12:10, Oleksandr wrote:
> > On 04.08.20 10:45, Paul Durrant wrote:
> > > > +static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
> > > > +{
> > > > +    return ioreq->state == STATE_IOREQ_READY &&
> > > > +           !ioreq->data_is_ptr &&
> > > > +           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
> > > > +}
> > > I don't think having this in common code is correct. The short-cut of not
> > > completing PIO reads seems somewhat x86 specific. 
> 
> Hmmm, looking at the code, I think it doesn't wait for PIO writes to complete
> (not reads). Did I miss anything?
> 
> > > Does ARM even have the concept of PIO?
> > 
> > I am not 100% sure here, but it seems that it doesn't.
> 
> Technically, PIOs exist on Arm; however, they are accessed the same way as
> MMIOs and will have a dedicated area defined by the HW.
> 
> AFAICT, on Arm64, they are only used for PCI IO Bar.
> 
> Now the question is whether we want to expose them to the Device Emulator as
> PIO or MMIO access. From a generic PoV, a DM shouldn't have to care about the
> architecture used. It should just be able to request a given IOport region.
> 
> So it may make sense to differentiate them in the common ioreq code as well.
> 
> I had a quick look at QEMU and wasn't able to tell whether the PIO and MMIO
> address spaces are different on Arm as well. Paul, Stefano, do you know what
> they are doing?

On the QEMU side, it looks like PIO (address_space_io) is used in
connection with the emulation of the "in" or "out" instructions, see
ioport.c:cpu_inb for instance. Some parts of the PCI emulation in QEMU
use PIO space regardless of the architecture, such as
hw/pci/pci_bridge.c:pci_bridge_initfn.

However, because there are no "in" and "out" instructions on ARM, I don't
think address_space_io can be accessed. Specifically, there is no
equivalent of target/i386/misc_helper.c:helper_inb on ARM.

So I think PIO is unused on ARM in QEMU.
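
For reference, the x86-only path looks roughly like this (paraphrasing
from memory, so modulo details):

    /* target/i386/misc_helper.c -- only built for x86 targets */
    target_ulong helper_inb(CPUX86State *env, uint32_t port)
    {
        /* "in" instruction: one byte read from the PIO address space */
        return address_space_ldub(&address_space_io, port,
                                  cpu_get_mem_attrs(env), NULL);
    }

    /* ioport.c -- generic accessor, but only reached from code that
     * actually models port I/O (x86 insns, some PCI host bridges, ...) */
    uint8_t cpu_inb(uint32_t addr)
    {
        uint8_t val;

        address_space_read(&address_space_io, addr, MEMTXATTRS_UNSPECIFIED,
                           &val, 1);
        return val;
    }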


FYI the ioreq type for PCI config space reads and writes is
IOREQ_TYPE_PCI_CONFIG (neither MMIO nor PIO), which is handled by calling
pci_host_config_read_common/pci_host_config_write_common directly.
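
On the QEMU side that handler boils down to decoding the SBDF/register
carried in req->addr and calling into the PCI core, roughly like this
(simplified sketch; the decoding and device lookup are omitted):

    /* pci_dev/offset come from the SBDF and register encoded in req->addr */
    if (req->dir == IOREQ_READ) {
        req->data = pci_host_config_read_common(pci_dev, offset,
                                                PCI_CONFIG_SPACE_SIZE,
                                                req->size);
    } else {
        pci_host_config_write_common(pci_dev, offset, PCI_CONFIG_SPACE_SIZE,
                                     req->data, req->size);
    }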


It looks like PIO-specific things could be kept x86-specific, without
loss of functionality on the ARM side.
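
One way to do that (just a sketch to illustrate the idea;
arch_ioreq_needs_completion is a made-up name) would be to keep the
state/data_is_ptr checks common and move the short-cut behind an arch
helper:

    /* common */
    static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
    {
        return ioreq->state == STATE_IOREQ_READY &&
               !ioreq->data_is_ptr &&
               arch_ioreq_needs_completion(ioreq);
    }

    /* x86: PIO writes don't need a retry round through the emulator */
    static inline bool arch_ioreq_needs_completion(const ioreq_t *ioreq)
    {
        return ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE;
    }

    /* Arm: no port I/O, so nothing to short-cut */
    static inline bool arch_ioreq_needs_completion(const ioreq_t *ioreq)
    {
        return true;
    }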


> The point of the check isn't to determine whether to wait, but
> what to do after having waited. Reads need a retry round through
> the emulator (to store the result in the designated place),
> while writes don't have such a requirement (and hence guest
> execution can continue immediately in the general case).

The x86 code looks like this:

            rc = hvm_send_ioreq(s, &p, 0);
            if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                vio->io_req.state = STATE_IOREQ_NONE;
            else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
                rc = X86EMUL_OKAY;

Basically hvm_send_ioreq is expected to return RETRY.
Then, only if it is a PIO write (an ioreq that doesn't need completion),
rc is turned into OKAY right away. Otherwise, rc stays RETRY.

So, normally, hvmemul_do_io is expected to return RETRY, because the
emulator is not done yet. Am I understanding the code correctly?

If so, who is handling RETRY on x86? I tried to follow the call chain
but ended up in the x86 emulator and got lost :-)


At some point later, after the emulator (QEMU) has completed the
request, handle_hvm_io_completion gets called, which ends up calling
handle_mmio(), finishing the job on the Xen side too.
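
For completeness, the re-entry point is the io_completion switch there
(paraphrasing from memory, so the exact shape may differ):

    /* xen/arch/x86/hvm/ioreq.c:handle_hvm_io_completion(), roughly */
    switch ( io_completion )
    {
    case HVMIO_no_completion:
        break;                /* e.g. the PIO-write short-cut above */

    case HVMIO_mmio_completion:
        return handle_mmio(); /* re-run the insn now that io_req holds
                                 the emulator's answer */

    case HVMIO_pio_completion:
        return handle_pio(vio->io_req.addr, vio->io_req.size,
                          vio->io_req.dir);
    ...
    }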


In other words:
RETRY ==> emulation in progress
OKAY  ==> emulation completed


Is that correct?
