
Re: [Xen-devel] [PATCH v6 1/4] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



>>> On 06.09.16 at 10:03, <Paul.Durrant@xxxxxxxxxx> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 06 September 2016 08:58
>> To: George Dunlap <George.Dunlap@xxxxxxxxxx>; Yu Zhang
>> <yu.c.zhang@xxxxxxxxxxxxxxx>
>> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
>> <Paul.Durrant@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
>> Jun Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
>> zhiyuan.lv@xxxxxxxxx; xen-devel@xxxxxxxxxxxxx; Tim (Xen.org)
>> <tim@xxxxxxx>
>> Subject: Re: [PATCH v6 1/4] x86/ioreq server: Add HVMOP to map guest ram
>> with p2m_ioreq_server to an ioreq server.
>> 
>> >>> On 05.09.16 at 19:20, <george.dunlap@xxxxxxxxxx> wrote:
>> > On 05/09/16 14:31, Jan Beulich wrote:
>> >>>>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> >>> @@ -178,8 +179,27 @@ static int hvmemul_do_io(
>> >>>          break;
>> >>>      case X86EMUL_UNHANDLEABLE:
>> >>>      {
>> >>> -        struct hvm_ioreq_server *s =
>> >>> -            hvm_select_ioreq_server(curr->domain, &p);
>> >>> +        struct hvm_ioreq_server *s = NULL;
>> >>> +        p2m_type_t p2mt = p2m_invalid;
>> >>> +
>> >>> +        if ( is_mmio )
>> >>> +        {
>> >>> +            unsigned long gmfn = paddr_to_pfn(addr);
>> >>> +
>> >>> +            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);
>> >>> +
>> >>> +            if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
>> >>> +            {
>> >>> +                unsigned int flags;
>> >>> +
>> >>> +                s = p2m_get_ioreq_server(currd, &flags);
>> >>> +                if ( !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
>> >>> +                    s = NULL;
>> >>> +            }
>> >>> +        }
>> >>> +
>> >>> +        if ( !s && p2mt != p2m_ioreq_server )
>> >>> +            s = hvm_select_ioreq_server(currd, &p);
>> >>
>> >> What I recall is that we had agreed that p2m_ioreq_server pages are to
>> >> be treated as ordinary RAM ones as long as no server can be found. The
>> >> type check here contradicts that. Is there a reason?
>> >
>> > I think it must be a confusion as to what "treated like ordinary RAM
>> > ones" means.  p2m_ram_rw types that get here will go through
>> > hvm_select_ioreq_server(), and (therefore) potentially be treated as
>> > MMIO accesses, which is not how "ordinary RAM" would behave.  If what
>> > you meant was that you want p2m_ioreq_server to behave like
>> > p2m_ram_rw (and be treated as MMIO if it matches an iorange), then
>> > yes.  If what you want is for p2m_ioreq_server to actually act like
>> > RAM, then probably something more needs to happen here.
>> 
>> Well, all I'm questioning is the special casing of p2m_ioreq_server in
>> the if().  That's imo orthogonal to p2m_ram_rw pages not being supposed
>> to make it here (hence the is_mmio check in the earlier if() also looks
>> questionable).  Perhaps it would already help if there was a comment
>> explaining what the exact intended behavior here is.
>> 
> 
> My understanding is that we want accesses that make it here for pages that
> are not of type 'ioreq_server' to result in MMIO emulation (i.e. they hit an
> emulator's requested ranges, or the access is completed as unclaimed MMIO by
> Xen). Accesses that make it here because the page *is* of type 'ioreq_server'
> should be sent to the emulator that has claimed the type and, if no emulator
> currently has a claim to the type, be handled as if the access was to
> r/w RAM.

Makes sense, but it doesn't look like the code above does that:
- leaving s as NULL means the access gets ignored, and
- finding a server by some means would imply handling the access as I/O
  instead of RAM.
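
For illustration only, a rough sketch of what that description would seem
to translate to - this is not the posted code, all identifiers are taken
from the hunk above, and the actual RAM completion is deliberately left as
a placeholder comment:

    case X86EMUL_UNHANDLEABLE:
    {
        struct hvm_ioreq_server *s = NULL;
        p2m_type_t p2mt = p2m_invalid;

        if ( is_mmio )
        {
            unsigned long gmfn = paddr_to_pfn(addr);

            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);

            if ( p2mt == p2m_ioreq_server )
            {
                unsigned int flags = 0;

                /* Only writes can currently be claimed via the new type. */
                if ( dir == IOREQ_WRITE )
                    s = p2m_get_ioreq_server(currd, &flags);

                if ( !s || !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
                {
                    /*
                     * No emulator claims the type: complete the access
                     * as if it was to ordinary r/w RAM, rather than
                     * dropping it or handing it to an unrelated MMIO
                     * emulator.  (Mechanism intentionally omitted.)
                     */
                    break;
                }
            }
        }

        /* Range-based lookup only for accesses not claimed via the type. */
        if ( !s )
            s = hvm_select_ioreq_server(currd, &p);
        ...
    }

I.e. the p2m_ioreq_server case would need an explicit fall-back to RAM
handling, not merely s being left at NULL.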

Jan

