
Re: [Xen-devel] [PATCH v4 17/17] x86/hvm: track large memory mapped accesses by buffer offset



>>> On 25.06.15 at 12:55, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Paul Durrant
>> Sent: 25 June 2015 11:52
>> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> > Sent: 25 June 2015 11:47
>> > >>> On 24.06.15 at 13:24, <paul.durrant@xxxxxxxxxx> wrote:
>> > > @@ -621,14 +574,41 @@ static int hvmemul_phys_mmio_access(
>> > >
>> > >      for ( ;; )
>> > >      {
>> > > -        rc = hvmemul_do_mmio_buffer(gpa, &one_rep, chunk, dir, 0,
>> > > -                                    *buffer);
>> > > -        if ( rc != X86EMUL_OKAY )
>> > > -            break;
>> > > +        /* Have we already done this chunk? */
>> > > +        if ( (*off + chunk) <= vio->mmio_cache[dir].size )
>> >
>> > I can see why you would like to get rid of the address check, but
>> > I'm afraid you can't: you have to avoid mixing up multiple
>> > same-kind (read or write) memory accesses that a single
>> > instruction can perform. While generally I would assume that
>> > secondary accesses (like the I/O bitmap read associated with an
>> > OUTS) wouldn't go to MMIO, CMPS with both operands in MMIO
>> > would break even if neither crosses a page boundary (not to
>> > mention when the emulator starts supporting the scatter/gather
>> > instructions, although supporting them will require further
>> > changes, or we could choose to do them one element at a time).
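
To make that collision concrete, here is a minimal, self-contained C
sketch. It is not the actual Xen code: every structure, field and
function name below (struct mmio_cache, mmio_cache_lookup() and so on)
is invented for illustration. The point is that a replay cache keyed
only by buffer offset cannot tell apart two same-direction accesses
made by one instruction (e.g. CMPS reading both of its MMIO operands),
whereas keying each cache entry by the linear address of the access
keeps them separate.

    /*
     * Illustrative sketch only -- NOT the real hvm_vcpu_io layout.
     * Names are assumptions made up for this example.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define MMIO_CACHE_ENTRIES 2   /* e.g. the two reads a CMPS can issue */
    #define MMIO_CACHE_BUF     64

    struct mmio_cache_entry {
        unsigned long linear;              /* linear address of the access  */
        unsigned int  size;                /* bytes already emulated so far */
        uint8_t       buffer[MMIO_CACHE_BUF];
    };

    struct mmio_cache {
        unsigned int count;
        struct mmio_cache_entry entry[MMIO_CACHE_ENTRIES];
    };

    /* Find, or allocate, the entry describing the access at @linear. */
    static struct mmio_cache_entry *
    mmio_cache_lookup(struct mmio_cache *cache, unsigned long linear)
    {
        unsigned int i;

        for ( i = 0; i < cache->count; i++ )
            if ( cache->entry[i].linear == linear )
                return &cache->entry[i];

        if ( cache->count == MMIO_CACHE_ENTRIES )
            return NULL;                   /* caller must treat as an error */

        cache->entry[cache->count].linear = linear;
        cache->entry[cache->count].size = 0;
        return &cache->entry[cache->count++];
    }

    /* Has this chunk of the access already been done on an earlier pass? */
    static bool
    mmio_cache_hit(const struct mmio_cache_entry *ent,
                   unsigned int off, unsigned int chunk)
    {
        return off + chunk <= ent->size;
    }
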
>> 
>> Ok. Can I assume at most two distinct sets of addresses for read or
>> write? If so then I can just keep two sets of caches in the hvm_io
>> struct.
>> 
>> 
> 
> Oh, I mean linear addresses here BTW.

Yes, that's what I implied - afaics switching to linear addresses
shouldn't cause any problems (though it does make me wonder whether
physical addresses were originally chosen for a reason).

Jan
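
Continuing the sketch above, the per-vCPU I/O state could then carry
one such cache per access direction, with lookups keyed on both the
direction and the linear address of the access. Again, struct
hvm_io_state and hvmemul_find_mmio_cache() are illustrative
assumptions, not the real hvm_vcpu_io definition (this reuses the
struct mmio_cache types from the earlier sketch):

    /* One replay cache per access direction, as discussed above. */
    enum mmio_dir { MMIO_READ = 0, MMIO_WRITE = 1 };

    struct hvm_io_state {
        struct mmio_cache cache[2];        /* [MMIO_READ] and [MMIO_WRITE] */
    };

    static struct mmio_cache_entry *
    hvmemul_find_mmio_cache(struct hvm_io_state *io,
                            unsigned long linear, enum mmio_dir dir)
    {
        /* Same-direction accesses to different linear addresses (the
         * CMPS case) land in different entries, so their replayed
         * data cannot get mixed up between operands. */
        return mmio_cache_lookup(&io->cache[dir], linear);
    }
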

