
Re: [Xen-devel] [PATCH] x86/HVM: avoid pointer wraparound in bufioreq handling



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 16 June 2015 10:30
> To: Paul Durrant
> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
> xen-devel; Keir (Xen.org)
> Subject: RE: [Xen-devel] [PATCH] x86/HVM: avoid pointer wraparound in
> bufioreq handling
> 
> >>> On 16.06.15 at 11:15, <Paul.Durrant@xxxxxxxxxx> wrote:
> >>  -----Original Message-----
> >> From: Ian Campbell [mailto:ian.campbell@xxxxxxxxxx]
> >> Sent: 16 June 2015 10:00
> >> To: Jan Beulich
> >> Cc: Andrew Cooper; Wei Liu; Ian Jackson; Stefano Stabellini; xen-devel;
> >> Keir (Xen.org); Paul Durrant
> >> Subject: Re: [Xen-devel] [PATCH] x86/HVM: avoid pointer wraparound in
> >> bufioreq handling
> >>
> >> On Tue, 2015-06-16 at 09:37 +0100, Jan Beulich wrote:
> >> > >>> On 16.06.15 at 10:20, <ian.campbell@xxxxxxxxxx> wrote:
> >> > > On Tue, 2015-06-16 at 07:44 +0100, Jan Beulich wrote:
> >> > >> >>> On 15.06.15 at 16:30, <JBeulich@xxxxxxxx> wrote:
> >> > >> > The number of slots per page being 511 (i.e. not a power of two)
> >> > >> > means that the (32-bit) read and write indexes going beyond 2^32
> >> > >> > will likely disturb operation. Extend I/O req server creation so
> >> > >> > the caller can indicate that it is using suitable atomic accesses
> >> > >> > where needed (not all accesses to the two pointers really need to
> >> > >> > be atomic), allowing the hypervisor to atomically canonicalize
> >> > >> > both pointers when both have gone through at least one cycle.
> >> > >> >
> >> > >> > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> > >>
> >> > >> No matter that it's just a single line change, I realized that I
> >> > >> forgot to Cc the tools maintainers. While a v2 will be needed (see
> >> > >> the reply just sent to Andrew) I'd still appreciate input (if any) to
> >> > >> limit the number of revisions needed.
> >> > >
> >> > > For such a simple toolstack side change which just reflects the
> >> > > underlying hcall interface I have no real opinion so far as the tools
> >> > > side goes, but it would be good to update the comments in xenctrl.h
> >> > > too.
> >> > > With that done for the tools change:
> >> > >         Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> >> >
> >> > Thanks. The request for feedback went beyond the request for
> >> > an ack though, namely
> >> >
> >> > TBD: Do we need to be worried about non-libxc users of the changed
> >> >      (tools only) interface?
> >>
> >> It's (currently at least) a declared non-stable API, so in principle no.
> >> It would be polite to give a heads up to the expected potential users
> >> though, which you've done by CCing the QEMU maintainers I think.
> >> Adding Paul D for completeness though.
> >
> > From my reading, both QEMU upstream and trad are safe. They use a loop
> > of the form:
> >
> > while (read_ptr != write_ptr)
> > {
> >     do stuff
> >
> >     read_ptr += (handled a qword) ? 2 : 1;
> > }
> >
> > So, since the only test is for equality, I think overflow should be handled
> > correctly. Does anything actually need to be fixed?
> 
> Of course this needs to be fixed: When either pointer crosses the
> 2^32 boundary, the slot referenced goes from 0x1f to 0 (due to the
> "modulo 511" operation determining the slot to be used), introducing
> a discontinuity and potentially corrupting data in slots not consumed
> yet.
> 

Ah yes. I thought you were worried about inequality checks going wrong.
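
To convince myself, a quick standalone sketch of the discontinuity (purely
illustrative; it just reproduces the pointer % IOREQ_BUFFER_SLOT_NUM slot
calculation and is not taken from the actual Xen or QEMU sources):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IOREQ_BUFFER_SLOT_NUM 511 /* slots per page; not a power of two */

int main(void)
{
    uint32_t p = UINT32_MAX - 2;

    /*
     * Walk a 32-bit pointer across the 2^32 boundary and print the slot
     * each value maps to.  The slots run ..., 29, 30, 31 and then jump
     * to 0, 1, 2 instead of continuing with 32, 33, ...
     */
    for ( ; ; p++ )
    {
        printf("pointer %#010" PRIx32 " -> slot %" PRIu32 "\n",
               p, p % IOREQ_BUFFER_SLOT_NUM);
        if ( p == 2 )
            break;
    }

    return 0;
}

Presumably this is also why any canonicalization has to adjust both pointers
by the same multiple of 511 in one go: that keeps both pointer % 511 values
and the distance between the two pointers unchanged.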

The way that QEMU processes buffered requests means that a synchronous ioreq is 
a barrier to buffered ring processing. So I guess it should be possible to send 
a synchronous request and then zero the buffered ring counters before they 
reach overflow.
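
Roughly what I have in mind (just a sketch; the threshold, the helper and
the struct below are made up for illustration, and the real thing would
have to live on the hypervisor side with suitable locking):

#include <stdint.h>

/* Minimal stand-in for the two counters in the shared buffered ioreq page. */
struct bufioreq_ptrs {
    uint32_t read_pointer;
    uint32_t write_pointer;
};

/*
 * Hypothetical helper, stubbed out here: emit a synchronous ioreq and wait
 * for the emulator to complete it.  Because QEMU drains the buffered ring
 * before servicing a synchronous request, the ring is empty on return.
 */
static void send_sync_barrier_ioreq(void)
{
    /* ... inject the barrier ioreq and wait for its completion ... */
}

static void maybe_reset_bufioreq_ptrs(struct bufioreq_ptrs *pg)
{
    /* Arbitrary "getting close to 2^32" threshold, for the sketch only. */
    if ( pg->write_pointer < 0xf0000000u )
        return;

    send_sync_barrier_ioreq();

    /*
     * The ring is now drained (read_pointer == write_pointer), so both
     * counters can be zeroed together without touching any unconsumed slot.
     */
    pg->read_pointer = 0;
    pg->write_pointer = 0;
}

The obvious prerequisite is being able to guarantee that the ring really is
empty at the point the counters are zeroed, which is what the barrier is for.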

  Paul
 
> Jan

