
Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move / Xen on Xen nested virt



Hello,

IanJ: I can't see this patch in qemu-xen-unstable .. 
Does the patch still need anything fixed before it can be applied? 

Thanks,

-- Pasi
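(For anyone following along: the bug being fixed is the per-item address arithmetic in cpu_ioreq_pio/cpu_ioreq_move. Below is a minimal standalone sketch of the old vs. patched computation; phys_addr_t and mini_ioreq are simplified, hypothetical stand-ins for QEMU's target_phys_addr_t and ioreq_t, so this is an illustration of the technique, not the qemu-traditional code itself.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the QEMU types used in the patch. */
typedef uint64_t phys_addr_t;

struct mini_ioreq {
    uint32_t size;  /* bytes per item */
    int df;         /* direction flag: 1 = addresses descend */
};

/* Old-style arithmetic: sign * i * size is evaluated in int, so it
 * overflows (undefined behaviour) once i * size exceeds INT_MAX. */
static phys_addr_t old_item_addr(phys_addr_t base, const struct mini_ioreq *req,
                                 int sign, int i)
{
    return base + (phys_addr_t)(sign * i * (int)req->size);
}

/* Patched-style arithmetic: widen to the address type before
 * multiplying, then add or subtract depending on df, so any
 * wraparound is well-defined unsigned truncation rather than UB. */
static phys_addr_t new_item_addr(phys_addr_t base, const struct mini_ioreq *req,
                                 uint32_t i)
{
    phys_addr_t offset = (phys_addr_t)req->size * i;
    return req->df ? base - offset : base + offset;
}
```

With size = 8 and i = 0x40000000 the widened offset is 0x200000000, which the int-typed version would have mangled; using uint32_t for the loop counter likewise avoids the infinite-loop risk when req->count > INT_MAX.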

On Wed, Jan 09, 2013 at 09:09:26AM +0200, Pasi Kärkkäinen wrote:
> On Tue, Jan 08, 2013 at 01:49:06AM +0000, Xu, Dongxiao wrote:
> > > 
> > > > -----Original Message-----
> > > > From: Ian Jackson [mailto:Ian.Jackson@xxxxxxxxxxxxx]
> > > > Sent: Saturday, December 08, 2012 12:34 AM
> > > > To: Ian Campbell
> > > > Cc: Stefano Stabellini; Xu, Dongxiao; xen-devel@xxxxxxxxxxxxxxxxxxx;
> > > > qemu-devel@xxxxxxxxxx
> > > > Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio 
> > > > and
> > > > cpu_ioreq_move
> > > >
> > > > Ian Campbell writes ("Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify
> > > > cpu_ioreq_pio   and cpu_ioreq_move"):
> > > > > On Fri, 2012-12-07 at 16:14 +0000, Ian Jackson wrote:
> > > > > > +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> > > > > > +    if (req->df) addr -= offset;
> > > > > > +    else addr -= offset;
> > > > >
> > > > > One of these -= should be a += I presume?
> > > >
> > > > Uh, yes.
> > > >
> > > > > [...]
> > > > > > +                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
> > > > >
> > > > > This seems to be the only one with this cast, why?
> > > >
> > > > This is a mistake.
> > > >
> > > > > write_phys_req_item takes a target_phys_addr_t so this will happen
> > > > > regardless I think.
> > > >
> > > > Indeed.
> > > >
> > > > Ian.
> > > 
> > > Tested this v2 patch on my system, and it works.
> > 
> > Are we planning to check this patch into qemu-traditional? It is a 
> > critical patch for booting Xen on Xen.
> > 
> 
> We should definitely get Xen-on-Xen working.. 
> 
> 
> -- Pasi
> 
> > Thanks,
> > Dongxiao
> > 
> > > 
> > > Thanks,
> > > Dongxiao
> > > 
> > > 
> > > >
> > > > commit fd3865f8e0d867a203b4ddcb22eefa827cfaca0a
> > > > Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > > > Date:   Fri Dec 7 16:02:04 2012 +0000
> > > >
> > > >     cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
> > > >
> > > >     The current code compares i (int) with req->count (uint32_t) in a for
> > > >     loop, risking an infinite loop if req->count is >INT_MAX.  It also
> > > >     does the multiplication of req->size in a too-small type, leading to
> > > >     integer overflows.
> > > >
> > > >     Turn read_physical and write_physical into two different helper
> > > >     functions, read_phys_req_item and write_phys_req_item, that take
> > > >     care of adding or subtracting the offset depending on sign.
> > > >
> > > >     This moves the multiplication by req->size to a single place, where
> > > >     the integer overflows can be dealt with by casting to wide-enough
> > > >     unsigned types.
> > > >
> > > >     Reported-By: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
> > > >     Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > > >     Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > > >
> > > >     --
> > > >     v2: Fix sign when !!req->df.  Remove a useless cast.
> > > >
> > > > diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
> > > > index c6d049c..63a938b 100644
> > > > --- a/i386-dm/helper2.c
> > > > +++ b/i386-dm/helper2.c
> > > > @@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long
> > > > addr,
> > > >      }
> > > >  }
> > > >
> > > > -static inline void read_physical(uint64_t addr, unsigned long size, void *val)
> > > > +/*
> > > > + * Helper functions which read/write an object from/to physical guest
> > > > + * memory, as part of the implementation of an ioreq.
> > > > + *
> > > > + * Equivalent to
> > > > + *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
> > > > + *                          val, req->size, 0/1)
> > > > + * except without the integer overflow problems.
> > > > + */
> > > > +static void rw_phys_req_item(target_phys_addr_t addr,
> > > > +                             ioreq_t *req, uint32_t i, void *val, int rw)
> > > >  {
> > > > -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
> > > > +    /* Do everything unsigned so overflow just results in a truncated result
> > > > +     * and accesses to undesired parts of guest memory, which is up
> > > > +     * to the guest */
> > > > +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> > > > +    if (req->df) addr -= offset;
> > > > +    else addr += offset;
> > > > +    cpu_physical_memory_rw(addr, val, req->size, rw);
> > > >  }
> > > > -
> > > > -static inline void write_physical(uint64_t addr, unsigned long size, void *val)
> > > > +static inline void read_phys_req_item(target_phys_addr_t addr,
> > > > +                                      ioreq_t *req, uint32_t i, void *val)
> > > >  {
> > > > -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
> > > > +    rw_phys_req_item(addr, req, i, val, 0);
> > > > +}
> > > > +static inline void write_phys_req_item(target_phys_addr_t addr,
> > > > +                                       ioreq_t *req, uint32_t i, void *val)
> > > > +{
> > > > +    rw_phys_req_item(addr, req, i, val, 1);
> > > >  }
> > > >
> > > >  static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
> > > >  {
> > > > -    int i, sign;
> > > > -
> > > > -    sign = req->df ? -1 : 1;
> > > > +    uint32_t i;
> > > >
> > > >      if (req->dir == IOREQ_READ) {
> > > >          if (!req->data_is_ptr) {
> > > > @@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> > > *req)
> > > >
> > > >              for (i = 0; i < req->count; i++) {
> > > >                  tmp = do_inp(env, req->addr, req->size);
> > > > -                write_physical((target_phys_addr_t) req->data
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > +                write_phys_req_item(req->data, req, i, &tmp);
> > > >              }
> > > >          }
> > > >      } else if (req->dir == IOREQ_WRITE) {
> > > > @@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> > > *req)
> > > >              for (i = 0; i < req->count; i++) {
> > > >                  unsigned long tmp = 0;
> > > >
> > > > -                read_physical((target_phys_addr_t) req->data
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > +                read_phys_req_item(req->data, req, i, &tmp);
> > > >                  do_outp(env, req->addr, req->size, tmp);
> > > >              }
> > > >          }
> > > > @@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> > > > *req)
> > > >
> > > >  static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
> > > >  {
> > > > -    int i, sign;
> > > > -
> > > > -    sign = req->df ? -1 : 1;
> > > > +    uint32_t i;
> > > >
> > > >      if (!req->data_is_ptr) {
> > > >          if (req->dir == IOREQ_READ) {
> > > >              for (i = 0; i < req->count; i++) {
> > > > -                read_physical(req->addr
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &req->data);
> > > > +                read_phys_req_item(req->addr, req, i, &req->data);
> > > >              }
> > > >          } else if (req->dir == IOREQ_WRITE) {
> > > >              for (i = 0; i < req->count; i++) {
> > > > -                write_physical(req->addr
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &req->data);
> > > > +                write_phys_req_item(req->addr, req, i, &req->data);
> > > >              }
> > > >          }
> > > >      } else {
> > > > @@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env,
> > > ioreq_t
> > > > *req)
> > > >
> > > >          if (req->dir == IOREQ_READ) {
> > > >              for (i = 0; i < req->count; i++) {
> > > > -                read_physical(req->addr
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > -                write_physical((target_phys_addr_t )req->data
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > +                read_phys_req_item(req->addr, req, i, &tmp);
> > > > +                write_phys_req_item(req->data, req, i, &tmp);
> > > >              }
> > > >          } else if (req->dir == IOREQ_WRITE) {
> > > >              for (i = 0; i < req->count; i++) {
> > > > -                read_physical((target_phys_addr_t) req->data
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > -                write_physical(req->addr
> > > > -                  + (sign * i * req->size),
> > > > -                  req->size, &tmp);
> > > > +                read_phys_req_item(req->data, req, i, &tmp);
> > > > +                write_phys_req_item(req->addr, req, i, &tmp);
> > > >              }
> > > >          }
> > > >      }
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> 



 

