
Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen




----- Original Message -----
> On Fri, Aug 10, 2012 at 03:11:57PM -0400, Dave Anderson wrote:
> >
> >
> > ----- Original Message -----
> > > Hi,
> > >
> > > It looks like Xen support in crash has not been maintained
> > > since 2009. I am trying to fix this. Here is a bundle of fixes:
> > >   - xen: Always calculate max_cpus value,
> > >   - xen: Read only crash notes for onlined CPUs,
> > >   - x86/xen: Read variables from dynamically allocated per_cpu data,
> > >   - xen: Get idle data from alternative source,
> > >   - xen: Read data correctly from dynamically allocated console ring, too
> > >     (fixed in this release),
> > >   - xen: Add support for 3 level P2M tree (new patch in this release).
> > >
> > > Daniel
> >
> > Hi Daniel,
> >
> > The original 5 updates specific to the Xen hypervisor look OK,
> > but the new patch 6/6 is going to take some studying and testing
> > to alleviate my backwards-compatibility worries.  Can I ask whether
> > you fully tested it with older 2-level P2M tree kernels?
> 
> As you asked earlier, I have tested all the patches on Xen 3.1 and 4.1
> with Linux kernels 2.6.18 (P2M array), 2.6.36 (2-level P2M tree),
> and 2.6.39 (3-level P2M tree).  Additionally, others in my company
> ran some internal tests.
> 
> Daniel

OK, good.  It also tests fine on a few older pvops kernels that I have
on hand.
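
For anyone following along, the three P2M layouts in Daniel's test matrix
differ roughly like this: 2.6.18-era kernels expose a flat
phys_to_machine_mapping[] array, 2.6.36 uses a 2-level tree, and 2.6.39
introduced the 3-level tree that patch 6/6 handles.  Here is a rough sketch
of the 3-level index arithmetic, modeled on arch/x86/xen/p2m.c with the
x86_64 constants (crash itself reads each level out of the dump with
readmem() rather than dereferencing live pointers, so treat this as
illustration only, not the patch code):

    /*
     * Illustrative 3-level P2M lookup: pfn -> (top, mid, leaf) indices.
     * Constants are the x86_64 values; names mirror the kernel's.
     */
    #define P2M_PER_PAGE     512UL  /* PAGE_SIZE / sizeof(unsigned long) */
    #define P2M_MID_PER_PAGE 512UL  /* PAGE_SIZE / sizeof(unsigned long *) */

    static unsigned long
    p2m_lookup(unsigned long ***p2m_top, unsigned long pfn)
    {
            unsigned long topidx = pfn / (P2M_MID_PER_PAGE * P2M_PER_PAGE);
            unsigned long mididx = (pfn / P2M_PER_PAGE) % P2M_MID_PER_PAGE;
            unsigned long idx    = pfn % P2M_PER_PAGE;

            /* the leaf entry is the mfn (or an invalid-entry marker) */
            return p2m_top[topidx][mididx][idx];
    }

A 2-level-tree kernel simply drops the top layer, which is why the
backwards-compatibility question above matters.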

The only things I've changed are to quiet compiler warnings in x86_64.c and
x86.c by initializing p2m_top to NULL in x86_64_pvops_xendump_p2m_l3_create()
and x86_pvops_xendump_p2m_l3_create(), and to use GETBUF() in those two
functions so that no malloc-failure check is needed.
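
For reference, a minimal self-contained sketch of what those two
adjustments buy us (the stubs below stand in for crash's internal API;
this is not the actual x86_64.c code).  Since crash's getbuf() never
returns NULL to its caller (it bails out with error(FATAL) on its own),
a GETBUF() call site needs no failure check, and the NULL initialization
quiets the "may be used uninitialized" warning on early-return paths:

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGESIZE() 4096UL                /* illustrative page size */

    /* Stand-in for crash's getbuf(): never returns NULL to the caller. */
    static void *getbuf(unsigned long size)
    {
            void *p = malloc(size);
            if (!p) {
                    fprintf(stderr, "getbuf: cannot allocate %lu bytes\n",
                            size);
                    exit(1);                 /* crash would error(FATAL) */
            }
            return p;
    }
    #define GETBUF(X)  getbuf((unsigned long)(X))
    #define FREEBUF(X) free(X)

    int main(void)
    {
            unsigned long *p2m_top = NULL;   /* quiets "may be used
                                              * uninitialized" on error
                                              * paths */
            char *page = GETBUF(PAGESIZE()); /* no malloc-failure check */

            /* ... read one page of P2M data into "page" ... */
            p2m_top = (unsigned long *)page;
            (void)p2m_top;

            FREEBUF(page);
            return 0;
    }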

Queued for crash-6.0.9.

Thanks,
  Dave
  

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

