
Re: [Xen-devel] backport requests for 4.x-testing



On Fri, Mar 30, 2012 at 12:23 AM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
> On Fri, Mar 30, 2012 at 12:20:05AM +0800, Teck Choon Giam wrote:
>> On Thu, Mar 29, 2012 at 11:56 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@xxxxxxxxxx> wrote:
>> >> >> > Applied 23225 and 24013. The other, toolstack-related, patches I 
>> >> >> > will leave
>> >> >> > for a tools maintainer to ack or apply.
>> >> >>
>> >> > Hey Teck,
>> >> >
>> >> > Thanks for reporting!
>> >> >
>> >> >> With the two backport patches committed in xen-4.1-testing (changeset
>> >> >> 23271:13741fd6253b), xl list or xl create domU will cause 100% CPU and
>> >> >
>> >> > xl list?
>> >>
>> >> After a reboot with no domU running, xl list is fine, but if I start an
>> >> HVM domU it gets stuck and causes high load; if I then open another ssh
>> >> terminal and issue xl list, that gets stuck as well.
>> >
>> > This fixes it for me:
>> >
>> > diff -r 13741fd6253b xen/arch/x86/domain.c
>> > --- a/xen/arch/x86/domain.c     Thu Mar 29 10:20:58 2012 +0100
>> > +++ b/xen/arch/x86/domain.c     Thu Mar 29 11:44:54 2012 -0400
>> > @@ -558,9 +558,9 @@ int arch_domain_create(struct domain *d,
>> >         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo =
>> >             (CONFIG_PAGING_LEVELS != 4);
>> >
>> > -        spin_lock_init(&d->arch.e820_lock);
>> >     }
>> >
>> > +    spin_lock_init(&d->arch.e820_lock);
>> >     memset(d->arch.cpuids, 0, sizeof(d->arch.cpuids));
>> >     for ( i = 0; i < MAX_CPUID_INPUT; i++ )
>> >     {
>> > @@ -605,8 +605,8 @@ void arch_domain_destroy(struct domain *
>> >
>> >     if ( is_hvm_domain(d) )
>> >         hvm_domain_destroy(d);
>> > -    else
>> > -        xfree(d->arch.e820);
>> > +
>> > +    xfree(d->arch.e820);
>> >
>> >     vmce_destroy_msr(d);
>> >     free_domain_pirqs(d);
>> >
>> >
>> > The issue is that upstream we have two 'domain structs' - one for PV and
>> > one for HVM. In 4.1 there is just 'arch_domain', and the calls to create
>> > the guests go through the same interface (at least with xl; with xm they
>> > are separate). I had only initialized the spinlock in the PV case, but
>> > not in the HVM case. This fix to the backport resolves the problem.
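
To make the failure mode concrete, here is a minimal, self-contained C sketch
of the pattern the patch addresses. It is not Xen code: the spinlock type, the
spin_lock helpers, and the arch_domain_create()/set_e820_map() stubs below are
simplified stand-ins. The point is only that initializing the lock on the PV
branch alone leaves an HVM domain spinning on whatever garbage the lock field
happens to contain when the e820 path later takes it.

/*
 * Minimal stand-alone sketch, NOT actual Xen code: the lock type, helpers,
 * and the two functions below only illustrate the pattern the patch fixes.
 */
#include <stdbool.h>
#include <stdio.h>

struct spinlock { volatile int locked; };            /* stand-in lock type */

static void spin_lock_init(struct spinlock *l) { l->locked = 0; }

static void spin_lock(struct spinlock *l)            /* naive test-and-set spin */
{
    while (__sync_lock_test_and_set(&l->locked, 1))
        ;                                            /* spins forever if the lock
                                                        was never initialized and
                                                        happens to read as "held" */
}

static void spin_unlock(struct spinlock *l) { __sync_lock_release(&l->locked); }

struct arch_domain { struct spinlock e820_lock; };   /* only the field of interest */

static void arch_domain_create(struct arch_domain *d, bool is_hvm)
{
    if ( !is_hvm )
    {
        /* PV-only setup ...
         * Buggy backport: the init lived only on this branch:
         *     spin_lock_init(&d->e820_lock);
         */
    }
    /* Fixed backport: initialize unconditionally, as in the diff above. */
    spin_lock_init(&d->e820_lock);
}

static void set_e820_map(struct arch_domain *d)      /* stand-in for the e820 path */
{
    spin_lock(&d->e820_lock);
    /* ... install the map ... */
    spin_unlock(&d->e820_lock);
}

int main(void)
{
    struct arch_domain hvm = { { 1 } };   /* garbage that reads as "held"     */
    arch_domain_create(&hvm, true);       /* HVM path; the fix resets the lock */
    set_e820_map(&hvm);                   /* completes instead of spinning     */
    puts("e820 map installed without spinning");
    return 0;
}

In this sketch, dropping the unconditional spin_lock_init() and relying only on
the PV branch makes set_e820_map() spin forever for the HVM case, which matches
the hang on xl create reported above.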
>>
>> Thanks for your prompt fix ;)
>>
>> I am compiling with the fix patch you provided on top of
>> xen-4.1-testing changeset 23271:13741fd6253b.  Will test and report
>> back if you are interested ;)
>
> Yes please! If you find other issues, please report them immediately! Thanks
> again for doing this.

Thanks and it works!

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

