
Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram



Jan Beulich wrote on 2014-02-18:
>>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx> wrote:
>> Jan Beulich wrote on 2014-02-17:
>>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>>>> And second, I have been fighting with finding both conditions and
>>>> (eventually) the root cause of a severe performance regression
>>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>>> became _much_ worse after adding in the patch here (while in fact
>>>> I had hoped it might help with the originally observed
>>>> degradation): X startup fails due to timing out, and booting the
>>>> guest now takes about 20 minutes. I didn't find the root cause of
>>>> this yet, but meanwhile I know that
>>>> - the same isn't observable on SVM
>>>> - there's no problem when forcing the domain to use shadow mode
>>>> - there's no need for any device to actually be assigned to the
>>>>   guest
>>>> - the regression is very likely purely graphics related (based on
>>>>   the observation that when running something that regularly but
>>>>   not heavily updates the screen with X up, the guest consumes a
>>>>   full CPU's worth of processing power, yet when that updating
>>>>   doesn't happen, CPU consumption goes down, and it goes further
>>>>   down when shutting down X altogether - at least as long as the
>>>>   patch here doesn't get involved).
>>>> This I'm observing on a Westmere box (and I didn't notice it
>>>> earlier because that's one of those where due to a chipset erratum
>>>> the IOMMU gets turned off by default), so it's possible that this
>>>> can't be seen on more modern hardware. I'll hopefully find time
>>>> today to check this on the one newer (Sandy Bridge) box I have.
>>> 
>>> Just got done with trying this: By default, things work fine there. As
>>> soon as I use "iommu=no-snoop", things go bad (even worse than on
>>> the older box - the guest is consuming about 2.5 CPUs' worth of processing
>>> power _without_ the patch here in use, so I don't even want to think
>>> about trying it there); I guessed that to be another of the potential
>>> sources of the problem since on that older box the respective hardware
>>> feature is unavailable.
>>> 
>>> While I'll try to look into this further, I guess I have to defer
>>> to our VT-d specialists at Intel at this point...
>>> 
>> 
>> Hi, Jan,
>> 
>> I tried to reproduce it, but unfortunately I cannot reproduce it on
>> my box (Sandy Bridge EP) with the latest Xen (including my patch). I
>> guess my configuration or steps may be wrong; here is mine:
>> 
>> 1. Add iommu=1,no-snoop to the Xen cmd line:
>> (XEN) Intel VT-d Snoop Control not enabled.
>> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
>> (XEN) Intel VT-d Queued Invalidation enabled.
>> (XEN) Intel VT-d Interrupt Remapping enabled.
>> (XEN) Intel VT-d Shared EPT tables enabled.
>> 
>> 2. Boot a rhel6u4 guest.
>> 
>> 3. After the guest boots up, run startx inside the guest.
>> 
>> 4. After a few seconds, the X window shows and I didn't see any
>> error. Also, the CPU utilization is about 1.7%.
>> 
>> Anything wrong?
> 
> Nothing at all, as it turns out. The regression is due to Dongxiao's
> 
> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
> 
> which I have in my tree as part of various things pending for 4.5.
> And which at the first, second, and third glance looks pretty innocent
> (IOW I still have to find out _why_ it is wrong).
> 
> In any case - I'm very sorry for the false alarm.
> 

Not a problem at all. On the contrary, we need to thank you for helping us 
fix this issue. :)

BTW, I still cannot reproduce it on my box, even when I use SLES 11 SP3 as
the guest.
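
In case it helps to compare setups, below is roughly how I enable the option
and watch the guest's CPU consumption on my side. Treat it only as a sketch:
the grub2 file and variable are an assumption about the dom0 setup, so adjust
for your bootloader.

    # /etc/default/grub (grub2); regenerate grub.cfg and reboot afterwards
    GRUB_CMDLINE_XEN_DEFAULT="iommu=1,no-snoop"

    # confirm after boot that snoop control really is off
    xl dmesg | grep -i snoop
    # expect: (XEN) Intel VT-d Snoop Control not enabled.

    # watch per-domain CPU utilization while startx runs in the guest
    xentop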

> Jan


Best regards,
Yang




 

