
Re: [Xen-devel] vmx: VT-d posted-interrupt core logic handling



>>> On 10.03.16 at 09:43, <kevin.tian@xxxxxxxxx> wrote:
>>  From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Thursday, March 10, 2016 4:07 PM
>> 
>> theoretical basis for some sort of measurement would be to
>> determine how long a worst-case list traversal would take, with
>> "worst case" being derived from the theoretical limits the
>> hypervisor implementation so far implies: 128 vCPUs per domain
>> (a limit which we will sooner or later need to lift, i.e. taking into
>> consideration a larger value - like the 8k for PV guests - wouldn't
>> hurt) times 32k domains per host, totaling 4M possible list entries.
>> Yes, it is obvious that this limit won't be reachable in practice, but
>> no, any lower limit can't be guaranteed to be good enough.
> 
> Here, do you think the 4M possible entries are 'overly large', so
> that we must have some enforcement in code, or are experiments 
> still required to verify that 4M is indeed a problem (since the total 
> overhead depends on what we do with each entry)? If the latter, 
> what is the criterion for calling it a problem (e.g. 200us in total)?

Well, even with a single loop iteration taking just 1ns, 4M iterations
already amount to 4ms. Anything reaching the order of the minimum
scheduler time slice is potentially problematic. Anything reaching
the order of 1s is known to be actively bad outside of interrupt
context; within interrupt context you of course also need to
consider the interrupt rate, so 4ms would likely already open up
the potential for a CPU not making any forward progress anymore.
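
To make the arithmetic concrete, here is a minimal back-of-the-
envelope sketch (the 1ns per-iteration cost is the optimistic
assumption from above, not a measured value, and the constant names
are made up for illustration; the limits are the ones mentioned
earlier in this thread):

    /* Illustrative only: worst-case PI blocking-list traversal estimate. */
    #define MAX_VCPUS_PER_DOMAIN  128UL    /* current HVM vCPU limit */
    #define MAX_DOMAINS_PER_HOST  32768UL  /* 32k domains */
    #define NSEC_PER_ITERATION    1UL      /* assumed, optimistic */

    unsigned long entries = MAX_VCPUS_PER_DOMAIN * MAX_DOMAINS_PER_HOST;
    unsigned long nsec    = entries * NSEC_PER_ITERATION;
    /*
     * entries == 4194304 (4M), nsec == ~4.2ms - already in the range
     * of a minimum scheduler time slice.
     */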

> There are many linked-list usages in the Xen hypervisor today, each
> with a different theoretical maximum length. The closest one to PI
> might be the usage in tmem (pool->share_list), which is page based 
> and so could grow 'overly large'. Other examples are orders of 
> magnitude lower, e.g. s->ioreq_vcpu_list in the ioreq server (which
> could reach 8K in the above example), and d->arch.hvm_domain.msixtbl_list
> in MSI-X virtualization (which could reach 2^11 per the spec). Do we
> also want to create some artificial scenarios to examine them, 
> since depending on the actual operations K-level entry counts may 
> also become a problem? 
> 
> I just want to figure out how best we can deal with all the related 
> linked-list usages in the current hypervisor. 

As you say, those are (perhaps with the exception of tmem, which
isn't supported anyway due to XSA-15, and which therefore also
isn't on by default) in the order of a few thousand list elements.
And as mentioned above, different bounds apply to lists traversed
in interrupt context vs. those traversed only in "normal" context.
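
For lists traversed only in "normal" context there is at least the
option of bounding the work done per pass. As a rough sketch (the
list and the processing function are hypothetical;
hypercall_preempt_check() is Xen's existing check for voluntarily
yielding in hypercall context):

    /* Illustrative only: batch a long traversal in non-IRQ context. */
    static int walk_long_list(struct list_head *head)
    {
        struct list_head *cur;
        unsigned int batch = 0;

        list_for_each ( cur, head )
        {
            process_entry(cur);                   /* hypothetical */
            if ( ++batch >= 256 && hypercall_preempt_check() )
                /*
                 * A real implementation would record a continuation
                 * point rather than restarting from the list head.
                 */
                return -ERESTART;
        }

        return 0;
    }

No such voluntary preemption point is available in interrupt context,
which is why the acceptable bound there has to be much lower.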

Jan

