To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 3 of 4] Nested p2m: clarify logic in p2m_get_nestedp2m()
From: Christoph Egger <Christoph.Egger@xxxxxxx>
Date: Mon, 27 Jun 2011 11:46:05 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 27 Jun 2011 02:48:20 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110624150509.GJ9784@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1308759026@xxxxxxxxxxxxxxxxxxxxxxx> <b265371addbbc8a58c95.1308759029@xxxxxxxxxxxxxxxxxxxxxxx> <4E049E64.9080908@xxxxxxx> <20110624143726.GI9784@xxxxxxxxxxxxxxxxxxxxxxx> <4E04A4F0.4090803@xxxxxxx> <20110624150509.GJ9784@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; NetBSD amd64; en-US; rv:1.9.2.17) Gecko/20110523 Lightning/1.0b3pre Thunderbird/3.1.10
On 06/24/11 17:05, Tim Deegan wrote:
> Hi,
>
> At 15:53 +0100 on 24 Jun (1308930816), Christoph Egger wrote:
>>> More generally, I think that you need to figure out exactly what
>>> behaviour you want from this function.  For example in the current code
>>> there's no way that two vcpus with the same ncr3 value can share a
>>> nested-p2m.  Is that deliberate?
>>
>> By 'current code' do you mean with or w/o this patch ?
>
> Both, and all versions of the code from before my current series to the
> full series applied.
>
>> It is deliberate that two vcpus with the same ncr3 share a nested-p2m.
>
> But they don't.  The code in current unstable tip does this:
>
>     for (i = 0; i < MAX_NESTEDP2M; i++) {
>         p2m = d->arch.nested_p2m[i];
>         if ((p2m->cr3 != cr3 && p2m->cr3 != CR3_EADDR) || (p2m != nv->nv_p2m))
>             continue;
>
>         // ... return this p2m
>     }
>
>     /* All p2m's are or were in use. Take the least recently used one,
>      * flush it and reuse.
>      */
>     for (i = 0; i < MAX_NESTEDP2M; i++) {
>         p2m = p2m_getlru_nestedp2m(d, NULL);
>         rv = p2m_flush_locked(p2m);
>         if (rv == 0)
>             break;
>     }
>
>     // ... return this p2m
>
> The first loop never returns a p2m that's != nv->nv_p2m.  The second
> loop always returns a fresh, flushed p2m.  So there's no way that two
> different vcpus, starting with nv->nv_p2m == NULL, can ever get the same
> p2m as each other.
>
> The pseudocode is basically:
>  - If I have an existing nv_p2m and it hasn't been flushed, reuse it.
>  - Else flush all np2ms in LRU order and return the last one flushed.

I see. Thanks.
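
(To restate the point of the excerpt as a self-contained sketch: the helper
below is hypothetical and only re-packages the quoted loop with comments,
using the same names d, nv, cr3, MAX_NESTEDP2M and CR3_EADDR as above.)

    /* Hypothetical restatement of the quoted lookup loop. */
    static struct p2m_domain *np2m_lookup_sketch(struct domain *d,
                                                 struct nestedvcpu *nv,
                                                 uint64_t cr3)
    {
        int i;

        for ( i = 0; i < MAX_NESTEDP2M; i++ )
        {
            struct p2m_domain *p2m = d->arch.nested_p2m[i];

            /* A slot matches only if BOTH halves hold: its cr3 matches
             * (or the slot is free, CR3_EADDR) AND the slot is the very
             * p2m already cached in this vcpu's nv_p2m.  A vcpu starting
             * with nv->nv_p2m == NULL fails the second half for every
             * slot, including one set up by another vcpu for the same
             * ncr3 -- hence no sharing. */
            if ( (p2m->cr3 != cr3 && p2m->cr3 != CR3_EADDR) ||
                 (p2m != nv->nv_p2m) )
                continue;

            return p2m;   /* can only ever be nv->nv_p2m itself */
        }

        return NULL;      /* caller falls through to flush-and-reuse */
    }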

> My patch 3/4 doesn't change the logic at all (I think); your latest fix
> just avoids the over-aggressive flushing of all np2ms.

Yes, and it results in a noticeable performance boost.

>> But fixing the p2m locking problem in upstream tree has a higher
>> priority right now and we can work on that after the p2m locking
>> issue is fixed upstream.
>
> AFAICS the locking is fixed by the current set of patches

Yes, I can confirm that; patch 4 fixes it.

> (though I'm still not able to run Xen-in-Xen well enough to test them).

Can you describe what problem you are facing? Is it a hanging L1 Dom0?
Does L1 Dom0 not see the SVM cpuid feature bit?
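
(In case it helps with the second question: the SVM feature bit is ECX
bit 2 of CPUID leaf 0x80000001, so a quick check from inside the L1 Dom0
could look like this standalone snippet, using GCC's <cpuid.h>.)

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* AMD extended feature flags live in CPUID leaf 0x80000001;
         * SVM is advertised in ECX bit 2. */
        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 0x80000001 not supported");
            return 1;
        }
        printf("SVM feature bit: %s\n", (ecx & (1u << 2)) ? "set" : "clear");
        return 0;
    }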

> I can send the full series again for clarity if you like.

Yes, please!

> The outstanding bug is that there are many more IPIs than previously;
> I suspect that your latest fix will reduce them quite a lot by avoiding
> a storm of mutually-destructive flush operations.

Yes, because p2m_flush_nestedp2m() runs less often.

The number of IPIs sent from p2m_flush_nestedp2m() is still 10 times
higher, though.

> If the performance is still too bad we can add more IPI-avoidance strategies.

Yes; this would still bring a significant performance gain for L3 guests,
because both the host and the L1 guest then send fewer IPIs.

We can do this performance improvement in a separate patch series.
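
(For later reference, one shape such an IPI-avoidance strategy could take,
as a sketch only: track which physical cpus actually have a given np2m in
use and target the flush IPI at that mask instead of broadcasting.
on_selected_cpus() is Xen's generic cross-cpu call; the mask field and the
flush handler below are hypothetical names, not existing interfaces.)

    /* Flush this cpu's nested TLB state for the given p2m
     * (details elided in this sketch). */
    static void np2m_flush_this_cpu(void *arg)
    {
        /* struct p2m_domain *p2m = arg; ... */
    }

    static void np2m_targeted_flush(struct p2m_domain *p2m)
    {
        /* Only cpus that have loaded this np2m can hold stale
         * translations, so everyone else is skipped. */
        if ( !cpumask_empty(&p2m->np2m_in_use) )      /* hypothetical field */
        {
            on_selected_cpus(&p2m->np2m_in_use,
                             np2m_flush_this_cpu, p2m, /* wait = */ 1);
            cpumask_clear(&p2m->np2m_in_use);
        }
    }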

Christoph


--
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
