
RE: [Xen-devel] [PATCH][SPT][DISCUSS] BUG() in shadow.h delete_shadow_status() with HVM guest


  • To: "Woller, Thomas" <thomas.woller@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>
  • Date: Thu, 1 Jun 2006 17:58:57 -0700
  • Delivery-date: Thu, 01 Jun 2006 17:59:25 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcZzldcJdz7SebjeQ1OGHswZcegpxQABcXPQAAelk7AEgQh+kAAINgNg
  • Thread-topic: [Xen-devel] [PATCH][SPT][DISCUSS] BUG() in shadow.h delete_shadow_status() with HVM guest

The patch below is included in our next patch (3-on-3). It's okay to
_prefetch_ that part.

Jun
---
Intel Open Source Technology Center 

-----Original Message-----
From: Woller, Thomas [mailto:thomas.woller@xxxxxxx] 
Sent: Thursday, June 01, 2006 2:09 PM
To: Woller, Thomas; Nakajima, Jun; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH][SPT][DISCUSS] BUG() in shadow.h
delete_shadow_status() with HVM guest

Keir, we have been using this patch on our internal trees since May 8th
and haven't seen any negative consequences.  Jun's patch below fixes a
hang when performing an "xm destroy" on a Windows guest.  We would like
to see it go into the xen-unstable.hg tree, and into 3.0-testing if you
feel comfortable with it.
Thanks
Tom

Signed-off-by: Tom Woller <thomas.woller@xxxxxxx>

diff -r 1e3977e029fd xen/arch/x86/shadow.c
--- a/xen/arch/x86/shadow.c     Mon May  8 18:21:41 2006
+++ b/xen/arch/x86/shadow.c     Tue May  9 13:20:33 2006
@@ -3467,7 +3467,9 @@
         } else {
             printk("For non HVM shadow, create_l1_shadow:%d\n",
                    create_l2_shadow);
         }
-         shadow_update_min_max(l4e_get_pfn(sl4e), l3_table_offset(va));
+
+        if ( v->domain->arch.ops->guest_paging_levels == PAGING_L4 )
+            shadow_update_min_max(l4e_get_pfn(sl4e), l3_table_offset(va));
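
For readers skimming the thread, the hunk simply makes the
shadow_update_min_max() call conditional on the guest running with
4-level paging, so a PAE (3-level) guest no longer takes that path.
Below is a minimal, self-contained sketch of that control flow; it is
not Xen source - the structs, the create_shadow_entry() helper, and the
values passed from main() are hypothetical stand-ins, and only the
guest_paging_levels test mirrors the patch hunk above.

    /* Standalone illustration only: the structs, the helper below, and
     * the values passed from main() are hypothetical; only the
     * guest_paging_levels check mirrors the patch hunk above. */
    #include <stdio.h>

    #define PAGING_L3 3
    #define PAGING_L4 4

    struct domain_arch_ops { int guest_paging_levels; };
    struct domain_arch     { struct domain_arch_ops *ops; };
    struct domain          { struct domain_arch arch; };
    struct vcpu            { struct domain *domain; };

    /* Stub standing in for the real shadow_update_min_max(smfn, index). */
    static void shadow_update_min_max(unsigned long smfn, int l3_index)
    {
        printf("shadow_update_min_max(smfn=%#lx, index=%d)\n", smfn, l3_index);
    }

    /* Hypothetical caller showing the patched control flow: the L3
     * min/max bookkeeping now runs only for a 4-level guest. */
    static void create_shadow_entry(struct vcpu *v, unsigned long sl4e_pfn,
                                    int l3_index)
    {
        if ( v->domain->arch.ops->guest_paging_levels == PAGING_L4 )
            shadow_update_min_max(sl4e_pfn, l3_index);
    }

    int main(void)
    {
        struct domain_arch_ops pae_ops = { .guest_paging_levels = PAGING_L3 };
        struct domain pae_dom          = { { &pae_ops } };
        struct vcpu   pae_vcpu         = { &pae_dom };

        struct domain_arch_ops lm_ops  = { .guest_paging_levels = PAGING_L4 };
        struct domain lm_dom           = { { &lm_ops } };
        struct vcpu   lm_vcpu          = { &lm_dom };

        create_shadow_entry(&pae_vcpu, 0x1234, 2); /* skipped: PAE guest */
        create_shadow_entry(&lm_vcpu,  0x1234, 2); /* runs: 4-level guest */
        return 0;
    }

Per Jun's note further down the thread, the symptom before the change
was tlbflush_timestamp apparently being modified, which later tripped
the BUG() in delete_shadow_status().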


> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Woller, Thomas
> Sent: Tuesday, May 09, 2006 6:07 PM
> To: Nakajima, Jun; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [PATCH][SPT][DISCUSS] BUG() in 
> shadow.h delete_shadow_status() with HVM guest
> 
> > I think this is a bit different because the hash key has the index
> > of the PDP for PAE guests. I guess somehow tlbflush_timestamp has
> > been modified. Can you try this patch?
> 
> Thanks for the reply and the fix - your patch was successful
> on both SVM and VMX boxes.  I tested 32-bit PAE Win2003 Server
> SE on SVM, and 32-bit PAE WinXP SP2 on VMX.  Neither hit the
> BUG() in shadow.h.
> 
> We definitely don't have much priority on PAE here, so it might
> be prudent to let this patch sit through your more extensive PAE
> testing, including the 32-bit hypervisor, etc.  We'll use your
> patch internally for a while and report if we see any adverse
> side-effects.
> 
> So, unless you indicate otherwise, I'll defer to you to push it
> up when you feel it's a solid fix.
> thanks
> Tom
> 
> 
> > diff -r 1e3977e029fd xen/arch/x86/shadow.c
> > --- a/xen/arch/x86/shadow.c     Mon May  8 18:21:41 2006
> > +++ b/xen/arch/x86/shadow.c     Tue May  9 13:20:33 2006
> > @@ -3467,7 +3467,9 @@
> >          } else {
> >              printk("For non HVM shadow, create_l1_shadow:%d\n",
> >                     create_l2_shadow);
> >          }
> > -         shadow_update_min_max(l4e_get_pfn(sl4e), l3_table_offset(va));
> > +
> > +        if ( v->domain->arch.ops->guest_paging_levels == PAGING_L4 )
> > +            shadow_update_min_max(l4e_get_pfn(sl4e), l3_table_offset(va));
> >  
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

