[Xen-devel] Re: trip to shanghai

To: Jiageng Yu <yujiageng734@xxxxxxxxx>
Subject: [Xen-devel] Re: trip to shanghai
From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Date: Mon, 18 Jul 2011 18:09:14 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>
Delivery-date: Mon, 18 Jul 2011 10:06:20 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <CAJ0pt16K1T+=a_+kCCzAtA3Zh2H-hCQ7i-awQDn+ktMZ7tX23w@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <CAJ0pt14Ub6RVoCFjBr8-Pq2Q-_y4jvioRJOsNQqvm-U_K8u6HA@xxxxxxxxxxxxxx> <alpine.DEB.2.00.1107141806510.12963@kaball-desktop> <CAJ0pt14FqCLRHfvD=m7jb_gqC9Qr91Kn3vvXo0AK5XkNqyRq1A@xxxxxxxxxxxxxx> <alpine.DEB.2.00.1107151105230.12963@kaball-desktop> <CAJ0pt150ycjwRN2aVtc64ZTjipGCN43deP0xBO4OxVDK85DhQg@xxxxxxxxxxxxxx> <alpine.DEB.2.00.1107151237240.12963@kaball-desktop> <CAJ0pt15FOTON-a+evEsU5ZpEDa_NkGxkfBx=6XxMg1_SNjCROw@xxxxxxxxxxxxxx> <CAJ0pt16M38TraT+HgUNM0DdAEGh_+VkaJFQBY3_NyTADR3zCEw@xxxxxxxxxxxxxx> <alpine.DEB.2.00.1107151810080.12963@kaball-desktop> <CAJ0pt16K1T+=a_+kCCzAtA3Zh2H-hCQ7i-awQDn+ktMZ7tX23w@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)
CC'ing Tim and xen-devel

On Mon, 18 Jul 2011, Jiageng Yu wrote:
> 2011/7/16 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> > On Fri, 15 Jul 2011, Jiageng Yu wrote:
> >> 2011/7/15 Jiageng Yu <yujiageng734@xxxxxxxxx>:
> >> > 2011/7/15 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> >> >> On Fri, 15 Jul 2011, Jiageng Yu wrote:
> >> >>> > Does it mean you are actually able to boot an HVM guest using Linux-based
> >> >>> > stubdoms? Did you manage to solve the framebuffer problem too?
> >> >>>
> >> >>>
> >> >>> The HVM guest boots, but the boot process terminates because the VGA
> >> >>> BIOS is not invoked by SeaBIOS. I have been stuck here for a week.
> >> >>>
> >> >>
> >> >> There was a bug in xen-unstable.hg or seabios that would prevent the
> >> >> VGA BIOS from being loaded; it should be fixed now.
> >> >>
> >> >> Alternatively, you can temporarily work around the issue with this hacky
> >> >> patch:
> >> >>
> >> >> ---
> >> >>
> >> >>
> >> >> diff -r 00d2c5ca26fd tools/firmware/hvmloader/hvmloader.c
> >> >> --- a/tools/firmware/hvmloader/hvmloader.c      Fri Jul 08 18:35:24 2011 +0100
> >> >> +++ b/tools/firmware/hvmloader/hvmloader.c      Fri Jul 15 11:37:12 2011 +0000
> >> >> @@ -430,7 +430,7 @@ int main(void)
> >> >>             bios->create_pir_tables();
> >> >>     }
> >> >>
> >> >> -    if ( bios->load_roms )
> >> >> +    if ( 1 )
> >> >>     {
> >> >>         switch ( virtual_vga )
> >> >>         {
> >> >>
> >> >>
> >> >
> >> > Yes, the VGA BIOS now boots. However, upstream qemu subsequently receives
> >> > a SIGSEGV signal. I am trying to print the call stack at the point where
> >> > the signal is received.
> >> >
> >>
> >> Hi,
> >>
> >>    I have found the cause of the SIGSEGV signal:
> >>
> >>    cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf, int
> >> len, int is_write)
> >>                   ->memcpy(buf, ptr + (addr & ~TARGET_PAGE_MASK), l);
> >>
> >>     In my case, ptr=0 and addr=0xc253e, so when qemu attempts to access
> >> address 0x53e, the SIGSEGV signal is generated.
> >>
> >>     I believe qemu is trying to access the vram at this point. The code
> >> itself looks correct, and I will continue to look for the root cause.
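
One quick way to confirm that it really is the host pointer that is NULL is a
debugging check just before that memcpy. This is only a sketch, paraphrasing
the surrounding qemu code rather than quoting it:

    /* debugging aid: fail loudly instead of faulting inside memcpy */
    if (ptr == NULL) {
        fprintf(stderr, "cpu_physical_memory_rw: NULL host ptr, "
                "addr=0x%lx len=%d is_write=%d\n",
                (unsigned long)addr, l, is_write);
        abort();
    }
    memcpy(buf, ptr + (addr & ~TARGET_PAGE_MASK), l);

That should at least tell you immediately which access produced the bad
pointer, instead of leaving you with a bare SIGSEGV.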
> >>
> >
> > The vram is allocated by qemu, see hw/vga.c:vga_common_init.
> > qemu_ram_alloc under xen ends up calling xen_ram_alloc that calls
> > xc_domain_populate_physmap_exact.
> > xc_domain_populate_physmap_exact is the hypercall that should ask Xen to
> > add the missing vram pages in the guest. Maybe this hypercall is failing
> > in your case?
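
If you want to rule that out quickly, checking the return value of that
hypercall explicitly should tell you. Here is a rough sketch against the libxc
API; the helper and its variable names are made up for illustration and are
not the actual xen_ram_alloc code:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenctrl.h>

    /* try to populate nr_pfn guest frames starting at base_pfn; 0 on success */
    static int try_populate(xc_interface *xch, uint32_t domid,
                            xen_pfn_t base_pfn, unsigned long nr_pfn)
    {
        xen_pfn_t *pfns = malloc(nr_pfn * sizeof(*pfns));
        unsigned long i;
        int rc;

        if (!pfns)
            return -1;
        for (i = 0; i < nr_pfn; i++)
            pfns[i] = base_pfn + i;

        rc = xc_domain_populate_physmap_exact(xch, domid, nr_pfn, 0, 0, pfns);
        if (rc != 0)
            fprintf(stderr, "populate_physmap failed: rc=%d errno=%d\n",
                    rc, errno);

        free(pfns);
        return rc;
    }

If that call fails for the vram pfns, the guest is left with an unpopulated
region, which would be consistent with a crash like this showing up later.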
> 
> 
> Hi,
> 
>    I have continued investigating this bug and found that the mmu_update
> hypercall in qemu_remap_bucket (xc_map_foreign_bulk) is failing:
> 
> do_mmu_update
>       ->mod_l1_entry
>              ->  if ( !p2m_is_ram(p2mt) || unlikely(mfn == INVALID_MFN) )
>                          return -EINVAL;
> 
>    mfn == INVALID_MFN, because:
> 
> mod_l1_entry
>       ->gfn_to_mfn(p2m_get_hostp2m(pg_dom), l1e_get_pfn(nl1e), &p2mt));
>               ->p2m->get_entry
>                         ->p2m_gfn_to_mfn
>                                -> if ( gfn > p2m->max_mapped_pfn )
>                                    /* This pfn is higher than the highest
>                                       the p2m map currently holds */
>                                    return _mfn(INVALID_MFN);
> 
>    p2m->max_mapped_pfn is usually 0xfffff. In our case, mmu_update.val
> exceeds 0x8000000100000000. Since l1e = l1e_from_intpte(mmu_update.val) and
> gfn = l1e_get_pfn(l1e), the resulting gfn exceeds 0xfffff.
> 
>    In the case of a minios-based stubdom, the mmu_update.val values never
> exceed 0x8000000100000000. Next, I will investigate why mmu_update.val
> exceeds that value.

It looks like the guest address that qemu is trying to map is not valid.
Make sure you are running a guest with less than 2GB of RAM; otherwise
you need the patch series that Anthony sent on Friday:

http://marc.info/?l=qemu-devel&m=131074042905711&w=2
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel