
Re: [Xen-devel] NetBSD port and a couple of remarks



Cool. Is there any way I could take a look at your changes?

You'll want to bring it up to date to use unstable:
bk clone bk://xen.bkbits.net/xeno-unstable.bk

                        -Kip

On Tue, 27 Jan 2004, Christian Limpach wrote:

> Hi!
>
> I have made a NetBSD-current kernel which boots on Xen.  It still has some
> problems, but it's good enough to boot multi-user and allow logins.
>
> While working on this, I noticed a few problems and I'm wondering if these
> are corrected in Xen versions >1.1:
>
> - the count for the initial l2 pagetable seems to be wrong: the page is
> pinned and used as an l2 pagetable, but its count is 0x40000000.  If you
> switch to another table, you can't unpin it; and if you unpin it first,
> it stays typed as an l2 pagetable.  Additionally, once it is unpinned, you
> can make it writable while it's still in use...
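>
> Something like this is what I mean (a rough sketch only, not the actual
> code; I'm using the Xen 1.x-style extended MMU commands here, and the
> exact constant names and the HYPERVISOR_mmu_update() stub signature from
> hypervisor.h may differ between releases):
>
>   /* pin a page (machine address ma) as an l2 pagetable */
>   static void pin_l2_table(unsigned long ma)
>   {
>       mmu_update_t req;
>
>       req.ptr = ma | MMU_EXTENDED_COMMAND;  /* command in low bits of ptr */
>       req.val = MMUEXT_PIN_L2_TABLE;        /* take a type ref as l2 table */
>       HYPERVISOR_mmu_update(&req, 1);
>   }
>
>   /* unpinning should drop that type ref again -- but for the initial
>    * l2 table the count starts out at 0x40000000, so it never does */
>   static void unpin_table(unsigned long ma)
>   {
>       mmu_update_t req;
>
>       req.ptr = ma | MMU_EXTENDED_COMMAND;
>       req.val = MMUEXT_UNPIN_TABLE;
>       HYPERVISOR_mmu_update(&req, 1);
>   }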
>
> - Xen completely locks up if my idle loop doesn't call the yield function
> but instead just consists of (sketched in code below):
> * clear EVENTS_MASTER_ENABLE_BIT
> * check NetBSD runqueues
> * set EVENTS_MASTER_ENABLE_BIT
> * check for missed events
> * loop
> I'm not sure yet why that is...  It only happens occasionally, but I never
> managed to complete a boot until I added the yield call.  I'm using the
> hypervisor callback code with critical region fixup from mini-os, which I
> think is identical to the one used in Linux.
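>
> In code, the loop looks roughly like this (a sketch only: events,
> events_mask and EVENTS_MASTER_ENABLE_BIT are the Xen 1.x shared_info
> names, runqueue_nonempty()/schedule() stand in for the NetBSD side,
> and set_bit/clear_bit are the usual atomic bit helpers):
>
>   extern shared_info_t *HYPERVISOR_shared_info;  /* from hypervisor.h */
>
>   void idle_loop(void)
>   {
>       for (;;) {
>           /* mask event delivery so the runqueue check can't race
>            * with the event upcall */
>           clear_bit(EVENTS_MASTER_ENABLE_BIT,
>                     &HYPERVISOR_shared_info->events_mask);
>
>           if (runqueue_nonempty()) {
>               set_bit(EVENTS_MASTER_ENABLE_BIT,
>                       &HYPERVISOR_shared_info->events_mask);
>               schedule();              /* run the woken process */
>               continue;
>           }
>
>           /* re-enable events, then pick up anything that arrived
>            * while they were masked */
>           set_bit(EVENTS_MASTER_ENABLE_BIT,
>                   &HYPERVISOR_shared_info->events_mask);
>           if (HYPERVISOR_shared_info->events)
>               continue;                /* missed event: go around again */
>
>           /* without this call, Xen occasionally locks up */
>           HYPERVISOR_yield();
>       }
>   }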
>
>
> I've implemented a network driver and a console driver (output only).
> There's no support for dom0 operations yet, and there's no driver for
> hard disks yet.  A couple of minor things like CPU speed detection and
> setting the clock are also missing.  Right now there are also still some
> problems with pagetables, where hypervisor calls are used to update
> inactive pagetables or, to a lesser extent, vice versa (see the sketch
> below).  I think that will be solved once I use pinning.
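>
> To illustrate the two update paths (again just a sketch with the Xen
> 1.x names): a pagetable that is pinned or in active use has to be
> updated through the hypervisor, while a page still typed as plain RAM
> can be written directly and pinned before first use:
>
>   /* active (pinned) table: queue the PTE update through Xen */
>   static void set_pte_active(unsigned long pte_ma, unsigned long val)
>   {
>       mmu_update_t req;
>
>       req.ptr = pte_ma | MMU_NORMAL_PT_UPDATE;  /* machine addr of PTE */
>       req.val = val;
>       HYPERVISOR_mmu_update(&req, 1);
>   }
>
>   /* inactive table, not yet pinned: a direct write is fine, as long
>    * as the page gets pinned (validated) before it's put to use */
>   static void set_pte_inactive(unsigned long *pte_va, unsigned long val)
>   {
>       *pte_va = val;
>   }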
>
> I hope to commit this to the NetBSD tree eventually.  Or I'll make patches
> available after some cleaning up.
>
> Finally, if someone could get me a xen-1.2 and/or xen-unstable tree out of
> bitkeeper, that would be much appreciated.
>
> --
> Christian Limpach <chris@xxxxxx>
>
> [11] text 0xc0100000 data 0xc0276c24 bss 0xc02808c4 end 0xc02c163c esym
> 0xc02f177c
> [11] NetBSD 1.6ZH (XENO) #704: Tue Jan 27 18:52:02 CET 2004
> [11]    chris@marble:/devel/netbsd/src-current-xen/compile/XENO
> [11] start_info:   0xc02bbd80
> [11]   nr_pages:   4000
> [11]   shared_inf: 0xc0300000 (was 0xc10a0000)
> [11]   pt_base:    0xc109f000
> [11]   mod_start:  0xc0281000
> [11]   mod_len:    196924
> [11]   net_rings:  0x264000
> [11]   blk_ring:   0x258000
> [11]   dom_id:     11
> [11]   flags:      0x0
> [11]   cmd_line:
> ip=172.20.4.17:172.20.4.13:172.20.1.1:255.255.128.0::eth0:off
> bootdev=xennet0 nfsroot=marble:/netboot/qube
> [11] NetBSD Xen console attached.
> Loaded initial symtab at 0xc02c1640, strtab at 0xc02db47c, # entries 6576
> Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004
>     The NetBSD Foundation, Inc.  All rights reserved.
> Copyright (c) 1982, 1986, 1989, 1991, 1993
>     The Regents of the University of California.  All rights reserved.
> NetBSD 1.6ZH (XENO) #704: Tue Jan 27 18:52:02 CET 2004
>         chris@marble:/devel/netbsd/src-current-xen/compile/XENO
> total memory = 13948 KB
> avail memory = 13576 KB
> [11] Xen reported: 501.148 MHz processor.
> mainbus0 (root)
> cpu0 at mainbus0: (uniprocessor)
> cpu0: Intel Celeron (Mendocino) (686-class), 20.00 MHz, id 0x665
> cpu0: features 183fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR>
> cpu0: features 183fbff<PGE,MCA,CMOV,PAT,PSE36,MMX>
> cpu0: features 183fbff<FXSR>
> cpu0: I-cache 16 KB 32b/line 4-way, D-cache 16 KB 32b/line 4-way
> cpu0: L2 cache 128 KB 32b/line 4-way
> cpu0: ITLB 32 4 KB entries 4-way, 2 4 MB entries fully associative
> cpu0: DTLB 64 4 KB entries 4-way, 8 4 MB entries 4-way
> cpu0: 8 page colors
> xenc0 at mainbus0: Xen Virtual Console Driver
> xennet0 at mainbus0: Xen Virtual Network Driver
> xennet0: MAC address aa:00:00:24:1c:9b
> npx0 at mainbus0: using exception 16
> IPsec: Initialized Security Association Processing.
> boot device: xennet0
> root on xennet0
> mountroot: trying nfs...
> nfs_boot: trying static
> nfs_boot: client_addr=172.20.4.17
> nfs_boot: gateway=172.20.1.1
> nfs_boot: netmask=255.255.128.0
> nfs_boot: server=172.20.4.13
> nfs_boot: root=marble:/netboot/qube
> root on marble:/netboot/qube
> root time: 0x4016a704
> root file system type: nfs
> init: copying out path `/sbin/init' 11
> Thu Jan  1 00:01:05 UTC 1970
> Starting file system checks:
> Setting tty flags.
> Setting sysctl variables:
> Starting network.
> Hostname: qube
> IPv6 mode: host
> Configuring network interfaces:
> ..
> Building databases...
> Starting syslogd.
> Mounting all filesystems...
> Creating a.out runtime link editor directory cache.
> Checking quotas:
>  done.
> /etc/rc: WARNING: No swap space configured!
> Starting virecover.
> Starting local daemons:
> ..
> Starting sshd.
> Starting inetd.
> Thu Jan  1 00:01:16 UTC 1970
>
> NetBSD/i386 (qube) (console)
>
> login:


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel


 

