
RE: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler (updated to add support for CPU pools)



Looks like xenconsoled is running; see the output below:

# ps auxw | grep xen
root        10  0.0  0.0      0     0 ?        S<   09:19   0:00
[xenwatch]
root        11  0.0  0.0      0     0 ?        S<   09:19   0:00
[xenbus]
root      3559  0.1  0.0   2104   968 ?        S    09:21   0:00
xenstored
root      3563  0.0  0.0  10200   644 ?        SLl  09:21   0:00
xenconsoled
root      3572  0.0  0.6  12940  7224 ?        S    09:21   0:00
/usr/bin/python /usr/sbin/xend start
root      3573  1.2  1.1  89980 12068 ?        SLl  09:21   0:00
/usr/bin/python /usr/sbin/xend start
root      3839  0.1  0.0  10148   640 pts/3    Sl+  09:21   0:00
/usr/lib/xen/bin/xenconsole 1 --num 0
root      3853  0.0  0.3  28212  3456 ?        Sl   09:21   0:00
/usr/lib/xen/bin/qemu-dm -d 1 -serial pty -domain-name gentoo -videoram
4 -vnc 127.0.0.1:0 -vncunused -M xenpv
root      3964  0.0  0.0   3232   868 pts/2    S+   09:22   0:00 grep
--color=auto xen

Like you, I run "xenstored; xenconsoled; xend start" each time I start
up.
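
For reference, here is roughly what that boils down to as a wrapper
script (the file name, the error checks and the final ps sanity check
are my own additions, so treat it as a sketch rather than a tested
init script):

  #!/bin/sh
  # start-xen-daemons.sh (hypothetical name): bring up the Xen userspace
  # daemons in order; xenstored must be running before xenconsoled and
  # xend can do anything useful.
  xenstored   || exit 1
  xenconsoled || exit 1
  xend start  || exit 1
  # Sanity check: confirm the daemons actually came up.
  ps auxw | grep -E '[x]enstored|[x]enconsoled|[x]end'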

Thanks,
  Kathy

> -----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
> Sent: Thursday, June 24, 2010 9:23 AM
> To: Dan Magenheimer; Kathy Hadley; George Dunlap
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler (updated
> to add support for CPU pools)
> 
> Yes, one possibility here is that somehow you do not have xenconsoled
> running. You should 'ps auxw' in dom0 and check that xenstored and
> xenconsoled are both running.
> 
> I now start xend with a little 'xenstored; xenconsoled; xend start'
> script. :-)
> 
>  -- Keir
> 
> On 24/06/2010 14:08, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx>
> wrote:
> 
> > Just a thought...
> >
> > With all the recent tool layer changes (involving udev, xend,
> > bridging etc), any chance that everything in the guest
> > is working just fine and everything in the hypervisor
> > is working just fine but the connections to the console
> > in your distro/configuration are not playing nicely with
> > the recent xen-unstable tool changes, so you just can't see
> > that everything (else) is fine?
> >
> > (if so, please support my recent rant against changes that
> > cause "unnecessary pain" ;-)
> >
> >> -----Original Message-----
> >> From: Kathy Hadley [mailto:Kathy.Hadley@xxxxxxxxxxxxxxx]
> >> Sent: Thursday, June 24, 2010 6:54 AM
> >> To: Keir Fraser; George Dunlap
> >> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> >> Subject: RE: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler
> >> (updated to add support for CPU pools)
> >>
> >> We are using the following set-up:
> >>   Xen-unstable changeset 21650
> >>   Gentoo 2.6.29.6 with Xen patches for Dom0
> >>   Linux 2.6.18-Xen for DomU (downloaded from linux-2.6.18-xen.hg)
> >>
> >> Dom0 and DomU run fine with Xen-3.4.1 and Xen-4.0.0 (with either our
> >> scheduler or the credit scheduler).  Dom0 appears to run fine with
> >> xen-unstable, but DomU "hangs" when either our scheduler or the
> >> credit scheduler is used (as discussed in earlier e-mails).
> >> "xm list" shows that DomU is blocked.
> >>
> >> Do you have any suggestions for how I could troubleshoot this
> >> issue?  I'm still wondering about the warning I'm seeing issued
> >> from traps.c - while it could have nothing to do with my issue, it
> >> is an interesting coincidence.
> >>
> >> Thanks,
> >>   Kathy Hadley
> >>
> >>> -----Original Message-----
> >>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
> >>> Sent: Wednesday, June 23, 2010 6:36 PM
> >>> To: Kathy Hadley; George Dunlap
> >>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> >>> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler
> >>> (updated to add support for CPU pools)
> >>>
> >>> I've just built latest xen-unstable.hg and linux-2.6.18-xen.hg and
> >>> booted a domU just fine. All my builds are 64-bit though whereas
> >>> yours are 32-bit. I suppose that could cause a difference (in
> >>> particular, 32-bit hypervisor is less tested by people).
> >>>
> >>>  -- Keir
> >>>
> >>> On 23/06/2010 22:16, "Kathy Hadley" <Kathy.Hadley@xxxxxxxxxxxxxxx>
> >>> wrote:
> >>>
> >>>> Keir,
> >>>>   I see this same behavior when I run the credit scheduler.  It
> >>>> doesn't look like it's localized to the scheduler I'm working on.
> >>>> I pulled the latest code from
> >>>> http://xenbits.xensource.com/linux-2.6.18-xen.hg and rebuilt the
> >>>> kernel earlier today, with no effect.
> >>>>
> >>>>   Note that I can successfully start the domain with Xen-3.4.1 and
> >>>> Xen-4.0.0, using the same configuration file as I am using with
> >>>> xen-unstable.
> >>>>
> >>>> Kathy
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
> >>>>> Sent: Wednesday, June 23, 2010 4:23 PM
> >>>>> To: Kathy Hadley; George Dunlap
> >>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> >>>>> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler
> >>>>> (updated to add support for CPU pools)
> >>>>>
> >>>>> On 23/06/2010 20:57, "Kathy Hadley" <Kathy.Hadley@xxxxxxxxxxxxxxx>
> >>>>> wrote:
> >>>>>
> >>>>>> Call Trace:
> >>>>>>   [<c01013a7>] hypercall_page+0x3a7  <--
> >>>>>>   [<c0109005>] raw_safe_halt+0xa5
> >>>>>>   [<c0104789>] xen_idle+0x49
> >>>>>>   [<c010482d>] cpu_idle+0x8d
> >>>>>>   [<c0404895>] start_kernel+0x3f5
> >>>>>>   [<c04041d0>] do_early_param+0x80
> >>>>>>
> >>>>>>   Does this shed any light on the situation?
> >>>>>
> >>>>> Looks like you're in the idle loop. So, no, it doesn't really
> >>>>> shed much useful light.
> >>>>>
> >>>>>  -- Keir
> >>>>>
> >>>>
> >>>
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@xxxxxxxxxxxxxxxxxxx
> >> http://lists.xensource.com/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

