Hi, all:
I added some debug output to Xen; here is what it prints:
(XEN) do_vcpu_op, domain: 1, time: 99614016791977, now 99614015
(XEN) do_vcpu_op, domain: 1, time: 99614065791952, now 99614015
(XEN) into do_sched_op, now 99614015
(XEN) into do_block, domain: 1, now 99614015
(XEN) happened! domain 1, now 99614015
(XEN) into vcpu_unblock, domain 1, now 99614065
(XEN) do_vcpu_op, domain: 1, time: 99614066791987, now 99614065
(XEN) do_vcpu_op, domain: 1, time: 99615051516712, now 99614065
(XEN) do_vcpu_op, domain: 1, time: 99614066791967, now 99614065
Every second, do_sched_op() gets called, and the timer for the unblock
is set to fire 50 ms later (usually 1 ms later).
This happens with the default credit scheduler. Domain 0 does not
have this problem.
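To sanity-check that arithmetic, here is a small user-space snippet of my own (illustrative only, not Xen or kernel code): being blocked 50 ms out of every 1000 ms caps utilization at 95%.

```c
/* Back-of-the-envelope check: if the VCPU is blocked for blocked_ms out
 * of every period_ms, this is the best utilization it can achieve. */
static double expected_utilization(double period_ms, double blocked_ms)
{
        return 100.0 * (period_ms - blocked_ms) / period_ms;
}
```

With period_ms = 1000 and blocked_ms = 50 this gives exactly the 95% I am seeing.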
Does anyone know how I can add instrumentation to the Linux kernel to
trace this? I added printk() calls to the kernel and recompiled it, but
the output only shows up in domain 0, not in the other domains.
Any suggestions?
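For reference, here is a user-space mock (my own sketch, not the real kernel code) of where I put the trace output around the halt path quoted below; stop_hz_timer/start_hz_timer and the SCHEDOP_block hypercall are stubbed here, and in the real kernel the printf()s would of course be printk()s:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Timestamps captured on each side of the (stubbed) block hypercall. */
static unsigned long long g_block_ts, g_unblock_ts;

static unsigned long long now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void stop_hz_timer(void) { }              /* stub */
static void start_hz_timer(void) { }             /* stub */
static int HYPERVISOR_block(void) { return 0; }  /* stub for SCHEDOP_block */

static void traced_safe_halt(void)
{
        g_block_ts = now_ns();
        printf("xen_safe_halt: blocking at %llu\n", g_block_ts);    /* printk() in the kernel */
        stop_hz_timer();
        HYPERVISOR_block();
        start_hz_timer();
        g_unblock_ts = now_ns();
        printf("xen_safe_halt: unblocked at %llu\n", g_unblock_ts); /* printk() in the kernel */
}
```

The question remains where that output ends up for a domU.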
thanks!
sam
On Tue, Nov 2, 2010 at 1:16 PM, walmart <vmwalmart@xxxxxxxxx> wrote:
> Hi, Ian:
>
> Thanks for your reply! I appreciate your help very much! :)
>
> I checked where the Linux side calls do_sched_op with the
> SCHEDOP_block argument.
>
> It is called in the following places:
> arch/x86/xen/irq.c:
>
> static void xen_safe_halt(void)
> {
>         /* Blocking includes an implicit local_irq_enable(). */
>         if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>                 BUG();
> }
>
> arch/x86/include/mach-xen/asm/hypervisor.h:
>
> static inline int
> HYPERVISOR_block(void)
> {
>         int rc = HYPERVISOR_sched_op(SCHEDOP_block, NULL);
>         return rc;
> }
>
>
> arch/x86/kernel/time-xen.c:
>
> void xen_safe_halt(void)
> {
>         stop_hz_timer();
>         /* Blocking includes an implicit local_irq_enable(). */
>         HYPERVISOR_block();
>         start_hz_timer();
> }
>
>
> It seems to be related to xen_safe_halt, and xen_safe_halt is bound
> in via:
>
> static inline void raw_safe_halt(void)
> {
>         xen_safe_halt();
> }
>
> arch/x86/xen/irq.c:
>
> static void xen_halt(void)
> {
>         if (irqs_disabled())
>                 HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
>         else
>                 xen_safe_halt();
> }
>
> static const struct pv_irq_ops xen_irq_ops __initdata = {
>         .safe_halt = xen_safe_halt,
> };
>
>
> Does anyone know why this path is taken? Why can I only get 95%
> utilization with one busy VCPU?
>
> Thanks very much!
>
> best!
>
> Sam
>
> On Mon, Nov 1, 2010 at 3:18 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> On Sun, 2010-10-31 at 05:09 +0000, walmart wrote:
>>> Hi, all:
>>>
>>> In the 64 bit Xen 4.0.1, compiled from the source code,
>>>
>>> under xen/common/schedule.c,
>>> ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg).
>>>
>>> Does anyone know how this function got called?
>>
>> It is an entry point for a hypercall so it will potentially be called
>> from any guest OS. See xen/arch/x86/*/entry.S for the hypercall table
>> entry which points at this function. In the pvops Linux kernel see the
>> callers of HYPERVISOR_sched_op() for the users of this hypercall.
>>
>> There are some other callers in the HVM code called in response to
>> certain events which lead to scheduling type decisions, such as the
>> guest executing a hlt instruction.
>>
>> Ian.
>>
>>>
>>> I raise this question because I noticed that, in the default credit
>>> scheduler, if I configure only one busy VCPU (pinned to one specific
>>> core, where it is also the only VCPU on that core) and run a busy
>>> loop, I only get 95% utilization.
>>>
>>> I added some printk calls to the code and found that:
>>>
>>> every 1 s, do_sched_op() executes with cmd SCHEDOP_block, which
>>> blocks the VCPU for 50 ms (50 ms / 1 s = 5%), so the VCPU can only
>>> get 95% of the resources.
>>>
>>> Does anyone know the reason for this?
>>>
>>> Or, does anyone know how and where this do_sched_op() function gets
>>> called? I did a grep and all I could find is compat_do_sched_op,
>>> which is not called at all.
>>>
>>> I would highly appreciate your help!
>>>
>>> Thanks very much!
>>>
>>> best!
>>>
>>> Sam
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>> http://lists.xensource.com/xen-devel
>>
>>
>>
>