WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-ia64-devel] vcpu-pin bug to dom0 ?

>Hi Yongkang,
>
>>Hi all,
>>
>>I have found a strange issue: if I assign only 1 or 2 vCPUs to dom0, I
>>cannot use vcpu-pin to pin vCPU 0. It reports: "Invalid argument".
>>But it works if I let Xen0 see all vCPUs when booting.
>>
>>For example, on IA32, set (dom0-cpus 1).
>>After Xen0 boots, "xm vcpu-p 0 0 0" gives an error.
>>With (dom0-cpus 0), the same command works.
>>
>>On IA64, set dom0_max_vcpus=2 (16 vCPUs in total).
>>After Xen0 boots, "xm vcpu-p 0 0 0" gives an error,
>>but "xm vcpu-p 0 1 0" works.
>>
>
>I think you can solve this problem by applying the following patch
>and entering "xm vcpu-pin 0 0 0" from two consoles at the same
>time...  It may need many retries :-)
>

Hi Yongkang,

Sorry, that is expected behavior; my patch is far from perfect.
As Keir noted, we need a scheduler fix to solve this correctly.

Without my patch:
 +---------------+------------------------------+--------------------+
 | Target domain | pCPU on which the vCPU       | Result             |
 |               | issuing 'xm vcpu-pin' runs   |                    |
 +---------------+------------------------------+--------------------+
 | Domain-0      | == target pCPU               | Error(22) [EINVAL] |
 |               +------------------------------+--------------------+
 |               | != target pCPU               | Error(22) [EINVAL] |
 +---------------+------------------------------+--------------------+
 | Domain-U      |  -                           | OK                 |
 +---------------+------------------------------+--------------------+

With my patch:
 +---------------+------------------------------+--------------------+
 | Target domain | pCPU on which the vCPU       | Result             |
 |               | issuing 'xm vcpu-pin' runs   |                    |
 +---------------+------------------------------+--------------------+
 | Domain-0      | == target pCPU               | OK                 |
 |               +------------------------------+--------------------+
 |               | != target pCPU               | Error(16) [EBUSY]  |
 +---------------+------------------------------+--------------------+
 | Domain-U      |  -                           | OK                 |
 +---------------+------------------------------+--------------------+


Best regards,
 Kan


>diff -r 5b9ff5e8653a xen/common/domctl.c
>--- a/xen/common/domctl.c       Sun Aug 27 06:56:01 2006 +0100
>+++ b/xen/common/domctl.c       Mon Aug 28 18:01:28 2006 +0900
>@@ -380,13 +380,6 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domc
> 
>         if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
>         {
>-            if ( v == current )
>-            {
>-                ret = -EINVAL;
>-                put_domain(d);
>-                break;
>-            }
>-
>             xenctl_cpumap_to_cpumask(
>                 &new_affinity, &op->u.vcpuaffinity.cpumap);
>             ret = vcpu_set_affinity(v, &new_affinity);
>
>
>However, if Domain-0 has only one virtual CPU, this problem cannot be
>solved even by applying this patch. If you are using the credit
>scheduler, 'xm vcpu-pin 0 0 0' fails at the following line.
>
>
>static int
>csched_vcpu_set_affinity(struct vcpu *vc, cpumask_t *affinity)
>{
>    unsigned long flags;
>    int lcpu;
>
>    if ( vc == current )
>    {
>        /* No locking needed but also can't move on the spot... */
>        if ( !cpu_isset(vc->processor, *affinity) )
>            return -EBUSY;   <---- This!
>
>        vc->cpu_affinity = *affinity;
>    }
>
>
>Hi Keir,
>Do you have good ideas to solve this problem?
>
>
>Best regards,
> Kan
>
>>Best Regards,
>>Yongkang (Kangkang)
>>


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
