Re: [Xen-devel] [xl] a fault happens on pv domain reboot

To: Sergey Tovpeko <tsv.devel@xxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [xl] a fault happens on pv domain reboot
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 2 Jun 2010 10:27:51 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, Stefano
Delivery-date: Wed, 02 Jun 2010 02:28:44 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C061FE8.8000501@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsCM9sr6Gq9kVNRSPi7BzNysQbCvwAAgPxx
Thread-topic: [Xen-devel] [xl] a fault happens on pv domain reboot
User-agent: Microsoft-Entourage/12.24.0.100205

Finding out which syscall invoked by libxl_domain_info is returning EAGAIN
would be the obvious first step. It is probably one of the mlocks, since the
hypercall itself should not be returning EAGAIN (if it is, then that's going
to be a nice easy bug fix!). Root-causing why mlock fails so often on your
setup will no doubt be harder to debug.
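
(For what it's worth, running xl under "strace -f -e trace=mlock" should
point at the failing call directly. A standalone probe along these lines can
also help tell a hard RLIMIT_MEMLOCK failure apart from a transient EAGAIN;
this is an illustrative sketch, not part of any patch in this thread, and the
file name is made up:)

    /* mlock-probe.c: lock page-sized buffers one at a time until mlock()
     * fails, then report errno and the RLIMIT_MEMLOCK in effect. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        struct rlimit rl;
        unsigned long n = 0;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
            printf("RLIMIT_MEMLOCK: cur=%lu max=%lu\n",
                   (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

        for (;;) {
            char *buf = malloc(page);
            if (!buf)
                break;
            memset(buf, 0, page);           /* fault the page in first */
            if (mlock(buf, page) != 0) {
                printf("mlock failed after %lu pages: %s\n",
                       n, strerror(errno));
                return 1;
            }
            n++;
        }
        return 0;
    }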

 -- Keir

On 02/06/2010 10:10, "Sergey Tovpeko" <tsv.devel@xxxxxxxxx> wrote:

> The mlock issue isn't what I'm hitting - I mean, the patch didn't solve the
> problem. The mlock in question succeeded on the first iteration.
> 
> I thought the patch (trick) below would solve the issue, but it only
> decreases the probability of hitting the bug.
> 
> diff -r 458589ad3793 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c  Tue Jun 01 07:06:50 2010 +0100
> +++ b/tools/libxl/xl_cmdimpl.c  Wed Jun 02 16:57:59 2010 +0400
> @@ -1159,6 +1159,15 @@
>                              libxl_free_waiter(w2);
>                              free(w1);
>                              free(w2);
> +                            while (1) {
> +                                struct libxl_dominfo info_t;
> +                                if (libxl_domain_info(&ctx, &info_t, domid))
> +                                    break;
> +                                sleep(1);
> +                            }
> 
>                              LOG("Done. Rebooting now");
>                              goto start;
>                          }
> 
> 
>> Stefano Stabellini writes ("Re: [Xen-devel] [xl] a fault happens on pv domain reboot"):
>> 
>>> That means that xc_domain_getinfolist failed with EAGAIN; the reason
>>> seems to be that mlock can actually fail with EAGAIN but we don't do
>>> anything about it in libxc.
>>> If mlock returns EAGAIN we should make sure that the pages are actually
>>> available in RAM and try again.
>>> 
>> 
>> So, on that note, could you please try this patch?
>> 
>> Ian.
>> 
>> diff -r 267ecb2ee5bf tools/libxc/xc_acm.c
>> --- a/tools/libxc/xc_acm.c    Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_acm.c    Tue Jun 01 17:54:51 2010 +0100
>> @@ -83,7 +83,7 @@
>> 
>>      hypercall.op = __HYPERVISOR_xsm_op;
>>      hypercall.arg[0] = (unsigned long)&acmctl;
>> -    if ( lock_pages(&acmctl, sizeof(acmctl)) != 0)
>> +    if ( xc_lock_pages(xch, &acmctl, sizeof(acmctl)) != 0)
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          return -EFAULT;
>> diff -r 267ecb2ee5bf tools/libxc/xc_cpupool.c
>> --- a/tools/libxc/xc_cpupool.c        Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_cpupool.c        Tue Jun 01 17:54:51 2010 +0100
>> @@ -71,7 +71,7 @@
>>          set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
>>          sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8;
>> 
>> -        if ( (err = lock_pages(local, sizeof(local))) != 0 )
>> +        if ( (err = xc_lock_pages(xch, local, sizeof(local))) != 0 )
>>          {
>>              PERROR("Could not lock memory for Xen hypercall");
>>              break;
>> @@ -147,7 +147,7 @@
>>      set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
>>      sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8;
>> 
>> -    if ( (err = lock_pages(local, sizeof(local))) != 0 )
>> +    if ( (err = xc_lock_pages(xch, local, sizeof(local))) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          return err;
>> diff -r 267ecb2ee5bf tools/libxc/xc_domain.c
>> --- a/tools/libxc/xc_domain.c Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_domain.c Tue Jun 01 17:54:51 2010 +0100
>> @@ -80,7 +80,7 @@
>>      arg.domain_id = domid;
>>      arg.reason = reason;
>> 
>> -    if ( lock_pages(&arg, sizeof(arg)) != 0 )
>> +    if ( xc_lock_pages(xch, &arg, sizeof(arg)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> @@ -119,7 +119,7 @@
>> 
>>      domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
>> 
>> -    if ( lock_pages(local, cpusize) != 0 )
>> +    if ( xc_lock_pages(xch, local, cpusize) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out;
>> @@ -158,7 +158,7 @@
>>      set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
>>      domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
>> 
>> -    if ( lock_pages(local, sizeof(local)) != 0 )
>> +    if ( xc_lock_pages(xch, local, sizeof(local)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out;
>> @@ -243,7 +243,7 @@
>>      int ret = 0;
>>      DECLARE_SYSCTL;
>> 
>> -    if ( lock_pages(info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
>> +    if ( xc_lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
>>          return -1;
>> 
>>      sysctl.cmd = XEN_SYSCTL_getdomaininfolist;
>> @@ -276,7 +276,7 @@
>>      set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
>> 
>>      if ( ctxt_buf )
>> -        if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
>> +        if ( (ret = xc_lock_pages(xch, ctxt_buf, size)) != 0 )
>>              return ret;
>> 
>>      ret = do_domctl(xch, &domctl);
>> @@ -308,7 +308,7 @@
>>      domctl.u.hvmcontext_partial.instance = instance;
>>      set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf);
>> 
>> -    if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
>> +    if ( (ret = xc_lock_pages(xch, ctxt_buf, size)) != 0 )
>>          return ret;
>> 
>>      ret = do_domctl(xch, &domctl);
>> @@ -333,7 +333,7 @@
>>      domctl.u.hvmcontext.size = size;
>>      set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
>> 
>> -    if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
>> +    if ( (ret = xc_lock_pages(xch, ctxt_buf, size)) != 0 )
>>          return ret;
>> 
>>      ret = do_domctl(xch, &domctl);
>> @@ -358,7 +358,7 @@
>>      set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
>> 
>> 
>> -    if ( (rc = lock_pages(ctxt, sz)) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, ctxt, sz)) != 0 )
>>          return rc;
>>      rc = do_domctl(xch, &domctl);
>>      unlock_pages(ctxt, sz);
>> @@ -446,7 +446,8 @@
>> 
>>      set_xen_guest_handle(fmap.map.buffer, &e820);
>> 
>> -    if ( lock_pages(&fmap, sizeof(fmap)) || lock_pages(&e820, sizeof(e820)) )
>> +    if ( xc_lock_pages(xch, &fmap, sizeof(fmap)) ||
>> +         xc_lock_pages(xch, &e820, sizeof(e820)) )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          rc = -1;
>> @@ -522,7 +523,7 @@
>>      domctl.cmd = XEN_DOMCTL_gettscinfo;
>>      domctl.domain = (domid_t)domid;
>>      set_xen_guest_handle(domctl.u.tsc_info.out_info, &info);
>> -    if ( (rc = lock_pages(&info, sizeof(info))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, &info, sizeof(info))) != 0 )
>>          return rc;
>>      rc = do_domctl(xch, &domctl);
>>      if ( rc == 0 )
>> @@ -807,7 +808,7 @@
>>      domctl.u.vcpucontext.vcpu = vcpu;
>>      set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
>> 
>> -    if ( (rc = lock_pages(ctxt, sz)) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, ctxt, sz)) != 0 )
>>          return rc;
>>      rc = do_domctl(xch, &domctl);
>> 
>> @@ -875,7 +876,7 @@
>>      arg.domid = dom;
>>      arg.index = param;
>>      arg.value = value;
>> -    if ( lock_pages(&arg, sizeof(arg)) != 0 )
>> +    if ( xc_lock_pages(handle, &arg, sizeof(arg)) != 0 )
>>          return -1;
>>      rc = do_xen_hypercall(handle, &hypercall);
>>      unlock_pages(&arg, sizeof(arg));
>> @@ -893,7 +894,7 @@
>>      hypercall.arg[1] = (unsigned long)&arg;
>>      arg.domid = dom;
>>      arg.index = param;
>> -    if ( lock_pages(&arg, sizeof(arg)) != 0 )
>> +    if ( xc_lock_pages(handle, &arg, sizeof(arg)) != 0 )
>>          return -1;
>>      rc = do_xen_hypercall(handle, &hypercall);
>>      unlock_pages(&arg, sizeof(arg));
>> @@ -946,7 +947,7 @@
>> 
>>      set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);
>> 
>> -    if ( lock_pages(sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
>> +    if ( xc_lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
>>      {
>>          PERROR("Could not lock memory for xc_get_device_group");
>>          return -ENOMEM;
>> diff -r 267ecb2ee5bf tools/libxc/xc_domain_restore.c
>> --- a/tools/libxc/xc_domain_restore.c Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_domain_restore.c Tue Jun 01 17:54:51 2010 +0100
>> @@ -1451,7 +1451,7 @@
>>      memset(region_mfn, 0,
>>             ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT));
>> 
>> -    if ( lock_pages(region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
>> +    if ( xc_lock_pages(xch, region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
>>      {
>>          PERROR("Could not lock region_mfn");
>>          goto out;
>> @@ -1801,7 +1801,7 @@
>>          }
>>      }
>> 
>> -    if ( lock_pages(&ctxt, sizeof(ctxt)) )
>> +    if ( xc_lock_pages(xch, &ctxt, sizeof(ctxt)) )
>>      {
>>          PERROR("Unable to lock ctxt");
>>          return 1;
>> diff -r 267ecb2ee5bf tools/libxc/xc_domain_save.c
>> --- a/tools/libxc/xc_domain_save.c    Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_domain_save.c    Tue Jun 01 17:54:51 2010 +0100
>> @@ -1032,14 +1032,14 @@
>> 
>>      memset(to_send, 0xff, BITMAP_SIZE);
>> 
>> -    if ( lock_pages(to_send, BITMAP_SIZE) )
>> +    if ( xc_lock_pages(xch, to_send, BITMAP_SIZE) )
>>      {
>>          PERROR("Unable to lock to_send");
>>          return 1;
>>      }
>> 
>>      /* (to fix is local only) */
>> -    if ( lock_pages(to_skip, BITMAP_SIZE) )
>> +    if ( xc_lock_pages(xch, to_skip, BITMAP_SIZE) )
>>      {
>>          PERROR("Unable to lock to_skip");
>>          return 1;
>> @@ -1077,7 +1077,7 @@
>>      memset(pfn_type, 0,
>>             ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT));
>> 
>> -    if ( lock_pages(pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
>> +    if ( xc_lock_pages(xch, pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
>>      {
>>          PERROR("Unable to lock pfn_type array");
>>          goto out;
>> diff -r 267ecb2ee5bf tools/libxc/xc_evtchn.c
>> --- a/tools/libxc/xc_evtchn.c Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_evtchn.c Tue Jun 01 17:54:51 2010 +0100
>> @@ -19,7 +19,7 @@
>>      hypercall.arg[0] = cmd;
>>      hypercall.arg[1] = (unsigned long)arg;
>> 
>> -    if ( lock_pages(arg, arg_size) != 0 )
>> +    if ( xc_lock_pages(xch, arg, arg_size) != 0 )
>>      {
>>          PERROR("do_evtchn_op: arg lock failed");
>>          goto out;
>> diff -r 267ecb2ee5bf tools/libxc/xc_linux.c
>> --- a/tools/libxc/xc_linux.c  Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_linux.c  Tue Jun 01 17:54:51 2010 +0100
>> @@ -724,7 +724,7 @@
>>      hypercall.arg[1] = (unsigned long)op;
>>      hypercall.arg[2] = count;
>> 
>> -    if ( lock_pages(op, count* op_size) != 0 )
>> +    if ( xc_lock_pages(xch, op, count* op_size) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> @@ -776,7 +776,7 @@
>>      *gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) );
>> 
>>      frame_list = malloc(query.nr_frames * sizeof(unsigned long));
>> -    if ( !frame_list || lock_pages(frame_list,
>> +    if ( !frame_list || xc_lock_pages(xch, frame_list,
>>                                     query.nr_frames * sizeof(unsigned long)) )
>>      {
>>          ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n");
>> diff -r 267ecb2ee5bf tools/libxc/xc_misc.c
>> --- a/tools/libxc/xc_misc.c   Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_misc.c   Tue Jun 01 17:54:51 2010 +0100
>> @@ -28,7 +28,7 @@
>>          sysctl.u.readconsole.incremental = incremental;
>>      }
>> 
>> -    if ( (ret = lock_pages(buffer, nr_chars)) != 0 )
>> +    if ( (ret = xc_lock_pages(xch, buffer, nr_chars)) != 0 )
>>          return ret;
>> 
>>      if ( (ret = do_sysctl(xch, &sysctl)) == 0 )
>> @@ -52,7 +52,7 @@
>>      set_xen_guest_handle(sysctl.u.debug_keys.keys, keys);
>>      sysctl.u.debug_keys.nr_keys = len;
>> 
>> -    if ( (ret = lock_pages(keys, len)) != 0 )
>> +    if ( (ret = xc_lock_pages(xch, keys, len)) != 0 )
>>          return ret;
>> 
>>      ret = do_sysctl(xch, &sysctl);
>> @@ -140,7 +140,7 @@
>>      DECLARE_HYPERCALL;
>> 
>>      mc->interface_version = XEN_MCA_INTERFACE_VERSION;
>> -    if ( lock_pages(mc, sizeof(mc)) )
>> +    if ( xc_lock_pages(xch, mc, sizeof(mc)) )
>>      {
>>          PERROR("Could not lock xen_mc memory");
>>          return -EINVAL;
>> @@ -213,7 +213,7 @@
>>      sysctl.u.getcpuinfo.max_cpus = max_cpus;
>>      set_xen_guest_handle(sysctl.u.getcpuinfo.info, info);
>> 
>> -    if ( (rc = lock_pages(info, max_cpus*sizeof(*info))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 )
>>          return rc;
>> 
>>      rc = do_sysctl(xch, &sysctl);
>> @@ -236,7 +236,7 @@
>>      struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg;
>>      int rc;
>> 
>> -    if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
>> +    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> @@ -269,7 +269,7 @@
>>      struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg;
>>      int rc;
>> 
>> -    if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
>> +    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> @@ -305,7 +305,7 @@
>>      arg.link    = link;
>>      arg.isa_irq = isa_irq;
>> 
>> -    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, &arg, sizeof(arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> @@ -336,7 +336,7 @@
>>      arg.nr        = nr;
>>      set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap);
>> 
>> -    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, &arg, sizeof(arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> @@ -364,7 +364,7 @@
>>      arg.first_pfn = first_pfn;
>>      arg.nr        = nr;
>> 
>> -    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, &arg, sizeof(arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> @@ -393,7 +393,7 @@
>>      arg.first_pfn    = first_pfn;
>>      arg.nr           = nr;
>> 
>> -    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
>> +    if ( (rc = xc_lock_pages(xch, &arg, sizeof(arg))) != 0 )
>>      {
>>          PERROR("Could not lock memory");
>>          return rc;
>> diff -r 267ecb2ee5bf tools/libxc/xc_offline_page.c
>> --- a/tools/libxc/xc_offline_page.c   Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_offline_page.c   Tue Jun 01 17:54:51 2010 +0100
>> @@ -57,7 +57,7 @@
>>      if ( !status || (end < start) )
>>          return -EINVAL;
>> 
>> -    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
>> +    if (xc_lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
>>      {
>>          ERROR("Could not lock memory for xc_mark_page_online\n");
>>          return -EINVAL;
>> @@ -84,7 +84,7 @@
>>      if ( !status || (end < start) )
>>          return -EINVAL;
>> 
>> -    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
>> +    if (xc_lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
>>      {
>>          ERROR("Could not lock memory for xc_mark_page_offline");
>>          return -EINVAL;
>> @@ -111,7 +111,7 @@
>>      if ( !status || (end < start) )
>>          return -EINVAL;
>> 
>> -    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
>> +    if (xc_lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
>>      {
>>          ERROR("Could not lock memory for xc_query_page_offline_status\n");
>>          return -EINVAL;
>> @@ -277,7 +277,7 @@
>>          minfo->pfn_type[i] = pfn_to_mfn(i, minfo->p2m_table,
>>                                          minfo->guest_width);
>> 
>> -    if ( lock_pages(minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
>> +    if ( xc_lock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
>>      {
>>          ERROR("Unable to lock pfn_type array");
>>          goto failed;
>> diff -r 267ecb2ee5bf tools/libxc/xc_pm.c
>> --- a/tools/libxc/xc_pm.c     Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_pm.c     Tue Jun 01 17:54:51 2010 +0100
>> @@ -57,11 +57,11 @@
>>      if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0)
>>          return ret;
>> 
>> -    if ( (ret = lock_pages(pxpt->trans_pt,
>> +    if ( (ret = xc_lock_pages(xch, pxpt->trans_pt,
>>          max_px * max_px * sizeof(uint64_t))) != 0 )
>>          return ret;
>> 
>> -    if ( (ret = lock_pages(pxpt->pt,
>> +    if ( (ret = xc_lock_pages(xch, pxpt->pt,
>>          max_px * sizeof(struct xc_px_val))) != 0 )
>>      {
>>          unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
>> @@ -132,11 +132,11 @@
>>      if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) )
>>          goto unlock_0;
>> 
>> -    if ( (ret = lock_pages(cxpt, sizeof(struct xc_cx_stat))) )
>> +    if ( (ret = xc_lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) )
>>          goto unlock_0;
>> -    if ( (ret = lock_pages(cxpt->triggers, max_cx * sizeof(uint64_t))) )
>> +    if ( (ret = xc_lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) )
>>          goto unlock_1;
>> -    if ( (ret = lock_pages(cxpt->residencies, max_cx * sizeof(uint64_t))) )
>> +    if ( (ret = xc_lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) )
>>          goto unlock_2;
>> 
>>      sysctl.cmd = XEN_SYSCTL_get_pmstat;
>> @@ -199,13 +199,13 @@
>>               (!user_para->scaling_available_governors) )
>>              return -EINVAL;
>> 
>> -        if ( (ret = lock_pages(user_para->affected_cpus,
>> +        if ( (ret = xc_lock_pages(xch, user_para->affected_cpus,
>>                                 user_para->cpu_num * sizeof(uint32_t))) )
>>              goto unlock_1;
>> -        if ( (ret = lock_pages(user_para->scaling_available_frequencies,
>> +        if ( (ret = xc_lock_pages(xch, user_para->scaling_available_frequencies,
>>                                 user_para->freq_num * sizeof(uint32_t))) )
>>              goto unlock_2;
>> -        if ( (ret = lock_pages(user_para->scaling_available_governors,
>> +        if ( (ret = xc_lock_pages(xch, user_para->scaling_available_governors,
>>                   user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
>>              goto unlock_3;
>> 
>> diff -r 267ecb2ee5bf tools/libxc/xc_private.c
>> --- a/tools/libxc/xc_private.c        Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_private.c        Tue Jun 01 17:54:51 2010 +0100
>> @@ -174,7 +174,7 @@
>> 
>>  #ifdef __sun__
>> 
>> -int lock_pages(void *addr, size_t len) { return 0; }
>> +int xc_lock_pages(xc_interface *xch, void *addr, size_t len) { return 0; }
>>  void unlock_pages(void *addr, size_t len) { }
>> 
>>  int hcall_buf_prep(void **addr, size_t len) { return 0; }
>> @@ -182,13 +182,40 @@
>> 
>>  #else /* !__sun__ */
>> 
>> -int lock_pages(void *addr, size_t len)
>> +int xc_lock_pages(xc_interface *xch, void *addr, size_t len)
>>  {
>>        int e;
>>        void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
>>        size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) +
>>                       PAGE_SIZE - 1) & PAGE_MASK;
>> -      e = mlock(laddr, llen);
>> +      size_t offset;
>> +      int iterations = 0;
>> +      int dummy;
>> +
>> +      for (;;) {
>> +          e = mlock(laddr, llen);
>> +          if (!e) {
>> +              if (iterations > 5)
>> +                  DBGPRINTF("mlock (libxc_lock_pages) (len=%zu)"
>> +                            " took %d iterations", len, iterations);
>> +              return 0;
>> +          }
>> +          if (errno != EAGAIN) {
>> +              PERROR("mlock (libxc_lock_pages) failed (len=%zu)", len);
>> +              return e;
>> +          }
>> +          if (++iterations > 100) {
>> +              ERROR("mlock (libxc_lock_pages) too much EAGAIN (len=%zu)", len);
>> +              return -1;
>> +          }
>> +          if (iterations > 10) {
>> +              /* max total wait: 2000 us * (11 + 12 + ... + 100) ~= 10 seconds */
>> +              usleep(iterations * 2000);
>> +          }
>> +          for (offset = 0; offset < len; offset += PAGE_SIZE) {
>> +              dummy = ((volatile unsigned char*)addr)[offset];
>> +          }
>> +      }
>>        return e;
>>  }
>> 
>> @@ -230,7 +257,7 @@
>>      pthread_key_create(&hcall_buf_pkey, _xc_clean_hcall_buf);
>>  }
>> 
>> -int hcall_buf_prep(void **addr, size_t len)
>> +int hcall_buf_prep(xc_interface *xch, void **addr, size_t len)
>>  {
>>      struct hcall_buf *hcall_buf;
>> 
>> @@ -248,7 +275,7 @@
>>      if ( !hcall_buf->buf )
>>      {
>>          hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
>> -        if ( !hcall_buf->buf || lock_pages(hcall_buf->buf, PAGE_SIZE) )
>> +        if ( !hcall_buf->buf || xc_lock_pages(xch, hcall_buf->buf, PAGE_SIZE) )
>>          {
>>              free(hcall_buf->buf);
>>              hcall_buf->buf = NULL;
>> @@ -265,7 +292,7 @@
>>      }
>> 
>>   out:
>> -    return lock_pages(*addr, len);
>> +    return xc_lock_pages(xch, *addr, len);
>>  }
>> 
>>  void hcall_buf_release(void **addr, size_t len)
>> @@ -307,7 +334,7 @@
>>      DECLARE_HYPERCALL;
>>      long ret = -EINVAL;
>> 
>> -    if ( hcall_buf_prep((void **)&op, nr_ops*sizeof(*op)) != 0 )
>> +    if ( hcall_buf_prep(xch, (void **)&op, nr_ops*sizeof(*op)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> @@ -341,7 +368,7 @@
>>      hypercall.arg[2] = 0;
>>      hypercall.arg[3] = mmu->subject;
>> 
>> -    if ( lock_pages(mmu->updates, sizeof(mmu->updates)) != 0 )
>> +    if ( xc_lock_pages(xch, mmu->updates, sizeof(mmu->updates)) != 0 )
>>      {
>>          PERROR("flush_mmu_updates: mmu updates lock_pages failed");
>>          err = 1;
>> @@ -408,14 +435,14 @@
>>      case XENMEM_increase_reservation:
>>      case XENMEM_decrease_reservation:
>>      case XENMEM_populate_physmap:
>> -        if ( lock_pages(reservation, sizeof(*reservation)) != 0 )
>> +        if ( xc_lock_pages(xch, reservation, sizeof(*reservation)) != 0 )
>>          {
>>              PERROR("Could not lock");
>>              goto out1;
>>          }
>>          get_xen_guest_handle(extent_start, reservation->extent_start);
>>          if ( (extent_start != NULL) &&
>> -             (lock_pages(extent_start,
>> +             (xc_lock_pages(xch, extent_start,
>>                      reservation->nr_extents * sizeof(xen_pfn_t)) != 0) )
>>          {
>>              PERROR("Could not lock");
>> @@ -424,13 +451,13 @@
>>          }
>>          break;
>>      case XENMEM_machphys_mfn_list:
>> -        if ( lock_pages(xmml, sizeof(*xmml)) != 0 )
>> +        if ( xc_lock_pages(xch, xmml, sizeof(*xmml)) != 0 )
>>          {
>>              PERROR("Could not lock");
>>              goto out1;
>>          }
>>          get_xen_guest_handle(extent_start, xmml->extent_start);
>> -        if ( lock_pages(extent_start,
>> +        if ( xc_lock_pages(xch, extent_start,
>>                     xmml->max_extents * sizeof(xen_pfn_t)) != 0 )
>>          {
>>              PERROR("Could not lock");
>> @@ -439,7 +466,7 @@
>>          }
>>          break;
>>      case XENMEM_add_to_physmap:
>> -        if ( lock_pages(arg, sizeof(struct xen_add_to_physmap)) )
>> +        if ( xc_lock_pages(xch, arg, sizeof(struct xen_add_to_physmap)) )
>>          {
>>              PERROR("Could not lock");
>>              goto out1;
>> @@ -448,7 +475,7 @@
>>      case XENMEM_current_reservation:
>>      case XENMEM_maximum_reservation:
>>      case XENMEM_maximum_gpfn:
>> -        if ( lock_pages(arg, sizeof(domid_t)) )
>> +        if ( xc_lock_pages(xch, arg, sizeof(domid_t)) )
>>          {
>>              PERROR("Could not lock");
>>              goto out1;
>> @@ -456,7 +483,7 @@
>>          break;
>>      case XENMEM_set_pod_target:
>>      case XENMEM_get_pod_target:
>> -        if ( lock_pages(arg, sizeof(struct xen_pod_target)) )
>> +        if ( xc_lock_pages(xch, arg, sizeof(struct xen_pod_target)) )
>>          {
>>              PERROR("Could not lock");
>>              goto out1;
>> @@ -535,7 +562,7 @@
>>      memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf));
>>  #endif
>> 
>> -    if ( lock_pages(pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
>> +    if ( xc_lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
>>      {
>>          PERROR("xc_get_pfn_list: pfn_buf lock failed");
>>          return -1;
>> @@ -618,7 +645,7 @@
>>          break;
>>      }
>> 
>> -    if ( (argsize != 0) && (lock_pages(arg, argsize) != 0) )
>> +    if ( (argsize != 0) && (xc_lock_pages(xch, arg, argsize) != 0) )
>>      {
>>          PERROR("Could not lock memory for version hypercall");
>>          return -ENOMEM;
>> diff -r 267ecb2ee5bf tools/libxc/xc_private.h
>> --- a/tools/libxc/xc_private.h        Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_private.h        Tue Jun 01 17:54:51 2010 +0100
>> @@ -85,10 +85,10 @@
>> 
>>  void *xc_memalign(size_t alignment, size_t size);
>> 
>> -int lock_pages(void *addr, size_t len);
>> +int xc_lock_pages(xc_interface*, void *addr, size_t len);
>>  void unlock_pages(void *addr, size_t len);
>> 
>> -int hcall_buf_prep(void **addr, size_t len);
>> +int hcall_buf_prep(xc_interface*, void **addr, size_t len);
>>  void hcall_buf_release(void **addr, size_t len);
>> 
>>  static inline void safe_munlock(const void *addr, size_t len)
>> @@ -117,7 +117,7 @@
>> 
>>      DECLARE_HYPERCALL;
>> 
>> -    if ( hcall_buf_prep(&op, len) != 0 )
>> +    if ( hcall_buf_prep(xch, &op, len) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> @@ -145,7 +145,7 @@
>>      int ret = -1;
>>      DECLARE_HYPERCALL;
>> 
>> -    if ( hcall_buf_prep((void **)&domctl, sizeof(*domctl)) != 0 )
>> +    if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> @@ -174,7 +174,7 @@
>>      int ret = -1;
>>      DECLARE_HYPERCALL;
>> 
>> -    if ( hcall_buf_prep((void **)&sysctl, sizeof(*sysctl)) != 0 )
>> +    if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out1;
>> diff -r 267ecb2ee5bf tools/libxc/xc_resume.c
>> --- a/tools/libxc/xc_resume.c Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_resume.c Tue Jun 01 17:54:51 2010 +0100
>> @@ -180,7 +180,7 @@
>>          goto out;
>>      }
>> 
>> -    if ( lock_pages(&ctxt, sizeof(ctxt)) )
>> +    if ( xc_lock_pages(xch, &ctxt, sizeof(ctxt)) )
>>      {
>>          ERROR("Unable to lock ctxt");
>>          goto out;
>> diff -r 267ecb2ee5bf tools/libxc/xc_tbuf.c
>> --- a/tools/libxc/xc_tbuf.c   Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_tbuf.c   Tue Jun 01 17:54:51 2010 +0100
>> @@ -120,7 +120,7 @@
>>      set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
>>      sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8;
>> 
>> -    if ( lock_pages(&bytemap, sizeof(bytemap)) != 0 )
>> +    if ( xc_lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 )
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          goto out;
>> diff -r 267ecb2ee5bf tools/libxc/xc_tmem.c
>> --- a/tools/libxc/xc_tmem.c   Tue Jun 01 14:25:59 2010 +0100
>> +++ b/tools/libxc/xc_tmem.c   Tue Jun 01 17:54:51 2010 +0100
>> @@ -14,7 +14,7 @@
>> 
>>      hypercall.op = __HYPERVISOR_tmem_op;
>>      hypercall.arg[0] = (unsigned long)op;
>> -    if (lock_pages(op, sizeof(*op)) != 0)
>> +    if (xc_lock_pages(xch, op, sizeof(*op)) != 0)
>>      {
>>          PERROR("Could not lock memory for Xen hypercall");
>>          return -EFAULT;
>> @@ -52,7 +52,7 @@
>>      op.u.ctrl.arg3 = arg3;
>> 
>>      if (subop == TMEMC_LIST) {
>> -        if ((arg1 != 0) && (lock_pages(buf, arg1) != 0))
>> +        if ((arg1 != 0) && (xc_lock_pages(xch, buf, arg1) != 0))
>>          {
>>              PERROR("Could not lock memory for Xen hypercall");
>>              return -ENOMEM;
>> 
> 
> 
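
For reference, the retry at the heart of the patch quoted above boils down to
the sketch below (simplified and illustrative: the function name is made up,
the page size is hard-coded to 4096 bytes, and the real xc_lock_pages
additionally backs off with usleep() and gives up after 100 attempts):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* If mlock() reports EAGAIN, touch every page so the kernel faults it
     * back into RAM, then try the lock again. */
    static int lock_pages_retry(void *addr, size_t len)
    {
        size_t off;
        volatile unsigned char sink;        /* keeps the reads below alive */

        for (;;) {
            if (mlock(addr, len) == 0)
                return 0;                   /* locked successfully */
            if (errno != EAGAIN)
                return -1;                  /* a real failure; inspect errno */
            for (off = 0; off < len; off += 4096)
                sink = ((volatile unsigned char *)addr)[off];
        }
    }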



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel