
Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring is full


  • To: "Olaf Hering" <olaf@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Mon, 5 Dec 2011 08:34:19 -0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 05 Dec 2011 16:35:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> On Mon, Dec 05, Andres Lagar-Cavilla wrote:
>
>> > +    med->bit = bit;
>> I think it's been asked before for this to have a more expressive name.
>
> I have to recheck; AFAIK it was mem_bit, where the mem_ prefix is redundant.
How about pause_flag?
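
Something along these lines is what I'd picture (layout assumed for
illustration, not the actual struct from your patch):

    struct mem_event_domain
    {
        /* ... ring page, front/back ring state, ring lock ... */
        struct waitqueue_head wq;  /* vcpus sleeping on this ring */
        int pause_flag;            /* _VPF_mem_paging or _VPF_mem_access */
    };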

>
>> >  static int mem_event_disable(struct mem_event_domain *med)
>> >  {
>> > +    if (!list_empty(&med->wq.list))
>> > +        return -EBUSY;
>> > +
>> What does the caller do with EBUSY? Retry?
>
> Yes, and mail the devs at xen-devel that something isn't right ;-)
Heh, good one :)

>
> At least the pager uses this just in the exit path. I don't know whether
> the access and sharing tools enable/disable more than once during guest
> runtime.
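
Right. FWIW, the -EBUSY handling I'd expect from a tool's exit path looks
roughly like this (illustrative sketch only; the helpers are made-up names,
not anything in xenpaging or the patch):

    extern int disable_ring(void);   /* wraps the disable domctl */
    extern void drain_ring(void);    /* process any outstanding requests */

    /* Keep servicing the ring until the hypervisor lets us tear it down. */
    static int ring_teardown(void)
    {
        int rc;

        do {
            rc = disable_ring();
            if ( rc == -EBUSY )
                drain_ring();        /* let sleeping vcpus make progress */
        } while ( rc == -EBUSY );

        return rc;
    }
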
>
>> > @@ -287,7 +394,7 @@ int mem_event_domctl(struct domain *d, x
>> >              if ( p2m->pod.entry_count )
>> >                  break;
>> >
>> > -            rc = mem_event_enable(d, mec, med);
>> > +            rc = mem_event_enable(d, mec, _VPF_mem_paging, med);
>> >          }
>> >          break;
>> >
>> > @@ -326,7 +433,7 @@ int mem_event_domctl(struct domain *d, x
>> >              if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
>> >                  break;
>> >
>> > -            rc = mem_event_enable(d, mec, med);
>> > +            rc = mem_event_enable(d, mec, _VPF_mem_access, med);
>
>> Ok, the idea of the bit field is that different vcpus will sleep with
>> different pause flags, depending on the ring they're sleeping on. But this
>> is only used in wake_waiters, which is not used by all rings. In fact, why
>> do we still need wake_waiters once we have wait queues?
>
> Before this patch, mem_event_unpause_vcpus() was used to resume waiters
> for the ring itself and for room in the ring.
> Now there is mem_event_wake_waiters(), which signals that the ring is
> active, and mem_event_wake_requesters(), which signals that the ring now
> has room to place guest requests.
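
Ok. Just so we're looking at the same picture, this is roughly how I read
that split (sketch only; bodies assumed from your description, not quoted
from the patch, and I'm using the pause_flag name from above):

    /* Ring just became active: clear the per-ring pause flag on every
     * vcpu that went to sleep because no listener was attached yet. */
    static void mem_event_wake_waiters(struct domain *d,
                                       struct mem_event_domain *med)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
                vcpu_wake(v);
    }

    /* Ring has free slots again: kick the wait queue of vcpus that
     * blocked inside mem_event_put_request() on a full ring. */
    static void mem_event_wake_requesters(struct mem_event_domain *med)
    {
        wake_up_all(&med->wq);  /* assumed waitqueue call; exact API may differ */
    }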

I think that if there is no ring where one is expected, harsher actions
should happen; that is what we do in our patch, e.g.
p2m_mem_paging_populate -> no ring -> crash domain, or
p2m_mem_access_check -> access_required -> no ring -> crash domain.

That would eliminate wake_waiters, methinks?
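
In rough shape (paraphrased sketch, not the literal code from our patch, and
the ring/field names are assumptions):

    void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
    {
        /* No pager ring set up: nothing can ever service this request. */
        if ( d->mem_event->paging.ring_page == NULL )
        {
            gdprintk(XENLOG_ERR, "no paging ring for gfn %lx\n", gfn);
            domain_crash(d);
            return;
        }

        /* ... normal path: update p2m entry, post request, pause vcpu ... */
    }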

>
> I agree that only _VPF_mem_access is really needed, and _VPF_mem_paging
> could be removed because paging without having a ring first is not
> possible.
>
>
>> > @@ -653,7 +643,7 @@ gfn_found:
>> >      if(ret == 0) goto private_page_found;
>> >
>> >      old_page = page;
>> > -    page = mem_sharing_alloc_page(d, gfn);
>> > +    page = alloc_domheap_page(d, 0);
>> >      if(!page)
>> >      {
>> >          /* We've failed to obtain memory for private page. Need to re-add the
>> > @@ -661,6 +651,7 @@ gfn_found:
>> >          list_add(&gfn_info->list, &hash_entry->gfns);
>> >          put_gfn(d, gfn);
>> >          shr_unlock();
>> > +        mem_sharing_notify_helper(d, gfn);
>> This is nice. Do you think PoD could use this, should it ever run into an
>> ENOMEM situation? And what about mem_paging_prep? Perhaps, rather than a
>> sharing ring (which has bit-rotted), we could have an ENOMEM ring with a
>> utility launched by xencommons listening. The problem, again, is what if
>> the ENOMEM is itself caused by dom0 (e.g. a writable mapping of a shared page)?
>
> I have no idea about mem_sharing. I just move the existing code outside
> the lock so that mem_event_put_request() is (hopefully) called without
> any locks from mem_sharing_get_nr_saved_mfns().
> Since there is apparently no user of a sharing ring, this whole new
> mem_sharing_notify_helper() is a big no-op.
Fair enough. I do think that generally, for x86/mm, an ENOMEM mem_event
ring is a good idea. Later...
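
For the archives, here's the shape I'd expect the notify helper to take
(sketch only; its body isn't quoted in this thread, and the sharing ring
name and request fields are assumptions):

    static void mem_sharing_notify_helper(struct domain *d, unsigned long gfn)
    {
        mem_event_request_t req = { .gfn = gfn, .p2mt = p2m_ram_shared };

        /* No listener attached to the sharing ring: nothing to notify. */
        if ( mem_event_check_ring(d, &d->mem_event->share) < 0 )
            return;

        /* Called with no locks held, so it may safely sleep on a full ring. */
        mem_event_put_request(d, &d->mem_event->share, &req);
    }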

>
>> > @@ -1167,9 +1159,11 @@ void p2m_mem_access_resume(struct domain
>> >      if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>> >          vcpu_unpause(d->vcpu[rsp.vcpu_id]);
>> >
>> > -    /* Unpause any domains that were paused because the ring was full or no listener
>> > -     * was available */
>> > -    mem_event_unpause_vcpus(d);
>> > +    /* Wake vcpus waiting for room in the ring */
>> > +    mem_event_wake_requesters(&d->mem_event->access);
>> > +
>> > +    /* Unpause all vcpus that were paused because no listener was available */
>> > +    mem_event_wake_waiters(d, &d->mem_event->access);
>> Is this not used in p2m_mem_paging_resume? Why the difference? Why are
>> two mechanisms needed (wake_requesters, wake_waiters)?
>
> As said above, wake_waiters is for those who wait for the ring itself,
> and wake_requesters is for room in the ring.
> p2m_mem_paging_resume() always has a ring, so it does not need to call
> wake_waiters.
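
Ok. So side by side the resume paths look roughly like this (abridged
sketch, not the actual hunks; bodies assumed):

    void p2m_mem_access_resume(struct domain *d)
    {
        /* ... get response, vcpu_unpause() the originating vcpu ... */

        /* vcpus that blocked in mem_event_put_request() on a full ring */
        mem_event_wake_requesters(&d->mem_event->access);

        /* vcpus that paused because no listener was attached yet */
        mem_event_wake_waiters(d, &d->mem_event->access);
    }

    void p2m_mem_paging_resume(struct domain *d)
    {
        /* ... get response, fix up p2m entry, vcpu_unpause() ... */

        /* paging cannot happen without a ring, so only ring space matters */
        mem_event_wake_requesters(&d->mem_event->paging);
    }

If we went the crash-the-domain route above when a ring is missing, the
access path could drop the wake_waiters call as well.
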
>
>
> Do you have a suggestion for a better name?
>
> Olaf
>


