
Re: [Xen-devel] [PATCH 0 of 3] Update paging/sharing/access interfaces v2


  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Fri, 10 Feb 2012 10:13:44 -0800
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, ian.campbell@xxxxxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Fri, 10 Feb 2012 18:14:15 +0000
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id :in-reply-to:references:date:subject:from:to:cc:reply-to :mime-version:content-type:content-transfer-encoding; q=dns; s= lagarcavilla.org; b=ssX5oklkvyJf+PXJZSm9SwcfvYtvWaERKPupRQgO8mOm JYus43LmooqMUL7HhrQ1iZG9ljDMQhcpilvSLEpO0beJrWENbGSvl6030AV/WODa df78tx5XJedu9rMjCnkqvcx4gNsJ2B5Sq3JEIf5R5SbGPpFBBhLN+VzKJ7K5bBo=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> At 01:08 -0500 on 09 Feb (1328749705), Andres Lagar-Cavilla wrote:
>> (Was: switch from domctl to memops)
>> Changes from v1 posted Feb 2nd 2012
>>
>> - Patches 1 & 2 Acked-by Tim Deegan on the hypervisor side
>> - Added patch 3 to clean up the enable domctl interface, based on
>>   discussion with Ian Campbell
>>
>> Description from original post follows:
>>
>> Per-page operations in the paging, sharing, and access-tracking
>> subsystems are all implemented with domctls (e.g. a domctl to evict
>> one page, or to share one page).
>>
>> Under heavy load, the domctl path reveals a lack of scalability. The
>> domctl lock serializes dom0's vcpus in the hypervisor. When performing
>> thousands of per-page operations on dozens of domains, these vcpus
>> will spin in the hypervisor. Beyond the aggressive locking, an added
>> inefficiency of blocking vcpus on the domctl lock is that dom0 is
>> prevented from re-scheduling any of its other work-starved processes.
>>
>> We retain the domctl interface for setting up and tearing down
>> paging/sharing/mem-access for a domain, but we migrate all the
>> per-page operations to the memory_op hypercalls (e.g. XENMEM_*).
>>
>> This is a backwards-incompatible ABI change. It has been floating on
>> the list for a couple of weeks now, with no nacks thus far.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla>
>> Signed-off-by: Adin Scannell <adin@xxxxxxxxxxx>
>
> Applied 1 and 2; thanks.
>
> I'll leave patch 3 for others to comment -- I know there are out-of-tree
> users of the mem-access interface, and changing the hypercalls is less
> disruptive than changing the libxc interface.
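[For illustration, the per-page memop shape described in the quoted
summary might look roughly like the sketch below. All struct and
function names here are hypothetical stand-ins, not Xen's actual
XENMEM_* definitions; the point is only that one (domain, op, gfn)
descriptor goes through a single memory_op-style entry point instead
of one serialized domctl per page.]

```c
#include <stdint.h>

typedef uint16_t domid_t;

/* Hypothetical per-page operation kinds, one per subsystem mentioned
 * in the series (paging, sharing, access tracking). */
enum mem_op_kind {
    MEMOP_PAGING_EVICT,   /* evict one page to the pager */
    MEMOP_SHARING_SHARE,  /* share one page between domains */
    MEMOP_ACCESS_SET      /* set access permissions on one page */
};

/* Hypothetical per-page operation descriptor. */
struct mem_op {
    domid_t  domain;  /* target domain */
    uint8_t  kind;    /* one of enum mem_op_kind */
    uint64_t gfn;     /* guest frame number the op applies to */
};

/* Toy dispatcher standing in for a memory_op handler: each op only
 * touches per-domain state, so unrelated domains need not contend on
 * a single global (domctl-style) lock. */
int memory_op_dispatch(const struct mem_op *op)
{
    switch (op->kind) {
    case MEMOP_PAGING_EVICT:
    case MEMOP_SHARING_SHARE:
    case MEMOP_ACCESS_SET:
        /* A real implementation would act on op->domain / op->gfn. */
        return 0;   /* success */
    default:
        return -1;  /* unknown op */
    }
}
```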

Makes a lot of sense. Thanks.

I don't view this change as a sine qua non; rather, it falls under "it
would be nice if"...

Is there a timeout mechanism if out-of-tree consumers are not on the ball?

Actually, this hiatus allows me to float a perhaps cleaner way to map the
ring: the known problem is that the pager may die abruptly, while Xen is
still posting events to a ring page that now belongs to some other dom0
process. This is dealt with in the qemu-dm case by stuffing the ring in
an unused pfn (presumably somewhere in the mmio hole?).
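[To make the hazard concrete, here is a minimal single-producer /
single-consumer ring sketch, assuming only the general shape of a
shared event ring: producer and consumer indices living alongside the
slots on one shared page. This is illustrative and is not Xen's actual
mem_event ring or its RING_* macros.]

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SLOTS 8  /* power of two, so indices wrap with a mask */

struct event { uint64_t gfn; };

/* In the scenario above, this struct would live in the shared page
 * that both Xen (producer) and the pager (consumer) map. If the pager
 * dies and the page is recycled to another process, the producer side
 * keeps writing into memory it no longer should touch. */
struct event_ring {
    uint32_t prod;               /* next slot the producer writes */
    uint32_t cons;               /* next slot the consumer reads */
    struct event slot[RING_SLOTS];
};

bool ring_put(struct event_ring *r, struct event e)
{
    if (r->prod - r->cons == RING_SLOTS)
        return false;            /* ring full: producer must back off */
    r->slot[r->prod & (RING_SLOTS - 1)] = e;
    r->prod++;
    return true;
}

bool ring_get(struct event_ring *r, struct event *e)
{
    if (r->cons == r->prod)
        return false;            /* ring empty */
    *e = r->slot[r->cons & (RING_SLOTS - 1)];
    r->cons++;
    return true;
}
```

Placing such a ring in a "magic" unused pfn, rather than in pageable
dom0 process memory, is what sidesteps the stale-mapping problem when
the consumer exits.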

Would that work? Is there a policy for parceling out these "magic pfns"?

Andres

>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
