
Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring is full


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 15 Dec 2011 06:56:03 -0800
  • Cc: olaf@xxxxxxxxx, tim@xxxxxxx
  • Delivery-date: Thu, 15 Dec 2011 14:56:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> Date: Tue, 13 Dec 2011 14:40:16 +0100
> From: Olaf Hering <olaf@xxxxxxxxx>
> To: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, tim@xxxxxxx
> Subject: Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring
>       is full
> Message-ID: <20111213134016.GA20700@xxxxxxxxx>
> Content-Type: text/plain; charset=utf-8
>
> On Fri, Dec 09, Andres Lagar-Cavilla wrote:
>
>> Olaf,
>> Tim pointed out we need both solutions to ring management in the
>> hypervisor. With our patch ("Improve ring management for memory events.
>> Do
>> not lose guest events."), we can handle the common case quickly, without
>> preempting VMs. With your patch, we can handle extreme situations of
>> ring
>> congestion with the big hammer called wait queue.
>
> With my patch the requests get processed as they come in; both foreign
> and target requests get handled equally. There is no special accounting.
>
> A few questions about your requirements:
> - Is the goal that each guest vcpu can always put at least one request?
Yes

> - How many requests should foreign vcpus place in the ring if the guest
>   has more vcpus than available slots in the ring? Just a single one so
>   that foreigners can also make some progress?
The idea is that foreign vcpus can place as many events as they want, as
long as each guest vcpu that is not blocked on a mem event has room to
send one mem event. Once we reach that border condition, no more foreign
mem events are accepted.
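
To make the rule concrete, here is a minimal sketch of the accounting I
have in mind. All names (mem_event_domain_sketch, ring_free_slots, and
so on) are illustrative stand-ins, not the actual Xen structures or API:

struct mem_event_domain_sketch {
    unsigned int ring_size;       /* total request slots in the ring    */
    unsigned int req_producers;   /* slots currently claimed            */
    unsigned int guest_vcpus;     /* vcpus belonging to the target      */
    unsigned int blocked_vcpus;   /* guest vcpus paused on a mem event  */
};

static unsigned int ring_free_slots(const struct mem_event_domain_sketch *med)
{
    return med->ring_size - med->req_producers;
}

/* Each guest vcpu not blocked on a mem event keeps one slot reserved,
 * so a foreign vcpu may only claim a slot beyond that reservation. */
static int foreign_may_claim_slot(const struct mem_event_domain_sketch *med)
{
    unsigned int reserved = med->guest_vcpus - med->blocked_vcpus;

    return ring_free_slots(med) > reserved;
}

/* A guest vcpu itself may always take one of the reserved slots. */
static int guest_may_claim_slot(const struct mem_event_domain_sketch *med)
{
    return ring_free_slots(med) > 0;
}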

The case in which there are far too many guest vcpus needs to be handled,
either by capping the maximum number of vcpus for domains that use mem
events, or by growing the ring size.
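
One illustrative guard for that case, reusing the sketch structure above,
would be a setup-time check that every guest vcpu can get a reserved slot
(growing the ring would be the alternative remedy):

/* Hypothetical setup-time check; strictly less than, so that foreign
 * vcpus can still make progress even with all guest vcpus unblocked. */
static int mem_event_ring_viable(const struct mem_event_domain_sketch *med)
{
    return med->guest_vcpus < med->ring_size;
}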

> - Should access and paging have the same rules for accounting?
Absolutely.

And both should use wait queues in the extreme case in which a guest vcpu
with a single action generates multiple memory events. Given that, when we
hit the border condition, a guest vcpu will place one event and be flagged
VPF_mem_event_paused (or whatever that flag is named), a flagged vcpu
generating another event is our cue to put that vcpu on a wait queue.
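
A sketch of that trigger, again with made-up names (the real flag and
wait-queue primitives would be whatever the patch defines):

struct vcpu_sketch {
    int paused_on_mem_event;   /* stands in for VPF_mem_event_paused */
};

/* Hypothetical helpers, declared only: the real code would park the
 * vcpu on a Xen wait queue and claim a ring slot, respectively. */
void wait_for_ring_space(struct vcpu_sketch *v,
                         struct mem_event_domain_sketch *med);
void ring_put_request(struct mem_event_domain_sketch *med);

static void guest_place_mem_event(struct vcpu_sketch *v,
                                  struct mem_event_domain_sketch *med)
{
    if ( v->paused_on_mem_event )
    {
        /* The vcpu already used its reserved slot for this action; a
         * second event from a flagged vcpu is the cue to put it on a
         * wait queue until the ring drains. */
        wait_for_ring_space(v, med);
    }

    ring_put_request(med);
    v->paused_on_mem_event = 1;
}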

Thanks
Andres
>
> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

