[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Hypercall continuation and wait_event


  • To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • From: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>
  • Date: Mon, 9 Apr 2012 13:16:29 -0700 (PDT)
  • Delivery-date: Mon, 09 Apr 2012 20:20:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Keir,

Thanks for your replies! Just one more question about
local_events_need_delivery(). Under what (common) conditions would I expect to
have local events that need delivery?

Ruslan



----- Original Message -----
From: Keir Fraser <keir.xen@xxxxxxxxx>
To: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx" 
<xen-devel@xxxxxxxxxxxxx>
Cc: 
Sent: Monday, April 9, 2012 8:09 PM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@xxxxxxxxx> wrote:

> Thanks for the reply.
> 
> Since it can take arbitrarily long for an event to arrive (e.g., it is coming
> from a different guest on a user request), how do I need to handle this
> case? Does it mean that I only need to make sure that nothing gets scheduled
> on this VCPU in the guest?

Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
sleep within wait_event within the hypercall context. Hence you must not
hold any hypervisor spinlocks either, for example.

> Also, it is not exactly clear to me how wait_event avoids the need for
> hypercall continuation. What about local_events_need_delivery() or
> softirq_pending()? Are they going to be handled by wait_event internally?

Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
the duration that you're descheduled. And if local_events_need_delivery(),
that's too bad; they have to wait for the VCPU to wake up on the event.

-- Keir

> Ruslan
> 
> 
> 
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@xxxxxxxxx>
> To: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx"
> <xen-devel@xxxxxxxxxxxxx>
> Cc: 
> Sent: Monday, April 9, 2012 6:54 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@xxxxxxxxx> wrote:
> 
>> Hi
>> 
>> I am curious how I can properly support hypercall continuation and
>> wait_event.
>> I have a dedicated VCPU in a domain which makes a special hypercall, and the
>> hypercall waits for a certain event to arrive. I am using the wait queues
>> available in Xen, so wait_event will be invoked in the hypercall once it is
>> ready to accept events. However, my understanding is that even though I have
>> a dedicated VCPU for this hypercall, I still may need to support hypercall
>> continuation properly. (Is this the case?) So, my question is how exactly
>> the need for hypercall
> 
> No, it's not the case; the old hypercall_create_continuation() mechanism does
> not need to be used with wait_event().
> 
> -- Keir
> 
>> preemption may affect wait_event() and wait() operations, and where would I
>> need to do hypercall_preempt_check()?
>> 
>> Thank you!
>> Ruslan
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-devel
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
