Re: Event delivery and "domain blocking" on PVHv2



On 22.06.2020 18:09, Roger Pau Monné wrote:
> On Mon, Jun 22, 2020 at 05:31:00PM +0200, Martin Lucina wrote:
>> On 2020-06-22 15:58, Roger Pau Monné wrote:
>>> On Mon, Jun 22, 2020 at 12:58:37PM +0200, Martin Lucina wrote:
>>>> Aha! Thank you for pointing this out. I think you may be right, but
>>>> this
>>>> should be possible without doing the demuxing in interrupt context.
>>>
>>> If you don't do the demuxing in the interrupt context (ie: making the
>>> interrupt handler a noop), then you don't likely need such interrupt
>>> anyway?
>>
>> I need the/an interrupt to wake the VCPU from HLT state if we went to sleep.
>>
>>>
>>>> How about this arrangement, which appears to work for me; no hangs I
>>>> can see
>>>> so far and domU survives ping -f fine with no packet loss:
>>>>
>>>> CAMLprim value
>>>> mirage_xen_evtchn_block_domain(value v_deadline)
>>>> {
>>>>     struct vcpu_info *vi = VCPU0_INFO();
>>>>     solo5_time_t deadline = Int64_val(v_deadline);
>>>>
>>>>     if (solo5_clock_monotonic() < deadline) {
>>>>         __asm__ __volatile__ ("cli" : : : "memory");
>>>>         if (vi->evtchn_upcall_pending) {
>>>>             __asm__ __volatile__ ("sti");
>>>>         }
>>>>         else {
>>>>             hypercall_set_timer_op(deadline);
>>>
>>> What if you set a deadline so close that evtchn_upcall_pending gets
>>> set by Xen here and the interrupt is injected? You would execute the
>>> noop handler and just hlt, and could likely end up in the same blocked
>>> situation as before?
>>
>> Why would an interrupt be injected here? Doesn't the immediately preceding
>> "cli" disable that?
> 
> Well, I mean between the sti and the hlt instruction.

If EFLAGS.IF was clear before the STI, then the first point at which
an interrupt can be injected is once HLT is already executing (i.e.
it will wake the CPU from this HLT). That's the so-called "STI
shadow".
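
To illustrate, a minimal sketch of how the quoted function's tail
could rely on this (assumes the same VCPU0_INFO()/hypercall_set_timer_op
helpers as in Martin's snippet above; this is privileged guest code,
not something runnable in userspace):

    /* Sketch only: race-free block-until-event-or-deadline on x86.
     * The key point is the "sti; hlt" pair: the STI shadow defers
     * interrupt delivery until after HLT has begun executing, so an
     * event arriving between the two instructions still wakes us. */
    __asm__ __volatile__ ("cli" : : : "memory");
    if (vi->evtchn_upcall_pending) {
        /* An event is already pending: don't sleep at all. */
        __asm__ __volatile__ ("sti");
    }
    else {
        /* Arm the one-shot timer, then atomically re-enable
         * interrupts and halt. No window exists here in which a
         * wakeup can be lost. */
        hypercall_set_timer_op(deadline);
        __asm__ __volatile__ ("sti; hlt");
    }

A separate "sti" followed by "hlt" as two statements would have the
same effect only if the compiler cannot reorder or insert anything
between them, which is why keeping both in one asm statement is the
safer idiom.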

Jan
