
Re: [Xen-devel] [PATCH V2 4/5] xen, libxc: Request page fault injection via libxc



On 09/05/2014 12:19 PM, Razvan Cojocaru wrote:
> On 09/04/2014 08:08 PM, Tian, Kevin wrote:
>>> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
>>> Sent: Thursday, September 04, 2014 10:02 AM
>>>
>>> On 09/04/14 19:56, Razvan Cojocaru wrote:
>>>> On 09/04/14 18:58, Tian, Kevin wrote:
>>>>>> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
>>>>>> Sent: Thursday, September 04, 2014 1:02 AM
>>>>>>
>>>>>> On 09/04/2014 10:54 AM, Razvan Cojocaru wrote:
>>>>>>> On 09/04/2014 04:25 AM, Tian, Kevin wrote:
>>>>>>>>> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
>>>>>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>>>>>> index 5761ff9..5d3e4d4 100644
>>>>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>>>>> @@ -420,6 +420,31 @@ static bool_t hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>>>>>>>>>      return 1;
>>>>>>>>>  }
>>>>>>>>>
>>>>>>>>> +static bool_t hvm_is_pf_requested(struct vcpu *v)
>>>>>>>>> +{
>>>>>>>>> +    const struct domain *d = v->domain;
>>>>>>>>> +    struct segment_register seg;
>>>>>>>>> +    uint64_t mask;
>>>>>>>>> +
>>>>>>>>> +    hvm_get_segment_register(v, x86_seg_ss, &seg);
>>>>>>>>> +
>>>>>>>>> +    if ( seg.attr.fields.dpl != 3 ) /* Guest is not in user mode */
>>>>>>>>> +        return 0;
>>>>>>>>> +
>>>>>>>>> +    if ( hvm_long_mode_enabled(v) )
>>>>>>>>> +        mask = PADDR_MASK & PAGE_MASK; /* Bits 51:12. */
>>>>>>>>> +    else if ( hvm_pae_enabled(v) )
>>>>>>>>> +        mask = 0x00000000ffffffe0; /* Bits 31:5. */
>>>>>>>>> +    else
>>>>>>>>> +        mask = (uint32_t)PAGE_MASK; /* Bits 31:12. */
>>>>>>>>> +
>>>>>>>>> +    if ( (v->arch.hvm_vcpu.guest_cr[3] & mask) !=
>>>>>>>>> +         (d->arch.hvm_domain.inject_trap.cr3 & mask) )
>>>>>>>>> +        return 0;
>>>>>>>>> +
>>>>>>>>> +    return 1;
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>>  void hvm_do_resume(struct vcpu *v)
>>>>>>>>>  {
>>>>>>>>>      struct domain *d = v->domain;
>>>>>>>>> @@ -451,6 +476,15 @@ void hvm_do_resume(struct vcpu *v)
>>>>>>>>>      }
>>>>>>>>>
>>>>>>>>>      /* Inject pending hw/sw trap */
>>>>>>>>
>>>>>>>> you want to make the comment more introspection-related.
>>>>>>>>
>>>>>>>>> +    if ( d->arch.hvm_domain.inject_trap.vector != -1 &&
>>>>>>>>> +         v->arch.hvm_vcpu.inject_trap.vector == -1 &&
>>>>>>>>> +         hvm_is_pf_requested(v) )
>>>>>>>>> +    {
>>>>>>>>> +        hvm_inject_trap(&d->arch.hvm_domain.inject_trap);
>>>>>>>>> +        d->arch.hvm_domain.inject_trap.vector = -1;
>>>>>>>>> +    }
>>>>>>>>> +
>>>>>>>>> +    /* Inject pending hw/sw trap */
>>>>>>>>>      if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
>>>>>>>>>      {
>>>>>>>>>          hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
>>>>>>>>> @@ -1473,9 +1507,10 @@ int hvm_domain_initialise(struct domain *d)
>>>>>>>>>              printk(XENLOG_G_INFO "PVH guest must have HAP on\n");
>>>>>>>>>              return -EINVAL;
>>>>>>>>>          }
>>>>>>>>> -
>>>>>>>>>      }
>>>>>>>>>
>>>>>>>>> +    d->arch.hvm_domain.inject_trap.vector = -1;
>>>>>>>>> +
>>>>>>>>
>>>>>>>> A question here. With the new design, you now have two places which
>>>>>>>> may cache fault injection information. If, unfortunately, two
>>>>>>>> consecutive hypercalls arrive, one with vcpuid=-1 and the other with
>>>>>>>> vcpuid=n, it's possible that both injections will be handled together
>>>>>>>> if the same vcpu is chosen for the two pending injections.
>>>>>>>>
>>>>>>>> I think the hypercall needs a check of all previous pending requests,
>>>>>>>> not only in the domain-specific structure as you did in this version.
>>>>>>
>>>>>> If two consecutive hypercalls take place, one with a wildcard and then
>>>>>> one with a specific vcpuid, the per-domain injection data will indeed
>>>>>> be set along with the VCPU-specific injection data.
>>>>>
>>>>> I'm thinking of one hypercall with a wildcard, and another hypercall
>>>>> with a specific vcpuid.
>>>>
>>>> If I understand this correctly, that's what my original design did, with
>>>> xc_domain_request_pagefault(). Please see:
>>>>
>>>> http://lists.xen.org/archives/html/xen-devel/2014-08/msg02782.html
>>>>
>>>> But it has been argued that Xen already has HVMOP_inject_trap and that a
>>>> new hypercall should only be added if it's not possible to extend the
>>>> existing one.
>>>
>>> I might have misunderstood your comment - you were probably just
>>> clarifying the sequence of hypercalls, not suggesting implementing
>>> a separate hypercall for the wildcard case.
>>>
>>>
>>
>> Exactly. Just want to see a consistent policy. In the current hypercall
>> implementation, you check the pending domain structure, which implies that
>> only one pending injection is allowed. But with the vcpu structure it's
>> possible to have two or even more pending at the same time. Need to
>> clarify your policy and make it consistent. :-)
> 
> I've modified the code to do this:
> 
>         rc = -ENOENT;
> 
>         if ( tr.vcpuid == (uint32_t)~0 ) /* Any VCPU. */
>         {
>             unsigned int i;
> 
>             for ( i = 0; i < d->max_vcpus; i++ )
>                 if ( (v = d->vcpu[i]) != NULL &&
>                      v->arch.hvm_vcpu.inject_trap.vector == -1 )
>                 {
>                     printk("Busy cpu\n");
>                     rc = -EBUSY;
>                     break;
>                 }

Please ignore my previous message; the test should have been, of course,
v->arch.hvm_vcpu.inject_trap.vector != -1. I will do more testing and send
a new series.
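
Roughly, the corrected wildcard path would look like the sketch below
(untested; the extra check of the domain-wide slot is only an assumption on
my part, following Kevin's point about covering all previous pending
requests):

        rc = -ENOENT;

        if ( tr.vcpuid == (uint32_t)~0 ) /* Any VCPU. */
        {
            unsigned int i;

            rc = 0;

            /* Assumed extra check: the domain-wide slot may also still
             * hold a pending injection from an earlier request. */
            if ( d->arch.hvm_domain.inject_trap.vector != -1 )
                rc = -EBUSY;

            /* Corrected per-VCPU test: refuse while any VCPU still has
             * a pending injection, i.e. vector != -1. */
            for ( i = 0; rc == 0 && i < d->max_vcpus; i++ )
                if ( (v = d->vcpu[i]) != NULL &&
                     v->arch.hvm_vcpu.inject_trap.vector != -1 )
                    rc = -EBUSY;

            /* If rc is still 0 at this point, record the request at
             * domain level as before, for hvm_do_resume() to pick up. */
        }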


Thanks,
Razvan Cojocaru



 

