
Re: [Xen-devel] [PATCH RFC V7 4/5] xen, libxc: Request page fault injection via libxc



> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
> Sent: Tuesday, August 26, 2014 9:59 AM
> 
> On 08/26/14 18:49, Jan Beulich wrote:
> >>>> On 26.08.14 at 16:56, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> >> On 08/26/2014 05:44 PM, Jan Beulich wrote:
> >>>>>> On 26.08.14 at 16:24, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> >>>> On 08/26/2014 05:13 PM, Jan Beulich wrote:
> >>>>>>>> On 13.08.14 at 17:28, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> >>>>>> --- a/xen/include/asm-x86/hvm/domain.h
> >>>>>> +++ b/xen/include/asm-x86/hvm/domain.h
> >>>>>> @@ -141,6 +141,14 @@ struct hvm_domain {
> >>>>>>       */
> >>>>>>      uint64_t sync_tsc;
> >>>>>>
> >>>>>> +    /* Memory introspection page fault injection data. */
> >>>>>> +    struct {
> >>>>>> +        uint64_t address_space;
> >>>>>> +        uint64_t virtual_address;
> >>>>>> +        uint32_t errcode;
> >>>>>> +        bool_t valid;
> >>>>>> +    } fault_info;
> >>>>>
> >>>>> Sorry for noticing this only now, but how can this be a per-domain
> >>>>> thing rather than a per-vCPU one?
> >>>>
> >>>> The requirement for our introspection application has simply been to
> >>>> bring back in a swapped-out page, regardless of what VCPU ends up
> >>>> actually doing it.
> >>>
> >>> But please remember that what you add to the public code base
> >>> shouldn't be tied to specific needs of your application, it should
> >>> be coded in a generally useful way.
> >>
> >> Of course, perhaps I should have written "the scenario we're working
> >> with" rather than "the requirement for our application". I'm just trying
> >> to understand all the usual cases for this.
> >>
> >>> Furthermore, how would this work if you have 2 vCPU-s hit such
> >>> a condition, and you need to bring in 2 pages in parallel?
> >>
> >> Since this is all happening in the context of processing mem_events,
> >> it's not really possible for two VCPUs to need to do this in parallel,
> >> since processing mem_events is being done sequentially. A VCPU needs to
> >> put a mem_event in the ring buffer and pause before this hypercall can
> >> be called from userspace.
> >
> > I'd certainly want to hear Tim's opinion here before settling on
> > either model. Considering that this is at least mem-event related,
> > it's slightly odd you didn't copy him in the first place.
> 
> Sorry about that, scripts/get_maintainer.pl did not list him and I
> forgot to CC him.
> 
> 

From the code, this info looks like a precondition for the PF injection
rather than a record of a vCPU's faulting state, so keeping it as a
per-domain structure seems OK. The structure name 'fault_info' is too
generic, though...
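
As a side note, below is a minimal, self-contained model (plain C, not
Xen code) of how I read the mechanism. Every name in it, including the
struct and helper names, is made up for illustration and is not what the
patch actually defines:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the layout of the quoted per-domain hunk (names are mine). */
struct pf_inject_request {
    uint64_t address_space;     /* CR3 the request is meant for */
    uint64_t virtual_address;   /* address whose page should be paged back in */
    uint32_t errcode;           /* #PF error code to report */
    bool     valid;             /* a request is pending */
};

/* One slot per domain, as in the patch under discussion. */
static struct pf_inject_request domain_slot;

/*
 * Toolstack side: called while the reporting vCPU is paused after placing
 * its mem_event on the ring, so the single slot is known to be free.
 */
static void request_page_fault(uint64_t cr3, uint64_t va, uint32_t errcode)
{
    domain_slot = (struct pf_inject_request){ cr3, va, errcode, true };
}

/*
 * Hypervisor side: checked when a vCPU is about to re-enter the guest.
 * Whichever vCPU first resumes in the matching address space consumes
 * the request; the others are unaffected.
 */
static void maybe_inject_pf(int vcpu, uint64_t current_cr3)
{
    if ( domain_slot.valid && domain_slot.address_space == current_cr3 )
    {
        printf("vCPU%d: inject #PF, cr2=%#" PRIx64 ", errcode=%#x\n",
               vcpu, domain_slot.virtual_address, domain_slot.errcode);
        domain_slot.valid = false;      /* consume the single slot */
    }
}

int main(void)
{
    request_page_fault(0x1aa000, 0x7f32a0401000, 0x6); /* user write fault */
    maybe_inject_pf(0, 0x2bb000);  /* other address space: no injection   */
    maybe_inject_pf(1, 0x1aa000);  /* matching CR3: injected exactly once */
    maybe_inject_pf(0, 0x1aa000);  /* slot already consumed: no injection */
    return 0;
}

With a single per-domain slot this works because, as Razvan notes, the
reporting vCPU stays paused until the toolstack has filled the slot; a
per-vCPU slot would only become necessary if several such requests could
be outstanding at the same time.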

Thanks
Kevin
