
Re: [Xen-devel] [PATCH v2] x86/HVM: p2m_ram_ro is incompatible with device pass-through



> -----Original Message-----
> From: Jan Beulich <JBeulich@xxxxxxxx>
> Sent: 04 July 2019 10:19
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v2] x86/HVM: p2m_ram_ro is incompatible with device pass-through
> 
> On 03.07.2019 17:22, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <JBeulich@xxxxxxxx>
> >> Sent: 03 July 2019 12:36
> >> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Cc: George Dunlap <George.Dunlap@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>;
> >> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> >> Roger Pau Monne <roger.pau@xxxxxxxxxx>
> >> Subject: [PATCH v2] x86/HVM: p2m_ram_ro is incompatible with device pass-through
> >>
> >> The write-discard property of the type can't be represented in IOMMU
> >> page table entries. Make sure the respective checks / tracking can't
> >> race, by utilizing the domain lock. The other sides of the sharing/
> >> paging/log-dirty exclusion checks should subsequently perhaps also be
> >> put under that lock then.
> >>
> >> Take the opportunity and also convert neighboring bool_t to bool in
> >> struct hvm_domain.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> v2: Don't set p2m_ram_ro_used when failing the request.
> >>
> >> --- a/xen/arch/x86/hvm/dm.c
> >> +++ b/xen/arch/x86/hvm/dm.c
> >> @@ -255,16 +255,33 @@ static int set_mem_type(struct domain *d
> >>
> >>        mem_type = array_index_nospec(data->mem_type, ARRAY_SIZE(memtype));
> >>
> >> -    if ( mem_type == HVMMEM_ioreq_server )
> >> +    switch ( mem_type )
> >>        {
> >>            unsigned int flags;
> >>
> >> +    case HVMMEM_ioreq_server:
> >>            if ( !hap_enabled(d) )
> >>                return -EOPNOTSUPP;
> >>
> >>            /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
> >>            if ( !p2m_get_ioreq_server(d, &flags) )
> >>                return -EINVAL;
> >> +
> >> +        break;
> >> +
> >> +    case HVMMEM_ram_ro:
> >> +        /* p2m_ram_ro can't be represented in IOMMU mappings. */
> >> +        domain_lock(d);
> >> +        if ( has_iommu_pt(d) )
> >> +            rc = -EXDEV;
> >> +        else
> >> +            d->arch.hvm.p2m_ram_ro_used = true;
> >
> > Do we really want this to be a one-way trip? On the face of it, it
> > would seem more desirable to keep a count of p2m_ram_ro entries,
> > such that once the last one is gone devices can be assigned again.
> 
> Well, at this point I'm not really up to introducing accounting of
> the number of uses of p2m_ram_ro. This could be a further step to
> be done in the future, if necessary.
> 
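Just to illustrate what I had in mind, here's a rough sketch of such
accounting (the p2m_ram_ro_count field and the helper are invented for
illustration; nothing like this exists in the tree):

    /*
     * Hypothetical: an 'unsigned long p2m_ram_ro_count' in struct
     * hvm_domain, updated under the domain lock wherever a p2m entry
     * changes type.
     */
    static void p2m_ram_ro_account(struct domain *d,
                                   p2m_type_t ot, p2m_type_t nt)
    {
        if ( nt == p2m_ram_ro && ot != p2m_ram_ro )
            d->arch.hvm.p2m_ram_ro_count++;
        else if ( ot == p2m_ram_ro && nt != p2m_ram_ro )
            d->arch.hvm.p2m_ram_ro_count--;
    }

Device assignment would then only need to fail while the count is
non-zero, rather than forever after the first use.
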
> > If not maybe it's time to go all the way and make iommu page table
> > construction part of domain create and then we simplify a lot of
> > code and we don't need any flag/refcount like this at all.
> 
> I've said this before: I don't think it should be a requirement to
> know at the time of a VM's creation whether it'll eventually have a
> pass-through device assigned. Furthermore, you realize that this
> suggestion of yours is contrary to what you've said further up:
> this way you'd make the two things exclusive of one another, without
> any recourse.

Yes, I realize the suggestions are contradictory. My point is that constructing
IOMMU page tables for an already-running domain is tricky, and it leads to
exactly the kind of issue you are trying to solve with the p2m_ram_ro_used
flag.
The whole subsystem is in need of an overhaul anyway, so I guess this band-aid
is OK for now.
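
For reference, I'd expect the check on the assignment side to end up
looking something like the sketch below (the helper name is invented;
I'm not quoting the actual hunk here):

    /* Sketch of the assignment-side half of the exclusion check. */
    static int hvm_ram_ro_check_for_assign(struct domain *d)
    {
        int rc = 0;

        domain_lock(d);
        /* Once p2m_ram_ro has been used, IOMMU mappings can't mirror it. */
        if ( d->arch.hvm.p2m_ram_ro_used )
            rc = -EXDEV;
        domain_unlock(d);

        return rc;
    }

i.e. the same domain lock serializes this against the set_mem_type()
path in the hunk above.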

  Paul