
Re: [Xen-devel] [PATCH 3/6] remove late (on-demand) construction of IOMMU page tables



> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 07 August 2019 11:32
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Julien Grall <julien.grall@xxxxxxx>;
> Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>; Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>;
> Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>; Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>;
> George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>;
> Tamas K Lengyel <tamas@xxxxxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [Xen-devel] [PATCH 3/6] remove late (on-demand) construction of IOMMU page tables
> 
> On 30.07.2019 15:44, Paul Durrant wrote:
> > NOTE: This patch will cause a small amount of extra resource to be used
> >        to accommodate IOMMU page tables that may never be used, since the
> >        per-domain IOMMU enable flag is currently set to the value
> >        of the global iommu_enable flag. A subsequent patch will add an
> >        option to the toolstack to allow it to be turned off if there is
> >        no intention to assign passthrough hardware to the domain.
> 
> In particular if the default of this is going to be "true" (I
> didn't look at that patch yet, but the wording above makes me
> assume so), in auto-ballooning mode without shared page tables
> more memory should imo be ballooned out of Dom0 now. It has
> always been a bug that IOMMU page tables weren't accounted for,
> but it would become even more prominent then.

Ultimately, once the whole series is applied, nothing much should change for 
those specifying passthrough h/w in an xl.cfg. The main difference will be 
that h/w can no longer be passed through to a domain that was not originally 
created with IOMMU page tables.
With patches applied only up to this point then, yes, every domain will get 
IOMMU page tables. I guess I'll take a look at the auto-ballooning code and 
see what needs to be done.
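FWIW, a rough back-of-envelope for the non-shared case (assuming the usual 
4-level, 4KiB page tables with 512 8-byte entries per table): each leaf table 
maps 512 * 4KiB = 2MiB, so the tables cost roughly 1/512th of the memory they 
map, i.e. ~2MiB per 1GiB of guest RAM, plus a small amount for the higher 
levels. That's the order of magnitude that would need to come out of Dom0 in 
the auto-ballooning case.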

> 
> > --- a/xen/arch/x86/hvm/mtrr.c
> > +++ b/xen/arch/x86/hvm/mtrr.c
> > @@ -783,7 +783,8 @@ HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr, 1,
> >
> >   void memory_type_changed(struct domain *d)
> >   {
> > -    if ( (has_iommu_pt(d) || cache_flush_permitted(d)) && d->vcpu && d->vcpu[0] )
> > +    if ( (is_iommu_enabled(d) || cache_flush_permitted(d)) && d->vcpu &&
> > +         d->vcpu[0] )
> 
> As a really minor comment - I think it wouldn't be bad for both
> d->vcpu references to end up on the same line.

Ok.
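Something like this, I assume:

    if ( (is_iommu_enabled(d) || cache_flush_permitted(d)) &&
         d->vcpu && d->vcpu[0] )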

> 
> > @@ -625,8 +548,7 @@ static void iommu_dump_p2m_table(unsigned char key)
> >       ops = iommu_get_ops();
> >       for_each_domain(d)
> >       {
> > -        if ( is_hardware_domain(d) ||
> > -             dom_iommu(d)->status < IOMMU_STATUS_initialized )
> > +        if ( !is_iommu_enabled(d) )
> >               continue;
> 
> Why do you drop the hwdom check here?

is_iommu_enabled() will always be true for the h/w domain if iommu_enabled is 
true, so there's no need for a special case.

> 
> > --- a/xen/include/asm-arm/iommu.h
> > +++ b/xen/include/asm-arm/iommu.h
> > @@ -21,7 +21,7 @@ struct arch_iommu
> >   };
> >
> >   /* Always share P2M Table between the CPU and the IOMMU */
> > -#define iommu_use_hap_pt(d) (has_iommu_pt(d))
> > +#define iommu_use_hap_pt(d) (is_iommu_enabled(d))
> 
> I'd suggest dropping the stray outer pair of parentheses at the
> same time.

Ok, will do.
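i.e. it becomes:

    #define iommu_use_hap_pt(d) is_iommu_enabled(d)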

  Paul

> 
> Jan