
Re: [Xen-devel] [PATCH v7 2/7] iommu: track reserved ranges using a rangeset



>>> On 15.10.18 at 12:35, <paul.durrant@xxxxxxxxxx> wrote:
> @@ -1987,28 +1986,35 @@ static int rmrr_identity_mapping(struct domain *d, 
> bool_t map,
>              if ( --mrmrr->count )
>                  return 0;
>  
> -            while ( base_pfn < end_pfn )
> +            err = rangeset_remove_range(hd->reserved_ranges,
> +                                        base_pfn, end_pfn);
> +            while ( !err && base_pfn < end_pfn )
>              {
>                  if ( clear_identity_p2m_entry(d, base_pfn) )
> -                    ret = -ENXIO;
> +                    err = -ENXIO;
> +
>                  base_pfn++;
>              }

Please note how previously this loop was not exited early in case of error.
This was intentional and should be retained.

Also, shouldn't you remove the frames from the rangeset only once the P2M
entries were cleared successfully? Likewise you add them to the rangeset
ahead of setting up the P2M entries.

Finally - why the rename from ret to err?
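
Just to illustrate what I have in mind (entirely untested, and ignoring
declaration placement - start_pfn is a local copy I'm introducing only
because the loop advances base_pfn):

            unsigned long start_pfn = base_pfn;

            if ( --mrmrr->count )
                return 0;

            while ( base_pfn < end_pfn )
            {
                if ( clear_identity_p2m_entry(d, base_pfn) )
                    ret = -ENXIO; /* latch the error, but keep clearing */
                base_pfn++;
            }

            /* Drop the reserved range only once all entries are gone. */
            if ( !ret )
                ret = rangeset_remove_range(hd->reserved_ranges,
                                            start_pfn, end_pfn);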

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -241,6 +241,10 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>          else
>              rc = iommu_map_page(d, _dfn(pfn), _mfn(pfn),
>                                  IOMMUF_readable | IOMMUF_writable);
> +
> +        if ( !rc && !page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
> +            rc = rangeset_add_singleton(dom_iommu(d)->reserved_ranges, pfn);

Along the lines of the earlier remark, wouldn't it be better to set this
before doing the mapping? It's not _that_ important for Dom0, but just to
avoid setting a bad precedent, consistency would seem desirable.
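
I.e. roughly the below (again untested, and with the other branch of your
existing if/else elided for brevity):

        /* Record the reserved page before establishing the mapping ... */
        if ( !page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
            rc = rangeset_add_singleton(dom_iommu(d)->reserved_ranges, pfn);

        /* ... and only map it if that succeeded. */
        if ( !rc )
            rc = iommu_map_page(d, _dfn(pfn), _mfn(pfn),
                                IOMMUF_readable | IOMMUF_writable);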

Jan




 

