
Re: [Xen-devel] [v7][RFC][PATCH 08/13] xen/x86/p2m: set p2m_access_n for reserved device memory mapping



On 2014/10/27 18:33, Jan Beulich wrote:
On 27.10.14 at 10:05, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/24 23:11, Jan Beulich wrote:
On 24.10.14 at 09:34, <tiejun.chen@xxxxxxxxx> wrote:
   int
   guest_physmap_add_entry(struct domain *d, unsigned long gfn,
                           unsigned long mfn, unsigned int page_order,


+            if ( rc )

And the return value from the called function is of type int -
non-zero may not just mean "true" but (when negative) also
"error". You need to distinguish these cases.

But in our case it's impossible to get a negative value.

Being guaranteed by what? Please don't simply take the
_current implementation_ of iommu_get_reserved_device_memory()
as reference - it could be changed at any time, and it allowing for an

Okay.
        if ( rc == 1 )

This means we hit an RMRR page; the other failure cases just post a warning message.

error return status already would make it perfectly fine for someone
adding an actual case thereof not to go through all existing callers
to check whether they can cope. This is a general code quality
requirement to assure things remain maintainable.
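
Understood. Just to confirm I follow, here is a minimal sketch of how the two cases could be told apart at this call site (the local 'a' holding the access value, and the exact placement, are only my illustration, not the current patch):

        /* rc is what iommu_get_reserved_device_memory() returned above. */
        if ( rc < 0 )
            return rc;            /* real error: propagate (unwinding omitted) */
        else if ( rc == 1 )
            a = p2m_access_n;     /* gfn lies in a reserved (RMRR) range */
        /* rc == 0: not reserved, keep the default access. */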

+            {
+                /*
+                 * Just set p2m_access_n in case of shared-ept
+                 * or non-shared ept but 1:1 mapping.
+                 */
+                if ( iommu_use_hap_pt(d) ||
+                     (!iommu_use_hap_pt(d) && mfn == gfn) )

How would, other than by chance, mfn equal gfn here? Also the
double use of iommu_use_hap_pt(d) is pointless here.

There are two scenarios we need to consider:

#1 In the case of shared EPT:

We always need to check, so iommu_use_hap_pt(d) is fine.

#2 In the case of non-shared EPT:

If mfn != gfn, I think the guest doesn't access the RMRR range, so it's allowed.

And what if subsequently a device needing a 1:1 mapping at this GFN
gets assigned? (I simply don't see why shared vs non-shared would
matter here.)

In the case of non-shared EPT we just create the VT-d table if we assign a device with a 1:1 RMRR mapping. So as long as mfn != gfn, it's not necessary to set p2m_access_n.

In the case of shared EPT, we have to set p2m_access_n in all scenarios.
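
And on the condition itself: the double iommu_use_hap_pt(d) test can indeed be folded, since the quoted check is equivalent to the shorter form below (same semantics, just without the redundant second call):

                /* A || (!A && B) is simply A || B */
                if ( iommu_use_hap_pt(d) || mfn == gfn )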


+                {
+                    rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
+                                       p2m_access_n);
+                    if ( rc )
+                        gdprintk(XENLOG_WARNING, "set rdm p2m failed: (%#lx)\n",
+                                 gfn);

Such messages are (due to acting on a foreign domain) relatively
useless without also logging the domain that is affected. Conversely,
logging the current domain and vCPU (due to using gdprintk()) is
rather pointless. Also please drop either the colon or the
parentheses in the message.

Can P2M_DEBUG work here?

P2M_DEBUG("set rdm p2m failed: %#lx\n", gfn);

I don't think this would magically add the missing information. Plus it

Sorry, is this okay?

gdprintk(XENLOG_WARNING, "Domain %hu set rdm p2m failed: %#lx\n",
                                 d->domain_id, gfn);

And I don't think I understand what you said properly, so I will ask others.

would limit output to the !NDEBUG case, putting the practical
usefulness of this under question even more.
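
As a rough sketch, would something along these lines address both points, i.e. name the affected domain and not depend on !NDEBUG? (Level and wording here are only an illustration.)

                    if ( rc )
                        printk(XENLOG_G_WARNING
                               "d%d: set rdm p2m entry for gfn %#lx failed\n",
                               d->domain_id, gfn);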

But anyway, looking at the existing code again, I think you'd be better
off falling through to the p2m_set_entry() that's already there, just

Are you saying I should do this in p2m_set_entry()?

altering the access permission value you pass. Less code, better

But here, guest_physmap_add_entry() just initially sets p2m_access_n. We won't alter the access permission until we really assign a device and need to create the 1:1 mapping.
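
To check that I read the "falling through" suggestion correctly, here is a sketch of what I understand it to mean (the surrounding guest_physmap_add_entry() context and p2m->default_access are my assumptions from the existing code, not part of this patch):

        p2m_access_t a = p2m->default_access;

        /* rc == 1 means the gfn was flagged as reserved device memory. */
        if ( rc == 1 && (iommu_use_hap_pt(d) || mfn == gfn) )
            a = p2m_access_n;

        /*
         * Fall through to the p2m_set_entry() call that is already there,
         * just passing 'a' instead of the unconditional default access.
         */
        rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t, a);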

Thanks
Tiejun

readable.

Jan


