
Re: [Xen-devel] [v7][RFC][PATCH 08/13] xen/x86/p2m: set p2m_access_n for reserved device memory mapping



On 2014/10/31 16:25, Jan Beulich wrote:
On 31.10.14 at 03:50, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/30 17:24, Jan Beulich wrote:
On 30.10.14 at 08:39, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/29 17:20, Jan Beulich wrote:
Getting closer. Just set a to p2m->default_access before the if(),
and overwrite it when rc == 1 inside the if(). And properly handle
the error case (just logging a message - which btw lacks a proper
XENLOG_G_* prefix - doesn't seem enough to me).

Please check the following:

@@ -686,8 +686,22 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
        /* Now, actually do the two-way mapping */
        if ( mfn_valid(_mfn(mfn)) )
        {
-        rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
-                           p2m->default_access);
+        rc = 0;
+        a = p2m->default_access;
+        if ( !is_hardware_domain(d) )
+        {
+            rc = iommu_get_reserved_device_memory(p2m_check_reserved_device_memory,
+                                                  &gfn);
+            /* We need to set reserved device memory as p2m_access_n. */
+            if ( rc == 1 )
+                a = p2m_access_n;
+            else if ( rc < 0 )
+                printk(XENLOG_WARNING
+                       "Domain %d can't check reserved device memory.\n",
+                       d->domain_id);
+        }
+
+        rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t, a);
            if ( rc )
                goto out; /* Failed to update p2m, bail without updating m2p. */

The handling of "a" looks good now, but the error handling and
logging is still as broken as it was before.

Do you mean I'm missing some necessary info, like the gfn and mfn, so that printing the domain id, gfn and mfn would make the message informative enough?

Sorry, I'm having trouble understanding what you expect.

But I explained it already, and that explanation is still visible in
the quotes above. But to avoid any doubt, I'll repeat: "And

I tried to understand what you said, but I'm still confused, so I'm asking you to show me directly.

properly handle the error case (just logging a message - which
btw lacks a proper XENLOG_G_* prefix - doesn't seem enough
to me)."

It looks like there are two problems:

#1: the error message

If the current line is not fine, what about:

printk(XENLOG_G_WARNING "Domain %d can't check reserved device memory.\n", d->domain_id);

I mean, could you show me the exact change directly?

#2: the error handling

In the error case, what should I do? Currently we still create these mappings as normal. This means these mfns will be valid, so later we can't set them again and the device can't be assigned as passthrough. I think this makes sense. Or should we just stop setting up the mapping here?
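
For reference, a minimal sketch of the stricter alternative, i.e. failing the call instead of only logging; it reuses only the identifiers already visible in the hunk above, and whether bailing out here is really the desired behaviour is exactly the open question:

        if ( !is_hardware_domain(d) )
        {
            rc = iommu_get_reserved_device_memory(p2m_check_reserved_device_memory,
                                                  &gfn);
            if ( rc == 1 )
                a = p2m_access_n;   /* reserved device memory: no guest access */
            else if ( rc < 0 )
            {
                printk(XENLOG_G_WARNING
                       "Domain %d can't check reserved device memory.\n",
                       d->domain_id);
                goto out;           /* propagate the error rather than mapping anyway */
            }
        }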


But then again this code may change altogether if you avoid
populating the reserved regions in the first place.

Are you referring to this scenario?

#1 Here we first set these ranges as p2m_access_n
#2 We reset them to the 1:1 RMRR mapping with p2m_access_rw somewhere
#3 Someone may try to populate these ranges again

No. I pointed at the fact that if you avoid populating the holes,
there's no need to force them to p2m_access_n. Any attempts
to map other than the 1:1 thing there could then simply be
rejected.

I think any population should be rejected entirely, because a 1:1 mapping means the guest can access these RMRR ranges even when no device with an RMRR is assigned, right? Any access to these ranges would corrupt the real device usage.

Oh yes, of course I implied that the 1:1 mapping would be
permitted only for those ranges where the RMRR corresponds
to a device the guest got assigned.
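
A rough sketch of what that scheme might look like on the mapping path, assuming the reserved ranges are simply left unpopulated: the iommu_get_reserved_device_memory() call mirrors the one quoted earlier in this thread, while rmrr_assigned_to_domain() is a purely hypothetical helper standing in for the "device the guest got assigned" check, and the exact placement and error plumbing are assumptions:

    /* Hypothetical check on the mapping path, for illustration only. */
    if ( !is_hardware_domain(d) &&
         iommu_get_reserved_device_memory(p2m_check_reserved_device_memory,
                                          &gfn) == 1 )
    {
        /*
         * gfn falls inside a reserved device memory (RMRR) range: permit
         * only the identity (1:1) mapping, and only when the RMRR belongs
         * to a device assigned to this guest.  rmrr_assigned_to_domain()
         * is a hypothetical helper.
         */
        if ( mfn != gfn || !rmrr_assigned_to_domain(d, gfn) )
            return -EPERM;
    }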


Yeah.

Tim,

Please take a look at this; I hope it ensures we're finally on the same page :)

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

