
RE: [Xen-devel] [PATCH 6/6] Change MMU_PT_UPDATE_RESERVE_AD to support update page table for foreign domain


  • To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Date: Mon, 1 Jun 2009 16:40:40 +0800
  • Accept-language: en-US
  • Acceptlanguage: en-US
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 01 Jun 2009 01:42:07 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acnh4J83XJstlRtsS9u8RG96aAW7awAoIXfQAAM2lcAAAXUgAA==
  • Thread-topic: [Xen-devel] [PATCH 6/6] Change MMU_PT_UPDATE_RESERVE_AD to support update page table for foreign domain

Thanks for the suggestion. I'm always nervous about API changes.

I'm still considering whether there is any other potential usage mode for patch 5/6
(i.e. changing the page table or exchanging memory for another domain), but,
frustratingly, I can't find any other requirement.

Thanks
-- jyh


xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
> I'd pack an extra domid into the top 16 bits of the foreigndom parameter
> to mmu_update(). The bottom 16 bits remain the foreign owner of the data
> page. The upper 16 bits, if non-zero, are the foreign owner of the
> page-table page (we could bias this field by +1 so that dom0 can be
> encoded as well).
> 
> -- Keir
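
A minimal standalone sketch of that encoding, just to make the bit layout
concrete (the helper names are purely illustrative and not part of any existing
Xen interface; the +1 bias is the optional tweak mentioned above so that dom0
can still be expressed in the upper half):

    #include <stdint.h>

    typedef uint16_t domid_t;

    /* Pack the two owners into the 32-bit foreigndom argument:
     * bits  0-15: owner of the data page (unchanged from today),
     * bits 16-31: owner of the page-table page, biased by +1 so that
     *             0 keeps meaning "the page table belongs to the caller". */
    static inline uint32_t pack_foreigndom(domid_t data_owner, domid_t pt_owner)
    {
        return ((uint32_t)(pt_owner + 1) << 16) | data_owner;
    }

    /* Recover the data-page owner from the low 16 bits. */
    static inline domid_t foreigndom_data_owner(uint32_t fd)
    {
        return (domid_t)(fd & 0xffff);
    }

    /* Recover the page-table owner; returns 0 if none was encoded,
     * i.e. the caller itself owns the page table. */
    static inline int foreigndom_pt_owner(uint32_t fd, domid_t *pt_owner)
    {
        uint32_t hi = fd >> 16;

        if ( hi == 0 )
            return 0;
        *pt_owner = (domid_t)(hi - 1);
        return 1;
    }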
> 
> On 01/06/2009 07:24, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
> 
>> I want to clarify that this patch needs more discussion, because there
>> is no clear way to distinguish whether the page-table address (i.e. the
>> address passed in req[:2]) is owned by the current domain or by the
>> foreign domain. So I just check mfn_valid(), but I'm not sure this is
>> the right method.
>> 
>> Thanks
>> Yunhong Jiang
>> 
>> xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
>>> Currently MMU_PT_UPDATE_PRESERVE_AD only supports updating the page
>>> table of the current domain. This patch adds support for foreign
>>> domains as well.
>>> 
>>> Signed-off-by: Jiang, Yunhong <yunhong.jiang@xxxxxxxxx>
>>> 
>>> Update the page table update hypercall.
>>> 
>>> diff -r f6457425560b xen/arch/x86/mm.c
>>> --- a/xen/arch/x86/mm.c Wed May 27 03:22:43 2009 +0800
>>> +++ b/xen/arch/x86/mm.c Wed May 27 03:40:38 2009 +0800
>>> @@ -110,6 +110,7 @@
>>>  #include <asm/hypercall.h>
>>>  #include <asm/shared.h>
>>>  #include <public/memory.h>
>>> +#include <public/sched.h>
>>>  #include <xsm/xsm.h>
>>>  #include <xen/trace.h>
>>> 
>>> @@ -2990,7 +2991,8 @@ int do_mmu_update(
>>>      struct page_info *page;
>>>      int rc = 0, okay = 1, i = 0;
>>>      unsigned int cmd, done = 0;
>>> -    struct domain *d = current->domain;
>>> +    struct domain *d = current->domain, *pt_owner = NULL;
>>> +    struct vcpu *v = current;
>>>      struct domain_mmap_cache mapcache;
>>> 
>>>      if ( unlikely(count & MMU_UPDATE_PREEMPTED) )
>>> @@ -3051,10 +3053,35 @@ int do_mmu_update(
>>>              gmfn = req.ptr >> PAGE_SHIFT;
>>>              mfn = gmfn_to_mfn(d, gmfn);
>>> 
>>> -            if ( unlikely(!get_page_from_pagenr(mfn, d)) )
>>> +            if (!mfn_valid(mfn))
>>> +                mfn = gmfn_to_mfn(FOREIGNDOM, gmfn);
>>> +            if (!mfn_valid(mfn))
>>>              {
>>>                  MEM_LOG("Could not get page for normal update");
>>>                  break;
>>> +            }
>>> +
>>> +            pt_owner = page_get_owner_and_reference(mfn_to_page(mfn));
>>> +
>>> +            if ( pt_owner != d )
>>> +            {
>>> +                if ( pt_owner == FOREIGNDOM )
>>> +                {
>>> +                    spin_lock(&FOREIGNDOM->shutdown_lock);
>>> +                    if ( !IS_PRIV(d) ||
>>> +                         !FOREIGNDOM->is_shut_down ||
>>> +                          (FOREIGNDOM->shutdown_code != SHUTDOWN_suspend) )
>>> +                    {
>>> +                        spin_unlock(&FOREIGNDOM->shutdown_lock);
>>> +                        rc = -EPERM;
>>> +                        break;
>>> +                    }
>>> +                    v = FOREIGNDOM->vcpu[0];
>>> +                }else
>>> +                {
>>> +                    rc = -EPERM;
>>> +                    break;
>>> +                }
>>>              }
>>> 
>>>              va = map_domain_page_with_cache(mfn, &mapcache);
>>> @@ -3070,24 +3097,21 @@ int do_mmu_update(
>>>                  {
>>>                      l1_pgentry_t l1e = l1e_from_intpte(req.val);
>>>                      okay = mod_l1_entry(va, l1e, mfn,
>>> -                                        cmd == MMU_PT_UPDATE_PRESERVE_AD,
>>> -                                        current);
>>> +                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>>                  }
>>>                  break;
>>>                  case PGT_l2_page_table:
>>>                  {
>>>                      l2_pgentry_t l2e = l2e_from_intpte(req.val);
>>>                      okay = mod_l2_entry(va, l2e, mfn,
>>> -                                        cmd == MMU_PT_UPDATE_PRESERVE_AD,
>>> -                                        current);
>>> +                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>>                  }
>>>                  break;
>>>                  case PGT_l3_page_table:
>>>                  {
>>>                      l3_pgentry_t l3e = l3e_from_intpte(req.val);
>>>                      rc = mod_l3_entry(va, l3e, mfn,
>>> -                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1,
>>> -                                      current);
>>> +                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1, v);
>>>                      okay = !rc;
>>>                  }
>>>                  break;
>>> @@ -3096,8 +3120,7 @@ int do_mmu_update(
>>>                  {
>>>                      l4_pgentry_t l4e = l4e_from_intpte(req.val);
>>>                      rc = mod_l4_entry(va, l4e, mfn,
>>> -                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1,
>>> -                                      current);
>>> +                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1, v);
>>>                      okay = !rc;
>>>                  }
>>>                  break;
>>> @@ -3105,7 +3128,7 @@ int do_mmu_update(
>>>                  case PGT_writable_page:
>>>                      perfc_incr(writable_mmu_updates);
>>>                      okay = paging_write_guest_entry(
>>> -                        current, va, req.val, _mfn(mfn));
>>> +                        v, va, req.val, _mfn(mfn));
>>>                      break;
>>>                  }
>>>                  page_unlock(page);
>>> @@ -3116,11 +3139,13 @@ int do_mmu_update(
>>>              {
>>>                  perfc_incr(writable_mmu_updates);
>>>                  okay = paging_write_guest_entry(
>>> -                    current, va, req.val, _mfn(mfn));
>>> +                    v, va, req.val, _mfn(mfn));
>>>                  put_page_type(page);
>>>              }
>>> 
>>>              unmap_domain_page_with_cache(va, &mapcache);
>>> +            if (pt_owner != d)
>>> +                spin_unlock(&FOREIGNDOM->shutdown_lock);
>>>              put_page(page);
>>>              break;
>>> 
>>> diff -r f6457425560b xen/include/public/xen.h
>>> --- a/xen/include/public/xen.h Wed May 27 03:22:43 2009 +0800
>>> +++ b/xen/include/public/xen.h Wed May 27 03:26:46 2009 +0800
>>> @@ -170,6 +170,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
>>>   * table entry is valid/present, the mapped frame must belong to the FD, if
>>>   * an FD has been specified. If attempting to map an I/O page then the
>>>   * caller assumes the privilege of the FD.
>>> + * The page table entry normally belongs to the calling domain. Otherwise it
>>> + * should belong to the FD and the FD should be suspended already
>>>   * FD == DOMID_IO: Permit /only/ I/O mappings, at the priv level of the caller.
>>>   * FD == DOMID_XEN: Map restricted areas of Xen's heap space.
>>>   * ptr[:2]  -- Machine address of the page-table entry to modify.
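
To make the intended calling convention concrete, here is a rough sketch,
assuming a Linux pv dom0 kernel and its standard hypercall wrappers, of how a
privileged caller might drive the extended hypercall against a page-table page
owned by a suspended foreign domain. The function name and the way the machine
address and new PTE value are obtained are hypothetical:

    #include <xen/interface/xen.h>      /* struct mmu_update, domid_t,
                                           MMU_PT_UPDATE_PRESERVE_AD */
    #include <asm/xen/hypercall.h>      /* HYPERVISOR_mmu_update() */

    /* Issue one PRESERVE_AD update against a PTE living in a page-table
     * page owned by the (suspended) domain 'fd'. */
    static int update_foreign_pte(domid_t fd, u64 pte_machine_addr,
                                  u64 new_pte_val)
    {
        struct mmu_update req;

        /* The low 2 bits of ptr select the sub-command; PRESERVE_AD keeps
         * the accessed/dirty bits of the existing entry intact. */
        req.ptr = pte_machine_addr | MMU_PT_UPDATE_PRESERVE_AD;
        req.val = new_pte_val;

        /* With this patch, Xen also tries FOREIGNDOM's p2m when resolving
         * the page-table page, provided the caller is privileged and 'fd'
         * is shut down with SHUTDOWN_suspend. */
        return HYPERVISOR_mmu_update(&req, 1, NULL, fd);
    }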
> 
> 
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

