
[Xen-devel] Xentrace on Xilinx ARM



Hello,

My name is Ben Sanda. I'm a kernel/firmware developer with DornerWorks
Engineering. Our team is working on support for Xen on the new Xilinx
UltraScale+ MPSoC platforms (ARM Cortex-A53), and I have specifically been
tasked with characterizing performance, particularly that of the schedulers.
I wanted to use the xentrace tool to give us timing and performance
benchmarks, but from searching the Xen mailing lists it appears xentrace has
not yet been ported to ARM: when run, it crashes complaining about allocating
memory buffers. While we could write a quick custom program to collect the
data we need, I would rather help get xentrace working on ARM so it is
generally available to everyone and usable for any benchmarking moving
forward.

In searching for existing topics on this, my main reference has been the
“[Xen-devel] xentrace, arm, hvm” email chain started by Pavlo Suikov here:
http://xen.markmail.org/thread/zochggqxcifs5cdi

I have been following that email chain, which made some suggestions as to how
xentrace could be ported to ARM and where things are going wrong, but it never
came to any concrete conclusions. What I gathered from the thread is that the
memory allocation for the xentrace buffers fails because of MFN vs. PFN
mapping differences between ARM and x86 when attempting to map pages from the
Xen heap.
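For context, the failing path on the tools side, as I read it, is roughly the
following (condensed from tools/xentrace/xentrace.c, with error handling
trimmed): xentrace asks Xen to allocate the trace buffers out of the Xen heap
and gets back a raw MFN, which it then maps through the DOMID_XEN
pseudo-domain. On ARM that request ends up in the XENMAPSPACE_gmfn_foreign
path, which expects a guest PFN backed by a real domain's p2m, and that is
where things fall over:

#include <sys/mman.h>
#include <xenctrl.h>

/* Condensed sketch of what xentrace does: enable tracing, then map the
 * t_info metadata page(s) that Xen allocated from its own heap. */
static void *map_trace_metadata(xc_interface *xch, unsigned long pages)
{
    unsigned long tbufs_mfn, tinfo_size;

    /* Ask Xen for 'pages' trace pages per cpu; on success Xen hands back
     * the raw MFN of the t_info metadata area and its size in bytes. */
    if ( xc_tbuf_enable(xch, pages, &tbufs_mfn, &tinfo_size) != 0 )
        return NULL;

    /* Map that MFN through the DOMID_XEN pseudo-domain. This is the step
     * that assumes x86-style DOMID_XEN/MFN semantics in the hypervisor. */
    return xc_map_foreign_range(xch, DOMID_XEN, tinfo_size,
                                PROT_READ, tbufs_mfn);
}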

 

Following the suggestions posed in the thread, I made the changes below
(performed against the Xilinx/master Git version of the Xen source here:
https://github.com/Xilinx/xen).

First, in mm.c, I modified xenmem_add_to_physmap_one() and its call to
rcu_lock_domain_by_any_id() to special-case a DOM_XEN domain request. For this
I created a new function, get_pg_owner(), which performs the same domain
checks as the get_pg_owner() in the x86 source. This allows the correct
domain reference to be returned for DOMID_XEN:

--- xen-src/xen/arch/arm/mm.c   2016-03-04 10:44:31.364572302 -0800
+++ xen-src_mod/xen/arch/arm/mm.c   2016-02-24 09:41:43.000000000 -0800
@@ -41,6 +41,7 @@
 #include <xen/pfn.h>
 
 struct domain *dom_xen, *dom_io, *dom_cow;
+static struct domain *get_pg_owner(domid_t domid);
 
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
@@ -1099,7 +1100,8 @@
     {
         struct domain *od;
         p2m_type_t p2mt;
 
-        od = rcu_lock_domain_by_any_id(foreign_domid);
+        od = get_pg_owner(foreign_domid);
+
         if ( od == NULL )
             return -ESRCH;
@@ -1132,7 +1134,15 @@
             return -EINVAL;
         }
 
-        mfn = page_to_mfn(page);
+        if ( od->domain_id != DOMID_XEN )
+        {
+            mfn = page_to_mfn(page);
+        }
+        else
+        {
+            mfn = idx;
+        }
+
         t = p2m_map_foreign;
 
         rcu_unlock_domain(od);
@@ -1312,6 +1321,42 @@
     unmap_domain_page(p);
 }
+static struct domain *get_pg_owner(domid_t domid)
+{
+    struct domain *pg_owner = NULL, *curr = current->domain;
+
+    if ( likely(domid == DOMID_SELF) )
+    {
+        pg_owner = rcu_lock_current_domain();
+        goto out;
+    }
+
+    if ( unlikely(domid == curr->domain_id) )
+    {
+        goto out;
+    }
+
+    switch ( domid )
+    {
+    case DOMID_IO:
+        pg_owner = rcu_lock_domain(dom_io);
+        break;
+    case DOMID_XEN:
+        /*printk("DOM_XEN Selected\n");*/
+        pg_owner = rcu_lock_domain(dom_xen);
+        break;
+    default:
+        if ( (pg_owner = rcu_lock_domain_by_id(domid)) == NULL )
+        {
+            break;
+        }
+        break;
+    }
+
+ out:
+    return pg_owner;
+}

 

Second, I modified p2m_lookup() in p2m.c to account for the fact that xentrace
provides an MFN, not a PFN, to the domain lookup calls. It now checks for
DOM_XEN and, if found, returns the MFN directly instead of trying to translate
it as a PFN. It also sets the page type to p2m_ram_rw. (I guessed that is the
correct type for read/write access to the Xen heap, but I am not sure.)

@@ -228,10 +228,19 @@
 {
     paddr_t ret;
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
-    spin_unlock(&p2m->lock);
+    if ( d->domain_id != DOMID_XEN )
+    {
+        spin_lock(&p2m->lock);
+        ret = __p2m_lookup(d, paddr, t);
+        spin_unlock(&p2m->lock);
+    }
+    else
+    {
+        *t = p2m_ram_rw;
+        ret = paddr;
+    }
+
     return ret;
 }
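With both changes in place, one end-to-end sanity check from Dom0 would be
something like the following (a rough sketch, assuming libxc and the installed
Xen public headers): it re-enables tracing, maps the t_info metadata by its
raw MFN through DOMID_XEN (exactly the operation the patches are meant to
fix), and reads back the per-cpu buffer size field:

#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>
#include <xen/trace.h>   /* struct t_info */

int main(void)
{
    unsigned long tbufs_mfn, tinfo_size;
    struct t_info *ti;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);

    if ( !xch )
        return 1;

    /* 256 trace pages per cpu, matching the xentrace -S 256 run below. */
    if ( xc_tbuf_enable(xch, 256, &tbufs_mfn, &tinfo_size) != 0 )
        return 1;

    /* The mapping the two patches are meant to make work: a raw Xen-heap
     * MFN mapped via the DOMID_XEN pseudo-domain. */
    ti = xc_map_foreign_range(xch, DOMID_XEN, tinfo_size,
                              PROT_READ, tbufs_mfn);
    if ( ti == NULL )
        return 1;

    /* If the mapping really reaches the Xen-heap page, this reads back
     * 256; all-zeroes would suggest we mapped the wrong thing. */
    printf("t_info->tbuf_size = %u pages per cpu\n", ti->tbuf_size);

    munmap(ti, tinfo_size);
    xc_interface_close(xch);
    return 0;
}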

 

The result is that xentrace now reports no errors when invoked; however, I
also don't observe any trace records actually being collected. I can invoke
xentrace (as xentrace -D -S 256 -T 10 -e all out.bin), and I see xentrace
start and out.bin created, but the file is always empty.

 

[root@xilinx-dom0 ~]# xentrace -D -S 256 -T 10 -e all out.bin
change evtmask to 0xffff000
(XEN) xentrace: requesting 2 t_info pages for 256 trace pages on 4 cpus
(XEN) xentrace: p0 mfn 7ffb6 offset 65
(XEN) xentrace: p1 mfn 7fd7c offset 321
(XEN) xentrace: p2 mfn 7fc7c offset 577
(XEN) xentrace: p3 mfn 7fb7c offset 833
(XEN) xentrace: initialised
[root@xilinx-dom0 ~]# ls -l
total 5257236
-rwxrwxr-x    1 root   root      9417104 Feb 10  2016 Dom1-Kernel*
-rw-rw-r--    1 root   root   1073741824 Mar  4  2016 Dom1.img
-rw-r--r--    1 root   root   3221225472 Mar  4  2016 linaro-openembedded-fs.img
-rw-r--r--    1 root   root            0 Jan  1 00:00 out.bin
-rw-r--r--    1 root   root   1073741824 Mar  4  2016 ubuntu-core-fs.img
-rwxrwxr-x    1 root   root         4104 Mar  4  2016 xzd_bare.img*
[root@xilinx-dom0 ~]#
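To narrow down where the records are getting lost, I think a check along these
lines would help (again only a sketch; the MFN is the one Xen printed for cpu0
above, and it has to be re-read after every re-enable): map cpu0's trace
buffer directly by MFN and watch the producer index in its t_buf header. If
prod never advances, Xen is not writing records (or the mapping is not
reaching the real buffer); if prod does advance, record generation works and
the problem is on the xentrace consumer side:

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <xenctrl.h>
#include <xen/trace.h>   /* struct t_buf: uint32_t cons, prod */

int main(void)
{
    /* MFN of cpu0's trace buffer, from the "xentrace: p0 mfn 7ffb6" line
     * in the console output above. */
    unsigned long p0_mfn = 0x7ffb6;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    struct t_buf *buf;
    int i;

    if ( !xch )
        return 1;

    buf = xc_map_foreign_range(xch, DOMID_XEN, 4096 /* one page */,
                               PROT_READ, p0_mfn);
    if ( buf == NULL )
        return 1;

    /* Poll the producer/consumer offsets; prod should climb as Xen emits
     * scheduler and other trace records on cpu0. */
    for ( i = 0; i < 10; i++ )
    {
        printf("cons=%u prod=%u\n", buf->cons, buf->prod);
        sleep(1);
    }

    munmap(buf, 4096);
    xc_interface_close(xch);
    return 0;
}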

 

Thank you for any assistance,

Benjamin Sanda
Embedded Software Engineer
616.389.6138
Ben.Sanda@xxxxxxxxxxxxxxx

DornerWorks, Ltd.
3445 Lake Eastbrook Blvd. SE
Grand Rapids, MI 49546

 

 
