
Re: [Xen-devel] ARM fixes for my improved privcmd patch.



On 18/12/12 16:07, Konrad Rzeszutek Wilk wrote:
On Tue, Dec 18, 2012 at 01:51:49PM +0000, Mats Petersson wrote:
On 18/12/12 11:40, Ian Campbell wrote:
On Tue, 2012-12-18 at 11:28 +0000, Mats Petersson wrote:
On 18/12/12 11:17, Ian Campbell wrote:
On Mon, 2012-12-17 at 17:38 +0000, Mats Petersson wrote:
On 17/12/12 16:57, Ian Campbell wrote:
On Fri, 2012-12-14 at 17:00 +0000, Mats Petersson wrote:
Ian, Konrad:
I took Konrad's latest version [I think] and applied my patch (which
needed some adjustments as there are other "post 3.7" changes to the same
source code - including losing the xen_flush_tlb_all() ??)

Attached are the patches:
arm-enlighten.patch, which updates the ARM code.
improve-pricmd.patch, which updates the privcmd code.

Ian, can you have a look at the ARM code? I quickly hacked it
together; I haven't compiled it, and I certainly haven't tested it.
There are a lot of build errors as you might expect (patch below, a few
warnings remain). You can find a cross compiler at
http://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.6.3/

or you can use
drall:/home/ianc/devel/cross/x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi.tar.bz2

which is an older version from the same place.

Anyway, the patch...
and
it needs further changes before my modifications actually make it more
efficient.
Right, the benefit on PVH or ARM would be in batching the
XENMEM_add_to_physmap_range calls. The batching of the
apply_to_page_range which this patch adds isn't useful because there is
no HYPERVISOR_mmu_update call to batch in this case. So basically this
patch as it stands does a lot of needless work for no gain I'm afraid.
So, basically, what is an improvement on x86 isn't anything useful on
ARM, and you'd prefer to loop around in privcmd.c calling into
xen_remap_domain_mfn_range() a lot of times?
Not at all. ARM (and PVH) still benefits from the interface change but
the implementation of the benefit is totally different.

For normal x86 PV you want to batch the HYPERVISOR_mmu_update.

For both x86 PVH and ARM this hypercall doesn't exist; instead there
is a call to HYPERVISOR_memory_op XENMEM_add_to_physmap_range, which is
something that would benefit from batching.
So, you want me to fix that up?
If you want to, sure, yes please.

But feel free to just make the existing code work with the interface,
without adding any batching. That should be a much smaller change than
what you proposed.

(aside; I do wonder how much of this x86/arm code could be made generic)
I think, once it goes to PVH everywhere, quite a bit (as I believe
the hypercalls should be the same by then, right?)

In the PVOPS kernel it's probably a bit more of a job. I'm sure it can
be done, but with a bit more work.

I think I'll do the minimal patch first, then, if I find some spare
time, work on the "batching" variant.
OK. The batching is IMHO just using the multicall variant.

To make xentrace not work until it is fixed wouldn't be a terrible
thing, would it?
On ARM I think it is fine (I doubt this is the only thing stopping
xentrace from working). I suspect people would be less impressed with
breaking xentrace on x86. For PVH it probably is a requirement for it to
keep working, I'm not sure though.
Ok, ENOSYS it is for remap_range() then.
Then we can remove that old gunk from x86 as well
(eventually).
Thanks. I was starting to wonder if I'd been teleported back to the time
when I struggled with pointers...
Maybe it needs a better comment.
The other thing I had missed was that this was a pure increment and not
taking the value at the same time, which also confused me.

Splitting the increment out from the dereference usually makes these
things clearer, I was obviously just being a bit hard of thinking
yesterday!
No worries. I will see about making a more readable comment (and for
ARM, I can remove the whole if/else and just do the one increment,
based on the above discussion), so that should make the code better.
You can use the v3.8 tree as your base - it has the required PVH and ARM
patches. There is one bug (where dom0 crashes) - and I just sent
a git pull request for that to Linus's tree:

  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
Thanks. I have attached a patch for x86 (+generic changes) and one for ARM.
Both compile and the x86 has been tested with localhost migration.

Ian: Can you review the code? I'm fairly sure it does the right thing, without too much "extra" code, and achieves what we want. It's not optimized, but if the hypercalls are batched via multicall it shouldn't be horrible. In fact I expect it to be marginally better than before, because pushing the loop down a level into do_remap_mfn() saves one level of calls, with 7 or so arguments, per mapped page.

I've kept the structure similar to x86, should we decide to merge the code into something common in the future. No doubt it will diverge, but starting with something similar gives it a little chance... ;)

Note: the purpose of this post is mainly to confirm that the ARM solution is "on track".

--
Mats

Attachment: xen-privcmd-remap-array-arm.patch
Description: Text Data

Attachment: xen-privcmd-remap-array-x86.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
