
[Xen-devel] Xen page sharing



Hi Sahil:
 
     I think the reason you cannot get pages shared lies in the gref you are getting.
     A gref refers to a page granted by the domU, and in my understanding it should not be
     0; that is, gref 0 cannot be shared, which is why I skip gref 0 when nominating.
 
     The gref is nominated to Xen and later used to find the corresponding MFN, so it should not always be the same value.
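 
     As a rough illustration (a simplified standalone sketch, not the literal patch, and the helper name below is only for illustration; the real tapdisk request structures are more involved), the guard amounts to:
 
         #include <stdint.h>
 
         /* Sketch only: a grant reference of 0 is treated as "no page
          * granted", so such a segment is never handed to memshr. */
         static int gref_is_shareable(uint32_t gref)
         {
             return gref != 0;
         }
 
         /* e.g. in the completion path:
          *   if (gref_is_shareable(req->seg[i].gref))
          *       hand the segment over to the memshr completion call */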
 
     Also, I am using 2.6.31. As far as I know, its blkback is quite different from the one in 2.6.32,
     but I can't give you the details, since I know much less about the latest blkback.
 
    thanks.
     

From: sahil@xxxxxxxxxxxxxx
Date: Tue, 5 Apr 2011 15:16:57 -0400
Subject: Re: Xen page sharing
To: tinnycloud@xxxxxxxxxxx

Hi Mao,

Thanks a lot for agreeing to help. I greatly appreciate it.
Here is my current page sharing environment configuration:

1. Xen unstable version 4.1.0-rc6; Mercurial repository revision #22964

2. Dom0 kernel: linux-2.6-pvops_x86_64 / 2.6.32.28

3. Relevant portions of the DomU config file (it uses the same Dom0 kernel as above)
kernel = "hvmloader"
builder='hvm'
memory = 512
memory_sharing = 1
vif = [ 'type=ioemu,bridge=xenbr0,mac=00:16:3E:B1:DF:13' ]
disk = [ 'tap2:tapdisk:vhd:/home/sahil/xen-guest-hvm/xenguesthvmaiovhd,hda,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]
device_model = 'qemu-dm'


4. I am assuming both DomUs need to use the same disk image for memory sharing to work, so the same config file is used for the other DomU.

5. I have tried various combinations in the 'disk' parameter above: a raw aio image, a pure vhd image, an aio-based vhd image (using vhd-util); mode: 'w' / 'r' / 'w!'.
From what I understand, a multi-level disk image such as the aio-based vhd image is required, since the flags TD_OPEN_RDONLY and TD_OPEN_SHAREABLE are only set for level >= 1 in 'tapdisk_vbd_open_level()' [tapdisk-vbd.c]; I have put a short sketch of how I read that logic after point 7.

6. In your patch 'tools.patch', I could not understand why you check for this condition to hold true - '((&vreq->req)->seg[treq.sidx].gref)' - in the functions '__tapdisk_vbd_complete_td_request' and '__tapdisk_vbd_reissue_td_request' [tapdisk-vbd.c].
I mean, is gref=0 bad?
If I keep this condition, control is never transferred to 'memshr_vbd_issue_ro_request' or 'memshr_vbd_complete_ro_request' [interface.c], as gref is always 0 every time I run my DomUs.
And if I remove this condition, the check '(page->count_info != (PGC_allocated | (2 + expected_refcnt)))' fails in 'page_make_sharable' [mm.c]; see the paraphrase of that check after point 7.

7. Also, we had to pass the domain_id to the function '__tap_ctl_spawn()' [tap-ctl-spawn.c] that execs tapdisk, so that vbd_info.domid is set when tapdisk2 calls 'memshr_set_domid()' [interface.c]; a sketch of the tapdisk2 side follows below.
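
On point 5, to make sure I am reading the level logic right, here is my paraphrase of it (a standalone sketch only, not the actual tapdisk-vbd.c code; I am assuming the flag macros come from the blktap2 headers, and the function name below is mine):

    #include "tapdisk.h"   /* assumed to provide TD_OPEN_RDONLY / TD_OPEN_SHAREABLE */

    /* My paraphrase of 'tapdisk_vbd_open_level()': only parent images in
     * the VHD chain (level >= 1) get opened read-only and shareable,
     * which is what the memshr path appears to rely on. */
    static int open_flags_for_level(int level, int base_flags)
    {
        int flags = base_flags;

        if (level >= 1)
            flags |= TD_OPEN_RDONLY | TD_OPEN_SHAREABLE;

        return flags;
    }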
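
On point 6, for my own understanding, here is how I read the failing check in 'page_make_sharable' (only a paraphrase with names of my own, not the Xen source; PGC_allocated and the real count_info layout live in Xen's headers):

    /* Paraphrase of the count_info comparison: the page must carry the
     * "allocated" flag and hold exactly 2 + expected_refcnt references --
     * as far as I can tell, one tied to the allocation itself, one taken
     * by page_make_sharable() while it runs, plus whatever extra the
     * caller says to expect.  Any additional reference, e.g. a lingering
     * mapping, makes the nomination fail. */
    static int refcount_matches(unsigned long count_info,
                                unsigned long pgc_allocated,
                                unsigned long expected_refcnt)
    {
        return count_info == (pgc_allocated | (2UL + expected_refcnt));
    }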
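
And on point 7, the tapdisk2 side of our change boils down to something like the following (a sketch under my assumptions: the header name, the helper name, and the way the domid string reaches tapdisk2 are illustrative; only memshr_set_domid() itself is from interface.c):

    #include <stdlib.h>
    #include "memshr.h"   /* assumed header declaring memshr_set_domid() */

    /* Called once tapdisk2 has the domain id (in our case passed as an
     * extra argument when tap-ctl spawns it); this is what ends up
     * filling in vbd_info.domid on the memshr side. */
    static void init_memshr_domid(const char *domid_str)
    {
        memshr_set_domid(atoi(domid_str));
    }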

I hope this provides enough information to help you pinpoint the problem. I shall wait for your reply.

Thanks again.

Regards,
Sahil



2011/4/5 tinnycloud <tinnycloud@xxxxxxxxxxx>:
> Hi Sahil:
>
>        Sure, no problem.
>        Did you enable memory_sharing = 1 in your HVM file?
>        Please send me your configurations, thx.
>
> ----------
> From: sahilsuneja@xxxxxxxxx [mailto:sahilsuneja@xxxxxxxxx] on behalf of Sahil
> Suneja
> Sent: April 5, 2011 5:17
> To: tinnycloud@xxxxxxxxxxx
> Subject: Xen page sharing
>
> Hi Mao Xiaoyun!
>
> I hope you are doing good.
> I am a graduate student at University of Toronto and am trying to get
> page sharing up and running on Xen for the past whole month now.
> I have been following the email communications between you, Jui-Hao
> Chiang and Tim Deegan on the Xen-devel mailing list. I have even tried
> your patch 'tools.patch' as posted in the email thread but can't seem
> to get any memory sharing to work. I am presently working on Xen 4.1.0
> rc6.
> I sincerely hope you would agree to help me set up the mem-sharing
> environment. Please let me know if you would be willing to do so.
> Thereafter, I shall provide the details of my present Xen/dom0/domU
> configurations.
>
> Waiting for your reply,
>
> Thanks and regards,
> Sahil Suneja
> Systems and Networks
> University of Toronto.
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

