Hi,
I want to use the libxc function xc_sched_credit_domain_set(int
xc_handle, uint32_t domid, struct xen_domctl_sched_credit *sdom) in a
user-level application, but I am not sure how to obtain the value of
xc_handle. Is anybody familiar with this? Thanks.
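
My best guess from skimming tools/libxc is that xc_handle is simply the
handle returned by xc_interface_open(), something like the sketch below
(untested; I believe the signature changed in Xen 4.1, where
xc_interface_open() takes logger arguments and returns an xc_interface
pointer instead of an int, so this is against the older 3.x-style API).
The domid and weight below are just example values. Does this look right?

    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        struct xen_domctl_sched_credit sdom;
        uint32_t domid = 1;              /* example target domain -- adjust */
        int xc_handle;

        /* Open a handle to the hypervisor control interface. */
        xc_handle = xc_interface_open();
        if (xc_handle < 0) {
            perror("xc_interface_open");
            return 1;
        }

        sdom.weight = 512;               /* example credit-scheduler weight */
        sdom.cap = 0;                    /* 0 means no cap */

        if (xc_sched_credit_domain_set(xc_handle, domid, &sdom) != 0)
            perror("xc_sched_credit_domain_set");

        xc_interface_close(xc_handle);
        return 0;
    }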
Regards,
Cong
2011/10/4 David Xu <davidxu06@xxxxxxxxx>:
> Hi,
>
> I ran the experiment with httperf again and used tcpdump to capture the
> packets on both the server side and the client side. I found that when
> there were retransmissions (e.g. the client sent several SYNs to the
> server because it did not receive an ACK in time), the vif only received
> the last packet (SYN) and missed the earlier ones. So I think something
> goes wrong between eth0 (veth0) and the vif in dom0. But if I use a
> low-latency scheduler I wrote myself (it only modifies the scheduling of
> the client and does not touch other parts), there are no retransmissions,
> or very few. I am not familiar with Xen's netback. Can you give me some
> suggestions? Or which part of the source code should I check to find the
> reason for the packet loss between eth0 (veth0) and the vif in dom0?
> Thanks.
>
> Regards,
> Cong
>
> 2011/9/30 Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>:
>> On Fri, 2011-09-30 at 15:44 +0100, David Xu wrote:
>>> Hi,
>>>
>>> 2011/9/29 Ian Campbell <Ian.Campbell@xxxxxxxxxx>:
>>> > On Fri, 2011-09-30 at 05:18 +0100, David Xu wrote:
>>> >> Hi,
>>> >>
>>> >> Does anybody know whether the ring buffer between front end and back
>>> >> end will suffer from overflow? I just wonder if the ring buffer will
>>> >> be full and drop some packets when the Net I/O load is very heavy.
>>> >
>>> > In the case of networking, whichever end is putting stuff on the ring
>>> > checks that there is enough room, stops the queue when it cannot
>>> > transmit any more, and restarts it when room becomes available.
>>>
>>> You mean that even when there is not enough room in the ring buffer,
>>> Xen will *not* drop the packets and will just delay the transmission?
>>
>> It's not Xen but rather the kernel back and front ends that are
>> involved here. You can examine the hard_start_xmit functions in both
>> netback and netfront to determine for yourself whether or not packets
>> can be dropped, and when.
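>>
>> Roughly, the pattern on the netfront side is the usual stop-queue
>> dance (this is a simplified sketch from memory, not the exact code,
>> so check your tree):
>>
>>   static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   {
>>           struct netfront_info *np = netdev_priv(dev);
>>
>>           /* Not enough free slots on the shared ring: stop the queue
>>            * so the core stops feeding us packets, and ask for the skb
>>            * to be requeued rather than dropped. */
>>           if (unlikely(!netfront_tx_slot_available(np))) {
>>                   netif_stop_queue(dev);
>>                   return NETDEV_TX_BUSY;
>>           }
>>
>>           /* ... place the skb on the ring and notify netback; the
>>            * queue is woken again from the tx completion path once
>>            * slots are released ... */
>>           return NETDEV_TX_OK;
>>   }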
>>
>>> I used httperf to
>>> measure the performance of a web server running in a VM. (The workload
>>> in this VM is mixed, so it cannot benefit from the boost mechanism; the
>>> network I/O suffers from relatively high latency, which depends on the
>>> number of VMs in the system.) I found that as the request rate on the
>>> client side increases, the connection rate drops and the connection
>>> time increases dramatically. Retransmissions appear once the request
>>> rate exceeds a certain threshold. So I suspected that the http/tcp
>>> connections suffer from packet drops when the ring buffer fills up
>>> because of the high request rate.
>>>
>>> >
>>> >> BTW, if I want to change the size of the I/O ring buffer, how should
>>> >> I do it? I tried to reset NET_TX_RING_SIZE and NET_RX_RING_SIZE in
>>> >> both the front end and the back end, but it doesn't seem to work. Thanks.
>>> >
>>> > Currently the rings are limited to 1 page so if you want to increase the
>>> > size you would need to add multipage ring support to the network
>>> > protocol. There have been patches to do this for the blk protocol but I
>>> > do not recall any for the net protocol.
>>>
>>> Yes, increasing the size is relatively hard, so I just want to reduce
>>> the size of the ring buffer to verify my suspicion described above. I
>>> set NET_TX_RING_SIZE and NET_RX_RING_SIZE directly to 128, but it
>>> doesn't seem to work.
>>
>> You need to make sure both ends of the connection agree on the ring
>> size.
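>>
>> For reference, both ends derive the ring size from the shared page
>> rather than from a free-standing constant -- along these lines
>> (paraphrased from memory from xen/include/public/io/ring.h and the
>> net drivers, so check your tree):
>>
>>   /* The number of entries is computed from the size of the shared
>>    * page, rounded down to a power of two.  Netfront and netback
>>    * agree only because both evaluate the same macro against the
>>    * same PAGE_SIZE; overriding the constant on one side leaves the
>>    * two ends disagreeing about the ring layout. */
>>   #define NET_TX_RING_SIZE __RING_SIZE((struct netif_tx_sring *)0, PAGE_SIZE)
>>   #define NET_RX_RING_SIZE __RING_SIZE((struct netif_rx_sring *)0, PAGE_SIZE)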
>>
>> I'm afraid this is not a very common thing to want to do, so if you
>> want to persist with this approach you'll have to do some debugging.
>>
>> Ian.
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel