
Re: [Xen-devel] questions about the number of pending requests that the host system can detect


  • To: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
  • From: Yuehai Xu <yuehaixu@xxxxxxxxx>
  • Date: Sun, 15 Aug 2010 22:41:44 -0400
  • Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, yhxu@xxxxxxxxx, yuehai.xu@xxxxxxxxx
  • Delivery-date: Sun, 15 Aug 2010 19:42:41 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Sun, Aug 15, 2010 at 4:12 PM, Daniel Stodden
<daniel.stodden@xxxxxxxxxx> wrote:
> On Thu, 2010-08-12 at 14:36 -0400, Yuehai Xu wrote:
>> On Thu, Aug 12, 2010 at 2:21 PM, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
>> >  On 08/12/2010 11:18 AM, Yuehai Xu wrote:
>> >>
>> >> On Thu, Aug 12, 2010 at 2:16 PM, Yuehai Xu<yuehaixu@xxxxxxxxx>  wrote:
>> >>>
>> >>> On Thu, Aug 12, 2010 at 2:04 PM, Jeremy Fitzhardinge<jeremy@xxxxxxxx>
>> >>>  wrote:
>> >>>>
>> >>>>  On 08/11/2010 08:42 PM, Yuehai Xu wrote:
>> >>>>>
>> >>>>> However, the result turns out that my assumption is wrong. The number
>> >>>>> of pending requests, according to the trace of blktrace, is changing
>> >>>>> like this way: 9 8 7 6 5 4 3 2 1 1 1 2 3 4 5 4 3 2 1 1 1 2 3 4 5 6 7 8
>> >>>>> 8 8..., just like a curve.
>> >>>>>
>> >>>>> I am puzzled about this weird result. Can anybody explain what has
>> >>>>> happened between domU and dom0 to cause it? Does this result make
>> >>>>> sense, or did I do something wrong to get it?
>> >>>>
>> >>>> If you're using a journalled filesystem in the guest, it will need to
>> >>>> drain the IO queue periodically to control the write ordering.  You
>> >>>> should
>> >>>> also observe barrier writes in the blkfront stream.
>> >>>>
>> >>>>    J
>> >>>>
>> >>> The file system I use in the guest system is ext3, which is a
>> >>> journaled file system. However, I don't quite understand what you said
>> >>> ".. control the write ordering" because the 10 processes running in
>> >>> the guest system all just send requests, there is no write request.
>> >>> What do you mean by "barrier writes" here?
>> >>>
>> >>> Thanks,
>> >>> Yuehai
>> >>>
>> >> I am sorry for the missing word; the requests sent by the 10 processes
>> >> in the guest system are all read requests.
>> >
>> > Even a pure read-only workload may generate writes for metadata unless
>> > you've turned it off.  Is it a read-only mount?  Do you have the noatime
>> > mount option?  Is the device itself read-only?
>> >
>>
>> The definition of my disk is: ['tap2:aio:/PATH/dom.img, hda1, w'], so
>> I think it should not be a read-only mount, and I don't set any specific
>> mount options. The device itself should be read-write.
>>
>>
>> > Still, it seems odd that it won't/can't keep the queue full of read
>> > requests.  Unless it's getting local cache hits?
>> >
>> >    J
>> >
>>
>> I don't think the local cache would be hit, because every time I did
>> the test I dropped the caches in both the guest and host OS. And since
>> the access pattern is stride read, it is impossible to hit the cache.
>>
>> I am not sure whether there are write requests; even if there are, I
>> think the number of write requests should be very small. Would they
>> affect the I/O queue of the guest or the host? I don't think so. Common
>> sense says that the I/O queue in the host system should be almost
>> full because tapdisk2 is async.
>
> Most of what is coming to my mind has already been mentioned above.
> Maybe try a read-only mount to avoid metadata updates.

I compiled Linux 2.6.31.13 as the guest kernel instead of the original
2.6.18, and the problem disappeared: even when I run 10 processes doing
stride reads in the guest system, the number of pending requests seen at
the host level stays at around 8~9. This makes sense.

>
> What do you mean by stride read? Just reads with some fixed stride? What
> stride size? Did you make sure to turn off OS readahead (iirc 128k)?
> What's the underlying storage type? If it's a file, was the data fully
> preallocated?

Stride read here is just what you have understood; sorry for not
explaining it clearly. The stride size is 8K, so readahead should not be
triggered, and the blktrace output also confirms this.
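
For reference, each reader process does roughly the following. This is
only a sketch of the workload, not the exact test program; the file path,
read size, and iteration count are placeholders.

import os

STRIDE = 8 * 1024        # 8 KB between the start offsets of consecutive reads
READ_SIZE = 4 * 1024     # size of each read (placeholder)

def stride_read(path, count=10000):
    # Plain buffered reads; the page cache is dropped before each run with
    # "echo 3 > /proc/sys/vm/drop_caches" in both the guest and the host.
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        for _ in range(count):
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, READ_SIZE)
            offset += STRIDE
    finally:
        os.close(fd)

if __name__ == "__main__":
    stride_read("/mnt/testfile")   # hypothetical test file inside the guest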

>
> If the request offsets qualify for a merge, then blktap will do so quite
> aggressively, so you will see a lot of the I/O complete discretely not
> incrementally request-by-request.
>
> How did you sample the pending number of requests?

The pending-request count can be sampled as follows: run blktrace in the
host OS; the 6th column of its output gives the status of each request,
and from those events we can tell when a request is inserted at the block
device layer and when it is dispatched to the hard disk. The number of
pending requests at any point is then the difference between the two (a
rough sketch follows below).
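
This is roughly how the counting can be done, assuming the default
blkparse output format (the 6th column is the event type, 'I' for insert
into the scheduler queue and 'D' for dispatch to the driver):

import sys

def pending_counts(lines):
    # running count of requests that have been inserted ('I') into the
    # I/O scheduler queue but not yet dispatched ('D') to the device
    pending = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 6:
            continue
        action = fields[5]        # 6th column of blkparse output
        if action == 'I':
            pending += 1
        elif action == 'D':
            pending -= 1
        yield pending

if __name__ == "__main__":
    # e.g.  blkparse -i sda | python pending.py
    for n in pending_counts(sys.stdin):
        print(n)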

As far as I know, the non-work-conserving I/O schedulers, such as CFQ and
AS, rely on per-process information. However, there is only one
process (tapdisk) in the host system handling all the requests from a
guest system, so it is impossible for the I/O scheduler in the host OS to
recognize a particular process in the guest OS, and the anticipation
mechanism of CFQ (AS has been removed from the latest kernel branch since
CFQ can also do anticipation) is effectively turned off. As a result, for
some workloads, especially when several processes run concurrently in a
guest OS, the throughput might be lowered, because CFQ in the host
system will never do anticipation.
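
For what it's worth, the idling behaviour in question can be inspected in
dom0 through the standard sysfs tunables; a minimal sketch, assuming a
stock CFQ setup (the device name is a placeholder):

def read_sysfs(path):
    with open(path) as f:
        return f.read().strip()

def show_cfq_idling(dev="sda"):
    base = "/sys/block/%s/queue" % dev
    print("scheduler : " + read_sysfs(base + "/scheduler"))
    try:
        # slice_idle is how long CFQ waits for more I/O from the same
        # process before moving on; setting it to 0 disables the idling.
        print("slice_idle: " + read_sysfs(base + "/iosched/slice_idle"))
    except IOError:
        print("slice_idle: not available (active scheduler is not CFQ)")

if __name__ == "__main__":
    show_cfq_idling("sda")   # placeholder: the dom0 device backing the image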

Meanwhile, in the guest system, the I/O scheduler can never know the
position of the real disk head, because several guest systems share a
single disk head. So what is the most suitable I/O scheduler for the
guest?
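
Just for experimenting with this, the guest's scheduler for its virtual
disk can be switched at runtime via sysfs, e.g. to noop so the guest does
minimal reordering and leaves the decisions to dom0. A small sketch; the
device name depends on the guest kernel and is only a placeholder:

def set_scheduler(dev, name):
    path = "/sys/block/%s/queue/scheduler" % dev
    with open(path, "w") as f:
        f.write(name)
    with open(path) as f:
        # the currently active scheduler is shown in [brackets]
        print(f.read().strip())

if __name__ == "__main__":
    set_scheduler("xvda", "noop")   # device name is guest-kernel dependent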

What do you think of the problems that the I/O schedulers in both the
guest and the host system face?

Thanks,
Yuehai

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

