Re: [Xen-devel] [PATCH RESEND 4/4] qemu-xen-dir/hw/block: Cache local buffers used in grant copy
On Thu, 2016-06-02 at 16:19 +0200, Roger Pau Monné wrote:
> On Tue, May 31, 2016 at 06:44:58AM +0200, Paulina Szubarczyk wrote:
> > If there are still pending requests the buffers are not free()'d but
> > cached in an array of size max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> >
> > ---
> > hw/block/xen_disk.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++-------------
> > 1 file changed, 47 insertions(+), 13 deletions(-)
> >
> > diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> > index 43cd9c9..cf80897 100644
> > --- a/hw/block/xen_disk.c
> > +++ b/hw/block/xen_disk.c
> > @@ -125,6 +125,10 @@ struct XenBlkDev {
> > /* */
> > gboolean feature_discard;
> >
> > + /* request buffer cache */
> > + void **buf_cache;
> > + int buf_cache_free;
>
> Have you checked if there's some already available FIFO queue structure that
> you can use?
>
> Glib Trash Stacks looks like a suitable candidate:
>
> https://developer.gnome.org/glib/stable/glib-Trash-Stacks.html
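> For reference, a minimal sketch of how the GTrashStack API would apply
> here (it stores the link pointer inside each chunk, so the cached pages
> must be writable and at least pointer-sized, which these mmap'd pages
> are); the function names are illustrative, not taken from the patch:
>
>     #include <glib.h>
>
>     static GTrashStack *buf_stack;      /* LIFO of cached 4k pages */
>
>     /* return a page to the cache instead of munmap()ing it */
>     static void stack_put(void *buf)
>     {
>         g_trash_stack_push(&buf_stack, buf);
>     }
>
>     /* fetch a cached page; returns NULL when the stack is empty,
>      * in which case the caller falls back to mmap() */
>     static void *stack_get(void)
>     {
>         return g_trash_stack_pop(&buf_stack);
>     }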
Persistent regions use a singly-linked list (GSList), and I was thinking
that using that structure here would be better, since the link you sent
says that Trash Stacks are deprecated since 2.48. A minimal sketch of the
GSList variant follows; it mirrors how persistent regions are tracked in
xen_disk.c, and the function names are hypothetical:
But I have some problems with debugging qemu-system-i386. gdb is not able
to load symbols; it reports "qemu-system-i386...(no debugging symbols
found)...done." This was not an issue earlier, and I have tried running
configure with --enable-debug before the build, as well as setting
'strip_opt="yes"'.
>
> > +
> > /* qemu block driver */
> > DriveInfo *dinfo;
> > BlockBackend *blk;
> > @@ -284,12 +288,16 @@ err:
> > return -1;
> > }
> >
> > -
> > -static void* get_buffer(void) {
> > +static void* get_buffer(struct XenBlkDev *blkdev) {
> > void *buf;
> >
> > - buf = mmap(NULL, 1 << XC_PAGE_SHIFT, PROT_READ | PROT_WRITE,
> > + if(blkdev->buf_cache_free <= 0) {
> > + buf = mmap(NULL, 1 << XC_PAGE_SHIFT, PROT_READ | PROT_WRITE,
> > MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> > + } else {
> > + blkdev->buf_cache_free--;
> > + buf = blkdev->buf_cache[blkdev->buf_cache_free];
> > + }
> >
> > if (unlikely(buf == MAP_FAILED))
> > return NULL;
> > @@ -301,21 +309,40 @@ static int free_buffer(void* buf) {
> > return munmap(buf, 1 << XC_PAGE_SHIFT);
> > }
> >
> > -static int free_buffers(void** page, int count)
> > +static int free_buffers(void** page, int count, struct XenBlkDev *blkdev)
> > {
> > - int i, r = 0;
> > + int i, put_buf_cache = 0, r = 0;
> > +
> > + if (blkdev->more_work && blkdev->requests_inflight < max_requests) {
>
> Shouldn't this be <=?
>
> Or else you will only cache at most 341 pages instead of the maximum
> number of pages that can be in-flight (352).
At the moment a request is completing and freeing its pages, it is still
counted among the in-flight requests, so I think no more than
max_requests - 1 other requests can be scheduled then.
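For reference, a quick check of the numbers above (a sketch; the
constants are assumed from hw/block/xen_disk.c, where max_requests
defaults to 32 and BLKIF_MAX_SEGMENTS_PER_REQUEST is 11):

    #include <stdio.h>

    int main(void)
    {
        int max_requests = 32;          /* default in xen_disk.c */
        int segs_per_req = 11;          /* BLKIF_MAX_SEGMENTS_PER_REQUEST */

        /* with "<" the completing request still counts as in-flight,
         * so at most max_requests - 1 others can have pages cached */
        printf("%d\n", (max_requests - 1) * segs_per_req);  /* 341 */

        /* with "<=" every in-flight slot would qualify */
        printf("%d\n", max_requests * segs_per_req);        /* 352 */
        return 0;
    }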
Paulina
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel