To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
From: Thomas Hellstrom <thomas@xxxxxxxxxxxx>
Date: Mon, 24 Oct 2011 19:42:25 +0200
Cc: thellstrom@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, dri-devel@xxxxxxxxxxxxxxxxxxxxx, j.glisse@xxxxxxxxxx, airlied@xxxxxxxxxx, bskeggs@xxxxxxxxxx
Delivery-date: Tue, 25 Oct 2011 10:09:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20111024172728.GD2320@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1319062772-2793-1-git-send-email-konrad.wilk@xxxxxxxxxx> <1319062772-2793-7-git-send-email-konrad.wilk@xxxxxxxxxx> <4EA28FA6.7000006@xxxxxxxxxxxx> <20111024172728.GD2320@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.10) Gecko/20100624 Mandriva/3.0.5-0.1mdv2009.1 (2009.1) Thunderbird/3.0.5
On 10/24/2011 07:27 PM, Konrad Rzeszutek Wilk wrote:
On Sat, Oct 22, 2011 at 11:40:54AM +0200, Thomas Hellstrom wrote:
Konrad,

I was hoping that we could get rid of the dma_address shuffling into
core TTM, like I mentioned in the review. From what I can tell it's now
only used in the backend and core TTM doesn't care about it.

Is there a particular reason we're still passing it around?
Yes - and I should have addressed that in the writeup but forgot, sorry about that.

So initially I thought you meant this:

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 360afb3..06ef048 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -662,8 +662,7 @@ out:

  /* Put all pages in pages list to correct pool to wait for reuse */
  static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
-                           int flags, enum ttm_caching_state cstate,
-                           dma_addr_t *dma_address)
+                           int flags, enum ttm_caching_state cstate)
  {
        unsigned long irq_flags;
        struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -707,8 +706,7 @@ static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
   * cached pages.
   */
  static int __ttm_get_pages(struct list_head *pages, int flags,
-                          enum ttm_caching_state cstate, unsigned count,
-                          dma_addr_t *dma_address)
+                          enum ttm_caching_state cstate, unsigned count)
  {
        struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
        struct page *p = NULL;
@@ -864,7 +862,7 @@ int ttm_get_pages(struct ttm_tt *ttm, struct list_head *pages,
        if (ttm->be && ttm->be->func && ttm->be->func->get_pages)
                return ttm->be->func->get_pages(ttm, pages, count, dma_address);
        return __ttm_get_pages(pages, ttm->page_flags, ttm->caching_state,
-                               count, dma_address);
+                               count);
  }
  void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
                   unsigned page_count, dma_addr_t *dma_address)
@@ -873,5 +871,5 @@ void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
                ttm->be->func->put_pages(ttm, pages, page_count, dma_address);
        else
                __ttm_put_pages(pages, page_count, ttm->page_flags,
-                               ttm->caching_state, dma_address);
+                               ttm->caching_state);
  }
which is trivial (though I have not compile tested it), but it should do it.

But I think you meant eliminating the dma_address handling completely in
ttm_page_alloc.c and ttm_tt.c.

For that, there are a couple of architectural issues I am not sure how to solve.

There has to be some form of TTM<->[Radeon|Nouveau] lookup mechanism
to say: "here is a 'struct page *', give me the bus address". Currently
this is solved by keeping an array of DMA addresses along with the list
of pages, and passing both up (and down) the stack between TTM and the
driver (they are handed off when ttm->be->func->populate is called). It
does not break any API layering, and the internal TTM pool (non-DMA) can
just ignore the dma_address altogether (see patch above).
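
As a minimal sketch of that lookup idea (the demo_* names below are
hypothetical, not the actual TTM or driver structures), the backend can
keep a dma_addr_t array parallel to its page array, so "here is a
'struct page *', give me the bus address" becomes an index lookup:

/* Illustrative sketch only - demo_* names are made up, not TTM API. */
#include <linux/types.h>
#include <linux/mm_types.h>

struct demo_backend {
	unsigned long num_pages;
	struct page **pages;	/* pages handed to the backend */
	dma_addr_t *dma_addrs;	/* dma_addrs[i] is the bus address of pages[i] */
};

/* "Here is a 'struct page *', give me the bus address." */
static dma_addr_t demo_page_to_bus(struct demo_backend *be, struct page *page)
{
	unsigned long i;

	for (i = 0; i < be->num_pages; i++)
		if (be->pages[i] == page)
			return be->dma_addrs[i];
	return 0;	/* not a page this backend tracks */
}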


I actually had something simpler in mind, but when thinking a bit more deeply about it, it seems more complicated than I initially thought.

Namely that when we allocate pages from the ttm_backend, we actually populate it at the same time. be::populate would then not take a page array as an argument, and would actually be a no-op on many drivers.
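
A rough sketch of that shape (speculative, with made-up demo_* names
rather than the real ttm_backend_func): if the backend's page allocation
hook also binds/maps the pages, a separate populate() callback has
nothing left to do.

/* Speculative sketch - demo_* names are hypothetical, not the TTM API. */
struct demo_backend;

struct demo_backend_func {
	/* allocate @count pages and populate the backend in one step */
	int  (*get_pages)(struct demo_backend *be, unsigned long count);
	/* can be a no-op where get_pages() already did the work */
	int  (*populate)(struct demo_backend *be);
	void (*put_pages)(struct demo_backend *be);
};

/* Many drivers could then plug this in as their populate hook. */
static int demo_populate_noop(struct demo_backend *be)
{
	return 0;
}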

This moves us towards struct ttm_tt consisting almost only of its backend, so perhaps that whole API should be looked at with fresh eyes.
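
In that direction, struct ttm_tt might end up as little more than the
following (again only a speculative sketch, not the real structure):

/* Speculative sketch of a slimmed-down ttm_tt - not actual TTM code. */
struct demo_backend;

struct demo_ttm_tt {
	struct demo_backend *be;	/* backend owns the pages and their mappings */
	unsigned long num_pages;
	/* flags, caching state, etc. could move into or behind the backend */
};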

So anyway, I'm fine with the high-level things as they are now, and the dma_addr issue can be looked at later. If we could get a couple of extra pairs of eyes to review the code for style etc., that would be great, because I have very little time over the next couple of weeks.

/Thomas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
