
Re: [Xen-devel] Re: [drm:r100_ring_test] *ERROR* radeon: ring test failed



On Tue, Oct 27, 2009 at 01:00:19PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Oct 27, 2009 at 05:46:51PM +0200, Pasi Kärkkäinen wrote:
> > On Wed, Oct 21, 2009 at 02:31:30PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > I updated the pv_ops dom0 git tree to the latest 2.6.31.4 tree as of
> > > > today, and also applied your ttm.patch.
> > > > 
> > > > Modesetting works now, and there are no drm/radeon errors.
> > > 
> > > Thank you for testing it.
> > > 
> > 
> > Btw, are you going to post this for inclusion in the drm/ttm trees?
> 
> I am not really comfortable with it. It has the same drawbacks
> as the fix for the drm_scatter, where we blindly assume
> phys_to_bus(virt_to_phys(X)) will give us the same value as
> what dma_alloc_coherent provides. We should save that bus address
> somewhere...
> 
> Saving it somewhere (perhaps in one of the structs the drm_ttm allocates)
> could do it. But we should probably differentiate between memory
> that is being allocated for DMA transfers vs other things, so that
> we don't over-exercise dma_alloc_coherent. Though maybe
> the memory returned via drm_tt calls is only used for DMA transfers.
>
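A minimal sketch of the idea above, with hypothetical names (not the real
drm/ttm structures): save the dma_addr_t that dma_alloc_coherent() hands
back instead of re-deriving it later, since under Xen
phys_to_bus(virt_to_phys(X)) need not match it:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical container; the real drm/ttm structs differ. */
struct dma_backed_page {
	void       *vaddr;    /* CPU address from dma_alloc_coherent() */
	dma_addr_t  bus_addr; /* bus address the API actually returned */
};

static int dma_backed_page_alloc(struct device *dev,
				 struct dma_backed_page *p)
{
	p->vaddr = dma_alloc_coherent(dev, PAGE_SIZE, &p->bus_addr,
				      GFP_KERNEL);
	if (!p->vaddr)
		return -ENOMEM;
	/* Consumers read p->bus_addr instead of recomputing it. */
	return 0;
}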

Ok..

> We can figure this out. Pasi, I don't have a working modesetting machine,
> but you do. Can you compile your pv_ops kernel with the fix I provided
> earlier, along with CONFIG_DMA_API_DEBUG=y enabled? Once mode-setting is
> turned on and your machine is humming along (maybe even run glxgears),
> compile the attached module and load it. You should get a kernel dump of
> all devices that are using the DMA buffers. Can you e-mail that back to
> me, please?
> 
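The attached module itself isn't reproduced in the archive; a minimal
module producing a comparable dump, assuming CONFIG_DMA_API_DEBUG=y and
that debug_dma_dump_mappings() from lib/dma-debug.c is exported on this
kernel, might look like this:

#include <linux/module.h>
#include <linux/dma-debug.h>

static int __init dma_dump_init(void)
{
	/* NULL means "dump the active DMA mappings of every device". */
	debug_dma_dump_mappings(NULL);
	return 0;
}

static void __exit dma_dump_exit(void)
{
}

module_init(dma_dump_init);
module_exit(dma_dump_exit);
MODULE_LICENSE("GPL");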

Yeah, I'll do that and get back to you.

-- Pasi


