
Re[2]: [Xen-devel] PV DRM doesn't work without auto_translated_physmap feature in Dom0



Sure.
 
I will do it a bit later, when I have a more stable result.
 
Best regards,
Alexander
 
 
Monday, April 20, 2020, 8:59 +03:00 from Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>:
 
Hi,

On 4/19/20 14:26, Santucco wrote:
> Hello,
> I have found the source of the problem.
> In displ_be, BaseDump copies into the DRM buffer using the size reported
> by the i915 DRM driver, but that size is a bit larger than the size of my
> frontend display buffer. I have made a quick and dirty fix: copy each line
> of my display buffer into the middle of the corresponding line of the DRM
> display buffer. The patch is attached.
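
A minimal sketch of the kind of stride-aware, line-by-line copy described
above; the function and parameter names are illustrative stand-ins, not the
actual displ_be identifiers:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Copy a frame line by line when the destination (DRM dumb buffer)
    // pitch reported by the i915 driver is larger than the frontend's
    // line size. A single flat memcpy over the whole destination size
    // would read past the end of the smaller frontend buffer.
    void copyFrameByLines(uint8_t* dst, size_t dstPitch,
                          const uint8_t* src, size_t srcPitch,
                          size_t height)
    {
        const size_t lineBytes = std::min(srcPitch, dstPitch);

        for (size_t y = 0; y < height; ++y)
        {
            // The patch in this thread places the shorter source line
            // inside the wider destination line; here it is copied to
            // the start of each destination line for simplicity.
            std::memcpy(dst + y * dstPitch, src + y * srcPitch, lineBytes);
        }
    }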

Thank you for the patch and your efforts to fix the issue.

Could you please make a pull request to [1], so we can continue there?

Thank you in advance,

Oleksandr

> Best regards,
> Alexander
>
> Thursday, February 6, 2020, 11:20 +03:00 from Oleksandr Andrushchenko
> <oleksandr_andrushchenko@xxxxxxxx>:
> On 2/5/20 8:59 PM, Santucco wrote:
> > Hello,
> > OK, I commented out the memcpy call and ran the test.
> > displ_be hasn't crashed, and I have seen FLIP events in the log.
> > But there hasn't been a black screen, just a blink effect every
> > couple of seconds.
> > Logs are attached.
> OK, so I believe that the frontend - backend (displ_be) communication
> is fine and there is nothing to do there.
>
> Next, I would start debugging the following in Xen:
> (XEN) mm.c:2223:d2v0 Bad L1 flags 80
> and have a look at [1]. Probably someone on the Xen x86 side can tell
> whether this could be related to the flags at [2].
>
> > Best regards,
> > Alexander
> >
> > Wednesday, February 5, 2020, 9:31 +03:00 from Oleksandr Andrushchenko
> > <oleksandr_andrushchenko@xxxxxxxx>:
> > On 2/4/20 10:28 AM, Santucco wrote:
> > > Hello,
> > > displ_be was already compiled without zero-copy support.
> > > I have tried with the recompiled Dom0 kernel; the result is the same.
> > > Logs and configs (plus displ_be's CMakeCache.txt) are attached.
> > Ok, yet another test to localize the problem.
> > Could you please remove the memcpy from
> > #1  0x000055e5a1f28bec in Drm::DumbDrm::copy (this=0x7f9338000e00) at
> > /home/santucco/tmp/xen-troops/displ_be/src/displayBackend/drm/Dumb.cpp:149
> > and just memset the destination with 0 or whatever.
> >
> > I expect that the system won't crash and nothing will be shown (black
> > screen), but displ_be will show page-flip events in its logs.
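
A minimal sketch of what that test change could look like, with illustrative
stand-ins for whatever buffer pointer and size the real memcpy at
Dumb.cpp:149 uses:

    #include <cstddef>
    #include <cstring>

    // Debug-only substitution: fill the dumb buffer with zeros instead of
    // copying the incoming frame. If displ_be keeps running and logs
    // page-flip events with this change, the frontend/backend protocol
    // path is fine and the problem is confined to the buffer copy itself.
    void debugFillInsteadOfCopy(void* dumbBuffer, size_t dumbSize)
    {
        // std::memcpy(dumbBuffer, frontendFrame, dumbSize);  // original copy
        std::memset(dumbBuffer, 0, dumbSize);  // black frame, no source access
    }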
> > > Best regards,
> > > Alexander
> > >
> > > Monday, February 3, 2020, 10:36 +03:00 from Oleksandr
> > > Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>:
> > >
> > >
> > > On 2/1/20 4:39 PM, Santucco wrote:
> > > > Hello again,
> > > > I have not yet made my DRM client work, so I have tried to run
> > > > Linux as a DomU (to see how it should work); it doesn't work
> > > > either, displ_be catches SIGSEGV:
> > > >
> > > > #0  0x00007f4afed1c161 in ?? () from /lib64/libc.so.6
> > > > #1  0x000055723b9c5bec in Drm::DumbDrm::copy (this=0x7f4adc000e00) at
> > > > /home/santucco/tmp/xen-troops/displ_be/src/displayBackend/drm/Dumb.cpp:149
> > > > #2  0x000055723b9a8f51 in BuffersStorage::getFrameBufferAndCopy
> > > > (this=0x7f4ae00010e0, fbCookie=18446612682295083264) at
> > > > /home/santucco/tmp/xen-troops/displ_be/src/displayBackend/BuffersStorage.cpp:165
> > > > It tries to copy to mBuffer, which has a non-accessible address.
> > > > For the moment I see a strange offset for the mmap call of
> > > > /dev/drm/card0 in the strace log: 0x100000000. Is that normal?
> > > > Any hint about which direction to dig in will be very helpful.
> > > > Configuration details:
> > > > Xen 4.12.1
> > > > Dom0: Linux 4.20.17-gentoo #13 SMP Sat Dec 28 11:12:24 MSK 2019
> > > > x86_64 Intel(R) Celeron(R) CPU N3050 @ 1.60GHz GenuineIntel GNU/Linux
> > > > DomU: Linux 4.20.17-gentoo
> > > > latest xen-troops/libxenbe and xen-troops/displ_be
> > > > Logs (dmesg, xl dmesg, displ_be, strace log of displ_be), a gdb
> > > > backtrace, and kernel configs are attached.
> > > > Thanks in advance.
> > > Could you please try a Dom0 kernel WITHOUT the options below:
> > > CONFIG_XEN_GNTDEV_DMABUF=y
> > > CONFIG_XEN_GRANT_DMA_ALLOC=y
> > >
> > > Then, just to make sure, did you build displ_be without zero-copy
> > > support?
> > >
> > > > On 1/8/20 5:38 PM, Santucco wrote:
> > > > > Thank you very much for all your answers.
> > > > >
> > > > > Wednesday, January 8, 2020, 10:54 +03:00 from Oleksandr Andrushchenko
> > > > > <oleksandr_andrushchenko@xxxxxxxx>:
> > > > > On 1/6/20 10:38 AM, Jürgen Groß wrote:
> > > > > > On 06.01.20 08:56, Santucco wrote:
> > > > > >> Hello,
> > > > > >>
> > > > > >> I'm trying to use the vdispl interface from a PV OS; it
> > > > > >> doesn't work.
> > > > > >> Configuration details:
> > > > > >>      Xen 4.12.1
> > > > > >>      Dom0: Linux 4.20.17-gentoo #13 SMP Sat Dec 28 11:12:24 MSK 2019
> > > > > >> x86_64 Intel(R) Celeron(R) CPU N3050 @ 1.60GHz GenuineIntel GNU/Linux
> > > > > >>      DomU: x86 Plan9, PV
> > > > > >> displ_be as a backend for vdispl and vkb
> > > > > >>
> > > > > >> when the VM starts, displ_be reports an error:
> > > > > >> gnttab: error: ioctl DMABUF_EXP_FROM_REFS failed: Invalid argument
> > > > > >> (displ_be.log:221)
> > > > > >>
> > > > > >> the related Dom0 output is:
> > > > > >> [ 191.579278] Cannot provide dma-buf: use_ptemode 1
> > > > > >> (dmesg.create.log:123)
> > > > > >
> > > > > > This seems to be a limitation of the Xen dma-buf driver. It was
> > > > > > initially written to be used on ARM, where PV is not available.
> > > > > This is true, and we never tried/targeted PV domains with this
> > > > > implementation, so if there is a need for that, someone has to
> > > > > take a look at a proper implementation for PV…
> > > > >
> > > > > Have I understood you right that there is no proper
> > > > > implementation :-)?
> > > > There is none.
> > > > >
> > > > > >
> > > > > > CC-ing Oleksandr Andrushchenko, who is the author of that
> > > > > > driver. He should be able to tell us what would be needed to
> > > > > > enable PV dom0.
> > > > > >
> > > > > > Depending on your use case, it might be possible to use PVH
> > > > > > dom0, but support for this mode is "experimental" only and
> > > > > > some features are not yet working.
> > > > > >
> > > > > Well, one of the possible workarounds is to drop the zero-copy
> > > > > use-case (this is why the display backend tries to create dma-bufs
> > > > > from grants passed by the guest domain and fails with "Cannot
> > > > > provide dma-buf: use_ptemode 1").
> > > > > So, in this case the display backend will do memory copying for
> > > > > the incoming frames and won't touch the DMABUF_EXP_FROM_REFS ioctl.
> > > > > To do so, just disable zero-copying while building the backend [1].
> > > > >
> > > > > Thanks, I have just tried the workaround. The backend has failed
> > > > > in another place, not related to dma_buf.
> > > > > Anyway, it is enough to continue debugging my frontend
> > > > > implementation.
> > > > > Do you know how big the performance penalty is in comparison with
> > > > > the zero-copy variant?
> > > > Well, it solely depends on your setup, so I cannot tell what the
> > > > numbers would be in your case. Comparing to what I have doesn't
> > > > make any sense to me: one should compare apples to apples.
> > > > > Does it make sense if I make a dedicated HVM domain with Linux
> > > > > only for the purpose of the vdispl and vkbd backends? Is there
> > > > > hope this approach will work?
> > > > You can try it and see if this approach fits your design and requirements.
> > > > >
> > > > > >
> > > > > > Juergen
> > > > > >
> > > > > [1]
> > > > > https://github.com/xen-troops/displ_be/blob/master/CMakeLists.txt#L12
> > > > >
> > > > > Best regards,
> > > > >   Alexander Sychev
> > > >
> > >
> > >
> >
> ------------------------------------------------------------------------
> >
> [1]
> https://elixir.bootlin.com/linux/v5.5/source/drivers/xen/gntdev.c#L300
> [2]
> https://elixir.bootlin.com/linux/v5.5/source/drivers/xen/gntdev.c#L319
>

[1] https://github.com/xen-troops/displ_be
 

 

