
Re: [Xen-devel][PATCH][LINUX] Dynamic modes support for xenfb (2 of 2)



Pat Campbell <plc@xxxxxxxxxx> writes:

> Markus Armbruster wrote:
>> Pat Campbell <plc@xxxxxxxxxx> writes:
>>
>>   
>>> Attached patch adds dynamic frame buffer resolution support to
>>> the PV xenfb frame buffer driver.
>>>
>>> Corresponding backend IOEMU patch is required for functionality but
>>> this patch is not dependent on it, preserving backwards compatibility.
>>>
>>> Please apply to tip of linux-2.6.18-xen
>>>
>>> Signed-off-by: Pat Campbell <plc@xxxxxxxxxx>
>>>     
>>
>> Adding another lock (queue_lock) to our already ticklish locking gives
>> me a queasy feeling...
>>
>> The purpose of the lock is not obvious to me.  Please explain that, in
>> the code.  Likewise, it's not entirely obvious that the locking works.
>> Please update the "How the locks work together" comment.
>>
>> But before you do that, I suggest you tell us exactly what problem
>> you're attempting to solve with queue_lock.  Maybe we can come up with
>> a less scary solution.  Maybe not.  But it's worth a try.  If it's
>> just to ensure the changes made by xenfb_set_par() are seen in
>> xenfb_do_resize() correctly, a memory barrier should do the trick
>> more easily.
>>
>>   
> xenfb_set_par() needs to complete its work before returning to the
> caller, so I need to send the resize event from within
> xenfb_set_par() itself.  That means we now have two event queue
> writers, xenfb_update_screen() and xenfb_set_par().  It is a simple
> two-writer, one-reader problem, easily solved by adding queue_lock.
>
> Previously I handled the resize in xenfb_thread().  On a
> multiprocessor system that could miss a resize and/or use incorrect
> values.
>
> Example: wrong resolution
>     P1:1     set_par sets resize_dpy        (w, h already in fb struct)
>     P2:1     thread wakeup sees resize_dpy set
>     P2:2     do_resize()
>     P2:3     reads fb width                 (old value)
>     P1:3     check_var/set_par changes fb height
>     P2:4     reads fb height                (new value: mismatched pair)
>
> Example: missed resolution change
>     P1:1     set_par sets resize_dpy        (w, h already in fb struct)
>     P2:1     thread wakeup sees resize_dpy set
>     P2:2     do_resize()
>     P2:3     reads fb width
>     P2:4     reads fb height
>     P1:3     set_par sets resize_dpy again
>     P2:5     clears resize_dpy              (second resize is lost)
>
> Adding a queue lock should be safe because:
>    1. xenfb_update_screen() can hold a mutex without interfering with
> zap_page_range()
>    2. xenfb_set_par() gives up on the lock after a timeout, so the
> update can still run
>
> I have verified that the lock works as expected and times out if necessary.

Your race is real.

Your old code shared the resolution state (info->resize_dpy and the
resolution variables in info->fb_info) between xenfb_do_resize()
(running in xenfb_thread) and xenfb_set_par() (running in another
thread).

Your new code shares the ring buffer instead.

Both methods can be made race-free with suitable synchronization.
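For instance, the torn-read trace above can be closed with acquire/release ordering alone.  A hypothetical userspace sketch with C11 atomics (the names here are illustrative, not the actual xenfb fields): the writer publishes the flag with release semantics after writing the mode, and the reader consumes it with acquire semantics before reading the mode.  Clearing the flag with an exchange also covers the missed-update trace, because a concurrent set_par() re-sets the flag and a corrective resize follows.

```c
#include <stdatomic.h>

/* Illustrative shared state; not the real xenfb fields. */
static int fb_width, fb_height;
static atomic_bool resize_dpy;

/* Writer (set_par side): write the mode first, then publish the flag
 * with release semantics so the mode writes cannot be reordered past
 * it. */
void publish_mode(int w, int h)
{
    fb_width = w;
    fb_height = h;
    atomic_store_explicit(&resize_dpy, 1, memory_order_release);
}

/* Reader (thread side): consume and clear the flag with acquire
 * semantics *before* reading the mode.  A set_par() racing with the
 * reads below re-sets the flag, so at worst a mixed snapshot is
 * followed by a corrective resize; no resize is ever lost. */
int consume_mode(int *w, int *h)
{
    if (!atomic_exchange_explicit(&resize_dpy, 0, memory_order_acquire))
        return 0;
    *w = fb_width;
    *h = fb_height;
    return 1;
}
```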

With your old code, xenfb_set_par() always succeeds.  Until the
backend gets around to processing the XENFB_TYPE_RESIZE message, it
interprets the frame buffer at the old resolution.  As you argue
below, this isn't fatal; it can merely mess up the display somewhat.

With your new code, xenfb_set_par() can take a long time (one second),
and it can fail, leaving the display in a deranged state until the
next resolution change.  Even then it doesn't fully synchronize with
the backend: it returns success after merely sending the message, so
the window until the backend actually processes it stays open.

I'd prefer the old behavior.  But I could be missing something here.
Am I?

I think you could fix the old code by protecting access to the shared
resolution state by a spin lock.
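A userspace sketch of that idea, using a pthread mutex as a stand-in for the kernel spinlock (the struct and field names are illustrative, not the actual xenfb ones): the reader snapshots width and height and clears the flag in one critical section, which closes both of your traces at once.

```c
#include <pthread.h>

/* Illustrative stand-in for the shared resolution state. */
struct fb_state {
    pthread_mutex_t lock;   /* the kernel code would use a spinlock */
    int width, height;
    int resize_dpy;         /* set by set_par, cleared by the thread */
};

/* xenfb_set_par() side: publish the new mode and the flag together. */
void set_par(struct fb_state *s, int w, int h)
{
    pthread_mutex_lock(&s->lock);
    s->width = w;
    s->height = h;
    s->resize_dpy = 1;
    pthread_mutex_unlock(&s->lock);
}

/* xenfb_thread() side: snapshot w/h and clear the flag in one
 * critical section, so a concurrent set_par() can neither tear the
 * pair nor have its request lost.  Returns nonzero if a resize was
 * pending; the caller then sends XENFB_TYPE_RESIZE. */
int do_resize(struct fb_state *s, int *w, int *h)
{
    int pending;

    pthread_mutex_lock(&s->lock);
    pending = s->resize_dpy;
    if (pending) {
        *w = s->width;
        *h = s->height;
        s->resize_dpy = 0;
    }
    pthread_mutex_unlock(&s->lock);
    return pending;
}
```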

If I am missing something, and your new code is the way to go, you
still need to migrate your explanation of why it works from e-mail to
the source file, where the poor maintainer can find it.

[...]

Looks like this is the last issue that you haven't addressed /
explained fully yet :)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

