Pat Campbell <plc@xxxxxxxxxx> writes:
> Markus Armbruster wrote:
>> Pat Campbell <plc@xxxxxxxxxx> writes:
>>
> cut out a whole bunch to address the need to protect some variables.
>>
>>
>>> +
>>> + xenfb_send_event(info, &event);
>>> }
>>>
>>> static int xenfb_queue_full(struct xenfb_info *info)
>>> @@ -209,6 +250,16 @@ static void xenfb_update_screen(struct x
>>> xenfb_do_update(info, x1, y1, x2 - x1, y2 - y1);
>>> }
>>>
>>> +static void xenfb_resize_screen(struct xenfb_info *info)
>>> +{
>>> + if (xenfb_queue_full(info))
>>> + return;
>>> +
>>> + info->resize_dpy = 0;
>>> + xenfb_do_resize(info);
>>> + xenfb_refresh(info, 0, 0, info->xres, info->yres);
>>>
>>
>> Hmm, ugly. See next comment.
>>
>>
>>> +}
>>> +
>>> static int xenfb_thread(void *data)
>>> {
>>> struct xenfb_info *info = data;
>>> @@ -217,6 +268,9 @@ static int xenfb_thread(void *data)
>>> if (info->dirty) {
>>> info->dirty = 0;
>>> xenfb_update_screen(info);
>>> + }
>>> + if (info->resize_dpy) {
>>> + xenfb_resize_screen(info);
>>> }
>>>
>>
>> Neither screen resizes nor dirtying goes to the backend immediately.
>> Instead, they accumulate in struct xenfb_info (dirty rectangle and
>> resize_dpy flag) until they get pushed by xenfb_thread().
>>
>> The way things work, a resize triggers three events:
>>
>> 1. The update event for the dirty rectangle. In theory, the dirty
>> rectangle could be clean, but I expect it to be quite dirty in
>> practice, as user space typically redraws everything right after a
>> resize.
>>
>> 2. The resize event.
>>
>> 3. Another update event, because xenfb_resize_screen() dirties the
>> whole screen. This event is delayed until the next iteration of
>> xenfb_thread()'s loop.
>>
>> I'd rather do it like this: decree that resize events count as whole
>> screen updates. Make xenfb_resize_screen() clear the dirty rectangle
>> (optional, saves an update event). Test resize_dpy before dirty.
>>
>> Also: you access resize_dpy, xres, yres and fb_stride from multiple
>> threads without synchronization. I fear that's not safe. Don't you
>> have to protect them with a spinlock, like dirty_lock protects the
>> dirty rectangle? Could reuse dirty_lock, I guess.
>>
>>
> Don't need to protect these three:
> fb_stride: Eliminated; now using fb_info->fixed.line_len, which is
> read-only after probe().
> xres, yres:
> Set in xenfb_set_par(), only read in all other threads.
Say one thread does:

    xenfb_info->xres = ...
    xenfb_info->yres = ...
    xenfb_info->resize_dpy = 1;

And another thread does:

    if (info->resize_dpy) {
            info->resize_dpy = 0;
            event.type = XENFB_TYPE_RESIZE;
            event.resize.width = info->xres;
            event.resize.height = info->yres;
            event.resize.stride = info->fb_stride;
You're in trouble on SMP. The assignment to resize_dpy may be visible
on another processor *before* the assignments to xres, yres. In other
words, the second thread can use old values, and the resize gets lost
or bent out of shape.
You need memory barriers to protect against this. The easiest way by
far to get memory barriers right is to use some higher level
synchronization primitive.
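For instance, a rough sketch (untested; it reuses dirty_lock, keeps
the field names from your patch, and folds in my suggestion above to
test resize_dpy before dirty):

    /* writer side, e.g. in xenfb_set_par() */
    unsigned long flags;

    spin_lock_irqsave(&info->dirty_lock, flags);
    info->xres = xres;      /* geometry is published under the lock... */
    info->yres = yres;
    info->resize_dpy = 1;   /* ...so the flag can't become visible
                             * before the new geometry */
    spin_unlock_irqrestore(&info->dirty_lock, flags);

    /* reader side, in xenfb_thread()'s loop
     * (queue-full retry elided for brevity) */
    int resize;
    unsigned long flags;

    spin_lock_irqsave(&info->dirty_lock, flags);
    resize = info->resize_dpy;
    info->resize_dpy = 0;
    spin_unlock_irqrestore(&info->dirty_lock, flags);
    if (resize)
            xenfb_resize_screen(info);  /* clears the dirty rectangle;
                                         * counts as whole screen update */
    else if (info->dirty) {
            info->dirty = 0;
            xenfb_update_screen(info);
    }

The unlock on the writer's CPU paired with the lock on the reader's
CPU gives you all the ordering you need; no explicit barriers
required.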
> resize_dpy:
> While thinking about this variable it occurred to me that a resize would
> invalidate any screen updates that are in the queue. Resize from
> xenfb_thread() was done so that I could ensure that the resize did not
> get lost due to a queue full situation. So, if I clear the queue,
> cons=prod, I can add in the resize event packet directly from
> xenfb_set_par(), getting rid of resizing from within the thread.
It's tempting, isn't it?
> The change would require a new spinlock to protect the queue but
> would eliminate resize_dpy.
>
> Setting cons=prod should be safe. Worst case is when a clear occurs
> while the backend is in its event loop; the loop then processes
> events it did not need to.
>
> xenfb_resize_screen() becomes (called directly from xenfb_set_par()):
>
>     spin_lock_irqsave(&info->queue_lock, flags);
>     info->page->out_cons = info->page->out_prod;
>     spin_unlock_irqrestore(&info->queue_lock, flags);
>     xenfb_do_resize(info);
>     xenfb_refresh(info, 0, 0, info->xres, info->yres);
>
> xenfb_send_event() becomes:
>
>     spin_lock_irqsave(&info->queue_lock, flags);
>     prod = info->page->out_prod;
>     /* caller ensures !xenfb_queue_full() */
>     mb();  /* ensure ring space available */
>     XENFB_OUT_RING_REF(info->page, prod) = *event;
>     wmb();  /* ensure ring contents visible */
>     info->page->out_prod = prod + 1;
>     spin_unlock_irqrestore(&info->queue_lock, flags);
>
> Thoughts on this approach?
>
> Pat
The correctness of a lockless ring buffer depends on having *one*
producer put stuff in (here: the frontend) and *one* consumer take
stuff out (here: the backend), each doing so very carefully.
If I understand you correctly, you're proposing to rely on queue_lock
to synchronize within the frontend (making xenfb_resize_screen() and
xenfb_send_event() mutually exclusive), while arguing that neglecting
to synchronize the frontend with the backend can't do harm.
I fear you are wrong.
Say the producer empties the ring buffer after the consumer has
determined that the ring buffer is not empty, but before it is done
taking out the first element. Say that element is in slot #7.
The producer then continues, and puts stuff into the now empty ring
buffer normally. Eventually, it'll reach slot #7.
The consumer may or may not be done with that element at that time.
We've got a race condition.
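To make the window concrete, here's the backend's consumer loop,
schematically (handle_event() stands in for whatever the backend
actually does with each event; this is a sketch, not the real code):

    uint32_t prod, cons;
    union xenfb_out_event event;

    prod = info->page->out_prod;
    rmb();                  /* see event contents only after prod */
    for (cons = info->page->out_cons; cons != prod; cons++) {
            event = XENFB_OUT_RING_REF(info->page, cons);
            /* <-- if the frontend sets out_cons = out_prod right
             * here and then produces enough new events, this very
             * slot is overwritten while we're still using it */
            handle_event(info, &event);
    }
    mb();                   /* finish reading before freeing slots */
    info->page->out_cons = cons;

Your queue_lock does nothing to close that window, because the
backend never takes it.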