
Re: [Xen-devel] [PATCH 03/14 v4] xen/arm: vpl011: Add pl011 uart emulation in Xen



Hi Julien,


>>> +static uint8_t vpl011_read_data(struct domain *d)
>>> +{
>>> +    unsigned long flags;
>>> +    uint8_t data = 0;
>>> +    struct vpl011 *vpl011 = &d->arch.vpl011;
>>> +    struct xencons_interface *intf = vpl011->ring_buf;
>>> +    XENCONS_RING_IDX in_cons = intf->in_cons;
>>> +    XENCONS_RING_IDX in_prod = intf->in_prod;
>>
>>
>> After reading the indexes, we always need barriers. In this case:
>>
>>   smp_rmb();
>
>
> Well, there are already barriers with the spinlock. However, I am a
> bit concerned about reading those indexes without the lock taken. You
> can have concurrent calls to vpl011_read_data happening, and therefore
> the indexes may have changed by the time the lock is taken.

Is there a possibility of concurrent access, given that this function
is executed as part of trap handling, which should already serialize
calls to it?
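
If the barrier turns out to be needed even with the lock, my
understanding of the suggestion is something along these lines (only a
sketch, reusing the VPL011_LOCK and xencons helpers from this patch),
which would also move the index reads under the lock:

    VPL011_LOCK(d, flags);

    in_cons = intf->in_cons;
    in_prod = intf->in_prod;
    /* Ensure the indexes are read before the ring contents. */
    smp_rmb();

    if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
        data = intf->in[xencons_mask(in_cons, sizeof(intf->in))];

    VPL011_UNLOCK(d, flags);
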
>
>
>>
>>
>>> +    VPL011_LOCK(d, flags);
>>> +
>>> +    /*
>>> +     * It is expected that there will be data in the ring buffer when this
>>> +     * function is called, since the guest is expected to read the data
>>> +     * register only if the RXFE flag is not set.
>>> +     * If the guest still reads when the RXFE bit is set, then 0 will be
>>> +     * returned.
>>> +     */
>>> +    if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
>>> +    {
>>> +        data = intf->in[xencons_mask(in_cons, sizeof(intf->in))];
>>> +        in_cons += 1;
>>> +        intf->in_cons = in_cons;
>>> +        smp_mb();
>>> +    }
>>> +    else
>>> +    {
>>> +        gprintk(XENLOG_ERR, "vpl011: Unexpected IN ring buffer empty\n");
>>> +    }
>>> +
>>> +    if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) == 0 )
>>> +    {
>>> +        vpl011->uartfr |= RXFE;
>>> +        vpl011->uartris &= ~RXI;
>>> +    }
>>> +    vpl011->uartfr &= ~RXFF;
>>> +    VPL011_UNLOCK(d, flags);
>>
>>
>> I am pretty sure that the PV console protocol requires us to notify the
>> other end even on reads. We need to add a notify_via_xen_event_channel
>> here, I think.
>
>
> I would agree here. On the previous version, I asked Bhupinder to
> explain why it is necessary and he said: "On second thoughts,
> notification is not required".
>
I understand that xenconsole currently uses the event notification as
an indication to read data from the ring buffer. For writing data, it
periodically checks whether there is space in the ring buffer and, if
there is, writes more data.

I agree that the protocol may require sending notifications on reads
as well. I will add back the notification for this case.
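
For reference, what I plan to add at the end of vpl011_read_data() is
roughly the following (a sketch; I am assuming here that the console
event channel port is kept in a vpl011->evtchn field, whatever the
final field name ends up being):

    VPL011_UNLOCK(d, flags);

    /*
     * Notify the other end after consuming data, so that it can refill
     * the IN ring as soon as space becomes available.
     * (vpl011->evtchn is an assumed name for the event channel port.)
     */
    notify_via_xen_event_channel(d, vpl011->evtchn);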

>>
>>
>>> +    return data;
>>> +}
>>> +
>>> +static void vpl011_write_data(struct domain *d, uint8_t data)
>>> +{
>>> +    unsigned long flags;
>>> +    struct vpl011 *vpl011 = &d->arch.vpl011;
>>> +    struct xencons_interface *intf = vpl011->ring_buf;
>>> +    XENCONS_RING_IDX out_cons = intf->out_cons;
>>> +    XENCONS_RING_IDX out_prod = intf->out_prod;
>>
>>
>>   smp_mb()
>
>
> Same remark as above.
>
> [...]
>
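For the write path, I understand the same pattern applies, mirrored
for the producer side (again only a sketch, using the helpers from
this patch):

    VPL011_LOCK(d, flags);

    out_cons = intf->out_cons;
    out_prod = intf->out_prod;
    /* Ensure the indexes are read before the ring is written. */
    smp_mb();

    if ( xencons_queued(out_prod, out_cons, sizeof(intf->out)) !=
         sizeof(intf->out) )
    {
        intf->out[xencons_mask(out_prod, sizeof(intf->out))] = data;
        /* Commit the data before publishing the new producer index. */
        smp_wmb();
        intf->out_prod = out_prod + 1;
    }

    VPL011_UNLOCK(d, flags);
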
>>> +static void vpl011_data_avail(struct domain *d)
>>> +{
>>> +    unsigned long flags;
>>> +    struct vpl011 *vpl011 = &d->arch.vpl011;
>>> +    struct xencons_interface *intf = vpl011->ring_buf;
>>> +    XENCONS_RING_IDX in_cons = intf->in_cons;
>>> +    XENCONS_RING_IDX in_prod = intf->in_prod;
>>> +    XENCONS_RING_IDX out_cons = intf->out_cons;
>>> +    XENCONS_RING_IDX out_prod = intf->out_prod;
>>> +    XENCONS_RING_IDX in_ring_qsize, out_ring_qsize;
>>
>>
>>   smp_mb()
>
>
> Ditto.
>
Regards,
Bhupinder

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

