
Re: [Xen-devel] [PATCH v12] This is the ABI for the two halves of a para-virtualized sound driver to communicate with each other.



>>> On 25.11.16 at 15:27, <andr2000@xxxxxxxxx> wrote:
> On 11/25/2016 04:12 PM, Jan Beulich wrote:
>>>>> On 25.11.16 at 14:59, <andr2000@xxxxxxxxx> wrote:
>>> On 11/25/2016 02:54 PM, Jan Beulich wrote:
>>>>>>> On 25.11.16 at 12:57, <andr2000@xxxxxxxxx> wrote:
>>>>> +struct xensnd_page_directory {
>>>>> +    grant_ref_t gref_dir_next_page;
>>>>> +    uint32_t num_grefs;
>>>> You can't fit that many requests on one page anyway, so I think it
>>>> would be better to limit this to 16 bits, and leave 16 bits reserved
>>>> for future use (specified as must-be-zero).
>>> These are not for requests, but for buffers. You can imagine the size
>>> of such a buffer if we play something like 8 channels at 192 kHz...
>> I don't understand: You have one such entry per page, so how
>> can it be larger than (or even equal to) PAGE_SIZE / sizeof(grant_ref_t)?
> We employ the idea of a linked list here: gref_dir_next_page points
> to the next page of the same structure.
> If there are no more pages with grefs, then num_grefs will not be equal to
> (PAGE_SIZE - sizeof(gref_dir_next_page) - sizeof(num_grefs)) /
> sizeof(grant_ref_t),
> but will contain the remaining number of grefs in this page.
> This way we can flexibly allocate and reference buffers of any requested
> size.
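
For illustration, a minimal sketch of the directory page layout under
discussion, assuming a 4 KiB page; the grant_ref_t typedef, the gref[]
array, and the GREFS_PER_DIR_PAGE macro are assumptions made for this
sketch and are not part of the quoted patch:

#include <stddef.h>
#include <stdint.h>

typedef uint32_t grant_ref_t;        /* placeholder for the Xen typedef */

#define XEN_PAGE_SIZE 4096           /* assumed page size */

struct xensnd_page_directory {
    grant_ref_t gref_dir_next_page;  /* gref of next directory page, 0 == last */
    uint32_t num_grefs;              /* valid entries in gref[] on this page */
    grant_ref_t gref[];              /* buffer page grefs filling the rest (assumed) */
};

/* How many buffer grefs one directory page can carry under these assumptions. */
#define GREFS_PER_DIR_PAGE \
    ((XEN_PAGE_SIZE - offsetof(struct xensnd_page_directory, gref)) / \
     sizeof(grant_ref_t))

With these assumptions one directory page carries (4096 - 8) / 4 = 1022
buffer grefs, and only the last page of the chain has num_grefs below
that maximum.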

This in no way addresses my original reply or my subsequent ones.
(Apart from that, elsewhere you state that a zero gref indicates
the end of the list. And then, am I reading this most recent reply
of yours correctly that you mean for this list to be volatile? If
so, how do frontend and backend synchronize accesses?)

Jan
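
For illustration, a minimal sketch of how one side might walk such a
directory chain, terminating on a zero gref as mentioned above; the struct
is the one sketched earlier, and map_dir_page()/unmap_dir_page() are
hypothetical helpers standing in for real grant-table mapping code:

/* Hypothetical helpers; real code would grant-map/unmap the page. */
struct xensnd_page_directory *map_dir_page(grant_ref_t gref);
void unmap_dir_page(struct xensnd_page_directory *dir);

/* Count how many buffer grefs a directory chain describes. */
static uint32_t count_buffer_grefs(grant_ref_t dir_gref)
{
    uint32_t total = 0;

    while (dir_gref != 0) {
        struct xensnd_page_directory *dir = map_dir_page(dir_gref);

        total += dir->num_grefs;
        dir_gref = dir->gref_dir_next_page;
        unmap_dir_page(dir);
    }

    return total;
}

Note that this only works if the chain is stable while it is walked, which
is exactly the synchronization question raised above.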


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

