
Re: [Xen-devel] [alsa-devel] [PATCH 0/2] sndif: add explicit back and front synchronization



On 03/13/2018 08:48 PM, Takashi Iwai wrote:
On Tue, 13 Mar 2018 18:31:55 +0100,
Oleksandr Andrushchenko wrote:
On 03/13/2018 06:31 PM, Takashi Iwai wrote:
On Tue, 13 Mar 2018 12:49:00 +0100,
Oleksandr Andrushchenko wrote:
So, I put together a POC to exercise the protocol changes and see what
an implementation of the HW parameter negotiation would look like.

Please find the protocol changes at [1]:
- add a XENSND_OP_HW_PARAM_QUERY request to read/update the
  configuration space for the given parameter: the request passes the
  desired parameter interval, and the response returns the min/max
  interval the parameter may take (see the sketch after this list).
  Parameters supported by this request:
    - frame rate
    - sample rate
    - number of channels
    - buffer size
    - period size
- add a minimum buffer size to the XenStore configuration
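
For illustration, a minimal sketch of what such a per-parameter query
could look like on the ring; all struct and field names below are
hypothetical and do not reflect the actual layout posted at [1]:

    #include <stdint.h>

    /* Hypothetical sketch only; not the layout at [1]. */
    struct xensnd_hw_param_interval {
        uint32_t min;
        uint32_t max;
    };

    /* Request: the frontend passes the desired interval for one
     * parameter, e.g. the number of channels. */
    struct xensnd_req_hw_param_query {
        uint8_t param;                            /* parameter to query */
        uint8_t pad[3];
        struct xensnd_hw_param_interval interval; /* desired min/max */
    };

    /* Response: the backend returns the min/max it can actually honor. */
    struct xensnd_resp_hw_param_query {
        struct xensnd_hw_param_interval interval;
    };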

Compared to the protocol changes I posted earlier, I see that
XENSND_OP_HW_PARAM_SET is not really needed, so it has been removed.

The implementation in the PV frontend driver is at [2].

Takashi, could you please take a look at the above and check whether it
meets your expectations, so I can move forward?
At a quick glance, this looks almost good.
But the mixture of SNDRV_PCM_HW_PARAM_PERIOD_SIZE and
SNDRV_PCM_HW_PARAM_BUFFER_BYTES is likely confusing:
*_SIZE is measured in frames, while *_BYTES is measured in bytes.
You should align both PERIOD_ and BUFFER_ to the same units,
i.e. either use SNDRV_PCM_HW_PARAM_PERIOD_BYTES and *_BUFFER_BYTES,
or SNDRV_PCM_HW_PARAM_PERIOD_SIZE and *_BUFFER_SIZE.
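
For reference, a frame is one sample per channel, so the two units
relate as in this purely illustrative helper (alsa-lib provides
snd_pcm_frames_to_bytes() for the same conversion):

    #include <stddef.h>

    /* Illustrative only: the frames <-> bytes relation behind *_SIZE
     * vs. *_BYTES.  E.g. for interleaved S16_LE stereo, one frame is
     * 2 channels * 2 bytes = 4 bytes. */
    static size_t frames_to_bytes_sketch(size_t frames,
                                         unsigned int channels,
                                         unsigned int bytes_per_sample)
    {
        return frames * channels * bytes_per_sample;
    }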
You are correct; fixed at [1].
Also, a slight remaining concern is the use case where hw_params is
called multiple times.  An application may call hw_free and hw_params
freely, or even call hw_params multiple times in a row, in order to
change the parameters.

If the backend needs to resolve some dependency between parameters
(e.g. the available period size depends on the sample rate), the
backend has to remember the previously passed parameters.

So, instead of passing a single parameter, you could extend the protocol
to always pass the full set of (five) parameters, too.
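
A minimal sketch of that alternative, reusing the hypothetical interval
struct from the earlier sketch (the name is illustrative, not part of
the posted protocol):

    /* Hypothetical "pass all five parameters at once" variant, so the
     * backend can resolve inter-parameter dependencies statelessly. */
    struct xensnd_req_hw_param_query_all {
        struct xensnd_hw_param_interval frame_rate;
        struct xensnd_hw_param_interval sample_rate;
        struct xensnd_hw_param_interval channels;
        struct xensnd_hw_param_interval buffer;
        struct xensnd_hw_param_interval period;
    };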

OTOH, this can be considered a minor case, and the backend
(e.g. PA) can likely support every possible combination, so maybe
simpler code is the better solution in the end.
Yes, let's take it step by step.
If you are OK with what we have at the moment, then, after I implement
both the backend and frontend changes and confirm that the protocol
works, I will send v3 of the series (protocol changes).

Still, there are some questions:
1. Do we really need the minimum buffer value in the configuration [2]?
I see no way it can be used; for instance, at [3] we only have
snd_pcm_hardware.buffer_bytes_max, but no minimum.
So I feel I can drop it.
Actually, with the hw_param query mechanism this setting is moot.
You can pass a fixed value there that should be large enough for all
cases.
OK, so only the buffer max, as it is already defined.
2. Can I assume that min buffer size == period size and add such a
constraint
in the frontend driver?
Buffer size == period size is a special case, i.e. periods=1, and it
most likely won't work.  It's used only for cases like a PA deployment
without the period interrupt, and it needs a special hw_params flag
that your driver doesn't handle.

So for the sane setup, you can safely assume min_periods=2.
Thanks, I will limit the minimum to 2 periods then.
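
A minimal sketch of how the frontend could express that constraint,
assuming a standard ALSA kernel driver; everything except periods_min
is a hypothetical value, and this is not the actual code at [2]:

    #include <sound/pcm.h>

    /* Sketch: enforce at least 2 periods via the snd_pcm_hardware
     * description the driver assigns to runtime->hw in its PCM open
     * callback. */
    static const struct snd_pcm_hardware xen_fe_pcm_hw_sketch = {
        .info             = SNDRV_PCM_INFO_INTERLEAVED |
                            SNDRV_PCM_INFO_MMAP,
        .periods_min      = 2,          /* min buffer = 2 * period size */
        .periods_max      = 8,          /* hypothetical */
        .period_bytes_min = 1024,       /* hypothetical */
        .period_bytes_max = 32 * 1024,  /* hypothetical */
        .buffer_bytes_max = 64 * 1024,  /* hypothetical */
    };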
3. On the backend side (ALSA), with the current protocol changes I will
call something like
int snd_pcm_hw_params_set_channels_minmax(snd_pcm_t *pcm,
snd_pcm_hw_params_t *params, unsigned int *min, unsigned int *max)

instead of

int snd_pcm_hw_params_set_channels(snd_pcm_t *pcm, snd_pcm_hw_params_t
*params, unsigned int val)

while servicing
XENSND_OP_HW_PARAM_QUERY.XENSND_OP_HW_PARAM_CHANNELS. Does this make
sense?
Yeah, that's better, I suppose.
Excellent
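
A hedged usage sketch of servicing that query with alsa-lib; the
function name and error handling are illustrative:

    #include <alsa/asoundlib.h>

    /* Sketch: refine a requested channels interval against what the
     * device supports, as the backend might do when servicing
     * XENSND_OP_HW_PARAM_QUERY for the number of channels.  On
     * success, *min and *max hold the interval for the response. */
    static int query_channels_interval(snd_pcm_t *pcm,
                                       unsigned int *min,
                                       unsigned int *max)
    {
        snd_pcm_hw_params_t *params;
        int err;

        snd_pcm_hw_params_alloca(&params);
        err = snd_pcm_hw_params_any(pcm, params);
        if (err < 0)
            return err;
        return snd_pcm_hw_params_set_channels_minmax(pcm, params,
                                                     min, max);
    }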

Takashi
Thank you very much for helping with this!!!
Oleksandr Andrushchenko
