Re: [Xen-devel] PCI passthrough for HVM with stubdomain broken by "tools/libxl: handle the iomem parameter with the memory_mapping hcall"
On 23/06/16 14:25, Marek Marczykowski-Górecki wrote:
> On Thu, Jun 23, 2016 at 03:46:46AM -0600, Jan Beulich wrote:
>>>>> On 23.06.16 at 11:23, <marmarek@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>>> On Thu, Jun 23, 2016 at 11:18:24AM +0200, Marek Marczykowski-Górecki wrote:
>>>> On Thu, Jun 23, 2016 at 03:12:47AM -0600, Jan Beulich wrote:
>>>>>>>> On 23.06.16 at 10:57, <marmarek@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>>>>>> On Thu, Jun 23, 2016 at 02:32:29AM -0600, Jan Beulich wrote:
>>>>>>> I wonder what good the duplication of the returned domain ID does: I'm
>>>>>>> tempted to remove the one in the command-specific structure. Does
>>>>>>> anyone have insight into why it was done that way?
>>>>>> Isn't XEN_DOMCTL_getdomaininfo supposed to return info about the first
>>>>>> existing domain with ID >= the passed one? Reading various comments in
>>>>>> the code it looks to be used for domain enumeration. This patch changes
>>>>>> that behaviour.
>>>>> No, it doesn't. It adjusts the behavior only for the DM case (which
>>>>> isn't supposed to get information on other than the target domain,
>>>>> i.e. in this one specific case the very domain ID needs to be passed
>>>>> in).
>>>> int xc_domain_getinfo(xc_interface *xch,
>>>>                       uint32_t first_domid,
>>>>                       unsigned int max_doms,
>>>>                       xc_dominfo_t *info)
>>>> {
>>>>     unsigned int nr_doms;
>>>>     uint32_t next_domid = first_domid;
>>>>     DECLARE_DOMCTL;
>>>>     int rc = 0;
>>>>
>>>>     memset(info, 0, max_doms*sizeof(xc_dominfo_t));
>>>>
>>>>     for ( nr_doms = 0; nr_doms < max_doms; nr_doms++ )
>>>>     {
>>>>         domctl.cmd = XEN_DOMCTL_getdomaininfo;
>>>>         domctl.domain = (domid_t)next_domid;
>>>>         if ( (rc = do_domctl(xch, &domctl)) < 0 )
>>>>             break;
>>>>         info->domid = (uint16_t)domctl.domain;
>>>>         (...)
>>>>         next_domid = (uint16_t)domctl.domain + 1;
>>>>
>>>>
>>>> Looks like it depends heavily on XEN_DOMCTL_getdomaininfo returning the
>>>> next valid domain.
>>> Hmm, looks like I've misread your patch. Reading again...
>>>
>>> But now I see rcu_read_lock(&domlist_read_lock) gets called only when
>>> looping over domains, while rcu_read_unlock is called in any case. Is
>>> that correct?
>> How that? There is this third hunk:
> Ok, after drawing a flowchart of the control flow in this function after
> your change, on a piece of paper, this case looks fine. But depending on
> how the domain was found (explicit loop or rcu_lock_domain_by_id),
> different locks are held, which makes it harder to follow what is going
> on.
>
> Crazy idea: how about making the code easy/easier to read instead of
> obfuscating it even more? The XEN_DOMCTL_getdomaininfo semantics are
> convoluted enough. How about this version (2 patches):
>
> xen: move domain lookup for getdomaininfo to the same
>
> XEN_DOMCTL_getdomaininfo has different semantics than most other
> domctls - it returns information about the first valid domain with ID >=
> the argument. But that's no excuse for having the lookup done in a
> different place, which made handling the various corner cases
> unnecessarily complex. Move the lookup to the first switch clause, and
> adjust locking to be the same as for the other cases.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
FWIW, I prefer this solution to the issue.
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, with a few style
nits.
> ---
> xen/common/domctl.c | 41 ++++++++++++++++++++---------------------
> 1 file changed, 20 insertions(+), 21 deletions(-)
>
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index e43904e..6ae1fe0 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -442,11 +442,29 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>      switch ( op->cmd )
>      {
>      case XEN_DOMCTL_createdomain:
> -    case XEN_DOMCTL_getdomaininfo:
>      case XEN_DOMCTL_test_assign_device:
>      case XEN_DOMCTL_gdbsx_guestmemio:
>          d = NULL;
>          break;
Newline here please.
> +    case XEN_DOMCTL_getdomaininfo:
> +        d = rcu_lock_domain_by_id(op->domain);
And here.
> +        if ( d == NULL )
> +        {
> +            /* search for the next valid domain */
/* Search for the next available domain. */
> +            rcu_read_lock(&domlist_read_lock);
> +
> +            for_each_domain ( d )
> +                if ( d->domain_id >= op->domain )
> +                {
> +                    rcu_lock_domain(d);
> +                    break;
> +                }
> +
> +            rcu_read_unlock(&domlist_read_lock);
> +            if ( d == NULL )
> +                return -ESRCH;
> +        }
> +        break;
Another newline here please.
>      default:
>          d = rcu_lock_domain_by_id(op->domain);
>          if ( d == NULL )
> @@ -862,33 +880,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          break;
>
>      case XEN_DOMCTL_getdomaininfo:
> -    {
> -        domid_t dom = op->domain;
> -
> -        rcu_read_lock(&domlist_read_lock);
> -
> -        for_each_domain ( d )
> -            if ( d->domain_id >= dom )
> -                break;
> -
> -        ret = -ESRCH;
> -        if ( d == NULL )
> -            goto getdomaininfo_out;
> -
>          ret = xsm_getdomaininfo(XSM_HOOK, d);
>          if ( ret )
> -            goto getdomaininfo_out;
> +            break;
>
>          getdomaininfo(d, &op->u.getdomaininfo);
> -
>          op->domain = op->u.getdomaininfo.domain;
>          copyback = 1;
> -
> -    getdomaininfo_out:
> -        rcu_read_unlock(&domlist_read_lock);
> -        d = NULL;
>          break;
> -    }
As far as I can tell, this is purely cleanup, and independent of XSM change.
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel