
Re: [PATCH v2] vpci: introduce per-domain lock to protect vpci structure


  • To: "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Tue, 15 Feb 2022 08:41:28 +0000
  • Accept-language: en-US
  • Cc: "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 15 Feb 2022 08:41:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYIkOs9pmsZb0l5kefj1m3qdTVCayUSxyA
  • Thread-topic: [PATCH v2] vpci: introduce per-domain lock to protect vpci structure


On 15.02.22 10:11, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>
> @@ -171,8 +173,24 @@ static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
>       struct map_data data = { .d = d, .map = true };
>       int rc;
>   
> +    ASSERT(rw_is_write_locked(&d->vpci_rwlock));
> +
>       while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
> +    {
> +        /*
> +         * FIXME: apply_map is called from dom0 specific init code when
> +         * system_state < SYS_STATE_active, so there is no race condition
> +         * possible between this code and vpci_process_pending. So, neither
> +         * vpci_process_pending may try to acquire the lock in read mode and
> +         * also destroy pdev->vpci in its error path nor pdev may be disposed
> +         * yet. This means that it is not required to check if the relevant
> +         * pdev->vpci still exists after re-acquiring the lock.
> +         */

> I'm not sure why you need to mention vpci_process_pending here:
> apply_map and defer_map are mutually exclusive, so given the current
> code it's impossible to get in a situation where apply_map is called
> while there's pending work on the vCPU (ie: v->vpci.mem != NULL).
>
> Also there's no need for a FIXME tag: the current approach doesn't
> require any fixes unless we start using apply_map in a different
> context.
>
> Hence I think the comment should be along the lines of:
>
> /*
>  * It's safe to drop and reacquire the lock in this context without
>  * risking pdev disappearing because devices cannot be removed until the
>  * initial domain has been started.
>  */
This sounds good, will use this
> 
> Thanks, Roger.
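
For reference, a rough sketch of how the loop could read with Roger's wording folded in. This assumes the loop body drops and re-takes d->vpci_rwlock around process_pending_softirqs(), which is what the lock-drop discussion implies but is not visible in the quoted hunk:

    ASSERT(rw_is_write_locked(&d->vpci_rwlock));

    while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
    {
        /*
         * It's safe to drop and reacquire the lock in this context without
         * risking pdev disappearing because devices cannot be removed until
         * the initial domain has been started.
         */
        write_unlock(&d->vpci_rwlock);
        process_pending_softirqs();
        write_lock(&d->vpci_rwlock);
    }

Dropping the write lock only around process_pending_softirqs() keeps softirq processing from running with the per-domain lock held, while the comment records why no pdev->vpci re-check is needed after reacquisition.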