
Re: [PATCH v3 1/7] xen/gnttab: Rework resource acquisition


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 15 Jan 2021 16:57:45 +0000
  • Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Michał Leszczyński <michal.leszczynski@xxxxxxx>, Hubert Jasudowicz <hubert.jasudowicz@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 15 Jan 2021 16:58:02 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15/01/2021 11:56, Jan Beulich wrote:
>> +    /* Grow table if necessary. */
>> +    grant_write_lock(gt);
>> +    rc = -EINVAL;
>> +    switch ( id )
>> +    {
>> +    case XENMEM_resource_grant_table_id_shared:
>> +        vaddrs = gt->shared_raw;
>> +        rc = gnttab_get_shared_frame_mfn(d, tot_frames - 1, &tmp);
> ... this will degenerate (and still cause an error) when frame
> is also zero, and will cause undue growing of the table when
> frame is non-zero yet not overly large.

Urgh, yes - that is why I had the check.

In which case I retract my change between v2 and v3 here.
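
To spell out the degenerate cases for anyone following along, here is a toy
standalone demonstration; the names are illustrative and it assumes tot_frames
is computed as frame + nr_frames:

    #include <stdio.h>

    /* Toy demonstration only: the arithmetic assumes an unsigned
     * tot_frames = frame + nr_frames, as in the logic under discussion. */
    static void show(unsigned int frame, unsigned int nr_frames)
    {
        unsigned int tot_frames = frame + nr_frames;
        unsigned int last = tot_frames - 1;  /* index handed to the frame lookup */

        printf("frame=%u nr_frames=%u -> lookup/grow up to index %u\n",
               frame, nr_frames, last);
    }

    int main(void)
    {
        show(0, 0); /* wraps to UINT_MAX: the lookup fails, an error by accident */
        show(4, 0); /* grows the table to cover index 3, which nobody asked for  */
        show(0, 2); /* normal request: indexes 0 and 1                           */
        return 0;
    }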

> As indicated before, I'm of the clear opinion that here - like
> elsewhere - a number of zero frames requested means that no
> action be taken at all, and success be returned.

The general world we work in (POSIX) agrees with my opinion over yours
when it comes to this matter.
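
As one concrete illustration of the convention I mean (mmap() is picked purely
as an example here; it has nothing to do with this series): POSIX requires a
zero-length mapping request to fail with EINVAL rather than succeed as a no-op.

    #include <sys/mman.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* A zero-length request is rejected outright, not treated as success. */
        void *p = mmap(NULL, 0, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if ( p == MAP_FAILED )
            printf("mmap(len=0): %s\n", strerror(errno));  /* EINVAL */

        return 0;
    }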

I spent a lot of time and effort getting this logic correct in v2, and I
do not have any further time to waste adding complexity to support a
non-existent corner case, nor is it reasonable to further delay all the
work which depends on this series.  This entire mess is already too damn
complicated without taking on extra complexity.

Entertaining the idea of supporting 0-length requests is really not as
simple as you seem to think it is, which is a large part of why I'm
stubbornly refusing to do so.
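
For concreteness, the minimal form of the special case being asked for is
something like the fragment below (an illustration only, not code from this
series), and even that immediately raises the question of which of the
existing id/frame checks a zero-length request is entitled to skip.

    /* Hypothetical early exit, sketched against the function quoted above. */
    if ( !nr_frames )
        return 0;   /* nothing requested: report success without touching gt */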

I am going to commit this patch (with some of the other minor adjustments).

If you wish to add the extra complexity yourself, you are welcome to all
the unit tests I put together which exercise the complexity, but I will
not ack the resulting change.

~Andrew



 

