Re: hypercalls with 64-bit results

  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 17 Jun 2021 10:03:50 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • Delivery-date: Thu, 17 Jun 2021 08:04:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.06.2021 20:15, Andrew Cooper wrote:
> On 16/06/2021 17:04, Jan Beulich wrote:
>> All,
>>
>> several years back do_memory_op() in libxc was changed to have "long"
>> return type. This is because some of the sub-ops return potentially
>> large values as the hypercall return value (i.e. not in an argument
>> structure field). This change, however, didn't have the intended
>> effect from all I can tell, which apparently manifests in the present
>> two remaining ovmf failures in the staging osstest flights. Anthony
>> tells me that ovmf as of not very long ago puts the shared info page
>> at a really high address, thus making the p2m of the guest very large.
>> Its size gets returned by XENMEM_maximum_gpfn, as function return
>> value.
>>
>> Since hypercalls from the tool stack are based on ioctl(), and since
>> ioctl() has a return type of "int", I'm afraid there's no way we can
>> deal with this by adjusting function return types in the libraries.
>> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
>> subops (for those cases where potentially large values get returned).
>>
>> Until we manage to deal with this I wonder whether we should suggest
>> to the ovmf folks to undo that change. I'm anyway not really
>> convinced this aggressive enlarging of the p2m is a good idea. There
>> are a number of cases in the hypervisor where we try to reduce GFN
>> ranges based on this upper bound, and there in particular is a loop
>> in mem-sharing code going all the way up to that limit. EPT P2M
>> dumping also has such a loop.
>
> There are multiple things in here which are disappointing, but I think
> they've mostly been known already.
>
> But I do agree that this is very much another nail in the coffin of the
> ioctl ABI.
>
> For ABIv2, there are many changes needed, and this ioctl ABI was never
> going to survive, for other reasons too.  Obviously, we can't wait for
> ABIv2 to fix this immediate issue.
>
> However, I think it might be reasonable to wait for ABIv2 until we can
> reasonably support VMs larger than 8T(?).

But it's not just XENMEM_maximum_gpfn that's affected; it's merely the
one exposing the underlying issue. Plus, if so, shouldn't we avoid
returning values that are going to be truncated (and, as can be seen
here, may then be mistaken for error codes further up the call chain)?

> For now, I'd agree with trying to undo the change in OVMF.

Anthony, thoughts?
