
Re: hypercalls with 64-bit results

  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 17 Jun 2021 11:22:28 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 17 Jun 2021 09:27:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jun 16, 2021 at 06:04:02PM +0200, Jan Beulich wrote:
> All,
> several years back do_memory_op() in libxc was changed to have "long"
> return type. This is because some of the sub-ops return potentially
> large values as the hypercall return value (i.e. not in an argument
> structure field). This change, however, didn't have the intended
> effect from all I can tell, which apparently manifests in the present
> two remaining ovmf failures in the staging osstest flights. Anthony
> tells me that ovmf as of not very long ago puts the shared info page
> at a really high address, thus making the p2m of the guest very large.
> Its size gets returned by XENMEM_maximum_gpfn as the function's
> return value.
> Since hypercalls from the tool stack are based on ioctl(), and since
> ioctl() has a return type of "int", I'm afraid there's no way we can
> deal with this by adjusting function return types in the libraries.
> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
> subops (for those cases where potentially large values get returned).

AFAICT NetBSD and FreeBSD are not affected by this issue, since the
hypercall return value is propagated to the caller in a long field of
the hypercall ioctl's payload structure.

osdep_hypercall in libs/call should however be fixed to return a long
instead of an int, and the wrappers around it should be adjusted
accordingly.


