[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v2 12/18] AMD/IOMMU: allow use of superpage mappings


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 13 Dec 2021 10:45:57 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 13 Dec 2021 09:46:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Dec 13, 2021 at 09:49:50AM +0100, Jan Beulich wrote:
> On 10.12.2021 16:06, Roger Pau Monné wrote:
> > On Fri, Sep 24, 2021 at 11:52:14AM +0200, Jan Beulich wrote:
> >> ---
> >> I'm not fully sure about allowing 512G mappings: The scheduling-for-
> >> freeing of intermediate page tables can take quite a while when
> >> replacing a tree of 4k mappings by a single 512G one. Plus (or otoh)
> >> there's no present code path via which 512G chunks of memory could be
> >> allocated (and hence mapped) anyway.
> > 
> > I would limit to 1G, which is what we support for CPU page tables
> > also.
> 
> I'm not sure I buy comparing with CPU side support when not sharing
> page tables. Not the least with PV in mind.

Hm, my thinking was that the reasons that prevent us from doing 512G
mappings on the CPU side would also apply to the IOMMU. Regardless of
that, given the current way in which replaced page table entries are
freed, I'm not sure it's fine to allow 512G mappings: freeing the
possibly huge number of 4K entries could allow a guest to hog a CPU
for a long time.

It would be better if we could somehow account for this work in a
per-vCPU way, similar to what we do with vPCI BAR mappings.

> > Should we also assert that level (or next_level) is always != 0 for
> > extra safety?
> 
> As said elsewhere - if this wasn't a static helper, I'd agree. But all
> call sites have respective conditionals around the call. If anything
> I'd move those checks into the function (but only if you think that
> would improve things, as to me having them at the call sites is more
> logical).

I'm fine leaving the checks in the callers; it was just a suggestion in
case we gain new callers that forget to do the checks themselves.

Thanks, Roger.



 

