
RE: [PATCH 04/37] xen: introduce an arch helper for default dma zone status


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Wei Chen <Wei.Chen@xxxxxxx>
  • Date: Wed, 19 Jan 2022 02:49:02 +0000
  • Accept-language: en-US
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>
  • Delivery-date: Wed, 19 Jan 2022 02:50:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 04/37] xen: introduce an arch helper for default dma zone status

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 18 January 2022 22:16
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx; julien@xxxxxxx
> Subject: Re: [PATCH 04/37] xen: introduce an arch helper for default dma
> zone status
> 
> On 18.01.2022 10:20, Wei Chen wrote:
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: 18 January 2022 16:16
> >>
> >> On 18.01.2022 08:51, Wei Chen wrote:
> >>>> From: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> Sent: 18 January 2022 0:11
> >>>> On 23.09.2021 14:02, Wei Chen wrote:
> >>>>> In the current code, when Xen runs on a multi-node NUMA
> >>>>> system, it sets dma_bitsize in end_boot_allocator to reserve
> >>>>> some low-address memory for DMA.
> >>>>>
> >>>>> The current implementation carries some x86-specific
> >>>>> assumptions, because on x86 memory starts from address 0. On a
> >>>>> multi-node NUMA system where a single node contains the
> >>>>> majority or all of the DMA memory, x86 prefers to serve
> >>>>> non-local allocations rather than exhaust the DMA memory
> >>>>> ranges. Hence x86 uses dma_bitsize to set aside a largely
> >>>>> arbitrary amount of memory for DMA; allocations from these
> >>>>> ranges happen only after all other nodes' memory is exhausted.
> >>>>>
> >>>>> But these assumptions are not shared by all architectures; Arm,
> >>>>> for example, does not have them. So in this patch, we introduce
> >>>>> an arch_have_default_dmazone helper for each arch to determine
> >>>>> whether it needs to set dma_bitsize to reserve memory for DMA
> >>>>> allocations.
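[For illustration, the helper described in the commit message might look roughly like the sketch below. This is not the patch's actual code: the num_online_nodes() stand-in and the split into explicitly named per-arch flavours are assumptions made so the sketch is self-contained.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for Xen's NUMA node count; a mutable global here purely so
 * the sketch is self-contained. */
static unsigned int online_nodes = 2;

static unsigned int num_online_nodes(void)
{
    return online_nodes;
}

/*
 * x86 flavour: a low-address DMA reservation is only worthwhile on a
 * multi-node NUMA system, where the node holding the low memory would
 * otherwise be drained by ordinary allocations.
 */
static bool x86_have_default_dmazone(void)
{
    return num_online_nodes() > 1;
}

/*
 * Arm flavour: no default DMA zone is needed, since Dom0's low-memory
 * requirements are handled when its memory is allocated at domain
 * build time.
 */
static bool arm_have_default_dmazone(void)
{
    return false;
}
```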
> >>>>
> >>>> How would Arm guarantee availability of memory below a certain
> >>>> boundary for limited-capability devices? Or is there no need
> >>>> because there's an assumption that I/O for such devices would
> >>>> always pass through an IOMMU, lifting address size restrictions?
> >>>> (I guess in a !PV build on x86 we could also get rid of such a
> >>>> reservation.)
> >>>
> >>> On Arm, we can still have some devices with limited DMA capability,
> >>> and we also don't force all such devices to use an IOMMU. These
> >>> devices affect dma_bitsize; the RPi platform, for example, sets its
> >>> dma_bitsize to 30. But on a multi-node NUMA system, Arm doesn't have
> >>> a default DMA zone, and multiple nodes are not a constraint on
> >>> dma_bitsize. Some previous discussion can be found here [1].
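[For context: dma_bitsize expresses the DMA-reachable boundary as an address bit width, so the RPi value of 30 corresponds to a 1 GiB limit. A small illustrative computation, not Xen code:]

```c
#include <assert.h>
#include <stdint.h>

/* Addresses below (1 << dma_bitsize) are considered DMA-reachable. */
static uint64_t dma_boundary(unsigned int dma_bitsize)
{
    return UINT64_C(1) << dma_bitsize;
}
```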
> >>
> >> I'm afraid that doesn't give me more clues. For example, in the mail
> >> being replied to there I find "That means, only first 4GB memory can
> >> be used for DMA." Yet that's not an implication from setting
> >> dma_bitsize. DMA is fine to occur to any address. The special address
> >> range is being held back in case in particular Dom0 is in need of such
> >> a range to perform I/O to _some_ devices.
> >
> > I am sorry that my last reply didn't give you more clues. On Arm, only
> > Dom0 can do DMA without an IOMMU. So when we allocate memory for Dom0,
> > we try to allocate it below 4GB, or within the range indicated by
> > dma_bitsize. I think these operations match the Dom0 special address
> > range you describe above. Since we have already allocated memory for
> > DMA, I don't think we need a DMA zone in the page allocator. I am not
> > sure whether that answers your earlier question?
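[The allocation policy described above could be pictured as the predicate below. The function name and the fallback to a 4 GiB limit when dma_bitsize is unset are hypothetical, chosen only to illustrate the idea:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical predicate: is a machine address usable for Dom0 DMA
 * without an IOMMU? Usable if below the dma_bitsize boundary when one
 * is set, otherwise if below 4 GiB.
 */
static bool dom0_dma_reachable(uint64_t maddr, unsigned int dma_bitsize)
{
    uint64_t limit = dma_bitsize ? (UINT64_C(1) << dma_bitsize)
                                 : (UINT64_C(1) << 32);
    return maddr < limit;
}
```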
> 
> I view all of this as flawed, or as a workaround at best. Xen shouldn't
> make assumptions on what Dom0 may need. Instead Dom0 should make
> arrangements such that it can do I/O to/from all devices of interest.
> This may involve arranging for address restricted buffers. And for this
> to be possible, Xen would need to have available some suitable memory.
> I understand this is complicated by the fact that despite being HVM-like,
> due to the lack of an IOMMU in front of certain devices address
> restrictions on Dom0 address space alone (i.e. without any Xen
> involvement) won't help ...
> 

I agree with you that the current implementation is a workaround at
best. Do you have any suggestions for how this patch could address the
above comments? Or should I just modify the commit log to capture some
of the discussion above?

Thanks,
Wei Chen

> Jan


 

