
Re: [Xen-devel] [RFC XEN PATCH v3 00/39] Add vNVDIMM support to HVM domains



On 10/27/17 11:26 +0800, Chao Peng wrote:
> On Mon, 2017-09-11 at 12:37 +0800, Haozhong Zhang wrote:
> > Overview
> > ==================
> > 
> > (RFC v2 can be found at https://lists.xen.org/archives/html/xen-devel/2017-03/msg02401.html)
> > 
> > Well, this RFC v3 changes and inflates a lot from previous versions.
> > The primary changes are listed below, most of which are to simplify
> > the first implementation and avoid additional inflation.
> > 
> > 1. Drop the support to maintain the frametable and M2P table of PMEM
> >    in RAM. In the future, we may add this support back.
> 
> I don't find any discussion of this in v2, but I think putting those
> Xen data structures in RAM is sometimes useful (e.g. when performance
> is important). It's better not to make a hard restriction on this.

Well, this is to reduce the complexity; as you can see, the current
patch size is already too big. In addition, an NVDIMM can be very
large, e.g. several terabytes or more, which would require a large
amount of RAM to store its frametable and M2P table (~10 MB per 1 GB
of PMEM) and leave less RAM for guest use.
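To put the ~10 MB per 1 GB figure in perspective, here is a rough
back-of-the-envelope sketch. The entry sizes below are assumptions
(a 32-byte frametable entry, i.e. sizeof(struct page_info), and an
8-byte M2P entry per 4 KiB page), not values taken from this patchset:

```python
# Rough estimate of Xen's per-page metadata overhead for PMEM.
# Assumed sizes: 32-byte frametable entry, 8-byte M2P entry per 4 KiB page.

PAGE_SIZE = 4096          # bytes per machine page
FRAMETABLE_ENTRY = 32     # assumed sizeof(struct page_info)
M2P_ENTRY = 8             # one 64-bit MFN->PFN mapping entry

def metadata_overhead(pmem_bytes):
    """RAM (bytes) needed for the frametable + M2P covering pmem_bytes."""
    pages = pmem_bytes // PAGE_SIZE
    return pages * (FRAMETABLE_ENTRY + M2P_ENTRY)

GiB = 1 << 30
TiB = 1 << 40

print(metadata_overhead(1 * GiB) / (1 << 20))  # ~10 MiB of RAM per GiB of PMEM
print(metadata_overhead(4 * TiB) / (1 << 30))  # ~40 GiB of RAM for a 4 TiB NVDIMM
```

Under these assumed sizes, the overhead is about 1% of the PMEM size,
which matches the ~10 MB per 1 GB figure above and shows why a
multi-terabyte NVDIMM makes storing this metadata in RAM costly.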

> 
> > 
> > 2. Hide host NFIT and deny access to host PMEM from Dom0. In other
> >    words, the kernel NVDIMM driver is loaded in Dom 0 and existing
> >    management utilities (e.g. ndctl) do not work in Dom0 anymore. This
> >    is to work around the interference between PMEM accesses from Dom0
> >    and the Xen hypervisor. In the future, we may add a stub driver in
> >    Dom0 which
> >    will hold the PMEM pages being used by Xen hypervisor and/or other
> >    domains.
> > 
> > 3. As there is no NVDIMM driver and management utilities in Dom0 now,
> >    we cannot easily specify an area of host NVDIMM (e.g., by
> >    /dev/pmem0) and manage NVDIMM in Dom0 (e.g., creating labels).
> >    Instead, we have to specify the exact MFNs of host PMEM pages in
> >    xl domain configuration files and the newly added Xen NVDIMM
> >    management utility xen-ndctl.
> > 
> >    If there are indeed some tasks that have to be handled by existing
> >    driver and management utilities, such as recovery from hardware
> >    failures, they have to be accomplished out of Xen environment.
> 
> What kind of recovery can happen, and can the recovery happen at
> runtime? For example, can we recover a portion of NVDIMM assigned to a
> certain VM while other VMs keep using the NVDIMM?

For example, evaluating ACPI _DSM methods (possibly vendor-specific)
for error recovery and/or scrubbing bad blocks, etc.

> 
> > 
> >    After 2. is solved in the future, we would be able to make existing
> >    driver and management utilities work in Dom0 again.
> 
> Is there any reason why we can't do it now? If the existing ndctl
> (with additional patches) can work, then we don't need to introduce
> xen-ndctl anymore? I think that keeps the user interface clearer.

The simple reason is that I want to reduce the number of components
(Xen/kernel/QEMU) touched by the first patchset (whose primary target
is to implement the basic functionality, i.e. mapping host NVDIMM into
a guest as a virtual NVDIMM). As you said, leaving a driver (the
nvdimm driver and/or a stub driver) in Dom0 would make the user
interface clearer. Let's see what I can get in the next version.

Thanks,
Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

