
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



> > 2.2 vNVDIMM Implementation in KVM/QEMU
> > 
> >  (1) Address Mapping
> > 
> >   As described before, the host Linux NVDIMM driver provides a block
> >   device interface (/dev/pmem0 at the bottom) for a pmem NVDIMM
> >   region. QEMU can then mmap(2) that device into its virtual address
> >   space (buf). QEMU is responsible for finding a guest physical
> >   address range large enough to hold /dev/pmem0. Then QEMU passes
> >   the virtual address of the mmapped buf to the KVM API
> >   KVM_SET_USER_MEMORY_REGION, which maps, via EPT, the host physical
> >   address range of buf to the guest physical address range where
> >   the virtual pmem device will appear.
> > 
> >   In this way, all guest reads/writes on the virtual pmem device are
> >   applied directly to the host device.
> > 
> >   Moreover, the above implementation also allows a virtual pmem
> >   device to be backed by an mmapped regular file or by ordinary RAM.
> 
> What's the point of backing pmem with ordinary RAM? I can buy into
> the value of the file-backed option, which, although slower, does
> sustain the persistence attribute. However, with the RAM-backed
> method there is no persistence, so it violates the guest's expectation.

Containers - like the Intel Clear Containers? You can use this work
to stitch an exploded initramfs on a tmpfs right in the guest.
And you could do that for multiple guests.

Granted, this has nothing to do with pmem, but this work would allow
one to set up containers this way.
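
For anyone following along, here is a minimal, hypothetical sketch (in
the spirit of the quoted description, not actual QEMU code) of the
mmap + KVM_SET_USER_MEMORY_REGION flow. The vm_fd argument, the guest
physical base address (4 GiB) and the memory slot number are made-up
example values, not anything mandated by QEMU or KVM.

/*
 * Sketch: map /dev/pmem0 into the VMM's address space, then ask KVM to
 * expose that range to the guest at a chosen guest physical address.
 * KVM builds the EPT entries for the region on demand.
 */
#include <fcntl.h>
#include <linux/fs.h>       /* BLKGETSIZE64 */
#include <linux/kvm.h>      /* KVM_SET_USER_MEMORY_REGION */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int map_pmem_into_guest(int vm_fd)      /* fd from KVM_CREATE_VM */
{
    int pmem_fd = open("/dev/pmem0", O_RDWR);
    if (pmem_fd < 0) {
        perror("open /dev/pmem0");
        return -1;
    }

    /* Block devices report st_size == 0, so query the size explicitly. */
    uint64_t size;
    if (ioctl(pmem_fd, BLKGETSIZE64, &size) < 0) {
        perror("BLKGETSIZE64");
        close(pmem_fd);
        return -1;
    }

    /* Map the whole pmem region into the VMM's virtual address space ("buf"). */
    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, pmem_fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap /dev/pmem0");
        close(pmem_fd);
        return -1;
    }

    /*
     * Back a guest physical range with this host virtual range.
     * 0x100000000 (4 GiB) and slot 1 are arbitrary example values.
     */
    struct kvm_userspace_memory_region region = {
        .slot            = 1,
        .flags           = 0,
        .guest_phys_addr = 0x100000000ULL,
        .memory_size     = size,
        .userspace_addr  = (uint64_t)(uintptr_t)buf,
    };

    if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
        perror("KVM_SET_USER_MEMORY_REGION");
        munmap(buf, size);
        close(pmem_fd);
        return -1;
    }

    /* Guest accesses to [4 GiB, 4 GiB + size) now land on /dev/pmem0. */
    return 0;
}

Note that whether buf comes from /dev/pmem0, a regular file, or
anonymous RAM, the KVM side looks the same, which is why the
file-backed and RAM-backed variants fall out of the design for free.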
