
Re: [Xen-devel] [PATCH v3 04/25] xen/arm: document dom0less



On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi Stefano,
> 
> On 01/08/18 00:27, Stefano Stabellini wrote:
> > Add a new document to provide information on how to use dom0less related
> > features and their current limitations.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > 
> > ---
> > Changes in v3:
> > - add patch
> > ---
> >   docs/misc/arm/dom0less | 47
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 47 insertions(+)
> >   create mode 100644 docs/misc/arm/dom0less
> > 
> > diff --git a/docs/misc/arm/dom0less b/docs/misc/arm/dom0less
> 
> This should be suffixed with .txt. You also want to add a line in docs/INDEX
> describing the file.

I'll make these changes and all the others suggested in this email.


> > new file mode 100644
> > index 0000000..ae5a8b1
> > --- /dev/null
> > +++ b/docs/misc/arm/dom0less
> > @@ -0,0 +1,47 @@
> > +Dom0less
> > +========
> > +
> > +"Dom0less" is a set of Xen features that enable the deployment of a Xen
> > +system without Dom0.
> 
> I think this sentence is misleading. You still deploy Xen with Dom0.
> 
> Also, we have been trying to remove the wording Dom0 anywhere in the code.
> Instead, we are now using "Hardware Domain". I would rather avoid using Dom0
> in the documentation as it could be misleading: you will always have a domain
> with ID 0 (it may not be what you call Dom0 here).
> 
> > Each feature can be used independently from the
> > +others, unless otherwise stated.
> > +
> > +Booting Multiple Domains from Device Tree
> > +=========================================
> > +
> > +This feature enables Xen to create a set of DomUs alongside Dom0 at boot
> > +time. Information about the DomUs to be created by Xen is passed to the
> > +hypervisor via Device Tree. Specifically, the existing Device Tree based
> > +Multiboot specification has been extended to allow for multiple domains
> > +to be passed to Xen. See docs/misc/arm/device-tree/booting.txt for more
> > +information about the Multiboot specification and how to use it.
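For illustration, a domU node under the extended Multiboot binding might look
like the sketch below. The node names, addresses, sizes, and bootargs are
made-up examples, not taken from the patch; see
docs/misc/arm/device-tree/booting.txt for the authoritative binding:

    chosen {
        domU1 {
            compatible = "xen,domain";
            #address-cells = <1>;
            #size-cells = <1>;
            memory = <0 0x20000>;    /* example size, not from the patch */
            cpus = <1>;              /* number of vCPUs */

            /* Kernel image loaded by the bootloader at an example address */
            module@4a000000 {
                compatible = "multiboot,kernel", "multiboot,module";
                reg = <0x4a000000 0x1000000>;
                bootargs = "console=hvc0";
            };
        };
    };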
> > +
> > +Instead of waiting for Dom0 to be fully booted and the Xen tools to
> > +become available, domains created by Xen this way are started in
> > +parallel to Dom0. Hence, their boot time is typically much shorter.
> > +
> > +Domains started by Xen at boot time currently have the following
> > +limitations:
> > +
> > +- they cannot be properly shutdown or rebooted using xl
> > +If one of them crashes, the whole platform should be rebooted.
> > +
> > +- some xl operations might not work as expected
> > +xl is meant to be used with domains that have been created by it. Using
> > +xl with domains started by Xen at boot might not work as expected.
> > +
> > +- the GIC version is the native version
> > +In the absence of other information, the GIC version exposed to the domains
> > +started by Xen at boot is the same as the native GIC version.
> > +
> > +- no PV drivers
> > +There is no support for PV devices at the moment. All devices need to be
> > +statically assigned to guests.
> > +
> > +- vCPU pinning
> > +Pinning vCPUs of domains started by Xen at boot can be done from dom0,
> > +using `xl vcpu-pin' as usual. It is not currently possible to configure
> > +vCPU pinning for domains other than dom0 without dom0. However, the NULL
> > +scheduler (currently unsupported) can be selected by passing
> 
> I would rather not mention that the NULL scheduler is unsupported here. That's
> another place to update the doc when it gets supported, and it may be missed.
> 
> > +`sched=null' to the Xen command line. The NULL scheduler automatically
> > +assignes and pins vCPUs to pCPUs, but the vCPU-pCPU assignments cannot
> 
> s/assignes/assigns/
> 
> > +be configured.
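As a concrete sketch of the two options described above (the domain name and
CPU numbers below are placeholders, not values from the patch):

    # Pin vCPU 0 of a boot-time domain to pCPU 2, run from the control domain:
    xl vcpu-pin domU1 0 2

    # Or select the NULL scheduler on the Xen command line, e.g. in the
    # bootloader configuration entry that loads the hypervisor:
    xen ... sched=null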
> 
> Cheers,
> 
> -- 
> Julien Grall
> 
