[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [ARM] Native application design and discussion (I hope)



On Fri, 7 Apr 2017, Stefano Stabellini wrote:
> On Fri, 7 Apr 2017, Volodymyr Babchuk wrote:
> > >> Native application is an another domain type. It has own vCPU (only one 
> > >> at this
> > >> moment) Native app is loaded as any other kernel, using ELF loader.
> > >> It looks like another stub-domain such as MiniOS, but there are two big
> > >> differences:
> > >
> > > Could you describe the reason why you are suggesting it? Unless strictly
> > > necessary, I wouldn't go down the vcpu route, because as soon as we
> > > bring a vcpu into the picture, we have a number of problems, including
> > > scheduling, affinity, etc. It is also user-visible (xl vcpu-list) which
> > > I don't think it should be.
> > I used this in my PoC because I didn't want to do extra work. Also this 
> > looks
> > very natural. Domain is actually the same as a process, vcpu is like a 
> > thread.
> > But yes, I already had some issues with scheduler. Manageable, thought.
> > 
> > > I understand that one of the goals is "Modularity", which makes us think
> > > of an ELF loader, such as the one for a new domain. I agree that
> > > modularity is important, but I would solve it as a second step. In first
> > > instance, I would limit the scope to run some code under
> > > /xen/arch/arm/apps or, better, /apps (for example) in a lower privilege
> > > mode. After that is done and working, I am sure we can find a way to
> > > dynamically load more apps at run time.
> > Again, use of existing domain framework was the easiest way. I needed
> > some container to hold app and domain fits perfectly. I need to map pages
> > there, need routines to copy to and from its memory, need p2m code, etc.
> > 
> > But, yes, if we are going to implement this in right way, then maybe we need
> > separate identities like 'app_container' and 'app_thread'. See below.
> > 
> > >
> > > A vcpu is expected to be running simultenously with other vcpus of the
> > > same domain or different domains. The scheduler is expected to choose
> > > when it is supposed to be running. On the other end, an el0 app runs to
> > > handle/emulate a single request from a guest vcpu, which will be paused
> > > until the el0 app finishes. After that, the guest vcpu will resume.
> > Okay, but what should be stored in `current` while el0 application is 
> > running?
> > Remember, that it can issue syscalls, which will be handled in hypervisor.
> > 
> > We can create separates types for native applications. But then we can end
> > having two parallel and mostly identical frameworks. One for domains and
> > another one - for apps. What do you think?
> 
> This is a great topic for the Xen Hackathon.
> 
> This is the most difficult problem that we need to solve as part of this
> work. It is difficult to have the right answer at the beginning, before
> seeing any code. If the app_container/app_thread approach causes too
> much duplication of work, the alternative would be to fix/improve
> stubdoms (minios) until they match what we need. Specifically, these
> would be the requirements:
> 
> 1) Determinism: a stubdom servicing a given guest needs to be scheduled
>    immediately after the guest vcpu traps into Xen. It needs to
>    deterministic. The stubdom vcpu has to be scheduled on the same pcpu.
>    This is probably the most important missing thing at the moment.
> 
> 2) Accounting: memory and cpu time of a stubdom should be accounted
>    agaist the domain it is servicing. Otherwise it's not fair.
> 
> 3) Visibility: stub domains and vcpus should be marked differently from other
>    vcpus as not to confuse the user. Otherwise "xl list" becomes
>    confusing.
> 
> 
> 1) and 2) are particularly important. If we had them, we would not need
> el0 apps. I believe stubdoms would be as fast as el0 apps too.

CC'ing George and Dario. I was speaking with George about this topic,
I'll let him explain his view as scheduler maintainer, but he suggested
to avoid scheduler modifications (all schedulers would need to be
taught to handle this) and extend struct vcpu for el0 apps instead.


> > >> At this moment entry point is hardcoded and you need to update it every 
> > >> time
> > >> you rebuild native application. Also there are no actual parameters 
> > >> passed.
> > >> Also, whole code is a piece of gosa, because it was first time I hacked 
> > >> XEN.
> > >
> > > :-)
> > > I would start by introducing a proper way to pass parameters and return
> > > values.
> > >
> > >> I don't want to repeat benchmark results, because they already was 
> > >> posted in ML.
> > >> You can find them at [3].
> > >>
> > >> I understand that I have missed many things:
> > >>
> > >> 1. How to ship and load native app, because some of them will be needed 
> > >> even
> > >> before dom0 is created.
> > >
> > > I envision something like Linux's insmod, but I suggest postponing this
> > > problem. At the moment, it would be fine to assume that all apps need to
> > > be built statically and cannot be loaded at runtime.
> > Okay. Then we need to hold them in special sections of hypervisor image
> > and also we need some sort of loader in hypervisor.
> > 
> > >> 2. How to distinguish multiple native apps
> > >
> > > Each apps need to specify a range of MMIO/SMC handlers. Xen will invoke
> > > the right one.
> > What about device drivers? Consider power management for example. This is
> > crucial if we want to use XEN in mobile devices. Our (there, in EPAM) idea 
> > is
> > to hold drivers for PM, drivers for coprocessors and so on in native apps.
> > Probably we will need different types of apps: SMC handler, MMIO handler,
> > PM driver, and so on.
> 
> Yes, something like that.
> 
> 
> > >> 3. Concurrency in native apps
> > >
> > > This is an interesting problem: what do we do if two guest vcpus make
> > > simultenous requests that need to be handled by the same app?
> > > Technically, we could run the same app twice on two different pcpus
> > > simultenously. But then, the apps would need to be able to cope with
> > > concurrency (spin_locks, etc.) From Xen point of view, it should be OK
> > > though.
> > Yes. Probably, we can pass id of pcpu to app, so it can have per-cpu storage
> > if it wants to. Plus spin_locks and no blocking syscalls.
> > 
> > >
> > >> 4. How to restart misbehaved apps.
> > >
> > > A related question is the following: do we expect to allocate each app
> > > once at boot or once per guest? Apps need to have some per-domain
> > > context, but it could be passed from Xen to the app on a shared page,
> > > possibly reducing the need for allocating the same app once per guest?
> > SMC handler needs to be cross-domain for example. Emulators can be
> > tied to guests, I think. Device drivers should be cross-domain also.
> > 
> > >
> > >> But at this moment I want to discuss basic approach. If there are will 
> > >> be no
> > >> objections against basic concept, then we can develop details.
> > >>
> > >> [1] https://github.com/lorc/xen_app_stub - native app
> > >> [2] https://github.com/lorc/xen/tree/el0_app - my branch with PoC
> > >> [3] http://marc.info/?l=xen-devel&m=149088856116797&w=2 - benchmark 
> > >> results
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.