
Re: [Xen-devel] Fwd: [v3 14/15] Update Posted-Interrupts Descriptor during vCPU scheduling



On Fri, 2015-07-10 at 00:07 +0000, Wu, Feng wrote:

> > From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]

> > What I mean is, can you describe when each specific operation
> > needs to happen? Something like "descriptor needs to be updated like
> > this upon migration", "notification should be disabled when vcpu starts
> > running", "notification method should be changed that other way when
> > vcpu is preempted", etc.
> 
> I cannot see the difference; I think the requirements are clearly listed in
> the design doc and the comments of this patch.
> 
The difference is, and is IMO quite a big one, this: do you need to do
something when a vcpu wakes up, perhaps depending on whether or not it is
runnable immediately after that, or when a vcpu enters runstate
RUNSTATE_runnable?

IOW, are you interested in the event, or in the change that such an
event causes, as far as a particular subsystem (in this case
accounting/information reporting) is concerned?

And no, the fact that when a vcpu wakes up, if it's runnable, it enters
the RUNSTATE_runnable runstate is not enough to say that they're the same
thing! Runstates are an abstraction used for accounting and for reporting
information to the higher levels.
So, why not use it? No reason, and in fact it's used a lot! For
instance, xenalyze (and tracing in general) uses it; getdomaininfo()
uses it; XEN_DOMCTL_getvcpuinfo uses it.

However, I can find no single feature (e.g., for hardware enablement,
like yours) within Xen that builds on top of runstates (the only
exception is the credit1 scheduler, which uses
runstate.state_entry_time once... and I think that's quite bad of it,
FWIW).

Theoretically speaking, runstates could well disappear, or change
meaning, or be replaced by something else, and only the accounting and
reporting code (as far as the hypervisor is concerned, of course) would
suffer/need changing.

I think, OTOH, that you should really be interested in making sure you
intercept an event, in this example a wake-up, and adding an
architectural hook in vcpu_wake() is certainly a way of doing that.

In fact, even if runstates ever go away or change, vcpus are always
going to wake up! :-)

Regards,
Dario

> > 
> > This would help a lot, IMO, figuring out the actual functional
> > requirements that need to be satisfied for things to work well. Once
> > that is done, we can go check in the code where is the best place to put
> > each call, hook, or whatever.
> > 
> > 
> > Note that I've already tried to infer the above, by looking at the
> > patches, and that is making me think that it would be possible to
> > implement things in another way. But maybe I'm missing something. So it
> > would be really valuable if you, with all your knowledge of how PI
> > should work, could do it.
> 
> I keep describing how PI works, what the purpose of the two vectors are,
> how special they are from the beginning.
> 
> Thanks,
> Feng
> 
> 
> > 
> > Thanks and Regards,
> > Dario
> > --
> > <<This happens because I choose it to happen!>> (Raistlin Majere)
> > -----------------------------------------------------------------
> > Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> > Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


