
Re: [Xen-devel] xen/arm: Assertion 'timer->status >= TIMER_STATUS_inactive' failed at timer.c:279

On Wed, 2016-04-27 at 14:43 +0100, George Dunlap wrote:
> On 26/04/16 18:49, Dario Faggioli wrote:
> > Let me know, and I'll resubmit the patch properly (together with
> > another bugfix I have in my queue).
> Yeah, assuming the description in your changeset is accurate, this
> seems like the right approach.
Ok, thanks for having a look, I'll submit a proper series.

> The main thing to add here, I think, is that we need to document the
> different circumstances under which the various functions may be
> called -- for instance, credit1's free_pdata() seems to expect that
> spc may == NULL at some point.  Future schedulers need to know the
> circumstances under which this might happen so they can DTRT.
I saw that too (many times). In fact, I'm not sure whether that can
actually happen or not, but I can certainly look into it.

And if by "document the different circumstances under which the
various functions may be called" you mean adding comments to that
effect somewhere, I'm up for that (I just need to figure out where
such comments would best live).

> It might be nice at some point to have the alloc / free / init /
> deinit functions in credit1 ordered in a rational way, so that they
> could be understood by glancing at them rather than having to jump
> around, but that's probably a nice-to-have clean-up for another
> time. :-)
When you say "ordered", do you mean the order in which they appear in
the source file? If so, I agree, but no, I'm not doing that right now
(though I can queue it for when 4.8 opens).

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Xen-devel mailing list


