Re: [Xen-devel] [RFC PATCH v1] Replace tasklets with per-cpu implementation.
>>> On 09.09.14 at 16:37, <konrad.wilk@xxxxxxxxxx> wrote:
> On Tue, Sep 09, 2014 at 10:01:09AM +0100, Jan Beulich wrote:
>> >>> On 08.09.14 at 21:01, <konrad.wilk@xxxxxxxxxx> wrote:
>> > +static int cpu_callback(
>> > +    struct notifier_block *nfb, unsigned long action, void *hcpu)
>> > +{
>> > +    unsigned int cpu = (unsigned long)hcpu;
>> > +
>> > +    switch ( action )
>> > +    {
>> > +    case CPU_UP_PREPARE:
>> > +        INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
>> > +        break;
>> > +    case CPU_UP_CANCELED:
>> > +    case CPU_DEAD:
>> > +        migrate_tasklets_from_cpu(cpu, &per_cpu(dpci_list, cpu));
>>
>> Can CPUs go down while softirqs are pending on them?
>
> No. By the time we get here, the CPU is no longer "hearing" them.
So what's that code (also still present in the newer patch you
had attached here) for then?
> +void dpci_kill(struct domain *d)
> +{
> +    while ( test_and_set_bit(STATE_SCHED, &d->state) )
> +    {
> +        do {
> +            process_pending_softirqs();
> +        } while ( test_bit(STATE_SCHED, &d->state) );
> +    }
> +
> +    while ( test_bit(STATE_RUN, &d->state) )
> +    {
> +        cpu_relax();
> +    }
> +    clear_bit(STATE_SCHED, &d->state);
Does all this perhaps need preemption handling? The caller
(pci_release_devices()) is direct descendant from
domain_relinquish_resources(), so even bubbling -EAGAIN or
-ERESTART back up instead of spinning would seem like an
option.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel