Re: [Xen-devel] [PATCH] evtchn: make EVTCHNOP_reset suitable for kexec
Andrew Cooper <andrew.cooper3@xxxxxxxxxx> writes:
> On 25/07/14 16:48, Vitaly Kuznetsov wrote:
>> It would be nice to allow guests to close all event channels in
>> an ABI-agnostic way on startup. This would allow a guest to perform
>> full cleanup in case of kexec/kdump. EVTCHNOP_reset looks suitable
>> for this purpose. However, it has two issues:
>>
>> - the current implementation unconditionally closes all event channels,
>> including the store/console channels mapped to Dom0. There is no way
>> for a guest to restore these channels after they have been closed.
>>
>> - control blocks for vcpus need cleanup when FIFO ABI is being used.
>>
>> With this change a guest can simply do EVTCHNOP_reset before its
>> init in both the 2-level and FIFO cases. Unfortunately we'll need to
>> guard it with something like a "xen_version >= 4.5" check, as doing
>> it against the old implementation would leave the guest stuck.
>>
>> The issue could also be solved by introducing a new EVTCHNOP
>> operation, but EVTCHNOP_reset appears to have been designed for
>> exactly this kind of reset and it is not currently used.
>>
>> [The idea was suggested by Ian Campbell and Andrew Cooper]
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
>
> This patch is looking very much better than its predecessor.
>
Thanks!
> The xenstore and console event channels are set up by the toolstack for
> the domain, and are not expected to be set up by the domain itself; they
> are fine to exempt here.
>
> However, all other event channels to dom0 are not special in the
> slightest, and should be reset. They can be re-negotiated with the
> backends via the usual xenstore methods.
I agree. Unfortunately I wasn't able to figure out how to distinguish
between the store/console channels and all other interdomain channels
bound to dom0 (I know the toolstack has this information, but how would
we check it in the hypervisor?). Any suggestions are more than welcome!
>> ---
>> xen/common/event_channel.c | 17 +++++++++++++++--
>> xen/common/event_fifo.c | 1 +
>> 2 files changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>> index db952af..46a0ef0 100644
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -944,6 +944,7 @@ static long evtchn_reset(evtchn_reset_t *r)
>> {
>> domid_t dom = r->dom;
>> struct domain *d;
>> + struct evtchn *chn;
>> int i, rc;
>>
>> d = rcu_lock_domain_by_any_id(dom);
>> @@ -954,8 +955,20 @@ static long evtchn_reset(evtchn_reset_t *r)
>> if ( rc )
>> goto out;
>>
>> - for ( i = 0; port_is_valid(d, i); i++ )
>> - (void)__evtchn_close(d, i);
>> + for ( i = 1; port_is_valid(d, i); i++ )
>
> What about channel 0?
I was under the impression that port 0 is a special ECS_RESERVED one (we
set it up in evtchn_init()). We need to either skip it or re-initialize
it after closing it.
> What if there is an invalid port preceding a subsequent valid one?
I *think* that never happens (the original code relies on it; I just
added the port = 0 skip). Please correct me if I'm wrong.
>
> ~Andrew
>
>> + {
>> + /*
>> + * Leave all interdomain connections to Dom0 untouched as we need to
>> + * preserve store/console channels.
>> + */
>> + chn = evtchn_from_port(d, i);
>> + if ( chn->state != ECS_INTERDOMAIN ||
>> + chn->u.interdomain.remote_dom->domain_id != 0 )
>> + (void)__evtchn_close(d, i);
>> + }
>> +
>> + if (d->evtchn_fifo)
>> + evtchn_fifo_destroy(d);
>>
>> rc = 0;
>>
>> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
>> index 1fce3f1..51b4ff6 100644
>> --- a/xen/common/event_fifo.c
>> +++ b/xen/common/event_fifo.c
>> @@ -451,6 +451,7 @@ static void cleanup_event_array(struct domain *d)
>> for ( i = 0; i < EVTCHN_FIFO_MAX_EVENT_ARRAY_PAGES; i++ )
>> unmap_guest_page(d->evtchn_fifo->event_array[i]);
>> xfree(d->evtchn_fifo);
>> + d->evtchn_fifo = NULL;
>> }
>>
>> static void setup_ports(struct domain *d)
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
--
Vitaly