
Re: [Xen-devel] [PATCH 1/2] enable event channel wake-up for mem_event interfaces



On Thu, Oct 06, Tim Deegan wrote:

> At 17:24 -0400 on 28 Sep (1317230698), Adin Scannell wrote:
> > -void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
> > +static inline int mem_event_ring_free(struct domain *d, struct mem_event_domain *med)
> > +{
> > +    int free_requests;
> > +
> > +    free_requests = RING_FREE_REQUESTS(&med->front_ring);
> > +    if ( unlikely(free_requests < d->max_vcpus) )
> > +    {
> > +        /* This may happen. */
> > +        gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
> > +                               d->domain_id, free_requests);
> > +        WARN_ON(1);
> 
> If this is something that might happen on production systems (and is
> basically benign except for the performance), we shouldn't print a full
> WARN().  The printk is more than enough.
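
Concretely, Tim's suggestion would reduce the helper to something like
the sketch below. It is based on the quoted hunk; that the function
returns the free count is an assumption, since the quote is truncated
before the end of the function:

    /* Sketch of the printk-only variant suggested above.  Returning the
     * free count is an assumption -- the quoted hunk ends before the
     * function's return statement. */
    static inline int mem_event_ring_free(struct domain *d,
                                          struct mem_event_domain *med)
    {
        int free_requests = RING_FREE_REQUESTS(&med->front_ring);

        if ( unlikely(free_requests < d->max_vcpus) )
            /* May happen in production; benign except for performance. */
            gdprintk(XENLOG_INFO,
                     "mem_event request slots for domain %d: %d\n",
                     d->domain_id, free_requests);

        return free_requests;
    }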

While I haven't reviewed the whole patch (sorry for that), one thing that
will break is p2m_mem_paging_populate() called from dom0.

If the ring is full, the gfn may already have been moved from the
paging-out state to the paging-in state. But because the ring was full,
no request was sent to xenpaging, which means the gfn remains in
p2m_ram_paging_in_start until the guest itself eventually tries to
access it. Dom0 will call p2m_mem_paging_populate() again and again (I
think), but no attempt will be made to send a new request once the ring
has free slots again, because the gfn is already on the page-in path and
the calling vcpu does not belong to the guest.
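
To illustrate, here is a much-simplified sketch of that path; it
paraphrases the logic rather than quoting the actual Xen code, and
get_gfn_type() / set_gfn_type() are hypothetical stand-ins for the real
p2m accessors:

    /* Simplified sketch, not the real p2m_mem_paging_populate(). */
    void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
    {
        mem_event_request_t req = { .gfn = gfn };
        p2m_type_t p2mt = get_gfn_type(d, gfn);   /* hypothetical accessor */

        if ( p2mt == p2m_ram_paged )
        {
            /* First attempt: move the gfn onto the page-in path. */
            set_gfn_type(d, gfn, p2m_ram_paging_in_start);  /* hypothetical */
        }
        else if ( p2mt != p2m_ram_paging_out )
        {
            /* Already on the page-in path.  A dom0 caller has no guest
             * vcpu to pause, so repeated calls simply return here and
             * never queue a new request. */
            return;
        }

        /* If the ring is full at this point, the request is dropped even
         * though the p2m state above was already advanced: the gfn stays
         * in p2m_ram_paging_in_start, and the early return prevents any
         * retry once slots become free again. */
        mem_event_put_request(d, &d->mem_event, &req);  /* field name illustrative */
    }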

I have some wild ideas about how to handle this situation, but the patch
as it stands will break page-in attempts from xenpaging itself.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

