
Re: [Xen-devel] [PATCH 8/9] vm_event: Add vm_event_ng interface



On Tue, 2019-06-04 at 15:43 +0100, Andrew Cooper wrote:
> On 30/05/2019 15:18, Petre Pircalabu wrote:
> > 
> > Signed-off-by: Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>
> 
> There are a number of concerns here.
> 
> First and foremost, why is a new domctl being added?  Surely this
> should
> just be a "type of ring access" parameter to event_enable? 
> Everything
> else in the vm_event set of APIs should be unchanged as a result of
> the
> interface differences.
> 
> Or am I missing something?
> 
I've used separate domctls in order to completely decouple the new
interface from the old one.
One thing I don't really like about the old vm_event interface is that
the "create" and "start" operations are handled by the same call
(XEN_VM_EVENT_ENABLE).
These calls should be separated in the new interface because the client
needs to perform its own initialization (mapping the resource and
binding the event channel) between "create" and "start", as in the
sketch below.
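To illustrate, the monitor-side sequence I have in mind would look
roughly like this. Note that xc_vm_event_ng_create()/xc_vm_event_ng_start()
are hypothetical wrappers for the new domctls, and XENMEM_resource_vm_event
stands for the resource type this series would add; the mapping itself
goes through the stable xenforeignmemory interface:

#include <errno.h>
#include <sys/mman.h>
#include <xenctrl.h>
#include <xenevtchn.h>
#include <xenforeignmemory.h>

/* Hypothetical wrappers for the new domctls - not an existing API. */
int xc_vm_event_ng_create(xc_interface *xch, uint32_t domid,
                          unsigned int type, uint32_t *port);
int xc_vm_event_ng_start(xc_interface *xch, uint32_t domid,
                         unsigned int type);

static int monitor_setup(xc_interface *xch, xenforeignmemory_handle *fmem,
                         xenevtchn_handle *xce, uint32_t domid,
                         unsigned long nr_frames)
{
    void *slots;    /* will point at the vm_event_slot array */
    xenforeignmemory_resource_handle *res;
    uint32_t port;
    int rc;

    /* 1. "create": Xen allocates the slot buffer and the event channels. */
    rc = xc_vm_event_ng_create(xch, domid, XEN_VM_EVENT_TYPE_MONITOR, &port);
    if ( rc )
        return rc;

    /* 2. Client-side initialization, between "create" and "start":
     *    map the slot buffer and bind the event channel. */
    res = xenforeignmemory_map_resource(fmem, domid, XENMEM_resource_vm_event,
                                        XEN_VM_EVENT_TYPE_MONITOR, 0,
                                        nr_frames, &slots,
                                        PROT_READ | PROT_WRITE, 0);
    if ( !res )
        return -errno;

    rc = xenevtchn_bind_interdomain(xce, domid, port);
    if ( rc < 0 )
        return rc;

    /* 3. "start": only now does Xen begin delivering events. */
    return xc_vm_event_ng_start(xch, domid, XEN_VM_EVENT_TYPE_MONITOR);
}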

> > diff --git a/xen/common/vm_event_ng.c b/xen/common/vm_event_ng.c
> > new file mode 100644
> > index 0000000..17ae33c
> > --- /dev/null
> > +++ b/xen/common/vm_event_ng.c
> > <snip>
> > 
> > +static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl)
> > +{
> > +    int i, rc = -ENOMEM;
> > +
> > +    for ( i = 0; i < impl->nr_frames; i++ )
> > +    {
> > +        struct page_info *page = alloc_domheap_page(impl->ved.d, 0);
> 
> This creates pages which are reference-able (in principle) by the
> guest,
> and are bounded by d->max_pages.
> 
> Both of these are properties of the existing interface which we'd
> prefer
> to remove.
The allocation mechanism is similar to the one used by ioreq (the main
difference being the number of pages); see the pattern below.
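For reference, the shared pattern is essentially the following (error
handling elided; this is an illustration, not the exact patch code):

    /* Allocate a domheap page accounted to the target domain d, then
     * take a writable type reference so it can safely be mapped
     * read/write.  The accounting against d is precisely what makes
     * the page count towards d->max_pages and be, in principle,
     * referenceable by the guest - the two properties raised above. */
    struct page_info *page = alloc_domheap_page(d, 0);

    if ( page && get_page_and_type(page, d, PGT_writable_page) )
        mfn = page_to_mfn(page);    /* now safe to vmap()/foreign-map */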
> 
> > +        if ( !page )
> > +            goto err;
> > +
> > +        if ( !get_page_and_type(page, impl->ved.d, PGT_writable_page) )
> > +        {
> > +            rc = -ENODATA;
> > +            goto err;
> > +        }
> > +
> > +        impl->mfn[i] = page_to_mfn(page);
> > +    }
> > +
> > +    impl->slots = (struct vm_event_slot *)vmap(impl->mfn, impl->nr_frames);
> 
> You appear to have opencoded vmalloc() here.  Is there any reason not
> to
> use that?
> 

The problem with vmalloc() is that it allocates pages which are not
assigned to any domain, so remapping them later from the monitor domain
fails the ownership check in mm.c, e.g.:
...
(XEN) mm.c:1015:d0v2 pg_owner d1 l1e_owner d0, but real_pg_owner d-1
(XEN) mm.c:1091:d0v7 Error getting mfn 5fbf53 (pfn ffffffffffffffff) from L1 entry 80000005fbf53227 for l1e_owner d0, pg_owner d1
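
The distinction boils down to the owner passed to the allocator;
roughly (based on xen/common/vmap.c, not the patch code):

    /* vmalloc() backs its mappings with anonymous pages: */
    struct page_info *pg = alloc_domheap_page(NULL, 0);
    /* page_get_owner(pg) == NULL, so a later foreign mapping by the
     * monitor domain fails the ownership check ("real_pg_owner d-1"
     * in the log above). */

    /* The patch instead allocates pages owned by the target domain: */
    struct page_info *page = alloc_domheap_page(d, 0);
    /* page_get_owner(page) == d, so the monitor domain can take a
     * reference and map them. */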

> > +err:
> > +    spin_unlock(&impl->ved.lock);
> > +    XFREE(impl);
> 
> You don't free the event channels on error.
> 
> Please make the destructor idempotent and call it from here.
> 
> > 
> > +#ifdef CONFIG_HAS_MEM_PAGING
> > +    case XEN_VM_EVENT_TYPE_PAGING:
> > +#endif
> > +
> > +#ifdef CONFIG_HAS_MEM_SHARING
> > +    case XEN_VM_EVENT_TYPE_SHARING:
> > +#endif
> 
> These are unnecessary, as they don't deviate from the default.
> 
> ~Andrew
> 
> > 
I will correct these in the next iteration of the patchset; for the
error path, I'm thinking of something along the lines of the sketch
below.
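
A hypothetical sketch of the idempotent destructor, with guards so it
is safe to call from both the normal teardown path and the
vm_event_channels_alloc_buffer() error path (the INVALID_MFN
bookkeeping is an assumption, not the actual patch code):

static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
{
    int i;

    /* Safe to call twice: vunmap only once, then forget the mapping. */
    if ( impl->slots )
    {
        vunmap(impl->slots);
        impl->slots = NULL;
    }

    for ( i = 0; i < impl->nr_frames; i++ )
    {
        /* Skip slots the allocator never filled (or already freed). */
        if ( mfn_eq(impl->mfn[i], INVALID_MFN) )
            continue;

        put_page_and_type(mfn_to_page(impl->mfn[i]));
        impl->mfn[i] = INVALID_MFN;
    }
}

The event channel teardown would get the same guarded treatment so the
whole destructor stays idempotent.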

Many thanks for your support,
Petre
