
Re: [Xen-devel] [PATCH 8/9] vm_event: Add vm_event_ng interface


  • To: Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 4 Jun 2019 15:43:16 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 04 Jun 2019 14:43:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 30/05/2019 15:18, Petre Pircalabu wrote:
> In high throughput introspection scenarios where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests thus blocking
> other vcpus which will have to wait for a slot to become available.
>
> This patch adds support for a different mechanism to handle synchronous
> vm_event requests / responses. As each synchronous request pauses the
> vcpu until the corresponding response is handled, it can be stored in
> a slotted memory buffer (one per vcpu) shared between the hypervisor and
> the controlling domain.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>

There are a number of concerns here.

First and foremost, why is a new domctl being added?  Surely this should
just be a "type of ring access" parameter to event_enable?  Everything
else in the vm_event set of APIs should be unchanged as a result of the
interface differences.

Or am I missing something?
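As a rough illustration (the flags field and flag name below are made up
for the sake of argument, not existing ABI), the selection could look
something like:

    /* Hypothetical sketch, not existing ABI: pick the per-vcpu slotted
     * buffer with a flag on the existing enable op, rather than via a
     * separate domctl.  The rest of the vm_event API stays as it is. */
    struct xen_domctl_vm_event_op {
        uint32_t op;       /* XEN_VM_EVENT_ENABLE / DISABLE / ...  */
        uint32_t mode;     /* monitor / paging / sharing, as today */
        uint32_t flags;    /* new: e.g. XEN_VM_EVENT_FLAG_SLOTTED_BUF */
        union {
            struct {
                uint32_t port;  /* OUT: notification event channel */
            } enable;
            uint32_t version;
        } u;
    };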

> diff --git a/xen/common/vm_event_ng.c b/xen/common/vm_event_ng.c
> new file mode 100644
> index 0000000..17ae33c
> --- /dev/null
> +++ b/xen/common/vm_event_ng.c
> <snip>
>
> +static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl)
> +{
> +    int i, rc = -ENOMEM;
> +
> +    for ( i = 0; i < impl->nr_frames; i++ )
> +    {
> +        struct page_info *page = alloc_domheap_page(impl->ved.d, 0);

This creates pages which the guest can (in principle) take references to,
and which count against d->max_pages.

Both of these are properties of the existing interface which we'd prefer
to remove.

> +        if ( !page )
> +            goto err;
> +
> +        if ( !get_page_and_type(page, impl->ved.d, PGT_writable_page) )
> +        {
> +            rc = -ENODATA;
> +            goto err;
> +        }
> +
> +        impl->mfn[i] = page_to_mfn(page);
> +    }
> +
> +    impl->slots = (struct vm_event_slot *)vmap(impl->mfn, impl->nr_frames);

You appear to have opencoded vmalloc() here.  Is there any reason not to
use that?
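
For illustration, assuming the slots only ever need a regular Xen virtual
mapping (with the individual MFNs recovered from it afterwards if the
toolstack-mapping path still wants them), the allocation loop plus vmap()
collapses to something like:

    /* Sketch only: vzalloc() allocates, maps and zeroes the pages in one
     * call, and vfree() undoes the lot on teardown. */
    impl->slots = vzalloc(impl->nr_frames * PAGE_SIZE);
    if ( !impl->slots )
        return -ENOMEM;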

> +    if ( !impl->slots )
> +        goto err;
> +
> +    for ( i = 0; i < impl->nr_frames; i++ )
> +        clear_page((void*)impl->slots + i * PAGE_SIZE);
> +
> +    return 0;
> +
> +err:
> +    while ( --i >= 0 )
> +    {
> +        struct page_info *page = mfn_to_page(impl->mfn[i]);
> +
> +        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
> +            put_page(page);
> +        put_page_and_type(page);
> +    }
> +
> +    return rc;
> +}
> +
> +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
> +{
> +    int i;
> +
> +    ASSERT(impl);
> +
> +    if ( !impl->slots )
> +        return;
> +
> +    vunmap(impl->slots);
> +
> +    for ( i = 0; i < impl->nr_frames; i++ )
> +    {
> +        struct page_info *page = mfn_to_page(impl->mfn[i]);
> +
> +        ASSERT(page);

mfn_to_page() is going to explode before this ASSERT() does.
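
If any sanity check is wanted here at all, it needs to be on the MFN
itself, before the frame table entry gets dereferenced, e.g. (sketch):

    /* Sketch: check the MFN before mfn_to_page() walks the frame table;
     * otherwise simply drop the ASSERT(). */
    ASSERT(mfn_valid(impl->mfn[i]));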

> +        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
> +            put_page(page);
> +        put_page_and_type(page);
> +    }
> +}
> +
> +static int vm_event_channels_create(
> +    struct domain *d,
> +    struct xen_domctl_vm_event_ng_op *vec,
> +    struct vm_event_domain **_ved,
> +    int pause_flag,
> +    xen_event_channel_notification_t notification_fn)
> +{
> +    int rc, i;
> +    unsigned int nr_frames = PFN_UP(d->max_vcpus * sizeof(struct vm_event_slot));
> +    struct vm_event_channels_domain *impl;
> +
> +    if ( *_ved )
> +        return -EBUSY;
> +
> +    impl = _xzalloc(sizeof(struct vm_event_channels_domain) +
> +                           nr_frames * sizeof(mfn_t),
> +                    __alignof__(struct vm_event_channels_domain));
> +    if ( unlikely(!impl) )
> +        return -ENOMEM;
> +
> +    spin_lock_init(&impl->ved.lock);
> +    spin_lock(&impl->ved.lock);
> +
> +    impl->nr_frames = nr_frames;
> +    impl->ved.d = d;
> +    impl->ved.ops = &vm_event_channels_ops;
> +
> +    rc = vm_event_init_domain(d);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    rc = vm_event_channels_alloc_buffer(impl);
> +    if ( rc )
> +        goto err;
> +
> +    for ( i = 0; i < d->max_vcpus; i++ )
> +    {
> +        rc = alloc_unbound_xen_event_channel(d, i, current->domain->domain_id,
> +                                             notification_fn);
> +        if ( rc < 0 )
> +            goto err;
> +
> +        impl->slots[i].port = rc;
> +        impl->slots[i].state = STATE_VM_EVENT_SLOT_IDLE;
> +    }
> +
> +    impl->enabled = false;
> +
> +    spin_unlock(&impl->ved.lock);
> +    *_ved = &impl->ved;
> +    return 0;
> +
> +err:
> +    spin_unlock(&impl->ved.lock);
> +    XFREE(impl);

You don't free the event channels on error.

Please make the destructor idempotent and call it from here.
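
i.e. something along these lines (sketch only, assuming
vm_event_channels_destroy() is made to cope with a half-constructed
object: unbound ports, no buffer mapped yet):

    err:
        spin_unlock(&impl->ved.lock);
        /* Sketch: reuse the now-idempotent destructor for unwinding, so
         * any already-bound event channels and the buffer are released;
         * it frees impl itself as well. */
        {
            struct vm_event_domain *ved = &impl->ved;

            vm_event_channels_destroy(&ved);
        }
        return rc;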

> +    return rc;
> +}
> +
> <snip>
> +int vm_event_ng_domctl(struct domain *d, struct xen_domctl_vm_event_ng_op *vec,
> +                       XEN_GUEST_HANDLE_PARAM(void) u_domctl)
> +{
> +    int rc;
> +
> +    if ( vec->op == XEN_VM_EVENT_NG_GET_VERSION )
> +    {
> +        vec->u.version = VM_EVENT_INTERFACE_VERSION;
> +        return 0;
> +    }
> +
> +    if ( unlikely(d == NULL) )
> +        return -ESRCH;
> +
> +    rc = xsm_vm_event_control(XSM_PRIV, d, vec->type, vec->op);
> +    if ( rc )
> +        return rc;
> +
> +    if ( unlikely(d == current->domain) ) /* no domain_pause() */
> +    {
> +        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
> +        return -EINVAL;
> +    }
> +
> +    if ( unlikely(d->is_dying) )
> +    {
> +        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain 
> %u\n",
> +                 d->domain_id);
> +        return 0;
> +    }
> +
> +    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
> +    {
> +        gdprintk(XENLOG_INFO,
> +                 "Memory event op on a domain (%u) with no vcpus\n",
> +                 d->domain_id);
> +        return -EINVAL;
> +    }
> +
> +    switch ( vec->type )
> +    {
> +    case XEN_VM_EVENT_TYPE_MONITOR:
> +    {
> +        rc = -EINVAL;
> +
> +        switch ( vec-> op)
> +        {
> +        case XEN_VM_EVENT_NG_CREATE:
> +            /* domain_pause() not required here, see XSA-99 */
> +            rc = arch_monitor_init_domain(d);
> +            if ( rc )
> +                break;
> +            rc = vm_event_channels_create(d, vec, &d->vm_event_monitor,
> +                                     _VPF_mem_access, monitor_notification);
> +            break;
> +
> +        case XEN_VM_EVENT_NG_DESTROY:
> +            if ( !vm_event_check(d->vm_event_monitor) )
> +                break;
> +            domain_pause(d);
> +            rc = vm_event_channels_destroy(&d->vm_event_monitor);
> +            arch_monitor_cleanup_domain(d);
> +            domain_unpause(d);
> +            break;
> +
> +        case XEN_VM_EVENT_NG_SET_STATE:
> +            if ( !vm_event_check(d->vm_event_monitor) )
> +                break;
> +            domain_pause(d);
> +            to_channels(d->vm_event_monitor)->enabled = !!vec->u.enabled;
> +            domain_unpause(d);
> +            rc = 0;
> +            break;
> +
> +        default:
> +            rc = -ENOSYS;
> +        }
> +        break;
> +    }
> +
> +#ifdef CONFIG_HAS_MEM_PAGING
> +    case XEN_VM_EVENT_TYPE_PAGING:
> +#endif
> +
> +#ifdef CONFIG_HAS_MEM_SHARING
> +    case XEN_VM_EVENT_TYPE_SHARING:
> +#endif

These are unnecessary, as they don't deviate from the default.

~Andrew

> +
> +    default:
> +        rc = -ENOSYS;
> +    }
> +
> +    return rc;
> +}
>



 

