
Re: [Xen-devel] [PATCH v2 3/6] ioreq-server: create basic ioreq server abstraction.



>>> On 04.03.14 at 12:40, Paul Durrant <paul.durrant@xxxxxxxxxx> wrote:
>  bool_t hvm_io_pending(struct vcpu *v)
>  {
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;

Hardly worth having "d" as a separate variable here, considering its
only use is the one above. Same elsewhere in the patch.
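
I.e. something along these lines (just a sketch based on the hunk above):

bool_t hvm_io_pending(struct vcpu *v)
{
    /* No separate "d": its only use was this one dereference. */
    struct hvm_ioreq_server *s =
        v->domain->arch.hvm_domain.ioreq_server;
    ...
}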

> +static int hvm_init_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s;
> +    int i;

"unsigned int" please.

> +static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> +{
> +    struct domain *d = s->domain;
> +
> +    if ( s->ioreq.va != NULL )
> +    {
> +        shared_iopage_t *p = s->ioreq.va;
> +        struct vcpu *v;

Please be consistent - either all variables needed only inside the
if() body should get declared inside that body, or all of them at
the top of the function.
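
I.e. pick one of the two forms, e.g. everything at the top (sketch only; I'm
guessing at the loop body from context - vcpu_ioreq[] and vp_eport are the
fields from the public ioreq.h interface):

static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
{
    struct domain *d = s->domain;
    shared_iopage_t *p;
    struct vcpu *v;

    if ( s->ioreq.va != NULL )
    {
        p = s->ioreq.va;

        /* Publish each vCPU's event channel in the shared ioreq page. */
        for_each_vcpu ( d, v )
            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
    }
}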

> +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    if ( v->vcpu_id == 0 )
> +    {
> +        if ( s->buf_ioreq_evtchn >= 0 )

Please fold these if()s together.
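
I.e., assuming the outer if() has no other body:

    if ( v->vcpu_id == 0 && s->buf_ioreq_evtchn >= 0 )
    {
        /* ... tear down the buffered ioreq event channel ... */
    }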

> +static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> +{
> +    struct domain *d = s->domain;
> +    struct vcpu *v;
> +    int rc = 0;
> +
> +    domain_pause(d);
> +
> +    if ( d->vcpu[0] )
> +    {
> +        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +
> +    s->domid = domid;
> +
> +done:

In an earlier function you set rc to zero right before a similar final
label; here you don't. Please be consistent again. And while I
personally would favor avoiding pointless assignments, I wouldn't
want the code to appear to depend on these functions only ever
returning non-positive values. I.e. I'd prefer the respective error
checks here and elsewhere to say "if ( rc )" instead of "if ( rc < 0 )".
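
I.e. for the checks above just:

    rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
    if ( rc )
        goto done;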

> @@ -4379,6 +4493,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          {
>              switch ( a.index )
>              {
> +            case HVM_PARAM_BUFIOREQ_EVTCHN: {
> +                struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> +                a.value = s->buf_ioreq_evtchn;
> +                break;
> +            }

Why? If d->arch.hvm_domain.params[a.index] can get out of sync,
perhaps it would be better to get it synchronized again.
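
I.e. wherever the buffered ioreq event channel gets (re)bound, something
like (sketch only - exact placement is your call):

    d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
        s->buf_ioreq_evtchn;

which would let the plain d->arch.hvm_domain.params[a.index] read keep
working without this special case.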

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -41,10 +41,17 @@ struct hvm_ioreq_page {
>      void *va;
>  };
>  
> -struct hvm_domain {
> +struct hvm_ioreq_server {
> +    struct domain          *domain;
> +    domid_t                domid;

These two fields are too generic to go in without any comment: it's
unclear, without looking at the rest of the patch/code, which one
refers to the subject domain and which one represents the server
(emulating) domain.
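
E.g. (wording of the comments is just a suggestion, assuming I've read the
patch right as to which is which):

struct hvm_ioreq_server {
    /* Domain whose I/O requests this server handles. */
    struct domain          *domain;
    /* Domain id of the emulating (server) domain. */
    domid_t                domid;
    ...
};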

>      struct hvm_ioreq_page  ioreq;
> +    int                    ioreq_evtchn[MAX_HVM_VCPUS];

Ugly. Would be much better if this was stored in each struct vcpu.
And considering that we likely won't stay at 128 vCPU-s as the
upper bound indefinitely (we already don't strictly need to with
x2APIC emulation support in place), this array could eventually
grow quite large, to the point of making the allocation exceed the
1 page boundary we (informally) have in place for all runtime
allocations.

And _if_ this needs to remain an array, it should be the last item
in the structure, as to not badly affect code size for accesses to
subsequent members.

And finally, while I realize current code is not consistent in that
regard, please try using evtchn_port_t for event channels at
least in new code additions.
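
I.e. something along these lines (sketch only; field name and placement are
just a suggestion):

struct hvm_vcpu {
    ...
    /* Event channel over which this vCPU's ioreqs are signalled. */
    evtchn_port_t          ioreq_evtchn;
    ...
};

set up and torn down from the per-vCPU add/remove paths, instead of indexing
a MAX_HVM_VCPUS-sized array hanging off the server structure.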

Jan

