
RE: [Xen-devel] use of struct hvm_mirq_dpci_mapping.gmsi vs. HVM_IRQ_DPCI_*_MSI flags



>>> On 27.04.11 at 04:49, "Kay, Allen M" <allen.m.kay@xxxxxxxxx> wrote:
>> >
>>> I'm largely asking because I think struct hvm_mirq_dpci_mapping.dom
>>> and .digl_list could actually overlay .gmsi, as much as struct
>>> hvm_irq_dpci.hvm_timer could actually rather be folded into struct
>>> hvm_mirq_dpci_mapping (and then also overlay .gmsi). The overlay
>>> distinction bit would, based on initialization, be HVM_IRQ_DPCI_GUEST_MSI,
>>> but according to use it wouldn't be clear which of the two
>>> HVM_IRQ_DPCI_*_MSI bits is actually the correct one.
>>>
> 
> Jan, sorry for the late reply.  I was out of the office in the past week.
> 
> Are you proposing the following data structure change?
> 
> struct hvm_mirq_dpci_mapping {
>     uint32_t flags;
>     int pending;
>     union {
>         struct timer *hvm_timer;
>         struct list_head digl_list;
>         struct domain *dom;
>         struct hvm_gmsi_info gmsi;
>     };
> };

No - afaics timer, digl_list, and dom must all be usable at the same
time, so only gmsi is an actual overlay (union) candidate. Then again,
there isn't much significance to this anymore once these structures no
longer get allocated as arrays, so it's more of a second-level
optimization.

Also, with my current (not yet posted) implementation there won't be
arrays of pointers either; instead there'll be a radix tree (indexed by
guest pirq) with the per-pirq structures attached to its nodes (a small
lookup sketch follows the structure definitions below). So there will be
a per-domain structure

struct hvm_irq_dpci {
    /* Guest IRQ to guest device/intx mapping. */
    struct list_head girq[NR_HVM_IRQS];
    /* Record of mapped ISA IRQs */
    DECLARE_BITMAP(isairq_map, NR_ISAIRQS);
    /* Record of mapped Links */
    uint8_t link_cnt[NR_LINK];
    struct tasklet dirq_tasklet;
};

and a per-guest-pirq one

struct hvm_pirq_dpci {
    uint32_t flags;               /* HVM_IRQ_DPCI_* flags */
    bool_t masked;
    uint16_t pending;
    struct list_head digl_list;   /* guest device/intx mappings for this pirq */
    struct domain *dom;
    struct hvm_gmsi_info gmsi;    /* guest MSI state (MSI case only) */
    struct timer timer;           /* IRQ time-out timer (PCI/INTx case) */
};

which possibly in a second step could become

struct hvm_pirq_dpci {
    uint32_t flags;
    bool_t masked;
    uint16_t pending;
    union {
        struct {
            struct list_head digl_list;
            struct domain *dom;
            struct timer timer;
        } pci;
        struct {
            uint32_t gvec;
            uint32_t gflags;
            int dest_vcpu_id; /* -1: multi-destination, non-negative: destination vCPU id */
        } msi;
    };
};
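
Just to make the radix tree idea concrete, here is a minimal lookup/insert
sketch (not the pending patch). It assumes the Linux-style
radix_tree_insert()/radix_tree_lookup() signatures from xen/radix-tree.h;
the pirq_tree field and the helper names are made up for this illustration
only:

/* Sketch only: per-guest-pirq structures hung off a radix tree indexed
 * by guest pirq. Assumes Linux-style radix tree signatures; pirq_tree
 * and the helper names are illustrative, not the pending patch. */
#include <xen/radix-tree.h>

struct hvm_irq_dpci_sketch {
    struct radix_tree_root pirq_tree;   /* guest pirq -> struct hvm_pirq_dpci * */
    /* ... girq[], isairq_map, link_cnt, dirq_tasklet as above ... */
};

/* Return the per-guest-pirq state, or NULL if none is mapped. */
static struct hvm_pirq_dpci *pirq_dpci_lookup(
    struct hvm_irq_dpci_sketch *dpci, unsigned int guest_pirq)
{
    return radix_tree_lookup(&dpci->pirq_tree, guest_pirq);
}

/* Attach newly allocated per-guest-pirq state to the tree. */
static int pirq_dpci_add(struct hvm_irq_dpci_sketch *dpci,
                         unsigned int guest_pirq,
                         struct hvm_pirq_dpci *pirq_dpci)
{
    return radix_tree_insert(&dpci->pirq_tree, guest_pirq, pirq_dpci);
}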

But clarification on the current (as opposed, perhaps, to the intended)
use of HVM_IRQ_DPCI_*_MSI would still be much appreciated (and if, as
suspected, this needs cleaning up, I'd like the cleanup to be done
before the patches I have pending).
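
Just to pin down what I mean by the distinction bit, a fragment sketched
under the assumption (as in the quoted text) that HVM_IRQ_DPCI_GUEST_MSI,
being the bit set at initialization time, is the correct test - whether
the other HVM_IRQ_DPCI_*_MSI bit would be equally valid is exactly what
needs clarifying (deliver_guest_msi() below is just a placeholder name):

if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI )
{
    /* Only the .msi overlay members would be valid here. */
    deliver_guest_msi(pirq_dpci->msi.gvec, pirq_dpci->msi.gflags);
}
else
{
    /* Only the .pci overlay members would be valid here. */
    struct dev_intx_gsi_link *digl;

    list_for_each_entry ( digl, &pirq_dpci->pci.digl_list, list )
        /* forward to the respective guest device/intx */;
}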

Also, there is one more open question (quoting
the mail titled "pt_irq_time_out() dropping d->event_lock before
calling pirq_guest_eoi()"):

"What is the reason for this? irq_desc's lock nests inside d->event_lock,
and not having to drop the lock early would not only allow the two loops
to be folded, but also to call a short cut version of pirq_guest_eoi()
that already obtained the pirq->irq mapping (likely going to be created
when splitting the d->nr_pirqs sized arrays I'm working on currently)."

In my pending patches I assume that this separation is indeed
unnecessary.
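
For illustration, folding the two loops while keeping the lock held would
look roughly like this (sketch only - it assumes irq_desc's lock does nest
inside d->event_lock and that pirq_guest_eoi() keeps its current (d, irq)
form; the function name and loop body are illustrative, not the actual
patch):

static void pt_irq_time_out_folded(struct hvm_pirq_dpci *pirq_dpci,
                                   struct domain *d, int machine_gsi)
{
    spin_lock(&d->event_lock);

    /* Single pass: clear the masked state recorded at injection time ... */
    pirq_dpci->masked = 0;

    /* ... and EOI without dropping the lock - the irq_desc lock taken
     * inside pirq_guest_eoi() nests within d->event_lock. */
    pirq_guest_eoi(d, machine_gsi);

    spin_unlock(&d->event_lock);
}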

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

