Re: [Xen-devel] [PATCH v2 06/27] ARM: GICv3 ITS: introduce device mapping
Hi,
On 22/03/17 17:29, Julien Grall wrote:
> Hi Andre,
>
> On 16/03/17 11:20, Andre Przywara wrote:
>> The ITS uses device IDs to map LPIs to a device. Dom0 will later use
>> those IDs, which we directly pass on to the host.
>> For this we have to map each device that Dom0 may request to a host
>> ITS device with the same identifier.
>> Allocate the respective memory and enter each device into an rbtree to
>> later be able to iterate over it or to easily teardown guests.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
>> ---
>> xen/arch/arm/gic-v3-its.c | 207
>> +++++++++++++++++++++++++++++++++++++++
>> xen/arch/arm/vgic-v3.c | 3 +
>> xen/include/asm-arm/domain.h | 3 +
>> xen/include/asm-arm/gic_v3_its.h | 18 ++++
>> 4 files changed, 231 insertions(+)
>>
>> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
>> index 5c11b0d..60b15b5 100644
>> --- a/xen/arch/arm/gic-v3-its.c
>> +++ b/xen/arch/arm/gic-v3-its.c
>> @@ -21,6 +21,8 @@
>> #include <xen/lib.h>
>> #include <xen/delay.h>
>> #include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> #include <xen/sizes.h>
>> #include <asm/gic.h>
>> #include <asm/gic_v3_defs.h>
>> @@ -32,6 +34,17 @@
>>
>> LIST_HEAD(host_its_list);
>>
>> +struct its_devices {
>> + struct rb_node rbnode;
>> + struct host_its *hw_its;
>> + void *itt_addr;
>> + paddr_t guest_doorbell;
>
> I think it would be worth explaining in the commit message why you need
> the guest_doorbell in the struct its_devices and how you plan to use it.
In v4 I now also elaborated on the reason in a comment (before the
struct), which I deem more useful than something in the commit message.
>> + uint32_t host_devid;
>> + uint32_t guest_devid;
>> + uint32_t eventids;
>> + uint32_t *host_lpis;
>> +};
>> +
>> bool gicv3_its_host_has_its(void)
>> {
>> return !list_empty(&host_its_list);
>> @@ -149,6 +162,24 @@ static int its_send_cmd_mapc(struct host_its
>> *its, uint32_t collection_id,
>> return its_send_command(its, cmd);
>> }
>>
>> +static int its_send_cmd_mapd(struct host_its *its, uint32_t deviceid,
>> + uint8_t size_bits, paddr_t itt_addr,
>> bool valid)
>> +{
>> + uint64_t cmd[4];
>> +
>> + if ( valid )
>> + {
>> + ASSERT(size_bits < 32);
>
> It would be better if you did the check against the real number in
> hardware (i.e. GITS_TYPER.ID_bits).
Added in v4.
>
>> + ASSERT(!(itt_addr & ~GENMASK(51, 8)));
>> + }
>> + cmd[0] = GITS_CMD_MAPD | ((uint64_t)deviceid << 32);
>> + cmd[1] = valid ? size_bits : 0x00;
>
> This is really confusing. The check was not in the previous version. So
> why do you need that?
Admittedly I got carried away by some intention to check this here
properly. But since itt_addr and size are only valid with V=1, I removed
this check in v3.
> Also, it would have been better to hide the "size - 1" in the helper,
> avoiding relying on the caller to do the right thing.
I tend to agree, but then we have the awkward case where an unmap passes
0 as the size, which then gets decremented by one. But you are right that
it's still saner this way, so I now pass 1 in the unmap call and do the
"-1" encoding inside the helper.
>> + cmd[2] = valid ? (itt_addr | GITS_VALID_BIT) : 0x00;
>
> Ditto about "valid? ...".
Removed in v3.
> [...]
>
>> +static struct host_its *gicv3_its_find_by_doorbell(paddr_t
>> doorbell_address)
>> +{
>> + struct host_its *hw_its;
>> +
>> + list_for_each_entry(hw_its, &host_its_list, entry)
>> + {
>> + if ( hw_its->addr + ITS_DOORBELL_OFFSET == doorbell_address )
>
> Why not storing the ITS address rather than the doorbell to avoid this
> check?
Because the doorbell address is a nice architectural property of MSIs in
general. And we need this lookup anyway; it's just the addition of the
doorbell offset that is different.
> [...]
>
>> +int gicv3_its_map_guest_device(struct domain *d,
>> + paddr_t host_doorbell, uint32_t
>> host_devid,
>> + paddr_t guest_doorbell, uint32_t
>> guest_devid,
>> + uint32_t nr_events, bool valid)
>> +{
>> + void *itt_addr = NULL;
>> + struct host_its *hw_its;
>> + struct its_devices *dev = NULL, *temp;
>> + struct rb_node **new = &d->arch.vgic.its_devices.rb_node, *parent
>> = NULL;
>> + int ret = -ENOENT;
>> +
>> + hw_its = gicv3_its_find_by_doorbell(host_doorbell);
>> + if ( !hw_its )
>> + return ret;
>> +
>> + /* check for already existing mappings */
>> + spin_lock(&d->arch.vgic.its_devices_lock);
>> + while ( *new )
>> + {
>> + temp = rb_entry(*new, struct its_devices, rbnode);
>> +
>> + parent = *new;
>> + if ( !compare_its_guest_devices(temp, guest_doorbell,
>> guest_devid) )
>> + {
>> + if ( !valid )
>> + rb_erase(&temp->rbnode, &d->arch.vgic.its_devices);
>> +
>> + spin_unlock(&d->arch.vgic.its_devices_lock);
>> +
>> + if ( valid )
>
> Again, a printk(XENLOG_GUEST...) here would be useful to know which host
> DeviceID was associated with the guest DeviceID.
I added a gprintk(XENLOG_DEBUG, ...), which I think is more appropriate
(as it may spam the console when some stupid guest is running). Let me
know if you want a different loglevel.
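Purely as an illustration of the kind of message meant here (the exact v4
wording and format string are not quoted in this mail):

/* Hypothetical example of the debug message for an already-mapped device. */
gprintk(XENLOG_DEBUG,
        "vGITS: guest DeviceID 0x%x already mapped to host DeviceID 0x%x\n",
        guest_devid, temp->host_devid);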
>> + return -EBUSY;
>> +
>> + return remove_mapped_guest_device(temp);
>
> Just above you removed the device from the RB-tree but this function may
> fail and never free the memory. This means that memory will be leaked
> leading to a potential denial of service.
So I fixed this case in v4, though there is still a tiny chance of a
memory leak: if the MAPD(V=0) command fails, we can't really free the ITT
table, because it still belongs to the ITS. I don't think we can do much
about that, though.
I free the other memory allocations now, anyway.
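To illustrate the intent (the helper's actual v4 body is not quoted in this
mail, so its exact shape here is an assumption): the ITT is only freed once
the host ITS has acknowledged the unmap, while the tracking structure itself
is always released.

static int remove_mapped_guest_device(struct its_devices *dev)
{
    int ret;

    /* Unmap the device on the host ITS (1 EventID bit, no ITT, V=0). */
    ret = its_send_cmd_mapd(dev->hw_its, dev->host_devid, 1, 0, false);

    /*
     * Only free the ITT if the ITS no longer references it; otherwise it
     * has to stay allocated (the small leak mentioned above).
     */
    if ( !ret )
        xfree(dev->itt_addr);

    xfree(dev->host_lpis);
    xfree(dev);

    return ret;
}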
>> + }
>> +
>> + if ( compare_its_guest_devices(temp, guest_doorbell,
>> guest_devid) > 0 )
>> + new = &((*new)->rb_left);
>> + else
>> + new = &((*new)->rb_right);
>> + }
>> +
>> + if ( !valid )
>> + goto out_unlock;
>> +
>> + ret = -ENOMEM;
>> +
>> + /* An Interrupt Translation Table needs to be 256-byte aligned. */
>> + itt_addr = _xzalloc(nr_events * hw_its->itte_size, 256);
>> + if ( !itt_addr )
>> + goto out_unlock;
>> +
>> + dev = xzalloc(struct its_devices);
>> + if ( !dev )
>> + goto out_unlock;
>> +
>> + ret = its_send_cmd_mapd(hw_its, host_devid, fls(nr_events - 1) - 1,
>
> I don't understand why nr_events - 1. Can you explain?
Xen lacks an ilog2, so "fls" is the closest I could find. "fls" has these
slightly weird semantics (from the Linux source):
"Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32."
I think this translates into: "How many bits do I need to express this
number?". For our case the highest event number we need to encode is
"nr_events - 1", hence the subtraction.
So would it be worth introducing a:
static inline int ilog2_64(uint64_t n) ...
in xen/include/asm-arm/bitops.h to document this?
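As a sketch of what such a helper could look like (built on the existing
32-bit fls() and its fls(1) = 1 semantics; the 64-bit handling and the
exact name are assumptions):

/* Returns the index of the most significant set bit, i.e. floor(log2(n)).
 * Undefined for n == 0. */
static inline int ilog2_64(uint64_t n)
{
    uint32_t hi = n >> 32;

    return hi ? fls(hi) + 31 : fls((uint32_t)n) - 1;
}

The MAPD call above could then read ilog2_64(nr_events - 1) instead of
fls(nr_events - 1) - 1, which makes the "index of the highest EventID"
intent more obvious.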
>
> [...]
>
>> +/* Removing any connections a domain had to any ITS in the system. */
>> +void gicv3_its_unmap_all_devices(struct domain *d)
>> +{
>> + struct rb_node *victim;
>> + struct its_devices *dev;
>> +
>> + /*
>> + * This is an easily readable, yet inefficient implementation.
>> + * It uses the provided iteration wrapper and erases each node,
>> which
>> + * possibly triggers rebalancing.
>> + * This seems overkill since we are going to abolish the whole
>> tree, but
>> + * avoids an open-coded re-implementation of the traversal
>> functions with
>> + * some recursive function calls.
>> + * Performance does not matter here, since we are destroying a
>> domain.
>
> Again, this is slightly untrue. Performance matters when destroying a
> domain, as Xen cannot be preempted. So if it takes too long, you will
> have an impact on the overall system.
I reworded this sentence in v3, since you apparently misunderstood me.
By inefficient I meant sub-optimal, but this is not a _critical_ path,
so we don't care too much. The execution time is clearly bounded by the
number of devices. We simply shouldn't allow gazillions of devices on a
DomU if we care about those things.
> However, I think it would be fair to assume that all devices will be
> deassigned before the ITS is destroyed. So I would just drop this
> function. Note that we have the same assumption in the SMMU driver.
Well, why not keep it as a safeguard, then? If a domain gets destroyed,
aren't we supposed to clean up?
>> + */
>> +restart:
>> + spin_lock(&d->arch.vgic.its_devices_lock);
>> + if ( (victim = rb_first(&d->arch.vgic.its_devices)) )
>> + {
>> + dev = rb_entry(victim, struct its_devices, rbnode);
>> + rb_erase(victim, &d->arch.vgic.its_devices);
>> +
>> + spin_unlock(&d->arch.vgic.its_devices_lock);
>> +
>> + remove_mapped_guest_device(dev);
>> +
>> + goto restart;
>> + }
>> +
>> + spin_unlock(&d->arch.vgic.its_devices_lock);
>> +}
>> +
>> /* Scan the DT for any ITS nodes and create a list of host ITSes out
>> of it. */
>> void gicv3_its_dt_init(const struct dt_device_node *node)
>> {
>> @@ -455,6 +661,7 @@ void gicv3_its_dt_init(const struct dt_device_node
>> *node)
>> its_data->addr = addr;
>> its_data->size = size;
>> its_data->dt_node = its;
>> + spin_lock_init(&its_data->cmd_lock);
>
> This should be in patch #5.
Done in v4.
>>
>> printk("GICv3: Found ITS @0x%lx\n", addr);
>>
>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>> index d61479d..1fadb00 100644
>> --- a/xen/arch/arm/vgic-v3.c
>> +++ b/xen/arch/arm/vgic-v3.c
>> @@ -1450,6 +1450,9 @@ static int vgic_v3_domain_init(struct domain *d)
>> d->arch.vgic.nr_regions = rdist_count;
>> d->arch.vgic.rdist_regions = rdist_regions;
>>
>> + spin_lock_init(&d->arch.vgic.its_devices_lock);
>> + d->arch.vgic.its_devices = RB_ROOT;
>
> Again, the placement of those 2 lines are likely wrong. This should
> belong to the vITS and not the vgic-v3.
Well, it's a "domain which has an ITS" thing, as this is not per ITS. So
far we only have a per-ITS init function, as we don't iterate over the
host ITSes there anymore.
So I refrained from adding a separate function to initialize these
simple and generic data structures. Please let me know if you insist on
some ITS-per-domain init function.
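If a dedicated init is preferred, it would presumably boil down to
something like this (the name and exact placement are just assumptions for
illustration):

/* Hypothetical per-domain vITS init, moving the two lines out of
 * vgic_v3_domain_init(). */
int vgic_v3_its_init_domain(struct domain *d)
{
    spin_lock_init(&d->arch.vgic.its_devices_lock);
    d->arch.vgic.its_devices = RB_ROOT;

    return 0;
}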
Cheers,
Andre.
> I think it would make sense to get a patch that introduces a skeleton
> for the vITS before this patch and start plumbing through.
>
>> +
>> /*
>> * Domain 0 gets the hardware address.
>> * Guests get the virtual platform layout.
>
> Cheers,
>