
[Xen-devel] Re: [PATCH v6] Userspace grant communication



On 02/14/2011 11:14 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Feb 03, 2011 at 12:18:58PM -0500, Daniel De Graaf wrote:
>> Changes since v5:
>>   - Added a tested xen version to workaround in #4
>>   - Cleaned up variable names & structures
>>   - Clarified some of the cleanup in gntalloc
>>   - Removed copyright statement from public-domain files
>>
>> [PATCH 1/6] xen-gntdev: Change page limit to be global instead of per-open
>> [PATCH 2/6] xen-gntdev: Use find_vma rather than iterating our vma list manually
>> [PATCH 3/6] xen-gntdev: Add reference counting to maps
>> [PATCH 4/6] xen-gntdev: Support mapping in HVM domains
>> [PATCH 5/6] xen-gntalloc: Userspace grant allocation driver
>> [PATCH 6/6] xen/gntalloc,gntdev: Add unmap notify ioctl
> 
> Hey Daniel,
> 
> I took a look at the patchset and then the bug-fixes:
> 
> Daniel De Graaf (12):
>       xen-gntdev: Change page limit to be global instead of per-open
>       xen-gntdev: Use find_vma rather than iterating our vma list manually
>       xen-gntdev: Add reference counting to maps
>       xen-gntdev: Support mapping in HVM domains
>       xen-gntalloc: Userspace grant allocation driver
>       xen/gntalloc,gntdev: Add unmap notify ioctl
>       xen-gntdev: Fix memory leak when mmap fails
>       xen-gntdev: Fix unmap notify on PV domains
>       xen-gntdev: Use map->vma for checking map validity
>       xen-gntdev: Avoid unmapping ranges twice
>       xen-gntdev: prevent using UNMAP_NOTIFY_CLEAR_BYTE on read-only mappings
>       xen-gntdev: Avoid double-mapping memory
> 
> 
> And besides the two questions I posted today they look OK to me. However,
> I have one question that I think points to a bug.
> 
> Say that I call GNTDEV_MAP_GRANT_REF three times. The first time I provide
> a count of 4, then 1, and then once more 1.
> 
> The first call would end up with priv having:
> 
> priv->maps[0] => map.count=4, map.user=1, map.index=0. We return op.index as 0.
> 
> The next call:
> 
> priv->maps[0] => map.count=4, map.user=1, map.index=0.
> priv->maps[1] => map.count=1, map.user=1, map.index=5 (gntdev_add_map
> ends up adding the index and the count from the previous map to it). We 
> return op.index as 20480.
> 
I think this will come out with map.index=4, op.index=16384 (map->index
shifted by PAGE_SHIFT), since the only entry in priv->maps has
map->index = 0 and map->count = 4.

> The last call ends up with
> priv->maps[0] => map.count=4, map.user=1, map.index=0.
> priv->maps[1] => map.count=1, map.user=1, map.index=5
> priv->maps[2] => map.count=1, map.user=1, map.index=0. And we return
> op.index as 0.
> 
How do we return that? The "goto done" branch should not be taken unless
there is a hole in the existing priv->maps list created by a previous
deletion.

I see add->index starting at 0, then set to 4 and then 5, its final value.
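
To make that concrete, here is a quick userspace model of the allocation
loop (my paraphrase of the logic, not the driver source; struct and
function names are made up). Fed your 4/1/1 sequence it prints indices
0, 4 and 5, i.e. op.index 0, 16384 and 20480:

#include <stdio.h>

#define PAGE_SHIFT 12

/* Toy model of gntdev_add_map's index allocation: the list is kept
 * sorted by index, and a new entry is pushed past each existing one
 * unless it fits in a hole before it (the "goto done" early exit). */
struct toy_map {
        int index, count;
};

static int add_map(struct toy_map *maps, int *n, int count)
{
        int index = 0, i, j;

        for (i = 0; i < *n; i++) {
                if (index + count < maps[i].index)
                        break;  /* hole big enough: early exit */
                index = maps[i].index + maps[i].count;
        }
        for (j = *n; j > i; j--)        /* insert sorted at slot i */
                maps[j] = maps[j - 1];
        maps[i].index = index;
        maps[i].count = count;
        (*n)++;
        return index;
}

int main(void)
{
        struct toy_map maps[8];
        int n = 0, i;
        int counts[] = { 4, 1, 1 };

        for (i = 0; i < 3; i++) {
                int idx = add_map(maps, &n, counts[i]);
                printf("count=%d -> index=%d (op.index=%d)\n",
                       counts[i], idx, idx << PAGE_SHIFT);
        }
        return 0;
}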

> It looks as if gntdev_add_map does not do anything to map.index when
> the "if (add->index + add->count < map->index)" test comes out true,
> and we end up with op.index=0. Which naturally is incorrect, as that
> offset is associated with the grant_map that has four entries!
> 
> I hadn't yet tried to modify the nice test-case program you provided
> to see if this can happen in practice, but it sure looks like it could?
 
This code wasn't changed from the old gntdev code. In gntalloc, I went with
the much simpler method of an always-incrementing index for generating
offsets; there's no reason that can't be done here if it looks like there's
a mistake. The code does look correct to me, and I have tested it with
variable-size grants (although not in the 4/1/1 configuration you suggested).
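
For reference, the gntalloc scheme amounts to nothing more than a counter;
a minimal sketch (illustrative names, not the actual gntalloc code):

#include <stdio.h>

/* Offsets come from a monotonically increasing counter and are never
 * reused, so there is no list walk or hole-finding to get wrong; the
 * trade-off is that the offset space is consumed permanently. */
static unsigned long next_index;        /* illustrative per-device counter */

static unsigned long alloc_index(unsigned int count)
{
        unsigned long index = next_index;

        next_index += count;
        return index;
}

int main(void)
{
        unsigned int counts[] = { 4, 1, 1 };
        int i;

        for (i = 0; i < 3; i++)
                printf("count=%u -> index=%lu\n",
                       counts[i], alloc_index(counts[i]));
        return 0;
}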

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

