
Re: [Xen-devel] [PATCH 4/6] xen-gntdev: Support mapping in HVM domains



On 02/14/2011 10:51 AM, Konrad Rzeszutek Wilk wrote:
>> +static int unmap_grant_pages(struct grant_map *map, int offset, int pages);
>> +
>>  /* ------------------------------------------------------------------ */
>>  
>>  static void gntdev_print_maps(struct gntdev_priv *priv,
>> @@ -179,11 +184,34 @@ static void gntdev_put_map(struct grant_map *map)
>>  
>>      atomic_sub(map->count, &pages_mapped);
>>  
>> -    if (map->pages)
>> +    if (map->pages) {
>> +            if (!use_ptemod)
>> +                    unmap_grant_pages(map, 0, map->count);
> 
> In the past (before this patch) the unmap_grant_pages would be called
> on the .ioctl, .release, and .close (on VMA). This adds it now also
> on the mmu_notifier_ops paths. Why?
> 
This does not actually add an unmap on the mmu_notifier path. The MMU
notifier is only used when use_ptemod is true, and this new call to
unmap_grant_pages is only made when use_ptemod is false.

The HVM path for map and unmap is slightly different: HVM keeps the pages
mapped until the area is deleted, while the PV case (use_ptemod being true)
must unmap them as soon as userspace unmaps the range. In the normal use
case this makes no difference to users, since the unmap happens at deletion
time anyway.
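
To make the two paths concrete, here is a compile-only sketch of where
unmap_grant_pages() ends up being called in each mode. This is not the
upstream gntdev.c: the names follow the quoted patch context, the
signatures are simplified, and the bodies are stand-ins.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in types/stubs; the real definitions live in drivers/xen/gntdev.c. */
struct grant_map {
	int count;
	void **pages;
};

static bool use_ptemod;	/* true for PV, false for HVM with this series */

static int unmap_grant_pages(struct grant_map *map, int offset, int pages)
{
	printf("unmapping %d grant(s) at offset %d\n", pages, offset);
	return 0;
}

/*
 * PV path (use_ptemod == true): the mmu notifier / vma close hooks fire
 * when userspace unmaps the range, so the grants are torn down there.
 * (Signature simplified; the real hook takes an mmu_notifier and mm.)
 */
static void mn_invl_range_start(struct grant_map *map, int offset, int pages)
{
	unmap_grant_pages(map, offset, pages);
}

/*
 * HVM path (use_ptemod == false): the pages stay mapped until the
 * grant_map itself is released, so the unmap happens here instead.
 */
static void gntdev_put_map(struct grant_map *map)
{
	if (map->pages) {
		if (!use_ptemod)
			unmap_grant_pages(map, 0, map->count);
		/* ... free pages and the map itself ... */
	}
}

So in both cases the grants are unmapped exactly once; only the point at
which that happens differs.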

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

