
Re: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for netvsc driver





On 6/7/2021 2:50 PM, Christoph Hellwig wrote:
On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
+       if (hv_isolation_type_snp()) {
+               pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+                              GFP_KERNEL);
+               for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
+                       pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
+                               (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+               vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+               kfree(pfns);
+               if (!vaddr)
+                       goto cleanup;
+               net_device->recv_original_buf = net_device->recv_buf;
+               net_device->recv_buf = vaddr;
+       }

This probably wants a helper to make the thing more readable.  But who
came up with this fucked up communication protocol where the host needs
to map random pfns into a contiguous range?  Sometimes I really have to
wonder what crack the hyper-v people take when comparing this to the
relatively sane approach others take.

Agree. Will add a helper function.
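
A minimal sketch of the kind of helper this could become, lifted directly from
the quoted hunk (the name netvsc_remap_buf() is made up here, and error
handling is trimmed):

static void *netvsc_remap_buf(void *buf, unsigned long size)
{
	unsigned long *pfns;
	void *vaddr;
	int i;

	/* Build a pfn array pointing at the alias above the shared GPA boundary. */
	pfns = kcalloc(size / HV_HYP_PAGE_SIZE, sizeof(unsigned long), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
		pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE) +
			  (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);

	/* Remap the pfn list into one contiguous virtual range. */
	vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
	kfree(pfns);

	return vaddr;
}

The call site would then shrink to something like:

	vaddr = netvsc_remap_buf(net_device->recv_buf, buf_size);
	if (!vaddr)
		goto cleanup;
	net_device->recv_original_buf = net_device->recv_buf;
	net_device->recv_buf = vaddr;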

+       for (i = 0; i < page_count; i++)
+               dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
+                                packet->dma_range[i].mapping_size,
+                                DMA_TO_DEVICE);
+
+       kfree(packet->dma_range);

Any reason this isn't simply using a struct scatterlist?

I will have a look. Thanks for the reminder about scatterlist.
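
For reference, a rough sketch of what a scatterlist-based version might look
like (packet->sgt is a hypothetical field here, and this assumes
PAGE_SIZE == HV_HYP_PAGE_SIZE so pb[i].pfn can be fed to pfn_to_page()):

Map side:

	struct sg_table *sgt;
	struct scatterlist *sg;
	int i;

	sgt = kzalloc(sizeof(*sgt), GFP_ATOMIC);
	if (!sgt || sg_alloc_table(sgt, page_count, GFP_ATOMIC))
		goto err;

	/* One sg entry per hv_page_buffer range. */
	for_each_sg(sgt->sgl, sg, page_count, i)
		sg_set_page(sg, pfn_to_page(pb[i].pfn), pb[i].len, pb[i].offset);

	if (!dma_map_sg(&hv_dev->device, sgt->sgl, page_count, DMA_TO_DEVICE))
		goto err;
	packet->sgt = sgt;

Unmap side (replacing the per-range loop above):

	dma_unmap_sg(&hv_dev->device, packet->sgt->sgl,
		     packet->sgt->orig_nents, DMA_TO_DEVICE);
	sg_free_table(packet->sgt);
	kfree(packet->sgt);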


+       for (i = 0; i < page_count; i++) {
+               char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
+                                        + pb[i].offset);
+               u32 len = pb[i].len;
+
+               dma = dma_map_single(&hv_dev->device, src, len,
+                                    DMA_TO_DEVICE);

dma_map_single can only be used on page backed memory, and if this is
using page backed memory you wouldn't need to do these phys_to_virt
tricks.  Can someone explain the mess here in more detail?

Sorry, could you elaborate on the issue? The pages in the pb array are not allocated by the DMA API; dma_map_single() is used here only to map these pages' addresses to the bounce buffer physical addresses.
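
For what it's worth, if these ranges are in fact page backed (PAGE_SIZE ==
HV_HYP_PAGE_SIZE on x86), the phys_to_virt() step could be avoided by mapping
the page directly, e.g. (a sketch only, not a claim about what the final
patch should do):

	/* Map the range via its page, skipping the phys_to_virt() round trip. */
	dma = dma_map_page(&hv_dev->device, pfn_to_page(pb[i].pfn),
			   pb[i].offset, pb[i].len, DMA_TO_DEVICE);
	if (dma_mapping_error(&hv_dev->device, dma))
		goto err;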


        struct rndis_device *dev = nvdev->extension;
        struct rndis_request *request = NULL;
+       struct hv_device *hv_dev = ((struct net_device_context *)
+                       netdev_priv(ndev))->device_ctx;

Why not use a net_device_context local variable instead of this cast
galore?


OK. I will update.
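
Something along these lines, based only on the quoted hunk (netdev_priv()
returns void *, so no cast is needed):

	struct net_device_context *ndev_ctx = netdev_priv(ndev);
	struct hv_device *hv_dev = ndev_ctx->device_ctx;
	struct rndis_device *dev = nvdev->extension;
	struct rndis_request *request = NULL;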


Thanks.
