
Re: Virtio on Xen with Rust




Hello Viresh

[sorry for the possible format issues]

On Thu, Jun 23, 2022 at 8:48 AM Viresh Kumar <viresh.kumar@xxxxxxxxxx> wrote:
> On 22-06-22, 18:05, Oleksandr Tyshchenko wrote:
> > Even leaving
> > aside the fact that restricted virtio memory access in the guest means that
> > not all of guest memory can be accessed, so even having pre-mapped guest
> > memory in advance, we are not able to calculate a host pointer as we don't
> > know which gpa the particular grant belongs to.
>
> Ahh, I clearly missed that as well. We can't simply convert the
> address here on the requests :(


Exactly, the grant represents the granted guest page, but the backend doesn't know the guest physical address of that page, and it shouldn't know it; that is the point.
So the backend can only map granted pages, i.e. the ones for which the guest explicitly calls dma_map_*(). What's more, currently the backend shouldn't keep them mapped longer than necessary, for example to cache mappings. Otherwise, when calling dma_unmap_*() the guest will notice that the grant is still in use by the backend and complain.
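
As a minimal sketch of that map-use-unmap discipline with libxengnttab (the request flow around it is my assumption, this is not code from any actual backend):

#include <stdint.h>
#include <sys/mman.h>      /* PROT_READ / PROT_WRITE */
#include <xengnttab.h>     /* libxengnttab, shipped with the Xen tools */

/* Map a single granted page for one request and unmap it right after,
 * so the guest's later dma_unmap_*() won't find the grant still in use. */
static int process_one_request(xengnttab_handle *xgt,
                               uint32_t guest_domid, uint32_t gref)
{
    void *va = xengnttab_map_grant_ref(xgt, guest_domid, gref,
                                       PROT_READ | PROT_WRITE);
    if (!va)
        return -1;

    /* ... access the request/response data through 'va' here ... */

    xengnttab_unmap(xgt, va, 1);   /* 1 page */
    return 0;
}

In practice the handle would be opened once at startup with xengnttab_open(NULL, 0); the point here is only that a grant ref, not a gpa, is the sole token the backend receives.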


> > I am not sure that I understand this use-case.
> > Well, let's consider the virtio-disk example, it demonstrates three
> > possible memory mapping modes:
> > 1. All addresses are gpa, map/unmap at runtime using foreign mappings
> > 2. All addresses are gpa, map in advance using foreign mappings
> > 3. All addresses are grants, only map/unmap at runtime using grant mappings
> >
> > If you are asking about #4 which would imply map in advance together with
> > using grants then I think, no. This won't work with the current stuff.
> > These are conflicting options: either grants and map at runtime, or gpa and
> > map in advance.
> > If there is a wish to optimize when using grants then "maybe" it is worth
> > looking into how persistent grants work for the PV block device, for example
> > (feature-persistent in blkif.h).

> I thought #4 may make it work for our setup, but it isn't necessarily
> what we need.
>
> The deal is that we want hypervisor agnostic backends, they won't and
> shouldn't know what hypervisor they are running against. So ideally,
> no special handling.

I see and agree
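
(For illustration only, a hypothetical shape of such "no special handling"; the names here are made up:)

#include <stddef.h>
#include <stdint.h>

/* The generic backend core would only ever see an ops table like this;
 * whether map() is backed by Xen foreign mappings, Xen grants or plain
 * mmap() on KVM stays hidden in the hypervisor-specific glue. */
struct guest_mem_ops {
    void *(*map)(void *ctx, uint64_t guest_addr, size_t len, int writable);
    void  (*unmap)(void *ctx, void *host_va, size_t len);
};

A grants implementation and a foreign-mappings implementation would then just be two instances of the same interface.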

> To make it work, the simplest of the solutions can be to map all that
> we need in advance, when the vhost negotiations happen and memory
> regions are passed to the backend. It doesn't necessarily mean mapping
> the entire guest, but just the regions we need.
>
> With what I have understood about grants until now, I don't think it
> will work straight away.

yes

Below is my understanding, which might be wrong.

I am not sure about x86, as there are some subtleties with its guest modes (for example, PV guests should always use grants for virtio), but on Arm (where the guest type is HVM):
1. If you run backend(s) in dom0, which is trusted by default, you don't necessarily need to use grants for virtio, so you will be able to map what you need in advance using foreign mappings.
2. If you run backend(s) in another domain *which you trust* and you don't want to use grants for virtio, I think you will also be able to map in advance using foreign mappings, but for that you will need a security policy that allows your backend's domain to map arbitrary guest pages (see the sketch after this list).
3. If you run backend(s) in a non-trusted domain, you will have to use grants for virtio, so there is no way to map in advance, only to map at runtime what was previously granted by the guest and unmap it right after use.
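
For cases 1 and 2 above, a minimal sketch of mapping a guest RAM region in advance with libxenforeignmemory could look like the following (the region layout, 4 KiB pages and the helper name are my assumptions, not code from any existing backend):

#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>   /* libxenforeignmemory, Xen tools */

static void *premap_region(xenforeignmemory_handle *fmem, uint32_t domid,
                           uint64_t gpa_base, size_t nr_pages)
{
    xen_pfn_t *gfns = calloc(nr_pages, sizeof(*gfns));
    int *err = calloc(nr_pages, sizeof(*err));
    void *va = NULL;

    if (gfns && err) {
        for (size_t i = 0; i < nr_pages; i++)
            gfns[i] = (gpa_base >> 12) + i;   /* assuming 4 KiB pages */

        /* Succeeds only if the calling domain is privileged over the
         * guest (dom0, or whatever the security policy allows). */
        va = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                  nr_pages, gfns, err);
    }

    free(gfns);
    free(err);
    return va;
}

After such a call, gpa-based requests can be served with plain pointer arithmetic into the returned mapping, which is exactly what cannot work in case 3.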

There is another method to restrict the backend without modifying the guest, which is CONFIG_DMA_RESTRICTED_POOL in Linux, but this involves a memcpy in the guest and requires some support in the toolstack to make it work. I wouldn't
suggest it, as the usage of grants for virtio is better (and already upstream).

Regarding your previous attempt to map 512MB by using grants, what I understand from the error message is that Xen complains that the passed grant ref is bigger than the current number of grant table entries.
Now I am wondering where these 0x40000 - 0x5ffff grant refs (which the backend tries to map in a single call) come from; were they really granted by the guest beforehand and passed to the backend in a single request?
If the answer is yes, then what does gnttab_usage_print_all() say (key 'g' in the Xen console)? I would expect a lot of Xen messages like "common/grant_table.c:1882:d2v3 Expanding d2 grant table from 28 to 29 frames". Do you see them?
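
For what it's worth, a back-of-the-envelope check: 512MB in 4 KiB pages is 0x20000 pages, which matches the 0x40000 - 0x5ffff ref range exactly. Assuming v1 grant entries (8 bytes each), one 4 KiB grant-table frame holds 4096 / 8 = 512 entries, so ref 0x5ffff alone would require (0x5ffff + 1) / 512 = 768 frames, far above the usual max_grant_frames default (32 or 64, depending on version and configuration), which would explain why the map fails.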


> > Yes, this is the correct environment. Please note that Juergen has recently
> > pushed a new version [1]
>
> Yeah, I am following them up, will test the one you all agree on :)
>
> Thanks.
>
> --
> viresh


--
Regards,

Oleksandr Tyshchenko


