
Re: [Xen-devel] [PATCH v3 07/15] argo: implement the register op



On Wed, Jan 9, 2019 at 10:28 AM Julien Grall <julien.grall@xxxxxxxxx> wrote:
>
>
>
> On Wed, 9 Jan 2019, 12:54 Wei Liu, <wei.liu2@xxxxxxxxxx> wrote:
>>
>> On Wed, Jan 09, 2019 at 12:02:34PM -0500, Julien Grall wrote:
>> > Hi,
>> >
>> > Sorry for the formatting. Sending it from my phone.
>> >
>> > On Wed, 9 Jan 2019, 11:03 Christopher Clark, 
>> > <christopher.w.clark@xxxxxxxxx>
>> > wrote:
>> >
>> > > On Wed, Jan 9, 2019 at 7:56 AM Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
>> > > >
>> > > > On Sun, Jan 06, 2019 at 11:42:40PM -0800, Christopher Clark wrote:
>> > > > > The register op is used by a domain to register a region of memory
>> > > > > for receiving messages from either a specified other domain, or, if
>> > > > > specifying a wildcard, any domain.
>> > > > >
>> > > > > This operation creates a mapping within Xen's private address space
>> > > > > that will remain resident for the lifetime of the ring. In subsequent
>> > > > > commits, the hypervisor will use this mapping to copy data from a
>> > > > > sending domain into this registered ring, making it accessible to the
>> > > > > domain that registered the ring to receive data.
>> > > > >
>> > > > > Wildcard any-sender rings are disabled by default and registration
>> > > > > will be refused with EPERM unless they have been specifically enabled
>> > > > > with the argo-mac boot option introduced here. The reason why the
>> > > > > default for wildcard rings is 'deny' is that there is currently no
>> > > > > means to protect the ring from DoS by a noisy domain spamming the
>> > > > > ring, affecting other domains' ability to send to it. This will be
>> > > > > addressed with XSM policy controls in subsequent work.
>> > > > >
>> > > > > Since denying access to any-sender rings is a significant functional
>> > > > > constraint, a new bootparam is provided to enable overriding this:
>> > > > > the "argo-mac" variable has the allowed values 'permissive' and
>> > > > > 'enforcing'. Even though this is a boolean variable, these
>> > > > > descriptive strings are used in order to make it obvious to an
>> > > > > administrator that it has potential security impact.
>> > > > >
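
As an illustration of the boot parameter described above, a Xen command
line fragment might look like the following; the 'argo-mac' name and its two
values are taken from the patch description quoted here, while the separate
'argo' enable option shown alongside it is an assumption, not something
stated in this thread:

    argo=1 argo-mac=permissive    (allow wildcard any-sender rings)
    argo=1 argo-mac=enforcing     (keep the default: refuse them with EPERM)
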
>> > > > > The p2m type of the memory supplied by the guest for the ring must
>> > > > > be p2m_ram_rw and the memory will be pinned as PGT_writable_page
>> > > > > while the ring is registered.
>> > > > >
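
To make the p2m/type requirement above concrete, here is a minimal hedged
sketch (not the code from this patch series) of the kind of check and pinning
the commit message describes. get_page_from_gfn, get_page_and_type,
p2m_ram_rw and PGT_writable_page are existing Xen interfaces; the function
name and surrounding structure are assumptions:

    /* Minimal sketch, assuming Xen-internal headers (xen/mm.h, asm/p2m.h).
     * The guest-supplied frame must be ordinary RAM (p2m_ram_rw) and is
     * pinned as PGT_writable_page so its type cannot change while the ring
     * stays registered; the type reference is dropped again on unregister. */
    static int ring_check_and_pin_page(struct domain *d, gfn_t gfn, mfn_t *mfn)
    {
        p2m_type_t p2mt;
        struct page_info *page;
        int ret = 0;

        page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, P2M_ALLOC);
        if ( !page )
            return -EINVAL;

        if ( (p2mt != p2m_ram_rw) ||
             !get_page_and_type(page, d, PGT_writable_page) )
            ret = -EINVAL;              /* wrong p2m type: refuse registration */
        else
            *mfn = page_to_mfn(page);   /* caller maps this into Xen's space */

        put_page(page);                 /* drop the plain ref from the lookup;
                                         * the type+general ref, if taken, is
                                         * kept until unregister */
        return ret;
    }
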
>> > > > > The xen_argo_page_descr_t type is introduced as a page descriptor,
>> > > > > to convey both the physical address of the start of the page and its
>> > > > > granularity. The smallest granularity page is assumed to be 4096
>> > > > > bytes and the lower twelve bits of the type are used to indicate the
>> > > > > size of the page of memory supplied. The implementation of the
>> > > > > hypercall op currently only supports 4K pages.
>> > > > >
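
For reference, a hedged sketch of the descriptor encoding described in that
last paragraph; the typedef name matches the one quoted above, but the
constant and helper names are illustrative and the exact bit values are an
assumption based only on the text:

    /* Sketch only: the low 12 bits of the descriptor carry a size code, the
     * rest is the physical address of the start of the page (so it must be
     * at least 4K-aligned). Only the 4K code is accepted by the op. */
    typedef uint64_t xen_argo_page_descr_t;

    #define ARGO_DESCR_SIZE_MASK  0xfffULL   /* illustrative name */
    #define ARGO_DESCR_SIZE_4K    0x000ULL   /* illustrative value */

    static inline xen_argo_page_descr_t argo_descr_encode_4k(uint64_t addr)
    {
        return (addr & ~ARGO_DESCR_SIZE_MASK) | ARGO_DESCR_SIZE_4K;
    }

    static inline int argo_descr_decode_4k(xen_argo_page_descr_t pd,
                                           uint64_t *addr)
    {
        if ( (pd & ARGO_DESCR_SIZE_MASK) != ARGO_DESCR_SIZE_4K )
            return -EINVAL;                  /* page size not supported yet */
        *addr = pd & ~ARGO_DESCR_SIZE_MASK;
        return 0;
    }
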
>> > > >
>> > > > What is the resolution for the Arm issues mentioned by Julien? I read
>> > > > the conversation in the previous thread. A solution seemed to have
>> > > > been agreed upon, but the changelog doesn't say anything about it.
>> > >
>> > > I made the interface changes that Julien had asked for. The register
>> > > op now takes arguments that can describe the granularity of the
>> > > pages supplied, though only 4K pages are accepted in the current
>> > > implementation. I believe it meets Julien's requirements.
>> >
>> >
>> > I still don't think allowing either 4K or 64K is the right solution.
>> > You are adding unnecessary burden in the hypervisor, which would prevent
>> > optimization in the hypervisor and have unwanted side effects.
>> >
>> > For instance, a 64K hypervisor will always map 64K even when the guest
>> > is passing 4K. You also can't map everything contiguously in Xen (if
>> > you ever wanted to).
>> >
>> > We need to stick to a single chunk size. That could be different between
>> > Arm and x86. For Arm it would need to be 64KB.
>>
>> Doesn't enforcing 64KB granularity have its own limitations as well?
>> According to my understanding of Arm (and this could be wrong), you
>> would need to have the guest allocate (via memory exchange perhaps) 64KB
>> of machine-contiguous memory even when the hypervisor doesn't need it to
>> be 64KB (i.e. when the hypervisor is running with 4KB granularity).
>
>
> The 64K is just about the interface with the guest.
> The hypervisor could simply split the 64K into 16 4K chunks. No need for
> memory exchange here.
>
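
A small hedged sketch of the point Julien makes above: with a 64K
guest-facing chunk, a 4K hypervisor can simply walk the chunk as sixteen
consecutive 4K frames, so no memory exchange is needed on the guest side.
The helper names below are hypothetical:

    #define ARGO_CHUNK_SIZE   (64u << 10)            /* 64K guest-facing unit */
    #define XEN_4K_SIZE       (4u << 10)
    #define FRAMES_PER_CHUNK  (ARGO_CHUNK_SIZE / XEN_4K_SIZE)   /* == 16 */

    /* Validate/map each 4K frame of one 64K chunk; the per-frame work would
     * be something like the pinning sketch shown earlier. base_4k_gfn is the
     * chunk's first frame number in 4K units. */
    static int consume_64k_chunk(struct domain *d, unsigned long base_4k_gfn)
    {
        unsigned int i;

        for ( i = 0; i < FRAMES_PER_CHUNK; i++ )
        {
            int rc = ring_check_and_pin_frame(d, base_4k_gfn + i); /* hypothetical */

            if ( rc )
                return rc;
        }
        return 0;
    }
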
>>
>> I think having a method to return the granularity to the guest, like
>> Stefano suggested, is more sensible. The hypervisor would then reject
>> any registration request which doesn't conform to the requirement.
>
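
And a hedged sketch of the alternative Wei suggests above: a hypothetical
query that reports the hypervisor's ring granularity to the guest, paired
with the corresponding rejection check in the register path. PAGE_SIZE is
Xen's build-time page size; neither function name comes from the series:

    /* hypothetical query op: tell the guest what granularity rings must use */
    static long argo_ring_granularity(void)
    {
        return PAGE_SIZE;
    }

    /* register path: refuse regions that don't conform to that granularity */
    static int argo_check_ring_region(uint64_t base, uint64_t len)
    {
        if ( (base | len) & (PAGE_SIZE - 1) )
            return -EINVAL;
        return 0;
    }
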
>
> The problem is not that simple... For instance, 64K is required to support
> 52-bit PAs, yet you may still want to run your current Debian on that
> platform.
>
> You can do that nicely on KVM, but on Xen it is a pain due to the current
> interface. If you use 4K you may end up exposing too much to the other side.
>
> The only viable solution here is a full redesign of the ABI for Arm. We can
> do that step by step or in one go.
>
> The discussion here was about starting to solve it in Argo, so that's one
> less step to do. Christopher kindly tried to tackle it. Sadly, I don't think
> the interface suggested is going to work.
>
> But I don't want Argo to miss 4.12 because of that. So maybe the solution is
> to stick with the usual Xen interface.

Thanks for the consideration. With that understanding, I'll put the
frame-number-based interface back into place for a new revision of the
series, aiming for 4.12.

thanks,

Christopher

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

