Re: [Xen-devel] [RFC] Support of non-indirect grant backend on 64KB guest
On 20/08/15 at 1:44, Stefano Stabellini wrote:
> On Wed, 19 Aug 2015, Roger Pau Monnà wrote:
>> My opinion is that we have already merged quite a lot of this mess in
>> order to support guests with different page sizes. And in this case, the
>> addition of code can be done to a userspace component, which is much
>> less risky than adding it to blkfront, also taking into account that
>> it's a general improvement for Qdisk that other arches can also leverage.
>>
>> So on one hand you are adding code to a kernel component, which makes the
>> code much messier and can only be leveraged by ARM. On the other hand,
>> you can add code to a user-space backend, and that code is also
>> beneficial for other arches. IMHO, the decision is quite clear.
>
> 64K pages not working is entirely a Linux problem, not a Xen problem.
> Xen uses 4K pages as usual and exports the same 4K-based hypercall
> interface as usual. That needs to work, no matter what the guest decides
> to put in its own pagetables.
>
> I remind everybody that the Xen interfaces on ARM and ARM64 are fully
> maintained for backward compatibility. Xen is not forcing Linux to use
> 64K pages; that's entirely a Linux decision. The issue has nothing to do
> with Xen.
>
> The bug here is that Linux has broken 64K pages support and that should
> be fixed. I don't think it is reasonable to make changes to the Xen ABIs
> just to accommodate the brokenness of one guest kernel in a particular
> configuration.

Is it a change to the ABI to mandate indirect-descriptor support in order
to run arm64 guests with 64KB pages? IMHO, it is not, and none of the
proposed solutions (changing either blkfront or Qdisk) includes any change
to the Xen ABI.

In this case my preference would be to perform the change in the backend,
for the reasons detailed above. Anyway, I'm not going to block such a
change; I just think there are technically better ways to solve this issue.

Roger.
_______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxx http://lists.xen.org/xen-devel