
Re: [Xen-devel] [PATCH v3 2/4] x86/mem_sharing: introduce and use page_lock_memshr instead of page_lock



On Tue, Apr 30, 2019 at 1:15 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 29.04.19 at 18:35, <tamas@xxxxxxxxxxxxx> wrote:
> > On Mon, Apr 29, 2019 at 9:18 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>> On 26.04.19 at 19:21, <tamas@xxxxxxxxxxxxx> wrote:
> >> > --- a/xen/arch/x86/mm.c
> >> > +++ b/xen/arch/x86/mm.c
> >> > @@ -2030,12 +2030,11 @@ static inline bool current_locked_page_ne_check(struct page_info *page) {
> >> >  #define current_locked_page_ne_check(x) true
> >> >  #endif
> >> >
> >> > -int page_lock(struct page_info *page)
> >> > +#if defined(CONFIG_PV) || defined(CONFIG_HAS_MEM_SHARING)
> >> > +static int _page_lock(struct page_info *page)
> >>
> >> As per above, personally I'm against introducing
> >> page_{,un}lock_memshr(), as that makes the abuse look even
> >> more like proper use. But if this is to be kept this way, may I
> >> ask that you switch int -> bool in the return types at this occasion?
> >
> > Switching them to bool would be fine. Replacing them with something
> > saner is unfortunately out-of-scope at the moment. Unless someone has
> > a specific solution that can be put in place. I don't have one.
>
> I've outlined a solution already: Make a mem-sharing private variant
> of page_{,un}lock(), derived from the PV ones (but with pieces
> dropped you don't want/need).

Well, that's what I already did here in this patch. No?
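For illustration, a mem-sharing private variant along the lines Jan describes, derived from the PV page_lock but with the PV-only pieces dropped, might look roughly like this. This is a hedged sketch using simplified stand-in types and names (the `struct page_info` here, the `PGT_locked` value, and the `bool` return discussed above are simplified placeholders, not the actual Xen definitions):

```c
#include <stdbool.h>
#include <stdatomic.h>

/* Hypothetical simplified stand-in for Xen's struct page_info;
 * the real code manipulates a PGT_locked flag in type_info via cmpxchg. */
struct page_info {
    atomic_ulong type_info;
};

#define PGT_locked (1UL << 0)

/* Sketch of a mem-sharing private lock: spin until we are the caller
 * that atomically sets PGT_locked. The real PV variant can also fail
 * (hence the bool return under discussion); this sketch cannot. */
static bool page_lock_memshr(struct page_info *page)
{
    while ( atomic_fetch_or(&page->type_info, PGT_locked) & PGT_locked )
        ; /* another holder owns the lock; retry */
    return true;
}

static void page_unlock_memshr(struct page_info *page)
{
    /* clear PGT_locked, releasing the lock */
    atomic_fetch_and(&page->type_info, ~PGT_locked);
}
```

The point of a private variant is that mem-sharing callers stop sharing an entry point with the PV type-handling code, so PV-specific checks need not be compiled in or reasoned about on the mem-sharing path.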

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

