
Re: [Xen-devel] [PATCH XEN v5 19/23] tools/libs/call: Update some log messages to not refer to xc.



On Fri, 2015-11-13 at 16:20 +0000, Andrew Cooper wrote:
> On 09/11/15 12:00, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
> I agree with the sentiment, but the error messages should refer to the
> libxencall entry API, so a developer can match them back to the calls
> in their code.

The PERROR will prepend "xencall" onto the message.
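
(The real macro lives in the library's private header and goes through the
xtl logging machinery; the following is only a hypothetical sketch of the
pattern, for anyone reading along:)

    /* Hypothetical stand-in for libxencall's PERROR: prepend the
     * "xencall" subsystem prefix and append strerror(errno). The
     * format argument must be a string literal so it can be
     * concatenated with the prefix. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define PERROR(_f, _a...) \
        fprintf(stderr, "xencall: " _f ": %s\n", ##_a, strerror(errno))

so PERROR("alloc_pages: mmap failed") comes out as
"xencall: alloc_pages: mmap failed: <strerror(errno)>".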

This osdep_alloc_pages helper can be called from either
xencall_alloc_buffer_pages or xencall_alloc_buffer, so I don't think the
message can be made any more specific than "xencall: alloc_pages". (The
old message was similarly misleading in that regard.)
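
(Schematically, with the buffer cache and error handling omitted and the
signatures simplified, the two paths being described look like this; a
sketch, not the actual buffer.c source:)

    #include <stddef.h>

    #define PAGE_SIZE 4096  /* assumed for the sketch */

    typedef struct xencall_handle xencall_handle;
    void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);

    /* Path 1: the caller asks for whole pages directly. */
    void *xencall_alloc_buffer_pages(xencall_handle *xcall, size_t nr_pages)
    {
        return osdep_alloc_pages(xcall, nr_pages);
    }

    /* Path 2: the caller asks for a byte count, which is rounded up
     * to whole pages before reaching the same osdep helper. */
    void *xencall_alloc_buffer(xencall_handle *xcall, size_t size)
    {
        return osdep_alloc_pages(xcall, (size + PAGE_SIZE - 1) / PAGE_SIZE);
    }

Either way, osdep_alloc_pages cannot tell which public entry point it was
reached from, hence the generic "alloc_pages" tag.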

Ian.

> 
> ~Andrew
> 
> > ---
> >  tools/libs/call/linux.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/libs/call/linux.c b/tools/libs/call/linux.c
> > index 906ca7e..80b505c 100644
> > --- a/tools/libs/call/linux.c
> > +++ b/tools/libs/call/linux.c
> > @@ -88,7 +88,7 @@ void *osdep_alloc_pages(xencall_handle *xcall, unsigned int npages)
> >      p = mmap(NULL, size, PROT_READ|PROT_WRITE,
> >               MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
> >      if ( p == MAP_FAILED )
> >      {
> > -        PERROR("xc_alloc_hypercall_buffer: mmap failed");
> > +        PERROR("alloc_pages: mmap failed");
> >          return NULL;
> >      }
> > 
> > @@ -97,7 +97,7 @@ void *osdep_alloc_pages(xencall_handle *xcall, unsigned int npages)
> >      rc = madvise(p, npages * PAGE_SIZE, MADV_DONTFORK);
> >      if ( rc < 0 )
> >      {
> > -        PERROR("xc_alloc_hypercall_buffer: madvise failed");
> > +        PERROR("alloc_pages: madvise failed");
> >          goto out;
> >      }
> > 
> 
