
Re: [Xen-devel] [PATCH] fix potential null dereference

On Tue, 16 Jan 2018, Jan Beulich wrote:
> >>> On 15.01.18 at 19:51, <sstabellini@xxxxxxxxxx> wrote:
> > On Mon, 15 Jan 2018, Jan Beulich wrote:
> >> >>> On 13.01.18 at 07:21, <julien.grall@xxxxxxxxxx> wrote:
> >> > On 01/12/2018 11:40 PM, Stefano Stabellini wrote:
> >> >> handles can theoretically be NULL, check for it explicitly before
> >> >> dereferencing it.
> >> > 
> >> > I doubt handles could be NULL if LocateHandle succeeds. This seems to be 
> >> > confirmed by the spec (Page 208 in UEFI spec 2.7).
> >> > 
> >> > So I am not entirely convinced we should add yet another check in the 
> >> > code. An ASSERT might be better.
> >> 
> >> Indeed if there is a platform where NULL is coming back in the
> >> success case, that platform should be named as a justification
> >> in the commit message. Otherwise I don't see the value of this
> >> change.
> > 
> > Truthfully, it is mostly to silence Coverity. We can all appreciate when
> > static analysis tools cannot find defects in the code.
> So what does Coverity dislike here (the more that this is on a
> boot path, i.e. not exploitable by guests at all in the first place)?
> Merely the NULL pointer? What if the interface gave back a
> pointer with a value of 0x123456789abcdef?

Coverity complains "Dereferencing null pointer handles". The code path
to get to that point is the following, as explained by Coverity:

  - handles is set to NULL at the beginning of efi_get_gop
  - status is not EFI_BUFFER_TOO_SMALL, so AllocatePool is not called and
    handles is still NULL
  - later we call efi_bs->HandleProtocol(handles[i], ...), hence
    "Dereferencing null pointer handles"

Given that the first call to LocateHandle passes size == 0, by the spec I
don't see how it can return anything other than EFI_BUFFER_TOO_SMALL.

In other words, if LocateHandle complies, then the code is correct. I'll
mark it as won't fix.

Xen-devel mailing list
