
Re: [Xen-devel] [PATCH v1.1 for-4.11 0/3] vpci bugfixes



On Mon, Mar 26, 2018 at 06:39:21AM -0600, Jan Beulich wrote:
> >>> On 26.03.18 at 13:28, <roger.pau@xxxxxxxxxx> wrote:
> > These three patches are bugfixes for the vPCI code merged last week. They
> > were spotted by Coverity.
> 
> Thanks for dealing with them. You having omitted Coverity IDs I
> suppose the report you've looked at was from the XenServer internal
> instance. That would also explain why you have a fix for an issue the
> open source instance didn't spot. It spotted another issue though:
> 
> *** CID 1430809:    (BAD_SHIFT)
> /xen/drivers/vpci/vpci.c: 382 in vpci_read()
> 376                                              size - data_offset);
> 377     
> 378             data = merge_result(data, tmp_data, size - data_offset, 
> data_offset);
> 379         }
> 380         spin_unlock(&pdev->vpci->lock);
> 381     
> >>>     CID 1430809:    (BAD_SHIFT)
> >>>     In expression "0xffffffffU >> 32U - 8U * size", right shifting by 
> >>> more than 31 bits has undefined behavior.  The shift amount, "32U - 8U * 
> >>> size", is 32.
> 382         return data & (0xffffffff >> (32 - 8 * size));
> 383     }
> 
> And indeed there's no way I can see that it could prove size to
> only ever be 1, 2, or 4. I can't figure whether they've actually
> found a code path where size could end up being zero here. I
> think/hope a suitable ASSERT() would help.

I've also seen that one, but was wondering whether this should be
fixed in the handler dispatch code instead. However, since
vpci_read/write can be called from both the IO and the MMIO handlers, I
guess it's best to just add an ASSERT(size); to both the read and the
write handlers.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

