
Re: [Xen-devel] [PATCH v2 1/2] net/xen-netfront: Correct printf format in xennet_get_responses



On 04/06/15 17:25, Joe Perches wrote:
> On Thu, 2015-06-04 at 13:52 +0100, Julien Grall wrote:
>> On 04/06/15 13:46, David Vrabel wrote:
>>> On 04/06/15 13:45, Julien Grall wrote:
>>>> On 03/06/15 18:06, Joe Perches wrote:
>>>>> On Wed, 2015-06-03 at 17:55 +0100, Julien Grall wrote:
>>>>>> rx->status is an int16_t, print it using %d rather than %u in order to
>>>>>> have a meaningful value when the field is negative.
>>>>> []
>>>>>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>>>>> []
>>>>>> @@ -733,7 +733,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>>>>>>                  if (unlikely(rx->status < 0 ||
>>>>>>                               rx->offset + rx->status > PAGE_SIZE)) {
>>>>>>                          if (net_ratelimit())
>>>>>> -                                dev_warn(dev, "rx->offset: %x, size: %u\n",
>>>>>> +                                dev_warn(dev, "rx->offset: %x, size: %d\n",
>>>>>
>>>>> If you're going to do this, perhaps it'd be sensible to
>>>>> also change the %x to %#x or 0x%x so that people don't
>>>>> mistake offset without an [a-f] for decimal.
>>>>
>>>> Good idea. I will resend a version of this series.
>>>>
>>>> David, can I keep your Reviewed-by for this change?
>>>
>>> Can you make the offset %d instead?
> 
> If you do, please change similar uses in
> drivers/net/xen-netback/ in the same patch.

The format is not really consistent across the two drivers, or even
within the same driver (see pending_idx, which is sometimes printed
with %x and sometimes with %d).

Anyway, as they are different drivers with different maintainers, I
would prefer to send a separate patch for this.
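
For what it's worth, here is a minimal standalone sketch of the two
formatting points discussed above (plain printf semantics only, not the
actual driver code; the variable names are just illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            int16_t status = -1;      /* a negative rx->status-like value */
            unsigned int offset = 16; /* an offset with no [a-f] hex digits */

            /* %u reinterprets the promoted negative value as unsigned */
            printf("size: %u\n", status);    /* 4294967295 on common platforms */
            printf("size: %d\n", status);    /* -1, the meaningful value */

            /* plain %x can be mistaken for decimal when no [a-f] appears */
            printf("offset: %x\n", offset);  /* prints "10" */
            printf("offset: %#x\n", offset); /* prints "0x10" */

            return 0;
    }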

Regards,

-- 
Julien Grall

