
Re: [Xen-devel] Enhancing Xen's Kconfig infrastructure to support tailored solutions





On 18 Feb 2019, at 12:16, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:

On 2/18/19 12:11 PM, George Dunlap wrote:
On 2/18/19 12:01 PM, Andrew Cooper wrote:
On 18/02/2019 11:57, Wei Liu wrote:
On Mon, Feb 18, 2019 at 11:53:15AM +0000, Lars Kurth wrote:

On 18 Feb 2019, at 11:30, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:

On 2/18/19 11:23 AM, Wei Liu wrote:
On Mon, Feb 18, 2019 at 11:17:56AM +0000, Lars Kurth wrote:
Thank you, Wei. It's interesting, though, that the full and HVM-only builds are almost identical in terms of SLOCs.
Lars
The cloc target counts the files in the dependency graph generated by
make.
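
As a rough sketch of how such a comparison can be reproduced, assuming the cloc target behaves as described above (the config file names and the olddefconfig step here are assumptions, not the exact procedure used):

$ cp full.config .config && make olddefconfig       # full build config
$ make cloc > full.cloc
$ cp hvm-only.config .config && make olddefconfig   # HVM-only config
$ make cloc > hvm-only.cloc
$ diff full.cloc hvm-only.cloc                      # compare the counts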
Do we know for sure whether cloc counts everything in a file, or does it honour the preprocessor settings?
We certainly don't feed any preprocessor defines to it. I doubt it
understands C to that level of detail anyway.
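
A quick way to convince yourself, as a sketch (the file and the CONFIG_NEVER_SET symbol are illustrative; the exact cloc report format may differ):

$ cat > /tmp/demo.c <<'EOF'
#include <stdio.h>
#ifdef CONFIG_NEVER_SET
int unused_when_disabled(void) { return 1; }
#endif
int main(void) { return 0; }
EOF
$ cloc /tmp/demo.c
# Expect all five lines to be reported as code: cloc does not
# evaluate CONFIG_NEVER_SET, so the disabled branch is counted too.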

LoC isn't a fantastic metric under any circumstance.

Bigger code is definitely better if the reason it is bigger is that
it is formatted for readability/clarity etc.

Attempting to optimise for smaller LoC, other than by making entire
functional areas optional, is usually short-sighted.

For instance, we could probably decrease the LoC by nearly 20k by
changing the style so as not to give the opening brace its own line:

$ find . -name '*.c' | xargs grep '^[[:space:]]*{' | wc -l
19896
$ find . -name '*.[ch]' | xargs grep '^[[:space:]]*{' | wc -l
21847

This is hypervisor only BTW (run from xen.git/xen).
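
For context on that figure, one more count over the same file set (a sketch, run from xen.git/xen like the above; raw totals include blank and comment lines):

$ find . -name '*.[ch]' | xargs cat | wc -l
# Total lines across the same files; dividing the 21847 above by this
# shows what fraction of the tree a brace-style change would recover.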

It is a bit mind-boggling to think that there are more opening braces in
the Xen code base than there is PV-specific code. O_o

As we have the same coding conventions across the hypervisor code, that shouldn't make a difference.

Lars
