
Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories



On 04.12.20 10:10, Jan Beulich wrote:
On 01.12.2020 09:21, Juergen Gross wrote:
@@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
      .notifier_call = cpu_callback
  };
+#ifdef CONFIG_HYPFS
+static const struct hypfs_entry *cpupool_pooldir_enter(
+    const struct hypfs_entry *entry);
+
+static struct hypfs_funcs cpupool_pooldir_funcs = {

Yet one more const missing?

Already fixed locally.


+    .enter = cpupool_pooldir_enter,
+    .exit = hypfs_node_exit,
+    .read = hypfs_read_dir,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
+
+static const struct hypfs_entry *cpupool_pooldir_enter(
+    const struct hypfs_entry *entry)
+{
+    return &cpupool_pooldir.e;
+}
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);

Why do you maintain size here? I can't spot any use.

Oh, indeed.

This is a remnant of an earlier variant.


With this dropped, the function no longer depends on its
"entry" parameter, which makes me wonder ...

+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
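
With the stray size accumulation removed as agreed above, the read hook
might simply become (sketch only, not part of the posted patch):

static int cpupool_dir_read(const struct hypfs_entry *entry,
                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
{
    int ret = 0;
    const struct cpupool *c;

    /* Emit one dynamically generated direntry per existing cpupool. */
    list_for_each_entry(c, &cpupool_list, list)
    {
        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
                                         list_is_last(&c->list, &cpupool_list),
                                         &uaddr);
        if ( ret )
            break;
    }

    return ret;
}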
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);

... why this one does. To be certain their results are consistent
with one another, I think both should produce their results from
the same data.

In the end they do. Creating a complete direntry just for obtaining its
size is overkill, especially as hypfs_read_dyndir_id_entry() is not
directly calculating the size, but copying the fixed and the variable
parts in two portions.


+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(sizeof(*data));

I generally like the added type safety of the macro wrappers
around _xmalloc(). I wonder if it wouldn't be a good idea to have
such a wrapper here as well, to avoid random mistakes like

     data = hypfs_alloc_dyndata(sizeof(data));

Fine with me.
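
As a sketch of what such a wrapper could look like, mirroring the xmalloc()
style (the size-based helper name hypfs_alloc_dyndata_size() is an
assumption for illustration, not existing code):

/* Type-safe wrapper: the caller names the type, not a byte count. */
void *hypfs_alloc_dyndata_size(unsigned long size);
#define hypfs_alloc_dyndata(type) \
    ((type *)hypfs_alloc_dyndata_size(sizeof(type)))

so that the call site above becomes

    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);

and the sizeof(data) mistake can no longer be made.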


However, I further notice that the struct allocated isn't cpupool-specific
at all. It would seem to me that such an allocation therefore doesn't
belong here, which makes me wonder whether ...

+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);

... these two properties (initial ID and lock) shouldn't e.g. be
communicated via the template, allowing the enter/exit hooks to
become generic for all ID templates.

The problem with the lock is that it is rather user-specific. For
domains this will be split (rcu_read_lock(&domlist_read_lock) for
the /domain directory, and get_domain() for the per-domain level).

And the memory allocation might need other data as well, so this won't
be the same structure for all cases. A two-level dynamic directory
(e.g. domain/vcpu) might want to allocate the dyndata needed for
both levels already when entering /domain.


Yet in turn I notice that the "id" field only ever gets set, both
in patch 14 and here. But yes, I've now spotted the consumers in
patch 16.

+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    const struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);

Silent truncation from unsigned long to unsigned int?

Oh, indeed. Need to check against UINT_MAX.


+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
+}
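
With the UINT_MAX check folded in as mentioned above, the hook could end up
looking like this (sketch only; whether -ENOENT is the right error for an
out-of-range id is a judgement call):

static struct hypfs_entry *cpupool_dir_findentry(
    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
{
    unsigned long id;
    const char *end;
    const struct cpupool *cpupool;

    id = simple_strtoul(name, &end, 10);
    if ( end != name + name_len || id > UINT_MAX )
        return ERR_PTR(-ENOENT);

    /* No truncation: id is known to fit into unsigned int here. */
    cpupool = __cpupool_find_by_id(id, true);
    if ( !cpupool )
        return ERR_PTR(-ENOENT);

    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
}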
+
+static struct hypfs_funcs cpupool_dir_funcs = {

Yet another missing const?

Already fixed.


+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);

Why VARDIR? This isn't a template, is it? Or does VARDIR really
serve multiple purposes?

Basically it just takes an additional parameter for the function vector.
Maybe I should rename it to HYPFS_DIR_INIT_FUNC()?
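
Just to illustrate the proposed rename (the new name is the one suggested
above; that the macro keeps the same arguments is an assumption), the two
uses in this hunk would then read:

static HYPFS_DIR_INIT_FUNC(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
static HYPFS_DIR_INIT_FUNC(cpupool_dir, "cpupool", &cpupool_dir_funcs);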


+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+#else
+
+static void cpupool_hypfs_init(void)
+{
+}
+#endif

I think you want to be consistent with the use of blank lines next
to #if / #else / #endif. In cases when they enclose multiple entities,
I think it's generally better to have intervening blank lines
everywhere. I also think in such cases commenting #else and #endif is
helpful. But you're the maintainer of this code ...

I think I'll change it.
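
For illustration, the layout being suggested (blank lines next to the
directives, plus comments on #else and #endif) would look roughly like
this; a sketch of the style only, not the final patch:

#ifdef CONFIG_HYPFS

/* ... hypfs-specific helpers and the real cpupool_hypfs_init() ... */

#else /* !CONFIG_HYPFS */

static void cpupool_hypfs_init(void)
{
}

#endif /* CONFIG_HYPFS */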


Juergen
