
Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories



On 10.12.20 12:51, Julien Grall wrote:
> On 10/12/2020 07:54, Jürgen Groß wrote:
>> On 09.12.20 17:37, Julien Grall wrote:
>>>>   only the syntax used in this document):
>>>>   * STRING -- an arbitrary 0-delimited byte string.
>>>> @@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
>>>>   Writing a value is allowed only for cpupools with no cpu assigned and if the
>>>>   architecture is supporting different scheduling granularities.

>>> [...]

>>>> +
>>>> +static int domain_dir_read(const struct hypfs_entry *entry,
>>>> +                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>>> +{
>>>> +    int ret = 0;
>>>> +    const struct domain *d;
>>>> +
>>>> +    for_each_domain ( d )

>>> This is definitely going to be an issue if you have a lot of domains running, as Xen is not preemptible.

>> In general this is correct, but in this case I don't think it will
>> be a problem. The execution time for each loop iteration should be
>> rather short (in the microsecond range?), so even with 32000 guests
>> we would stay way below one second.

> The scheduling slices are usually in ms, not seconds (though this will depend on your scheduler). It would be unacceptable to me if another vCPU cannot run for a second because dom0 is trying to list the domains via HYPFS.
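Should this ever become a problem, the usual mitigation would be a
preemption point in the loop via hypercall_preempt_check(). A rough
sketch, not part of this patch (emit_dirent() is a hypothetical
stand-in for whatever copies one entry to the guest buffer, and the
hypfs read path would additionally need to support restarting):

static int domain_dir_read(const struct hypfs_entry *entry,
                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
{
    int ret = 0;
    const struct domain *d;

    /* for_each_domain() requires the RCU domain list lock. */
    rcu_read_lock(&domlist_read_lock);

    for_each_domain ( d )
    {
        ret = emit_dirent(d, &uaddr);    /* hypothetical per-entry helper */
        if ( ret )
            break;

        /* Bound the time spent without a scheduling opportunity. */
        if ( hypercall_preempt_check() )
        {
            ret = -ERESTART;
            break;
        }
    }

    rcu_read_unlock(&domlist_read_lock);

    return ret;
}

Whether that complexity is needed depends on the actual numbers, though.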

Okay, I did a test.

The worrying operation is the reading of /domain/ with lots of domains.

"xenhypfs ls /domain" with 500 domains running needed 231 us of real
time for the library call, while "xenhypfs ls /" needed about 70 us.
This makes 3 domains per usec, resulting in about 10 msecs with 30000
domains.
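For reference, the test essentially timed the directory read via
libxenhypfs, along these lines (a sketch, not the exact program used;
it assumes xenhypfs_open(), xenhypfs_readdir() and xenhypfs_close()
as declared in tools/include/xenhypfs.h -- check the header for the
exact signatures, and link against libxenhypfs):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#include <xenhypfs.h>

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : "/domain";
    struct xenhypfs_dirent *ents;
    struct timespec t0, t1;
    unsigned int n = 0;
    xenhypfs_handle *h = xenhypfs_open(NULL, 0);    /* assumed signature */

    if ( !h )
    {
        fprintf(stderr, "xenhypfs_open failed\n");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    ents = xenhypfs_readdir(h, path, &n);           /* assumed signature */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if ( ents )
        printf("reading %s: %u entries, %ld us\n", path, n,
               (long)((t1.tv_sec - t0.tv_sec) * 1000000 +
                      (t1.tv_nsec - t0.tv_nsec) / 1000));
    else
        fprintf(stderr, "xenhypfs_readdir failed\n");

    free(ents);
    xenhypfs_close(h);

    return 0;
}

Running it once for "/" and once for "/domain" and subtracting the two
times gives the per-domain cost quoted above.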


Juergen
