RE: RE: [Xen-API] Xen-API: XMLRPC Documentation

To: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Subject: RE: RE: [Xen-API] Xen-API: XMLRPC Documentation
From: George Shuklin <george.shuklin@xxxxxxxxx>
Date: Fri, 27 Aug 2010 17:39:52 +0400
In-reply-to: <81A73678E76EA642801C8F2E4823AD219330C427ED@xxxxxxxxxxxxxxxxxxxxxxxxx>
References: <246ad9ca-7fd4-6961-a1aa-d708b94870bf@xxxxxx> <1282868254.8152.25.camel@xxxxxxxxxxxxxxxx> <81A73678E76EA642801C8F2E4823AD219330C427ED@xxxxxxxxxxxxxxxxxxxxxxxxx>
Thank you for the reply.

On Fri, 27/08/2010 at 13:50 +0100, Dave Scott wrote:

> > 3) The internal architecture of xapi is not very clean. The database is
> > kept in a plaintext format (just OCaml serialisation), there is a lot of
> > cross-syncing (many empty pool updates via the XenAPI notification
> > system), and the master/slave design is awkward (every slave keeps a
> > copy of the database, but only the master will reply to requests).
> 
> The database is serialized in XML format but the primary copy is kept in 
> memory on the pool master. The copies sent to the slaves are just backups. 
> The master is authoritative for the pool metadata.

... And this causes problems for any third-party application: it has to
make XenAPI calls to the pool master every time it needs any piece of
information, however unimportant. Suppose we want to show a customer a
list of 10 VMs: we need to make _TEN_ XenAPI calls just to get the data
to display (and when the customer presses F5, we send all the requests
again). As an alternative we can fetch the status of all VMs at once,
but that is unacceptable when there are thousands of customers.
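
For illustration, a minimal sketch of both options with the Python
XenAPI bindings (the host name, credentials and refs_to_show are
placeholders):

    import XenAPI

    session = XenAPI.Session('https://xcp-master.example.com')
    session.xenapi.login_with_password('root', 'password')

    # Option 1: one call per VM - ten round trips to the master for ten
    # VMs, repeated every time the customer presses F5.
    # refs_to_show: the opaque refs of this customer's VMs (assumed known).
    records = [session.xenapi.VM.get_record(ref) for ref in refs_to_show]

    # Option 2: one call for everything - but it returns *all* VM records
    # in the pool, which does not scale to thousands of customers.
    all_records = session.xenapi.VM.get_all_records()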

I think that using some kind of database (SQL is not a good fit; maybe
something like MongoDB?) with direct query access (and a nice, powerful
query language) would let clients obtain the data they need without
overloading the master.
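
What such query access could look like, assuming the XCP metadata were
mirrored into a MongoDB collection (the schema and host are my own):

    import pymongo

    db = pymongo.Connection('mongo.example.com')['xcp_mirror']

    # Ten VMs of one customer: one indexed query, no load on the master.
    for vm in db.vms.find({'customer_id': 42}).limit(10):
        print vm['name_label'], vm['power_state']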

(See the notes about our project below.)
 
> I think that the best choice of architecture depends entirely on the proposed 
> users and usage scenarios. What do you think? What sort of scenarios and 
> architectures are you thinking of?


Right now we are working on a cloud for customers with resources on
demand and payment for real usage (not prepaid resources).

And right now we need to simply replicate all the data in XCP into our
MongoDB database, so that the web interface and service routines can
obtain the data they need without a barrage of requests. (The simplest
example of the trouble: we need to keep information about destroyed
VMs.)
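
A sketch of the replication I mean, following the XenAPI event stream
and mirroring every record into MongoDB (collection names and hosts are
mine; error handling omitted):

    import XenAPI, pymongo

    session = XenAPI.Session('https://xcp-master.example.com')
    session.xenapi.login_with_password('root', 'password')
    db = pymongo.Connection('mongo.example.com')['xcp_mirror']

    session.xenapi.event.register(['vm'])
    while True:
        for ev in session.xenapi.event.next():  # blocks until a change
            if ev['operation'] in ('add', 'mod'):
                doc = session.xenapi.VM.get_record(ev['ref'])
                doc['_id'] = ev['ref']
                db.vms.save(doc)
            elif ev['operation'] == 'del':
                # Mark the VM destroyed instead of losing its history.
                db.vms.update({'_id': ev['ref']},
                              {'$set': {'destroyed': True}})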


I believe that an external cloud controller with its own database and
simple agents on the hosts would be a better solution. But XCP is very
interesting because of its implemented concepts of pools, shared
resources, etc.

Such a controller (as the primary source of information for the cloud)
would solve one more real problem: controlling several different pools
(e.g. with different host configurations) with consistent UUIDs (we
could look at a VM/VDI UUID and say which pool contains it).

> > 4) Lack of full access to domain internals (the XS and XC interfaces).
> > Domains have pool-wide options and 'local' options (from the xc point
> > of view). We have no way to set XenStore values so that they survive
> > migration.
> 
> This is interesting -- could you explain more what you are trying to do? For 
> stuff like calling libxc functions, I recommend adding flags to the 
> VM.platform map, which gets written to xenstore, and then picking them up in 
> the 'xenguest' binary (mostly written in C).
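
For reference, this is roughly what that looks like from the Python
bindings (the flag name here is just an example):

    # Remove first: add_to_platform fails with MAP_DUPLICATE_KEY if the
    # key already exists in the VM.platform map.
    session.xenapi.VM.remove_from_platform(vm_ref, 'my-experimental-flag')
    session.xenapi.VM.add_to_platform(vm_ref, 'my-experimental-flag', 'true')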

A few things we sorely lack in XCP, all of them resource accounting:

1. CPU usage. xc.domain_getinfo() gives us the consumed CPU time. That
is nice, but the value is lost on reboot and migration (so we had to
build a collection subsystem ourselves; see the sketch after this
list).

2. Memory usage. The XCP data is unreliable: if the balloon driver
fails to complete a request, the memory-dynamic/target values can
differ from the real allocation. xc.domain_getinfo() shows the real
usage in KiB. And if we change memory on demand, we need to account for
it in some synthetic unit (KiB*hours).

3. Disk I/O accounting. I found the I/O operation counts for each VBD
in the /sys filesystem, and we collect that data with our own agent.
And, yet again, the information is lost on every domain
reboot/migration.

4. The same for the network (VIF).

If these data were stored in the metrics objects, it would really
help... but again, what is required is a common database with direct
query access.
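
A minimal sketch of the kind of collector we had to build, using the
Python xen.lowlevel.xc bindings (persistence is left out; the point is
accumulating across counter resets):

    import time
    import xen.lowlevel.xc

    xc = xen.lowlevel.xc.xc()
    totals = {}  # VM handle -> accumulated cpu_time in ns (persist this!)
    last = {}    # VM handle -> last raw counter seen

    def sample():
        for dom in xc.domain_getinfo():
            vm = tuple(dom['handle'])  # 16-byte handle == VM uuid in XCP
            raw = dom['cpu_time']      # ns since the domain was (re)created
            prev = last.get(vm, 0)
            # The raw counter restarts on reboot/migration; keep the delta.
            delta = raw - prev if raw >= prev else raw
            totals[vm] = totals.get(vm, 0) + delta
            last[vm] = raw

    while True:
        sample()
        time.sleep(60)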

> > 5) A scrapyard inside the other-config field (it contains some values
> > critical for running a VM, such as install-repository, but they are all
> > joined into a kind of blob: "dig it out yourself").
> The other-config (string -> string) map is intended for "extra" data which 
> isn't part of the full API. It's good for making things work quickly and 
> experimenting... the idea is that, once we are happy with an approach, we can 
> make "first-class" API fields for the data. The danger is of adding things to 
> the API too quickly... because then you are left with a backwards-compat 
> legacy problem IYSWIM.
I'm not talking about adding those values to the MAIN XenAPI spec, but
about simple structured access: vm.other_config.install_repository
rather than vm.other_config["install-repository"]. OK, we can say that
the 'other_config' hierarchy may change between versions. But at least
it would not be a pile of strings (some of them are really numbers,
some can be none/NULL, etc.).
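
A toy example of the access I mean, wrapping the raw map (the key list
and types are my guesses, not part of any spec):

    class OtherConfig(object):
        _TYPES = {
            'install-repository': str,
            'some-numeric-key': int,  # hypothetical
        }

        def __init__(self, raw):
            # raw: the (string -> string) map from VM.get_other_config
            self._raw = raw

        def __getattr__(self, name):
            key = name.replace('_', '-')
            if key not in self._raw:
                return None  # absent keys become None, not KeyError
            return self._TYPES.get(key, str)(self._raw[key])

    # oc = OtherConfig(session.xenapi.VM.get_other_config(vm_ref))
    # oc.install_repository -> str or None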

One other problem is last_boot_record: its content is simply OCaml code
with pieces of XML embedded in it.


> > 7) Lack of 'change the startup config while the VM is running' (I mean
> > the values which only have meaning while a VM is starting:
> > VCPUs-at-startup, memory-static-*, etc. - you can change them only while
> > the VM is offline; a more reasonable implementation would allow changing
> > those values while the VM is online, but apply them at the next
> > reboot/shutdown/start).
> 
> This was actually changed relatively recently for two reasons (which you may 
> disagree with!):
> 1. it simplifies the presentation in the user interface
> 2. it makes it easier to tightly control the amount of memory in use by a 
> domain, which is essential for controlling ballooning on the host.

No, we should simply split the 'domain running configuration' from the
'domain startup configuration'. The running configuration would have
many read-only fields; the startup configuration would allow changing
them. On every domain destruction (reboot/restart), but not on
migration, the startup values would be copied to the running ones. (And
the need for VCPUs-at-startup would disappear - there would simply be
vm.running.vcpus_number and vm.startup.vcpus_number.)
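
In pseudo-Python, the split I mean (all the names are mine, not part of
the XenAPI):

    import copy

    class VMConfig(object):
        def __init__(self, vcpus_number, memory_static_max):
            self.vcpus_number = vcpus_number
            self.memory_static_max = memory_static_max

    class VM(object):
        def __init__(self, startup):
            self.startup = startup                 # writable at any time
            self.running = copy.deepcopy(startup)  # read-only while alive

        def on_domain_destroy(self, migrating=False):
            # Reboot/shutdown applies startup values; migration does not.
            if not migrating:
                self.running = copy.deepcopy(self.startup)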

>  
> > 8) A heavy limitation on pool size (I have not tested it myself yet,
> > but I hear the limit is 16 hosts per pool - that is not good for a
> > normal cloud with many hosts).
> There is no pool size limit. However we focus most of our testing on 16 host 
> pools. Interestingly, I'm currently working on a bug which manifests on pools 
> with 30+ hosts.
> Although, it depends on what you mean by "many". The xapi pool is really the 
> "unit of shared storage"; if you have a much larger setup then I'd recommend 
> making some kind of higher-level interface.
Thank you for the information. I'll test it with up to 50 hosts next
month and see what happens.


> > 9) No way to use external kernels for PV guests.
> Could you explain this one more?

Well... it is simple. We have many customers (not yet, but we hope to
have them once we finish :), and every customer has a few VMs. Now
suppose we decide to change the kernel for the VMs (for example,
because of a pool upgrade)... And what are we supposed to do? Write to
every customer saying 'please update the kernel on every VM'? I think
that using external files with the kernel image (like xend does with
/etc/xen configs and the "kernel=" parameter) would be the best
solution.
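
Something like the classic xend guest config, where the kernel lives on
the host, so a pool-wide kernel upgrade needs no action from the
customer (paths and names are illustrative):

    # /etc/xen/customer-vm-01
    kernel  = "/boot/guest/vmlinuz-2.6.32-xen"
    ramdisk = "/boot/guest/initrd-2.6.32-xen.img"
    memory  = 512
    vcpus   = 2
    name    = "customer-vm-01"
    disk    = ['phy:/dev/vg0/customer-vm-01,xvda,w']
    vif     = ['bridge=xenbr0']
    root    = "/dev/xvda1 ro"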


> > 10) Heavy dependence on the guest tools.
> I think this could probably be changed fairly easily.
Yes, this is not a strategic problem for XCP; I just got carried away
and wrote down all the problems at once, sorry.


> > One more problem is the XenAPI itself: one connection per operation is
> > TOO heavy to use as a fast basis for future development. (I think it
> > needs some kind of interaction that does not drop the connection after
> > each command.)
> 
> I don't understand... xapi does support HTTP/1.1 with persistent 
> connections-- do you mean something different?
OK, maybe I am wrong, but how can I make several POST requests to the
XenAPI over a single HTTP connection? Is that really possible? If yes,
that will close this nasty question.
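
For what it's worth, this is how I would try it with plain HTTP/1.1
from Python (host and credentials are placeholders); if xapi keeps the
connection open, several calls share one TCP connection:

    import httplib, xmlrpclib

    conn = httplib.HTTPConnection('xcp-master.example.com')  # HTTP/1.1

    def call(method, *args):
        body = xmlrpclib.dumps(args, methodname=method)
        conn.request('POST', '/', body, {'Content-Type': 'text/xml'})
        return xmlrpclib.loads(conn.getresponse().read())[0][0]

    session = call('session.login_with_password',
                   'root', 'password')['Value']
    vms = call('VM.get_all', session)['Value']        # same connection
    records = call('VM.get_all_records', session)['Value']
    conn.close()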


Thank you again.

PS: We have a presentation of our goals, but it is in Russian and the
project is still in development, so I plan an English translation and a
wider presentation of the concept for later...


---
wBR, George Shuklin
system administrator
Selectel.ru


