
Re: [Xen-devel] Xen help


  • To: Derek Riley <derek.riley@xxxxxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: George Dunlap <dunlapg@xxxxxxxxx>
  • Date: Thu, 13 Aug 2009 11:52:55 +0100
  • Cc:
  • Delivery-date: Thu, 13 Aug 2009 03:53:23 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

(Please keep the discussion on the list.)

Because Xen doesn't have any state, and it can't directly invoke a
user-space process or prevent it from being preempted by the guest,
there cannot be locks on communications between Xen and a user-space
process.  Instead, a specific read/write ordering discipline is used.  A
brief description can be found in the original Xen paper, here:
http://leitl.org/docs/intel/IR-TR-2003-114-110720030715_178.pdf,
particularly section 3.2.

Basically, there is conceptually a shared "ring" of fixed size.  There
are also two shared indexes, a producer index and a consumer index.
The rules are that only Xen will modify the producer index, and will
only increment it; only xentrace will modify the consumer index, and
will only increment it.  xentrace will never increment the consumer
index past the producer index; and Xen will never increment the
producer index past (consumer + buffer size).
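
To make that concrete, here is a rough sketch in C of what such a
shared ring might look like.  The names, sizes and record layout are
illustrative assumptions, not the actual Xen trace-buffer definitions;
the indexes are declared _Atomic only so that the ordering rules can
be written portably in the sketch further down, whereas the real code
uses plain fields plus the hypervisor's own barrier macros.

#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 1024                  /* number of slots; illustrative */

/* Hypothetical fixed-size record; the real trace records are richer. */
struct trace_record {
    uint32_t event;
    uint64_t data;
};

/* One region of memory shared by Xen (producer) and xentrace
 * (consumer).  prod and cons are free-running counters; they are only
 * reduced modulo the ring size when used to address a slot. */
struct shared_ring {
    _Atomic uint64_t prod;              /* written only by Xen      */
    _Atomic uint64_t cons;              /* written only by xentrace */
    uint64_t size;                      /* == RING_SIZE             */
    struct trace_record element[RING_SIZE];
};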

Generalized Xen producer code:

if ( ring->prod < ring->cons + ring->size )   /* at least one free slot */
{
  ring->element[ring->prod % ring->size] = trace_record;
  ring->prod++;
}

Generalized xentrace consumer code:

if ( ring->prod > ring->cons )   /* at least one unread record */
{
  trace_record = ring->element[ring->cons % ring->size];
  ring->cons++;
  write(fd, &trace_record, sizeof(trace_record));
}

As long as the reads and writes happen in the proper order, there are
no races (or only benign ones).  Does that make sense?
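
Concretely, the only ordering that matters is: the producer must
finish filling in a slot before it bumps prod, and the consumer must
finish copying a slot out before it bumps cons.  Here is a sketch of
both sides using the shared_ring layout above, with C11
acquire/release operations standing in for whatever barrier
primitives the real code uses (again, an illustration, not the actual
Xen/xentrace source):

#include <stdatomic.h>
#include <unistd.h>

/* Producer (Xen) side: fill the slot first, then publish it.  The
 * release store means the consumer can never observe prod advanced
 * past a slot that has not been completely written. */
static int ring_put(struct shared_ring *r, struct trace_record rec)
{
    uint64_t prod = atomic_load_explicit(&r->prod, memory_order_relaxed);
    uint64_t cons = atomic_load_explicit(&r->cons, memory_order_acquire);

    if ( prod - cons >= r->size )
        return 0;                           /* ring full: drop the record */

    r->element[prod % r->size] = rec;
    atomic_store_explicit(&r->prod, prod + 1, memory_order_release);
    return 1;
}

/* Consumer (xentrace) side: copy the record out first, then free the
 * slot by bumping cons, so Xen never reuses a slot still being read. */
static int ring_get(struct shared_ring *r, int fd)
{
    uint64_t cons = atomic_load_explicit(&r->cons, memory_order_relaxed);
    uint64_t prod = atomic_load_explicit(&r->prod, memory_order_acquire);
    struct trace_record rec;

    if ( prod == cons )
        return 0;                           /* ring empty: nothing to do */

    rec = r->element[cons % r->size];
    atomic_store_explicit(&r->cons, cons + 1, memory_order_release);
    return write(fd, &rec, sizeof(rec)) == (ssize_t)sizeof(rec);
}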

As always, the devil's in the details (sleeping, waking, event
channels, lost records, variable-sized records, multiple pcpus, &c).
But hopefully that will be enough to get you started.

 -George

On Wed, Aug 12, 2009 at 12:12 PM, Derek
Riley<derek.riley@xxxxxxxxxxxxxxxxxxx> wrote:
> Thanks for the reply; the interface that you suggest sounds ideal.  I am new
> to Xen and relatively new to kernel programming, so I don't really
> understand how buffers can be used to pass information or what types of
> mutex or other issues must be considered.  I haven't found the xentrace code (or
> any other Xen code, for that matter) very readable or understandable.  I have
> spent most of my programming life working with user space processes and
> parallel code.  Any help or suggestions are appreciated.  Thanks for your
> time and patience.
> --Derek
>
> On Wed, Aug 12, 2009 at 5:00 AM, George Dunlap <dunlapg@xxxxxxxxx> wrote:
>>
>> On Tue, Aug 11, 2009 at 2:55 PM, Derek
>> Riley<derek.riley@xxxxxxxxxxxxxxxxxxx> wrote:
>> > To re-iterate what I posted, xentrace appears to be a 1-way
>> > communication
>> > mechanism that goes the "wrong way" for what I need.  I need to be able
>> > to
>> > pass information from the user space to the Xen scheduler, and Xentrace
>> > does
>> > just the opposite.
>>
>> Sure, xentrace copies data from the hypervisor and writes it to a
>> file.  But the same basic technique can be turned around: you can
>> write a new interface so that a program in dom0 can read data from a
>> file and feed it into the hypervisor, to be read by (if I recall
>> correctly) your new scheduler feature.
>>
>> So, how might you adapt the basic technique xentrace uses to put stuff
>> into the hypervisor instead of taking it out of the hypervisor?
>>
>>  -George
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

