RE: [Xen-ia64-devel] SMP designs and discuss
> Here is a list of points I'd like to share and discuss with you.
> Please comment, and do not hesitate to split this mail into
> new threads.
>
> Tristan.
>
> * smp_processor_id() (getting the current cpu number).
> Currently, this number is stored inside the current domain (cpu field)
> and is read through the variable 'current' (i.e. r13=tp).
> Another possibility is to store this number in per-cpu storage.
An interesting question... I see that Xen/x86 uses per-cpu
storage, but Linux (on all arches?) uses current_thread_info.
It might be worth asking on xen-devel why the Xen developers
chose to do this differently from Linux.
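
To make the comparison concrete, here is a rough stand-alone sketch of the
two options. The names are made up for illustration and are not the actual
Xen structures; in real code 'current' is derived from r13/tp and the
per-cpu base comes from a register, not an array index.

/* Illustrative only -- plain C stand-ins for the real structures. */
#include <stdio.h>

#define NR_CPUS 4

/* Option 1: the cpu number lives in the structure reached via 'current'
 * (which on ia64 is derived from r13, the thread pointer). */
struct vcpu {
    int cpu;                      /* physical cpu this vcpu runs on */
};
static struct vcpu *current_vcpu; /* stand-in for 'current' */
#define smp_processor_id_current()  (current_vcpu->cpu)

/* Option 2: the cpu number lives in per-cpu storage, as Xen/x86 does;
 * the array index stands in for the per-cpu base register. */
static int per_cpu_id[NR_CPUS];
static int percpu_base;           /* stand-in for the per-cpu base */
#define smp_processor_id_percpu()   (per_cpu_id[percpu_base])

int main(void)
{
    struct vcpu v = { .cpu = 2 };
    current_vcpu = &v;
    percpu_base = 2;
    per_cpu_id[2] = 2;

    printf("via current: %d, via per-cpu: %d\n",
           smp_processor_id_current(), smp_processor_id_percpu());
    return 0;
}
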
> * scheduler spin-lock.
> Currently, I use a hack to release the spin-lock acquired in
> __enter_schedule. This is done in schedule_tail.
> The problem is the first activation of a domain: the spin-lock is
> acquired, but context_switch never returns, whereas on x86 the
> spin-lock is released after context_switch.
Not sure if there is a better solution than your hack but others
may have suggestions.
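
For reference, my reading of the hack is roughly the sketch below. The lock
type and the function names are illustrative, not the actual Xen code:

/* Sketch only: the lock taken in __enter_schedule is dropped in
 * schedule_tail, because for a newly created domain context_switch
 * never returns to the point where x86 releases it. */
typedef struct { volatile int locked; } spinlock_t;
static void spin_lock(spinlock_t *l)   { while (__sync_lock_test_and_set(&l->locked, 1)) ; }
static void spin_unlock(spinlock_t *l) { __sync_lock_release(&l->locked); }

static spinlock_t schedule_lock;   /* stand-in for the scheduler lock */
struct vcpu;                       /* opaque for this sketch */
void context_switch(struct vcpu *prev, struct vcpu *next);  /* may not return */

void enter_schedule_sketch(struct vcpu *prev, struct vcpu *next)
{
    spin_lock(&schedule_lock);
    /* ... choose 'next' ... */
    context_switch(prev, next);    /* does not return for a first activation */
    spin_unlock(&schedule_lock);   /* reached on x86, never reached here */
}

void schedule_tail(struct vcpu *next)
{
    /* First code run on the new domain's stack: release the lock that
     * enter_schedule_sketch acquired before switching away. */
    spin_unlock(&schedule_lock);
    (void)next;                    /* ... continue into 'next' ... */
}
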
> * Idle regs.
> Currently, idle domains have no regs (the regs field is NULL).
> [I am not sure this is true for idle0.]
> Is it a problem?
> I had to modify the heartbeat so that it doesn't reference regs.
Personally I don't think idle should exist, but it definitely
shouldn't require state to be saved and restored.
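
I imagine the heartbeat change boils down to a guard like the one below;
the function names are hypothetical:

/* Hypothetical guard: skip the part of the heartbeat that dereferences
 * regs, since idle domains may have a NULL regs field. */
struct cpu_user_regs;                       /* opaque here */
void print_heartbeat_regs(struct cpu_user_regs *regs);

void heartbeat_sketch(struct cpu_user_regs *regs)
{
    if (regs != NULL)
        print_heartbeat_regs(regs);
    /* the rest of the heartbeat does not need regs */
}
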
> * Why is xentime.c so complicated?
> What is the purpose of itc_at_irq and stime_irq?
Some of this is historical from my early attempts to leverage
Linux code while merging in the necessary Xen code. Time
management needs to be rewritten but we have delayed working
on it until higher priority tasks are done.
> * Xenheap size.
> It is too small for more than 2 cpus.
> Maybe its size should depend on MAX_CPUS?
I think that is a good idea.
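
Scaling it could be as simple as the line below; the names and the
per-cpu increment are made up, and the real build-time constant is not
necessarily called MAX_CPUS:

/* Hypothetical sizing: a fixed base plus a per-cpu increment, so the
 * xenheap grows with the number of configured cpus. */
#define MAX_CPUS            16                   /* build-time cpu limit   */
#define XENHEAP_BASE_SIZE   (32UL << 20)         /* 32 MB base, made up    */
#define XENHEAP_PER_CPU     ( 4UL << 20)         /*  4 MB per cpu, made up */
#define XENHEAP_SIZE        (XENHEAP_BASE_SIZE + MAX_CPUS * XENHEAP_PER_CPU)
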
> * I'd like to catch memory accesses outside of Xen.
> I think it is very easy for code (just reduce the TR size);
> also, alt_itlb_miss must crash Xen.
> It is probably harder for data. I have to identify where Xen
> tries to access outside its data region. Here is a first try:
> * mmio (serial/vga/...) (can be mapped)
> * ACPI tables (can be copied)
> * Calls to PAL/SAL/EFI (can enable alt_dtlb_miss)
This is also a good idea but we have delayed improving
robustness until higher priority tasks are done.
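
One way to picture the data side is a policy check in the alt_dtlb_miss
path: the known regions (mmio, ACPI, firmware calls) are handled, and
anything else is a Xen bug. A rough C sketch, with made-up names and
address-classification helpers (the real handler is of course assembly):

/* Rough sketch of the policy for data accesses that fault outside
 * Xen's own region; everything here is illustrative. */
#include <stdbool.h>

typedef unsigned long paddr_t;

bool addr_is_mmio(paddr_t a);        /* serial/vga/... */
bool addr_is_acpi_table(paddr_t a);
bool in_firmware_call(void);         /* PAL/SAL/EFI call in progress */
void map_mmio(paddr_t a);
void identity_map(paddr_t a);
void panic(const char *why);

void xen_alt_dtlb_miss_sketch(paddr_t addr)
{
    if (addr_is_mmio(addr))
        map_mmio(addr);              /* or map these once at boot */
    else if (addr_is_acpi_table(addr))
        panic("ACPI tables should have been copied into the xenheap");
    else if (in_firmware_call())
        identity_map(addr);          /* alt_dtlb_miss enabled around PAL/SAL/EFI */
    else
        panic("Xen data access outside its region");
}
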
> * VHPT
> How many VHPTs per system?
> 1) Only one
> 2) One per LP (current design)
> 3) One per VCPU (original Xen-VTI)
> I think (1) is not scalable...
I thought the current design is (1). Perhaps I didn't
look closely enough at your SMP patch!
There was some debate about this in April. It started off-list
but some of it is archived here:
http://lists.xensource.com/archives/html/xen-ia64-devel/2005-04/msg00017.html
We decided that different use models may require different
VHPT models and that we would support both, at least so that
we could measure them both. This also hasn't been high priority
as, until recently, we didn't have more than one LP or VCPU!
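
Just to spell out what supporting both would mean, the three models only
differ in where the VHPT base comes from; the sketch below uses
illustrative types and field names, not the real ones:

/* Where the VHPT base would come from under the three models. */
#define NR_LPS 8

struct vcpu_sketch { void *vhpt; };      /* model 3: one VHPT per VCPU */

static void *global_vhpt;                /* model 1: one VHPT for the system   */
static void *per_lp_vhpt[NR_LPS];        /* model 2: one VHPT per logical proc */

void *vhpt_base(int model, int lp, struct vcpu_sketch *v)
{
    switch (model) {
    case 1:  return global_vhpt;         /* shared: simplest, contended on SMP */
    case 2:  return per_lp_vhpt[lp];     /* current design in the SMP patch    */
    default: return v->vhpt;             /* original Xen-VTI approach          */
    }
}
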
> * Instructions to be virtualized (when using SMP):
> * TLB related (itc, itr, ptc)
> * cache related (fc, PAL_CACHE_FLUSH)
> * ALAT: nothing to be done, as it is invalidated during domain switch.
> * other?
> Currently, any problems are avoided by pinning VCPUs to LPs.
> If you don't want to pin VCPUs, I have a proposal:
> * Add per-VCPU bitmaps of LPs. A bit is set when the VCPU runs on the LP.
> There may be one bitmap for cache and one bitmap for TLB.
> For cache operations, send an IPI to every LP whose bit is set in the bitmap.
> PAL_CACHE_FLUSH clears the bit of all VCPUs which have run on the LP.
> For ptc, send an IPI or ptc.g according to the number of bits set.
This deserves more thought. I agree that pinning is not
a good idea, but I think that's what Xen/x86 does today,
isn't it?
This also might be worth a discussion on xen-devel...
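
For the record, my reading of the bitmap proposal is roughly the sketch
below; the structure names, the helper functions and the IPI-vs-ptc.g
threshold are all illustrative:

/* Sketch of the per-VCPU bitmap proposal.  Each vcpu tracks which
 * logical processors it has run on; cache/TLB operations then target
 * only those LPs. */
#define NR_LPS 8

struct vcpu_sk {
    unsigned long cache_lps;   /* LPs whose caches may hold this vcpu's lines  */
    unsigned long tlb_lps;     /* LPs whose TLBs may hold this vcpu's mappings */
};

/* Hypothetical helpers, assumed to exist elsewhere. */
int  current_lp(void);
int  popcount(unsigned long bits);
void send_flush_ipi_to(unsigned long lp_mask);
void send_ptc_ipi_to(unsigned long lp_mask, unsigned long va, unsigned long ps);
void ptc_ga_broadcast(unsigned long va, unsigned long ps);

/* Called when the vcpu is scheduled on logical processor 'lp'. */
void note_vcpu_runs_on(struct vcpu_sk *v, int lp)
{
    v->cache_lps |= 1UL << lp;
    v->tlb_lps   |= 1UL << lp;
}

/* Guest cache operation: IPI only the LPs recorded in the bitmap.
 * A real PAL_CACHE_FLUSH on an LP would also clear that LP's bit in
 * every vcpu that has run there. */
void vcpu_cache_flush(struct vcpu_sk *v)
{
    send_flush_ipi_to(v->cache_lps);
}

/* Guest ptc: purge remote LPs by IPI or by ptc.g depending on how
 * many bits are set. */
void vcpu_ptc(struct vcpu_sk *v, unsigned long va, unsigned long ps)
{
    unsigned long others = v->tlb_lps & ~(1UL << current_lp());

    if (popcount(others) <= 2)
        send_ptc_ipi_to(others, va, ps);   /* few LPs: targeted IPIs */
    else
        ptc_ga_broadcast(va, ps);          /* many LPs: global purge */
}
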
Dan