Hi, Keir,

Thanks for your detailed comments. I have worked out an updated patch.

I removed my own MSR macro definitions and the hard-tab indentation. I also removed the LVTPC write-through in vlapic_write: the physical LVTPC is now written only when the guest enables both counting and the interrupt, and it is masked when the vcpu is scheduled out.
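A minimal sketch of that logic, assuming Xen's apic_read()/apic_write() and the APIC_LVTPC/APIC_LVT_MASKED definitions; the vpmu_* predicates and PMU_APIC_VECTOR are illustrative placeholders rather than the exact patch code:

    /* Sketch: the physical LVTPC is unmasked only while the guest has
     * both counting and the PMU interrupt enabled; on deschedule it is
     * masked again.  vpmu_* and PMU_APIC_VECTOR are placeholders. */
    static void vpmu_sync_lvtpc(struct vcpu *v)
    {
        if ( vpmu_counting_enabled(v) && vpmu_interrupt_enabled(v) )
            apic_write(APIC_LVTPC, PMU_APIC_VECTOR);  /* deliver PMIs */
        else
            apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
    }

    /* Called when the vcpu is scheduled out. */
    static void vpmu_deschedule(struct vcpu *v)
    {
        apic_write(APIC_LVTPC, apic_read(APIC_LVTPC) | APIC_LVT_MASKED);
    }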
In addition, I adopted a "first come, first served" policy to grant PMU access to either xenoprof or an HVM guest. When access has been granted to an HVM guest, oprofile gets "Device busy"; conversely, while xenoprof owns the PMU, the guest cannot use it (as before the patch).
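The arbitration could look roughly like the sketch below; the pmu_owner flag, the constants, and acquire/release names are illustrative placeholders, with cmpxgh-style atomics doing the test-and-claim:

    /* Sketch of "first come, first served" PMU ownership. */
    #define PMU_OWNER_NONE     0
    #define PMU_OWNER_XENOPROF 1
    #define PMU_OWNER_HVM      2

    static unsigned long pmu_owner = PMU_OWNER_NONE;

    /* Returns 0 on success, -EBUSY if the other subsystem owns the
     * PMU (this is where oprofile would see "Device busy"). */
    int acquire_pmu(unsigned long who)
    {
        unsigned long prev = cmpxchg(&pmu_owner, PMU_OWNER_NONE, who);
        return (prev == PMU_OWNER_NONE || prev == who) ? 0 : -EBUSY;
    }

    void release_pmu(unsigned long who)
    {
        cmpxchg(&pmu_owner, who, PMU_OWNER_NONE);
    }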
HVM save/restore has been tested; the patch does not break the current code.

Can you have a look and give me your comments? Thanks in advance!

Best Regards,
Haitao Shan
Oh yes, I see you save/restore MSR state and LVTPC
across context switch. That’s fine then. But I don’t think you should pass
through any of the virtual LVTPC register fields at all, except to toggle the
mask field of the real LVTPC depending on whether or not performance counter
interrupt delivery is currently enabled for the VCPU (I’d implement this just to
avoid unnecessary real interrupts which are certainly not going to turn into
virtual interrupts). There are no real LVTPC fields which should really be under
guest control, and so cooking the guest LVTPC value and poking it into the real
LVTPC in the vlapic code looks odd. As does save/restore of the whole LVTPC on
context switch -- at most you should need to track masked/not-masked. And you’re
letting the guest have access to reserved bits of the real LVTPC, which is not
good...
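For concreteness, a minimal sketch of that mask-only approach (the vpmu_pmi_enabled() predicate is an illustrative placeholder, not an existing interface):

    /* Sketch: no virtual LVTPC field is passed through; the real
     * LVTPC keeps its own vector, and only the mask bit tracks
     * whether the vcpu currently wants PMI delivery. */
    static void sync_real_lvtpc_mask(const struct vcpu *v)
    {
        u32 lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;

        if ( !vpmu_pmi_enabled(v) )      /* PMI delivery disabled?  */
            lvtpc |= APIC_LVT_MASKED;    /* then suppress real PMIs */

        apic_write(APIC_LVTPC, lvtpc);
    }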
Ignore my comment about save/restore -- I misread your context-switching code as HVM save/restore code!
-- Keir
On 11/12/07 13:32, "Shan, Haitao" <haitao.shan@xxxxxxxxx> wrote:
* Poking the LVTPC as a one-off event only when the VCPU writes its virtual LVTPC is not going to fly. What if the VCPU is subsequently migrated to a different physical CPU? In any case the LVTPC is a shared resource that needs synchronised access via e.g. context switching or mutual exclusion (see the next point below).

I think in the patch I will save/load the LVTPC during context switch.
* How does this interact with, for example, xenoprof? Ideally we should be able to run xenoprof and this concurrently over disjoint subsets of domains. At the very least xenoprof and this patch should avoid stomping on each other by implementing some form of mutual exclusion. Some resource sharing (e.g. the PMC interrupt vector) is probably also possible. I expect this will require some design thought, but unfortunately that is the price for being second into the tree.

Yes, how to share PMU resources will need careful design. But I really don't think it is possible to run both concurrently unless we partition the PMCs somehow. Of course, it was silly of me not to implement a mechanism for mutual exclusion; I will implement one.
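As a very rough illustration of the vector-sharing idea, a single PMC interrupt handler could dispatch on the current owner; all names below (the two handlers, plus the pmu_owner flag from the earlier sketch) are illustrative placeholders, not existing interfaces:

    /* Sketch: one shared PMC vector, dispatched by ownership. */
    void pmc_interrupt(struct cpu_user_regs *regs)
    {
        switch ( pmu_owner )
        {
        case PMU_OWNER_XENOPROF:
            xenoprof_pmi_handler(regs);        /* profile Xen/PV side */
            break;
        case PMU_OWNER_HVM:
            vpmu_do_interrupt(current, regs);  /* inject virtual PMI  */
            break;
        default:
            break;                             /* spurious; ignore    */
        }
    }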
* Impact on save/restore to/from Core-2 processors: has this been tested at all?

I will try doing the test. Whether it should support save/restore at all is arguable: does anyone want to run VTune/OProfile across a save/restore? That is hardly a good usage model. But at the least, I have updated the patch so that it does not break the current save/restore.
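For reference, the Core 2 counter state that a context switch (or an HVM save record) would need to carry looks roughly like this; the MSR indices are the architectural ones, while the struct layout and function names are illustrative:

    /* Sketch of Core 2 PMU state save/load via rdmsrl()/wrmsrl().
     * Fixed counters (MSRs 0x309-0x30b) omitted for brevity. */
    struct core2_pmu_state {
        u64 evtsel[2];   /* IA32_PERFEVTSEL0/1, MSRs 0x186-0x187 */
        u64 pmc[2];      /* IA32_PMC0/1,        MSRs 0x0c1-0x0c2 */
        u64 fixed_ctrl;  /* IA32_FIXED_CTR_CTRL,   MSR 0x38d     */
        u64 global_ctrl; /* IA32_PERF_GLOBAL_CTRL, MSR 0x38f     */
    };

    static void core2_pmu_save(struct core2_pmu_state *s)
    {
        rdmsrl(0x186, s->evtsel[0]);  rdmsrl(0x187, s->evtsel[1]);
        rdmsrl(0x0c1, s->pmc[0]);     rdmsrl(0x0c2, s->pmc[1]);
        rdmsrl(0x38d, s->fixed_ctrl); rdmsrl(0x38f, s->global_ctrl);
    }

    static void core2_pmu_load(const struct core2_pmu_state *s)
    {
        wrmsrl(0x186, s->evtsel[0]);  wrmsrl(0x187, s->evtsel[1]);
        wrmsrl(0x0c1, s->pmc[0]);     wrmsrl(0x0c2, s->pmc[1]);
        wrmsrl(0x38d, s->fixed_ctrl); wrmsrl(0x38f, s->global_ctrl);
    }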
Attachment: pmu.patch