RE: [Xen-ia64-devel] Re: [PATCH]: ptc.ga for SMP-g
>From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
>Sent: 30 March 2006 17:25
>For sure, this is almost the safest. But there is still a race condition: if
>the vcpu migrates during the IPI, the TLB can be modified by two CPUs.
So first, you need a hint to indicate which domain the ptc.g emulation is
happening on. That means each domain needs a private "struct ptc_ga_args"
to track its own progress. That way, the previous LP on which the target
domain ran before migration can treat the IPI as a no-op, since the current
context is not part of that domain. The vTLB will then only be updated by
the new LP.
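
Just to illustrate what I mean, here is a rough sketch. The field names, the
vtlb_purge() helper and the atomic bookkeeping are made-up assumptions for
illustration, not the real patch; in practice the struct would hang off the
per-domain state:

    /* Sketch only: names are illustrative assumptions. */
    struct ptc_ga_args {
        struct domain *domain;       /* domain whose ptc.ga is being emulated */
        unsigned long  vaddr;        /* start of the purge range */
        unsigned long  addr_range;   /* size of the purge range */
        atomic_t       pending;      /* LPs that still have to acknowledge */
    };

    /* IPI handler run on each LP targeted by the ptc.ga emulation. */
    static void ptc_ga_ipi_handler(void *info)
    {
        struct ptc_ga_args *args = info;

        /* If the vcpu running here no longer belongs to the target
         * domain (its vcpu migrated away), this LP must not touch the
         * vTLB: just acknowledge and return. */
        if (current->domain != args->domain) {
            atomic_dec(&args->pending);
            return;
        }

        /* Otherwise purge the matching vTLB entries locally. */
        vtlb_purge(current, args->vaddr, args->addr_range);
        atomic_dec(&args->pending);
    }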
Second, migration shouldn't happen to a running vcpu; only a runnable vcpu
sitting on the runqueue is a candidate for migration. That way, a vcpu
cannot be migrated in the middle of updating the vTLB.
Based on the above two points, I think we can avoid having the vTLB updated
by two LPs. The VHPT is a bit different, since it is per-LP: all the LPs the
domain has ever run on need to flush the stale content.
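
A rough sketch of what that per-LP purge looks like (again, the helper names,
the footprint mask and the cross-CPU call below are assumptions for
illustration only, not existing interfaces):

    struct vhpt_purge_args {
        unsigned long vaddr;
        unsigned long range;
    };

    static void vhpt_purge_ipi(void *info)
    {
        struct vhpt_purge_args *args = info;
        vhpt_purge_local(args->vaddr, args->range);  /* purge this LP's VHPT */
    }

    /* Purge a translation from the VHPT of every LP the domain ever ran on. */
    static void vhpt_purge_domain(struct domain *d,
                                  unsigned long vaddr, unsigned long range)
    {
        struct vhpt_purge_args args = { .vaddr = vaddr, .range = range };
        int cpu;

        for_each_cpu_mask ( cpu, d->arch.cpu_footprint )
        {
            if (cpu == smp_processor_id())
                vhpt_purge_local(vaddr, range);
            else
                /* placeholder for whatever cross-CPU call primitive is
                 * used; wait for completion so 'args' stays valid */
                smp_call_function_single(cpu, vhpt_purge_ipi, &args, 1, 1);
        }
    }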
>
>> >The main problem with IPI is migration. However, currently migration
>> >doesn't work well. I think it is ok to migrate a vcpu from CPU X to
>> >CPU Y, but we don't support migrating back from CPU Y to CPU X.
>>
>> Elaboration? Curious about the reason...
>This is not an SMP-g issue, but an SMP issue.
>After migration to CPU Y, the VHPT for CPU X is not updated. When it
>migrates again to CPU X, it has an outdated VHPT.
For this issue, it depends on the migration policy. Should migration happen
frequently or not? Is migration done automatically or manually? My rough
thought is that vcpu migration shouldn't be as frequent as normal process
migration, and also that the overhead of a migration is large.
For the issue you raised, we have at least two approaches:
- vcpu->vcpu_dirty_mask records all the LPs the target vcpu has ever run on.
When flushing a VHPT entry, an IPI can be sent to all LPs covered by the
mask, which ensures there are no stale entries when the vcpu is scheduled
back to CPU X.
- Since migration is expensive and doesn't happen frequently, we may add a
little more overhead at the migration point. That means when flushing a
VHPT entry we only send an IPI to the LP the target vcpu is currently
running on. Then at migration time we check whether the new LP (CPU X) is
one the migrated vcpu has ever run on; if yes, a full VHPT flush may be
issued to ensure no stale entries remain. This sacrifices migration
performance but saves cost at normal run time (a rough sketch of this
approach follows the list).
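
The sketch of the second approach; all the names here (is_running,
vhpt_footprint, send_purge_ipi, vhpt_flush_all_local) are assumptions for
illustration, not the actual code:

    /* Run time: a ptc.ga purge only reaches the LP the target vcpu is
     * currently running on (at most one IPI); other LPs are left alone. */
    static void vhpt_purge_running_lp(struct vcpu *v,
                                      unsigned long vaddr, unsigned long range)
    {
        if (v->is_running)
            /* send_purge_ipi() stands in for the real cross-CPU call */
            send_purge_ipi(v->processor, vaddr, range);
    }

    /* Migration time: called on the destination LP when the vcpu is first
     * scheduled in.  If that LP is in the vcpu's footprint it may still
     * hold stale entries, so issue one full local VHPT flush. */
    static void vhpt_migrate_fixup(struct vcpu *v)
    {
        int cpu = smp_processor_id();

        if (cpu_isset(cpu, v->arch.vhpt_footprint))
            vhpt_flush_all_local();   /* expensive, but migration is rare */

        /* this LP will now carry entries for the vcpu again */
        cpu_set(cpu, v->arch.vhpt_footprint);
    }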
Anyway, I think the right solution depends on the migration policy; however,
we always need to track important information about the footprint of the
target domain, like the dirty mask mentioned above, which is the key to the
actual implementation.
Thanks,
Kevin
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel