
RE: [Xen-devel] softtsc for PV guests



> That only matters if things happen that Xen doesn't know about.  If
> something happens that affects the tsc's parameters, it will update
> them immediately.
> 
> No, they're in the shared info area.  It reads them afresh each time
> it reads the tsc.  The info has a version counter which gets updated
> when the info changes so the guest can make sure it has a consistent
> snapshot of both the timing parameters and the tsc.  The timing
> parameters for a given CPU are only ever updated by that CPU, so
> there's no risk of races between CPUs.

OK, now looking at the code in 2.6.30, that all makes sense.
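
For anyone following along, the read loop in question is
pvclock_clocksource_read() in arch/x86/kernel/pvclock.c.  Roughly,
with field and function names simplified from the real code, and the
kernel's overflow-safe 96-bit multiply elided:

#include <stdint.h>

/* Simplified from struct pvclock_vcpu_time_info; the real layout
 * lives in the shared info area and is written only by Xen. */
struct pv_time_info {
    uint32_t version;           /* odd while Xen is mid-update */
    uint64_t tsc_timestamp;     /* TSC value at last update */
    uint64_t system_time;       /* ns of system time at last update */
    uint32_t tsc_to_system_mul; /* cycles -> ns scale factor */
    int8_t   tsc_shift;         /* pre-scale shift, may be negative */
};

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

static uint64_t pv_read_ns(volatile struct pv_time_info *t)
{
    uint32_t version;
    uint64_t delta, ns;

    do {
        version = t->version;
        __asm__ __volatile__("" ::: "memory");  /* rmb() in the kernel */
        delta = rdtsc() - t->tsc_timestamp;
        if (t->tsc_shift >= 0)
            delta <<= t->tsc_shift;
        else
            delta >>= -t->tsc_shift;
        /* the kernel widens this multiply to avoid overflow */
        ns = t->system_time + ((delta * t->tsc_to_system_mul) >> 32);
        __asm__ __volatile__("" ::: "memory");
    } while ((version & 1) || version != t->version);

    return ns;
}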

Has anyone stress-tested this code across the wide range
of TSC characteristics that might exist when migrating around
a virtualized data center?  I wonder, for example, what is
the longest period of time for which vgettimeofday will
return the same result (i.e. for which time appears "stopped").

> >> Right.  That's basically not supported under Linux, except as part
> >> of certain ABIs like vgettimeofday (which is functionally identical
> >> to the Xen PV clock ABI).
> >>
> > Again, a shame.  I'm learning that it is not uncommon for
> > unprivileged code to sample "time" tens of thousands or even
> > hundreds of thousands of times per processor per second.  Trapping
> > all app rdtscs or Linux going to HPET or PIT just doesn't cut it if
> > the frequency is this high.  If TSC is "safe" 99.99% of the time, it
> > sure would be nice if those apps could use rdtsc.
> 
> They can, with the gettimeofday vsyscall (= "syscall" which executes
> entirely in usermode within a kernel-provided vsyscall page).

Any idea what the cost of a gettimeofday vsyscall is relative
to an rdtsc?

(Alternatively, do I need to do anything in a 2.6.30 kernel, or when
compiling a simple C test program, to enable vgettimeofday to be used?
I'd like to compare the cost myself.)
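
Something like this quick-and-dirty loop is what I have in mind
(untested; on x86-64, glibc should route gettimeofday() through the
vDSO automatically, so a plain gcc -O2 build ought to exercise the
fast path):

#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    enum { N = 1000000 };
    struct timeval tv;
    uint64_t t0, t1;
    int i;

    /* cycles per gettimeofday() call (vsyscall/vDSO path) */
    t0 = rdtsc();
    for (i = 0; i < N; i++)
        gettimeofday(&tv, NULL);
    t1 = rdtsc();
    printf("gettimeofday: ~%llu cycles/call\n",
           (unsigned long long)(t1 - t0) / N);

    /* cycles per raw rdtsc; the volatile asm keeps the loop honest */
    t0 = rdtsc();
    for (i = 0; i < N; i++)
        rdtsc();
    t1 = rdtsc();
    printf("rdtsc:        ~%llu cycles/call\n",
           (unsigned long long)(t1 - t0) / N);
    return 0;
}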

> You're trying to make rdtsc something it isn't, even in native
> execution.
>
> rdtsc represents a massive lost opportunity and failure of imagination
> on Intel's part; one hopes that they'll eventually redeem themselves
> with a new mechanism which does actually have all the properties one
> wants - and that mechanism may eventually end up with rdtsc in it
> somewhere.  But we're not really there yet, and I think trying to make
> rdtsc that thing is a quixotic effort.

Windmills are my specialty :-)  Intel and AMD *have* solved the TSC
problem on the vast majority of new (single-socket multi-core) systems.
The trick is determining when the mechanism is safe to use and
when it is not.
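
Concretely, as a first cut: both vendors now advertise an "invariant
TSC" bit (cpuid leaf 0x80000007, EDX bit 8) promising a constant tick
rate across P- and C-state changes.  It says nothing about cross-socket
synchronization or what happens on migration, which is exactly the
remaining trick.  A rough, untested sketch of the check:

#include <stdio.h>
#include <stdint.h>

static void cpuid(uint32_t leaf,
                  uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
    __asm__ __volatile__("cpuid"
                         : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "a"(leaf));
}

int main(void)
{
    uint32_t a, b, c, d;

    /* check that the extended leaf exists before reading it */
    cpuid(0x80000000u, &a, &b, &c, &d);
    if (a < 0x80000007u) {
        printf("invariant TSC bit not reported\n");
        return 0;
    }
    cpuid(0x80000007u, &a, &b, &c, &d);
    printf("invariant TSC: %s\n", (d & (1u << 8)) ? "yes" : "no");
    return 0;
}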
 
> > I'm trying to find a solution that allows this to be supported
> > in a virtual environment (without huge loss of performance).
> > And I think I might have one.
> 
> Apps can't reliably use a raw rdtsc anyway, without making unwarranted
> assumptions about the underlying hardware.  Any app which does may
> work well on one system, but then mysteriously fail when you move it
> to the backup server.

Exactly.

But, reliable or not, they *can* and *do* and *will* use rdtsc.
And it *will* be reliable on enough systems that it may never
be noticed as unreliable, except as some weird bug that
occurs randomly only when the app is run in a virtual environment
and which never gets root-caused to be a TSC-related issue.

So wouldn't it be nice if apps could take advantage of a fast
synchronized rdtsc that IS reliable 99% of the time, but be
smart enough to adapt when it is NOT reliable?

And, for that matter, if rdtsc is much faster than vgettimeofday
(to be determined), wouldn't it be nice if Linux could take
advantage of a TSC clocksource that IS reliable 99%
of the time, but be smart enough to adapt when it is NOT
reliable?

Dan (Quixote)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

