
Re: [Xen-devel] [PATCH 00/13] Add VMX TSC scaling support



On 11/22/2015 12:54 PM, Haozhong Zhang wrote:
Hi Jan, Boris and Aravind,

(Sorry for sending such a long email and thanks for your patience)

First, thank you very much for doing this.


Because this patchset also touches the existing SVM TSC ratio code, I
tested it on an AMD machine with an AMD A10-7700K CPU (3.4 GHz) that
supports SVM TSC ratio. The test has two goals:
  (1) Check whether this patchset works well for SVM TSC ratio.
  (2) Check whether the existing SVM TSC ratio code works correctly.

* TL;DR
   The detailed testing process is boring and long, so I put the
   conclusions first.

   According to the following test,
   (1) this patchset works well for SVM TSC ratio, and
   (2) the existing SVM TSC ratio code does not work correctly.


* Preliminary bug fix

   Before testing (especially for goal (2)), I had to fix another bug
   found in the current svm_get_tsc_offset() (commit e08f383):

   static uint64_t svm_get_tsc_offset(uint64_t host_tsc, uint64_t guest_tsc,
     uint64_t ratio)
   {
       uint64_t offset;

       if (ratio == DEFAULT_TSC_RATIO)
           return guest_tsc - host_tsc;

       /* calculate hi,lo parts in 64bits to prevent overflow */
       offset = (((host_tsc >> 32U) * (ratio >> 32U)) << 32U) +
       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             (host_tsc & 0xffffffffULL) * (ratio & 0xffffffffULL);
             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             ^^ wrong

       return guest_tsc - offset;
   }

   Looking at AMD's spec for the TSC ratio MSR and at where this function
   is called, it is expected to calculate
       guest_tsc - ((host_tsc * ratio) >> 32)
   but the underlined code above is definitely not "(host_tsc * ratio) >> 32",
   and the function will return a much larger result than expected if
   (guest TSC rate / host TSC rate) > 1. In practice, it could result in
   the guest TSC jumping several years ahead after migration (which I came
   across and was confused by in this test).
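
   (For a concrete illustration with hypothetical numbers: take host_tsc = 10
   and ratio = 1.5 in 32.32 fixed point, i.e. 0x180000000. The intended value
   of (host_tsc * ratio) >> 32 is 15, but the underlined expression evaluates
   to ((10 >> 32) * 1) << 32 + 10 * 0x80000000 = 0x500000000, roughly 21.5
   billion: the low-half product is not shifted right by 32, and the cross
   terms are dropped entirely.)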

Yes, this is obviously wrong.


   This bug can be fixed either later by patch 5, which introduces a
   common function hvm_scale_tsc() to scale the TSC, or by replacing the
   underlined code above with a simplified and inlined version of
   hvm_scale_tsc() as below:
       uint64_t mult, frac;
       mult    = ratio >> 32;
       frac    = ratio & ((1ULL << 32) - 1);
       offset  = host_tsc * mult;
       offset += (host_tsc >> 32) * frac;
       offset += ((host_tsc & ((1ULL << 32) - 1)) * frac) >> 32;

I am not sure I understand the last line (or maybe the last two lines).

If by 'offset' here you are trying to calculate the scaled version of the
host TSC, then I think it would be

(host_tsc * (ratio >> 32)) + ( (host_tsc * (ratio & 0xffffffff)) >> 32 )

(sanity check: assuming host_tsc is 8 and the ratio is 1.5 (i.e. 0x180000000) we get 12)
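
For reference, a minimal standalone sketch of this 32.32 fixed-point scaling
(scale_tsc() below is a made-up name for illustration, not the patchset's
hvm_scale_tsc()), checked against the sanity-check numbers above:

    #include <stdint.h>
    #include <stdio.h>

    /* Compute (host_tsc * ratio) >> 32 for a 32.32 fixed-point ratio,
     * splitting host_tsc into high/low halves so the multiplication by
     * the fractional part does not overflow 64 bits. */
    static uint64_t scale_tsc(uint64_t host_tsc, uint64_t ratio)
    {
        uint64_t mult = ratio >> 32;                /* integer part */
        uint64_t frac = ratio & ((1ULL << 32) - 1); /* fractional part */
        uint64_t scaled;

        scaled  = host_tsc * mult;
        scaled += (host_tsc >> 32) * frac;
        scaled += ((host_tsc & ((1ULL << 32) - 1)) * frac) >> 32;
        return scaled;
    }

    int main(void)
    {
        /* host_tsc = 8, ratio = 1.5 (0x180000000) should give 12 */
        printf("%llu\n", (unsigned long long)scale_tsc(8, 0x180000000ULL));
        return 0;
    }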


-boris


   For testing goal (2), I applied the latter fix.


* Test for goal (1)

   * Environment
     (1) Xen (commit e08f383)
     (2) Host Linux kernel 3.19.0
     (3) Guest Linux kernel 3.19.0 & 4.2.0

   * Process
     (1) Apply the whole patchset on commit e08f383.

     (2) Launch an HVM domain from the configuration xl-high.cfg (in
         the attachment).

         Expected: The guest Linux should boot normally in the domain.

     (3) Execute the command "dmesg | grep -i tsc" in the guest Linux
         to check the TSC rate detected by the guest Linux.

         Expected: Suppose the detected TSC rate is 'gtsc_khz' (in KHz);
                   it should be as close as possible to the value of the
                   'vtsc_khz' option in xl-high.cfg.

     (4) Execute the program "./test_tsc <nr_secs> <gtsc_khz>" to check
         whether the guest TSC rate is synchronized with the wall clock.
         The code of test_tsc is also in the attachment (a rough sketch of
         such a program is shown after these steps). It records the
         beginning and ending TSC values (tsc0 and tsc1) over a period of
         nr_secs and outputs the result of
             (tsc1 - tsc0) / (gtsc_khz * 1000).

         Expected: The output should be as close to nr_secs as possible.

      The following steps test migration.

      (5) Save the current domain by "xl save hvm-test saved_domain".

      (6) Restore the domain.

      (7) Repeat step (4) above to check whether the guest TSC rate
          is still synchronized with the wall clock.

          Expected: the same as step (4).

      (8) Switch to the configuration xl-low.cfg and repeat steps
          (2) ~ (7) above.
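
   A rough sketch of what a test_tsc-style program could look like (the
   actual code is in the attachment and may differ; this is only meant to
   illustrate the measurement in step (4)):

       #include <stdio.h>
       #include <stdlib.h>
       #include <stdint.h>
       #include <unistd.h>
       #include <x86intrin.h>      /* __rdtsc() */

       int main(int argc, char **argv)
       {
           if (argc != 3) {
               fprintf(stderr, "usage: %s <nr_secs> <gtsc_khz>\n", argv[0]);
               return 1;
           }

           unsigned int nr_secs = atoi(argv[1]);
           double gtsc_khz = atof(argv[2]);

           uint64_t tsc0 = __rdtsc();   /* beginning TSC value */
           sleep(nr_secs);
           uint64_t tsc1 = __rdtsc();   /* ending TSC value */

           /* elapsed time as seen through the guest TSC */
           printf("Passed %f s\n", (tsc1 - tsc0) / (gtsc_khz * 1000.0));
           return 0;
       }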

   * Results (OK: All as expected)
     First round w/ xl-high.cfg (vtsc_khz = 4000000):
     (3) gtsc_khz = 4000000 KHz
     (4) ./test_tsc 10 4000000   outputs: Passed 9.99895 s
         ./test_tsc 3600 4000000 outputs: Passed 3599.99754 s
     (7) ./test_tsc 10 4000000   outputs: Passed 9.99885 s
         ./test_tsc 3600 4000000 outputs: Passed 3599.98987 s

     Second round w/ xl-low.cfg (vtsc_khz = 2000000):
     (3) gtsc_khz = 2000000 KHz
     (4) ./test_tsc 10 2000000   outputs: Passed 9.99886 s
         ./test_tsc 3600 2000000 outputs: Passed 3599.99810 s
     (7) ./test_tsc 10 2000000   outputs: Passed 9.99885 s
         ./test_tsc 3600 2000000 outputs: Passed 3599.99853 s

    I also switched the clocksource of the guest Linux to 'hpet' and got
    results very similar to the above.


* Test for goal (2)

   * Environment
     The same as above

   * Process
     (1) ~ (5): the same as above.
     (6) Reboot into the Xen hypervisor and toolstack w/o this patchset
         but w/ the bug fix described at the beginning, and restore the
         domain.
     (7) the same as above.

   * Results (Failed)
     (7) ./test_tsc 10 4000000 outputs: Passed 63.319284 s


* Conclusion

   This patchset works well for SVM TSC ratio and fixes existing bugs
   in SVM TSC ratio code.


Thanks for your patience in reading such a long email,
Haozhong



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

