
Re: [Xen-devel] about the function call memory_type_changed()



> >> I found the restore process of the live migration is quite long, so I
> >> tried to find out what was going on.
> >> By debugging, I found the most time-consuming part is restoring the
> >> VM's MTRR MSRs. This is done in hvm_load_mtrr_msr(), which calls
> >> memory_type_changed(), which eventually calls the time-consuming
> >> function flush_all().
> >>
> >> All of this is caused by the memory_type_changed() call added in your
> >> patch, here is the link:
> >> http://lists.xen.org/archives/html/xen-devel/2014-03/msg03792.html
> >>
> >> I am not sure whether it is necessary to call flush_all() at all, but
> >> even if it is necessary, a single call of hvm_load_mtrr_msr() causes
> >> dozens of calls to flush_all(), and each flush_all() call takes about
> >> 8 milliseconds. In my test environment the VM has 4 VCPUs, so
> >> hvm_load_mtrr_msr() is called four times, consuming about 500
> >> milliseconds in total. Obviously, there are far too many flush_all()
> >> calls.
> >>
> >> I think something should be done to solve this issue, do you think so?
> >
> > The flush_all() can't be avoided completely, as it is permitted to use
> > sethvmcontext on an already-running VM.  In this case, the flush
> > certainly does need to happen if altering the MTRRs has had a real
> > effect on dirty cache lines.

Yes, that's true. But I still don't understand why flush_all() is done only
when iommu_enabled is true. Could you explain why?
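
For reference, my understanding of the code in question is roughly the
following (a sketch from reading xen/arch/x86/hvm/mtrr.c around your patch;
the exact condition may differ in the current tree):

/* Called whenever a change may affect the effective memory types. */
void memory_type_changed(struct domain *d)
{
    if ( iommu_enabled && d->vcpu && d->vcpu[0] )
    {
        /* Force re-evaluation of the EPT memory types ... */
        p2m_memory_type_changed(d);
        /* ... and flush caches; this is the ~8ms-per-call part. */
        flush_all(FLUSH_CACHE);
    }
}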

> Plus the actual functions calling memory_type_changed() in mtrr.c can also
> be called while the VM is already running.
> 
> > However, having a batching mechanism across hvm_load_mtrr_msr() with a
> > single flush at the end seems like a wise move.
> 
> And that shouldn't be very difficult to achieve. Furthermore, perhaps it
> would be possible to check whether the VM did run at all already, and if
> it didn't we could avoid the flush altogether in the context load case?
> 

I have written a patch according to your suggestions. But there are still a
lot of flush_all() calls while the guest is booting, and these prolong the
guest boot time by about 600 ms.
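
For discussion, here is a minimal sketch of the kind of batching I mean.
The two flags, do_mtrr_msr_restore() and domain_has_run() are placeholder
names for illustration only, not existing Xen symbols:

/* Two new flags in struct hvm_domain (names made up for illustration): */
bool_t mtrr_load_in_progress;   /* defer flushes while loading MTRR state */
bool_t mem_type_flush_pending;  /* a deferred flush is still owed */

void memory_type_changed(struct domain *d)
{
    if ( d->arch.hvm_domain.mtrr_load_in_progress )
    {
        /* Just remember that a flush is needed; do it once at the end. */
        d->arch.hvm_domain.mem_type_flush_pending = 1;
        return;
    }
    if ( iommu_enabled && d->vcpu && d->vcpu[0] )
    {
        p2m_memory_type_changed(d);
        flush_all(FLUSH_CACHE);
    }
}

static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
{
    int rc;

    d->arch.hvm_domain.mtrr_load_in_progress = 1;

    /* Existing per-MSR restore goes here, unchanged.  Each MSR write may
     * call memory_type_changed(), which now only latches the pending flag
     * instead of flushing every time. */
    rc = do_mtrr_msr_restore(d, h);             /* placeholder name */

    d->arch.hvm_domain.mtrr_load_in_progress = 0;

    if ( d->arch.hvm_domain.mem_type_flush_pending )
    {
        d->arch.hvm_domain.mem_type_flush_pending = 0;
        /* If the domain has never run, there can be no dirty cache lines
         * relying on the old memory types, so even this single flush
         * could be skipped (domain_has_run() is hypothetical). */
        if ( domain_has_run(d) )
            memory_type_changed(d);
    }

    return rc;
}

Even with something like this, the guest's own MTRR/PAT writes during boot
would still each trigger a flush, which may be where the remaining ~600 ms
comes from.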




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

