
Re: [Xen-devel] [PATCH 1/3] xen/vt-d: need barriers to workaround CLFLUSH



>>> On 04.05.15 at 11:14, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 04/05/2015 09:52, Jan Beulich wrote:
>>>>> On 04.05.15 at 04:16, <tiejun.chen@xxxxxxxxx> wrote:
>>> --- a/xen/drivers/passthrough/vtd/x86/vtd.c
>>> +++ b/xen/drivers/passthrough/vtd/x86/vtd.c
>>> @@ -56,7 +56,9 @@ unsigned int get_cache_line_size(void)
>>>  
>>>  void cacheline_flush(char * addr)
>>>  {
>>> +    mb();
>>>      clflush(addr);
>>> +    mb();
>>>  }
>> I think the purpose of the flush is to force write back, not to evict
>> the cache line, and if so wmb() would appear to be sufficient. As
>> the SDM says that's not the case (CLFLUSH is only guaranteed to be
>> ordered by MFENCE), a comment explaining why wmb() is not
>> sufficient would seem necessary. Plus in the description I think
>> "serializing" needs to be changed to "fencing", as serialization is
>> not what we really care about here. If you and the maintainers
>> agree, I could certainly fix up both aspects while committing.
> 
> On the subject of writebacks, we should get around to wiring up
> clflushopt and clwb via the alternatives framework; either of them
> would be better than a clflush in this case (avoiding the need for the
> leading mfence).
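
For illustration, a minimal sketch of that direction (hypothetical helper
names, not in-tree code; a real patch would also key this off a
CLFLUSHOPT feature check):

/*
 * Hypothetical sketch: write one cache line back with CLFLUSHOPT.
 * The 0x66 prefix turns the CLFLUSH encoding into CLFLUSHOPT, which
 * avoids requiring an assembler that knows the mnemonic.
 */
static inline void cacheline_writeback(void *addr)
{
    asm volatile ( ".byte 0x66; clflush %0"
                   : "+m" (*(volatile char *)addr) );
}

/*
 * CLFLUSHOPT/CLWB are weakly ordered against flushes of other lines,
 * so one SFENCE after the whole batch suffices - no MFENCE per line.
 */
static inline void cacheline_flush_fence(void)
{
    asm volatile ( "sfence" ::: "memory" );
}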

Plus the barriers would perhaps better sit around the loop
invoking cacheline_flush() in __iommu_flush_cache(). I also
wonder whether the VT-d code shouldn't use the flushing code
already available elsewhere in the system, and whether that
code then wouldn't need barriers added (or use clflushopt/clwb
as you suggest) instead.
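
Roughly along these lines (a sketch only, assuming the current shape of
__iommu_flush_cache() and its iommus_incoherent check; not a tested
patch):

/*
 * Sketch: hoist the fences out of cacheline_flush() so that flushing a
 * buffer of N lines costs two fences instead of 2*N.
 */
static void __iommu_flush_cache(void *addr, unsigned int size)
{
    unsigned int i;
    unsigned int clflush_size = get_cache_line_size();

    if ( !iommus_incoherent )
        return;

    mb();                        /* order earlier table writes before the flushes */
    for ( i = 0; i < size; i += clflush_size )
        clflush((char *)addr + i);   /* bare CLFLUSH, no per-line fencing */
    mb();                        /* CLFLUSH is only guaranteed ordered by MFENCE */
}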

Jan

