
Re: [Xen-devel] [PATCH 2 of 6 V6] amd iommu: call guest_iommu_set_base from hvmloader



>>> On 15.10.12 at 14:23, Wei Wang <wei.wang2@xxxxxxx> wrote:
> On 10/15/2012 12:11 PM, Jan Beulich wrote:
>>>>> On 15.10.12 at 12:00, Wei Wang<wei.wang2@xxxxxxx>  wrote:
>>> On 09/27/2012 10:27 AM, Jan Beulich wrote:
>>>>>>> On 26.09.12 at 16:46, Wei Wang<wei.wang2@xxxxxxx>   wrote:
>>>>> @@ -3834,6 +3835,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
>>>>>               case HVM_PARAM_BUFIOREQ_EVTCHN:
>>>>>                   rc = -EINVAL;
>>>>>                   break;
>>>>> +            case HVM_PARAM_IOMMU_BASE:
>>>>> +                rc = guest_iommu_set_base(d, a.value);
>>>>
>>>> This suggests that you're allowing for only a single IOMMU per
>>>> guest - is that not going to become an issue sooner or later?
>>>
>>> I think that one IOMMU per guest is probably enough, because the guest
>>> IVRS table is totally virtual and does not reflect the PCI topology of
>>> any real system. Even if qemu supports multiple PCI buses, we can still
>>> group them together into one virtual IVRS table. It might become an
>>> issue if qemu uses multiple PCI segments, but so far even hardware
>>> IOMMUs only use segment 0. Additionally, the guest IOMMU is only used
>>> by ATS-capable GPUs; normal passthrough devices should not make use of
>>> it. So, what do you think?
>>
>> Especially the multi-segment aspect makes me think that the
>> interface should allow for multiple IOMMUs, even if the
>> implementation supports only one for now.
> 
> Ok, then I will rework the interface to take the IOMMU segment as an 
> additional parameter.

That'll likely make the interface even more ugly than the more
flexible variant allowing for multiple IOMMUs independent of
the segment they're on/for. But let's see what you come up
with...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

