Re: [Xen-devel] [PATCH v2] IOMMU: make DMA containment of quarantined devices optional

On 13.12.2019 14:12, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Jan
>> Beulich
>> Sent: 13 December 2019 12:53
>> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
>> Cc: Juergen Gross <jgross@xxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
>> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Julien Grall
>> <julien@xxxxxxx>; Wei Liu <wl@xxxxxxx>; Konrad Wilk
>> <konrad.wilk@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxxxxx>;
>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Paul Durrant <paul@xxxxxxx>;
>> Ian Jackson <ian.jackson@xxxxxxxxxx>; Roger Pau Monné
>> <roger.pau@xxxxxxxxxx>
>> Subject: [Xen-devel] [PATCH v2] IOMMU: make DMA containment of quarantined
>> devices optional
>>
>> Containing still in-flight DMA was introduced to work around certain
>> devices / systems hanging hard upon hitting an IOMMU fault. Passing
>> through (such) devices (on such systems) is inherently insecure (as
>> guests could easily arrange for IOMMU faults to occur). Defaulting to
>> a mode where admins may not even become aware of issues with devices
>> can be considered undesirable. Therefore convert this mode of operation
>> to an optional one, not one enabled by default.
>>
>> This involves resurrecting code that commit ea38867831da ("x86 / iommu:
>> set up a scratch page in the quarantine domain") removed, in a slightly
>> extended and abstracted fashion. Here, instead of reintroducing a pretty
>> pointless use of "goto" in domain_context_unmap(), and instead of making
>> the function (at least temporarily) inconsistent, take the opportunity
>> to replace the other, similarly pointless "goto" as well.
>>
>> In order to key the reinstated bypasses off of there (not) being a root
>> page table, this further requires moving the allocate_domain_resources()
>> invocation from reassign_device() to amd_iommu_setup_domain_device() (or
>> else reassign_device() would allocate a root page table anyway); this is
>> benign to the second caller of the latter function.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> As far as 4.13 is concerned, I guess if we can't come to an agreement
>> here, the only other option is to revert ea38867831da from the branch,
>> for having been committed prematurely (I'm not so much worried about the
>> master branch, where we have ample time until 4.14). What I surely want
>> to see us avoid is a back and forth in behavior of released versions.
>> (Note that 4.12.2 is similarly blocked on a decision either way here.)
>>
>> I'm happy to take better suggestions to replace "full".
> How about simply "sink", since that's what it does?

But it's not really a "sink", as we still fault writes (which is the
only thing I can see to be "sunk", if I'm getting the meaning of the
word right).
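
To make the semantics under discussion concrete, here is a standalone,
compilable sketch of the two quarantine behaviours as described in this
thread. Everything except the IOMMU_quarantine_* constants is invented
for illustration; this is not Xen code and "none" mode isn't modelled:

    /* Sketch only - quarantine_dma_faults() is a made-up name. */
    #include <stdbool.h>
    #include <stdio.h>

    #define IOMMU_quarantine_none  0
    #define IOMMU_quarantine_basic 1
    #define IOMMU_quarantine_full  2

    /* Does a DMA access from a quarantined device take an IOMMU fault? */
    static bool quarantine_dma_faults(unsigned int mode, bool is_write)
    {
        if ( mode == IOMMU_quarantine_full )
            return is_write;  /* reads are satisfied from the read-only
                                 scratch page; writes still fault */
        return true;          /* basic mode: no root page table at all,
                                 so every access faults */
    }

    int main(void)
    {
        printf("full/read : %s\n",
               quarantine_dma_faults(IOMMU_quarantine_full, false)
               ? "faults" : "contained");
        printf("full/write: %s\n",
               quarantine_dma_faults(IOMMU_quarantine_full, true)
               ? "faults" : "contained");
        return 0;
    }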

>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -30,13 +30,17 @@ bool_t __initdata iommu_enable = 1;
>>  bool_t __read_mostly iommu_enabled;
>>  bool_t __read_mostly force_iommu;
>>  bool_t __read_mostly iommu_verbose;
>> -bool __read_mostly iommu_quarantine = true;
>>  bool_t __read_mostly iommu_igfx = 1;
>>  bool_t __read_mostly iommu_snoop = 1;
>>  bool_t __read_mostly iommu_qinval = 1;
>>  bool_t __read_mostly iommu_intremap = 1;
>>  bool_t __read_mostly iommu_crash_disable;
>> +#define IOMMU_quarantine_none  0
>> +#define IOMMU_quarantine_basic 1
>> +#define IOMMU_quarantine_full  2
>> +uint8_t __read_mostly iommu_quarantine = IOMMU_quarantine_basic;
> If we have 'IOMMU_quarantine_sink' instead of 'IOMMU_quarantine_full',
> then how about 'IOMMU_quarantine_write_fault' instead of
> 'IOMMU_quarantine_basic'?

Why "write_fault"? Even in "full" mode you only avoid read faults
aiui (see also above). So if anything "write_fault" would be a
replacement for "full"; "basic" could be replaced by just "fault"

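Whichever name wins, the command-line handling would presumably end up
along these lines. This is a standalone sketch with an invented
parse_quarantine() helper, not the actual patch hunk (the real change
hooks into Xen's existing "iommu=" option parsing):

    #include <stdint.h>
    #include <string.h>

    #define IOMMU_quarantine_none  0
    #define IOMMU_quarantine_basic 1
    #define IOMMU_quarantine_full  2

    static uint8_t iommu_quarantine = IOMMU_quarantine_basic;

    /* Accept the existing boolean forms plus one keyword for the new
       mode; the keyword itself ("full" below) is the contested name. */
    static int parse_quarantine(const char *s)
    {
        if ( !strcmp(s, "no") || !strcmp(s, "0") )
            iommu_quarantine = IOMMU_quarantine_none;
        else if ( !strcmp(s, "yes") || !strcmp(s, "1") )
            iommu_quarantine = IOMMU_quarantine_basic;
        else if ( !strcmp(s, "full") )  /* or "sink", "write-fault", ... */
            iommu_quarantine = IOMMU_quarantine_full;
        else
            return -1;                  /* unrecognised value */

        return 0;
    }

    int main(void)
    {
        return parse_quarantine("full") == 0 &&
               iommu_quarantine == IOMMU_quarantine_full ? 0 : 1;
    }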
