
Re: [Xen-devel] [PATCH 1/2] AMD IOMMU: also spot missing IO-APIC entries in IVRS table



>>> On 06.02.13 at 15:41, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 2/6/2013 8:12 AM, Jan Beulich wrote:
> 
>>
>> +    /* Each IO-APIC must have been mentioned in the table. */
>> +    for ( apic = 0; !error && apic < nr_ioapics; ++apic )
>> +    {
>> +        if ( !nr_ioapic_entries[apic] ||
>> +             ioapic_sbdf[IO_APIC_ID(apic)].pin_setup )
>> +            continue;
>> +
>> +        printk(XENLOG_ERR "IVHD Error: no information for IO-APIC %#x\n",
>> +               IO_APIC_ID(apic));
>> +        if ( amd_iommu_perdev_intremap )
>> +            error = -ENXIO;
>> +        else
>> +        {
>> +            ioapic_sbdf[IO_APIC_ID(apic)].pin_setup = xzalloc_array(
>> +                unsigned long, BITS_TO_LONGS(nr_ioapic_entries[apic]));
>> +            if ( !ioapic_sbdf[IO_APIC_ID(apic)].pin_setup )
>> +            {
>> +                printk(XENLOG_ERR "IVHD Error: Out of memory\n");
>> +                error = -ENOMEM;
>> +            }
>> +        }
>> +    }
>> +
>>       return error;
>>   }
>>
> 
> Don't we end up with ioapic_sbdf[IO_APIC_ID(apic)].bdf/seg being 
> uninitialized? They are usually set in parse_ivhd_device_special(), at 
> the same time pin_setup is allocated, but with IVRS broken in this way 
> we'll never get there, will we?

Correct. .bdf/.seg being uninitialized is not much of a problem
when using global intremap tables, though. And certainly not on
a system with just a single IOMMU (as was the case on the
crashing system). Do you see alternatives? Disable the IOMMU
always, even when using global remap tables? That could be seen
as a regression, since global remap tables have at least worked
fine on such systems so far.
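
To make the dependency explicit, here is a minimal, self-contained
sketch (the helper and variable names are illustrative only, not the
actual Xen code) of why the stale .seg/.bdf only bites on the
per-device path:

/* Minimal, self-contained sketch -- NOT the actual Xen code; names are
 * illustrative only.  It shows why never-initialized ioapic_sbdf[]
 * .seg/.bdf is tolerable with a single global interrupt remapping
 * table but not with per-device tables. */
#include <stdint.h>
#include <stdio.h>

struct sbdf { uint16_t seg, bdf; };      /* stand-in for an ioapic_sbdf[] slot */

static int perdev_intremap;              /* stand-in for amd_iommu_perdev_intremap */
static int shared_table;                 /* stand-in for the one global table */

/* Hypothetical per-device lookup: an IO-APIC the IVRS never mentioned
 * yields no table, because its (seg,bdf) was never recorded. */
static void *perdev_lookup(uint16_t seg, uint16_t bdf)
{
    (void)seg; (void)bdf;
    return NULL;
}

static void *table_for_ioapic(const struct sbdf *s)
{
    if ( !perdev_intremap )
        return &shared_table;             /* .seg/.bdf never consulted */
    return perdev_lookup(s->seg, s->bdf); /* stale .seg/.bdf -> no table */
}

int main(void)
{
    struct sbdf missing = { 0, 0 };       /* IO-APIC absent from IVRS */

    perdev_intremap = 0;
    printf("global intremap : table=%p\n", table_for_ioapic(&missing));

    perdev_intremap = 1;
    printf("per-dev intremap: table=%p (NULL -> must refuse, -ENXIO)\n",
           table_for_ioapic(&missing));
    return 0;
}

With the global table the sbdf fields are simply never consulted,
which is why the patch refuses (-ENXIO) only in the perdev case.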

Jan




 

