
Re: [Xen-devel] [PATCH 3/3] x86/AMD-Vi: Fix IVRS HPET special->handle override



On 9/24/2013 1:47 AM, Jan Beulich wrote:
> On 23.09.13 at 18:48, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx> wrote:
>> I am a bit confused about what you want to do.  I believe all the systems
>> at this point should have only one HPET.  That's why the code only has one
>> data structure for initializing one HPET.  Are you expecting that certain
>> systems could have more than one HPET?
> Of course. The code assuming just one HPET is wrong in the first
> place, so I'm really not looking towards making matters worse.
> The only valid a priori information we have is that there's
> exactly one HPET nominated as the legacy replacement one
> (through the ACPI HPET table), but there could be more (as
> said, this is quite likely on multi-node NUMA systems).
Ok, after looking into the Intel HPET specification, I can see that the spec allows a particular system to have multiple HPET blocks, and Jan is correct that only one of them is required to be listed in the ACPI HPET table.
The rest are described in the ACPI namespace.
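
For reference, the fixed "HPET" description table only has room for a single timer block, which is why any further blocks have to come from the namespace. A simplified sketch of the table layout (field names paraphrased from the ACPICA struct acpi_table_hpet; the stand-in types below are illustrative, not the real ACPI headers):

    #include <stdint.h>

    /* Simplified stand-ins for the ACPICA types; the real definitions live
     * in the ACPI headers (struct acpi_table_header / acpi_generic_address). */
    struct acpi_table_header_s { uint8_t raw[36]; };
    struct acpi_generic_address_s {
        uint8_t  space_id;               /* 0 = system memory */
        uint8_t  bit_width, bit_offset, access_width;
        uint64_t address;                /* MMIO base of the timer block */
    } __attribute__((packed));

    /* One "HPET" table describes exactly one event timer block: a base
     * address plus an HPET sequence number (the "block id").  Additional
     * HPET blocks must be declared as PNP0103 devices in the namespace. */
    struct acpi_table_hpet_s {
        struct acpi_table_header_s    header;
        uint32_t                      id;           /* event timer block ID    */
        struct acpi_generic_address_s address;      /* base address (GAS)      */
        uint8_t                       sequence;     /* HPET number / block id  */
        uint16_t                      minimum_tick; /* min tick, periodic mode */
        uint8_t                       flags;        /* page protection bits    */
    } __attribute__((packed));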

I looked at "arch/x86/hpet.c" and saw that it supports discovery via the ACPI HPET table. However, there is only a single "hpet_address" and "hpet_blockid", which are initialized in "arch/x86/acpi/boot.c: acpi_parse_hpet()". If the code were to support more than one HPET,
these would also have to be changed.  Do you expect such a change here as well?
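
To make the question concrete, here is a rough sketch of what replacing the single hpet_address/hpet_blockid pair might look like (purely illustrative, not a proposed patch; the array bound and names are made up):

    /* Illustrative sketch only.  Today a single pair of globals is filled in
     * by acpi_parse_hpet(); supporting several HPET blocks would mean
     * something along these lines instead. */
    #define MAX_HPET_BLOCKS 4                /* hypothetical upper bound */

    struct hpet_block {
        unsigned long address;               /* MMIO base of the block        */
        unsigned int blockid;                /* HPET sequence / block number  */
    };

    static struct hpet_block hpet_blocks[MAX_HPET_BLOCKS];
    static unsigned int nr_hpet_blocks;

    /* Record a block discovered either from the fixed HPET table (index 0,
     * the legacy-replacement one) or from a namespace walk. */
    static int register_hpet_block(unsigned long address, unsigned int blockid)
    {
        if (nr_hpet_blocks >= MAX_HPET_BLOCKS)
            return -1;
        hpet_blocks[nr_hpet_blocks].address = address;
        hpet_blocks[nr_hpet_blocks].blockid = blockid;
        nr_hpet_blocks++;
        return 0;
    }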

Also, I don't see the code that would walk the ACPI namespace anywhere. Does it exist?
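
As far as I can tell Xen only parses the static tables and has no AML interpreter, so such a walk would have to happen where full ACPICA is available (e.g. dom0). Just to illustrate what I mean, a hypothetical Linux-style sketch using the ACPICA device walk and the standard HPET _HID "PNP0103" (none of this exists in the Xen tree today):

    #include <linux/acpi.h>   /* ACPICA: acpi_get_devices(), acpi_handle, AE_OK */

    /* Hypothetical callback: invoked once for every namespace device whose
     * _HID/_CID matches "PNP0103".  A real implementation would evaluate the
     * device's _CRS here to obtain the MMIO base of that HPET block. */
    static acpi_status hpet_namespace_cb(acpi_handle handle, u32 nesting_level,
                                         void *context, void **return_value)
    {
        unsigned int *count = context;

        (*count)++;
        return AE_OK;          /* keep walking the rest of the namespace */
    }

    /* Count the HPET device objects declared in the ACPI namespace. */
    static unsigned int count_namespace_hpets(void)
    {
        unsigned int count = 0;

        /* "PNP0103" is the standard _HID for HPET devices. */
        acpi_get_devices("PNP0103", hpet_namespace_cb, &count, NULL);
        return count;
    }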

FYI, I have checked with the hardware platforms team, and there are no AMD systems with multiple southbridges (the HPET is in the southbridge). I also checked a system with 2 SR56xx chips (each contains an IOMMU), and there is only one southbridge.

Suravee


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

