
Re: [Xen-devel] [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests





On 11/22/2016 09:07 AM, Jan Beulich wrote:
On 22.11.16 at 13:28, <boris.ostrovsky@xxxxxxxxxx> wrote:


On 11/22/2016 05:37 AM, Jan Beulich wrote:
On 21.11.16 at 22:00, <boris.ostrovsky@xxxxxxxxxx> wrote:
@@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;

 /* Compatibility definitions for the default location (version 0). */
 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
+#define ACPI_PM1A_EVT_BLK_LEN        0x04
+#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
 #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
+#define ACPI_PM1A_CNT_BLK_LEN        0x02
+#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
 #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#define ACPI_PM_TMR_BLK_LEN          0x04
+#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
 #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
 #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0

+#if __XEN_INTERFACE_VERSION__ >= 0x00040800
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+/* Location of online VCPU bitmap. */
+#define XEN_ACPI_CPU_MAP             0xaf00
+#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
+
+#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
+#error "XEN_ACPI_CPU_MAP is too big"
+#endif
+
+/* GPE0 bit set during CPU hotplug */
+#define XEN_GPE0_CPUHP_BIT           2
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */

 #endif /* _IOREQ_H_ */

I'm afraid there's been some misunderstanding here during the v2
discussion: New hypervisor/tools only definitions don't need an
additional interface version guard. It's instead the pre-existing
ones which should be removed from the namespace by adding
such a guard.

We want to keep all of them now. Shouldn't then the interface check be
added after we move to Andrew's suggestion of dynamic IO ranges?

No, we want them gone from the public interface for new
consumers. The only valid consumer has always been the tool
stack, just that this hadn't been properly done in the header.
Hence the need to add __XEN__ / __XEN_TOOLS__ around new
additions, and additionally interface version guards around
existing ones.
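
(For illustration, the arrangement Jan is describing would presumably look something like the following -- a sketch using names already in the patch, not the actual change:

 /* Pre-existing names: keep them visible to the hypervisor/tools and to
  * consumers built against an older interface version, but remove them
  * from the namespace of new external consumers. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__) || \
     __XEN_INTERFACE_VERSION__ < 0x00040800
 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
 #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
 #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
 #endif

 /* New additions: hypervisor/tools only, XEN_ prefixed, with no
  * additional interface version guard. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 #define XEN_ACPI_CPU_MAP             0xaf00
 #define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
 #endif
)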


pmtimer.c uses some of those macros, so how will wrapping an interface version check around the existing definitions continue to work when the interface version is updated, unless dynamic IO ranges are also added by that time?


And of course _everything_ being added here anew needs to be
XEN_ prefixed and guarded.


The ones that I didn't add the prefix to are standard ACPI names. If
I did, this would look like:

#define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
+#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00

There's nothing standard here. Implementations are fine to specify
a larger length iirc, at least for ACPI v2 and newer. If these values
were all fixed, there wouldn't have been a need to specify them in
FADT in the first place.

I meant that, unlike XEN_ACPI_CPU_MAP which I added, it's the blocks themselves that are ACPI-standard, not the macros' values.

Are you asking to rename just the newly introduced definitions (lengths/offsets), or the existing address macros as well? Doing the latter will also require changes to at least pmtimer.c.

-boris


And as for being guarded, don't you think it will make things difficult
to read? E.g.:

#define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
+#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
+#endif

Repeated 3 times.

Then don't repeat it three times and put the lengths and bit offsets
after all the addresses.
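
(I.e., presumably something along these lines -- a sketch with a single guard for all the new lengths/offsets, reusing the values from the patch above:

 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
 #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
 #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0

 /* Lengths and bit offsets, grouped so the guard appears only once. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 #define XEN_ACPI_PM1A_EVT_BLK_LEN        0x04
 #define XEN_ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
 #define XEN_ACPI_PM1A_CNT_BLK_LEN        0x02
 #define XEN_ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
 #define XEN_ACPI_PM_TMR_BLK_LEN          0x04
 #define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
 #endif
)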

Jan


