Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
On 30.07.2019 16:42, Andrew Cooper wrote:
> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
> VMCS-shadowing buffer never be allocated. Fix it.
>
> This in turn results in a massive quantity of logspam, as every virtual
> vmentry/exit hits both gdprintk()s in the *_bulk() functions.

The "in turn" here applies to the original bug (which gets fixed here),
aiui, i.e. there isn't any log spam with the fix in place anymore, is
there? If so, ...

> Switch these to using printk_once(). The size of the buffer is chosen at
> compile time, so complaining about it repeatedly is of no benefit.

... I'm not sure I'd agree with this move: Why would it be of interest
only the first time that we (would have) overrun the buffer? After all,
it's not only the compile-time choice of buffer size that matters here,
but also the runtime aspect of what value "n" has got passed into the
functions. If this is on the assumption that we'd want to know merely of
the fact, not how often it occurs, then I'd think this ought to remain a
debugging printk().

> Finally, drop the runtime NULL pointer checks. It is not terribly appropriate
> to be repeatedly checking infrastructure which is set up from start-of-day,
> and in this case, actually hid the above bug.

I don't see how the repeated checking would have hidden any bug: Due to
the lack of the extra indirection, the pointer would have remained NULL,
and hence the log message would have appeared (as also mentioned above)
_until_ you had fixed the indirection mistake. (This isn't to say I'm
against dropping the check, I'd just like to understand the why.)
> @@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>      if ( !cpu_has_vmx_vmcs_shadowing )
>          goto fallback;
>
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>      {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);
>          goto fallback;
>      }
>
> @@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
>      if ( !cpu_has_vmx_vmcs_shadowing )
>          goto fallback;
>
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>      {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);

Would you mind taking the opportunity and also disambiguating the two
log messages, so that from observing one it is clear which instance it
was that got triggered?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel