
Re: [PATCH V5 10/22] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu



On 28.01.2021 15:21, Julien Grall wrote:
> On 28/01/2021 13:53, Jan Beulich wrote:
>> On 28.01.2021 14:41, Julien Grall wrote:
>>> On 25/01/2021 19:08, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>>
>>>> The IOREQ is a common feature now and these fields will be used
>>>> on Arm as is. Move them to the common struct vcpu as part of a new
>>>> struct vcpu_io, and drop the duplicated "io" prefixes. Also move
>>>> enum hvm_io_completion to xen/sched.h and remove the "hvm" prefixes.
>>>>
>>>> This patch completely removes the layering violation in the common code.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>> Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
>>>> Reviewed-by: Paul Durrant <paul@xxxxxxx>
>>>> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>> CC: Julien Grall <julien.grall@xxxxxxx>
>>>> [On Arm only]
>>>> Tested-by: Wei Chen <Wei.Chen@xxxxxxx>
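
For context, the common layout the commit message describes is roughly the
sketch below. The names follow the description above (a common struct
vcpu_io, with the "io"/"hvm" prefixes dropped); treat this as an
illustrative reconstruction rather than the verbatim patch hunk.

    /* xen/include/xen/sched.h (sketch): common replacement for the former
     * x86-only enum hvm_io_completion and hvm_vcpu_io request fields. */
    enum vio_completion {
        VIO_no_completion,
        VIO_mmio_completion,
        VIO_pio_completion,
    #ifdef CONFIG_X86
        VIO_realmode_completion,
    #endif
    };

    struct vcpu_io {
        /* I/O request in flight to device model. */
        enum vio_completion  completion;
        ioreq_t              req;   /* ioreq_t from public/hvm/ioreq.h */
    };

    /* struct vcpu then gains a "struct vcpu_io io;" member, so (roughly)
     * the former v->arch.hvm.hvm_io.io_req / io_completion accesses become
     * v->io.req / v->io.completion in common code. */
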
>>>
>>> I seem to have trouble running Xen on an x86 platform after this patch
>>> is applied (see trace below).
>>>
>>> The bisector pointed to this patch, but I can't quite figure out why
>>> it is breaking.
>>>
>>> Does this ring any bell to someone?
>>
>> Memory overwriting / corruption? This ...
>>
>>> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>>> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=n gcov=y  Tainted:   C   ]----
>>> (XEN) CPU:    1
>>> (XEN) RIP:    e008:[<ffff82d04041c1c7>]
>>> x86_64/entry.S#restore_all_guest+0x7/0x145
>>> (XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor (d0v0)
>>> (XEN) rax: 00000000000000ff   rbx: ffff83027c806000   rcx: ffff82d0406c9a80
>>> (XEN) rdx: 0000000000000000   rsi: fffffffffffffed9   rdi: 0000000000000001
>>> (XEN) rbp: ffff83027c887df0   rsp: ffff83027c887ef8   r8:  00000000aaa8946e
>>> (XEN) r9:  0000000000000002   r10: ffff83027c806040   r11: ffff83027c8cc020
>>> (XEN) r12: ffff83027c80f000   r13: ffff83027c895000   r14: 0000000000000000
>>> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000003426e0
>>> (XEN) cr3: 0000000273a2d000   cr2: 0000000000000000
>>> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>> (XEN) Xen code around <ffff82d04041c1c7>
>>> (x86_64/entry.S#restore_all_guest+0x7/0x145):
>>> (XEN)  00 48 8b 93 98 0d 00 00 <44> 8b 3a 4c 8b 8b 68 0b 00 00 ba ff 7f
>>
>> ... is
>>
>> restore_all_guest:
>>          ASSERT_INTERRUPTS_DISABLED
>>
>>          /* Stash guest SPEC_CTRL value while we can read struct vcpu. */
>>          mov VCPU_arch_msrs(%rbx), %rdx
>>          mov VCPUMSR_spec_ctrl_raw(%rdx), %r15d
>>
>> i.e. Dom0/vCPU0's v->arch.msrs got zeroed in an unintended way,
>> hence %rdx is zero here (%rbx looks at least plausible).
>>
>> I take it that you double-checked this isn't an incremental build
>> issue, i.e. entry.o for some reason not having been rebuilt despite
>> struct vcpu's layout having changed?
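
In C terms, the two loads quoted above amount roughly to the sketch below.
VCPU_arch_msrs and VCPUMSR_spec_ctrl_raw are offsets emitted into the
build-time generated asm-offsets.h, which is why a stale asm-offsets.h (or a
stale entry.o) after struct vcpu's layout change would make the first load
fetch an unrelated, possibly zero, field. This is an illustrative sketch,
not the literal Xen sources.

    /* Rough C equivalent of the two instructions in restore_all_guest: */
    const struct vcpu_msrs *msrs = v->arch.msrs;     /* mov VCPU_arch_msrs(%rbx), %rdx         */
    uint32_t guest_spec_ctrl = msrs->spec_ctrl.raw;  /* mov VCPUMSR_spec_ctrl_raw(%rdx), %r15d */

    /* The assembler-visible offsets come from asm-offsets.c, roughly:
     *   OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
     * If that header (or entry.o) is not regenerated after struct vcpu
     * grows the new common vcpu_io fields, VCPU_arch_msrs points at the
     * wrong member, %rdx ends up bogus/zero, and the fault above follows. */
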
> 
> I was going to follow up on my e-mail with more debugging information. 
> It seems that this is a build issue, as the error disappears if I clean 
> the repo.
> 
> The error happens when I move from staging to a branch with this series 
> applied without cleaning the tree. It also happens the other way 
> around.
> 
> Removing entry.o or asm-offsets.h before building doesn't help. Any 
> other ideas?

No, I'd need to know how exactly to repro and then try to debug.

> On a side note, it looks like asm-offsets.h doesn't get rebuilt when 
> the Kconfig changes. I noticed an issue when trying to turn on the perf 
> counters.

That's bad and needs fixing, assuming the Kconfig change does in fact
incur a change to asm-offsets.h. Otherwise there might be a
move-if-changed rule somewhere (correctly) preventing an unnecessary
rebuild.
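
For reference, the idiom being referred to works roughly as sketched below:
the generated header is written to a temporary file and only moved over the
real asm-offsets.h when its contents actually differ, so objects that
include it are not rebuilt needlessly. The flip side is that the
regeneration step itself must still be triggered by the Kconfig change in
the first place. This is an illustrative sketch of the pattern, with a
placeholder extraction command, not Xen's exact build rule.

    # "move-if-changed" idiom (illustrative sketch, not Xen's exact makefile):
    define move-if-changed
    	if ! cmp -s $(1) $(2); then mv -f $(1) $(2); else rm -f $(1); fi
    endef

    # Hypothetical rule: regenerate into a temporary file, then replace the
    # header only if something actually changed, so its many includers are
    # not rebuilt for nothing.
    asm-offsets.h: asm-offsets.s
    	generate-offsets $< > $@.new    # placeholder for the real extraction step
    	$(call move-if-changed,$@.new,$@)
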

Jan



 

