Re: [PATCH v3 2/4] x86/hvm: fix handling of accesses to partial r/o MMIO pages
On 17.04.2025 18:23, Roger Pau Monné wrote:
> On Thu, Apr 17, 2025 at 05:38:54PM +0200, Jan Beulich wrote:
>> On 17.04.2025 17:25, Roger Pau Monne wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/x86/hvm/mmio.c
>>> @@ -0,0 +1,125 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>> +/*
>>> + * MMIO related routines.
>>> + *
>>> + * Copyright (c) 2025 Cloud Software Group
>>> + */
>>> +
>>> +#include <xen/io.h>
>>> +#include <xen/mm.h>
>>> +
>>> +#include <asm/p2m.h>
>>> +
>>> +static int cf_check subpage_mmio_accept(struct vcpu *v, unsigned long addr)
>>> +{
>>> +    p2m_type_t t;
>>> +    mfn_t mfn = get_gfn_query_unlocked(v->domain, PFN_DOWN(addr), &t);
>>> +
>>> +    return !mfn_eq(mfn, INVALID_MFN) && t == p2m_mmio_direct &&
>>> +           subpage_mmio_find_page(mfn);
>>> +}
>>> +
>>> +/*
>>> + * The guest has read access to those regions, and consequently read accesses
>>> + * shouldn't fault. However read-modify-write operations may take this path,
>>> + * so handling of reads is necessary.
>>> + */
>>> +static int cf_check subpage_mmio_read(
>>> +    struct vcpu *v, unsigned long addr, unsigned int len, unsigned long *data)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    unsigned long gfn = PFN_DOWN(addr);
>>> +    p2m_type_t t;
>>> +    mfn_t mfn;
>>> +    struct subpage_ro_range *entry;
>>> +    volatile void __iomem *mem;
>>> +
>>> +    *data = ~0UL;
>>> +
>>> +    if ( !IS_ALIGNED(len | addr, len) )
>>
>> What's the point of doing the | ? len can't be misaligned with itself?
>
> Hm, it's the same form that's used in mmio_ro_emulated_write(), I
> assumed it was to catch illegal access lengths, like 3.
Oh, I see. But that's not using IS_ALIGNED(), and imo validly so, despite the
apparent open-coding. IS_ALIGNED() requires the 2nd argument to be a power of
two. The combined check there is folding the power-of-2 one with the
is-aligned one.
Jan
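
For illustration, here is a minimal stand-alone sketch (not part of the patch;
the helper names are made up) of the distinction Jan describes: an
IS_ALIGNED()-style test only promises correct results for power-of-two
lengths, whereas the open-coded form in mmio_ro_emulated_write() folds the
power-of-two test and the natural-alignment test into one masked expression.

    #include <stdbool.h>

    /* IS_ALIGNED()-style test: only meaningful when len is a power of two. */
    static bool aligned_for_pow2_len(unsigned long addr, unsigned int len)
    {
        return !(addr & (len - 1));
    }

    /*
     * Open-coded combined test: ORing len into the value being masked also
     * rejects non-power-of-two lengths, e.g. len == 3 leaves bit 1 set under
     * the (len - 1) mask, so the access is refused regardless of addr.  The
     * real code additionally rejects a zero length separately.
     */
    static bool aligned_and_pow2_len(unsigned long addr, unsigned int len)
    {
        return !((addr | len) & (len - 1));
    }

With the folded form, a caller passing len == 3 is rejected no matter what
addr is, which is the behaviour Roger wanted to keep for illegal access
lengths.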