
Re: [Xen-devel] [PATCH RFC v2 10/12] x86: allocate per-vcpu stacks for interrupt entries



>>> On 22.01.18 at 13:32, <jgross@xxxxxxxx> wrote:
> In case of XPTI being active for a pv-domain allocate and initialize
> per-vcpu stacks. The stacks are added to the per-domain mappings of
> the pv-domain.

Considering the intended use of these stacks (as per the overview
mail), 32k per vCPU is a non-negligible amount of extra memory use.

> +static int pv_vcpu_init_xpti(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct page_info *pg;
> +    void *ptr;
> +    struct cpu_info *info;
> +    unsigned long stack_bottom;
> +    int rc;
> +
> +    /* Populate page tables. */
> +    rc = create_perdomain_mapping(d, XPTI_START(v), STACK_PAGES,
> +                                  NIL(l1_pgentry_t *), NULL);
> +    if ( rc )
> +        goto done;
> +
> +    /* Map stacks. */
> +    rc = create_perdomain_mapping(d, XPTI_START(v), IST_MAX,
> +                                  NULL, NIL(struct page_info *));
> +    if ( rc )
> +        goto done;
> +
> +    ptr = alloc_xenheap_page();
> +    if ( !ptr )
> +    {
> +        rc = -ENOMEM;
> +        goto done;
> +    }
> +    clear_page(ptr);
> +    addmfn_to_perdomain_mapping(d, XPTI_START(v) + STACK_SIZE - PAGE_SIZE,
> +                                _mfn(virt_to_mfn(ptr)));

This can't be create_perdomain_mapping() because of ...? If it's
the Xen heap page you use here - that would be the next question:
Does it need to be such, rather than a domheap one? I do see ...

> +    info = (struct cpu_info *)((unsigned long)ptr + PAGE_SIZE) - 1;
> +    info->flags = ON_VCPUSTACK;
> +    v->arch.pv_vcpu.stack_regs = &info->guest_cpu_user_regs;

... this pointer, but without a clear picture on intended use it's
hard to judge.

> +    /* Map TSS. */
> +    rc = create_perdomain_mapping(d, XPTI_TSS(v), 1, NULL, &pg);
> +    if ( rc )
> +        goto done;
> +    info = (struct cpu_info *)(XPTI_START(v) + STACK_SIZE) - 1;

IIUC this is a pointer one absolutely must not de-reference. A bit
dangerous, I would say, all the more so since further up the same
variable is being de-referenced.

Also I would assume the TSS can be mapped r/o.

> +    stack_bottom = (unsigned long)&info->guest_cpu_user_regs.es;
> +    ptr = __map_domain_page(pg);
> +    tss_init(ptr, stack_bottom);
> +    unmap_domain_page(ptr);
> +
> +    /* Map stub trampolines. */
> +    rc = create_perdomain_mapping(d, XPTI_TRAMPOLINE(v), 1, NULL, &pg);
> +    if ( rc )
> +        goto done;
> +    ptr = __map_domain_page(pg);
> +    write_stub_trampoline((unsigned char *)ptr, XPTI_TRAMPOLINE(v),

I would be very surprised if you really needed the cast here.

> @@ -25,6 +25,21 @@
>   */
>  
>  /*
> + * The vcpu stacks used for XPTI are arranged similarly to the physical cpu
> + * stacks, with some modifications. The main differences are the primary stack
> + * size (only 1 page) and the use of the otherwise unused mappings for TSS and IDT.
> + *
> + * 7 - Primary stack (with a struct cpu_info at the top)
> + * 6 - unused
> + * 5 - TSS

Judging by the comment this might mean "TSS / IDT", or slot 4 or 6
might be used for the IDT. OTOH I don't see any IDT related logic in
pv_vcpu_init_xpti(). Please clarify this.

> @@ -37,10 +52,24 @@ struct vcpu;
>  
>  struct cpu_info {
>      struct cpu_user_regs guest_cpu_user_regs;
> -    unsigned int processor_id;
> -    struct vcpu *current_vcpu;
> -    unsigned long per_cpu_offset;
> -    unsigned long cr4;
> +    union {
> +        /* per physical cpu mapping */
> +        struct {
> +            struct vcpu *current_vcpu;
> +            unsigned long per_cpu_offset;
> +            unsigned long cr4;
> +        };
> +        /* per vcpu mapping (xpti) */
> +        struct {
> +            unsigned long pad1;
> +            unsigned long pad2;
> +            unsigned long stack_bottom_cpu;
> +        };

In order to avoid accidental use in the wrong context as much as
possible, I think you want to name both structures.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

