
Re: [Xen-devel] [PATCH v10 09/25] x86: refactor psr: L3 CAT: set value: implement framework.



On 17-04-12 03:09:56, Jan Beulich wrote:
> >>> On 12.04.17 at 07:53, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> > On 17-04-11 09:01:53, Jan Beulich wrote:
> >> >>> On 01.04.17 at 15:53, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> >> > +        info->cos_ref[cos]--;
> >> > +        spin_unlock(&info->ref_lock);
> >> > +
> >> > +        d->arch.psr_cos_ids[socket] = 0;
> >> > +    }
> >> 
> >> Overall, while you say in the revision log that this was a suggestion of
> >> mine, I don't recall any such (and I've just checked the v9 thread of
> >> this patch without finding anything), and hence it's not really clear to
> >> me why this is needed. After all you should be freeing info anyway
> > 
> > We discussed this in patch 05 of the v9 series.
> 
> Ah, that's why I didn't find it.
> 
> > I have pasted it below for your convenience to check.
> > [Sun Yi]:
> >   > So, you think the MSR values may not be valid after such a process
> >   > and reloading (writing the MSRs back to their default values) is
> >   > needed. If so, I would like to do more operations in 'free_feature()':
> >   > 1. Iterate all domains working on the offline socket to change
> >   >    'd->arch.psr_cos_ids[socket]' to COS 0, i.e. restore it back to its
> >   >    init status.
> >   > 2. Restore 'socket_info[socket].cos_ref[]' to all 0.
> >   > 
> >   > These can restore the socket's info completely back to its init
> >   > status.
> > 
> > [Jan]
> >   Yes, that's what I think is needed.
> > 
> >> (albeit I can't see this happening, which would look to be a bug in
> >> patch 5), so getting the refcounts adjusted seems pointless in any
> >> event. Whether d->arch.psr_cos_ids[socket] needs clearing I'm not
> > 
> > We only free the resources held in 'socket_info[socket]' but do not free
> > 'socket_info[socket]' itself. Below is how we allocate 'socket_info', so
> > 'socket_info[socket]' is not a pointer that can be directly freed.
> >   socket_info = xzalloc_array(struct psr_socket_info, nr_sockets);
> > 
> > That is the reason we decrement 'info->cos_ref[cos]' here.
> 
> I see. But then there's no need to decrement it for each domain
> using it, you could simply flush it to zero.
> 
> >> certain - this may indeed be unavoidable, to match up with
> >> psr_alloc_cos() using xzalloc.
> >> 
> >> Furthermore I'm not at all convinced this is appropriate to do in the
> >> context of a CPU_UP_CANCELED / CPU_DEAD notification: If you
> >> have a few thousand VMs, the loop above may take a while.
> >> 
> > Hmm, that is a potential issue. I have two proposals below. Could you
> > please check which one you prefer, or suggest another solution?
> > 
> > 1. Start a tasklet in free_socket_resources() to restore
> >    'psr_cos_ids[socket]' of all domains. The action is protected by
> >    'ref_lock' to avoid conflicts with 'psr_set_val'. We can decrement
> >    'info->cos_ref[cos]' in the tasklet, or memset the array to 0 in
> >    free_socket_resources().
> > 
> > 2. Move 'psr_cos_ids[]' from 'domain' to 'psr_socket_info' and change the
> >    index from 'socket' to 'domain_id'. So we keep all domains' COS IDs per
> >    socket and can memset the array to 0 when the socket goes offline. But
> >    there is an issue: we do not know how many members this array should
> >    have. I cannot find a macro like 'DOMAIN_MAX_NUMBER'. So, I prefer to
> >    reallocate in 'psr_alloc_cos' if the newly created domain's id is
> >    bigger than the current array size.
> 
> The number of domains is limited by the special DOMID_* values.
> However, allocating an array with 32k entries doesn't sound very
> reasonable.

I think 32K entries would be the extreme case. I can allocate e.g. 100 entries
when the first domain is created and, if a new domain's id exceeds the current
size, reallocate another 100 entries. The total number of entries actually
allocated should stay well below 32K. Tracking the per-domain COS IDs is a
functional requirement which cannot be avoided. What do you think?
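
Just to make the idea concrete, below is a rough sketch of the reallocation.
The field names ('dom_cos_ids', 'cos_ids_len'), the helper name and the chunk
size macro are purely illustrative, not part of the current patches:

#define COS_IDS_CHUNK 100

/*
 * Grow the per-socket COS ID array so that 'domid' has a slot.  Entries
 * are allocated in chunks of COS_IDS_CHUNK instead of 32K up front.
 */
static int grow_dom_cos_ids(struct psr_socket_info *info, domid_t domid)
{
    unsigned int new_len;
    unsigned int *new_ids;

    if ( domid < info->cos_ids_len )
        return 0;

    new_len = (domid / COS_IDS_CHUNK + 1) * COS_IDS_CHUNK;
    new_ids = xzalloc_array(unsigned int, new_len);
    if ( !new_ids )
        return -ENOMEM;

    /* Existing domains keep their COS IDs; new slots start at COS 0. */
    if ( info->dom_cos_ids )
        memcpy(new_ids, info->dom_cos_ids,
               info->cos_ids_len * sizeof(*new_ids));
    xfree(info->dom_cos_ids);
    info->dom_cos_ids = new_ids;
    info->cos_ids_len = new_len;

    return 0;
}

'psr_alloc_cos' would call this before touching the new domain's COS ID, and
free_socket_resources() could then simply memset the whole array to 0.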

> Sadly the other solution doesn't look very attractive
> either, as there'll be quite a bit of synchronization needed (you'd
> have to defer the same socket being re-used by a CPU being
> onlined until you've done the cleanup). This may need some more
> thought, but I can't see myself finding time for this any time soon,
> so I'm afraid I'll have to leave it to you for now.
> 
Yes, the first option needs careful handling of the synchronization, which
makes it more complex than the second option.
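
For reference, the cleanup for the first option would roughly look like the
sketch below. It assumes 'cos_ref' is a fixed-size array inside
'struct psr_socket_info' and that the function runs from a tasklet rather
than from the CPU notifier itself, so the notifier does not have to iterate
over thousands of domains:

static void psr_cos_ids_cleanup(unsigned int socket)
{
    struct psr_socket_info *info = socket_info + socket;
    struct domain *d;

    /* Flush all refcounts to zero instead of decrementing them per domain. */
    spin_lock(&info->ref_lock);
    memset(info->cos_ref, 0, sizeof(info->cos_ref));
    spin_unlock(&info->ref_lock);

    /* Restore every domain's COS ID on this socket to the default COS 0. */
    rcu_read_lock(&domlist_read_lock);
    for_each_domain ( d )
        d->arch.psr_cos_ids[socket] = 0;
    rcu_read_unlock(&domlist_read_lock);
}

The tricky part is still what you mention: a CPU on the same socket must not
be onlined (and the socket's feature info re-allocated) before this cleanup
has finished.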

> Jan
