
Re: [Xen-devel] [PATCH v12 6/9] x86: collect global QoS monitoring information



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Friday, July 04, 2014 8:14 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@xxxxxxxxxx; Ian.Campbell@xxxxxxxxxx;
> George.Dunlap@xxxxxxxxxxxxx; Ian.Jackson@xxxxxxxxxxxxx;
> stefano.stabellini@xxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxx;
> konrad.wilk@xxxxxxxxxx; dgdegra@xxxxxxxxxxxxx; keir@xxxxxxx
> Subject: Re: [PATCH v12 6/9] x86: collect global QoS monitoring information
> 
> >>> On 04.07.14 at 10:34, <dongxiao.xu@xxxxxxxxx> wrote:
> > +        case XEN_SYSCTL_PQOS_MONITOR_get_l3_cache_size:
> > +            sysctl->u.pqos_monitor_op.data = boot_cpu_data.x86_cache_size;
> > +            break;
> 
> ISTR having asked before - is boot_cpu_data.x86_cache_size really
> always the L3 cache size?

On current machines with CQM enabled, boot_cpu_data.x86_cache_size reflects the
L3 cache size; that would only change if future parts gained further cache
levels (L4, L5, etc. - just guessing).
I think the current solution is okay, unless you prefer another round of CPUID
enumeration here.

> 
> > +        case XEN_SYSCTL_PQOS_MONITOR_get_socket_cpu:
> > +        {
> > +            unsigned int i, cpu;
> > +            unsigned int socket = sysctl->u.pqos_monitor_op.data;
> > +
> > +            for ( i = 0; i < nr_cpu_ids; i++ )
> > +            {
> > +                if ( cpu_to_socket(i) < 0 || cpu_to_socket(i) != socket )
> > +                    continue;
> > +                cpu = cpumask_any(per_cpu(cpu_core_mask, i));
> > +                if ( cpu < nr_cpu_ids )
> > +                {
> > +                    sysctl->u.pqos_monitor_op.data = cpu;
> > +                    break;
> > +                }
> > +            }
> > +
> > +            if ( i == nr_cpu_ids )
> > +                ret = -ESRCH;
> > +        }
> > +        break;
> 
> It should be possible for the tools to get this information by using
> XEN_SYSCTL_topologyinfo.

I noticed that XEN_SYSCTL_topologyinfo is a rather heavyweight hypercall: it
copies per-CPU topology arrays between the hypervisor and the guest, while here
we only need a single "int" result.
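That said, if we went that route, the tools side would just be a scan over the
cpu_to_socket array that the hypercall already returns, e.g. (illustrative
only, function and parameter names are hypothetical):

#include <stdint.h>

/* Pick any online CPU on the given socket from the cpu_to_socket[]
 * array filled in by XEN_SYSCTL_topologyinfo. */
static int first_cpu_on_socket(const uint32_t *cpu_to_socket,
                               unsigned int max_cpu_index,
                               uint32_t socket)
{
    unsigned int i;

    for ( i = 0; i <= max_cpu_index; i++ )
        /* Offline/absent CPUs are reported with an all-ones sentinel,
         * which never matches a valid socket number. */
        if ( cpu_to_socket[i] == socket )
            return i;

    return -1;
}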

> 
> > +
> > +        default:
> > +            sysctl->u.pqos_monitor_op.data = 0;
> > +            ret = -ENOSYS;
> > +            break;
> > +        }
> > +        copyback = 1;
> > +        break;
> > +
> >      default:
> >          ret = -ENOSYS;
> >          break;
> >      }
> >
> > +    if ( copyback && __copy_to_guest(u_sysctl, sysctl, 1) )
> > +        ret = -EFAULT;
> 
> That's not very nice here: You only ever want to copy back a single
> field (sysctl->u.pqos_monitor_op.data), so please do just that at
> the end of the case XEN_SYSCTL_pqos_monitor_op handling.
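For clarity, I read that as something like the following (a minimal sketch,
assuming it sits at the end of the XEN_SYSCTL_pqos_monitor_op case in
arch_do_sysctl()):

        /* Copy back only the single field this handler may have
         * modified, instead of setting a global copyback flag for
         * the whole sysctl structure. */
        if ( __copy_field_to_guest(u_sysctl, sysctl, u.pqos_monitor_op.data) )
            ret = -EFAULT;
        break;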
> 
> > +#define XEN_SYSCTL_PQOS_MONITOR_get_total_rmid            0
> > +#define XEN_SYSCTL_PQOS_MONITOR_get_l3_upscaling_factor   1
> > +#define XEN_SYSCTL_PQOS_MONITOR_get_l3_cache_size         2
> > +#define XEN_SYSCTL_PQOS_MONITOR_get_socket_cpu            3
> > +#define XEN_SYSCTL_PQOS_MONITOR_cqm_enabled               4
> > +struct xen_sysctl_pqos_monitor_op {
> > +    uint32_t cmd;
> > +    uint64_aligned_t data;
> 
> Please explicitly add a field between the two (e.g. named "flags"),
> and check it to be zero in the handler. That way we have room for
> extending the interface without bumping the sysctl interface
> version.

Okay.
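For the record, I understand that as something like the layout below (sketch
only):

#define XEN_SYSCTL_PQOS_MONITOR_get_total_rmid            0
#define XEN_SYSCTL_PQOS_MONITOR_get_l3_upscaling_factor   1
#define XEN_SYSCTL_PQOS_MONITOR_get_l3_cache_size         2
#define XEN_SYSCTL_PQOS_MONITOR_get_socket_cpu            3
#define XEN_SYSCTL_PQOS_MONITOR_cqm_enabled               4
struct xen_sysctl_pqos_monitor_op {
    uint32_t cmd;           /* IN: XEN_SYSCTL_PQOS_MONITOR_* */
    uint32_t flags;         /* IN: must be zero for now */
    uint64_aligned_t data;  /* IN/OUT */
};

As I understand it, this also makes the 4 bytes of padding currently sitting
before the uint64_aligned_t member explicit and reusable for future
extensions.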

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

