
Re: [PATCH 3/9] tools/libx[cl]: Move processing loop down into xc_cpuid_set()


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Ian Jackson <ian.jackson@xxxxxxxxxx>
  • Date: Mon, 15 Jun 2020 15:54:51 +0100
  • Authentication-results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 15 Jun 2020 14:55:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Andrew Cooper writes ("[PATCH 3/9] tools/libx[cl]: Move processing loop down 
into xc_cpuid_set()"):
> Currently, libxl__cpuid_legacy() passes each element of the policy list to
> xc_cpuid_set() individually.  This is wasteful both in terms of the number of
> hypercalls made, and the quantity of repeated merging/auditing work performed
> by Xen.
> 
> Move the loop processing down into xc_cpuid_set(), which allows us to do one
> set of hypercalls, rather than one per list entry.
> 
> In xc_cpuid_set(), obtain the full host, guest max and current policies to
> begin with, and loop over the xend array, processing one leaf at a time.
> Replace the linear search with a binary search, seeing as the serialised
> leaves are sorted.
> 
> No change in behaviour from the guest's point of view.

This is not my area of expertise.  Ideally, at this stage of the
release, I would like an ack from a second hypervisor maintainer.

The processing code in libxc looks OK to me.

Ian.
