
Re: [Xen-devel] [PATCH 06/13] libx86: Introduce a helper to serialise a cpuid_policy object


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 5 Jul 2018 14:34:57 +0100
  • Cc: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 05 Jul 2018 13:35:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 05/07/18 09:46, Jan Beulich wrote:
>>>> On 04.07.18 at 18:46, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 04/07/18 10:01, Jan Beulich wrote:
>>>>>> On 03.07.18 at 22:55, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> --- a/xen/common/libx86/cpuid.c
>>>> +++ b/xen/common/libx86/cpuid.c
>>>> @@ -34,6 +34,100 @@ const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
>>>>  }
>>>>  
>>>>  /*
>>>> + * Copy a single cpuid_leaf into a provided xen_cpuid_leaf_t buffer,
>>>> + * performing boundary checking against the buffer size.
>>>> + */
>>>> +static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
>>>> +                               const struct cpuid_leaf *data,
>>>> +                               cpuid_leaf_buffer_t leaves,
>>>> +                               uint32_t *curr_entry, const uint32_t nr_entries)
>>>> +{
>>>> +    const xen_cpuid_leaf_t val = {
>>>> +        leaf, subleaf, data->a, data->b, data->c, data->d,
>>>> +    };
>>>> +
>>>> +    if ( *curr_entry == nr_entries )
>>>> +        return -ENOBUFS;
>>>> +
>>>> +    if ( copy_to_buffer_offset(leaves, *curr_entry, &val, 1) )
>>>> +        return -EFAULT;
>>>> +
>>>> +    ++*curr_entry;
>>> Following on from what Wei has said - you don't mean to have a way
>>> here, then, to indicate to a higher-up caller how many slots would
>>> have been needed?
>> I don't understand your query.  An individual build has a compile-time
>> static maximum number of leaves, and this number can be obtained in the
>> usual way by making a hypercall with a NULL guest handle.
> My point is that this generally is a sub-optimal interface. Seeing how
> closely tied libxc is to a specific hypervisor build (or at least version),
> I don't see why the caller couldn't set up a suitably sized array without
> first querying with a null handle, and only re-issue the call in the
> unlikely event that a larger buffer is actually necessary.

I'm all for good interface design, but what you describe isn't plausibly
going to happen.

Code using the raw hypercall accessors in this series has no idea what
size the buffers need to be, and always needs to explicitly ask Xen.

Code in one of the followup series which allows for manipulation of the
policy objects entirely in the toolstack will use its own static idea of
the size of the policies, and never needs to ask Xen.  (At this point,
if you've got a mismatched Xen and Libxc, then tough - you've got no
option but to recompile.)

Anything else is unnecessary extra complexity.
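
For illustration, here is a rough caller-side sketch of the "ask Xen
first" pattern described above.  The wrapper name xc_cpuid_policy_query()
is a placeholder rather than the actual libxc API from this series; only
the two-call NULL-buffer/sized-buffer dance is the point:

#include <errno.h>
#include <stdlib.h>
#include <xenctrl.h>          /* xc_interface, xen_cpuid_leaf_t */

/* Placeholder wrapper name - not the real libxc helper. */
int query_policy_leaves(xc_interface *xch, uint32_t domid,
                        xen_cpuid_leaf_t **leaves_out, uint32_t *nr_out)
{
    uint32_t nr = 0;
    xen_cpuid_leaf_t *leaves;
    int rc;

    /* First call with a NULL buffer: Xen reports the required entry count. */
    rc = xc_cpuid_policy_query(xch, domid, NULL, &nr);
    if ( rc )
        return rc;

    leaves = calloc(nr, sizeof(*leaves));
    if ( !leaves )
        return -ENOMEM;

    /* Second call with a suitably sized buffer. */
    rc = xc_cpuid_policy_query(xch, domid, leaves, &nr);
    if ( rc )
    {
        free(leaves);
        return rc;
    }

    *leaves_out = leaves;
    *nr_out = nr;
    return 0;
}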

>
>>>> +int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
>>>> +                             cpuid_leaf_buffer_t leaves,
>>>> +                             uint32_t *nr_entries_p)
>>>> +{
>>>> +    const uint32_t nr_entries = *nr_entries_p;
>>>> +    uint32_t curr_entry = 0, leaf, subleaf;
>>>> +
>>>> +#define COPY_LEAF(l, s, data)                                       \
>>>> +    ({  int ret;                                                    \
>>>> +        if ( (ret = copy_leaf_to_buffer(                            \
>>>> +                  l, s, data, leaves, &curr_entry, nr_entries)) )   \
>>>> +            return ret;                                             \
>>>> +    })
>>>> +
>>>> +    /* Basic leaves. */
>>>> +    for ( leaf = 0; leaf <= MIN(p->basic.max_leaf,
>>>> +                                ARRAY_SIZE(p->basic.raw) - 1); ++leaf )
>>>> +    {
>>>> +        switch ( leaf )
>>>> +        {
>>>> +        case 0x4:
>>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->cache.raw); ++subleaf )
>>>> +                COPY_LEAF(leaf, subleaf, &p->cache.raw[subleaf]);
>>>> +            break;
>>>> +
>>>> +        case 0x7:
>>>> +            for ( subleaf = 0;
>>>> +                  subleaf <= MIN(p->feat.max_subleaf,
>>>> +                                 ARRAY_SIZE(p->feat.raw) - 1); ++subleaf )
>>>> +                COPY_LEAF(leaf, subleaf, &p->feat.raw[subleaf]);
>>>> +            break;
>>>> +
>>>> +        case 0xb:
>>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->topo.raw); ++subleaf )
>>>> +                COPY_LEAF(leaf, subleaf, &p->topo.raw[subleaf]);
>>>> +            break;
>>>> +
>>>> +        case 0xd:
>>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->xstate.raw); ++subleaf )
>>>> +                COPY_LEAF(leaf, subleaf, &p->xstate.raw[subleaf]);
>>>> +            break;
>>>> +
>>>> +        default:
>>>> +            COPY_LEAF(leaf, XEN_CPUID_NO_SUBLEAF, &p->basic.raw[leaf]);
>>>> +            break;
>>>> +        }
>>>> +    }
>>>> +
>>>> +    COPY_LEAF(0x40000000, XEN_CPUID_NO_SUBLEAF,
>>>> +              &(struct cpuid_leaf){ p->hv_limit });
>>>> +    COPY_LEAF(0x40000100, XEN_CPUID_NO_SUBLEAF,
>>>> +              &(struct cpuid_leaf){ p->hv2_limit });
>>> Is it a good idea to produce wrong (zero) EBX, ECX, and EDX values here?
>> The handling of these leaves is currently problematic, and this patch is
>> bug-compatible with how DOMCTL_set_cpuid currently behaves (see
>> update_domain_cpuid_info()).
>>
>> Annoyingly, I need this marshalling series implemented before I can fix
>> the hypervisor leaves to use the "new" CPUID infrastructure; the main
>> complication being the dynamic location of the Xen leaves.
> Well, okay, but I'd prefer if such restrictions and bug-compatibilities
> were spelled out in the commit message.

I'll do that, and leave a /* TODO */ here.
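
(For reference, the zero EBX/ECX/EDX values fall straight out of C's
initialiser rules: the compound literal names only the first member, so
the remaining members are implicitly zero.  A standalone illustration,
not part of the patch:

#include <assert.h>
#include <stdint.h>

struct cpuid_leaf { uint32_t a, b, c, d; };

int main(void)
{
    uint32_t hv_limit = 0x40000005;   /* arbitrary example value */

    /* Only .a is given a value; .b, .c and .d are implicitly zero. */
    struct cpuid_leaf val = (struct cpuid_leaf){ hv_limit };

    assert(val.a == hv_limit);
    assert(val.b == 0 && val.c == 0 && val.d == 0);
    return 0;
}
)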

>> Eventually, the interface will be that Xen leaves live at 0x40000000 and
>> the toolstack can manipulate a subset of the information by providing
>> leaves in the usual manner.  To enable viridian, the toolstack writes
>> HyperV's signature at 0x40000000, and Xen's at 0x40000100.  This also
>> allows for a mechanism to hide the Xen CPUID leaves by writing a 0 max leaf.
>>
>> Amongst other things, this will allow sensible control of the Viridian
>> features without having to squeeze more bits into the HVMPARAM.
> Ah, interesting - you basically mean to deprecate the current way of
> configuring Viridian features then, if I get this right?

Correct.  The xl.cfg interface can remain the same, but this new
libxc/Xen interface will be far more flexible than the current "all or
nothing" approach.

~Andrew
