
Re: [Xen-devel] [RFC PATCH v1 00/16] xen: sched: implement core-scheduling


  • To: Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Fri, 12 Oct 2018 10:35:35 +0200
  • Cc: Wei Liu <wei.liu2@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Bhavesh Davda <bhavesh.davda@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 12 Oct 2018 08:35:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 12/10/2018 09:49, Dario Faggioli wrote:
> On Fri, 2018-10-12 at 07:15 +0200, Juergen Gross wrote:
>> On 11/10/2018 19:37, Dario Faggioli wrote:
>>>
>>> So, for example:
>>> - domain A has vCore0 and vCore1
>>> - each vCore has 2 threads ({vCore0.0, vCore0.1} and
>>>   {vCore1.0, vCore1.1})
>>> - we're on a 2-way SMT host
>>> - vCore1 is running on physical core 3 on the host
>>> - more specifically, vCore1.0 is currently executing on thread 0 of
>>>   physical core 3 of the host, and vCore1.1 is currently executing
>>>   on thread 1 of core 3 of the host
>>> - say that both vCore1.0 and vCore1.1 are in guest context
>>>
>>> Now:
>>> * vCore1.0 blocks. What happens?
>>
>> It goes to vBlocked (the physical thread sits in the hypervisor,
>> waiting either for a (core-)scheduling event or for vCore1.0 to
>> unblock). vCore1.1 keeps running. Or, if vCore1.1 is already
>> vIdle/vBlocked, vCore1 switches to blocked and the scheduler looks
>> for another vCore to schedule on the physical core.
>>
> Ok. And then we'll have one thread in guest context, and one thread in
> Xen (albeit, idle, in this case). In these other cases...
> 
>>> * vCore1.0 makes a hypercall. What happens?
>>
>> Same as today. The hypercall is executed.
>>
>>> * vCore1.0 VMEXITs. What happens?
>>
>> Same as today. The VMEXIT is handled.
>>
> ... we have one thread in guest context, and one thread in Xen, and the
> one in Xen is not just staying idle, it's doing hypercalls and VMEXIT
> handling.
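
To make the per-thread states above concrete, here is a toy model of
the blocking rule, in plain C with invented names (a sketch of the
behaviour described, not actual Xen code):

    #include <stdbool.h>
    #include <stddef.h>

    /* Possible contexts of a vThread; VT_GUEST and VT_IN_XEN both
     * count as "active" for the blocking rule. */
    enum vthread_state { VT_GUEST, VT_IN_XEN, VT_BLOCKED };

    struct vcore {
        size_t nr_threads;
        enum vthread_state state[2];  /* 2-way SMT, as in the example */
    };

    /*
     * Called when vThread 'i' of a vCore blocks. Returns true if the
     * whole vCore should be descheduled, so another vCore can be
     * picked for the physical core; returns false if a sibling is
     * still active, in which case thread 'i' just sits idle in the
     * hypervisor, waiting for a (core-)scheduling event or an unblock.
     */
    static bool vthread_block(struct vcore *vc, size_t i)
    {
        vc->state[i] = VT_BLOCKED;
        for (size_t t = 0; t < vc->nr_threads; t++)
            if (vc->state[t] != VT_BLOCKED)
                return false;
        return true;
    }

In this model the hypercall and VMEXIT cases need no scheduling
decision at all: the vThread merely moves between VT_GUEST and
VT_IN_XEN.
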
> 
>> In case you're referring to a potential rendezvous for e.g. L1TF
>> mitigation: this would be handled in a scheduler-agnostic way.
>>
> Yes, that was what I was thinking of. I.e., in order to be able to use
> core-scheduling as a _fully_effective_ mitigation for stuff like L1TF,
> we'd need something like that.
> 
> In fact, core-scheduling per se mitigates leaks among guests, but if
> we want to fully prevent two threads from ever being in different
> security contexts (like one in guest and one in Xen, to keep Xen data
> from leaking to a guest), we do need some kind of synchronized Xen
> entries/exits, AFAIUI.
> 
> What I'm trying to understand right now is whether implementing
> things the way you're proposing would help achieve that. And what
> I've understood so far is that, no, it doesn't.

This aspect will need about the same effort in both solutions.
Maybe my proposal would make it easier to decide whether such a
rendezvous is needed, as there would be only one instance to ask
(schedule.c) instead of multiple instances (sched_*.c).
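
As a rough illustration of what such a scheduler-agnostic rendezvous
could look like, here is a minimal, self-contained C sketch (all names
are invented, and the IPI that forces siblings out of guest context is
omitted):

    #include <stdatomic.h>
    #include <stdbool.h>

    enum sched_gran { SCHED_GRAN_CPU, SCHED_GRAN_CORE };

    /* Per-core state; 'gran' would be owned by the single generic
     * instance (schedule.c), which is the "one instance to ask". */
    struct core_sync {
        enum sched_gran gran;
        unsigned int nr_threads;   /* hyperthreads on this core */
        atomic_uint in_xen;        /* siblings currently inside Xen */
    };

    static bool needs_rendezvous(const struct core_sync *cs)
    {
        return cs->gran == SCHED_GRAN_CORE;
    }

    /* Entry side: nobody proceeds with sensitive hypervisor work
     * until every sibling has left guest context. */
    static void sync_enter(struct core_sync *cs)
    {
        atomic_fetch_add(&cs->in_xen, 1);
        if (!needs_rendezvous(cs))
            return;
        while (atomic_load(&cs->in_xen) < cs->nr_threads)
            ;  /* spin; real code would pause and have a timeout */
    }

    /* The exit side would be the mirror image: wait until all
     * siblings are ready to return to guest context, then leave
     * together (a real implementation needs a generation count so
     * the barrier can be reused). */

With that single predicate living in the generic layer, none of the
sched_*.c implementations would need to know about the rendezvous at
all.
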

> 
> The main difference between the two approaches would be that we
> implement it once, in schedule.c, for all schedulers. But I see this
> as something with both upsides and downsides (yeah, like everything
> on Earth, I know! :-P). More on this later.
> 
>>> All in all, I like the idea, because it is about introducing nice
>>> abstractions, it is general, etc., but it looks like a major rework
>>> of
>>> the scheduler.
>>
>> Correct. Finally something to do :-p
>>
> Indeed! :-)
> 
>>> Note that, while this series, which tries to implement
>>> core-scheduling for Credit1, is rather long and messy, doing the
>>> same (and with a similar approach) for Credit2 is a lot easier and
>>> nicer. I have it almost ready, and will send it soon.
>>
>> Okay, but would it keep vThreads of the same vCore always running
>> together on the same physical core?
>>
> It doesn't right now, as we don't yet have a way to expose such
> information to the guest. And since, without such a mechanism, the
> guest can't take advantage of something like this (from either a
> performance or a vulnerability-mitigation point of view), I kept
> that out.
> 
> But I can certainly see about making it do so (I was already
> planning to).
> 
>>> Right. But again, in Credit2, I've been able to implement
>>> socket-wise coscheduling with this approach (I mean, an approach
>>> similar to the one in this series, but adapted to Credit2).
>>
>> And then there still is sched_rt.c
>>
> Ok, so I think this is the main benefit of this approach. We do the
> thing once, and all schedulers get core-scheduling (or whatever
> granularity of group scheduling we implement/allow).
> 
> But how easy is it to opt out, if one doesn't want it? E.g., in the
> context of L1TF, what if I'm not affected, and hence am not interested
> in core-scheduling? What if I want a cpupool with core-scheduling and
> one without?
> 
> I may be wrong, but off the top of my head it seems to me that
> doing things in schedule.c makes this a lot harder, if possible at
> all.

Why? This would be a per-cpupool setting, so the scheduling granularity
would live in struct scheduler.
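
As a minimal sketch of that (invented names; each cpupool already has
its own scheduler instance, so a field there is naturally per-pool):

    enum sched_gran {
        SCHED_GRAN_CPU,     /* today's behaviour: schedule single vCPUs */
        SCHED_GRAN_CORE,    /* core-scheduling */
        SCHED_GRAN_SOCKET,  /* socket-wise coscheduling */
    };

    struct scheduler {
        const char *name;
        enum sched_gran granularity;  /* chosen when the cpupool is set up */
        /* ... the usual per-scheduler hooks ... */
    };

A pool that isn't worried about L1TF would simply stay at
SCHED_GRAN_CPU, while another pool on the same host could use
SCHED_GRAN_CORE.
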


Juergen

