[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4 development update)



George Dunlap wrote on 2014-01-25:
> On 01/24/2014 03:02 PM, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 15:56 +0100, Tim Deegan wrote:
>>> At 14:47 +0000 on 24 Jan (1390571231), George
> Dunlap wrote:
>>>> On 01/17/2014 09:40 AM, Ian Campbell wrote:
>>>>> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>>>>>> As Andrew said, nested is still in an experimental stage, because
>>>>>> there are still lots of scenarios I have not covered in my testing. So
>>>>>> it may not be accurate to say it is well supported. But I hope people
>>>>>> know that nested is ready to use now, and I encourage them to try it
>>>>>> and report bugs to us to push nested forward.
>>>>> Perhaps we could say it is "tech preview" rather than "experimental"?
>>>> If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMWare } and Win7 XP
>>>> compatibility mode are tested regularly, and only HyperV, L2 shadow, and
>>>> paging / PoD don't work, I think we should be able to call this a "1.0"
>>>> release for nested virt.  Then we can add in "now works with HyperV",
>>>> "Now works with shadow", "Now works with paging" as those become
> mature.
>>> That depends on what the failure modes are for the other cases --
>>> esp. given that the L1 guest's choice of hypervisor, shadow vs HAP &c,
>>> are not under the control of the L0 admin.  I think that has to be
>>> clearly understood before we encourage people to turn this on.
>> Especially in the light of the previous two bugs here which let the
>> guest admin crash the host, in at least one of the two cases even if the
>> host admin had disabled nested virt for that guest (and I think it was
>> actually in both cases...)
> 
> Right -- well I think then we need to help try to define some criteria
> that VMX nested virt would need to meet for portions of it to stop being
> considered "experimental" or "tech preview".  Just a couple of angles:
> 
> * L1 / L2 guests tested.  What do people think of the mix of L1 / L2
> guests there?  They look like a pretty good combination to me.
> 
> * L2 workloads tested
> 
> Other than booting, what kinds of workloads are run in the L2 guests?
> Do the L2 guests ever get into heavy swapping scenarios, for instance?

Currently we have not started workload testing. I expect more bugs will 
surface once workloads are run inside the guest. :)

> 
> * Minimum subset of functionality
> 
> I think it makes sense to explicitly say that we support only certain
> hypervisors, and to not support some advanced features in L2 guests.
>
> Saying only L1 HAP L2 HAP is reasonable, I think.  No HyperV, no L2
> shadow, no PoD are reasonable restrictions; it should be fine for us to
> say that the L1 admin enables that, and badness ensues, he has only
> himself to blame.

I think Hyper-V should be acceptable. 
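
For context, the restriction George describes (L1 on HAP, nested HVM enabled
by the host admin) corresponds to a guest config fragment along these lines;
this is just a sketch, with the guest name and sizes illustrative, and the
exact option semantics per xl.cfg:

```
# L1 guest config fragment (sketch): nested HVM with HAP only.
# nestedhvm exposes hardware virt extensions to the L1 guest;
# hap = 1 keeps the L1 guest itself on HAP rather than shadow.
builder   = "hvm"
name      = "l1-guest"    # illustrative
memory    = 4096
vcpus     = 4
nestedhvm = 1
hap       = 1
```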

>
> 
> * Security
> 
> That said, I think we must assume that some of our users will have L0
> admin != L1 admin.  This means that L1 admin must not be able to do
> anything to crash L0.  In the PoD case above, for example, if L1 enables
> PoD or paging, it might cause locking issue in L0; that's not acceptable.
> 
> Anything else?
> 

Should we consider save/restore and migration of L1? I believe those also 
don't work currently.
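
For reference, exercising those paths on the L0 host would be something like
the following (guest name, file path, and target host are placeholders):

```shell
# Sketch of the L1 save/restore and migration paths to test:
xl save l1-guest /var/lib/xen/l1-guest.chkpt   # suspend L1, write its state
xl restore /var/lib/xen/l1-guest.chkpt         # resume L1 from the saved image
xl migrate l1-guest other-l0-host              # live-migrate L1 to another L0
```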

>   -George


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

