
Re: [Xen-devel] [XEN PATCH 1/2] hvm: Support more than 32 VCPUS when migrating.



>>> On 09.04.14 at 17:27, <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Apr 09, 2014 at 10:06:12AM +0100, Jan Beulich wrote:
>> >>> On 08.04.14 at 19:25, <konrad@xxxxxxxxxx> wrote:
>> > --- a/xen/arch/x86/hvm/hvm.c
>> > +++ b/xen/arch/x86/hvm/hvm.c
>> > @@ -3470,6 +3470,9 @@ static long hvm_vcpu_op(
>> >      case VCPUOP_stop_singleshot_timer:
>> >      case VCPUOP_register_vcpu_info:
>> >      case VCPUOP_register_vcpu_time_memory_area:
>> > +    case VCPUOP_down:
>> > +    case VCPUOP_up:
>> > +    case VCPUOP_is_up:
>> 
>> This, if I checked it properly, leaves only VCPUOP_initialise,
>> VCPUOP_send_nmi, and VCPUOP_get_physid disallowed for HVM.
>> None of these looks inherently bad to use from HVM (though
>> VCPUOP_initialise would certainly need closer checking), so I
>> wonder whether the wrapper shouldn't either be dropped altogether
>> or at least be converted from a white-list approach to a black-list one.
> 
> I was being conservative here because I did not want to allow the
> other ones without at least testing them.
> 
> Perhaps that can be done as a separate patch, and this just as
> a bug-fix?

I'm clearly not in favor of the patch as is - at a minimum I'd want
it to convert the white list to a black list. And once you do that, it
would seem rather natural not to pointlessly add entries.

Jan
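
To make the suggestion concrete, below is a minimal sketch of the
black-list conversion described above. The "static long hvm_vcpu_op("
shape comes from the quoted hunk; the parameter list, the forwarding
call to do_vcpu_op(), and the -ENOSYS return value are assumptions made
for illustration, not something shown in this thread.

static long hvm_vcpu_op(
    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
{
    long rc;

    switch ( cmd )
    {
    /*
     * Black list: only the ops called out in the review remain
     * disallowed for HVM guests.  VCPUOP_initialise in particular
     * would need closer checking before it could be permitted.
     */
    case VCPUOP_initialise:
    case VCPUOP_send_nmi:
    case VCPUOP_get_physid:
        rc = -ENOSYS;
        break;
    default:
        rc = do_vcpu_op(cmd, vcpuid, arg);
        break;
    }

    return rc;
}

With this shape, VCPUOP_down, VCPUOP_up and VCPUOP_is_up fall through
to the default case and are forwarded, so no new case labels are
needed - which is the "not pointlessly add entries" point above.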

