
Re: [PATCH 3/3] hvm/pirq: allow control domains usage of PHYSDEVOP_{un,}map_pirq


  • To: Alex Olson <this.is.a0lson@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 4 Mar 2022 08:02:00 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 04 Mar 2022 07:02:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.03.2022 18:14, Alex Olson wrote:
> I wasn't sure of the distinction between hardware domain and control domain 
> for these commands, but they appear to be blocked at the moment when dom0 
> executes them, including many issued during boot. Are you suggesting I use 
> is_hardware_domain(currd) instead in my diff?
> 
> Or should the hardware domain always be able to execute any physdev op 
> command (such as bypassing the switch statement entirely)?

No, certainly not. Restricting PVH Dom0 was deliberate; only PV Dom0 is
supposed to be able to access all sub-ops.
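As to the hardware/control distinction: the two predicates are separate
in the tree. Paraphrased from xen/include/xen/sched.h (not verbatim; the
exact definitions vary by version and omit speculation-safety wrappers
here):

    /* The domain carrying the privileged flag -- Dom0, or a designated
     * toolstack domain in a disaggregated setup. */
    static inline bool is_control_domain(const struct domain *d)
    {
        return d->is_privileged;
    }

    /* The one domain owning physical devices and fielding their IRQs. */
    static inline bool is_hardware_domain(const struct domain *d)
    {
        return d == hardware_domain;
    }

In a conventional setup Dom0 is both at once; the two differ only in
disaggregated configurations.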

> It looks like hvm_physdev_op() is the only real caller of do_physdev_op(), 
> and several other commands (besides the ones in the diff below) are also 
> blocked by the default case of hvm_physdev_op():
> 
> PHYSDEVOP_pirq_eoi_gmfn_v2
> PHYSDEVOP_pirq_eoi_gmfn_v1
> PHYSDEVOP_IRQ_UNMASK_NOTIFY // legacy?
> PHYSDEVOP_apic_read
> PHYSDEVOP_apic_write
> PHYSDEVOP_alloc_irq_vector
> PHYSDEVOP_set_iopl
> PHYSDEVOP_set_iobitmap
> PHYSDEVOP_restore_msi
> PHYSDEVOP_restore_msi_ext
> PHYSDEVOP_setup_gsi
> PHYSDEVOP_get_free_pirq
> PHYSDEVOP_dbgp_op
> 
> Thanks
> 
> -Alex
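
The blocking you describe is indeed the default case of that switch.
Roughly, the dispatch has this shape (a simplified sketch, not verbatim
from xen/arch/x86/hvm/hypercall.c):

    static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        const struct domain *currd = current->domain;

        switch ( cmd )
        {
        case PHYSDEVOP_map_pirq:
        case PHYSDEVOP_unmap_pirq:
        case PHYSDEVOP_eoi:
        case PHYSDEVOP_irq_status_query:
            /* PIRQ sub-ops are forwarded only for domains with emulated
             * PIRQ support. */
            if ( !has_pirq(currd) )
                return -ENOSYS;
            break;

        default:
            /* Anything not whitelisted above -- including the sub-ops
             * you list -- is refused for HVM/PVH callers. */
            return -ENOSYS;
        }

        /* Whitelisted sub-ops reach the common handler. */
        return do_physdev_op(cmd, arg);
    }

So nothing on your list is blocked by accident: a sub-op becomes
available to PVH only by being whitelisted explicitly.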

Also - please don't top-post.

Jan

> On Thu, 2022-03-03 at 17:47 +0100, Jan Beulich wrote:
>> On 03.03.2022 17:45, Alex Olson wrote:
>>> --- a/xen/arch/x86/hvm/hypercall.c
>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>> @@ -84,6 +84,17 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>  
>>>      switch ( cmd )
>>>      {
>>> +
>>> +    case PHYSDEVOP_manage_pci_add:
>>> +    case PHYSDEVOP_manage_pci_remove:
>>> +    case PHYSDEVOP_pci_device_add:
>>> +    case PHYSDEVOP_pci_device_remove:
>>> +    case PHYSDEVOP_manage_pci_add_ext:
>>> +    case PHYSDEVOP_prepare_msix:
>>> +    case PHYSDEVOP_release_msix:
>>> +        if ( is_control_domain(currd) )
>>> +            break;
>>
>> These are all operations which I think are purposefully permitted to
>> be invoked by the hardware domain only. That's where all the devices
>> live when they're not passed through to guests.
>>
>> Jan
>>
> 
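
For concreteness: a gate matching the hardware-domain-only intent above
would follow the file's existing idiom, e.g. (an illustrative sketch, not
a committed change):

    case PHYSDEVOP_pci_device_add:
    case PHYSDEVOP_pci_device_remove:
        /* Physical-device management belongs to the hardware domain,
         * where devices live when not passed through to guests. */
        if ( !is_hardware_domain(currd) )
            return -ENOSYS;
        break;

Unlike the is_control_domain() variant in the patch, this keeps the
sub-ops unavailable to a control domain that is not also the hardware
domain.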