Re: [Xen-devel] [PATCH RFC] xen/pvh: use a custom IO bitmap for PVH hardware domains
On 14/04/15 11:01, Roger Pau Monné wrote:
> On 08/04/15 at 14:57, Roger Pau Monné wrote:
>> Since a PVH hardware domain has access to the physical hardware, create a
>> custom, more permissive IO bitmap. The permissions set in the bitmap are
>> populated based on the contents of the ioports rangeset.
>>
>> Also add the IO ports of the serial console used by Xen to the list of
>> inaccessible IO ports.
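For reference, the overall shape of such a bitmap looks something like the
standalone sketch below. The names (hwdom_io_bitmap, io_deny_range, ...) and
the choice of 0x3f8-0x3ff for the console UART are made up for illustration;
this is not the patch's actual code. It does follow the VMX I/O bitmap
convention, where a set bit makes the access trap and a clear bit lets it
through to hardware.

/* Standalone sketch: build a permissive I/O port bitmap for a hardware
 * domain.  Convention follows the VMX I/O bitmaps: a set bit means the
 * access traps (is denied direct access); a clear bit passes through.
 * All names here are made up and are not Xen's actual interfaces. */
#include <stdint.h>
#include <stdio.h>

#define NR_IO_PORTS 0x10000

static uint8_t hwdom_io_bitmap[NR_IO_PORTS / 8];

/* Mark [start, end] as trapping (denied direct access). */
static void io_deny_range(unsigned int start, unsigned int end)
{
    for (unsigned int port = start; port <= end && port < NR_IO_PORTS; port++)
        hwdom_io_bitmap[port / 8] |= 1u << (port % 8);
}

/* Mark [start, end] as passed through to hardware. */
static void io_allow_range(unsigned int start, unsigned int end)
{
    for (unsigned int port = start; port <= end && port < NR_IO_PORTS; port++)
        hwdom_io_bitmap[port / 8] &= ~(1u << (port % 8));
}

static int io_port_allowed(unsigned int port)
{
    return !(hwdom_io_bitmap[port / 8] & (1u << (port % 8)));
}

int main(void)
{
    /* Start fully permissive, then punch out the UART Xen itself uses
     * (COM1 at 0x3f8-0x3ff here, purely as an example). */
    io_allow_range(0, NR_IO_PORTS - 1);
    io_deny_range(0x3f8, 0x3ff);

    printf("0x3f8 allowed: %d\n", io_port_allowed(0x3f8)); /* 0 */
    printf("0x80  allowed: %d\n", io_port_allowed(0x80));  /* 1 */
    return 0;
}

Built with a plain C compiler, this denies only the UART range and leaves
every other port passed through.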
> I have one question about the current IO port handling for PVH guests
> (DomU and Dom0). There's some code right now in vmx_vmexit_handler
> (EXIT_REASON_IO_INSTRUCTION) that's kind of PVH specific:
>
> if ( exit_qualification & 0x10 )
> {
>     /* INS, OUTS */
>     if ( unlikely(is_pvh_vcpu(v)) /* PVH fixme */ ||
>          !handle_mmio() )
>         hvm_inject_hw_exception(TRAP_gp_fault, 0);
> }
> else
> {
>     /* IN, OUT */
>     uint16_t port = (exit_qualification >> 16) & 0xFFFF;
>     int bytes = (exit_qualification & 0x07) + 1;
>     int dir = (exit_qualification & 0x08) ? IOREQ_READ : IOREQ_WRITE;
>
>     if ( handle_pio(port, bytes, dir) )
>         update_guest_eip(); /* Safe: IN, OUT */
> }
>
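(For reference, the exit-qualification fields that decode relies on, as a
standalone sketch with made-up names. The bit positions are the ones the
Intel SDM documents for I/O-instruction exits: bits 2:0 access size minus
one, bit 3 direction with 1 meaning IN, bit 4 string instruction, bits
31:16 port number.)

/* Standalone sketch of the exit-qualification decode quoted above, with
 * made-up struct/function names; not Xen code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct io_exit {
    uint16_t port;
    unsigned int bytes;
    bool is_in;      /* IN/INS vs. OUT/OUTS */
    bool is_string;  /* INS/OUTS */
};

static struct io_exit decode_io_exit(uint64_t exit_qualification)
{
    struct io_exit io = {
        .port      = (exit_qualification >> 16) & 0xffff,
        .bytes     = (exit_qualification & 0x07) + 1,
        .is_in     = exit_qualification & 0x08,
        .is_string = exit_qualification & 0x10,
    };
    return io;
}

int main(void)
{
    /* Example: a 1-byte IN from port 0x3f8. */
    struct io_exit io = decode_io_exit((0x3f8ULL << 16) | 0x08 | 0x00);

    printf("port=%#x bytes=%u dir=%s string=%d\n",
           (unsigned int)io.port, io.bytes,
           io.is_in ? "IN" : "OUT", io.is_string);
    return 0;
}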
> Is there any need for DomUs to access the IO ports?
In the case of PCI passthrough, the guest may need to use a device's IO BARs.
However, PCI passthrough for PVH is still a very open question, so
making a change here isn't really breaking anything.
> I know that FreeBSD
> will poke at some of them during boot to scan for devices, but I'm not
> sure if we could just make them no-ops in the PVH case and simply return
> garbage.
If anything, ~0 is what should be returned to match real hardware.
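That is, all ones truncated to the access width, roughly as below (a
standalone sketch, not Xen code; the handler name is made up):

/* Standalone sketch: what a read from a port the guest isn't allowed to
 * touch would return if we mimic real hardware, i.e. all ones truncated
 * to the access width. */
#include <stdint.h>
#include <stdio.h>

static uint64_t unclaimed_port_read(unsigned int bytes)
{
    /* 1, 2 or 4 byte accesses -> 0xff, 0xffff, 0xffffffff. */
    return (bytes >= 8) ? ~0ULL : (1ULL << (bytes * 8)) - 1;
}

int main(void)
{
    printf("%#llx\n", (unsigned long long)unclaimed_port_read(1)); /* 0xff */
    printf("%#llx\n", (unsigned long long)unclaimed_port_read(2)); /* 0xffff */
    printf("%#llx\n", (unsigned long long)unclaimed_port_read(4)); /* 0xffffffff */
    return 0;
}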
~Andrew
>
> Also, once this is set, the PVH Specification document should be updated
> to reflect what guests can expect when poking at IO ports.
>
> Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel