
Re: [Xen-devel] Xen ARM - Exposing a PL011 to the guest



Hi Stefano,

On 19/12/2016 21:24, Stefano Stabellini wrote:
On Mon, 19 Dec 2016, Christoffer Dall wrote:
On Fri, Dec 16, 2016 at 05:03:13PM +0000, Julien Grall wrote:
(CCing the rest of the maintainers for the event channel questions)

On 16/12/16 10:06, Bhupinder Thakur wrote:
Hi,

Hi Bhupinder,

The idea is for Xen to act as an intermediary as shown below:

         ring buffers                              rx/tx fifo
  dom0 <--------------> Xen HYP (running pl011 emulation) <--------------> domU
         event                                     interrupts

Xen will directly manage the in/out console ring buffers (allocated by
dom0 for dom0-domU console communication) for reading/writing console
data from/to dom0. On the other side, Xen HYP will emulate pl011 to
read/write data from/to domU and pass it on to/from dom0 over the
in/out console ring buffers. There should be no change in dom0 as it
will still use the same ring buffers. Similarly there should be no
change in domU, which would be running a standard pl011 driver.

Currently, I am working on the interface between dom0 and Xen HYP. I
want to intercept the console events in Xen HYP which pass between
dom0 and domU. For now, I just want to capture console data coming
from dom0 at Xen HYP and loop it back to dom0, to confirm that this
interface is working.

Since each guest domain will have a unique event channel assigned for
console communication, Xen HYP can find out the event channel for a
given domU from the start_info page of that domU, which should have

The start_info page is x86 specific. If you want to get the console
event channel for ARM, you would have to use
d->arch.hvm_domain.params[HVM_PARAM_CONSOLE_EVTCHN].

This parameter will be setup by the toolstack (see alloc_magic_pages
in libxc/xc_dom_arm.c).
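
As a rough illustration (the helper itself is hypothetical; the field
is the one mentioned above):

  /* Hypothetical helper: return the console event channel the toolstack
   * stored for an ARM guest. Not existing Xen code. */
  static evtchn_port_t domu_console_evtchn(const struct domain *d)
  {
      return d->arch.hvm_domain.params[HVM_PARAM_CONSOLE_EVTCHN];
  }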

been allocated by dom0. Whenever an event is to be dispatched via the
evtchn_send() API in Xen, it can check if the event channel is the
console event channel for a given domU. If so, and the source domain
is dom0 and the destination domain is domU, then it will write the data
back to the console out ring buffer of the domU and raise a console
event to dom0.

Once this interface is working, Xen HYP can check the source and
destination dom ids, decide which direction the event came from, and
process the console data accordingly. To allow a mix of PV console
guests and pl011 guests, Xen might have to maintain a flag per domain,
which tells whether Xen HYP should intercept and process the data (for
pl011 UART case) or let it go transparently (for PV console case).
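
To make the proposed interception concrete, here is a hedged sketch of
a hook that evtchn_send() could call before delivering the event; every
name below is hypothetical, not existing Xen code:

  /* Hypothetical hook, called from evtchn_send() before normal delivery.
   * Returns true if Xen consumed the event (pl011 guest), false if it
   * should be passed through untouched (PV console guest). */
  static bool console_intercept(struct domain *src, struct domain *dst,
                                evtchn_port_t port)
  {
      if ( !dst->arch.vpl011_enabled )      /* hypothetical per-domain flag */
          return false;
      if ( port != dst->arch.hvm_domain.params[HVM_PARAM_CONSOLE_EVTCHN] )
          return false;

      /* Hypothetical: copy the console ring contents into the emulated
       * pl011 FIFO and notify the other end once the ring is drained. */
      vpl011_process_ring(src, dst);
      return true;
  }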

I am not very familiar with the event channel code. I will let the
others comment on this bit.

Regardless of that, how would you decide whether the hypervisor should
intercept the notification?

I can see 2 different cases:
        1) The guest starts using the pl011 and then moves to the HVC
console (or HVC then pl011)
        2) The guest uses both the PL011 and the HVC console

Should we consider the second case valid? I would say yes, because a
user could specify both on the command line. If we use the same
ring, the output would be total garbage.

So maybe we need to allocate two distinct rings and event channels?

This sounds like the only sensible thing to me.  I think this is really
about adding a new device to the Xen virtual platform, and giving the
user the option to choose which one the tool in Dom0 should present
on stdin/stdout. Presumably the other console/serial can be
redirected to a file or socket or something?

Let me explain how the PV console protocol and drivers work, because
they are a bit unusual. The first PV console is advertised via
hvm_params. The guest calls:

  hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v);
  hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &v);

to get the two parameters needed to set up the ring and evtchn. If they are 0,
the guest considers the first console unavailable. Other PV console
rings, from the second onward, are advertised via xenstore like any
other Xen PV protocols. In those cases, frontend and backend access
xenstore to set up the ring and event channel.
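
In other words, a guest frontend probes the first console roughly like
this (a minimal sketch; error handling and the ring mapping are
elided):

  static int probe_first_pv_console(void)
  {
      uint64_t pfn = 0, evtchn = 0;

      hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &pfn);
      hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &evtchn);
      if ( !pfn || !evtchn )
          return -ENODEV;       /* first PV console unavailable */

      /* map pfn as struct xencons_interface, bind evtchn, ... */
      return 0;
  }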

The PV console backends are unusual too. xenconsoled, available on all
Xen systems, is one process per host and can handle only one PV console
per domain. Specifically, it is only able to deal with the first console.
Domains that have multiple PV consoles require QEMU (not as an emulator,
but as a PV backend provider). The toolstack writes "type" =
"xenconsoled" or "ioemu" to distinguish PV consoles that xenconsoled or
QEMU are supposed to handle. Ideally, we shouldn't require QEMU for
pl011 PV consoles, but it wouldn't be the end of the world if we did.

Additionally, Xen cannot speak xenstore. It can neither read nor write
to it. I don't think we should add xenstore support to the hypervisor
for this. We need to come up with a solution that doesn't require it.

Agree on this.


Finally, we cannot hijack one of the guest PV consoles, regardless of
whether it's the first console or one of the others, because the guest
can always try to use them at any time. We need a PV console reserved
for Xen-Dom0 communications on behalf of the guest. When a VM is created
with "pl011=y", the toolstack needs to allocate one more page and evtchn
for the exclusive hypervisor usage.  They are not going to be advertised
to the guest as PV consoles; otherwise, the guest could rightfully
access them.

Both Xen and the PV console backend need access to the two numbers (pfn
and evtchn) though. Xen doesn't do xenstore, so I suggest the toolstack
should use another way to tell pfn and evtchn to Xen, maybe hvm_params.
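
For example, a hedged sketch of what that could look like in
alloc_magic_pages(): xc_dom_alloc_page(), xc_evtchn_alloc_unbound() and
xc_hvm_param_set() are existing libxc calls, while the vuart_* fields
and HVM_PARAM_VUART_* params are purely hypothetical placeholders:

  /* Hedged sketch of the extra allocation suggested above. */
  dom->vuart_pfn = xc_dom_alloc_page(dom, "vuart console");   /* hypothetical field */
  rc = xc_evtchn_alloc_unbound(dom->xch, dom->guest_domid, dom->console_domid);
  if ( rc < 0 )
      return rc;
  dom->vuart_evtchn = rc;                                      /* hypothetical field */

  xc_hvm_param_set(dom->xch, dom->guest_domid,
                   HVM_PARAM_VUART_PFN, dom->vuart_pfn);       /* hypothetical param */
  xc_hvm_param_set(dom->xch, dom->guest_domid,
                   HVM_PARAM_VUART_EVTCHN, dom->vuart_evtchn); /* hypothetical param */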

I think it will be the other way around: Xen will allocate the event
channel and then report it to the PV backends, very similar to what is
done for ioreq servers on x86 today.

If we use hvm_params for this, we need two new hvm_params and Xen needs
to unmap the pfn from the guest immediately, because we don't want the
guest to have access to it.

If you unmap the pfn, the PV backend will not be able to request
the page because there will be no translation available.

So what you want to do is prevent the guest from writing into the
region (not sure whether it is worth restricting reads as well) and
from unmapping the page via the hypercall XENMEM_decrease_reservation.


However, the PV console backend can access xenstore, so in that case, it
is fine to write the pfn and evtchn of the PV console for pl011 to
xenstore, paying attention to setting the xenstore permissions
appropriately. There is no reason why the guest should have access to
them; only the console backend should be able to read them. Given that
the console backend has dom0 privileges, it is not a problem. I also
suggest using new xenstore nodes, different from any of the existing PV
console nodes.  For example:

/local/domain/$DOMID/xen-console/$NUM/ring-ref
/local/domain/$DOMID/xen-console/$NUM/port

Where $DOMID is the guest domain id, and $NUM is the console number,
starting from 0. If we use new hvm_params for the pl011 PV console, we
might get away without any xenstore stuff.
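
For reference, the backend could fetch those two (proposed) nodes with
plain libxenstore calls, roughly:

  /* Hedged sketch: xs_read() is the existing libxenstore call; the node
   * paths are the ones proposed above; xs, domid and num come from the
   * surrounding daemon code. */
  char path[64];
  unsigned int len;
  char *ring_ref, *port;

  snprintf(path, sizeof(path),
           "/local/domain/%u/xen-console/%u/ring-ref", domid, num);
  ring_ref = xs_read(xs, XBT_NULL, path, &len);

  snprintf(path, sizeof(path),
           "/local/domain/%u/xen-console/%u/port", domid, num);
  port = xs_read(xs, XBT_NULL, path, &len);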

For simplicity, given that xenconsoled doesn't support multiple PV
consoles, we could set up the pl011 PV console *instead* of the regular
PV console, hacking tools/console/daemon/io.c:domain_create_ring. It's
safe if the toolstack doesn't provide a PV console. When pl011 is
requested, libxl could set the pfn and evtchn hvm_params to 0 for the
initial PV console. Eventually, it would be nice if xenconsoled was able
to support both consoles at the same time.
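
The libxl side of that workaround could be as small as the following
sketch (xc_hvm_param_set() exists; the config flag is hypothetical):

  /* Hedged sketch: when pl011 is requested, zero the regular PV console
   * params so xenconsoled's domain_create_ring() sees no first console. */
  if ( pl011_requested )    /* hypothetical flag derived from the guest config */
  {
      xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_CONSOLE_PFN, 0);
      xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_CONSOLE_EVTCHN, 0);
  }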

The PL011 emulation will be slower than the PV console. While I think
it is a sensible approach to have either the PL011 or the PV console,
we will have to support both in the future.

IIRC, the UEFI firmware will use the Xen console by default but I am
not sure it will fall back to the PL011 if present. So we may require
some changes in the firmware to allow booting with different
configurations (i.e. PL011 guest or PV console guest).

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

