
Re: [Xen-devel] Xen and safety certification, Minutes of the meeting on Apr 4th



Hi Stefano

On 06.04.18 23:47, Stefano Stabellini wrote:
On Fri, 6 Apr 2018, Artem Mygaiev wrote:
2) Create a subset of functions that need to go through certifications
Next step: create a small Kconfig. We could use the Renesas Rcar as
reference. We need a discussion about the features we need, for
example real-time schedulers, do we need them or not?


Identifying this subset is very important. My recommendation would be to
identify the very smallest subset to start with that supports a single, high
value use case, which I would suggest is consolidation of Linux and
real-time applications with mixed criticality, but not necessarily shared/PV
I/O, onto a single processing cluster. Identifying the highest reasonable
safety criticality to support would also be very helpful.


Unfortunately, in mixed criticality systems (at least in automotive) we see a
lot of attention paid to performance, so processing cluster partitioning may
not be well accepted in the industry.

Sorry, I didn't quite understand your comment. Are you saying that
statically partitioning a cluster into VMs, for example with
vcpu-pinning or the null scheduler, in a way to have a total number of
vcpus equal to the total number of pcpus, is not acceptable because it
leads to lower hardware utilization? We need nr_vcpus > nr_pcpus?

Yep. In other words, OEMs want to use as much of the HW they have as possible.

At the Xen level, you might get away with just the null scheduler if VMs are
pinned to their own cores (and jitter caused by contention on the bus and in
the cache is acceptable). However, to do CAST-32a type scheduling
(effectively time slicing the SoC between your VMs), an updated ARINC-653
scheduler would be needed.
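
To make the pinned null-scheduler setup above concrete, here is a sketch of what a guest configuration might look like (the domain name, CPU numbers, and paths are all hypothetical, and Xen is assumed to have been booted with `sched=null` on its command line):

```
# Hypothetical guest config for a statically partitioned setup:
# one vcpu per pcpu, pinned 1:1, so the null scheduler never has
# to make a scheduling decision.
name   = "rt-domu"              # illustrative domain name
vcpus  = 2
cpus   = ["2", "3"]             # pin vcpu0 -> pcpu2, vcpu1 -> pcpu3
memory = 256
kernel = "/path/to/guest-kernel"  # placeholder path
```

With nr_vcpus == nr_pcpus and strict pinning, each vcpu owns its core outright, which is exactly the configuration whose acceptability to OEMs is being questioned above.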


We are now looking into RTDS as a possible solution for industrial or
automotive domains. Also, from our experience, bus/cache contention in systems
with high load is actually an issue... We are looking into that, too.
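
For reference, RTDS parameters are set per domain at run time; a minimal sketch (the domain name and the specific period/budget values are illustrative only, and Xen is assumed to have been booted with `sched=rtds`):

```
# Hypothetical example: give domain "rt-domu" a 10ms period and a
# 4ms budget (i.e. 40% CPU utilization) under the RTDS scheduler.
# Both values are in microseconds.
xl sched-rtds -d rt-domu -p 10000 -b 4000
```

Because RTDS is budget-based rather than strictly partitioned, it allows nr_vcpus > nr_pcpus, which is what makes it attractive for the utilization concerns raised above.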

Bus/cache contention is where issues can become very board specific. It
is also why we'll need to narrow down a small set of boards initially.

We'd like to do a bit more analysis before deciding... I am not yet convinced by the numbers.

Since I do not think that a previously certified OS will be available for
free, I see 3 general approaches wrt dom0:
1) Find and certify an open source OS. My guess is this will not be Linux
due to code base size. POSIX support a plus.
2) Use a commercially available, previously certified OS for dom0. DW ported
VxWorks to run on Xen in 2017 and uc/OS-III in 2016.
3) Go with a dom0-less solution; bootloader starts up the necessary VMs
based on a static configuration.
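
A dom0-less boot along these lines might be described to Xen statically via the device tree; the fragment below is purely a hypothetical sketch (no such binding existed at the time of this thread, and every node and property name here is an assumption for illustration):

```
/* Hypothetical device tree fragment: the bootloader loads the guest
 * kernel into RAM and describes the VM to Xen statically, so no dom0
 * toolstack is needed to create it. All names are illustrative. */
chosen {
    domU1 {
        compatible = "xen,domain";        /* assumed binding name */
        cpus = <1>;                       /* one vcpu */
        memory = <0x0 0x20000>;           /* assumed size, in KB */

        module@42000000 {
            compatible = "multiboot,kernel";
            reg = <0x0 0x42000000 0x0 0x800000>;  /* kernel load area */
        };
    };
};
```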

The XL toolstack in its current form will likely cause cert issues and will
probably need to be stripped down and/or rewritten.
Bootloader (U-Boot, GRUB, or whatever) will also need to be certified.


We'd like to explore both FreeRTOS in dom0 and dom0-less options. I think
there were some patches a while ago for dom0-less Xen.

"Dom0-less" is a great name actually :-)

Up until now, we discussed this topic under the name of "create multiple
guests from device tree". There are no patches (as far as I know), but
it was submitted as the Xen on ARM project for Outreachy this year.
There are patches for a different project to setup shared memory regions
from the xl config file (no need for grant table or xenbus support).
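
A sketch of how such a shared region might be declared in the xl config file, based on the proposed interface (the key names, addresses, and role values are assumptions, since the patches were still under review):

```
# Hypothetical xl config snippet declaring a static shared memory
# region between two guests, with no grant table or xenbus needed.
# In the owner domain's config:
static_shm = [ "id=shm0, begin=0x48000000, size=0x200000, role=master" ]
# In the other domain's config, the same id with the peer role:
#   static_shm = [ "id=shm0, begin=0x48000000, size=0x200000, role=slave" ]
```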

Do you have anyone interested in taking this task?


We plan to analyze efforts to port FreeRTOS as dom0 OS

Great! I think it makes sense to start from that. I wrote "Artem" down
on the wiki page
(https://wiki.xenproject.org/wiki/Safety_Certification_Challenges) as
the reference contact for the dom0 work. Keep us in the loop, as Julien
and I are very interested in it.

Sure!

 -- Artem

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

