
Re: [Xen-devel] [ANNOUNCE] Xen 4.13 Development Update

On Thu, Aug 01, 2019 at 11:11:50AM -0700, Roman Shaposhnik wrote:
> On Thu, Aug 1, 2019 at 9:01 AM Juergen Gross <jgross@xxxxxxxx> wrote:
> >
> > This email only tracks big items for the xen.git tree. Please reply with
> > items you would like to see in 4.13 so that people have an idea what is
> > going on and can prioritise accordingly.
> >
> > You're welcome to provide description and use cases of the feature you're
> > working on.
> >
> > = Timeline =
> >
> > We have now adopted a fixed cut-off date scheme and will release roughly
> > every 8 months. The timeline for the upcoming 4.13 release is as follows:
> >
> > * Last posting date: September 13th, 2019
> > * Hard code freeze: September 27th, 2019
> > * RC1: TBD
> > * Release: November 7th, 2019
> >
> > Note that we no longer have a freeze exception scheme. All patches
> > that wish to go into 4.13 must be posted initially no later than the
> > last posting date and finalised no later than the hard code freeze. Any
> > patches posted after that date will automatically be queued for the next
> > release.
> >
> > RCs will be arranged immediately after freeze.
> >
> > We recently introduced a Jira instance to track all the tasks (not only
> > big ones) for the project. See: https://xenproject.atlassian.net/projects/XEN/issues.
> >
> > Some of the tasks tracked by this e-mail also have a corresponding Jira
> > task, referred to as XEN-N.
> >
> > I have started to include the version number of the series associated
> > with each feature. Could each owner send an update on the version number
> > if the series has been posted upstream?
> Great timeline! On the LF Edge Project EVE side, we'd like to help test the
> upcoming 4.13 as much as we can. Our goal is to get rid of our out-of-tree
> patches (most of them relate to Alpine Linux support) and to make sure
> that we don't have issues on any of these platforms:
>      https://wiki.lfedge.org/display/EVE/Hardware+Platforms+Supporting+EVE

We currently have a few different test systems. Travis and GitLab CI do
build testing, which basically makes sure the build works on a variety
of OSes and toolchains. GitLab CI also runs a couple of functional
tests (booting Xen in QEMU), but those are very minimal.
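For illustration, a minimal functional-test job of that kind could look like the fragment below. This is only a sketch: the image name, build flags, QEMU invocation, and banner check are my assumptions, not the project's actual CI configuration.

```yaml
# Hypothetical .gitlab-ci.yml fragment: build Xen, then smoke-boot it in QEMU.
# Image, paths, timeout and QEMU flags are illustrative assumptions only.
qemu-smoke-x86-64:
  image: debian:bullseye
  script:
    - ./configure --disable-tools
    - make -j$(nproc) xen
    # Boot the freshly built hypervisor headless, logging serial output;
    # this only checks that Xen itself comes up, not a full dom0 boot.
    - timeout 120 qemu-system-x86_64 -nographic -m 1024
        -kernel xen/xen.gz -append "loglvl=all console=com1"
        | tee smoke.log || true
    # Pass if the hypervisor printed its console banner before the timeout.
    - grep -q '(XEN)' smoke.log
```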

Pushes to master are then gated by osstest [0], a Xen-specific test
system. osstest manages a pool of hardware (both x86 and Arm) and
performs a wide variety of tests before declaring a given commit good.
As an example, you can take a look at one of the recent osstest reports
on the staging branch:

That report contains the full test matrix.

> Since I'm still a little bit new to how the Xen release process works,
> I'm wondering what's the best way for us to stay on the same page with
> the rest of the community's testing efforts?

It's possible to integrate new hardware into osstest, but I think as a
bare minimum it needs to be something that can be safely racked in a
datacenter and reliably operated without any human intervention.
I've added him to the Cc in case you have more specific questions
about this process.

> Will there be nightly tarballs published at some point? Is there any
> kind of build
> infrastructure I can hook up on our side to make sure that I report issues as
> soon as possible?

The release process starts with the code freeze, after which release
candidates will be cut on a bi-weekly (or weekly?) basis. A source code
tarball is published for each of those. During the RC process we ask
users to test the release candidates and make sure all the features
they care about work properly; otherwise bug reports should be filed
directly on the mailing list.
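If you want to automate fetching those tarballs, the per-RC URLs follow a predictable pattern. The exact path layout below (under downloads.xenproject.org) is my assumption about where the 4.13 RC tarballs would land, so treat this helper as a sketch rather than a guarantee:

```shell
#!/bin/sh
# Sketch: construct the download URL for a given Xen release-candidate
# tarball. The downloads.xenproject.org layout is an assumption.
rc_tarball_url() {
    version="$1"   # e.g. 4.13.0
    rc="$2"        # e.g. 1
    echo "https://downloads.xenproject.org/release/xen/${version}-rc${rc}/xen-${version}-rc${rc}.tar.gz"
}

# Example use: fetch and unpack RC1 for local testing.
# url=$(rc_tarball_url 4.13.0 1)
# curl -fLO "$url" && tar xzf "xen-4.13.0-rc1.tar.gz"
rc_tarball_url 4.13.0 1
```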

If you have the capacity and automation, just testing staging or master
would also be helpful, since AFAICT from the EVE supported-platforms
page you have a wide array of hardware to test on.

Thanks, Roger.

[0] http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary

Xen-devel mailing list


