Test Results

Results Viewer

Testing Farm uses a simple results viewer for viewing test results via its Oculus component. It provides a unified interface for viewing results for all users of Testing Farm. The results viewer is the index.html of the Artifacts Storage.

The code is open to contributions in case you would like to introduce some improvements.

Tests Passed

If all tests passed, everything is collapsed to provide a nice view of all executed tests (or plans).

Here is an example of a run with all passed results:

oculus passed

Tests Failed

When some of the tests failed, the viewer shows only the failed tests, expanded.

All passed tests are hidden by default; to view them, click the Show passed tests checkbox in the header.

oculus failed

Header

The results viewer header provides some useful links.

oculus header
  • API request

    A link to the API request details. It provides additional information about the request.

    The JSON is not pretty-printed; it is advised to use a JSON formatter to display it nicely, in case your browser does not support this out of the box (see the command-line sketch after this list).

  • Pipeline log

    This is the output of the Testing Farm Worker. In case of errors, it can provide more context about the issues hit in the pipeline.

  • Issues for this page

    A link to the viewer’s public issue tracker.

  • Download JUnit

    Testing Farm provides a standard JUnit XML file which can be downloaded by clicking the link. You can use this file to import results into other result viewers (see the sketch after this list).
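
Both the API request and the JUnit file can also be inspected from the command line. Below is a minimal sketch, assuming the respective links from the header are copied into the placeholder variables and that curl, jq and xmllint are available; the XPath assumes the usual JUnit testcase/failure layout.

# Pretty-print the API request JSON (link copied from the API request entry).
API_REQUEST_URL="<API request link from the header>"   # placeholder, replace with the real link
curl -s "$API_REQUEST_URL" | jq .
# without jq, Python's built-in formatter works as well:
# curl -s "$API_REQUEST_URL" | python3 -m json.tool

# Download the JUnit XML (the Download JUnit link) and count failed test cases.
JUNIT_URL="<Download JUnit link from the header>"      # placeholder, replace with the real link
curl -s -o junit.xml "$JUNIT_URL"
xmllint --xpath 'count(//testcase/failure)' junit.xml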

Anatomy of the Test Results

See the Test Process page to understand more about how Testing Farm executes the tests.

In general, results are displayed as a list of one or more plans, where each of these plans has one or more tests.

In case of STI tests, the plans are the Ansible playbooks being executed.

├── plan1
│   ├── test1
│   ├── test2
│   └── test3
├── plan2
│   ├── test1
│   ├── test2
│   └── test3
...

Test Execution Logs

The test execution logs are the outputs of executed tests. They are previewed directly in the result viewer. There is also a link to access the test output file.

  • for tmt tests, the link is called testout.log and it points to the output of the tmt test execution.

    oculus tmt test output
  • for STI tests, the link has a .log suffix and is named after the generated test name; it points to the captured test output.

    oculus sti test output

Additional links are provided at the bottom of the test execution logs.

oculus tmt test log links

These currently contain at least one item - log_dir:

  • for tmt tests this is a link to the test execution data directory

  • for STI tests this points to the working directory of the whole execution, i.e. it is the same for all tests

In some cases the log links can provide additional logs; see for example the logs from the rpminspect test:

oculus log links rpminspect

Additional Logs and Artifacts

Additional logs and artifacts are provided for each tmt plan or STI playbook.

oculus test logs artifacts

You can use the link Go to Logs and Artifacts to quickly scroll down to them.

tmt-reproducer

See Reproducer for details.

Currently logs from the test environment preparation are included:

oculus log links

For tmt tests, a link to the working directory with all tmt logs is also provided:

  • workdir - here you can find, for example, the full tmt log - log.txt

Console Log

The serial console of the provisioned machine is available under console.log. The content of the file can be incomplete, depending on the underlying infrastructure where the tests run; some infrastructures limit the maximum size of the console log. In case the console log is incomplete, console log snapshots can be found in the workdir.
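
If you prefer the command line, the console log can also be fetched and scanned for common failure indicators. This is a minimal sketch; the URL is a placeholder for the console.log link shown in the viewer.

# Download the serial console log (link copied from the console.log entry).
CONSOLE_LOG_URL="<console.log link from the viewer>"   # placeholder, replace with the real link
curl -s -o console.log "$CONSOLE_LOG_URL"

# Look for common failure indicators in the console output.
grep -iE 'panic|oops|oom-killer|call trace' console.log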

Artemis guest events log

The log with Artemis guest event details is available under the Guest events log link. It describes how a guest was provisioned and destroyed, and can be useful for debugging guest errors.

Reproducer

For tmt tests a code snippet is provided for reproducing test execution on your localhost.

The reproducer is not available for STI tests. See the official tmt documentation on how to port your tests from STI to tmt.

The reproducer steps for the Red Hat ranch are incorrect and need manual adjustments to the tmt command. For example, for this reproducer command:

tmt --root . -c arch=x86_64 -c distro=rhel-8.8.0 -c trigger=build \
    run --until provision --verbose -e @tmt-environment-plan.yaml \
    provision --how virtual --image RHEL-8.8.0-Nightly plan --name ^/plan$

You can either:

  • use the 1minutetip vagrant images available at http://liver3.brq.redhat.com/1mt-vagrant/ instead of RHEL-8.8.0-Nightly (choose the latest available image):

    tmt --root . -c arch=x86_64 -c distro=rhel-8.8.0 -c trigger=build \
        run --until provision --verbose -e @tmt-environment-plan.yaml \
        provision --how virtual --image http://liver3.brq.redhat.com/1mt-vagrant/1MT-RHEL-8.8.0-updates-20241008.0.box \
        plan --name ^/plan$
  • use the minute provision plugin from the tmt-redhat-all package, available from the qa-tools copr repository:

    tmt --root . -c arch=x86_64 -c distro=rhel-8.8.0 -c trigger=build \
        run --until provision --verbose -e @tmt-environment-plan.yaml \
        provision --how minute --image rhel-8.8 plan --name ^/plan$
oculus tmt reproducer

You can use this snippet to do the following (a rough sketch of such steps is shown after the list):

  • clone the same test repository that was used for testing

  • install tested artifacts before running the tests

  • run testing on your localhost against a similar environment that was used in CI

    The artifact installation applies only if artifacts were specified, as they are optional in the Testing Farm request.
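
The exact commands are generated per request, so always prefer the snippet shown in the viewer. The sketch below only illustrates the kind of steps such a reproducer covers; the repository URL, build ID and image are made up.

# Rough, hypothetical sketch of the steps a tmt reproducer covers - the snippet
# in the results viewer is the authoritative version for your request.
git clone https://example.com/tests/repository.git testcode   # made-up test repository URL
cd testcode

# install the tested artifacts, e.g. a Koji build (made-up build ID)
koji download-build --arch=x86_64 --arch=noarch 123456
sudo dnf install -y ./*.rpm

# run the tests locally with tmt against a virtual machine
tmt run --all provision --how virtual --image fedora plan --name ^/plan$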

The tmt-reproducer is a work in progress, and currently has these limitations:

  • the pre-artifact installation and post-artifact installation sub-phases are not included

  • the test environment is not the same as in Testing Farm, where the tests run against an AWS EC2 instance (or other infrastructures)

  • the compose version might not be the same

Even though these limitations currently exist, we believe the reproducer is very handy for debugging test failures, and we advise using it in case of problems.

Errors

This section gives you some hints for investigating common errors you can encounter with Testing Farm.

When the request errors out, the viewer header shows an error. All plans which failed with an error are marked with an orange color.

oculus error

Errors can happen in various stages of the Test Process. Usually, a reasonable error message is shown in the plan summary, which should give you a hint about what went wrong.

oculus error reason

Provisioning

If Testing Farm is not able to provision resources required to run the tests, the test environment execution will fail.

This can have multiple causes.

Test Environment Preparation

If Testing Farm is not able to prepare the test environment, the test environment execution will fail. There is no point in continuing the testing process if the test environment is not properly prepared.

Artifacts Installation Failed

This can happen if some of the artifacts cannot be installed because of conflicts, missing dependencies, etc. The viewer shows the logs from the artifact installation, which should help you identify the problem.

In case your artifacts have conflicting packages, you can use the exclude option of the install plugin of the prepare step. See the tmt documentation for details.

Excluding works only for fedora-koji-build and brew-koji-build artifacts. Support for other artifact types is not available yet. See this issue for details.

Testing Farm installs all rpms from all given artifacts. This can cause issues when multiple builds are tested together, for example with certain Bodhi updates. We plan to fix this in upcoming releases. See this issue for details.

Ansible Playbook Fails

In certain cases, the playbooks run in the pre-artifact-installation and post-artifact-installation sub-phases can fail. This usually happens in case of mirror outages or other connection problems. Try to restart the testing and, if the situation persists, contact us.

Testing Stuck

If the testing appears stuck in the progress log, it usually means:

  • Your test ran for more than 12 hours and was forcibly cancelled

    We are working on improving this. See this issue for details.