Frequently Asked Questions
How to look up machines in Beaker provisioned by Testing Farm?
Testing Farm exposes the artifacts URL in the job whiteboard, e.g. https://artifacts.osci.redhat.com/testing-farm/18b7a17a-2158-48ea-91a1-65b4dabd980c.
Go to Beaker Jobs and search using the artifacts URL or the Testing Farm request ID.
If the search does not return any jobs, Testing Farm has either not started provisioning yet or is provisioning resources on a different infrastructure, e.g. AWS.
How to find previous Testing Farm requests?
If you’ve lost the UUID of a previous request or the link to list your requests, you can retrieve them using the API:
1. First, find your user/token ID using the whoami endpoint:

   $ http -A bearer -a $TESTING_FARM_API_TOKEN https://api.testing-farm.io/v0.1/whoami

   Look for the id field in the token section of the response.

2. Then list your requests for the last 7 days:

   $ http "https://api.testing-farm.io/v0.1/requests?token_id=97c77906-f223-499d-b1d1-e98f9a2d1de1&created_after=$(date -Idate -d "7 days ago")"

   Replace 97c77906-f223-499d-b1d1-e98f9a2d1de1 with your actual token ID from step 1.
You can adjust the created_after parameter to look further back in time if needed.
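The two steps above can be chained into one small sketch. It assumes httpie and jq are installed and TESTING_FARM_API_TOKEN is exported; the endpoint paths are the ones shown above, and the 30-day window is just an example value for created_after.

```shell
# Chained lookup (sketch): find the token ID, then list recent requests.
# Skips the API calls if httpie or jq is not available on this machine.
SINCE=$(date -Idate -d "30 days ago")   # adjust the window as needed
if command -v http >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
    TOKEN_ID=$(http -A bearer -a "$TESTING_FARM_API_TOKEN" \
        https://api.testing-farm.io/v0.1/whoami | jq -r '.token.id')
    http "https://api.testing-farm.io/v0.1/requests?token_id=${TOKEN_ID}&created_after=${SINCE}"
fi
```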
How does Testing Farm handle the tmt provision step?
Testing Farm behaves differently for the default single-host pipeline and the multi-host pipeline.
Single-host Pipeline (default)
|
This is the default pipeline used by Testing Farm, i.e. it is run if you do not specify the --pipeline-type option in the request. |
In the single-host pipeline, the provision step is overridden with how: artemis and the provisioning details are specified in the API request.
The provision.hardware field is honored; others are ignored. The hardware field from the API always overrides the one from the plan — there is no merging.
Let’s take this simple plan as an example:
provision:
how: virtual
image: Fedora
hardware:
memory: ">= 4Gi"
execute:
how: tmt
And this Testing Farm request using the above plan:
testing-farm request --compose Fedora-Rawhide --hardware 'disk.size=>=100Gi'
The provisioned machine will have these properties:
|
Please note that there is no actual provision step generated. Testing Farm calls Artemis directly; this is just for illustration. |
|
Please note that hardware requirements from the API take precedence. There is no merging with the hardware requirements in the plan. |
provision:
how: artemis
image: Fedora-Rawhide
hardware:
disk:
- size: ">= 100Gi"
Multi-host Pipeline
In the multi-host pipeline, the how field is fully respected. Testing Farm uses tmt’s --update-missing functionality to fill in missing details from the API provision data.
This often requires adjusting the provision plan using context variables so it can run both locally and in the multi-host pipeline.
Consider this simple multi-host plan:
provision:
- name: client
how: virtual
image: Fedora-41
- name: server
how: virtual
image: Fedora-42
execute:
how: tmt
And this Testing Farm request using the above plan:
testing-farm request --compose Fedora-Rawhide --pipeline-type tmt-multihost
This plan will fail because Testing Farm honors what is in the plan, and the virtual provisioner is not supported on Testing Farm workers.
The compose value will also be ignored because the image is already specified in the plan, and that takes precedence.
To fix this, remove the how from the plan. Testing Farm will then apply its own provisioning method (e.g. how: artemis).
Also, remove the image field from any guest whose image should come from the request.
provision:
- name: client
- name: server
image: Fedora-42
execute:
how: tmt
With this change, Testing Farm will populate how for both guests and image for the client from the API request:
provision:
- name: client
how: artemis
image: Fedora-Rawhide
- name: server
how: artemis
image: Fedora-42
execute:
how: tmt
How to exclude artifacts from installing
Testing Farm installs artifacts specified in the environments[].artifacts field in the API before running tmt. If you want to skip this installation but still have the artifacts available on the system via a repository, add the following prepare step in your tmt plan.
prepare:
- name: Skip Testing Farm installation of artifacts
how: install
exclude:
- .*
How to reserve a machine in FIPS mode?
Testing Farm supports reserving machines with FIPS mode enabled. This requires using the --tmt-prepare option to configure FIPS during the prepare phase.
For more details, see the Reserve Machine in FIPS Mode section.
testing-farm CLI:
$ testing-farm reserve --tmt-prepare="--insert --order 0 --how feature --fips enabled" --compose RHEL-10.0-Nightly
How are hardware requirements handled when specified via the API and in the tmt plan?
Hardware requirements specified in the API request will completely override hardware requirements from the tmt plan.
There is no merging of hardware requirements - the API specification replaces the plan specification entirely.
For example, if you have a plan with these hardware requirements:
provision:
hardware:
memory: ">= 4Gi"
cpu:
processors: ">= 2"
And submit a Testing Farm request with:
testing-farm request ... --hardware 'disk.size=>=100Gi'
The final provisioned machine will only have the disk size requirement from the API request. The memory and CPU requirements from the plan will be ignored.
To include all requirements, specify them in the API request:
testing-farm request ... --hardware 'memory=>=4Gi' --hardware 'cpu.processors=>=2' --hardware 'disk.size=>=100Gi'
How to find out that Testing Farm has a large queue
|
Currently, we provide this information only to Red Hat employees. |
Check this Grafana dashboard for the public and redhat ranch queue sizes.
Why do BeakerLib libraries fail in image mode?
In image mode (e.g. RHEL-9.7.0-image-mode), RPM packages are not available in the traditional way. This means that BeakerLib libraries normally provided by RPMs (such as beakerlib-libs-* packages) cannot be installed on the guest.
When a test uses the old RPM-style library reference:
require:
- library(kernel/network)
tmt first tries to fetch the library from GitHub (e.g. https://github.com/beakerlib/kernel). If the library is not found on GitHub, tmt falls back to installing it as a regular RPM package. In image mode, this RPM fallback fails because the package manager cannot install packages from the usual repositories.
Symptoms
In the tmt log (available under tmt-log in the Log Links section of the results), you will see errors like:
no package provides library(kernel/network)
No match for argument: library(kernel/network)
Solution
Use the explicit FMF identifier format to reference BeakerLib libraries via git. This bypasses the RPM fallback entirely and fetches the library directly from its git repository:
require:
- url: https://github.com/beakerlib/kernel
name: /network
type: library
You can also pin a specific branch or tag:
require:
- url: https://github.com/beakerlib/kernel
name: /network
ref: main
type: library
If the library is not available on GitHub under the beakerlib organization, you need to find its actual git repository location and reference it accordingly.
|
This applies to all image mode composes. If your tests need to run on both traditional and image mode composes, using the FMF identifier format is recommended as it works in both cases. |
Why are my tests failing with unsigned package errors on Fedora?
Starting with Fedora 45, the default RPM package verification level changed from digest to all.
This means RPM now requires packages to have verified signatures by default.
Additionally, Fedora has been transitioning from dnf4 to dnf5 since Fedora 41, and legacy dnf4-based tools like yum-builddep (from yum-utils) do not integrate well with the stricter signature checking.
As a result, tests that use dnf4-era tooling to install packages — for example build dependencies via yum-builddep — may fail with GPG signature errors.
The dnf5 package manager handles unsigned packages from repositories configured with gpgcheck=0 correctly.
Symptoms
- yum-builddep or other yum-utils / dnf4 commands fail with signature verification errors
- Error messages reference unsigned or unverified packages
- The issue appears on Fedora 45+
Solution
Replace legacy dnf4 / yum-utils commands with their dnf5 equivalents.
| Old Command | DNF5 Equivalent | Required Package |
|---|---|---|
| yum-builddep | dnf builddep | dnf5-plugins |
On Fedora 41+, the versionless dnf command is symlinked to dnf5, so dnf builddep works correctly.
If your tests need to run on both dnf5 and non-dnf5 systems (e.g. RHEL 10 and older), prefer using the versionless dnf builddep over dnf5 builddep.
|
For example, replace:
yum-builddep -y ./my-package.src.rpm
With:
dnf builddep -y ./my-package.src.rpm
|
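For tests that must run on both dnf5 and legacy systems, the preference for the versionless command can be expressed as a small selection sketch. The fallback to yum-builddep and the source RPM name are illustrative assumptions, not part of the documented behavior:

```shell
# Prefer the versionless "dnf builddep" (works with both dnf4 and dnf5);
# fall back to legacy yum-builddep only where dnf is not available at all.
if command -v dnf >/dev/null 2>&1; then
    BUILDDEP="dnf builddep"
else
    BUILDDEP="yum-builddep"
fi
echo "Using: $BUILDDEP"
# $BUILDDEP -y ./my-package.src.rpm    # hypothetical source RPM
```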
How to use package managers (DNF/YUM) with the standalone Python runtime?
Testing Farm uses a standalone Python distribution for test environment preparation (see release 2026-01.1).
This custom distribution does not include C bindings for system package managers, so Ansible’s builtin dnf and yum modules will fail with an error like:
Could not import the dnf python module using /var/opt/standalone-python/bin/python3.
Please install `python3-dnf` package or ensure you have specified the correct ansible_python_interpreter.
To work around this, use Testing Farm’s own CLI-based wrapper for YUM and DNF together with Ansible’s builtin package module.
Add the following task to your Ansible playbook before any package installation tasks:
- name: Override package manager module
when: "ansible_pkg_mgr in ['unknown', 'dnf']"
ansible.builtin.set_fact:
ansible_package_use: "testing_farm_profiles.library.dnf_yum_cli" # noqa: var-naming
This tells Ansible to use the dnf_yum_cli wrapper, which invokes DNF/YUM via the CLI instead of relying on their Python C bindings, making it compatible with the standalone Python runtime.
Why is my merge request pipeline stuck on a fork?
Testing Farm projects use merge request pipelines which require access to project CI/CD secrets. For security reasons, GitLab does not expose these secrets to pipelines triggered from forked repositories. As a result, your pipeline may appear stuck with a message like:
This job is stuck because of one of the following problems. There are no active runners online, no runners for the protected branch, or no runners that match all of the job's tags.
|
This applies to all Testing Farm repositories, both public (gitlab.com) and Red Hat internal (gitlab.cee.redhat.com). |
What to do
-
Wait for a maintainer to approve and run the pipeline. A project maintainer will review your merge request and manually trigger the pipeline once they are satisfied the changes are safe.
-
Allow edits from maintainers. When creating your merge request, make sure to enable the Allow commits from members who can merge to the target branch option. This lets maintainers push fixes or trigger pipelines directly on your fork’s branch, speeding up the review process.
-
If you are in a hurry, reach out to the maintainers via the support channels to request a pipeline run.
|
This is standard GitLab CI behavior to protect secrets from being exposed in untrusted code. It is not a bug or misconfiguration — maintainers simply need to verify and approve the pipeline before it can run. |
How to use subscription-manager against staging RHSM?
|
This FAQ item is applicable only for Red Hat Ranch. |
When using subscription-manager against the staging Red Hat Subscription Management (RHSM) service in Testing Farm, configure a proxy server: the AWS instance IP ranges used by Testing Farm are not permitted to connect directly to the staging RHSM service. The problem does not affect PSI OpenStack or Beaker, but a proxy may also be required on IBM Cloud.
subscription-manager using the proxy server:
$ subscription-manager register --force --baseurl=https://cdn.stage.redhat.com --serverurl=subscription.rhsm.stage.redhat.com --username=USER --password=PASSWORD --proxy squid.corp.redhat.com:3128