Frequently Asked Questions
How to look up machines in Beaker provisioned by Testing Farm?
Testing Farm exposes the artifacts URL in the job whiteboard, e.g. https://artifacts.osci.redhat.com/testing-farm/18b7a17a-2158-48ea-91a1-65b4dabd980c.
Go to Beaker Jobs and search using the artifacts URL or the Testing Farm request ID.
If the search does not return any jobs, Testing Farm has either not started provisioning yet or is provisioning resources on a different infrastructure, e.g. AWS.
How does Testing Farm handle the tmt provision step?
Testing Farm behaves differently for the default single-host pipeline and the multi-host pipeline.
Single-host Pipeline (default)
This is the default pipeline used by Testing Farm, i.e. it is run if you do not specify a pipeline type in the request.
In the single-host pipeline, the provision step is overridden with how: artemis and the provisioning details are specified in the API request. Only the provision.hardware field is honored; all other fields are ignored. The hardware field from the API always overrides the one from the plan; there is no merging.
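The precedence rule can be sketched as follows; this is a minimal illustration of the behavior described above, not Testing Farm's actual code:

```python
# Sketch of the single-host precedence rule: hardware from the API
# request replaces the plan's hardware wholesale, with no merging of
# individual fields. Illustrative only, not Testing Farm's real code.

def effective_hardware(plan_hw, api_hw):
    """Return the hardware requirements the provisioner will see."""
    # API requirements win outright; the plan is only a fallback.
    return api_hw if api_hw is not None else plan_hw

plan_hw = {"memory": ">= 4Gi"}
api_hw = {"disk": [{"size": ">= 100Gi"}]}

# The plan's memory requirement is dropped, not merged in.
print(effective_hardware(plan_hw, api_hw))  # {'disk': [{'size': '>= 100Gi'}]}
print(effective_hardware(plan_hw, None))    # {'memory': '>= 4Gi'}
```

Note that the plan's memory requirement disappears entirely once the request supplies any hardware of its own.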
Let’s take this simple plan as an example:
provision:
    how: virtual
    image: Fedora
    hardware:
        memory: '>= 4Gi'
execute:
    how: tmt
And this Testing Farm request using the above plan:
testing-farm request --compose Fedora-Rawhide --hardware 'disk.size=>100Gi'
The provisioned machine will have these properties:
Please note that there is no actual provision step generated. Testing Farm calls Artemis directly; this is just for illustration.
Please note that hardware requirements from the API take precedence. There is no merging with the hardware requirements in the plan.
provision:
    how: artemis
    image: Fedora-Rawhide
    hardware:
        disk:
          - size: '>= 100Gi'
Multi-host Pipeline
This pipeline is run if you request the tmt-multihost pipeline type.
In the multi-host pipeline, the how field is fully respected. Testing Farm uses tmt's --update-missing functionality to fill in missing details from the API provision data. This often requires adjusting the provision plan using context variables so it can run both locally and in the multi-host pipeline.
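The fill-in semantics can be sketched like this; a rough model of the --update-missing behavior described above, not tmt's actual implementation:

```python
# Sketch of tmt's --update-missing behaviour: fields already present in
# the plan are kept, and only missing ones are filled from the API
# provision data. Illustrative only, not tmt's real code.

def update_missing(guest, api_data):
    """Fill in keys absent from the guest with values from the API."""
    merged = dict(api_data)
    merged.update(guest)  # plan values take precedence over API values
    return merged

api_data = {"how": "artemis", "image": "Fedora-Rawhide"}

client = {"name": "client"}                        # everything missing
server = {"name": "server", "image": "Fedora-42"}  # image already set

print(update_missing(client, api_data))
# {'how': 'artemis', 'image': 'Fedora-Rawhide', 'name': 'client'}
print(update_missing(server, api_data))
# {'how': 'artemis', 'image': 'Fedora-42', 'name': 'server'}
```

The client picks up both how and image from the request, while the server keeps its own image and only gains the missing how.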
Consider this simple multi-host plan:
provision:
  - name: client
    how: virtual
    image: Fedora-41
  - name: server
    how: virtual
    image: Fedora-42
execute:
    how: tmt
And this Testing Farm request using the above plan:
testing-farm request --compose Fedora-Rawhide --pipeline-type tmt-multihost
This plan will fail because Testing Farm honors what is in the plan, and the virtual provisioner is not supported on Testing Farm workers. The compose value will also be ignored because the image is already specified in the plan, and that takes precedence.
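A toy validation mirroring this failure mode; the real check lives inside Testing Farm, and the set of supported methods here is an assumption for illustration:

```python
# Toy check mirroring the failure above: the multi-host pipeline rejects
# provision methods that Testing Farm workers cannot run, such as
# 'virtual'. The supported set below is an assumption, not exhaustive.

SUPPORTED_ON_WORKERS = {"artemis"}  # assumed for illustration

def validate_plan(guests):
    """Return an error message for every guest with an unsupported 'how'."""
    errors = []
    for guest in guests:
        how = guest.get("how")
        if how is not None and how not in SUPPORTED_ON_WORKERS:
            errors.append(f"guest '{guest['name']}': provisioner '{how}' "
                          "is not supported on Testing Farm workers")
    return errors

guests = [
    {"name": "client", "how": "virtual", "image": "Fedora-41"},
    {"name": "server", "how": "virtual", "image": "Fedora-42"},
]
for error in validate_plan(guests):
    print(error)
```

A guest with no how at all passes the check, which is exactly why the fix below is to drop the field and let Testing Farm fill it in.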
To fix this, remove the how from the plan; Testing Farm will then apply its own provisioning method (e.g. how: artemis). Also remove the image from every guest that should get it from the request.
provision:
  - name: client
  - name: server
    image: Fedora-42
execute:
    how: tmt
With this change, Testing Farm will populate the missing fields from the API: how for both guests, and image for the client.
provision:
  - name: client
    how: artemis
    image: Fedora-Rawhide
  - name: server
    how: artemis
    image: Fedora-42
execute:
    how: tmt