Test Request
- tmt
- Environment
- Hardware Requirements
- Architecture
- Selection by hostname
- RAM size selection
- Disk size selection
- Selection by TPM version
- Selecting systems by their boot method - BIOS
- Selecting systems by their compatible distro
- Selection by the model name of processor
- Selection by the model of processor
- Selection by the number of processors
- Selecting virtualized guests by their hypervisor
- Selecting virtualized and non-virtualized guests
- Selecting guests with virtualization support
- Selecting guests with a GPU
- Artifact Installation Order
- Settings
- Hardware Requirements
- Pipeline
- Reporting
Requesting testing is easiest via our CLI tool.
tmt
Plan Filter
Testing Farm allows you to use a tmt plan filter.
This feature filters plans with the help of tmt filtering and regular expressions.
The specified plan filter is passed to the tmt plan ls --filter <YOUR-FILTER>
command.
By default, the enabled: true filter is applied.
See the tmt documentation for more information.
Test Filter
Testing Farm allows you to use a tmt test filter.
This feature filters tests in plans with the help of tmt filtering and regular expressions.
The specified test filter is passed to the tmt run discover plan test --filter <YOUR-FILTER>
command.
See the tmt documentation for more information.
Environment
Hardware Requirements
Testing Farm allows users to define hardware requirements for the testing environment. These hardware requirements are used to provision appropriate resources on supported infrastructures.
The CLI examples are shortened for brevity and concentrate only on the hardware selection. Other required options are omitted; add them when running the commands yourself.
The hardware selection is currently supported only on the Red Hat Ranch. Support for the Public Ranch is coming in Q2/2023.
Selection by hostname
Testing Farm provides the ability to provision a guest with a specific hostname. It is also possible to request a hostname matching a filter, which is useful because guests of a similar nature often share a (sub)domain.
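As a sketch following the tmt hardware specification (the domain is illustrative), a plan could pin an exact hostname or match one with a regular expression:

```yaml
provision:
  hardware:
    # exact hostname match; use the "~ <regex>" form, e.g.
    # hostname: "~ .*\.lab\.example\.com", to match a (sub)domain
    hostname: kvm-01.lab.example.com
```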
RAM size selection
Testing Farm provides the ability to provision a guest with a specified amount of RAM. Most often a specific amount of RAM is needed to accommodate a memory-hungry test, making a minimal requirement the most common request.
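Following the tmt hardware specification, a minimal RAM requirement could be sketched as (the amount is illustrative):

```yaml
provision:
  hardware:
    # request a guest with at least 8 GB of RAM
    memory: ">= 8 GB"
```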
Disk size selection
Testing Farm provides the ability to provision a guest with a specified disk size. The guest gets its disk size according to one of the suitable flavors:
- 🎩 Red Hat Ranch flavors (RH only link)
The default disk size is:
- 50 GiB for 🌍 Public Ranch
- 250 GiB for 🎩 Red Hat Ranch
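A sketch following the tmt hardware specification, where disks are expressed as a list (the size is illustrative):

```yaml
provision:
  hardware:
    disk:
      # request at least 100 GB on the first disk
      - size: ">= 100 GB"
```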
Selection by TPM version
Testing Farm provides the ability to provision a guest with a specified Trusted Platform Module (TPM) version.
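Following the tmt hardware specification, a TPM requirement could look like:

```yaml
provision:
  hardware:
    tpm:
      # request a guest with TPM 2.0
      version: "2.0"
```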
Selecting systems by their boot method - BIOS
Testing Farm provides the ability to provision a guest supporting a specific boot method. The most common methods are (legacy) BIOS and UEFI, but some architectures may support their own specific methods as well.
Examples
testing-farm CLI
$ testing-farm request --hardware boot.method='bios'
$ testing-farm request --hardware boot.method='!= bios'
...
{
"environments": [{
"hardware": {
"boot": {
"method": "bios"
}
}
}]
}
...
...
{
"environments": [{
"hardware": {
"boot": {
"method": "!= bios"
}
}
}]
}
...
In tmt plan
provision:
hardware:
boot:
method: "!= bios"
Selecting systems by their compatible distro
Testing Farm provides the ability to provision a guest supporting selected distributions (OS). It is possible to select hardware capable of running a list of selected distributions.
Examples
testing-farm CLI
$ testing-farm request --hardware compatible.distro='rhel-7' --hardware compatible.distro='rhel-8'
This functionality is currently broken in the CLI. See the issue here.
...
{
"environments": [{
"hardware": {
"compatible": {
"distro": [
"rhel-7",
"rhel-8"
]
}
}
}]
}
...
In tmt plan
provision:
hardware:
compatible:
distro:
- rhel-7
- rhel-8
Selection by the model name of processor
Testing Farm provides the ability to provision a guest with a CPU of a particular model name.
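As a sketch following the tmt hardware specification, the model name can be matched with a regular expression (the pattern is illustrative):

```yaml
provision:
  hardware:
    cpu:
      # match any CPU whose model name contains "AMD"
      model-name: "~ .*AMD.*"
```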
Selection by the model of processor
Testing Farm provides the ability to provision a guest with a CPU of a particular model.
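Following the tmt hardware specification, the CPU model is a numeric identifier (the value below is illustrative):

```yaml
provision:
  hardware:
    cpu:
      # request a CPU with this numeric model identifier
      model: 62
```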
Selection by the number of processors
Testing Farm provides the ability to provision a guest with a given (minimal) number of logical processors.
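A sketch following the tmt hardware specification (the count is illustrative):

```yaml
provision:
  hardware:
    cpu:
      # request at least 8 logical processors
      processors: ">= 8"
```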
Selecting virtualized guests by their hypervisor
Testing Farm provides the ability to provision a guest powered by a particular hypervisor.
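Following the tmt hardware specification, the hypervisor is requested by name (kvm is an illustrative value):

```yaml
provision:
  hardware:
    virtualization:
      # request a guest running under KVM
      hypervisor: kvm
```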
Selecting virtualized and non-virtualized guests
Testing Farm provides the ability to provision a guest that is either virtualized or definitely not virtualized.
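As a sketch following the tmt hardware specification:

```yaml
provision:
  hardware:
    virtualization:
      # true requests a virtualized guest; false requests bare metal
      is-virtualized: true
```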
Selecting guests with virtualization support
Testing Farm provides the ability to ask for guests that support or do not support virtualization.
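A sketch following the tmt hardware specification:

```yaml
provision:
  hardware:
    virtualization:
      # request a guest capable of running virtual machines itself
      is-supported: true
```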
Selecting guests with a GPU
Testing Farm provides the ability to ask for guests with a GPU.
GPU selection is currently only supported on the Public Ranch.
Supported GPUs
Currently we support selecting these using the gpu.device-name and gpu.vendor-name attributes.

| vendor-name | device-name |
|---|---|
| NVIDIA | GK210 (Tesla K80) |
| NVIDIA | GV100 (Tesla V100) |
Examples
testing-farm CLI
$ testing-farm request --hardware gpu.device-name="GK210 (Tesla K80)" --hardware gpu.vendor-name="NVIDIA"
...
{
"environments": [{
"hardware": {
"gpu": {
"device-name": "GK210 (Tesla K80)",
"vendor-name": "NVIDIA"
}
}
}]
}
...
tmt plan
provision:
hardware:
gpu:
device-name: GK210 (Tesla K80)
vendor-name: NVIDIA
Artifact Installation Order
While installing artifacts, Testing Farm follows the order described below.
- repository-file - 10
- repository - 20
- fedora-copr-build - 30
- redhat-brew-build - 40
- fedora-koji-build - 50
You can change the order by setting the order parameter of an artifact in the artifacts section of the environments definition.
The installation order runs from the lowest value to the highest.
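For example, to install a Copr build between a fedora-copr-build (30) and a redhat-brew-build (40), one could set its order explicitly. This is a sketch of a request fragment; the artifact id is a hypothetical placeholder:

```json
{
  "environments": [{
    "artifacts": [
      { "type": "fedora-copr-build", "id": "1234567:fedora-38-x86_64", "order": 45 }
    ]
  }]
}
```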
Settings
Various per-environment settings.
Provisioning
Post Installation Script
The post-installation script provides a way to inject user-defined data into the instance startup. The usage depends on the pool used.
- For the cloud providers AWS, OpenStack, Azure and IBM Cloud it can be a shell script or a cloud-init configuration.
- For Beaker it can be a shell script executed after the first boot of the system.
For the Public Ranch, Testing Farm by default injects the following post-install script to unblock the root user from logging into the system.
If you set a post-install script, it will override the default. Please make sure you include this snippet, or a cloud-init equivalent, otherwise Testing Farm will not be able to connect to the instance and will error out.
Examples
testing-farm CLI
$ testing-farm request --post-install-script="#!/bin/sh\nsed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys"
...
{
"environments": [{
"settings": {
"provisioning": {
"post-install-script": "#!/bin/sh\nsed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys"
}
}
}]
}
...
tmt plan
Not supported.
Tags
Testing Farm supports multiple special purpose tags to adjust the provisioning settings for the testing environment.
- ArtemisUseSpot (boolean) - Controls whether to use spot instances. If set to true, force usage of spot instances; provisioning pools that do not support spot instances are ignored. If set to false, disallow usage of spot instances; ignored if spot instances are not supported. By default, if not specified, spot instances are used if available for the chosen infrastructure. Spot instances are currently supported only for AWS provisioning pools. Supported only on the Red Hat ranch.
- ArtemisOneShotOnly (boolean) - Controls whether provisioning errors should be retried. If set to true, do not retry recoverable provisioning errors. Takes effect once a suitable pool is found; routing will always be retried. If set to false, retry recoverable provisioning errors. Set to false by default. Note that Artemis retries recoverable errors for several hours before giving up.
- BusinessUnit (string) - Used for Cloud Costs reporting. Can be set to any string. Set by default to the Testing Farm token name. Supported only on the Red Hat ranch.
Examples
testing-farm CLI
$ testing-farm request --tag ArtemisUseSpot=false --tag ArtemisOneShotOnly=true --tag BusinessUnit=MyTeam
...
{
"environments": [{
"settings": {
"provisioning": {
"tags": {
"ArtemisUseSpot": "false",
"ArtemisOneShotOnly": "true",
"BusinessUnit": "MyTeam"
}
}
}
}]
}
...
tmt plan
Not supported.
Pipeline
Timeout
Testing Farm allows users to set the timeout for their requests in minutes. The default is 720 (12 hours). If a job is expected to take only a short time, setting a shorter timeout benefits the user: they will not wait long to time out on infrastructure issues, for example.
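Assuming the timeout lives under settings.pipeline in the request JSON (an assumption; verify the field name against the Testing Farm API reference), a 60-minute timeout could be sketched as:

```json
{
  "settings": {
    "pipeline": {
      "timeout": 60
    }
  }
}
```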
Parallelization
Testing Farm by default runs plans in parallel. The maximum number of plans run in parallel is by default set to these values:
- 12 for Public ranch
- 5 for Red Hat ranch
The defaults can be overridden.
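A sketch of overriding the default, assuming a parallel-limit field under settings.pipeline (an assumption; verify the field name against the Testing Farm API reference):

```json
{
  "settings": {
    "pipeline": {
      "parallel-limit": 2
    }
  }
}
```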
Cancelling a Request
In some cases you may want to prematurely cancel your request. Cancelling is straightforward.
Use the cancel CLI command and provide the request ID or a string containing it.
testing-farm CLI
$ testing-farm cancel 9baab88b-aca6-4652-ad93-8d954e109a25
$ testing-farm cancel https://api.testing-farm.io/v0.1/requests/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e
$ testing-farm cancel https://artifacts.dev.testing-farm.io/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e
If you prefer using our API, submit a request via the DELETE method of the requests endpoint and pass your token in the body of the request.
http DELETE https://api.testing-farm.io/v0.1/requests/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e api_key=<YOUR_API_KEY>
Multihost Testing
Since release 2023-10.1, Testing Farm supports tmt
multihost testing.
This feature is introduced as a new Testing Farm pipeline, and it is not used by default.
Enable Multihost Pipeline
Multihost pipeline is currently opt-in using a feature flag in the test request. To enable it, the user has to fill the following field in the request JSON:
...
{
"settings": {
"pipeline": {
"type": "tmt-multihost"
}
}
}
...
The CLI supports this via the option --pipeline-type tmt-multihost, which is available for the request and restart commands:
testing-farm CLI
$ testing-farm request --pipeline-type tmt-multihost --git-url https://gitlab.com/testing-farm/tests --plan /testing-farm/multihost --compose Fedora-Rawhide
$ testing-farm restart --pipeline-type tmt-multihost REQUEST_ID
Feel free to submit a request yourself using the command above to try it out!
Current Limitations
- In the Testing Farm API, the fields pool, hardware, artifacts, settings, and kickstart from the environments are ignored.
- Test environment preparation is not performed.
- Multihost testing is not available on the Public Ranch; use the Red Hat Ranch for testing.
Reporting
Testing Farm supports reporting test results to external systems.
These reporting features are implemented as tmt plugins, and Testing Farm only provides a way to safely pass the environment variables required for these plugins to work.
The environment variables are passed directly to the tmt process and are explicitly allowlisted in the Testing Farm configuration.
To set the tmt plugin environment variables in Testing Farm, use the API or the testing-farm CLI tool.
{
...
"environments": [
{
"tmt": {
"environment": {
"VAR1": "VALUE1",
"VAR2": "VALUE2"
}
}
}
]
...
}
testing-farm CLI
$ testing-farm request --tmt-environment VAR1=VALUE1 --tmt-environment VAR2=VALUE2 ...
Polarion
To report to Polarion, make sure your plan has the polarion report plugin enabled.
report:
how: polarion
See the polarion report plugin documentation for more information.
Specify the following environment variables:
POLARION_REPO=https://polarion.example.com/repo
POLARION_URL=https://polarion.example.com/polarion
POLARION_PROJECT=your-project
POLARION_USERNAME=your-username
POLARION_PASSWORD=your-password
ReportPortal
To report to ReportPortal, make sure your plan has the reportportal report plugin enabled.
report:
how: reportportal
project: your-project
See the reportportal report plugin documentation for more information.
Specify the following environment variables:
TMT_PLUGIN_REPORT_REPORTPORTAL_URL=https://reportportal.example.com
TMT_PLUGIN_REPORT_REPORTPORTAL_TOKEN=your-token
The variables listed below are optional and are typically configured by users directly within their tmt
metadata:
TMT_PLUGIN_REPORT_REPORTPORTAL_PROJECT
TMT_PLUGIN_REPORT_REPORTPORTAL_LAUNCH
TMT_PLUGIN_REPORT_REPORTPORTAL_LAUNCH_DESCRIPTION