Test Request

Submitting

The easiest way to request testing is via our CLI tool. You can also use our API directly.
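For example, a minimal request could look like the following (the repository URL and compose are illustrative; substitute your own test repository and any other options your request needs):

```shell
$ testing-farm request --git-url https://gitlab.com/testing-farm/tests --compose Fedora-Rawhide
```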

Cancelling

In some cases you may want to cancel your request before it finishes. Cancelling is straightforward.

Use the cancel CLI command and provide the request ID or a string containing it.

testing-farm CLI
$ testing-farm cancel 9baab88b-aca6-4652-ad93-8d954e109a25
$ testing-farm cancel https://api.testing-farm.io/v0.1/requests/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e
$ testing-farm cancel https://artifacts.dev.testing-farm.io/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e

If you prefer using our API, submit a request via the DELETE method of the requests endpoint and pass your token in the request headers.

Testing Farm API
http -A bearer -a <YOUR_API_KEY> DELETE https://api.testing-farm.io/v0.1/requests/a0f18d55-2dd5-466d-b2b8-6bd4a60ca12e

Reserve After Testing

Starting with release 2025-01.1, Testing Farm can reserve a machine after testing. This is supported for the request and restart commands.

Currently, reservations with autoconnect are supported only for a single plan. Restart the request with a single plan if you need reservations.

The reservation will insert the test /testing-farm/reserve-system from https://gitlab.com/testing-farm/tests into the running plan.

testing-farm CLI
$ testing-farm request --compose Fedora-Rawhide --reserve
$ testing-farm restart --reserve https://artifacts.dev.testing-farm.io/de0f4294-be6d-405f-b41c-15358e5dca57
Testing Farm API
Not supported.

Tmt Plan Filter

Testing Farm allows using a tmt plan filter. This feature lets you filter plans with the help of tmt filtering and regular expressions. The specified plan filter is used in the tmt plan ls --filter <YOUR-FILTER> command. By default, the enabled: true filter is applied. See the tmt documentation for more information.

With testing-farm CLI
$ testing-farm request --plan-filter "tag: tier1"
In Testing Farm API:
...
{
    "test": {
        "tmt": {
            "plan_filter": "tag: tier1"
        }
    }
}
...

Tmt Test Filter

Testing Farm allows using a tmt test filter. This feature lets you filter tests in plans with the help of tmt filtering and regular expressions. The specified test filter is used in the tmt run discover plan test --filter <YOUR-FILTER> command. See the tmt documentation for more information.

With testing-farm CLI
$ testing-farm request --test-filter "tag: tier1"
In Testing Farm API:
...
{
    "test": {
        "tmt": {
            "test_filter": "tag: tier1"
        }
    }
}
...

Hardware Requirements

Testing Farm allows users to define hardware requirements for the testing environment. These hardware requirements are used to provision appropriate resources on supported infrastructures.

The CLI examples are shortened for brevity and concentrate only on the hardware selection. You will need to supply the other required options when using them.

The hardware selection is currently supported only on the Red Hat Ranch. Support for Public Ranch is coming in Q2/2023.

Architecture

Testing Farm provides an ability to provision a guest with a given architecture.

With testing-farm CLI
$ testing-farm request --arch x86_64
In Testing Farm API:
...
{
    "environments": [{
        "arch": "x86_64"
    }]
}
...

In the tmt plan, the arch field is ignored and replaced by the arch from the Testing Farm environments specification. We plan to fix this problem in the future.

Selection by hostname

Testing Farm provides an ability to provision a guest with a specific hostname. It is also possible to request a hostname matching a filter, which is useful because guests of a similar nature often share a (sub)domain.

With testing-farm CLI
$ testing-farm request --hardware hostname='=~ sheep.+'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "hostname": "=~ sheep.+"
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    hostname: "=~ sheep.+"

RAM size selection

Testing Farm provides an ability to provision a guest with a specified amount of RAM. Most often, a specific amount of RAM is needed to accommodate a memory-hungry test, making a minimum requirement (>=) the most common form.

With testing-farm CLI
$ testing-farm request --hardware memory='>= 8 GB'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "memory": ">= 8 GB"
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    memory: ">= 8 GB"

Disk size selection

Testing Farm provides an ability to provision a guest with a specified disk size. The guest will get the disk size according to one of the suitable flavors:

The default disk size is:

  • 50 GiB for 🌍 Public Ranch

  • 250 GiB for 🎩 Red Hat Ranch

With testing-farm CLI
$ testing-farm request --hardware disk.size='>= 80 GB'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "disk": [{
                "size": ">= 80 GB"
            }]
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    disk:
      - size: ">= 80 GB"

Selection by TPM version

Testing Farm provides an ability to provision a guest with a specified Trusted Platform Module (TPM) version.

With testing-farm CLI
$ testing-farm request --hardware tpm.version=2
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "tpm": {
                "version": "2"
            }
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    tpm:
      version: 2

Selecting systems by their boot method - BIOS

Testing Farm provides an ability to provision a guest supporting a specific boot method. The most common ones are (legacy) BIOS and UEFI, but some architectures may support their own specific methods as well.

With testing-farm CLI
$ testing-farm request --hardware boot.method='bios'
$ testing-farm request --hardware boot.method='!= bios'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "boot": {
                "method": "bios"
            }
        }
    }]
}
...
...
{
    "environments": [{
        "hardware": {
            "boot": {
                "method": "!= bios"
            }
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    boot:
      method: "!= bios"

Selecting systems by their compatible distro

Testing Farm provides an ability to provision a guest supporting selected distributions (OS). It is possible to select hardware that is able to run a list of selected distributions.

With testing-farm CLI
$ testing-farm request --hardware compatible.distro='rhel-7' --hardware compatible.distro='rhel-8'

This functionality is currently broken in the CLI. See the issue here.

In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "compatible": {
                "distro": [
                    "rhel-7",
                    "rhel-8"
                ]
            }
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    compatible:
      distro:
        - rhel-7
        - rhel-8

Selection by the model name of processor

Testing Farm provides an ability to provision a guest with a CPU of a particular model name.

With testing-farm CLI
$ testing-farm request --hardware cpu.model-name='=~ Intel Xeon'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "cpu": {
                "model-name": "=~ Intel Xeon"
            }
        }
    }]
}
...

In tmt plan

provision:
  hardware:
    cpu:
      model-name: "=~ Intel Xeon"

Selection by the model of processor

Testing Farm provides an ability to provision a guest with a CPU of a particular model.

With testing-farm CLI
$ testing-farm request --hardware cpu.model='1'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "cpu": {
                "model": "1"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    cpu:
      model: 1

Selection by the number of processors

Testing Farm provides an ability to provision a guest with a given (minimal) number of logical processors.

With testing-farm CLI
$ testing-farm request --hardware cpu.processors='>= 4'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "cpu": {
                "processors": ">= 4"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    cpu:
      processors: ">= 4"

Selecting virtualized guests by their hypervisor

Testing Farm provides an ability to provision a guest that is powered by a particular hypervisor.

With testing-farm CLI
$ testing-farm request --hardware virtualization.hypervisor="!= hyperv"
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "virtualization": {
                "hypervisor": "!= hyperv"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    virtualization:
      hypervisor: "!= hyperv"

Selecting virtualized and non-virtualized guests

Testing Farm provides an ability to provision a guest that is either virtualized or guaranteed not to be virtualized.

With testing-farm CLI
$ testing-farm request --hardware virtualization.is-virtualized='false'
In Testing Farm API:
...
{
    "environments": [{
        "hardware": {
            "virtualization": {
                "is-virtualized": "false"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    virtualization:
      is-virtualized: false

Selecting guests with virtualization support

Testing Farm provides an ability to ask for guests which support or do not support virtualization.

With testing-farm CLI
$ testing-farm request --hardware virtualization.is-supported='true'
In Testing Farm API
...
{
    "environments": [{
        "hardware": {
            "virtualization": {
                "is-supported": "true"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    virtualization:
      is-supported: true

Selecting guests with a GPU

Testing Farm provides an ability to ask for guests with a GPU.

GPU selection is currently supported only on the Public Ranch. If you want to see it on the Red Hat Ranch, please file an issue.

Currently we support selecting these using the gpu.device-name and gpu.vendor-name attributes.

vendor-name   device-name
NVIDIA        GK210 (Tesla K80)
NVIDIA        GV100 (Tesla V100)

With testing-farm CLI
$ testing-farm request --hardware gpu.device-name="GK210 (Tesla K80)" --hardware gpu.vendor-name="NVIDIA"
In Testing Farm API
...
{
    "environments": [{
        "hardware": {
            "gpu": {
                "device-name": "GK210 (Tesla K80)",
                "vendor-name": "NVIDIA"
            }
        }
    }]
}
...
In tmt plan
provision:
  hardware:
    gpu:
      device-name: GK210 (Tesla K80)
      vendor-name: NVIDIA

Modifying Tmt Steps

Testing Farm allows modifying the discover, prepare and finish tmt step phases. This can be useful to insert a new phase into the tmt execution or to update an existing one. Multiple modifications are supported per request.

For more information see the official tmt documentation on multiple phases.

With testing-farm CLI
$ testing-farm request --tmt-discover="--insert --how fmf --url https://gitlab.com/testing-farm/tests --test /testing-farm/reboot" \
                       --tmt-prepare='--insert --how shell --script "echo prepare1"' \
                       --tmt-prepare='--insert --how shell --script "echo prepare2"' \
                       --tmt-finish='--insert --how shell --script "echo finish"'
In Testing Farm API:
...
{
    "environments": [{
        "tmt": {
            "extra_args": {
                "discover": [
                    "--insert --how fmf --url https://gitlab.com/testing-farm/tests --test /testing-farm/reboot"
                ],
                "prepare": [
                    "--insert --how shell --script \"echo prepare1\"",
                    "--insert --how shell --script \"echo prepare2\""
                ],
                "finish": [
                    "--insert --how shell --script \"echo finish\""
                ]
            }
        }
    }]
}
...

Artifact Installation Order

While installing artifacts, Testing Farm follows the order described below.

  • repository-file - 10

  • repository - 20

  • fedora-copr-build - 30

  • redhat-brew-build - 40

  • fedora-koji-build - 50

You can change the order by setting the order parameter of an artifact in the artifacts section of the environments definition. Installation proceeds from the lowest order value to the highest.
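For example, to install a Copr build before a repository (default order 20), you could lower its order below 20. This is a sketch; the artifact id value is illustrative:

```json
{
    "environments": [{
        "artifacts": [
            {
                "type": "fedora-copr-build",
                "id": "1234567:fedora-rawhide-x86_64",
                "order": 15
            }
        ]
    }]
}
```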

Post Installation Script

The post-installation script provides a way to inject user-defined data into the instance startup. The usage depends on the pool used.

  • For the cloud providers AWS, OpenStack, Azure and IBM Cloud it can be a shell script or a cloud-init configuration.

  • For Beaker it can be a shell script executed after the first boot of the system.

For the Public Ranch, Testing Farm injects the following post-install script by default to unblock the root user from logging into the system.

#!/bin/sh
sed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys

If you set a post-install script, it overrides the default. Make sure you include this snippet, or a cloud-init equivalent; otherwise Testing Farm will not be able to connect to the instance and will error out.
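For illustration, a cloud-init equivalent of the default snippet might look like this (a sketch, not an official Testing Farm configuration):

```yaml
#cloud-config
# Unblock the root user's authorized_keys on first boot (illustrative).
runcmd:
  - sed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys
```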

Do not use the post-install script for test environment setup. Errors there are hard to discover, and a failed post-install script can go unnoticed.

With testing-farm CLI
$ testing-farm request --post-install-script="#!/bin/sh\nsed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys"
In Testing Farm API
...
{
    "environments": [{
        "settings": {
            "provisioning": {
                "post-install-script": "#!/bin/sh\nsed -i 's/.*ssh-rsa/ssh-rsa/' /root/.ssh/authorized_keys"
            }
        }
    }]
}
...
In tmt plan

Not supported.

Provisioning Tags

You can set additional tags for the resources created by the provisioner. These tags will be used to tag public cloud instances if the provider supports them. In the case of Beaker, the tags are added as task parameters ARTEMIS_TAG_<key>=<value>.

Testing Farm also supports multiple special purpose tags to adjust the provisioning settings for the testing environment. See the next sections for details.

Dedicated Instances

Testing Farm by default uses spot instances, if available, to reduce cloud costs. This can cause issues with long-running requests, because spot instances can be reclaimed early at the provider's discretion. When a spot instance is reclaimed early, the testing usually errors out while connecting back to the instance via SSH. To avoid this problem, you can ask for dedicated instances using the following provisioning tag:

  • ArtemisUseSpot (boolean)

If set to true, force usage of spot instances; provisioning pools that do not support spot instances are ignored. If set to false, disallow usage of spot instances; ignored if spot instances are not supported. By default, if not specified, spot instances are used when available for the chosen infrastructure. Spot instances are currently supported only for AWS provisioning pools. Supported only on the Red Hat Ranch.

With testing-farm CLI
$ testing-farm request --tag ArtemisUseSpot=false
In Testing Farm API
...
{
    "environments": [{
        "settings": {
            "provisioning": {
                "tags": {
                    "ArtemisUseSpot": "false"
                }
            }
        }
    }]
}
...
In tmt plan

Not supported.

Cloud Costs

Testing Farm provides cloud costs reporting, see Cloud Costs for details. You can easily group together your costs by using the following tag:

  • BusinessUnit (string)

Can be set to any string. Set by default to the Testing Farm token name when unset. Supported only on Red Hat ranch.

With testing-farm CLI
$ testing-farm request --tag BusinessUnit=MyGroup
In Testing Farm API
...
{
    "environments": [{
        "settings": {
            "provisioning": {
                "tags": {
                    "BusinessUnit": "MyGroup"
                }
            }
        }
    }]
}
...
In tmt plan

Not supported.

Provisioning Error Retries

Testing Farm by default retries provisioning errors for several hours to deal with infrastructure hiccups. You can control the enablement of these retries using the following tag:

  • ArtemisOneShotOnly (boolean)

If set to true, do not retry recoverable provisioning errors. Has an effect once a suitable pool is found. Routing will always be retried. If set to false, retry recoverable provisioning errors. By default set to false.

With testing-farm CLI
$ testing-farm request --tag ArtemisOneShotOnly=true
In Testing Farm API
...
{
    "environments": [{
        "settings": {
            "provisioning": {
                "tags": {
                    "ArtemisOneShotOnly": "true"
                }
            }
        }
    }]
}
...
In tmt plan

Not supported.

Pipeline Timeout

Testing Farm allows users to set the timeout for their requests in minutes. The default is 720 (12 hours). If a job is expected to finish quickly, setting a shorter timeout benefits the user: they will not, for example, wait hours only to time out on infrastructure issues.

With testing-farm CLI
$ testing-farm request --timeout=20
In Testing Farm API:
...
{
    "settings": {
        "pipeline": {
            "timeout": 20
        }
    }
}
...

Parallelization

Testing Farm by default runs plans in parallel. The maximum number of plans run in parallel is by default set to these values:

  • 12 for Public ranch

  • 5 for Red Hat ranch

The defaults can be overridden.

With testing-farm CLI
$ testing-farm request --parallel-limit 10
In Testing Farm API:
...
{
    "settings": {
        "pipeline": {
            "parallel-limit": 10
        }
    }
}
...

Multihost Testing

Since release 2023-10.1, Testing Farm supports tmt multihost testing. This feature is introduced as a new Testing Farm pipeline, and it is not used by default.

Enable Multihost Pipeline

Multihost pipeline is currently opt-in using a feature flag in the test request. To enable it, the user has to fill the following field in the request JSON:

The multihost pipeline runs plans with tmt run provision --update-missing, which updates the image in the plan's provision step only if it is missing. Make sure your plans are written accordingly.

Testing Farm API
...
{
    "settings": {
        "pipeline": {
            "type": "tmt-multihost"
        }
    }
}
...

The CLI supports this via the option --pipeline-type tmt-multihost, which is available for request and restart commands:

testing-farm CLI
$ testing-farm request --pipeline-type tmt-multihost --git-url https://gitlab.com/testing-farm/tests --plan /testing-farm/multihost --compose Fedora-Rawhide
$ testing-farm restart --pipeline-type tmt-multihost REQUEST_ID

Feel free to submit a request yourself using the command above to try it out!

Current Limitations

  • In the Testing Farm API, the fields pool, hardware, artifacts, settings, and kickstart from the environments are ignored.

  • Test environment preparation is not performed.

  • Multihost testing is not available in the Public Ranch; use the Red Hat Ranch for testing.

Tmt Process Environment Variables

Testing Farm supports reporting test results to external systems.

This feature is available only for tmt tests.

These reporting features are implemented as tmt plugins, and Testing Farm only provides a way to safely pass the environment variables required for these plugins to work. The environment variables are passed directly to the tmt process and are explicitly allowlisted in the Testing Farm configuration.

Do not confuse the tmt plugin environment variables with test environment variables. They are two distinct sets of environment variables for different purposes.

To set the tmt plugin environment variables in Testing Farm use the API or the testing-farm CLI tool.

Testing Farm API
{
  ...
  "environments": [
    {
      "tmt": {
        "environment": {
          "VAR1": "VALUE1",
          "VAR2": "VALUE2"
        }
      }
    }
  ]
  ...
}
testing-farm CLI
$ testing-farm request --tmt-environment VAR1=VALUE1 --tmt-environment VAR2=VALUE2 ...

Test Environment Secrets

Testing Farm supports injecting sensitive environment variables into the tmt environment. The difference from test environment variables is that these variables are censored in all artifacts and logs produced by Testing Farm.

If tests expose secrets in logs, the secrets must appear exactly as provided, since Testing Farm uses the sed command to replace exact occurrences of the secrets in all produced artifacts; transformed secrets cannot be censored and will be revealed.
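The censoring mechanism can be illustrated with a plain sed call (a sketch of the principle, not Testing Farm's exact command):

```shell
# Replace every exact occurrence of the secret value with a mask,
# mirroring how Testing Farm censors produced artifacts (illustrative).
echo "connecting with token=SECRET_VALUE1" | sed 's/SECRET_VALUE1/*****/g'
# → connecting with token=*****
```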

Currently there are two methods of injecting secrets into a Testing Farm request.

Adding secrets when submitting a new test request

Testing Farm API
{
  ...
  "environments": [
    {
      "secrets": {
          "KEY1": "SECRET_VALUE1",
          "KEY2": "SECRET_VALUE2"
      }
    }
  ]
  ...
}
testing-farm CLI
$ testing-farm request --secret KEY1=SECRET_VALUE1 --secret KEY2=SECRET_VALUE2 ...

Adding secrets via in-repository configuration

Since the 2025-01.1 release, which implements a part of RFD2, credentials can be stored publicly in the tested git repository. These credentials are encrypted with RSA using a key pair that is generated by and stored in Testing Farm. The RSA key pair is unique for each git repository and each Testing Farm token to avoid the possibility of stealing secrets.

To use this method of secrets injection, the user has to perform two steps:

  1. Encrypt a secret value using the CLI encrypt command:

testing-farm CLI
$ testing-farm encrypt --git-url https://gitlab.com/testing-farm/tests "secret message"
🔒 Encrypting secret for your token in repo https://gitlab.com/testing-farm/tests
"83ba2098-0902-494f-8381-fd33bdd2b3b4,VnH/dLzVFdtgYJqSYYzpLJUNeQqWhna6dNXWBy4NiHJgIDTyt/IdTCPT4uXE1DAyMgJSKMXU5hYL8y+Kogs643E37NDjGRJIU/oMM+EQ80x2GCJsl3XRsfw1Ng7sBeNY4nxdh5SMKm0k1yPc4HPQ15N/VR34ar22WCtXS/DQG6Iuc/3bP9S2e3Wvt470/D5h8DRqfsL2/AdalgpIqmSREE7GmOlXm8kcctgD5Uuo+5Zgh4bgpKSYtCz2EGr8i83bMXW3Mfa2htK803iOjFMJet01cQS3AUETFA2g9/XmeJOHkrPFO9cWzjpadY8H6w8HV4HYtGzjsppsSLZsKPzj7ofn2R67YGX/eUZGcqjygqwWWiIz9DQ1i+hvPmlzFtzByH1pTDEzrNSqFdsg2MlB6Wk4fnZcHzCy3xDVkXJfXgY/No0yFlGPoi2wjNRxdFcnb+bsLQRhGzEGj4G/R84jgvXOzjrDAEcfHoIq4untdt7nbEghF6iYtkevSns1UPSVttnUnEKlkZX26BZoZglQOrrrxl3/s6XRAcmj8/p/uaKqrTcaRzfFgvsts0z6tDNnsToCtFq80UsFZz1li2y+6e2n9OviNHjzgCRmrtQuVJrJUkQwKdx8ybW5dxSXbS+1/q/tF4JdccIKnNViuifggcH0H6n5q39AY2l+gB0TslU="
  2. Store the encrypted secret value in the .testing-farm.yaml file in the root of the git repository.

Contents of .testing-farm.yaml
version: 1

environments:
  secrets:
    SECRET_KEY: "83ba2098-0902-494f-8381-fd33bdd2b3b4,VnH/dLzVFdtgYJqSYYzpLJUNeQqWhna6dNXWBy4NiHJgIDTyt/IdTCPT4uXE1DAyMgJSKMXU5hYL8y+Kogs643E37NDjGRJIU/oMM+EQ80x2GCJsl3XRsfw1Ng7sBeNY4nxdh5SMKm0k1yPc4HPQ15N/VR34ar22WCtXS/DQG6Iuc/3bP9S2e3Wvt470/D5h8DRqfsL2/AdalgpIqmSREE7GmOlXm8kcctgD5Uuo+5Zgh4bgpKSYtCz2EGr8i83bMXW3Mfa2htK803iOjFMJet01cQS3AUETFA2g9/XmeJOHkrPFO9cWzjpadY8H6w8HV4HYtGzjsppsSLZsKPzj7ofn2R67YGX/eUZGcqjygqwWWiIz9DQ1i+hvPmlzFtzByH1pTDEzrNSqFdsg2MlB6Wk4fnZcHzCy3xDVkXJfXgY/No0yFlGPoi2wjNRxdFcnb+bsLQRhGzEGj4G/R84jgvXOzjrDAEcfHoIq4untdt7nbEghF6iYtkevSns1UPSVttnUnEKlkZX26BZoZglQOrrrxl3/s6XRAcmj8/p/uaKqrTcaRzfFgvsts0z6tDNnsToCtFq80UsFZz1li2y+6e2n9OviNHjzgCRmrtQuVJrJUkQwKdx8ybW5dxSXbS+1/q/tF4JdccIKnNViuifggcH0H6n5q39AY2l+gB0TslU="

Only the values of secrets are encrypted; the keys should not contain any sensitive information.

When the user submits a Testing Farm request with the same token that requested the secrets encryption, the secrets will be injected into the tmt environment.

The steps above describe the case where a user submits requests on their own behalf. When a third party submits requests on the user's behalf, the process is modified as described below.

If a third party (e.g. Packit) is submitting Testing Farm requests on behalf of the user, the user needs to provide the third party's API Token ID to testing-farm encrypt via the --token-id option.

testing-farm CLI
$ testing-farm encrypt --git-url https://gitlab.com/testing-farm/tests --token-id c8adc4cd-6a4d-4d58-acf4-2ef599405f2b "secret message"
🔒 Encrypting secret for token id c8adc4cd-6a4d-4d58-acf4-2ef599405f2b in repo https://gitlab.com/testing-farm/tests
"c8adc4cd-6a4d-4d58-acf4-2ef599405f2b,TsVjDwttR/Icc5NtOsWMjo7K7rxt8Cpnck5MFVueHNF+71aGfI4YS+szbahNvqn/Ecn0do1G8qDKGrIvx6PNKVgDWuK4R04m+QyxDrup3dzBvBsiaEkklhPClKZWHgSCWrtDRJYP/0GXuY/SCBeyURaTy9Uc3M7ne4lLacxpm5zunW7l0u+0I+lhttSOLK6zZ0+bhlkY+HMo0xiqv2OBILZ0FQ+xT2vvH66W8+0GOQgcGuQdZZlRVjTm7SumVGJ9K6aAJ+B/S2OtMqfMxGUtGG8ZBYxWZDeyyoPYnkzxnf9A11uSWU/C6nTISHtQod0ztkn8bVgS2fYogpKsE9qmLO7O0v4XvQE1JBhDAeIEW40v00Uq866BeHOSUaTl79z7PicybBex+TrO4hX5LMqKV60oe1LOlFbuo6hHGoH4HaXmhcKvAPxNu+PpgfxfkJAlGgwMnC7uv/kSVzRvBzRZ73MjECRksTkEDcUnX5L5kwiTTl3PPYDpK/ntzSRvvD64IvmJkZVo9nX6pglZY5DDUSriCju7HINM3qmVNho8d4bEDX9mXl8jZJKxiNUo0tVxqNxdiYNu0nS0BjAlzA8umQKr9p3lI2HpIB1dbS+jdsBp3KDWwxhd106C/430nq1SqUc1in0jQBgFzgUHzis0tL7oIC5k1k3k37r5kFsW/Zs="

The secrets will be injected when a request is submitted with an API Token that corresponds to the submitted Token ID.

When the encrypted secrets are invalid, they will be ignored by Testing Farm.

Report to Polarion

To report to Polarion, make sure your plan has the polarion report plugin enabled.

report:
    how: polarion

See the polarion report plugin documentation for more information.

Specify the following environment variables:

POLARION_REPO=https://polarion.example.com/repo
POLARION_URL=https://polarion.example.com/polarion
POLARION_PROJECT=your-project
POLARION_USERNAME=your-username
POLARION_PASSWORD=your-password

Report to ReportPortal

To report to ReportPortal, make sure your plan has the reportportal report plugin enabled.

report:
    how: reportportal
    project: your-project

See the reportportal report plugin documentation for more information.

Specify the following environment variables:

TMT_PLUGIN_REPORT_REPORTPORTAL_URL=https://reportportal.example.com
TMT_PLUGIN_REPORT_REPORTPORTAL_TOKEN=your-token

The variables listed below are optional and are typically configured by users directly within their tmt metadata:

TMT_PLUGIN_REPORT_REPORTPORTAL_PROJECT
TMT_PLUGIN_REPORT_REPORTPORTAL_LAUNCH
TMT_PLUGIN_REPORT_REPORTPORTAL_LAUNCH_DESCRIPTION