Modern CI pipelines in this repository classify runners into two categories to control cost and coverage. Understanding the difference allows you to target expensive resources only when necessary and keep feedback cycles fast.
**Standard runners**

- Environment – GitHub-hosted virtual machines such as `ubuntu-latest` or `windows-latest`.
- Use cases – Linting, unit tests and any step that can run on a clean ephemeral image.
- Configuration – In `requirements.json`, omit `runner_type` or set it to `standard`. Workflows simply use `runs-on: ubuntu-latest` or similar.
- Characteristics – High concurrency, minimal boot time and no persistent state. Ideal for rapid validation.
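A job on a standard runner needs no special labeling. The following is a minimal sketch of such a workflow; the job name and `make` targets are illustrative assumptions, not taken from this repository:

```yaml
# Hypothetical fast-feedback workflow on a GitHub-hosted (standard) runner.
name: validate
on: [push, pull_request]

jobs:
  lint-and-unit:
    runs-on: ubuntu-latest   # clean, ephemeral image; high concurrency
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: make lint       # assumes the repo exposes a `lint` target
      - name: Unit tests
        run: make test       # fast, hermetic tests only
```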
**Integration runners**

- Environment – Long-lived machines with preinstalled tooling such as LabVIEW, g-cli and hardware drivers. They may be self-hosted or specialized GitHub images.
- Use cases – End‑to‑end scenarios that interact with external systems, require licensed software or need deterministic state.
- Configuration – Tag the runner with `runner_type: "integration"` in `requirements.json` and reference the runner by label in workflows. Integration entries often set `skip_dry_run: true` to force real execution.
- Characteristics – Limited availability and higher cost; jobs are serialized to protect shared resources. Treat these runners as scarce infrastructure.
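As a sketch, a workflow job that targets an integration runner by label could look like the following. The `integration` label, job name, and the exact g-cli invocation are placeholders for illustration:

```yaml
# Hypothetical end-to-end job pinned to a self-hosted integration runner.
jobs:
  hardware-e2e:
    runs-on: [self-hosted, integration]  # must match the runner's configured labels
    concurrency:
      group: integration-rig             # serialize access to the shared hardware
      cancel-in-progress: false
    steps:
      - uses: actions/checkout@v4
      - name: Run end-to-end suite
        run: g-cli run-tests e2e         # placeholder command; assumes g-cli is preinstalled
```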
The `requirements.json` file defines which runner each test or requirement targets:
```json
{
  "runners": {
    "ubuntu-latest": {
      "runner_label": "ubuntu-latest",
      "runner_type": "integration"
    },
    "windows-latest": {
      "runner_label": "windows-latest"
      /* implicit runner_type: "standard" */
    }
  },
  "requirements": [
    {
      "id": "REQ-009",
      "runner": "ubuntu-latest",
      "tests": ["Build.Workflow"]
    }
  ]
}
```

The summarizer partitions results by `runner_type`, producing artifacts such as `summary-integration.md` alongside `summary-standard.md`. This separation keeps integration evidence distinct from fast feedback produced on default runners.
- Minimize integration usage. Start with standard runners and move tests to integration environments only when they require external dependencies or state.
- Isolate heavy workflows. Place integration jobs in separate stages or repositories to avoid blocking quick validation paths.
- Protect self-hosted runners. Apply concurrency limits and explicit `needs` chains so multiple integration jobs do not compete for the same hardware.
- Audit runner labels. Keep `requirements.json` synchronized with the actual fleet of self-hosted machines. Stale labels lead to idle jobs.
- Document expectations. When adding new requirements or workflows, update `docs/requirements.md` so the **Runner** and **Runner Type** columns reflect the intended infrastructure.
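The concurrency and `needs` guidance above can be sketched in workflow terms. Job names and the concurrency group are illustrative assumptions:

```yaml
# Hypothetical chaining of integration jobs so they never run in parallel
# against the same shared hardware.
jobs:
  integration-a:
    runs-on: [self-hosted, integration]
    concurrency:
      group: shared-test-rig      # only one job in this group runs at a time
    steps:
      - run: echo "first integration suite"

  integration-b:
    runs-on: [self-hosted, integration]
    needs: integration-a          # explicit ordering: waits for integration-a
    concurrency:
      group: shared-test-rig      # same group further guards the shared rig
    steps:
      - run: echo "second integration suite"
```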
By explicitly classifying jobs, teams can scale the project efficiently—routine tasks remain fast on default runners while integration tests validate real‑world behavior without overloading scarce resources.