CI/Testing Workflow 🧪

Continuous Integration (CI) is essential for maintaining high-quality code by catching issues early. This guide covers CI testing and quality checks for Ultralytics projects.

CI Actions 🔄

All PRs must pass automated CI checks before merging. Our CI pipeline includes:

CI Tests

The primary CI workflow, which runs unit tests, linting checks, and the comprehensive test suite.

Docker Deployment

Validates deployment using Docker, ensuring the Dockerfile and related scripts work correctly.

Broken Link Checks

Scans the codebase for broken or dead links in markdown and HTML files.

CodeQL Analysis

GitHub's semantic analysis tool for finding potential security vulnerabilities and maintaining code quality.

PyPI Publishing

Validates that the project can be packaged and published to PyPI without errors.

Platform Testing 🖥️

Tests run on multiple environments:

  • OS: Ubuntu, Windows, macOS
  • Python: 3.8, 3.9, 3.10, 3.11, 3.12
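
Platform-specific behavior can be handled with conditional skips; below is a minimal sketch using pytest's built-in skipif marker (the test itself is hypothetical):

import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only behavior")
def test_posix_file_permissions():
    """Hypothetical test that only runs on Unix-like systems."""
    assert sys.platform != "win32"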

Code Coverage 📊

We use Codecov to measure and visualize code coverage, providing insights into how well tests exercise the codebase.

Coverage Integration

Codecov integration provides:

  • Detailed coverage insights
  • Coverage comparisons between commits
  • Visual overlays on code showing covered lines
  • Coverage percentage for the ultralytics package

View full coverage details at codecov.io/github/ultralytics/ultralytics.

Understanding Coverage

Code coverage shows what percentage of code is executed during tests. High coverage indicates well-tested code but doesn't guarantee absence of bugs. Coverage helps identify untested areas that might be prone to errors.
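
As an illustration, the hypothetical test below executes most lines of clamp (high line coverage) yet never touches the lower branch, where a bug could hide unnoticed:

def clamp(x, low=0.0, high=1.0):
    """Clamp x into the range [low, high] (hypothetical example)."""
    if x < low:
        return low  # never executed by the test below
    if x > high:
        return high
    return x


def test_clamp_upper():
    """Covers the upper branch and pass-through, but not the lower branch."""
    assert clamp(2.0) == 1.0
    assert clamp(0.5) == 0.5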

Running Tests Locally 🖥️

Install Development Dependencies

pip install -e ".[dev]"

Run All Tests

pytest tests/

Run Specific Tests

# Single file
pytest tests/test_engine.py

# Single test function
pytest tests/test_engine.py::test_train

# Tests matching pattern
pytest -k "export"

# Slow tests only
pytest -m slow

Run with Coverage

pytest --cov=ultralytics tests/

Parallel Testing

# Install pytest-xdist
pip install pytest-xdist

# Run tests in parallel
pytest -n auto

Pre-commit Hooks 🪝

Set up pre-commit hooks to catch issues before pushing:

pip install pre-commit
pre-commit install

Hooks automatically run:

  • Ruff (linting and formatting)
  • docformatter (docstring formatting)
  • Trailing whitespace removal
  • YAML validation

Run manually:

pre-commit run --all-files

Writing Tests ✍️

Test Structure

from pathlib import Path

from ultralytics import YOLO


def test_model_export():
    """Test ONNX model export."""
    model = YOLO("yolo11n.pt")
    model.export(format="onnx")
    assert Path("yolo11n.onnx").exists()

Best Practices

  • Descriptive names: test_export_onnx_format() not test_1()
  • Single assertion: Test one thing per function
  • Fast tests: Use small models/datasets
  • Fixtures: Use pytest fixtures for setup/teardown (see the sketch after this list)
  • Markers: @pytest.mark.slow for long-running tests
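
For example, a module-scoped fixture can load a model once and share it across tests (an illustrative sketch; the fixture name and scope are assumptions):

import pytest

from ultralytics import YOLO


@pytest.fixture(scope="module")
def model():
    """Load the model once and share it across tests in this module."""
    return YOLO("yolo11n.pt")


def test_predict_returns_results(model):
    """Consume the fixture instead of reloading the model."""
    results = model("https://ultralytics.com/images/bus.jpg")
    assert len(results) == 1

Scoping the fixture to the module avoids reloading weights for every test.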

Test Organization

tests/
├── test_engine.py      # Training, validation, prediction
├── test_nn.py          # Model architecture
├── test_data.py        # Dataset handling
├── test_utils.py       # Utility functions
└── test_exports.py     # Export formats

Test Markers

import pytest

from ultralytics import YOLO


@pytest.mark.slow
def test_full_training():
    """Test a full training run (slow)."""
    model = YOLO("yolo11n.pt")
    model.train(data="coco128.yaml", epochs=1)
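
If the slow marker is not already declared in the project's pytest configuration, one way to register it is in conftest.py (shown as an assumption about the setup):

# conftest.py
def pytest_configure(config):
    """Register the custom 'slow' marker so pytest does not warn about it."""
    config.addinivalue_line("markers", "slow: marks tests as slow to run")

Run only slow tests with pytest -m slow, or exclude them with pytest -m "not slow".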

Code Quality Checks 🎯

Linting and Formatting with Ruff

# Lint code
ruff check ultralytics/

# Auto-fix lint issues
ruff check --fix ultralytics/

# Format code
ruff format ultralytics/

Learn more about code standards in our development workflow.

Type Checking

# Run mypy (where configured)
mypy ultralytics/
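
mypy checks type annotations, so annotated code benefits most; here is a hypothetical annotated helper of the kind it would verify:

from pathlib import Path
from typing import Union


def export_path(weights: Union[str, Path], fmt: str = "onnx") -> Path:
    """Return the expected output path for an exported model (hypothetical helper)."""
    return Path(weights).with_suffix(f".{fmt}")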

Docstring Formatting

# Check docstrings
docformatter --check --recursive ultralytics/

# Auto-fix
docformatter --in-place --recursive ultralytics/
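
docformatter normalizes docstring layout (summary line placement, blank lines, wrapping); below is a docstring shaped the way it expects, on a hypothetical function:

def scale_boxes(boxes, gain):
    """Rescale bounding boxes by a multiplicative gain (hypothetical example).

    Args:
        boxes (list): Boxes as (x1, y1, x2, y2) tuples.
        gain (float): Scale factor.

    Returns:
        (list): Rescaled boxes.
    """
    return [tuple(v * gain for v in box) for box in boxes]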

CI Troubleshooting 🔧

Tests Pass Locally But Fail in CI

Common causes:

  • Platform-specific issues: Test on target OS
  • Python version differences: Check version compatibility
  • Missing dependencies: Verify CI config
  • Timing/concurrency issues: Add retries or increase timeouts (see the sketch below)
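
For the timing case, a small retry decorator can absorb transient failures (a hand-rolled sketch; plugins such as pytest-rerunfailures provide the same via a --reruns flag):

import functools
import time


def retry(times=3, delay=1.0, exceptions=(AssertionError, OSError)):
    """Re-run a flaky test a few times before letting it fail (illustrative sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == times - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


@retry(times=3, delay=2.0)
def test_flaky_network_call():
    """Hypothetical network-dependent test wrapped with retries."""
    ...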

Slow CI Runs

Solutions:

  • Use @pytest.mark.slow for expensive tests
  • Mock external dependencies (see the sketch after this list)
  • Reduce test dataset sizes
  • Parallelize with pytest-xdist
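
Mocking keeps external services out of the test entirely; a short example with unittest.mock, using a hypothetical helper and assuming the requests package is available:

from unittest.mock import patch

import requests  # assumed available; any HTTP client works the same way


def fetch_release_tag(url):
    """Hypothetical helper that would normally hit the network."""
    return requests.get(url, timeout=10).json()["tag_name"]


def test_fetch_release_tag_offline():
    """Patch requests.get so the test runs without network access."""
    with patch("requests.get") as mock_get:
        mock_get.return_value.json.return_value = {"tag_name": "v8.3.0"}
        assert fetch_release_tag("https://example.com/releases/latest") == "v8.3.0"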

Flaky Tests

Fixes:

  • Add retries for network-dependent tests
  • Increase timeouts for slow operations
  • Fix race conditions in async code
  • Use deterministic random seeds (see the sketch after this list)
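
A minimal seeding helper covering the usual RNG sources (a sketch; call it from a fixture or at the start of a test):

import random

import numpy as np
import torch


def set_seed(seed=0):
    """Seed Python, NumPy, and PyTorch RNGs so test outcomes are reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)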

Performance Benchmarks 📈

CI tracks key metrics:

  • Inference speed (FPS; see the sketch after this list)
  • Memory usage
  • Model size
  • Export times
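
For a rough local estimate of the first metric, something like the sketch below works (illustrative only, not the harness CI runs):

import time

import numpy as np
from ultralytics import YOLO


def rough_fps(weights="yolo11n.pt", imgsz=640, n=50):
    """Estimate single-image inference FPS by averaging over n runs."""
    model = YOLO(weights)
    img = np.zeros((imgsz, imgsz, 3), dtype=np.uint8)
    model.predict(img, verbose=False)  # warmup run, excluded from timing
    start = time.perf_counter()
    for _ in range(n):
        model.predict(img, verbose=False)
    return n / (time.perf_counter() - start)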

Significant regressions block merging. If metrics change:

  1. Verify change is expected
  2. Document reason in PR
  3. Get approval from maintainers

CI Status 📋

Check CI status for all Ultralytics repositories at docs.ultralytics.com/help/CI.

Main Repository Badges

CI · Docker · Links · PyPI · codecov

Skipping CI Checks ⚠️

Add [skip ci] to commit message to skip CI (use sparingly):

git commit -m "Update README [skip ci]"

Only for:

  • Documentation-only changes
  • Non-code file updates
  • Emergency hotfixes (with approval)
