# Benchmark Runner
## Canonical Command
Use this as the single benchmark entrypoint:
```sh
python3 tests/runtime/run_benchmark.py \
  --godot-binary ./bin/godot.linuxbsd.editor.dev.x86_64 \
  --project-path ./tests/examples/godot/test_project
```
`--profile` defaults to `everything`.
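For scripted or CI use, the same invocation can be assembled programmatically. A minimal sketch; the helper name and its use of defaults are illustrative, only the command itself comes from this page:

```python
import shlex

def build_benchmark_command(
    godot_binary: str = "./bin/godot.linuxbsd.editor.dev.x86_64",
    project_path: str = "./tests/examples/godot/test_project",
    profile: str = "everything",
) -> list[str]:
    """Assemble the canonical run_benchmark.py argument vector."""
    return [
        "python3", "tests/runtime/run_benchmark.py",
        "--godot-binary", godot_binary,
        "--project-path", project_path,
        "--profile", profile,
    ]

cmd = build_benchmark_command()
print(shlex.join(cmd))
```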
## Current Public Snapshot
The current committed public result is a single low-noise raster baseline row:
| Lane | Score | Avg FPS | P99 Frame (ms) | GPU Time (ms) |
|---|---|---|---|---|
| `static_baseline` | 90.7 | 74.0 | 15.62 | 0.0 |
That snapshot is what backs the public performance dashboard until more committed lane results are added.
## Standard Flags
- `--profile` (`everything` | `quick` | `performance` | `synthetic-only` | `ab-only`)
- `--godot-binary`
- `--project-path`
- `--output-dir`
- `--reference-dir`
- `--capture`
- `--fail-fast`
Compatibility extras are still accepted (`--capture-lane`, `--no-captures`, `--no-dashboard`), but the command above is the supported path.
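The flag surface above can be modelled with a small `argparse` sketch. Flag names and profile choices come from this page; the defaults, `required` settings, and flag types are assumptions:

```python
import argparse

def make_parser() -> argparse.ArgumentParser:
    """Sketch of the documented flag surface (defaults are assumptions)."""
    p = argparse.ArgumentParser(prog="run_benchmark.py")
    p.add_argument("--profile",
                   choices=["everything", "quick", "performance",
                            "synthetic-only", "ab-only"],
                   default="everything")
    p.add_argument("--godot-binary", required=True)
    p.add_argument("--project-path", required=True)
    p.add_argument("--output-dir")
    p.add_argument("--reference-dir")
    p.add_argument("--capture", action="store_true")
    p.add_argument("--fail-fast", action="store_true")
    # Compatibility extras are still accepted, but the canonical command is preferred.
    p.add_argument("--capture-lane")
    p.add_argument("--no-captures", action="store_true")
    p.add_argument("--no-dashboard", action="store_true")
    return p

args = make_parser().parse_args(
    ["--godot-binary", "./bin/godot", "--project-path", "./proj"]
)
print(args.profile)
```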
## Single-Process Execution
`run_benchmark.py` launches one Godot process and delegates all lanes to:
`res://scenes/benchmark_orchestrator.tscn`
The orchestrator loads each lane scene sequentially in-process and writes lane JSON reports with the existing suite-compatible structure.
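The sequential in-process pattern can be sketched as follows. Here `run_lane` is a hypothetical stand-in for executing one lane scene, and the report fields are illustrative rather than the real suite schema:

```python
import json
from pathlib import Path

def run_lanes_sequentially(lane_ids, output_dir, run_lane):
    """Run each lane in order within one process and write one JSON
    report per lane (suite-compatible structure assumed, not shown)."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    reports = []
    for lane_id in lane_ids:
        metrics = run_lane(lane_id)  # hypothetical: one lane scene, same process
        report = {"lane": lane_id, **metrics}
        (out / f"{lane_id}.json").write_text(json.dumps(report, indent=2))
        reports.append(report)
    return reports

# Toy stand-in; real lanes run inside the Godot orchestrator scene.
fake = run_lanes_sequentially(
    ["static_baseline"], "/tmp/bench_demo", lambda lane: {"avg_fps": 74.0}
)
print(fake[0]["lane"])
```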
## Profiles
- `everything`: suite lanes + unified + small baseline + synthetic scenes
- `quick`: shortened smoke profile
- `performance`: suite-focused performance profile
- `synthetic-only`: synthetic scenes only
- `ab-only`: instance pipeline serial vs single-pass lanes
## Asset Policy
Asset generation and mapping are canonicalized through:
- `tests/runtime/prepare_synthetic_assets.py`
- `tests/fixtures/benchmark_asset_manifest.json`
The benchmark runner calls synthetic asset preparation automatically before execution.
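A hedged sketch of what manifest-driven preparation can look like. The manifest schema (`{"assets": [{"path": ...}]}`) and the placeholder-generation step here are assumptions for illustration, not the behaviour of `prepare_synthetic_assets.py`:

```python
import json
from pathlib import Path

def prepare_assets(manifest_path, asset_root):
    """Read the manifest and ensure each listed asset exists under
    asset_root, generating an empty placeholder when missing."""
    manifest = json.loads(Path(manifest_path).read_text())
    prepared = []
    for entry in manifest.get("assets", []):
        target = Path(asset_root) / entry["path"]
        if not target.exists():
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(b"")  # placeholder generation step
        prepared.append(str(target))
    return prepared

# Demo with a tiny synthetic manifest in a throwaway directory.
root = Path("/tmp/bench_assets_demo")
root.mkdir(parents=True, exist_ok=True)
(root / "manifest.json").write_text(json.dumps({"assets": [{"path": "cube.mesh"}]}))
made = prepare_assets(root / "manifest.json", root)
print(made)
```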
## Suite Coverage
These are the user-relevant lanes already encoded in the suite and available for publication once committed results exist:
| Lane | Purpose | Current publication status |
|---|---|---|
| `static_baseline` | Low-noise raster baseline | Published |
| `streaming_corridor` | Camera sweep stressing chunk turnover | Suite-only |
| `city_flyover` | High-altitude visibility-change stress | Suite-only |
| `instance_storm` | Many-instance submission pressure | Suite-only |
| `lighting_stress` | Animated light and shading stress | Suite-only |
| `unified_composite` | Integrated all-systems composite lane | Suite-only |
## Outputs
Default output directory:
`tests/output/benchmark_suite/<timestamp>/`
Generated artifacts:
- `benchmark_suite_report.json`
- `benchmark_suite_summary.md`
- `benchmark_orchestrator.log`
- `benchmark_orchestrator_report.json`
- `<lane_id>.json` per lane
- optional dashboard artifacts (`benchmark_suite_dashboard.html`, `benchmark_suite_*.svg`)
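Since each run lands in its own timestamped directory, tooling often needs the newest run. A small sketch, assuming the timestamp names sort lexicographically:

```python
from pathlib import Path

def latest_run_dir(base="tests/output/benchmark_suite"):
    """Return the newest timestamped run directory under the output root,
    or None when no runs exist."""
    base_path = Path(base)
    if not base_path.is_dir():
        return None
    runs = sorted(p for p in base_path.iterdir() if p.is_dir())
    return runs[-1] if runs else None

# Demo against a throwaway directory structure.
demo = Path("/tmp/benchmark_suite_demo")
for name in ("20240101-090000", "20240102-090000"):
    (demo / name).mkdir(parents=True, exist_ok=True)
print(latest_run_dir(demo))
```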
The public docs surface should prefer the snapshot table above for the current committed result and keep the charts focused on the exported lane data.
## Interactive Performance Charts
**Data source:** Charts below render from `assets/data/benchmark_latest.json`, generated by `scripts/export_benchmark_vegalite.py` during the docs build. If no benchmark data is available, the charts show an empty state.
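An exported chart of this kind can be sketched as a minimal Vega-Lite spec. The row shape (`lane`, `score`) mirrors the snapshot table and is an assumption about `benchmark_latest.json`, not its documented schema:

```python
import json

def lane_scores_spec(rows):
    """Build a minimal Vega-Lite bar spec for lane scores."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "data": {"values": rows},
        "mark": "bar",
        "encoding": {
            "x": {"field": "lane", "type": "nominal"},
            "y": {"field": "score", "type": "quantitative"},
        },
    }

spec = lane_scores_spec([{"lane": "static_baseline", "score": 90.7}])
print(json.dumps(spec, indent=2)[:60])
```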
## Lane Scores
## How to Update
1. Run a benchmark: `python tests/runtime/run_benchmark.py --profile everything`
2. Export data: `python scripts/export_benchmark_vegalite.py`
3. Refresh the snapshot and coverage tables in `docs/performance/index.md` when new committed results are available.
4. Build docs: `python scripts/build_docs_site.py --strict`
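The scripted steps above can be chained with a small runner. This is a sketch: the table refresh in `docs/performance/index.md` remains a manual edit, and `dry_run` here only plans the commands without launching anything:

```python
import shlex
import subprocess

# The scripted update steps, in order (step 3 above stays manual).
UPDATE_STEPS = [
    "python tests/runtime/run_benchmark.py --profile everything",
    "python scripts/export_benchmark_vegalite.py",
    "python scripts/build_docs_site.py --strict",
]

def run_update(steps=UPDATE_STEPS, dry_run=True):
    """Execute each step in order, stopping at the first failure;
    with dry_run=True, only return the planned argv lists."""
    planned = []
    for step in steps:
        argv = shlex.split(step)
        planned.append(argv)
        if not dry_run:
            subprocess.run(argv, check=True)  # raises on non-zero exit
    return planned

plan = run_update()
```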