At 11:40 PM, the script looked innocent: 70 lines, one API call, one CSV output, and a promise to “just run it in CI.” It worked on my laptop, worked in my teammate’s shell, then failed in GitHub Actions with ModuleNotFoundError because someone had installed a transitive dependency globally two weeks ago and forgot about it.
That incident is why I now treat one-file automation as production software. Not heavyweight software, but still software with runtime contracts, dependency policy, and a reproducible execution path. In 2026, the cleanest path I’ve found is: PEP 723 inline script metadata plus uv script lockfiles.
If you already run Python in operations pipelines, release tooling, or data sanity checks, this is the practical, low-friction setup that keeps one-file scripts from becoming mystery artifacts.
Where one-file Python is still the right tool
Not everything needs a full project scaffold. Single-file scripts are still ideal for:
- incident-response helpers you need fast,
- repeatable ops tasks that don’t justify a long-lived service,
- small migration utilities that should be easy to review and archive.
The problem is never the script size. The problem is hidden environment state. If runtime assumptions live only in your shell history, your script is already brittle.
Step 1, put the runtime contract in the script itself
PEP 723 defines a standard inline metadata block for script dependencies and Python version constraints. That gives your script a portable contract tools can read consistently.
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "httpx>=0.27,<0.28",
#     "tenacity>=9.0,<10",
#     "rich>=13.7,<14",
# ]
# [tool.uv]
# exclude-newer = "2026-04-20T00:00:00Z"
# ///
from __future__ import annotations

import csv
from datetime import UTC, datetime

import httpx
from tenacity import retry, stop_after_attempt, wait_exponential


@retry(stop=stop_after_attempt(4), wait=wait_exponential(multiplier=0.5, max=8))
def fetch_health(endpoint: str) -> dict:
    with httpx.Client(timeout=10.0) as client:
        r = client.get(endpoint)
        r.raise_for_status()
        return r.json()


if __name__ == "__main__":
    payload = fetch_health("https://status.example.com/api/health")
    with open("health_snapshot.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["checked_at", "status", "region"])
        writer.writeheader()
        writer.writerow({
            "checked_at": datetime.now(UTC).isoformat(),
            "status": payload.get("status", "unknown"),
            "region": payload.get("region", "global"),
        })
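Because the shebang routes through uv, the script can also be executed directly once marked executable. This assumes an env that supports -S, which GNU coreutils 8.30+ and recent macOS both provide:

chmod +x scripts/health_snapshot.py
./scripts/health_snapshot.py  # uv reads the inline metadata and provisions the environment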
Why this matters:
- The dependencies list is explicit and reviewable in code review.
- requires-python stops silent failures on older interpreters.
- The optional exclude-newer timestamp (supported by uv) reduces surprise from newly published upstream packages.
Step 2, lock it like a project, even if it is one file
From the uv scripts guide, one-file scripts can be locked explicitly. That is the difference between “latest compatible” and “known good.”
# First run locally
uv run --script scripts/health_snapshot.py
# Freeze exact resolution
uv lock --script scripts/health_snapshot.py
# Commit both files
git add scripts/health_snapshot.py scripts/health_snapshot.py.lock
git commit -m "Lock health snapshot script dependencies"
In CI, run exactly what you reviewed:
name: health-snapshot
on:
  workflow_dispatch:
  schedule:
    - cron: "15 * * * *"
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v4
      - name: Execute locked script
        run: uv run --script scripts/health_snapshot.py
This setup pairs nicely with prior supply-chain controls like attestations and provenance checks (see this runbook), but keeps the script workflow lightweight enough for everyday engineering.
Step 3, decide your update policy before the next incident
Most teams fail here. They lock once, then drift forever or blindly update everything. A better rhythm, with an automation sketch after the list:
- Cadence: refresh lockfiles weekly for non-critical scripts, faster for security-sensitive ones.
- Scope: update one script family at a time, not every automation in one PR.
- Gate: run script output assertions in CI before accepting lockfile refreshes.
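A sketch of the automated refresh, assuming the third-party peter-evans/create-pull-request action and the single script path from earlier; adapt names, paths, and cadence to your repo:

name: refresh-script-locks
on:
  schedule:
    - cron: "0 6 * * 1"  # weekly; tighten for security-sensitive scripts
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - name: Re-resolve script dependencies
        run: uv lock --script scripts/health_snapshot.py --upgrade
      - name: Open a review PR for the lock diff
        uses: peter-evans/create-pull-request@v6
        with:
          commit-message: "Refresh health snapshot script lockfile"
          title: "Weekly script lockfile refresh"
          branch: chore/refresh-script-locks

The resulting PR is where the gate runs: your script output assertions execute against the refreshed lockfile before anyone merges it.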
If your script triggers backend writes, combine this with idempotent ingestion patterns so reruns are safe under retries. We covered the downstream side of that in our partial commit reliability blueprint.
Tradeoffs you should accept consciously
1) Fast start vs strict reproducibility
Running with floating constraints is faster to start, but less reproducible. Lockfiles add a small maintenance cost and remove a lot of firefighting.
2) One-file convenience vs lifecycle growth
When your script accumulates config files, tests, and deployment rules, move to a full project layout. PEP 723 helps you start clean, not stay tiny forever.
3) Tooling simplicity vs ecosystem flexibility
Standard venv workflows remain foundational and portable (see the official Python venv docs). uv script workflows optimize speed and ergonomics, but your team should align on one default path to reduce operational confusion.
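For contrast, the same script on the standard venv path looks roughly like this; a sketch, with the pins copied from the inline metadata by hand, which is exactly the duplication PEP 723 removes:

python3 -m venv .venv
. .venv/bin/activate
python -m pip install "httpx>=0.27,<0.28" "tenacity>=9.0,<10" "rich>=13.7,<14"
python scripts/health_snapshot.py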
A practical migration path for existing scripts
If you already have 20 to 50 scripts in a repo, do not migrate everything in one sprint. Start with the scripts that can wake someone up at night: billing exports, deployment gates, user-impact checks.
- Inventory: list scripts that run in CI, cron, or release workflows.
- Contract: add requires-python and dependency metadata to each candidate script.
- Lock: generate script lockfiles and commit them in one focused PR.
- Observe: add minimal runtime logging (duration, retries, exit reason) for first-week confidence; see the sketch after this list.
- Automate updates: create a scheduled lock refresh PR and review diffs like production code.
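For the Observe step, a minimal sketch of first-week logging that slots into the health snapshot script above (fetch_health is the decorated function from Step 1); the field names and log format are assumptions, not a prescribed schema:

import logging
import time

from tenacity import before_sleep_log

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("health_snapshot")

# Passing before_sleep=before_sleep_log(log, logging.WARNING) to @retry makes
# tenacity log each retry attempt along with the exception that triggered it.

start = time.monotonic()
exit_reason = "ok"
try:
    payload = fetch_health("https://status.example.com/api/health")
except Exception as exc:
    exit_reason = f"error:{type(exc).__name__}"
    raise
finally:
    log.info("duration=%.2fs exit_reason=%s", time.monotonic() - start, exit_reason)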
This staged approach gives you predictable progress without creating a “big bang” migration risk.
Troubleshooting, when it still fails outside your laptop
Symptom: CI uses a different Python minor version
Check: metadata has requires-python and runner image provides it.
Fix: pin interpreter in CI and keep requires-python aligned.
Symptom: Dependency resolves differently on two days
Check: missing .lock file or not committed.
Fix: run uv lock --script ..., commit lockfile, and fail CI if lockfile changes unexpectedly.
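One way to enforce that in CI, a sketch assuming the script path from earlier: re-lock, then fail the job if the committed lockfile no longer matches the resolution.

uv lock --script scripts/health_snapshot.py
# git diff --exit-code returns non-zero when the lockfile changed, failing the job
git diff --exit-code scripts/health_snapshot.py.lock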
Symptom: Script unexpectedly picks project dependencies
Check: whether you are running in a project folder and how invocation is done.
Fix: prefer explicit --script usage with inline metadata, which uv treats as script-scoped dependencies.
Symptom: “Works locally” but times out in CI
Check: retry strategy, timeout defaults, and remote endpoint limits.
Fix: add bounded retries and explicit timeouts, then log failure context for replay. For broader reliability patterns in Python automation, this older durable context post is still useful.
FAQ
Do I still need virtual environments if I use uv scripts?
Yes, conceptually. uv manages environments for you, but environment isolation still exists and still matters. The difference is less manual setup.
Should every script use lockfiles?
If the script runs in CI, production operations, or scheduled jobs, yes. For throwaway local experiments, maybe not. The risk threshold should decide.
Can I keep using pyproject.toml projects and still adopt this?
Absolutely. Use full projects for long-lived applications, and PEP 723 scripts for focused one-file automation. They complement each other rather than compete.
Actionable takeaways
- Add PEP 723 inline script metadata to every shared Python script this week.
- Generate and commit a uv script lockfile for each CI-executed script.
- Set a lock refresh cadence and review lock diffs like production changes.
- Use internal runbooks for downstream idempotency and rollback paths, not just script retries.
