Ops AI: Python APM Configuration

This guide shows how to enable Ops AI for Python services with the Middleware Python SDK. You’ll install the SDK, start your app with the middleware-run launcher, and (recommended) provide VCS metadata so Ops AI can point to exact files/lines and propose a PR when it detects a code issue.

Use Middleware SDK v2.4.0+ for the best Ops AI experience. The current PyPI release (2.4.x) supports Python ≥ 3.8.

What you get with the Python SDK

  • Exception → code context: when a runtime exception occurs, the SDK automatically captures the source of the function where it happened, making root-cause review much faster than a stack trace alone. No extra configuration is needed (see the sketch after this list).
  • Framework coverage: works with OpenTelemetry-supported Python frameworks (e.g., Flask/FastAPI/Django via contrib instrumentations).
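
To make the exception → code context feature concrete, here is a minimal, hypothetical Flask app (the file name, route, and variable names are made up for illustration). It contains no Middleware-specific code; if the handler raised at runtime, the SDK would capture that function's source alongside the exception:

# app.py -- hypothetical example; no Middleware imports are needed in app code
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/split/<int:total>/<int:parts>")
def split(total, parts):
    # parts == 0 raises ZeroDivisionError; the SDK would capture this
    # function's source alongside the exception for Ops AI to review.
    share = total / parts
    return jsonify({"share": share})

if __name__ == "__main__":
    app.run()

Launched through middleware-run (for example middleware-run flask run or middleware-run python app.py, as shown in the quick start below), no edits to this file are required.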

Quick start (3 steps)

1 Install the Middleware package

pip install middleware-io

This installs the SDK and a small CLI called middleware-run.

2 Launch your app with middleware-run

Run your usual start command, prefixed with the launcher. Example with Flask:

# Instead of:
flask run

# Use:
middleware-run flask run

You can use the same pattern for other entrypoints (e.g., python app.py, gunicorn ...). The launcher initializes tracing/logging hooks before your app starts.

3 (Recommended) Provide VCS metadata for Ops AI

If your app has a .git directory, the SDK auto-detects the repository URL and commit SHA. In containerized builds or when .git isn’t present, set these before you start the app:

export MW_VCS_COMMIT_SHA="$(git rev-parse HEAD)"
export MW_VCS_REPOSITORY_URL="$(git config --get remote.origin.url)"
middleware-run flask run

These hints let Ops AI resolve the correct file/line and open a PR against the right commit/branch when it generates a suggested fix.
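
If your build strips the .git directory from the final image, one option is to capture these values at build time and bake them into the runtime environment. Below is a small stand-alone sketch (the script name and the mw_vcs.env output path are illustrative, not part of the SDK) that shells out to git and writes an env file you can source before middleware-run:

# capture_vcs.py -- illustrative build-time helper, not part of the SDK
import subprocess
from pathlib import Path

def git_output(*args: str) -> str:
    # Run a git command and return its trimmed stdout.
    return subprocess.check_output(["git", *args], text=True).strip()

commit_sha = git_output("rev-parse", "HEAD")
repo_url = git_output("config", "--get", "remote.origin.url")

# Write an env file the container entrypoint can `source` before middleware-run.
Path("mw_vcs.env").write_text(
    f'export MW_VCS_COMMIT_SHA="{commit_sha}"\n'
    f'export MW_VCS_REPOSITORY_URL="{repo_url}"\n'
)
print("Wrote mw_vcs.env:", commit_sha, repo_url)

You would run this in a build stage that still has the repository checkout, copy mw_vcs.env into the final image, and source it in your entrypoint before launching the app.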

Two Instrumentation Styles (pick one)

A. Zero-code (env-only) instrumentation

Best for fast onboarding; no code edits are required. Set the environment variables and start with middleware-run:

export MW_API_KEY="<YOUR_API_KEY>"
export MW_TARGET="https://<YOUR_WORKSPACE>.middleware.io:443"
export MW_SERVICE_NAME="MyFlaskServer"

middleware-run python app.py

This attaches the Python agent and exports telemetry to your workspace. (The general Python APM docs show the same pattern.)

One-time framework bootstrap (optional but useful)

Install matching OpenTelemetry instrumentations for the frameworks you already use:

middleware-bootstrap -a install

This scans installed packages (e.g., flask) and installs the corresponding OTel “contrib” adapters (e.g., opentelemetry-instrumentation-flask) so request/DB spans are captured automatically. In containers, add to your Dockerfile:

RUN middleware-bootstrap -a install

B. Tracker-function integration (more control)

If you prefer explicit control, you can wire the SDK via a tracker function and then start with a flag:

from middleware import mw_tracker, MWOptions

mw_tracker(MWOptions(
    access_token="<MW_API_KEY>",
    target="https://<YOUR_WORKSPACE>.middleware.io:443",
    # ...additional options...
))

Run with:

MW_TRACKER=True middleware-run python app.py

(The MW_TRACKER=True flag is required when using mw_tracker().) See the configuration reference below for option/env-var equivalents.

Configuration Reference (high-value options)

You can configure the SDK via code options or environment variables; a sketch showing the options form follows this list. A few commonly tuned settings:

  • Auth / Target
    • MW_API_KEY: Your workspace API key
    • MW_TARGET or OTEL_EXPORTER_OTLP_ENDPOINT: Export endpoint, e.g., https://<workspace>.middleware.io:443
  • Identity / Routing
    • MW_SERVICE_NAME (or OTEL_SERVICE_NAME): The name you’ll use to filter in APM UI
    • MW_CUSTOM_RESOURCE_ATTRIBUTES: Comma-separated labels (e.g., env=prod,team=payments)
  • Collectors
    • MW_APM_COLLECT_TRACES / MW_APM_COLLECT_METRICS / MW_APM_COLLECT_LOGS / MW_APM_COLLECT_PROFILING: Enable/disable specific signal types (profiling defaults to off).
  • Sampling
    • MW_SAMPLE_RATE: 1.0 = AlwaysOn; 0.0 = AlwaysOff; any float between 0 and 1 for ratio sampling.
  • Debugging
    • MW_CONSOLE_EXPORTER (bool) and MW_DEBUG_LOG_FILE (bool) for local troubleshooting; MW_LOG_LEVEL to change verbosity.
  • Detectors (cloud env tags)
    • MW_DETECTORS: Enable detectors for AWS Lambda, GCP, etc.
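
If you use the tracker-function style, most of these settings can also be passed as MWOptions arguments. The option names in this sketch are assumed to mirror the environment variables above and may differ in your SDK version, so verify them against the full configuration table before relying on them:

from middleware import mw_tracker, MWOptions

# Option names below are assumed to mirror the env vars; confirm them against
# the SDK's configuration table for your installed version.
mw_tracker(MWOptions(
    access_token="<MW_API_KEY>",                           # MW_API_KEY
    target="https://<YOUR_WORKSPACE>.middleware.io:443",   # MW_TARGET
    service_name="MyFlaskServer",                          # MW_SERVICE_NAME
    custom_resource_attributes="env=prod,team=payments",   # MW_CUSTOM_RESOURCE_ATTRIBUTES
))

As noted above, launching with MW_TRACKER=True middleware-run is still required when mw_tracker() is used.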

Validating Your Setup

  • SDK present & version: pip show middleware-io (ensure ≥ 2.4.0).
  • Instrumentations installed: run middleware-bootstrap -a install, then pip list | grep -i opentelemetry-instrumentation to confirm that the adapters for your frameworks were added.
  • Launched through the SDK: confirm you used middleware-run … so hooks are active.
  • VCS metadata available: either .git exists (auto) or the two envs are set (MW_VCS_REPOSITORY_URL, MW_VCS_COMMIT_SHA).
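
If you would rather script this checklist, here is a small stand-alone sketch using only the standard library (the file name is illustrative):

# preflight_check.py -- illustrative sanity checks before relying on Ops AI
import os
import shutil
from importlib import metadata

# SDK present & version
try:
    print("middleware-io version:", metadata.version("middleware-io"))
except metadata.PackageNotFoundError:
    print("middleware-io is NOT installed")

# Launcher available on PATH
print("middleware-run on PATH:", bool(shutil.which("middleware-run")))

# Framework adapters installed by middleware-bootstrap
adapters = [d.metadata["Name"] for d in metadata.distributions()
            if d.metadata["Name"] and d.metadata["Name"].startswith("opentelemetry-instrumentation")]
print("OTel instrumentation packages:", adapters or "none found")

# VCS metadata discoverable (either .git present or the two env vars set)
has_git = os.path.isdir(".git")
has_envs = bool(os.environ.get("MW_VCS_REPOSITORY_URL")) and bool(os.environ.get("MW_VCS_COMMIT_SHA"))
print("VCS metadata available:", has_git or has_envs)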

How this powers Ops AI

With traces/logs flowing and VCS metadata in place, Ops AI can correlate an incident to the precise file/line, show the captured function code, and, when applicable, propose a fix in a PR for your review.

Troubleshooting Quick Wins

  • middleware-run not found: ensure the same environment that ran pip install is active (venv/path).
  • No spans from your web framework: run middleware-bootstrap -a install so the OTel adapter for your framework is present.
  • Ops AI didn’t suggest a PR: check that repo URL/commit SHA are discoverable (either .git present or the two env vars set).

Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.