Custom Metrics

Send your own business or infrastructure metrics to Middleware and visualize them alongside built-in integrations. You can:

  • Post OTLP/HTTP (JSON) from the command line (cURL), or
  • Emit metrics from your application using the OpenTelemetry Python SDK (OTLP gRPC).

Use resource attributes to decide where data is stored: either into an existing dataset (e.g., Host, Kubernetes) or into the Custom Metrics dataset.

Prerequisites#

  • Your Middleware workspace URL (e.g., https://<YOUR_WORKSPACE>.middleware.io).
  • A Middleware API key with permission to ingest metrics.
  • Outbound network access from the sender to your workspace URL.

Methods#

Method 1: Send OTLP/HTTP JSON with cURL#

What this does#

This method sends OTLP/HTTP JSON to POST /v1/metrics. The payload contains:

  • A resource (where the series belongs)
  • One or more metrics (name, description, unit, type)
  • Data points (value + attributes/dimensions + timestamp)

Timestamps use time_unix_nano: nanoseconds since Unix epoch.
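In Python, for example, a value suitable for time_unix_nano can be produced with the standard library's time.time_ns():

```python
import time

# Current wall-clock time as nanoseconds since the Unix epoch,
# suitable for the time_unix_nano field of an OTLP data point.
time_unix_nano = time.time_ns()
print(time_unix_nano)
```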

Step-by-step#

  1. Set your workspace URL and API key (use env vars or a secret manager in production).
  2. Prepare the JSON payload describing your metric(s).
  3. POST to your workspace’s /v1/metrics endpoint.

Example:

API_KEY="<YOUR_API_KEY>"
MW_ENDPOINT="https://<YOUR_WORKSPACE>.middleware.io:443"

curl -X POST "$MW_ENDPOINT/v1/metrics" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: $API_KEY" \
  -d @- << 'EOF'
{
  "resource_metrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "mw.resource_type",
            "value": { "string_value": "custom" }
          }
        ]
      },
      "scope_metrics": [
        {
          "metrics": [
            {
              "name": "swap-usage",
              "description": "SWAP usage",
              "unit": "Bytes",
              "gauge": {
                "data_points": [
                  {
                    "attributes": [
                      {
                        "key": "device",
                        "value": { "string_value": "nvme0n1p4" }
                      }
                    ],
                    "time_unix_nano": 1758473263000000000,
                    "asInt": 4000500678
                  }
                ]
              }
            }
          ]
        }
      ]
    }
  ]
}
EOF

Why these fields matter#

  • mw.resource_type: custom : Stores data in the Custom Metrics dataset (see mapping options below).
  • name / description / unit: Improves discoverability and correct charting (e.g., Bytes, ms, 1).
  • gauge with asInt/asDouble: Represents a point-in-time measurement (use sum for counters, histogram for distributions).
  • attributes (e.g., device): Dimensions you can group/filter by in dashboards and alerts.
  • time_unix_nano: The exact time of the measurement (nanoseconds).
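The same request can be assembled programmatically. The sketch below builds an equivalent payload with only the Python standard library; the endpoint and API key placeholders are carried over from the cURL example, and the final send is left commented out so the snippet runs without real credentials:

```python
import json
import time
import urllib.request

MW_ENDPOINT = "https://<YOUR_WORKSPACE>.middleware.io:443"  # placeholder
API_KEY = "<YOUR_API_KEY>"                                  # placeholder

# The same OTLP/HTTP JSON body as the cURL example, built as a dict.
payload = {
    "resource_metrics": [{
        "resource": {
            "attributes": [
                {"key": "mw.resource_type", "value": {"string_value": "custom"}}
            ]
        },
        "scope_metrics": [{
            "metrics": [{
                "name": "swap-usage",
                "description": "SWAP usage",
                "unit": "Bytes",
                "gauge": {
                    "data_points": [{
                        "attributes": [
                            {"key": "device",
                             "value": {"string_value": "nvme0n1p4"}}
                        ],
                        "time_unix_nano": time.time_ns(),
                        "asInt": 4000500678,
                    }]
                },
            }]
        }],
    }]
}

req = urllib.request.Request(
    f"{MW_ENDPOINT}/v1/metrics",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": API_KEY},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once endpoint and key are real
```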

Method 2: Send metrics with the OpenTelemetry Python SDK (OTLP gRPC)#

What this does#

Your app uses OpenTelemetry to create instruments (counters, histograms, etc.). A periodic reader exports metrics to Middleware over OTLP gRPC, including any attached attributes (dimensions).

Install Required Packages#

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp

Use the following template to send custom metrics:

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
import time

# Configure OTLP Exporter to export metrics to Middleware
exporter = OTLPMetricExporter(
    endpoint="https://<YOUR_WORKSPACE>.middleware.io",
    headers={"authorization": "<YOUR_API_KEY>"},
)

metric_reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))

# Get a meter
meter = metrics.get_meter(__name__)

# Define metrics
counter = meter.create_counter(
    name="custom_counter",
    description="Counts something custom",
    unit="1",
)

histogram = meter.create_histogram(
    name="custom_histogram",
    description="Records histogram data",
    unit="ms",
)

# Record metrics
while True:
    counter.add(1, attributes={"environment": "production", "region": "us-east-1"})
    histogram.record(100, attributes={"operation": "database_query"})
    time.sleep(5)

Here:

  • Endpoint: for OTLP gRPC use the workspace base URL (no /v1/metrics path).
  • Headers: include your API key as authorization.
  • Attributes: add stable dimensions you’ll filter/group by later (environment, region, service, etc.).
  • Export cadence: the PeriodicExportingMetricReader batches and sends on an interval; keep the process running.

Ingest Into Existing Resources#

If you want your custom data to live under an existing Middleware dataset, include the required resource attribute from the table below.

Example: to attach a metric to a host, add host.id in the request body.

| Type | Resource Attributes Required | Data Will Be Stored To This Data Set |
| --- | --- | --- |
| host | host.id | Host Metrics |
| k8s.node | k8s.node.uid | K8s Node Metrics |
| k8s.pod | k8s.pod.uid | K8s Pod Metrics |
| k8s.deployment | k8s.deployment.uid | K8s Deployment Metrics |
| k8s.daemonset | k8s.daemonset.uid | ~ |
| k8s.replicaset | k8s.replicaset.uid | ~ |
| k8s.statefulset | k8s.statefulset.uid | ~ |
| k8s.namespace | k8s.namespace.uid | ~ |
| service | service.name | ~ |
| os | os.type | ~ |

Ingest custom data#

If your data doesn’t fit the existing types, send it to the Custom Metrics dataset:

mw.resource_type: custom

Any series with this resource attribute will appear under Custom Metrics.
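In other words, the only part of the payload that changes between targeting Custom Metrics and an existing dataset is the resource attribute. A minimal sketch (the resource_with helper and the host.id value are illustrative, not part of the Middleware API):

```python
def resource_with(key: str, value: str) -> dict:
    """Build an OTLP resource carrying a single string attribute."""
    return {"attributes": [{"key": key, "value": {"string_value": value}}]}

# Routes the series to the Custom Metrics dataset:
custom_resource = resource_with("mw.resource_type", "custom")

# Routes the series to Host Metrics (host id is a placeholder):
host_resource = resource_with("host.id", "i-0abc123")
```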

Explore Data & Build Graphs#

  1. Open Dashboards → add a new widget.
  2. Select the dataset: either Custom Metrics or the specific dataset you targeted (e.g., Host Metrics).
  3. Choose your metric (e.g., swap-usage, custom_counter, custom_histogram).
  4. Use attributes (device, environment, region, etc.) to filter or group your series.
  5. Save the widget and compose your dashboard.

Set up Alerts#

  1. Create an alert and select the dataset/metric you’re sending.
  2. Define the condition (threshold/anomaly), evaluation window, and recipients.
  3. Use attribute filters to scope alerts (e.g., only environment=production).

Troubleshooting & Best Practices#

  • Auth errors / no data: verify the Authorization header and the workspace URL.
  • Wrong dataset: double-check the resource attribute (e.g., mw.resource_type=custom vs host.id).
  • Timestamps off: make sure time_unix_nano is in nanoseconds and your sender’s clock is correct.
  • Dimension drift: keep attribute keys consistent (avoid mixing env and environment).
  • Secrets: don’t hard-code API keys; prefer environment variables or a secret manager.
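For the secrets point, one common pattern is to read the key and endpoint from environment variables at startup. The variable names below (MW_API_KEY, MW_ENDPOINT) are examples, not names Middleware requires; the setdefault calls only keep the snippet runnable without a configured environment:

```python
import os

# Placeholders so the example runs; in production the variables
# would be set by your deployment environment or secret manager.
os.environ.setdefault("MW_API_KEY", "<YOUR_API_KEY>")
os.environ.setdefault("MW_ENDPOINT", "https://<YOUR_WORKSPACE>.middleware.io")

api_key = os.environ["MW_API_KEY"]
endpoint = os.environ["MW_ENDPOINT"]
```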

OTLP/HTTP JSON field reference (at a glance)#

| Concept | Where to put it | Example |
| --- | --- | --- |
| Dataset tag | resource.attributes[] | mw.resource_type=custom, host.id=abc123 |
| Metric name | metrics[].name | swap-usage, custom_counter, latency |
| Description | metrics[].description | "SWAP usage" |
| Unit | metrics[].unit | Bytes, ms, 1 |
| Type | gauge / sum / histogram | match to your data shape |
| Value | asInt / asDouble | 4000500678 |
| Dimensions | data_points[].attributes[] | device=nvme0n1p4, environment=production |
| Timestamp | data_points[].time_unix_nano | 1758473263000000000 |

Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.