Custom Metrics
Send your own business or infrastructure metrics to Middleware and visualize them alongside built-in integrations. You can:
- Post OTLP/HTTP (JSON) from the command line (cURL), or
- Emit metrics from your application using the OpenTelemetry Python SDK (OTLP gRPC).
Use resource attributes to decide where data is stored: either into an existing dataset (e.g., Host, Kubernetes) or into the Custom Metrics dataset.
Prerequisites#
- Your Middleware workspace URL (e.g., https://<YOUR_WORKSPACE>.middleware.io).
- A Middleware API key with permission to ingest metrics.
- Outbound network access from the sender to your workspace URL.
Methods#
What this does#
This method sends OTLP/HTTP JSON to POST /v1/metrics. The payload contains:
- A resource (where the series belongs)
- One or more metrics (name, description, unit, type)
- Data points (value + attributes/dimensions + timestamp)
Timestamps use time_unix_nano: nanoseconds since Unix epoch.
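For reference, the current time in that format can be obtained with Python's standard library (a minimal sketch; `time.time_ns()` is available from Python 3.7 onward):

```python
import time

# time_unix_nano is nanoseconds since the Unix epoch (19 digits today)
now_ns = time.time_ns()
print(now_ns)
```

If your sender only has a seconds-based timestamp, multiply it by 1_000_000_000 before putting it in the payload.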
Step-by-step#
- Set your workspace URL and API key (use env vars or a secret manager in production).
- Prepare the JSON payload describing your metric(s).
- POST to your workspace’s /v1/metrics endpoint.
Example:
API_KEY="<YOUR_API_KEY>"
MW_ENDPOINT="https://<YOUR_WORKSPACE>.middleware.io:443"
curl -X POST "$MW_ENDPOINT/v1/metrics" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: $API_KEY" \
  -d @- << 'EOF'
{
  "resource_metrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "mw.resource_type",
            "value": { "string_value": "custom" }
          }
        ]
      },
      "scope_metrics": [
        {
          "metrics": [
            {
              "name": "swap-usage",
              "description": "SWAP usage",
              "unit": "Bytes",
              "gauge": {
                "data_points": [
                  {
                    "attributes": [
                      {
                        "key": "device",
                        "value": { "string_value": "nvme0n1p4" }
                      }
                    ],
                    "time_unix_nano": 1758473263000000000,
                    "asInt": 4000500678
                  }
                ]
              }
            }
          ]
        }
      ]
    }
  ]
}
EOF
Why these fields matter#
- mw.resource_type: custom: stores the data in the Custom Metrics dataset (see the mapping options below).
- name / description / unit: improve discoverability and correct charting (e.g., Bytes, ms, 1).
- gauge with asInt / asDouble: represents a point-in-time measurement (use sum for counters, histogram for distributions).
- attributes (e.g., device): dimensions you can group and filter by in dashboards and alerts.
- time_unix_nano: the exact time of the measurement, in nanoseconds.
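To illustrate the gauge-versus-counter distinction, here is a sketch of the same data point attached to a gauge and to a cumulative sum. The metric names are placeholders; the sum-specific fields (aggregation_temporality, is_monotonic) come from the OTLP metrics data model:

```python
import json
import time

# A single OTLP data point, following the snake_case JSON style of the cURL example
point = {
    "attributes": [{"key": "device", "value": {"string_value": "nvme0n1p4"}}],
    "time_unix_nano": time.time_ns(),
    "asInt": 4000500678,
}

# Gauge: a point-in-time measurement (current swap usage)
gauge_metric = {
    "name": "swap-usage",
    "unit": "Bytes",
    "gauge": {"data_points": [point]},
}

# Sum: a counter that only accumulates (total requests served)
sum_metric = {
    "name": "requests-total",
    "unit": "1",
    "sum": {
        "data_points": [point],
        "aggregation_temporality": 2,  # cumulative since process start
        "is_monotonic": True,          # counters only go up
    },
}

print(json.dumps(sum_metric, indent=2))
```

Either shape slots into the `metrics` array of the payload above in place of the gauge entry.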
What this does#
Your app uses OpenTelemetry to create instruments (counters, histograms, etc.). A periodic reader exports metrics to Middleware over OTLP gRPC, including any attached attributes (dimensions).
Install Required Packages:#
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
Use the template code below to send custom metrics:
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
import time
# Configure OTLP Exporter to export metrics to Middleware
exporter = OTLPMetricExporter(
    endpoint="https://<YOUR_WORKSPACE>.middleware.io",
    headers={"authorization": "<YOUR_API_KEY>"},
)
metric_reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))
# Get a meter
meter = metrics.get_meter(__name__)
# Define metrics
counter = meter.create_counter(
    name="custom_counter",
    description="Counts something custom",
    unit="1",
)
histogram = meter.create_histogram(
    name="custom_histogram",
    description="Records histogram data",
    unit="ms",
)
# Record metrics
while True:
    counter.add(1, attributes={"environment": "production", "region": "us-east-1"})
    histogram.record(100, attributes={"operation": "database_query"})
    time.sleep(5)

Here:
- Endpoint: for OTLP gRPC, use the workspace base URL (no /v1/metrics path).
- Headers: include your API key as authorization.
- Attributes: add stable dimensions you’ll filter and group by later (environment, region, service, etc.).
- Export cadence: the PeriodicExportingMetricReader batches and sends on an interval; keep the process running.
Ingest Into Existing Resources#
If you want your custom data to live under an existing Middleware dataset, include the required resource attribute from the table below.
Example: to attach a metric to a host, add host.id in the request body.
| Type | Resource Attributes Required | Data Will Be Stored To This Data Set |
|---|---|---|
| host | host.id | Host Metrics |
| k8s.node | k8s.node.uid | K8s Node Metrics |
| k8s.pod | k8s.pod.uid | K8s Pod Metrics |
| k8s.deployment | k8s.deployment.uid | K8s Deployment Metrics |
| k8s.daemonset | k8s.daemonset.uid | ~ |
| k8s.replicaset | k8s.replicaset.uid | ~ |
| k8s.statefulset | k8s.statefulset.uid | ~ |
| k8s.namespace | k8s.namespace.uid | ~ |
| service | service.name | ~ |
| os | os.type | ~ |
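As a sketch of the routing options above, the following helper builds the OTLP resource block for either target. The `otlp_resource` helper and the host id value are illustrative, not part of any API:

```python
import json

def otlp_resource(attrs: dict) -> dict:
    """Build an OTLP resource block from a flat dict of string attributes."""
    return {
        "attributes": [
            {"key": k, "value": {"string_value": v}} for k, v in attrs.items()
        ]
    }

# Route the series into Host Metrics by tagging it with host.id
# ("i-0abc123" is a placeholder; use the host's real id).
host_resource = otlp_resource({"host.id": "i-0abc123"})

# Route it into the Custom Metrics dataset instead:
custom_resource = otlp_resource({"mw.resource_type": "custom"})

print(json.dumps(host_resource))
```

Swap the resulting dict in as the `resource` object of the payload shown in the cURL example.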
Ingest custom data#
If your data doesn’t fit the existing types, send it to the Custom Metrics dataset:
mw.resource_type: custom
Any series with this resource attribute will appear under Custom Metrics.
Explore Data & Build Graphs#
- Open Dashboards → add a new widget.
- Select the dataset: either Custom Metrics or the specific dataset you targeted (e.g., Host Metrics).
- Choose your metric (e.g., swap-usage, custom_counter, custom_histogram).
- Use attributes (device, environment, region, etc.) to filter or group your series.
- Save the widget and compose your dashboard.
Set up Alerts#
- Create an alert and select the dataset/metric you’re sending.
- Define the condition (threshold/anomaly), evaluation window, and recipients.
- Use attribute filters to scope alerts (e.g., only environment=production).
Troubleshooting & Best Practices#
- Auth errors / no data: verify the Authorization header and the workspace URL.
- Wrong dataset: double-check the resource attribute (e.g., mw.resource_type=custom vs host.id).
- Timestamps off: make sure time_unix_nano is in nanoseconds and your sender’s clock is correct.
- Dimension drift: keep attribute keys consistent (avoid mixing env and environment).
- Secrets: don’t hard-code API keys; prefer environment variables or a secret manager.
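One way to guard against dimension drift is to normalize attribute keys in one place before recording. A minimal sketch; the alias map is an example you would adapt to your own naming conventions:

```python
# Map stray key spellings to the canonical dimension names, so "env" and
# "environment" never become two separate dimensions in dashboards.
ALIASES = {"env": "environment", "region_name": "region"}

def normalize(attributes: dict) -> dict:
    """Rewrite known alias keys to their canonical names."""
    return {ALIASES.get(k, k): v for k, v in attributes.items()}

print(normalize({"env": "production", "region": "us-east-1"}))
# {'environment': 'production', 'region': 'us-east-1'}
```

Call `normalize()` on every attribute dict you pass to `counter.add()` or `histogram.record()`.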
OTLP/HTTP JSON field reference (at a glance)#
| Concept | Where to put it | Example |
|---|---|---|
| Dataset tag | resource.attributes[] | mw.resource_type=custom, host.id=abc123 |
| Metric name | metrics[].name | swap-usage, custom_counter, latency |
| Description | metrics[].description | “SWAP usage” |
| Unit | metrics[].unit | Bytes, ms, 1 |
| Type | gauge / sum / histogram | match to your data shape |
| Value | asInt / asDouble | 4000500678 |
| Dimensions | data_points[].attributes[] | device=nvme0n1p4, environment=production |
| Timestamp | data_points[].time_unix_nano | 1758473263000000000 |
Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.