log_to_metric transform

Accepts `log` events and allows you to convert logs into one or more metrics.


Config File

vector.toml
[transforms.my_transform_id]
# REQUIRED - General
type = "log_to_metric" # must be: "log_to_metric"
inputs = ["my-source-id"]
# REQUIRED - Metrics
[[transforms.my_transform_id.metrics]]
type = "counter" # enum: "counter", "gauge", "histogram", and "set"
field = "duration"
name = "duration_total"
increment_by_value = false # default, relevant when type = "counter"
tags = {host = "${HOSTNAME}", region = "us-east-1", status = "{{status}}"}

Options

REQUIRED - General

type (string)
The component type. Must be: "log_to_metric".

inputs ([string])
A list of upstream source or transform IDs. See Config Composition for more info. Example: ["my-source-id"]

REQUIRED - Metrics

metrics.type (string)
The metric type. Enum: "counter", "gauge", "histogram", "set".

metrics.field (string)
The log field to use as the metric. See Null Fields for more info. Example: "duration"

metrics.name (string)
The name of the metric. Defaults to <field>_total for counters and <field> for gauges. Example: "duration_total"

metrics.increment_by_value (bool)
If true, the metric will be incremented by the field's value. If false, the metric will be incremented by 1 regardless of the field value. Only relevant when type = "counter". Default: false

metrics.tags.* (string)
Key/value pairs representing the metric tags. Example: (see above)

Examples

The following examples demonstrate timings, counting, summing, gauges, and sets.

This example demonstrates capturing timings in your logs. Given the following log line:

log
{
"host": "10.22.11.222",
"message": "Sent 200 in 54.2ms",
"status": 200,
"time": 54.2
}

You can convert the time field into a histogram metric:

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "histogram"
field = "time"
name = "time_ms" # optional
tags.status = "{{status}}" # optional
tags.host = "{{host}}" # optional

A metric event will be emitted with the following structure:

{
"histogram": {
"name": "time_ms",
"val": 54.2,
"sample_rate": 1,
"tags": {
"status": "200",
"host": "10.22.11.222"
}
}
}

This metric will then proceed down the pipeline, and depending on the sink, will be aggregated in Vector (such is the case for the prometheus sink) or will be aggregated in the store itself.

This example demonstrates counting HTTP status codes.

Given the following log line:

log
{
"host": "10.22.11.222",
"message": "Sent 200 in 54.2ms",
"status": 200
}

You can count the number of responses by status code:

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "counter"
field = "status"
name = "response_total" # optional
tags.status = "{{status}}"
tags.host = "{{host}}"

A metric event will be emitted with the following structure:

{
"counter": {
"name": "response_total",
"val": 1.0,
"tags": {
"status": "200",
"host": "10.22.11.222"
}
}
}


In this example we'll demonstrate computing a sum. The scenario we've chosen is to compute the total of orders placed.

Given the following log line:

log
{
"host": "10.22.11.222",
"message": "Order placed for $122.20",
"total": 122.2
}

You can reduce this log into a counter metric that increases by the field's value:

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "counter"
field = "total"
name = "order_total" # optional
increment_by_value = true # optional
tags.host = "{{host}}" # optional

A metric event will be emitted with the following structure:

{
"counter": {
"name": "order_total",
"val": 122.20,
"tags": {
"host": "10.22.11.222"
}
}
}


In this example we'll demonstrate creating gauges that represent the current CPU load averages.

Given the following log line:

log
{
"host": "10.22.11.222",
"message": "CPU activity sample",
"1m_load_avg": 78.2,
"5m_load_avg": 56.2,
"15m_load_avg": 48.7
}

You can reduce this log into multiple gauge metrics:

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "gauge"
field = "1m_load_avg"
tags.host = "{{host}}" # optional
[[transforms.log_to_metric.metrics]]
type = "gauge"
field = "5m_load_avg"
tags.host = "{{host}}" # optional
[[transforms.log_to_metric.metrics]]
type = "gauge"
field = "15m_load_avg"
tags.host = "{{host}}" # optional

Multiple metric events will be emitted with the following structure:

[
{
"gauge": {
"name": "1m_load_avg",
"val": 78.2,
"tags": {
"host": "10.22.11.222"
}
}
},
{
"gauge": {
"name": "5m_load_avg",
"val": 56.2,
"tags": {
"host": "10.22.11.222"
}
}
},
{
"gauge": {
"name": "15m_load_avg",
"val": 48.7,
"tags": {
"host": "10.22.11.222"
}
}
}
]


In this example we'll demonstrate how to use sets. Sets are primarily a Statsd concept representing the number of unique values seen for a given metric. The idea is that you pass the unique/high-cardinality value as the metric value, and the metric store counts the number of unique values seen.

For example, given the following log line:

log
{
"host": "10.22.11.222",
"message": "Sent 200 in 54.2ms",
"remote_addr": "233.221.232.22"
}

You can count the number of unique remote_addr values by using a set:

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "set"
field = "remote_addr"
tags.host = "{{host}}" # optional

A metric event will be emitted with the following structure:

{
"set": {
"name": "remote_addr",
"val": "233.221.232.22",
"tags": {
"host": "10.22.11.222"
}
}
}


How It Works

Environment Variables

Environment variables are supported through all of Vector's configuration. Simply add ${MY_ENV_VAR} in your Vector configuration file and the variable will be replaced before being evaluated.

You can learn more in the Environment Variables section.
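
As a sketch, environment variables and event-field templates can be combined in the same tags table (the HOSTNAME variable is assumed to be set in the environment before Vector starts):

```toml
[transforms.my_transform_id]
type = "log_to_metric"
inputs = ["my-source-id"]

[[transforms.my_transform_id.metrics]]
type = "counter"
field = "status"
name = "response_total"
# ${HOSTNAME} is resolved from the environment when the config file is
# loaded; {{status}} is templated from each event at runtime.
tags = {host = "${HOSTNAME}", status = "{{status}}"}
```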

Multiple Metrics

For clarification, when you convert a single log event into multiple metric events, the metric events are not emitted as a single array. They are emitted individually, and the downstream components treat them as individual events. Downstream components are not aware they were derived from a single log event.
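
For example, a single configuration with two [[metrics]] tables (a hypothetical sketch) emits two independent metric events per matching log event:

```toml
[transforms.my_transform_id]
type = "log_to_metric"
inputs = ["my-source-id"]

# Each [[metrics]] table below produces its own metric event; downstream
# sinks receive them one at a time, not as an array.
[[transforms.my_transform_id.metrics]]
type = "counter"
field = "status"
name = "response_total"

[[transforms.my_transform_id.metrics]]
type = "histogram"
field = "time"
name = "time_ms"
```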

Null Fields

If the target log field contains a null value it will be ignored, and a metric will not be emitted.
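
For instance, with field = "duration" configured, a hypothetical event like the following produces no metric; Vector simply moves on to the next event:

```json
{
  "host": "10.22.11.222",
  "duration": null
}
```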

Reducing

It's important to understand that this transform does not reduce multiple logs into a single metric. Instead, this transform converts logs into granular individual metrics that can then be reduced at the edge. Where the reduction happens depends on your metrics storage. For example, the prometheus sink will reduce logs in the sink itself for the next scrape, while other metrics sinks will proceed to forward the individual metrics for reduction in the metrics storage itself.
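
As an illustrative sketch (the component names and address below are assumptions, not part of this document), a pipeline where reduction happens inside Vector between Prometheus scrapes might look like:

```toml
[transforms.status_counts]
type = "log_to_metric"
inputs = ["my-source-id"]

[[transforms.status_counts.metrics]]
type = "counter"
field = "status"
name = "response_total"
tags.status = "{{status}}"

# The prometheus sink aggregates the individual counter increments in
# Vector itself and exposes the running totals for the next scrape.
[sinks.prometheus_out]
type = "prometheus"
inputs = ["status_counts"]
address = "0.0.0.0:9598"
```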

Troubleshooting

The best place to start troubleshooting is the Vector log, typically located at /var/log/vector.log. Then proceed to follow the Troubleshooting Guide.

If the Troubleshooting Guide does not resolve your issue, please:

  1. If you encountered a bug, please file a bug report.

  2. If you are missing a feature, please file a feature request.

  3. If you need help, join our chat/forum community. You can post a question and search previous questions.
