aws_s3 sink

Batches `log` events to AWS S3 via the `PutObject` API endpoint.

The aws_s3 sink is in beta. Please see the current enhancements and bugs for known issues. We kindly ask that you add any missing issues as it will help shape the roadmap of this component.

Config File

vector.toml
[sinks.my_sink_id]
# REQUIRED - General
type = "aws_s3" # must be: "aws_s3"
inputs = ["my-source-id"]
bucket = "my-bucket"
region = "us-east-1"
# REQUIRED - Requests
encoding = "ndjson" # enum: "ndjson" or "text"
# OPTIONAL - Batching
batch_size = 10490000 # default, bytes
batch_timeout = 300 # default, seconds
# OPTIONAL - Object Names
key_prefix = "date=%F/"
# OPTIONAL - Requests
compression = "gzip" # default, enum: "gzip" or "none"
# For a complete list of options see the full documentation.


The aws_s3 sink batches log events up to the batch_size or batch_timeout options. When flushed, Vector will write to AWS S3 via the PutObject API endpoint. The encoding is dictated by the encoding option. For example, with ndjson encoding and the default gzip compression, the resulting request looks roughly like the following (the exact key and headers depend on your settings):

PUT /<key_prefix><timestamp>-<uuidv4>.log.gz HTTP/1.1
Host: <bucket>.s3.<region>.amazonaws.com
Content-Length: <byte_size>
Content-Encoding: gzip

<gzip_compressed_ndjson_payload>

How It Works

Authentication

Vector checks for AWS credentials in the following order:

  1. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

  2. The credential_process command in the AWS config file (usually located at ~/.aws/config).

  3. The AWS credentials file (usually located at ~/.aws/credentials).

  4. The IAM instance profile (this will only work if running on an EC2 instance with an instance profile/role).

If credentials are not found, the healthcheck will fail and an error will be logged.

Obtaining an access key

In general, we recommend using instance profiles/roles whenever possible. In cases where this is not possible you can generate an AWS access key for any user within your AWS account. AWS provides a detailed guide on how to do this.

Buffers & Batches

The aws_s3 sink buffers & batches data. Note that Vector treats buffers and batches as sink-specific concepts rather than global ones. This isolates sinks, ensuring service disruptions are contained and delivery guarantees are honored.

Buffer types

The buffer.type option allows you to control buffer resource usage:

  memory: Pros: Fast. Cons: Not persisted across restarts. Possible data loss in the event of a crash. Uses more memory.

  disk: Pros: Persisted across restarts, durable. Uses much less memory. Cons: Slower, see below.

Buffer overflow

The buffer.when_full option allows you to control the behavior when the buffer overflows:

  block: Applies back pressure until the buffer makes room. This helps to prevent data loss but will cause data to pile up on the edge.

  drop_newest: Drops new data as it's received. This data is lost. This should be used when performance is the highest priority.
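
As an example, here is a minimal sketch of a durable buffer that applies back pressure when full (the max_size value is illustrative, not a recommendation):

[sinks.my_sink_id.buffer]
type = "disk"         # enum: "memory" or "disk"
when_full = "block"   # enum: "block" or "drop_newest"
max_size = 104900000  # bytes; illustrative size for a disk buffer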

Batch flushing

Batches are flushed when one of two conditions is met:

  1. The batch age meets or exceeds the configured batch_timeout (default: 300 seconds).

  2. The batch size meets or exceeds the configured batch_size (default: 10490000 bytes).
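
For example, to flush smaller batches more frequently, you could lower both thresholds (illustrative values, not the defaults):

batch_size = 5000000 # bytes, ~5 MB
batch_timeout = 60   # seconds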

Columnar Formats

Vector has plans to support columnar formats, such as ORC and Parquet, in v0.6.

Compression

The aws_s3 sink compresses payloads before flushing. This helps to reduce the payload size, ultimately reducing bandwidth and cost. This is controlled via the compression option. Each compression type is described in more detail below:

  gzip: The payload will be compressed in Gzip format before being sent.

  none: The payload will not be compressed at all.
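
For example, to disable compression for consumers that cannot read gzipped objects:

compression = "none" # default: "gzip"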

Delivery Guarantee

This component offers an at-least-once delivery guarantee if your pipeline is configured to achieve this.

Encodings

The aws_s3 sink encodes events before writing them downstream. This is controlled via the encoding option, which accepts the following values:

  ndjson: The payload will be encoded as newline-delimited JSON, each line representing a JSON-encoded event.

  text: The payload will be encoded as newline-delimited text, each line representing the value of the event's "message" key.
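
For example, a parsed event with timestamp, message, and host fields (values here are illustrative) would be written as one line per event:

# encoding = "ndjson"
{"timestamp": "2019-08-09T10:11:31Z", "message": "GET /index.html", "host": "10.2.22.122"}

# encoding = "text"
GET /index.html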

Dynamic encoding

By default, the encoding chosen is dynamic, based on the explicit/implicit nature of the event's structure. For example, if an event is parsed (explicit structuring), Vector will use ndjson to encode the structured data. If the event was not explicitly structured, the text encoding will be used.

To further explain why Vector adopts this default, take the simple example of accepting data over the tcp source and then connecting it directly to the aws_s3 sink. It is less surprising that the outgoing data reflects the incoming data exactly since it was not explicitly structured.

Environment Variables

Environment variables are supported through all of Vector's configuration. Simply add ${MY_ENV_VAR} in your Vector configuration file and the variable will be replaced before being evaluated.
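
For example, assuming an AWS_REGION environment variable is set:

region = "${AWS_REGION}" # replaced with the value of AWS_REGION at load time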

You can learn more in the Environment Variables section.

Health Checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization.

If the health check fails an error will be logged and Vector will proceed to start. If you'd like to exit immediately upon health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

And finally, if you'd like to disable health checks entirely for this sink you can set the healthcheck option to false.
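
For example:

healthcheck = false # default: true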

Object Naming

By default, Vector will name your S3 objects in the following format (the exact form depends on your compression setting):

<key_prefix><timestamp>-<uuidv4>.log     # no compression
<key_prefix><timestamp>-<uuidv4>.log.gz  # gzip compression

For example:

date=2019-08-09/1565309491-4dec7675-4f47-46a6-9f2d-3f57ab3cdf38.log     # no compression
date=2019-08-09/1565309491-4dec7675-4f47-46a6-9f2d-3f57ab3cdf38.log.gz  # gzip compression
Vector appends a UUIDv4 token to ensure there are no name conflicts in the unlikely event that two Vector instances are writing data at the same time.

You can control the resulting name via the key_prefix, filename_time_format, and filename_append_uuid options.

Partitioning

Partitioning is controlled via the key_prefix option, which allows you to dynamically partition data on the fly. Vector's template syntax is supported for this option, enabling you to use field values as the partition's key.
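
For example, the following key_prefix partitions objects by an application_id field (an illustrative field name) and date:

key_prefix = "application_id={{ application_id }}/date=%F/"
# an event with application_id = "app-123" received on 2019-08-09 would be
# written under: application_id=app-123/date=2019-08-09/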

Rate Limits

Vector offers a few levers to control the rate and volume of requests to the downstream service. Start with the rate_limit_duration and rate_limit_num options to ensure Vector does not exceed the specified number of requests in the specified window. You can further control the pace at which this window is saturated with the request_in_flight_limit option, which will guarantee no more than the specified number of requests are in-flight at any given time.
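
For example, here is a sketch that caps throughput at 5 requests per second with at most 5 requests in flight (illustrative values, not the defaults):

rate_limit_duration = 1     # seconds
rate_limit_num = 5          # max requests per rate_limit_duration
request_in_flight_limit = 5 # max concurrent requests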

Please note, Vector's defaults are carefully chosen and it should be rare that you need to adjust these. If you find a good reason to do so, please share it with the Vector team by opening an issue.

Retry Policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the retry_attempts and retry_backoff_secs options.
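
For example (illustrative values, not necessarily the defaults):

retry_attempts = 5     # number of retry attempts
retry_backoff_secs = 1 # seconds to wait between retry attempts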

Searching with AWS Athena

Storing log data in S3 is a powerful persistence strategy, mainly because the data remains searchable, and AWS Athena makes searching it easier than ever.


  1. Head over to the Athena console.

  2. Create a new table, replacing the <...> variables as needed:

    CREATE EXTERNAL TABLE logs (
      `timestamp` string,
      message string,
      host string
    )
    PARTITIONED BY (`date` string)
    ROW FORMAT serde 'org.openx.data.jsonserde.JsonSerDe'
    with serdeproperties ( 'paths'='timestamp, message, host' )
    LOCATION 's3://<bucket>/<key_prefix>';
  3. Discover your partitions by running the following query:

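    -- MSCK REPAIR TABLE is the standard Athena statement for loading new partitions
    MSCK REPAIR TABLE logs;
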
  4. Query your data:

    SELECT host, COUNT(*)
    FROM logs
    GROUP BY host;

Vector has plans to support columnar formats in v0.6, which will allow for very fast and efficient querying on S3.

Template Syntax

The key_prefix option supports Vector's template syntax, enabling dynamic values derived from the event's data. This syntax accepts strftime specifiers as well as the {{ field_name }} syntax for accessing event fields. For example:

# ...
key_prefix = "date=%F/"
key_prefix = "date=%F/hour=%H/"
key_prefix = "year=%Y/month=%m/day=%d/"
key_prefix = "application_id={{ application_id }}/date=%F/"
# ...

You can read more about the complete syntax in the template syntax section.

Timeouts

To ensure the pipeline does not halt when a service fails to respond, Vector will abort requests after 30 seconds. This can be adjusted with the request_timeout_secs option.
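
For example:

request_timeout_secs = 30 # default, seconds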

It is highly recommended that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.

Troubleshooting

The best place to start with troubleshooting is to check the Vector logs, typically located at /var/log/vector.log. Then proceed to follow the Troubleshooting Guide.

If the Troubleshooting Guide does not resolve your issue, please:

  1. If you encountered a bug, please file a bug report.

  2. If you are missing a feature, please file a feature request.

  3. If you need help, join our chat/forum community. You can post a question and search previous questions.