Kinesis and SigNoz Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, and InfluxDB is the #1 time series platform, built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Kinesis plugin enables you to read from Kinesis data streams, supporting various data formats and configurations.

This configuration turns any Telegraf agent into a Remote Write publisher for SigNoz, streaming rich metrics straight into the SigNoz backend with a single URL change.
Integration details
Kinesis
The Kinesis Telegraf plugin is designed to read from Amazon Kinesis data streams, enabling users to gather metrics in real time. As a service input plugin, it operates by listening for incoming data rather than polling at regular intervals. The configuration specifies various options including the AWS region, stream name, authentication credentials, and data formats. It supports tracking of undelivered messages to prevent data loss, and users can utilize DynamoDB for maintaining checkpoints of the last processed records. This plugin is particularly useful for applications requiring reliable and scalable stream processing alongside other monitoring needs.
SigNoz
SigNoz is an open source observability platform that stores metrics, traces, and logs. When you deploy SigNoz, its signoz-otel-collector-metrics service exposes a Prometheus Remote Write receiver (default :13133/api/v1/write). By configuring Telegraf to serialize metrics in the Prometheus Remote Write format and send them to this endpoint, you can push any Telegraf-collected metrics, whether SNMP counters, cloud service stats, or business KPIs, directly into SigNoz. The output natively serializes metrics in the Remote Write protobuf format and supports external labels, metadata export, retries, and TLS or bearer-token auth, so it fits zero-trust and multi-tenant SigNoz clusters. Inside SigNoz, the data lands in ClickHouse tables that back Metrics Explorer, alert rules, and unified dashboards. This approach lets organizations unify Prometheus and OTLP pipelines, enables long-term retention powered by ClickHouse compression, and avoids vendor lock-in while retaining PromQL-style queries.
Configuration
Kinesis
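Exact option names depend on your Telegraf version; the following is a minimal sketch of an inputs.kinesis_consumer block covering the options described above (region, stream name, credentials, data format, and optional DynamoDB checkpointing). Stream and table names are placeholders to replace for your environment.

```toml
[[inputs.kinesis_consumer]]
  ## AWS region and credentials (role- or profile-based auth also works)
  region = "us-east-1"
  # access_key = ""
  # secret_key = ""

  ## Name of an existing Kinesis data stream to read from (placeholder)
  streamname = "telegraf-metrics"

  ## Where to start reading: TRIM_HORIZON (oldest) or LATEST
  shard_iterator_type = "TRIM_HORIZON"

  ## Maximum undelivered messages to hold before pausing reads
  max_undelivered_messages = 1000

  ## Format of the records in the stream (influx, json, etc.)
  data_format = "influx"

  ## Optional: persist checkpoints of the last processed records in DynamoDB
  [inputs.kinesis_consumer.checkpoint_dynamodb]
    app_name = "telegraf"
    table_name = "telegraf-checkpoints"
```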
SigNoz
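Telegraf has no dedicated SigNoz output; the approach assumed here is the HTTP output with the prometheusremotewrite serializer pointed at the Remote Write endpoint described above. The URL below reuses the default from that description; verify the host and port against your SigNoz deployment.

```toml
[[outputs.http]]
  ## SigNoz Prometheus Remote Write receiver (endpoint from the description
  ## above; confirm for your deployment)
  url = "http://signoz-otel-collector-metrics:13133/api/v1/write"

  ## Serialize metrics as Prometheus Remote Write protobuf
  data_format = "prometheusremotewrite"

  ## Optional TLS settings for zero-trust or multi-tenant clusters
  # tls_ca = "/etc/telegraf/ca.pem"

  ## Headers expected by Remote Write receivers
  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
    # Authorization = "Bearer <token>"   # if your SigNoz ingest requires auth
```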
Input and output integration examples
Kinesis
1. Real-Time Data Processing with Kinesis: This use case involves integrating the Kinesis plugin with a monitoring dashboard to analyze incoming data metrics in real time. For instance, an application could consume logs from multiple services and present them visually, allowing operations teams to quickly identify trends and react to anomalies as they occur.
2. Serverless Log Aggregation: Utilize this plugin in a serverless architecture where Kinesis streams aggregate logs from various microservices. The plugin can create metrics that help detect issues in the system, automating alerting processes through third-party integrations and enabling teams to minimize downtime and improve reliability.
3. Dynamic Scaling Based on Stream Metrics: Implement a solution where stream metrics consumed by the Kinesis plugin are used to adjust resources dynamically. For example, if the number of records processed spikes, corresponding scale-up actions can be triggered to handle the increased load, ensuring optimal resource allocation and performance.
4. Data Pipeline to S3 with Checkpointing: Create a robust data pipeline where Kinesis stream data is processed through the Telegraf Kinesis plugin, with checkpoints stored in DynamoDB. This approach ensures data consistency and reliability by managing the state of processed data, enabling seamless integration with downstream data lakes or storage solutions.
SigNoz
1. Multi-Cluster Federated Monitoring: Drop a Telegraf DaemonSet into each Kubernetes cluster, tag metrics with cluster=<name>, and Remote Write them to a central SigNoz instance (a minimal sketch follows this list). Ops teams get a single PromQL window across prod, staging, and edge clusters without running Thanos sidecars.
2. Factory-Floor Edge Gateway: A rugged Intel NUC on the shop floor runs Telegraf to scrape Modbus PLCs and environmental sensors. It batches readings every 5 seconds and pushes them over an intermittent 4G link to SigNoz SaaS. ClickHouse compression keeps costs low, while AI-based outlier detection in SigNoz flags overheating motors before failure.
3. SaaS Usage Metering: Telegraf runs alongside each microservice, exporting per-tenant counters (api_calls, gigabytes_processed). Remote Write streams the data to SigNoz, where a scheduled ClickHouse materialized view aggregates usage for monthly billing, so no separate metering stack is required.
4. Autoscaling Feedback Loop: Combine Telegraf's Kubernetes input with the Remote Write output to publish granular pod CPU and queue-length metrics into SigNoz. A custom SigNoz alert fires when P95 latency breaches 200 ms, and a GitOps controller reads that alert to trigger a HorizontalPodAutoscaler tweak, closing the loop between observability and automation.
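As a companion to the first scenario, here is a minimal per-cluster sketch; hostnames and tag values are placeholders. The global_tags block stamps every metric the agent collects with its cluster of origin, and the HTTP output pushes it to a central SigNoz endpoint over Remote Write.

```toml
# Per-cluster Telegraf agent (e.g. deployed as a Kubernetes DaemonSet);
# only the cluster tag value changes from cluster to cluster.
[global_tags]
  cluster = "prod-us-east"   # placeholder; set per cluster

[[outputs.http]]
  ## Central SigNoz Remote Write endpoint (placeholder hostname)
  url = "http://signoz-central.example.com:13133/api/v1/write"
  data_format = "prometheusremotewrite"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
```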
Feedback
Thank you for being part of our community! If you have any general feedback or find any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration

Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration

Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration