Fluentd and InfluxDB Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Fluentd Input Plugin gathers metrics from Fluentd’s in_monitor plugin endpoint. It provides insights into various plugin metrics while allowing for custom configurations to reduce series cardinality.
The InfluxDB plugin writes metrics to the InfluxDB HTTP service, allowing for efficient storage and retrieval of time series data.
Integration details
Fluentd
This plugin gathers metrics from the monitoring endpoint exposed by Fluentd’s in_monitor plugin. It reads data from the /api/plugins.json resource and allows exclusion of specific plugins based on their type.
InfluxDB
The InfluxDB Telegraf plugin sends metrics to the InfluxDB HTTP API, enabling structured storage and querying of time series data. It integrates seamlessly with InfluxDB, providing essential features such as token-based authentication and support for multiple InfluxDB cluster nodes for reliable, scalable data ingestion. Through its configuration, users can specify options such as the organization, destination buckets, and HTTP-specific settings, tailoring how data is sent and stored. The plugin also supports secret management for sensitive data, which improves security in production environments. It is particularly useful in modern observability stacks where real-time analytics and storage of time series data are crucial.
Configuration
Fluentd
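Below is a minimal Telegraf configuration sketch for the Fluentd input plugin. The endpoint URL, port, and excluded plugin types are illustrative values; point the endpoint at your own Fluentd monitor_agent address.

```toml
[[inputs.fluentd]]
  ## URL of the Fluentd in_monitor (monitor_agent) endpoint.
  ## Only one URI is allowed; adjust host and port to your deployment.
  endpoint = "http://localhost:24220/api/plugins.json"

  ## Plugin types to exclude from collection, matched on the "type" field.
  exclude = [
    "monitor_agent",
    "dummy",
  ]
```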
InfluxDB
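And a minimal sketch for the output side, assuming the InfluxDB v2 HTTP API (Telegraf’s outputs.influxdb_v2 plugin). The URLs, token, organization, and bucket below are placeholders to replace with your own values.

```toml
[[outputs.influxdb_v2]]
  ## The URLs of one or more InfluxDB cluster nodes.
  urls = ["http://127.0.0.1:8086"]

  ## API token used for authentication; read from the environment here
  ## (or use Telegraf's secret-store support) rather than hard-coding it.
  token = "$INFLUX_TOKEN"

  ## Destination organization and bucket for written metrics.
  organization = "example-org"
  bucket = "example-bucket"
```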
Input and output integration examples
Fluentd
1. Basic Configuration: Set up the Fluentd Input Plugin to gather metrics from your Fluentd instance’s monitoring endpoint, ensuring you are able to track performance and usage statistics.
2. Excluding Plugins: Use the `exclude` option to ignore specific plugins’ metrics that are not necessary for your monitoring needs, streamlining data collection and focusing on what matters.
3. Custom Plugin ID: Implement the `@id` parameter in your Fluentd configuration to maintain a consistent `plugin_id`, which helps avoid issues with high series cardinality during frequent restarts (see the Fluentd snippet after this list).
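For reference, here is a sketch of the Fluentd side of this setup: enabling the monitor_agent source that exposes the /api/plugins.json endpoint Telegraf reads, with a fixed `@id`. The bind address, port, and ID shown are illustrative.

```
# Fluentd configuration (illustrative): expose the monitoring endpoint for Telegraf.
# A fixed @id keeps the reported plugin_id stable across restarts.
<source>
  @type monitor_agent
  @id in_monitor_agent
  bind 0.0.0.0
  port 24220
</source>
```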
InfluxDB
1. Real-Time System Monitoring: Utilize the InfluxDB plugin to capture and store metrics from a range of system components, such as CPU usage, memory consumption, and disk I/O. By pushing these metrics into InfluxDB, you can create a live dashboard that visualizes system performance in real time (a minimal pipeline sketch follows this list). This setup not only helps in identifying performance bottlenecks but also assists in proactive capacity planning by analyzing trends over time.
2. Performance Tracking for Web Applications: Automatically gather and push metrics related to web application performance, such as request durations, error rates, and user interactions, to InfluxDB. By employing this plugin in your monitoring stack, you can use the stored metrics to generate reports and analyses that help understand user behavior and application efficiency, thus guiding development and optimization efforts.
3. IoT Data Aggregation: Leverage the InfluxDB Telegraf plugin to collect sensor data from various IoT devices and store it in a centralized InfluxDB instance. This use case enables you to analyze trends and patterns in environmental or machine data over time, facilitating smarter decisions and predictive maintenance strategies. By integrating IoT data into InfluxDB, organizations can harness the power of historical data analysis to drive innovation and operational efficiency.
4. Analyzing Historical Metrics for Forecasting: Set up the InfluxDB plugin to send historical metric data into InfluxDB and use it to drive forecasting models. By analyzing past performance metrics, you can create predictive models that forecast future trends and demands. This application is particularly useful for business intelligence purposes, helping organizations prepare for fluctuations in resource needs based on historical usage patterns.
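As an illustration of the first use case, here is a minimal Telegraf pipeline sketch that collects basic system metrics and writes them to InfluxDB. The chosen input plugins and the InfluxDB connection details are assumptions to adapt to your environment.

```toml
# Collect basic system metrics for real-time monitoring dashboards.
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[inputs.disk]]

# Write everything to InfluxDB over its HTTP API (placeholder connection details).
[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "system-metrics"
```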
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration