AMQP and Apache Hudi Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB-Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The AMQP Consumer Input Plugin allows you to ingest data from an AMQP 0-9-1 compliant message broker, such as RabbitMQ, enabling seamless data collection for monitoring and analytics purposes.
The Parquet Output Plugin writes metrics to Parquet files, preparing them for ingestion into Apache Hudi's lakehouse architecture.
Integration details
AMQP
This plugin provides a consumer for use with AMQP 0-9-1, a prominent implementation of which is RabbitMQ. AMQP, or Advanced Message Queuing Protocol, was originally developed to enable reliable, interoperable messaging between diverse systems in a network. The plugin reads metrics from a topic exchange using a configured queue and binding key, delivering a flexible and efficient means of collecting data from AMQP-compliant messaging systems. This enables users to leverage existing RabbitMQ implementations to monitor their applications effectively by capturing detailed metrics for analysis and alerting.
Apache Hudi
This configuration leverages Telegraf's Parquet plugin to serialize metrics into columnar Parquet files suitable for downstream ingestion by Apache Hudi. The plugin writes metrics grouped by metric name into files in a specified directory, buffering writes for efficiency and optionally rotating files on timers. It considers schema compatibility: metrics with incompatible schemas are dropped, ensuring consistency. Apache Hudi can then consume these Parquet files via tools like DeltaStreamer or Spark jobs, enabling transactional ingestion, time-travel queries, and upserts on your time series data.
Configuration
AMQP
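A minimal configuration sketch for the AMQP consumer input, assuming a local RabbitMQ broker and a topic exchange named telegraf; the broker URL, credentials, queue, and binding key below are placeholders to adapt to your environment, and option names should be checked against the plugin README for your Telegraf version.

```toml
[[inputs.amqp_consumer]]
  ## AMQP broker(s) to connect to (placeholder URL and credentials)
  brokers = ["amqp://guest:guest@localhost:5672/"]

  ## Topic exchange to consume from
  exchange = "telegraf"
  exchange_type = "topic"

  ## Queue and binding key used to receive metrics from the exchange
  queue = "telegraf"
  binding_key = "#"

  ## Maximum number of unacknowledged messages the server will deliver
  prefetch_count = 50

  ## Format of the messages published to the exchange
  data_format = "influx"
```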
Apache Hudi
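A minimal sketch of the Parquet output that stages metrics for Hudi ingestion; the directory path and rotation interval are examples only, and the full option list should be confirmed in the plugin documentation for your Telegraf version.

```toml
[[outputs.parquet]]
  ## Directory where Parquet files are written (placeholder path)
  directory = "/var/lib/telegraf/parquet"

  ## Rotate to a new file on this interval so downstream Hudi jobs
  ## only pick up closed files
  rotation_interval = "10m"
```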
Input and output integration examples
AMQP
1. Integrating Application Metrics with AMQP: Use the AMQP Consumer plugin to gather application metrics that are published to a RabbitMQ exchange. By configuring the plugin to listen to specific queues, teams can gain insights into application performance, tracking request rates, error counts, and latency metrics in real time. This setup not only aids in anomaly detection but also provides valuable data for capacity planning and system optimization.
2. Event-Driven Monitoring: Configure the AMQP Consumer to trigger specific monitoring events whenever certain conditions are met within an application. For instance, if a message indicating a high error rate is received, the plugin can feed this data into monitoring tools, generating alerts or scaling events. This integration can improve responsiveness to issues and automate parts of the operations workflow.
3. Cross-Platform Data Aggregation: Leverage the AMQP Consumer plugin to consolidate metrics from various applications distributed across different platforms. By utilizing RabbitMQ as a centralized message broker, organizations can unify their monitoring data, allowing for comprehensive analysis and dashboarding through Telegraf while maintaining visibility across heterogeneous environments.
4. Real-Time Log Processing: Extend the AMQP Consumer to capture log data sent to a RabbitMQ exchange, processing logs in real time for monitoring and alerting purposes. This ensures that operational issues are detected and addressed swiftly by analyzing log patterns, trends, and anomalies as they occur.
Apache Hudi
1. Transactional Lakehouse Metrics: Buffer and write web service metrics as Parquet files for DeltaStreamer to ingest into Hudi, enabling upserts, ACID compliance, and time travel on historical performance data.
2. Edge Device Batch Analytics: Telegraf running on IoT gateways writes metrics to Parquet locally, where periodic Spark jobs ingest them into Hudi for long-term analytics and traceability (a sketch of such a job follows this list).
3. Schema-Enforced Abnormal Metric Handling: Use the Parquet plugin's strict schema-dropping behavior to prevent malformed or unexpected metric changes. Hudi ingestion then guarantees consistent schema and data quality in downstream datasets.
4. Data Platform Integration: Store Telegraf metrics as Parquet files in an S3/ADLS landing zone. Hudi's Spark-based ingestion pipeline then loads them into a unified, queryable lakehouse alongside business events and logs.
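As a sketch of the downstream side, a small PySpark job could read the Parquet files written by Telegraf and upsert them into a Hudi table. The paths, table name, record key, and precombine field below are hypothetical and depend on your metric schema; the job also assumes the Hudi Spark bundle is available on the Spark classpath.

```python
# Hypothetical PySpark job: read Telegraf's Parquet output and upsert it into
# a Hudi table. Paths, table name, and key fields are examples only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("telegraf-parquet-to-hudi")
    # Requires the Hudi Spark bundle on the classpath (e.g. via --packages)
    .getOrCreate()
)

# Read every Parquet file Telegraf has rotated out so far
df = spark.read.parquet("/var/lib/telegraf/parquet/*.parquet")

(
    df.write.format("hudi")
    .option("hoodie.table.name", "telegraf_metrics")
    # Record key and precombine field depend on your metric schema
    .option("hoodie.datasource.write.recordkey.field", "host")
    .option("hoodie.datasource.write.precombine.field", "timestamp")
    .option("hoodie.datasource.write.operation", "upsert")
    .mode("append")
    .save("s3://my-lakehouse/telegraf_metrics")
)
```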
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration