Components and Services within the Kubernetes Cluster
The heart of Condense lies within the Kubernetes cluster, where various services collaborate to process data:
A versatile web server acts as the network traffic controller for Condense:
Manages API access for user interaction with Condense.
Handles communication protocols such as TCP/IP and MQTT(S).
Configures ingress resources and port mappings for API access and device data ingestion.
Facilitates TLS termination for secure communication.
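Condense provisions these ingress resources automatically, but as a rough illustration, the sketch below creates an equivalent ingress with TLS termination via the Kubernetes Python client. All names (namespace, host, service, TLS secret) are hypothetical.

```python
from kubernetes import client, config

# Load local kubeconfig; inside the cluster, use config.load_incluster_config().
config.load_kube_config()

# All names below (namespace, host, service, TLS secret) are illustrative.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="condense-api"),
    spec=client.V1IngressSpec(
        # TLS termination: the certificate/key pair lives in a Kubernetes secret.
        tls=[client.V1IngressTLS(hosts=["api.example.com"],
                                 secret_name="condense-tls-cert")],
        rules=[client.V1IngressRule(
            host="api.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    # Route external traffic to the API service in the cluster.
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="condense-api-svc",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(
    namespace="condense", body=ingress
)
```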
Condense leverages a Redis cluster for data caching and performance optimization.
Stores metering data for various dimensions used in consumption-based billing.
Manages authentication tokens used by Condense services for API interactions.
Acts as a checkpointing mechanism for user-configured pipelines, storing sequence information, metadata, and real-time data like throughput and status.
Stores asset ping information displayed on the Condense dashboard.
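As a minimal sketch of these caching patterns, assuming the redis-py client and hypothetical key names (Condense's actual schema may differ):

```python
import redis

# Hypothetical connection details and key names, for illustration only.
r = redis.Redis(host="redis-cluster", port=6379, decode_responses=True)

# Checkpoint for a user-configured pipeline: sequence info plus live metadata.
r.hset("pipeline:42:checkpoint", mapping={
    "last_sequence": 10957,
    "status": "running",
    "throughput_eps": 1250,
})

# Service auth token cached with a TTL so stale credentials expire on their own.
r.setex("auth:token:transform-svc", 3600, "eyJhbGciOi...")

# Asset ping, kept briefly so the dashboard can show liveness.
r.set("asset:device-77:last_ping", "2024-01-01T12:00:00Z", ex=120)

print(r.hgetall("pipeline:42:checkpoint"))
```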
Condense employs Kafka as a message queue, serving as the backbone for data communication between services after transformations.
Kafka brokers and Zookeeper run within the cluster, ensuring fault tolerance and data integrity.
Operates as a one-way queue, facilitating data progression through various transformation stages.
Event-driven auto scalers monitor consumer group offsets and scale consumers automatically to maintain optimal performance and prevent lag.
Kafka version upgrades are managed by Condense with user consent for seamless adoption of new features and security patches.
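The one-way flow can be pictured as a chain of consume-transform-produce stages. A minimal sketch using the kafka-python client, with hypothetical topic and group names:

```python
from kafka import KafkaConsumer, KafkaProducer

# Hypothetical topic names; each transformation stage reads from the previous
# stage's topic and writes to the next, so data only ever moves forward.
consumer = KafkaConsumer(
    "stage-1.parsed",
    bootstrap_servers="kafka:9092",
    group_id="stage-2-enrichment",
    auto_offset_reset="earliest",
)
producer = KafkaProducer(bootstrap_servers="kafka:9092")

def enrich(payload: bytes) -> bytes:
    # Placeholder for a real transformation.
    return payload.upper()

for record in consumer:
    producer.send("stage-2.enriched", enrich(record.value))
```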
Condense provides a fully managed Kafka service within its Fully Managed BYOC model, ensuring seamless data streaming, high availability, and zero infrastructure complexity. By embedding Kafka into its industry-specific verticalized streaming ecosystem, Condense enables real-time event processing, data communication between services, and scalable pipeline orchestration—all within the customer's cloud environment.
Condense's fully managed Kafka gives enterprises production-grade performance, dynamic scaling, and automated operations, eliminating manual intervention in cluster provisioning, maintenance, scaling, and security updates.
Condense uses Kafka as a high-throughput event streaming system, ensuring fault-tolerant, distributed message queuing between microservices and streaming applications.
Kafka operates as a one-way queue, moving event data seamlessly through various prebuilt transformations, custom transformations, and real-time processing stages before reaching downstream applications.
Kafka ensures service independence, meaning applications can produce and consume data asynchronously without blocking.
Optimized for millions of events per second, ensuring low-latency streaming and high-throughput data processing.
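The asynchronous, non-blocking produce path looks roughly like this with kafka-python; the topic name and payload are illustrative:

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka:9092")

def on_success(metadata):
    print(f"delivered to {metadata.topic}[{metadata.partition}]@{metadata.offset}")

def on_error(exc):
    print(f"delivery failed: {exc}")

# send() is asynchronous: it buffers the record and returns a future
# immediately, so producers never block waiting on consumers.
future = producer.send("telemetry-events", b'{"vehicle_id": "v1", "speed": 72}')
future.add_callback(on_success)
future.add_errback(on_error)

producer.flush()  # block only at shutdown to drain the buffer
```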
Kafka brokers and Zookeeper run within customer-controlled cloud environments, managed by Condense, ensuring high availability and resilience.
Multi-cluster redundancy and automatic replication prevent data loss and ensure continuous streaming even in the event of node failures.
Kafka maintains strong replication and partitioning strategies, ensuring data consistency across distributed environments.
Condense handles Kafka version upgrades with user consent, allowing smooth adoption of new features and security patches without disrupting operations.
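Fault tolerance of this kind ultimately rests on topic-level replication settings. A sketch of how such a topic could be declared with kafka-python's admin client, using illustrative values rather than Condense defaults:

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="kafka:9092")

# Hypothetical topic: 3 replicas spread across brokers (ideally in different
# availability zones), and at least 2 in-sync replicas required before a
# write is acknowledged, so a single broker failure loses no data.
admin.create_topics([
    NewTopic(
        name="telemetry-events",
        num_partitions=6,
        replication_factor=3,
        topic_configs={"min.insync.replicas": "2"},
    )
])
```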
Kafka dynamically scales consumer groups based on event throughput, preventing backlogs and maintaining near-real-time processing.
Event-driven auto scalers optimize cluster performance by monitoring consumer group offsets and scaling consumers dynamically, ensuring Kafka maintains low-latency event processing at scale.
Kafka automatically redistributes partitions among consumers during scaling, eliminating bottlenecks.
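A consumer can observe these partition redistributions through a rebalance listener. A minimal kafka-python sketch, again with hypothetical topic and group names:

```python
from kafka import KafkaConsumer, ConsumerRebalanceListener

class LogRebalance(ConsumerRebalanceListener):
    # Kafka calls these hooks whenever partitions are redistributed across
    # the consumer group, e.g. when an autoscaler adds or removes a consumer.
    def on_partitions_revoked(self, revoked):
        print(f"giving up: {revoked}")

    def on_partitions_assigned(self, assigned):
        print(f"now owning: {assigned}")

consumer = KafkaConsumer(
    bootstrap_servers="kafka:9092",
    group_id="stage-2-enrichment",
)
consumer.subscribe(topics=["stage-1.parsed"], listener=LogRebalance())

for record in consumer:
    pass  # process as usual; rebalances are handled transparently
```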
Kafka clusters are deployed across multiple availability zones within customer cloud environments, preventing downtime due to infrastructure failures.
If a Kafka broker fails, ZooKeeper coordinates the election of new partition leaders, ensuring data availability and seamless failover.
Kafka persists event logs based on configurable retention policies, preventing data loss and ensuring replayability.
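Retention and replay are both per-topic concerns. The sketch below sets an illustrative 7-day retention policy and rewinds a consumer to the start of the retained log, using kafka-python; none of these values reflect Condense's actual defaults:

```python
from kafka import KafkaConsumer, TopicPartition
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

# Retention is a time- or size-based policy per topic; 7 days here is
# an illustrative value, not a Condense default.
admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
admin.alter_configs([
    ConfigResource(ConfigResourceType.TOPIC, "telemetry-events",
                   configs={"retention.ms": str(7 * 24 * 3600 * 1000)})
])

# Replay: a fresh consumer can rewind to the start of the retained log.
consumer = KafkaConsumer(bootstrap_servers="kafka:9092")
tp = TopicPartition("telemetry-events", 0)
consumer.assign([tp])
consumer.seek_to_beginning(tp)
```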
Condense ensures end-to-end encryption for Kafka data in transit (TLS) and at rest (AES-256).
Fine-grained permission management ensures that only authorized applications and users can interact with Kafka topics.
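On the client side, in-transit encryption means connecting over TLS. A kafka-python sketch with illustrative certificate paths (at-rest AES-256 encryption happens at the storage layer and is not visible to clients):

```python
from kafka import KafkaProducer

# A client connecting over TLS; certificate paths are illustrative.
# Broker-side ACLs (not shown) then restrict which principals may
# read or write each topic.
producer = KafkaProducer(
    bootstrap_servers="kafka:9093",
    security_protocol="SSL",
    ssl_cafile="/etc/condense/certs/ca.pem",
    ssl_certfile="/etc/condense/certs/client.pem",
    ssl_keyfile="/etc/condense/certs/client.key",
)
producer.send("telemetry-events", b"encrypted-in-transit")
producer.flush()
```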
Built-in observability tools provide real-time monitoring of Kafka performance, including throughput, latency, and partition distribution.
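One core metric such tooling tracks is consumer lag: the gap between the newest broker offset and a consumer's current position. It can be computed directly, as in this kafka-python sketch with hypothetical names:

```python
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="kafka:9092",
    group_id="stage-2-enrichment",
)
tp = TopicPartition("telemetry-events", 0)
consumer.assign([tp])

# Lag = newest offset on the broker minus this consumer's position.
end = consumer.end_offsets([tp])[tp]
lag = end - consumer.position(tp)
print(f"partition 0 lag: {lag} records")
```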
Kafka operates within customer-controlled BYOC environments, supporting GDPR, HIPAA, and SOC 2 compliance.
Also read our article on Redis: Harnessing the Power of In-Memory Data Storage.