# Components and Services within the Kubernetes Cluster

{% hint style="info" %}
**First Time User? Free Credits on us!**

Get up to $200 worth of free credits when you sign up for the first time. Use this link to create a new account on Condense and claim your free credits!

[<mark style="background-color:purple;">https://www.zeliot.in/try-now</mark>](https://www.zeliot.in/try-now)
{% endhint %}

The heart of Condense lies within the Kubernetes cluster, where various services collaborate to process data:&#x20;

### Nginx&#x20;

This versatile web server acts as the network traffic controller for Condense.&#x20;

* Manages API access for user interaction with Condense.&#x20;
* Handles communication protocols such as TCP/IP and MQTT(S).&#x20;
* Configures ingress resources and port mappings for API access and device data ingestion.&#x20;
* Facilitates TLS termination for secure communication.&#x20;
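The ingress and TLS-termination duties above can be sketched with a standard Kubernetes Ingress resource. This is an illustrative example only; the hostname, secret name, service name, and port are assumptions, not Condense defaults.

```yaml
# Hypothetical Ingress for the Condense API. Hostname, secret, and
# service names are illustrative, not actual Condense resources.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: condense-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example-condense.io
      secretName: condense-api-tls   # TLS terminated here by Nginx
  rules:
    - host: api.example-condense.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: condense-api-svc
                port:
                  number: 8080
```

Terminating TLS at the ingress keeps certificate management in one place, so backend services can speak plain HTTP inside the cluster.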

### Redis&#x20;

Condense leverages a Redis cluster for data caching and performance optimization. &#x20;

* Stores metering data for various dimensions used in consumption-based billing.&#x20;
* Manages authentication tokens used by Condense services for API interactions.&#x20;
* Acts as a checkpointing mechanism for user-configured pipelines, storing sequence information, metadata, and real-time data like throughput and status.&#x20;
* Stores asset ping information displayed on the Condense dashboard.&#x20;
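As a rough sketch of the checkpointing role described above, a pipeline checkpoint could be stored in Redis as a key/JSON-value pair. The key layout and field names here are illustrative assumptions, not Condense's actual schema.

```python
import json
import time

def make_checkpoint(pipeline_id: str, sequence: int,
                    throughput: float, status: str) -> tuple[str, str]:
    """Build a Redis key and JSON value for a pipeline checkpoint.

    Key layout and field names are illustrative assumptions, not
    Condense's actual schema.
    """
    key = f"pipeline:{pipeline_id}:checkpoint"
    value = json.dumps({
        "sequence": sequence,          # last processed sequence number
        "throughput_eps": throughput,  # events per second
        "status": status,              # e.g. "RUNNING", "PAUSED"
        "updated_at": int(time.time()),
    })
    return key, value

# In a real deployment this pair would be written to the Redis
# cluster, e.g. with redis.Redis(...).set(key, value).
key, value = make_checkpoint("telemetry-ingest", 10452, 870.5, "RUNNING")
```

Keeping the value as a single JSON blob makes the checkpoint atomic to read and write; a hash could be used instead if fields are updated independently.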

> Also read our article on Redis - Harnessing the Power of In-Memory Data Storage here: [<mark style="background-color:purple;">https://www.zeliot.in/blog/redis-harnessing-the-power-of-in-memory-data-storage</mark>](https://www.zeliot.in/blog/redis-harnessing-the-power-of-in-memory-data-storage)

### Kafka&#x20;

Condense employs Kafka as a message queue, serving as the backbone for data communication between services after transformations. &#x20;

* Kafka brokers and Zookeeper run within the cluster, ensuring fault tolerance and data integrity.&#x20;
* Operates as a one-way queue, facilitating data progression through various transformation stages.&#x20;
* Event-driven autoscalers monitor consumer group lag and scale consumers automatically, maintaining optimal performance and preventing backlogs.&#x20;
* Kafka version upgrades are managed by Condense with user consent for seamless adoption of new features and security patches.&#x20;

## Condense inherently deploys with a Fully Managed Kafka

<figure><img src="https://3716651141-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FrwKRGO3QthZ6EMqqYblg%2Fuploads%2FCngEpDKnbEn9sLAna2Ym%2FFtCondVec1.png?alt=media&#x26;token=3688d9e1-cbe0-4ea2-9ff0-f3ac4fd6884a" alt=""><figcaption></figcaption></figure>

Condense provides a fully managed Kafka service within its Fully Managed BYOC model, ensuring seamless data streaming, high availability, and zero infrastructure complexity. By embedding Kafka into its industry-specific verticalized streaming ecosystem, Condense enables real-time event processing, data communication between services, and scalable pipeline orchestration—all within the customer's cloud environment.&#x20;

With Condense's fully managed Kafka, enterprises get enterprise-grade performance, dynamic scaling, and automated operational management, eliminating the need for manual intervention in cluster provisioning, maintenance, scaling, and security updates.&#x20;

### Kafka as the Messaging Backbone

#### Event-Driven Data Streaming

Condense uses Kafka as a high-throughput event streaming system, ensuring fault-tolerant, distributed message queuing between microservices and streaming applications.&#x20;

#### One-Way Queue Model&#x20;

Kafka operates as a one-way queue, moving event data seamlessly through various prebuilt transformations, custom transformations, and real-time processing stages before reaching downstream applications.&#x20;
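The one-way progression described above can be modeled as a chain of queues, where each stage only consumes from upstream and produces downstream. The stage names and transformations below are illustrative, not Condense built-ins.

```python
from queue import Queue

def run_stage(inbox: Queue, transform) -> Queue:
    """Drain one topic-like queue, apply a transformation, and emit
    results onto a fresh downstream queue (data only moves forward)."""
    outbox = Queue()
    while not inbox.empty():
        outbox.put(transform(inbox.get()))
    return outbox

# Illustrative vehicle-telemetry events; not a Condense data format.
raw = Queue()
for event in [{"speed_kmh": 72}, {"speed_kmh": 131}]:
    raw.put(event)

# Stage 1: enrich each event with an overspeed flag.
parsed = run_stage(raw, lambda e: {**e, "overspeed": e["speed_kmh"] > 120})
# Stage 2: a downstream consumer; it never pushes data back upstream.
alerts = run_stage(parsed, lambda e: e)

results = []
while not alerts.empty():
    results.append(alerts.get())
```

Because every stage owns its own output topic, stages can be added, removed, or scaled without upstream producers noticing.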

#### Decoupled Service Communication

Kafka ensures service independence, meaning applications can produce and consume data asynchronously without blocking.&#x20;

#### High-Performance Streaming

Optimized for millions of events per second, ensuring low-latency streaming and high-throughput data processing.&#x20;

### Cluster Management: Kafka Brokers and Zookeeper

#### Fully Managed Kafka Cluster

Kafka brokers and Zookeeper run within customer-controlled cloud environments, managed by Condense, ensuring high availability and resilience.&#x20;

#### Fault-Tolerant Architecture

Multi-cluster redundancy and automatic replication prevent data loss and ensure continuous streaming even in the event of node failures.&#x20;

#### Data Integrity & Consistency

Kafka maintains strong replication and partitioning strategies, ensuring data consistency across distributed environments.&#x20;

#### Seamless Kafka Version Upgrades

Condense handles Kafka version upgrades with user consent, allowing smooth adoption of new features and security patches without disrupting operations.&#x20;

### Automated Scaling with Event-Driven Auto-Scalers

#### Consumer Group Auto-Balancing

Kafka dynamically scales consumer groups based on event throughput, preventing backlogs and maintaining near-real-time processing.&#x20;

#### Adaptive Resource Scaling

Event-driven auto scalers optimize cluster performance by scaling consumers dynamically in response to consumer group lag, ensuring Kafka maintains low-latency event processing at scale.&#x20;
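A lag-based sizing rule similar to what event-driven autoscalers (such as KEDA's Kafka scaler) apply can be sketched as follows. The threshold and replica bounds are illustrative assumptions, not Condense defaults.

```python
import math

def desired_consumers(total_lag: int, lag_per_consumer: int,
                      min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Size a consumer group from its lag: one consumer per
    `lag_per_consumer` pending events, clamped to a replica range.
    Threshold and bounds are illustrative, not Condense defaults."""
    if total_lag <= 0:
        return min_replicas
    needed = math.ceil(total_lag / lag_per_consumer)
    return max(min_replicas, min(max_replicas, needed))
```

In practice the replica count is also capped by the topic's partition count, since extra consumers beyond that sit idle.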

#### Intelligent Partition Rebalancing

Kafka automatically redistributes partitions among consumers during scaling, eliminating bottlenecks.&#x20;

### Reliability and High Availability (99.95% Uptime SLA)&#x20;

#### Multi-Zone Deployment

Kafka clusters are deployed across multiple availability zones within customer cloud environments, preventing downtime due to infrastructure failures.&#x20;

#### Automatic Failover & Leader Election

If a Kafka broker fails, Zookeeper coordinates the election of new partition leaders, ensuring data availability and seamless failover.&#x20;

#### Log Retention & Durability

Kafka persists event logs based on configurable retention policies, preventing data loss and ensuring replayability.&#x20;
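The configurable retention policies mentioned above map onto standard Kafka per-topic settings. The config keys below are standard Kafka topic configs; the values are examples, not Condense defaults.

```properties
# Illustrative per-topic retention settings (standard Kafka configs;
# values are examples, not Condense defaults).
retention.ms=604800000        # keep events for 7 days
retention.bytes=1073741824    # or until the partition reaches 1 GiB
cleanup.policy=delete         # delete expired segments (vs. "compact")
```

Longer retention windows increase replayability for downstream consumers at the cost of additional disk usage per partition.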

### Security, Compliance, and Observability&#x20;

#### Data Encryption

Condense ensures end-to-end encryption for Kafka data in transit (TLS) and at rest (AES-256).&#x20;

#### Role-Based Access Control (RBAC)

Fine-grained permission management ensures that only authorized applications and users can interact with Kafka topics.&#x20;

#### Audit Logging & Monitoring

Built-in observability tools provide real-time monitoring of Kafka performance, including throughput, latency, and partition distribution.&#x20;

#### Enterprise Compliance

Kafka operates within customer-controlled BYOC environments, supporting compliance with GDPR, HIPAA, and SOC 2.&#x20;
