# Split Utility

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/hdLIIGsYv79mA4BBEcR2/image.png" alt=""><figcaption></figcaption></figure></div>

## Overview

The Split Utility in Kafka Streams streamlines data processing by categorising and routing messages based on user-defined conditions. It directs each message to either a primary or a secondary topic, ensuring efficient, real-time processing.

## **Key Features** <a href="#id-2.-key-features" id="id-2.-key-features"></a>

### **Intelligent Data Routing**

* Automatically routes data to either primary or secondary topics based on conditional evaluation
* Applies filtering logic to streaming data without coding or manual intervention
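Conceptually, the routing behaviour can be pictured as a simple condition check per message. The sketch below is illustrative only: the topic names, message fields, and rule are hypothetical placeholders, and the actual evaluation happens inside the no-code utility, not in user code.

```python
import json

def route_message(message: dict, condition) -> str:
    """Return the destination topic for a message based on a condition.

    'primary-topic' and 'secondary-topic' are placeholder names; the
    real destinations are whatever you configure in the Split Utility.
    """
    return "primary-topic" if condition(message) else "secondary-topic"

# Hypothetical telemetry message and rule: route high-speed readings
# to the primary topic, everything else to the secondary topic.
message = json.loads('{"vehicle_id": "V-102", "speed": 92}')
destination = route_message(message, lambda m: m["speed"] > 80)
print(destination)
```

Messages that satisfy the configured condition land on the primary topic; all others go to the secondary topic.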

### **Operational Flexibility**

* Modify routing rules without disrupting data flow or redeploying pipelines
* Pre-validate data structures to ensure compatibility before deployment

### **Zero-Code Implementation**

* Configure complex stream splitting logic through an intuitive UI without writing code
* Define data routing policies using a clear, business-oriented approach by defining rules for splitting

## **How to Use the Split Utility** <a href="#id-3.-how-to-use-the-split-utility" id="id-3.-how-to-use-the-split-utility"></a>

### **Step 1: Access the Split Utility**

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/LDZaL9mNZN9pJqahhfSE/image.png" alt=""><figcaption></figcaption></figure></div>

1. Log in to your Condense account
2. From the main menu, select **Transforms**
3. Click on **Split Utility** from the Transforms page

### **Step 2: Choose Configuration Method**

#### **Option A: Auto-Fill from Topic** <mark style="color:blue;">**(Recommended)**</mark>

Click **Choose a Topic** under *Read Topic*. The utility auto-populates the schema fields used to configure the Split Utility.

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/aSwWzGyErspGKxcuIc8J/image.png" alt=""><figcaption></figcaption></figure></div>

{% hint style="info" %}
If the topic is empty, you will not be able to set up the Split Utility configuration.
{% endhint %}

#### **Option B: Manual JSON Upload**

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/eOsJ8B53P9qbu7jgwHr7/image.png" alt=""><figcaption></figcaption></figure></div>

Click **Upload**, provide your JSON, and validate it. Once validated, the schema fields for configuring the Split Utility are auto-populated.

{% hint style="info" %}
The uploaded JSON must exactly match your Kafka topic's schema.
{% endhint %}
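A quick local check can catch malformed JSON before uploading. The sample payload below is purely illustrative; your field names and types must mirror the actual topic schema.

```python
import json

# Hypothetical sample message matching a Kafka topic's schema.
# Field names here are illustrative; yours must mirror the real topic.
sample = """
{
  "vehicle_id": "V-102",
  "speed": 72,
  "fuel_level": 35.5,
  "ignition": true
}
"""

record = json.loads(sample)   # raises json.JSONDecodeError if invalid
print(sorted(record.keys()))  # the fields the utility will auto-populate
```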

### **Step 3: Set Up the Topic Configuration**

1. Enter a descriptive **Utility Name** for your split configuration
2. Select or specify the **Input Topic** that will provide data to the utility
3. Define a unique **Primary Output Topic** for condition-satisfied data
4. Define a unique **Secondary Output Topic** for condition-rejected data

### **Step 4: Configure Rules for Split**

1. Click **Add Rule**
2. For each rule:
   * **Field Name**: Select from the auto-populated list (do not type field names manually)
   * **Condition**: Choose an operator (>, <, =) and a threshold value
3. **Grouping Rules**: Click **Add Group** to combine multiple rules. Grouped rules follow logical evaluation order.
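To make the grouping semantics concrete, here is a minimal sketch assuming rules within a group combine with AND and groups combine with OR; the exact semantics depend on the operators you configure in the UI, and all field names and thresholds below are hypothetical.

```python
import operator

# Map the UI's operators to Python comparisons.
OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def rule_matches(message, field, op, threshold):
    return OPS[op](message[field], threshold)

def group_matches(message, rules):
    # Assumption: all rules in a group must hold (logical AND).
    return all(rule_matches(message, f, op, t) for f, op, t in rules)

def split(message, groups):
    # Assumption: any matching group routes to primary (logical OR).
    return "primary" if any(group_matches(message, g) for g in groups) else "secondary"

# (speed > 80 AND fuel_level < 20) OR (ignition = 1)
groups = [
    [("speed", ">", 80), ("fuel_level", "<", 20)],
    [("ignition", "=", 1)],
]
print(split({"speed": 90, "fuel_level": 10, "ignition": 0}, groups))
```

This mirrors how the preview bar renders the rule structure with parentheses around each group.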

### **Step 5: Deploy**

1. **Real-Time Rule Preview**: The top preview bar shows your current rule structure for the split and updates instantly as you add or change rules. Check the preview bar to verify that:
   * Grouping is correct, with parentheses where intended
   * Logical operator priority matches your intent
2. Click **Deploy Utility**
3. Go to **Pipelines** to monitor the deployed Split Utility.

## **Editing a Split Configuration** <a href="#id-4.-editing-a-split-configuration" id="id-4.-editing-a-split-configuration"></a>

1. Navigate to the **Pipelines** page

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/ZUHq3gE1TAV0Xl7XS3im/image.png" alt=""><figcaption></figcaption></figure></div>

2. Select the deployed Split Utility you wish to modify
3. Click **Edit Configurations** to access settings

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/G7hXPOsQsqiFKHvwk3Kh/image.png" alt=""><figcaption></figcaption></figure></div>

4. Update rule parameters as needed

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/j8XdLwSs7Pxl2qUJ3WnA/image.png" alt=""><figcaption></figcaption></figure></div>

5. Click **Deploy Utility** to apply changes

## **Deleting a Split Configuration** <a href="#id-5.-deleting-a-split-configuration" id="id-5.-deleting-a-split-configuration"></a>

1. Navigate to the **Pipelines** page
2. Select the deployed Split Utility you wish to remove
3. Click the **Delete Utility** button

<div data-with-frame="true"><figure><img src="https://content.gitbook.com/content/rwKRGO3QthZ6EMqqYblg/blobs/1UMlTAkLlZdZ6q2wkEzq/image.png" alt=""><figcaption></figcaption></figure></div>

4. Confirm deletion in the prompt window

## Frequently Asked Questions (FAQs)

#### **Q1: What is the primary purpose of the Split Utility?**

A: The Split Utility routes Kafka data into primary and secondary output topics based on user-defined conditions.

#### **Q2: How does the Split Utility determine where to route messages?**

A: The utility evaluates JSON messages against configured rules and conditions. Messages meeting the criteria are sent to the primary output topic; those failing the criteria go to the secondary output topic.

#### **Q3: Can I use the Split Utility with non-JSON data?**

A: No, the Split Utility specifically processes JSON-formatted data. Other formats are not supported in the current implementation.

#### **Q4: Can I use the same topic for input and output?**

A: No, the input topic, primary output topic, and secondary output topic must all be different to prevent processing loops and ensure proper data flow.

#### **Q5: What happens if I configure conflicting or overlapping rules?**

A: Rules are evaluated in the order they are defined. Once a message satisfies a rule, it is immediately routed to the primary output topic.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.zeliot.in/condense/v2.4.0/utilities/split-utility.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
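A question can be URL-encoded into the `ask` parameter with the standard library. The sample question below is hypothetical; performing the actual GET (for example with `urllib.request`) requires network access, so this sketch only builds the request URL.

```python
from urllib.parse import urlencode

# Base URL of this documentation page, as given above.
base = "https://docs.zeliot.in/condense/v2.4.0/utilities/split-utility.md"

# A hypothetical, self-contained question in natural language.
question = "Can the Split Utility route data to more than two output topics?"

url = f"{base}?{urlencode({'ask': question})}"
print(url)
```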
