Split Utility
The Split Utility in Kafka Streams streamlines data processing by categorising and routing data according to user-defined conditions. It directs each message to a primary or secondary topic, ensuring efficient, real-time processing.
Automatically routes data to either primary or secondary topics based on conditional evaluation
Applies filtering logic to streaming data without coding or manual intervention
Modify routing rules without disrupting data flow or redeploying pipelines
Pre-validate data structures to ensure compatibility before deployment
Configure complex stream splitting logic through an intuitive UI without writing code
Define data routing policies in clear, business-oriented terms by writing splitting rules
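The routing behaviour described above can be sketched as follows. This is a hypothetical illustration of the semantics, not the utility's actual implementation; the topic names and the condition are made up for the example.

```python
# Hypothetical sketch of the Split Utility's routing semantics:
# messages that satisfy the configured condition go to the primary
# output topic, and all other messages go to the secondary output topic.
def route(message: dict, condition) -> str:
    """Return the (illustrative) destination topic for a message."""
    return "primary-topic" if condition(message) else "secondary-topic"


# Example condition: route speeding vehicles to the primary topic.
print(route({"speed": 95}, lambda m: m["speed"] > 80))  # primary-topic
print(route({"speed": 40}, lambda m: m["speed"] > 80))  # secondary-topic
```

Note that every message is delivered to exactly one of the two topics; nothing is dropped.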
Log in to your Condense account
From the main menu, select Connectors
Click on Split Utility from the Connectors page
Click "Choose a Topic" under Read Topic; the schema fields for the Split Utility configuration are auto-populated from the selected topic.
Alternatively, click "Upload", provide a JSON sample, and validate it; the schema fields are then auto-populated from the uploaded JSON.
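For the upload path, the utility derives selectable field names from the keys of the JSON sample you provide. The payload below is a hypothetical example; a quick local parse mirrors the upload-time validation step, since malformed JSON would be rejected.

```python
import json

# Hypothetical JSON sample of the kind you might upload; the utility
# would derive selectable schema fields from its top-level keys.
sample = '{"vehicle_id": "TRK-101", "speed": 72.5, "fuel_level": 40, "ignition": true}'

# Basic local validation before uploading: json.loads raises an error
# on malformed input, so a successful parse means the sample is valid JSON.
fields = sorted(json.loads(sample).keys())
print(fields)  # ['fuel_level', 'ignition', 'speed', 'vehicle_id']
```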
Enter a descriptive Utility Name for your split configuration
Select or specify the Input Topic that will provide data to the utility
Define a unique Primary Output Topic for condition-satisfied data
Define a unique Secondary Output Topic for condition-rejected data
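Taken together, the four settings above might look like the sketch below. The key names and topic names are illustrative, not the platform's actual configuration schema; the one hard constraint (stated in the FAQ) is that the input, primary, and secondary topics must all differ.

```python
# Hypothetical configuration shape for the fields above; all names are
# illustrative, not the platform's actual schema.
config = {
    "utility_name": "high-speed-splitter",
    "input_topic": "vehicle-telemetry",
    "primary_output_topic": "speeding-events",    # condition-satisfied data
    "secondary_output_topic": "normal-events",    # condition-rejected data
}

# The three topics must be distinct to prevent processing loops.
topics = {config["input_topic"],
          config["primary_output_topic"],
          config["secondary_output_topic"]}
assert len(topics) == 3
print("topics are distinct")
```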
Click Add Rule
For each rule:
Field Name: Select from the auto-populated list (do not type field names manually)
Condition: Set operator (>, <, =) and threshold
Grouping Rules - Click "Add Group" to combine multiple rules. Rules follow logical evaluation order.
Real-Time Rule Preview - The top preview bar shows your current rule structure for split (Updates instantly as you add/change rules)
Check the preview bar to verify:
Correct grouping with parentheses
Logical operator priority matches your intent
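A grouped rule of the kind shown in the preview bar can be modelled as a small tree, where parentheses correspond to groups. The sketch below is an assumed representation for illustration only: `op`, `rules`, `field`, `cmp`, and `value` are made-up names, and the expression evaluated is `(speed > 80 AND fuel_level < 20) OR ignition = False`.

```python
# Hypothetical tree for the grouped rule shown in a preview bar:
# (speed > 80 AND fuel_level < 20) OR ignition = False
tree = {"op": "OR", "rules": [
    {"op": "AND", "rules": [
        {"field": "speed", "cmp": ">", "value": 80},
        {"field": "fuel_level", "cmp": "<", "value": 20},
    ]},
    {"field": "ignition", "cmp": "=", "value": False},
]}


def evaluate(node: dict, message: dict) -> bool:
    """Evaluate a rule tree against a message; groups apply AND/OR."""
    if "op" in node:  # a group: combine child results with AND or OR
        results = (evaluate(child, message) for child in node["rules"])
        return all(results) if node["op"] == "AND" else any(results)
    value = message[node["field"]]  # a leaf: single comparison
    return {">": value > node["value"],
            "<": value < node["value"],
            "=": value == node["value"]}[node["cmp"]]


print(evaluate(tree, {"speed": 90, "fuel_level": 15, "ignition": True}))   # True
print(evaluate(tree, {"speed": 90, "fuel_level": 30, "ignition": True}))   # False
```

The explicit grouping is what the parentheses in the preview bar capture: without the group, `AND`/`OR` priority could route messages differently than intended.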
Click Deploy Utility
Go to the Pipelines page to monitor the deployed split utility.
Navigate to the Pipelines page
Select the deployed Split Utility you wish to modify
Click Edit Configurations to access settings
Update rule parameters as needed
Click Deploy Utility to apply changes
Navigate to the Pipelines page
Select the deployed Split Utility you wish to remove
Click the Delete Utility button
Confirm deletion in the prompt window
Q: What does the Split Utility do?
A: The Split Utility routes Kafka data into primary and secondary output topics based on user-defined conditions.
Q: How does the Split Utility process messages?
A: The utility evaluates JSON messages against configured rules and conditions. Messages meeting the criteria are sent to the primary output topic; those failing the criteria go to the secondary output topic.
Q: Does the Split Utility support data formats other than JSON?
A: No, the Split Utility specifically processes JSON-formatted data. Other formats are not supported in the current implementation.
Q: Can the input topic and output topics be the same?
A: No, the input topic, primary output topic, and secondary output topic must all be different to prevent processing loops and ensure proper data flow.
Q: In what order are rules evaluated?
A: Rules are evaluated in the order they are defined. Once a message satisfies a rule, it is immediately routed to the primary output topic.
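The ordering behaviour described in the last answer can be sketched as a first-match loop. This is an assumed illustration of the stated semantics, with made-up field names: rules are checked in definition order, and the first satisfied rule sends the message to the primary topic.

```python
# Comparison operators supported by the utility, per the rule editor.
OPS = {">": lambda a, b: a > b,
       "<": lambda a, b: a < b,
       "=": lambda a, b: a == b}


def route(message: dict, rules: list) -> str:
    """Hypothetical first-match routing: rules checked in defined order."""
    for field, op, threshold in rules:  # evaluated in definition order
        if field in message and OPS[op](message[field], threshold):
            return "primary"            # first satisfied rule wins
    return "secondary"                  # no rule satisfied


rules = [("speed", ">", 80), ("fuel_level", "<", 10)]
print(route({"speed": 85, "fuel_level": 50}, rules))  # primary (first rule)
print(route({"speed": 60, "fuel_level": 50}, rules))  # secondary
```

Because evaluation short-circuits on the first match, placing the most common condition first can reduce per-message work.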