Build your First Pipeline

A Pipeline in Condense is the visual and functional representation of data flow between deployed connectors, transforms, and utilities inside a workspace. It is materialized automatically when a connector or transform is deployed; no manual creation is needed.


Before you begin, make sure you have:

  • A workspace created and configured

  • At least one environment linked to the workspace

  • Workspace Admin or Maintainer role

What You'll Build

[ Input Connector ]  →  [ Kafka Topic ]  →  [ Transform ]  →  [ Output Connector ]
   (Data Source)            (Stream)          (Process)          (Destination)

By the end of this guide, you will have a live pipeline ingesting data from a source, processing it through a transform, and delivering it to a destination, all inside your Condense workspace.
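The diagram above can be sketched as plain function composition. This is an illustrative, in-memory simulation only, not Condense code: the stage names (`ingest`, `transform`, `deliver`) and the message shape are hypothetical, and in a real pipeline each arrow is a Kafka topic rather than a Python call.

```python
# Illustrative only: an in-memory sketch of the four pipeline stages.
# In Condense, each arrow between stages is a Kafka topic, not a call.

def ingest(raw_records):
    """Input Connector: pull records from a source, publish to a topic."""
    return [{"value": r} for r in raw_records]

def transform(messages):
    """Transform: process each message as it flows through the stream."""
    return [{"value": m["value"].upper()} for m in messages]

def deliver(messages):
    """Output Connector: write processed messages to a destination."""
    return [m["value"] for m in messages]

# Data flows left to right, exactly as in the diagram:
result = deliver(transform(ingest(["hello", "world"])))
print(result)  # ['HELLO', 'WORLD']
```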


Step 1 - Deploy an Input Connector

An Input Connector receives data from an external system and injects it into your pipeline.

  1. Navigate to your Workspace → click Connectors in the left sidebar

  2. Click + Add Connector

  3. Select Input as the connector role

  4. Choose your connector type (e.g. Kafka, HTTP, Teltonika, Pub/Sub)

  5. Fill in the required configuration fields:

    • Title - a unique name for this connector

    • Output Topic - the Kafka topic this connector will publish data to

    • Environment Variables - credentials, endpoints, or protocol settings

  6. Click Deploy
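The configuration fields above fit together roughly as shown below. The field names and values are hypothetical, chosen only to illustrate how Title, Output Topic, and Environment Variables relate; the actual form fields are whatever the Condense UI presents for your chosen connector type.

```python
# Hypothetical connector configuration, for illustration only.
input_connector = {
    "title": "fleet-http-ingest",      # unique name for this connector
    "output_topic": "raw-events",      # topic the connector publishes to
    "env": {                           # credentials, endpoints, protocol
        "SOURCE_URL": "https://example.com/stream",
        "AUTH_TOKEN": "<secret>",
    },
}

# Title and Output Topic are required before Deploy will succeed:
required = ("title", "output_topic")
assert all(input_connector.get(k) for k in required)
```

Note the `output_topic` value: downstream blocks will reference it as their Input Topic, so pick a name you can recognize later on the canvas.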


Step 2 - Add a Transform

A Transform processes messages as they flow between your Input and Output connectors.

Option A - Prebuilt Transform

  1. On the Pipeline canvas, click + Add Transform

  2. Select Prebuilt

  3. Choose a transform type:

    • Geofence - location-based event triggering

    • Alert Utility - trigger alerts when a condition is met

    • Split Utility - route messages based on rules

  4. Configure the required parameters:

    • Input Topic - must match the Output Topic from your Input Connector

    • Output Topic - the topic the transform will publish results to

  5. Click Deploy
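Conceptually, a Geofence transform tags each message with whether its location falls inside a configured fence. The sketch below is illustrative only, assuming a circular fence and a message carrying `lat`/`lon` fields; the actual parameters are whatever the prebuilt transform's configuration form asks for.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical circular fence: centre point plus radius.
FENCE = {"lat": 12.9716, "lon": 77.5946, "radius_km": 5.0}

def geofence(message):
    """Tag the message with whether it lies inside the fence."""
    dist = haversine_km(message["lat"], message["lon"],
                        FENCE["lat"], FENCE["lon"])
    return {**message, "inside_fence": dist <= FENCE["radius_km"]}

print(geofence({"lat": 12.9720, "lon": 77.5950}))  # inside_fence: True
```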

Option B - Custom Transform

  1. Go to Applications → create or select an existing application

  2. Build, test, and publish the application

  3. Back on the Pipeline canvas, click + Add Transform and select Custom

  4. Select your published application and version

  5. Configure input/output topics and environment variables

  6. Click Deploy
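At its core, a custom transform application is a function from an input message to zero or more output messages; Condense handles the Kafka consume/produce plumbing around it. A minimal sketch of that core, with the plumbing omitted and illustrative field names (an overspeed alert is just an example use case):

```python
import json

def process(raw: bytes) -> list:
    """Turn one input message into zero or more output messages."""
    event = json.loads(raw)
    if event.get("speed_kmph", 0) <= 120:   # drop normal readings
        return []
    alert = {
        "vehicle": event["vehicle"],
        "alert": "overspeed",
        "speed_kmph": event["speed_kmph"],
    }
    return [json.dumps(alert).encode()]

# One overspeed reading in, one alert message out:
out = process(b'{"vehicle": "KA-01", "speed_kmph": 132}')
```

In deployment, messages arrive from the configured Input Topic and the returned messages are published to the Output Topic.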


Step 3 - Deploy an Output Connector

An Output Connector takes the processed data and delivers it to an external destination.

  1. Navigate to Connectors → click + Add Connector

  2. Select Output as the connector role

  3. Choose your connector type (e.g. PostgreSQL, BigQuery, S3, Kafka)

  4. Fill in the required configuration fields:

    • Title - a unique name for this connector

    • Input Topic - must match the Output Topic from your Transform

    • Environment Variables - destination credentials and settings

  5. Click Deploy


Step 4 - View Your Pipeline

Click Pipeline in the left sidebar. You will see all deployed blocks connected by lines:

Pipeline Canvas Showing Connected Blocks

Clicking any block opens its Component Detail Panel showing the connector name, category, type, status, and Kafka topic information:

Node Details Panel

Pipelines are materialized automatically: they appear on the canvas as soon as connectors and transforms are deployed. You do not manually create a pipeline.


Step 5 - Monitor Your Pipeline

Real-Time Metrics

Each block displays live CPU and memory utilization:

Real-Time Resource Utilization Metrics

Colour-coded status indicators let you instantly identify Running, Stopped, or Error states:

Colour-Coded Pipeline Status Indicators

Logs and Debugging

Access detailed real-time logs for each block with customizable streaming intervals (5 seconds by default):

Logs and Debugging Panel

Editing Your Pipeline

If you need to fix a misconfiguration after deployment, use Edit Configurations:

Edit Configurations Panel
  1. Click the block you want to edit on the Pipeline canvas

  2. Click Edit Configuration

  3. Update the required fields

  4. Click Save Changes


Roles and Permissions

| Operation | Admin | Maintainer | Developer | Viewer |
| --- | --- | --- | --- | --- |
| Deploy pre-built connectors | ✓ | ✓ | | |
| Deploy custom connectors/transforms | ✓ | ✓ | | |
| Configure deployed components | ✓ | ✓ | | |
| Delete deployed components | ✓ | ✓ | | |
| View pipelines and connections | ✓ | ✓ | ✓ | ✓ |
| View component logs/configuration | ✓ | ✓ | ✓ | ✓ |

Best Practices

  • Use clear topic naming to make the pipeline canvas self-explanatory

  • Group related connectors and transforms logically to simplify understanding

  • Avoid unused topic links to reduce visual clutter

  • Regularly check component logs to catch issues early

  • Document the pipeline purpose in the workspace description


Common Pitfalls

  • Misaligned Topic Names - Double-check that Output Topic of one block exactly matches the Input Topic of the next block

  • Unused Components Left Running - Remove or disable unused connectors to prevent unnecessary resource usage

  • Overcrowded Canvas - Use multiple workspaces when a single pipeline becomes too dense
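The first pitfall is easy to catch with a quick sanity check before deploying: in a linear pipeline, every block's Input Topic must exactly match the previous block's Output Topic. The block names and topics below are illustrative.

```python
# Hypothetical linear pipeline description, for illustration only.
blocks = [
    {"name": "http-ingest",   "output_topic": "raw-events"},
    {"name": "overspeed",     "input_topic": "raw-events",
                              "output_topic": "alerts"},
    {"name": "postgres-sink", "input_topic": "alerts"},
]

def check_wiring(blocks):
    """Return the misaligned links in a linear chain of blocks."""
    errors = []
    for up, down in zip(blocks, blocks[1:]):
        if up.get("output_topic") != down.get("input_topic"):
            errors.append(f'{up["name"]} -> {down["name"]}')
    return errors

assert check_wiring(blocks) == []   # all topics line up
```

Note that topic names are matched exactly, so `raw-events` and `Raw-Events` would count as a misaligned link.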
