Kafka Administration

Condense offers a fully managed Kafka service that automates the complexities of Kafka infrastructure management, handling tasks such as cluster provisioning, scaling, and maintenance so that users can focus on building and deploying data pipelines. The platform also provides built-in security features and real-time observability, enabling users to monitor and optimize their Kafka deployments effortlessly.

To manage Kafka-related configurations, Condense exposes Kafka admin client APIs. These APIs cover cluster operations such as createTopics, createPartitions, and more.
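The exact transport details are deployment-specific, so the snippet below is only a minimal sketch of how these endpoints might be invoked over HTTP. The base URL, bearer-token header, and use of POST are assumptions for illustration and are not part of the documented contract; the endpoint names and request bodies come from the API contract that follows.

```typescript
// Minimal sketch of calling the Condense Kafka admin APIs over HTTP (Node 18+ / TypeScript).
// The base URL, token, and HTTP method below are placeholders/assumptions --
// substitute the values for your own Condense deployment.
const ADMIN_BASE_URL = "https://<your-condense-host>/kafka-admin"; // hypothetical base path
const AUTH_TOKEN = "<your-api-token>";                             // hypothetical credential

async function callAdminApi<T>(endpoint: string, body?: unknown): Promise<T> {
  const response = await fetch(`${ADMIN_BASE_URL}${endpoint}`, {
    method: "POST", // assumption: endpoints are assumed to accept POST
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AUTH_TOKEN}`,
    },
    // Endpoints documented with a request of "NA" are called without a body.
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`Admin API ${endpoint} failed with HTTP ${response.status}`);
  }
  return (await response.json()) as T;
}

// Example: list the consumer groups managed by the broker.
callAdminApi("/listGroups").then((groups) => console.log(groups));
```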

API Contract along with a detailed description

Each API below is listed with its request body, a description of what it does, and a sample response. A request or response of NA means the endpoint takes no request body or returns no payload.

/listGroups
Request: NA
Description: Returns a simple list of the group IDs managed by the broker.
Response:
{
  "data": {
    "groups": [
      { "groupId": "b8316KqF_M9OwFfFAABXhdfhbu", "protocolType": "consumer" },
      { "groupId": "connect-cluster-sqs", "protocolType": "connect" },
      { "groupId": "connect-aws-sqs-sink-connector", "protocolType": "consumer" }
    ]
  },
  "status": "success",
  "message": "Groups retrieved successfully"
}

/resetOffSets
Request: { "groupId": "my-consumer-group", "topic": "postmanValidation" }
Description: Resets the consumer group offset to the earliest or latest offset. The consumer group must have no running instances when the reset is performed.
Response: { "status": "success", "message": "Offset Reset successfully" }

/setOffSets
Request: { "topic": "postmanValidation", "partitions": [ { "partition": 0, "offset": 0 } ] }
Description: Deletes records for the selected topic, starting from the earliest offset. To delete all records in a partition, use a target offset of -1.
Response: NA

/createTopic
Request: { "topicName": "postmanValidation", "timeoutTime": 5000, "numOfPartitions": 2, "configEntries": [] }
Description: Creates a new topic and returns true if the topic was created successfully, or false otherwise. NOTE: if configEntries and other configurations are not provided, the default configuration from the broker is used.
Response: { "status": "success", "message": "Topic Created Successfully" }

/deleteTopic
Request: { "topicName": "postmanValidation" }
Description: Deletes the topic from the Kafka cluster. Only one topic can be deleted at a time.
Response: { "status": "success", "message": "Topic deleted successfully" }

/getAllTopicDetails
Request: NA
Description: Fetches the metadata for all topics in the Kafka cluster.
Response:
{
  "name": "test",
  "partitions": [
    { "partitionErrorCode": 0, "partitionId": 0, "leader": 2, "replicas": [2, 1, 0], "isr": [2, 1, 0], "offlineReplicas": [] },
    { "partitionErrorCode": 0, "partitionId": 3, "leader": 2, "replicas": [2, 0, 1], "isr": [2, 1, 0], "offlineReplicas": [] }
  ]
}

/getTopicDetails/:topic
Request: { "topic": "postmanValidation" }
Description: Fetches the metadata and the latest offsets of a single topic.
Response:
{
  "data": {
    "topics": [
      {
        "name": "postmanValidation",
        "partitions": [
          { "partitionErrorCode": 0, "partitionId": 0, "leader": 2, "replicas": [2, 1, 0], "isr": [2, 1, 0], "offlineReplicas": [] },
          { "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "replicas": [1, 0, 2], "isr": [1, 0, 2], "offlineReplicas": [] }
        ],
        "offsetOfTopic": [
          { "partition": 1, "offset": "0", "high": "0", "low": "0" },
          { "partition": 0, "offset": "0", "high": "0", "low": "0" }
        ]
      }
    ]
  },
  "status": "success",
  "message": "Data retrieved successfully"
}

/describeCluster
Request: NA
Description: Returns information about the broker cluster, including details useful for monitoring.
Response:
{
  "data": {
    "brokers": [
      { "nodeId": 0, "host": "my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc", "port": 9092 },
      { "nodeId": 2, "host": "my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc", "port": 9092 },
      { "nodeId": 1, "host": "my-cluster-kafka-1.my-cluster-kafka-brokers.kafka.svc", "port": 9092 }
    ],
    "controller": 2,
    "clusterId": "YpxpUciGScack2Z9CijpGw"
  },
  "status": "success",
  "message": "Data retrieved successfully"
}

/describeTopic
Request: { "topicName": "postmanValidation" }
Description: Returns the configuration of the specified topic in the Kafka cluster.
Response:
{
  "data": {
    "resources": [
      {
        "errorCode": 0,
        "errorMessage": "",
        "resourceType": 2,
        "resourceName": "postmanValidation",
        "configEntries": [
          { "configName": "compression.type", "configValue": "producer", "readOnly": false, "isDefault": true, "configSource": 5, "isSensitive": false, "configSynonyms": [] }
        ]
      }
    ]
  },
  "status": "success",
  "message": "Data retrieved successfully"
}

/updateTopicConfigs
Request: { "topicName": "postmanValidation", "configEntries": [ { "name": "cleanup.policy", "value": "compact" } ] }
Description: Updates the configuration of the specified topic.
Response:
{
  "data": {
    "resources": [
      { "errorCode": 0, "errorMessage": null, "resourceType": 2, "resourceName": "postmanValidation" }
    ]
  },
  "status": "success",
  "message": "Configuration updated successfully"
}

/getOffsetsForGroup
Request: { "groupId": "my-consumer-group" }
Description: Returns the consumer group offsets for a list of topics.
Response: { "status": "success", "message": "Offsets retrieved successfully", "partitions": [] }

/deleteGroup
Request: { "groupId": "my-consumer-group" }
Description: Deletes a consumer group by its groupId. Only a single group can be deleted at a time.
Response: { "status": "success", "message": "Group Deleted successfully" }
