Schema Registry

Condense ships a schema registry with every managed Kafka instance. It provides a RESTful interface for storing and retrieving your Avro®, JSON Schema, and Protobuf schemas.

It stores a versioned history of all schemas based on a specified subject name strategy, supports multiple compatibility settings, and allows schemas to evolve according to the configured compatibility setting.

It also provides serializers that plug into Apache Kafka® clients and handle schema storage and retrieval for Kafka messages sent in any of the supported formats (see the client-side sketch below).

The schema registry is hosted within the Condense deployment, and its APIs are exposed on an internal load balancer IP.
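
As a client-side illustration of these serializers, below is a minimal producer sketch assuming the confluent-kafka Python package; the broker address, topic name "vehicle-telemetry", and record fields are placeholders, not part of the Condense deployment.

from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Hypothetical Avro value schema; replace the fields with your own payload.
value_schema = """
{
  "type": "record",
  "name": "Reading",
  "fields": [
    {"name": "device_id", "type": "string"},
    {"name": "speed", "type": "double"}
  ]
}
"""

# Point the client at the registry exposed on the internal load balancer.
registry = SchemaRegistryClient({"url": "http://<internal-lb-ip>:8081"})

# The serializer registers the schema (if not already present) and embeds
# its registry ID in every message it produces.
avro_serializer = AvroSerializer(registry, value_schema)

producer = Producer({"bootstrap.servers": "<kafka-broker>:9092"})

topic = "vehicle-telemetry"
value = avro_serializer(
    {"device_id": "truck-42", "speed": 61.5},
    SerializationContext(topic, MessageField.VALUE),
)
producer.produce(topic=topic, value=value)
producer.flush()

On the consumer side, the matching AvroDeserializer looks up the schema by the ID embedded in each message, so both ends only need the registry URL.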

Examples

# Register a new version of a schema under the subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://<internal-lb-ip>:8081/subjects/Kafka-key/versions
  {"id":1}

# Register a new version of a schema under the subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://<internal-lb-ip>:8081/subjects/Kafka-value/versions
  {"id":1}

# List all subjects
$ curl -X GET http://<internal-lb-ip>:8081/subjects
  ["Kafka-value","Kafka-key"]

# List all schema versions registered under the subject "Kafka-value"
$ curl -X GET http://<internal-lb-ip>:8081/subjects/Kafka-value/versions
  [1]

# Fetch a schema by globally unique id 1
$ curl -X GET http://<internal-lb-ip>:8081/schemas/ids/1
  {"schema":"\"string\""}

# Fetch version 1 of the schema registered under subject "Kafka-value"
$ curl -X GET http://<internal-lb-ip>:8081/subjects/Kafka-value/versions/1
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Fetch the most recently registered schema under subject "Kafka-value"
$ curl -X GET http://<internal-lb-ip>:8081/subjects/Kafka-value/versions/latest
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Delete version 3 of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://<internal-lb-ip>:8081/subjects/Kafka-value/versions/3
  3

# Delete all versions of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://<internal-lb-ip>:8081/subjects/Kafka-value
  [1, 2, 3, 4, 5]

# Check whether a schema has been registered under subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://<internal-lb-ip>:8081/subjects/Kafka-key
  {"subject":"Kafka-key","version":1,"id":1,"schema":"\"string\""}

# Test compatibility of a schema with the latest schema under subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://<internal-lb-ip>:8081/compatibility/subjects/Kafka-value/versions/latest
  {"is_compatible":true}

# Get top level config
$ curl -X GET http://<internal-lb-ip>:8081/config
  {"compatibilityLevel":"BACKWARD"}

# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "NONE"}' \
    http://<internal-lb-ip>:8081/config
  {"compatibility":"NONE"}

# Update compatibility requirements under the subject "Kafka-value"
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "BACKWARD"}' \
    http://<internal-lb-ip>:8081/config/Kafka-value
  {"compatibility":"BACKWARD"}
