Pre-Built Connectors
Condense provides pre-built connectors that users can leverage to build data pipelines. These connectors are primarily classified into three categories: Vehicle Telematics connectors, Stream connectors, and Store connectors.
This category includes connectors specifically designed for vehicle telematics and mobility-related data. These devices act as sensors, constantly gathering data from their environment. Condense establishes secure communication channels with these telematics devices, enabling the real-time flow of telemetry data. This data could include GPS location, engine performance metrics, sensor readings, or user activity data, depending on the specific device and application.
Condense acts as a telematics gateway. It receives, parses, registers, and stores the data coming from connected devices. Condense offers device configuration for these telematics device manufacturers:
Teltonika
iTriangle
Jimi Concox
Condense Edge (Zeliot)
Device Type: You select the type of device you want to connect from a predefined list of supported devices offered by Condense.
Communication Protocol: Based on the device type you choose, Condense automatically defines the communication protocol it uses. Condense currently supports two communication protocols:
TCP/IP: Standard Transmission Control Protocol/Internet Protocol for device communication.
MQTT: Message Queuing Telemetry Transport, a lightweight messaging protocol for machine-to-machine communication.
Data Processing: When a device connects and sends data, Condense uses its configured port mapping to route the data for internal processing. This involves data transformation (e.g., converting raw sensor readings to human-readable units).
Output Topic: This parameter defines the topic where the processed data from the device will be published within Condense.
Customizable Output Topic: You can create and configure a customizable output topic to subscribe to an input topic, process the incoming data (if required), and publish the data to the newly configured output topic.
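As an illustration of the subscribe, transform, publish pattern described above, the sketch below uses the kafka-python client. The topic names, broker address, and sample transformation are hypothetical placeholders and are not taken from Condense's API.

```python
# Minimal sketch of the subscribe -> transform -> publish pattern behind a
# customizable output topic. All names here are hypothetical placeholders.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "device-raw-data",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",      # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    reading = record.value
    # Example transformation: convert a raw reading into a human-readable unit.
    reading["temperature_c"] = reading.pop("temperature_raw", 0) / 10.0
    producer.send("device-processed-data", reading)   # hypothetical output topic
```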
Condense Edge is a modular, low-memory-footprint firmware developed by Zeliot, designed to enable the collection and transfer of rich data generated from vehicles. It also facilitates Over-the-Air (OTA) updates for vehicle Electronic Control Units (ECUs). Condense Edge is hardware-agnostic and supports edge computing, allowing data processing at the device level. This capability optimizes the memory footprint for cloud computing by performing edge-specific computations directly on the device.
The Condense Edge plugin is a special input connector in Condense that works with Zeliot's Condense Edge firmware.
The plugin enables Condense Edge to interface with the server for the following functionalities:
Transmission over TCP/IP with TLS.
Configuration-based data parsing of custom CAN parameters and alerts.
Cloud-to-device command interface.
Condense Edge and Condense form a closed ecosystem and work well with each other. This plugin follows the same deployment flow as the other input telematics device connectors. An illustrative device-side TLS connection is sketched below.
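The exact wire format used between Condense Edge and Condense is not described here, so the following is only a minimal sketch of a TLS-secured TCP connection from a device to a server, using Python's standard library. The hostname, port, and payload are hypothetical.

```python
# Sketch of a device-side TCP/IP connection secured with TLS.
# Hostname, port, and the telemetry payload are hypothetical placeholders.
import socket
import ssl

HOST = "telematics.example.com"   # assumed server endpoint
PORT = 7443                       # assumed TLS port

context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # Send one newline-delimited JSON telemetry record (hypothetical schema).
        tls_sock.sendall(b'{"vin": "TEST123", "speed_kmph": 42.5}\n')
```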
Stream connectors handle the continuous flow of data while focusing on the reliable delivery of individual records. They play a vital role in moving and managing data within your Condense pipeline for IoT applications, and each can be used as either an input connector or an output connector.
Condense currently supports these stream connectors:
These are designed to manage high-volume, real-time data streams. They are ideal for processing continuous data flows, such as sensor readings, logs, or machine-generated data, making them well-suited for handling real-time data from various sources and applications.
These protocols are ideal for ingesting data from the web or real-time data feeds exposed over the internet. This could be sensor data transmitted from an IoT device using a web interface or data retrieved from an external service.
It allows you to buffer incoming data streams from devices, ensuring reliable delivery even with temporary network issues. This buffering can be helpful for situations where real-time processing might not be critical.
It is a lightweight, publish-subscribe network protocol designed for constrained devices and low-bandwidth, high-latency, or unreliable networks.
The HTTPS connector facilitates integration between Condense and external systems over HTTPS (HyperText Transfer Protocol Secure), securely transmitting data from Condense to external systems, or from external systems to Condense, using HTTPS POST/GET requests.
The HTTPS Stream Connector integrates Condense with external systems through HTTPS endpoints, enabling real-time data exchange from Kafka topics via HTTPS POST requests; a request sketch follows the URL steps below.
The HTTPS URL where data is sent. Typically used for secure communication over the web.
a. Determine the server endpoint where your application needs to send the data.
b. Ensure the URL starts with https:// to use secure communication.
c. Obtain HTTPS URLs from the web service documentation or API endpoint being used.
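To make the delivery step concrete, the sketch below shows an HTTPS POST of a single JSON record with the Python requests library; the endpoint URL and payload are hypothetical placeholders.

```python
# Sketch of the HTTPS POST delivery performed against the configured endpoint.
# The URL and record contents are hypothetical placeholders.
import requests

endpoint = "https://api.example.com/telemetry"          # HTTPS URL from step (a)
record = {"device_id": "abc-123", "speed_kmph": 57.2}   # sample processed record

response = requests.post(endpoint, json=record, timeout=10)
response.raise_for_status()   # treat non-2xx responses as delivery failures
```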
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
The Pub/Sub Connector integrates Kafka with Google Cloud Pub/Sub, facilitating message exchange between Kafka topics and Pub/Sub topics; a publishing sketch follows the parameter notes below.
Find the Official Documentation from Google Cloud here: Google Pub/Sub
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
Available in the Google Cloud Console under your project’s details.
Obtain the topic name by creating a topic in Google Cloud Pub/Sub or selecting an existing one.
Create or obtain service account credentials from the Google Cloud Console under IAM & Admin > Service Accounts.
Download the JSON key file for the service account.
Configured based on the Kafka topic designated for error handling in your system.
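Putting the parameters above together, the sketch below publishes one message to a Pub/Sub topic with the google-cloud-pubsub client and a service account key file; the project ID, topic name, key file path, and payload are hypothetical placeholders.

```python
# Sketch of publishing a single message to Google Cloud Pub/Sub using the
# project ID, topic name, and service account key gathered above (placeholders).
from google.oauth2 import service_account
from google.cloud import pubsub_v1

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json"          # JSON key downloaded from IAM & Admin
)
publisher = pubsub_v1.PublisherClient(credentials=credentials)
topic_path = publisher.topic_path("my-gcp-project", "vehicle-telemetry")

future = publisher.publish(topic_path, b'{"vin": "TEST123", "speed_kmph": 42.5}')
print(future.result())                  # message ID once the publish is acknowledged
```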
Amazon Kinesis Data Streams is a fully managed service for real-time data streaming at scale. It is used to collect and process large streams of data records in real time, allowing developers to build applications that continuously ingest and process streaming data; a put-record sketch follows the parameter notes below.
Find the Official Documentation from Amazon here: Amazon Kinesis
Find the Developer Documentation from Amazon here: Developer Documentation
The access key ID is required to authenticate and access AWS Kinesis.
Log in to the AWS Management Console.
Navigate to IAM (Identity and Access Management).
Select Users and then your user.
Choose the Security credentials tab.
Create a new access key or use an existing one.
The secret access key is required to authenticate and access AWS Kinesis.
Go to the AWS Management Console.
Navigate to IAM (Identity and Access Management).
Select Users and then your user.
Choose the Security credentials tab.
Create a new access key or use an existing one.
The AWS region where the Kinesis stream is located.
Go to the AWS Management Console.
Navigate to Kinesis.
Check the region setting in the top-right corner.
The name of the Kinesis stream to which data will be sent.
Go to the AWS Management Console.
Navigate to Kinesis.
List and select the stream name from the dashboard.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
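As an illustration of the Kinesis side of this connector, the sketch below writes one record to a stream with boto3 using the access key, secret key, region, and stream name described above; all values shown are placeholders.

```python
# Sketch of writing a single record to an Amazon Kinesis stream with boto3.
# Credentials, region, stream name, and the record itself are placeholders.
import json
import boto3

kinesis = boto3.client(
    "kinesis",
    region_name="us-east-1",              # AWS region shown in the console
    aws_access_key_id="AKIA...",          # access key ID from IAM
    aws_secret_access_key="...",          # secret access key from IAM
)

record = {"device_id": "abc-123", "speed_kmph": 57.2}
kinesis.put_record(
    StreamName="vehicle-telemetry",       # hypothetical Kinesis stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["device_id"],     # keeps records from one device ordered
)
```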
Simple Queue Service (SQS) is a fully managed message queuing service offered by Amazon Web Services (AWS). It enables the decoupling and scaling of microservices, distributed systems, and serverless applications by allowing the asynchronous transmission of messages between components.
Find the Official Documentation from Amazon here: Amazon SQS
Find the Developer Documentation from Amazon here: Developer Documentation
Sign in to the AWS Management Console and open the Amazon SQS console at https://console.aws.amazon.com/sqs/.
Select the Queues from the navigation pane.
Choose the queue you want to use.
The queue's details page will display the Queue Name and the Amazon Resource Name (ARN).
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Users.
Select the user account for which you want to create or view access keys.
On the user details page, choose the Security credentials tab.
In the Access keys section, choose Create access key to generate a new key pair if none exists, or view the existing access keys.
Note: Keep the Secret Access Key secure and do not share it publicly.
Go to the AWS Management Console.
Navigate to SQS.
Check the region setting in the top-right corner.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
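For reference, the sketch below sends one message to an SQS queue with boto3 using the queue URL, credentials, and region described above; every value shown is a placeholder.

```python
# Sketch of sending a single message to an Amazon SQS queue with boto3.
# The queue URL, credentials, region, and message body are placeholders.
import json
import boto3

sqs = boto3.client(
    "sqs",
    region_name="us-east-1",                  # AWS region shown in the console
    aws_access_key_id="AKIA...",              # access key ID from IAM
    aws_secret_access_key="...",              # secret access key from IAM
)

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry-queue"
message = {"device_id": "abc-123", "event": "ignition_on"}

sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(message))
```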
MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe network protocol designed for constrained devices and low-bandwidth, high-latency, or unreliable networks. As a stream output connector, MQTT allows clients to publish messages to a broker, which then routes those messages to subscribing clients. MQTT is optimized for minimal bandwidth and device resource usage, making it ideal for IoT (Internet of Things) applications where resources are often limited.
Find the Official Documentation for MQTT here: MQTT
Topic Name
Broker URL
Username
Password
Client ID
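The parameters above map directly onto a standard MQTT publish. The sketch below uses the paho-mqtt publish helper to send one message; the broker hostname, credentials, client ID, and topic are hypothetical placeholders.

```python
# Sketch of publishing one message with the parameters listed above
# (Topic Name, Broker URL, Username, Password, Client ID). Uses the
# paho.mqtt.publish helper; all values are placeholders.
import json
import paho.mqtt.publish as publish

payload = json.dumps({"device_id": "abc-123", "speed_kmph": 57.2})

publish.single(
    "vehicles/telemetry",                    # Topic Name
    payload,
    hostname="broker.example.com",           # Broker URL (host)
    port=1883,
    client_id="condense-demo-client",        # Client ID
    auth={"username": "demo-user", "password": "demo-password"},  # Username / Password
)
```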
Store type connectors are primarily classified as output connectors for integrating the Condense pipeline with data storage solutions, giving you control over how you manage the information collected from your IoT devices or other input connectors. This ensures the data is persisted for later analysis, visualization, or use in applications.
Condense currently supports the store connectors listed below:
Bigtable: Well-suited for storing and analyzing massive datasets from real-time IoT applications.
Cassandra: A distributed NoSQL database designed for high availability and scalability, ideal for handling large-scale unstructured data.
MongoDB: A flexible, document-oriented NoSQL database that stores data in JSON-like formats, perfect for unstructured or semi-structured data.
MySQL: For storing structured data with timestamps and values, like temperature readings or device status.
Microsoft SQL Server: A robust relational database management system (RDBMS) that supports structured data and SQL queries for enterprise applications.
BigQuery: A powerful tool for large-scale data analysis of historical IoT data stored in Google Cloud Platform.
InfluxDB: Specifically designed to handle high volumes of time-series data with timestamps, making it a great fit for sensor data from IoT devices.
ClickHouse: A high-performance, columnar database optimized for real-time analytics and large-scale data processing, often used in cloud environments.
Timescale: A time-series database built on PostgreSQL, optimized for storing and querying time-series data like sensor metrics and logs.
Google Cloud Bigtable is a fully managed, scalable NoSQL database service designed for large analytical and operational workloads. It is ideal for applications that require high read and write throughput and low latency.
Find the Official Documentation from Google here: Google Bigtable
Find the API Documentation from Google here: API Documentation
Find the Developer Documentation from Google here: Developer Documentation
The Bigtable Connector facilitates the integration between Condense and Bigtable. This connector allows for efficient data ingestion from Kafka topics directly into a Bigtable instance, enabling real-time analytics and storage of streaming data.
The ID of the Bigtable project.
Navigate to the Google Cloud Console.
Select your project from the project selector drop-down.
The Project ID is displayed in the project info panel.
The Instance ID of the Google Bigtable project.
In the Google Cloud Console, go to the Bigtable Instances page.
Select your Bigtable instance to view its details.
The Instance ID will be visible in the instance summary.
The table ID of the Google Bigtable.
Go to the Bigtable section in the Google Cloud Console.
Open your instance and navigate to the Tables section.
Select the table you want to use; the Table ID will be listed.
The Bigtable service account credential is in base64 format.
Create a service account for Bigtable with appropriate roles (Bigtable Admin, Bigtable User).
Download the JSON key file for the service account and base64-encode its contents before supplying it to the connector.
The Kafka topic from which the data will be read to be written to the specified Bigtable instance.
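As a point of reference for how rows land in Bigtable, the sketch below writes one cell with the google-cloud-bigtable client using the project, instance, table, and key file described above; the row key, column family, and values are hypothetical placeholders.

```python
# Sketch of writing one cell to a Bigtable table with the project ID,
# instance ID, table ID, and service account key gathered above (placeholders).
from google.oauth2 import service_account
from google.cloud import bigtable

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json"
)
client = bigtable.Client(project="my-gcp-project", credentials=credentials)
table = client.instance("my-bigtable-instance").table("vehicle_telemetry")

# The column family ("telemetry") must already exist on the table.
row = table.direct_row(b"device#abc-123#2024-01-01T00:00:00Z")   # hypothetical row key
row.set_cell("telemetry", b"speed_kmph", b"57.2")
row.commit()
```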
Cassandra is a highly scalable and distributed NoSQL database known for its ability to handle large amounts of data across many servers, offering high availability without a single point of failure.
Find the Official Documentation from Apache Cassandra here: Apache Cassandra
The endpoint address that allows applications to connect to the Cassandra database.
Open your Apache Cassandra configuration file (cassandra.yaml).
Locate the rpc_address setting, which contains the URL.
Alternatively, consult your system administrator for the connection URL.
The network port through which Cassandra accepts connections, typically defaulting to 9042.
Open the Cassandra configuration file (cassandra.yaml).
Find the native_transport_port setting; the default is usually 9042.
Verify with your system administrator if a custom port is used.
A keyspace is the top-level namespace in Cassandra that groups related tables and defines their replication settings.
Use a CQL (Cassandra Query Language) client to connect to your Cassandra instance.
Execute the command DESCRIBE KEYSPACES; to list all keyspaces.
Choose the keyspace relevant to your project.
A structure within a keyspace that organizes data into rows and columns, functioning like a table in a relational database.
Connect to your Cassandra instance using a CQL client.
Use the command USE keyspace_name; to switch to the desired keyspace.
Run DESCRIBE TABLES; to view all tables within that keyspace.
This refers to the method used to distribute data replicas across nodes. Popular strategies include SimpleStrategy for single data centre use and NetworkTopologyStrategy for multiple data centres.
Within a CQL client, execute DESCRIBE KEYSPACE keyspace_name;
Review the output for the replication settings, which detail the strategy being used.
The number of data copies maintained across the Cassandra cluster. It determines how many nodes will store copies of the same data.
In the CQL client, after running DESCRIBE KEYSPACE keyspace_name;, check the replication_factor in the description output.
This number indicates how many copies of the data are maintained across the cluster.
These are the columns in a Cassandra table, each defined by a name and a data type.
Connect using a CQL client and select the desired keyspace and table.
Execute DESCRIBE TABLE table_name; to see all fields (columns) and their data types in the table.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
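Bringing the connection URL, port, keyspace, and table together, the sketch below inserts one row with the DataStax cassandra-driver package; the host, keyspace, table, and column names are hypothetical placeholders.

```python
# Sketch of inserting one row into a Cassandra table using the connection URL,
# port, keyspace, and table gathered above. All names are placeholders.
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra.example.com"], port=9042)   # rpc_address / native_transport_port
session = cluster.connect("telemetry_keyspace")           # keyspace

session.execute(
    "INSERT INTO vehicle_readings (device_id, reading_time, speed_kmph) "
    "VALUES (%s, %s, %s)",
    ("abc-123", datetime.now(timezone.utc), 57.2),
)
cluster.shutdown()
```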
A flexible, document-oriented NoSQL database that stores data in JSON-like formats, perfect for unstructured or semi-structured data.
Find the Official Documentation from MongoDB here: MongoDB Documentation
This is the connection string used to connect to your MongoDB server, typically including the server's address and port.
Access your MongoDB configuration or consult the system administrator.
The URL is often formatted like mongodb://<hostname>:<port>.
The specific database within your MongoDB instance that you want to access or manage.
Once you're connected to the MongoDB server, you can retrieve the list of databases by using the show dbs command.
Collections in MongoDB are equivalent to tables in relational databases and hold the actual data records.
Connect to your desired database using a MongoDB client.
Execute show collections in the MongoDB shell to list all collections within the database.
These are the individual pieces of data stored within documents in a collection, similar to columns in a SQL table.
Use a MongoDB client to access the database and collection.
Run a sample query with findOne() to view a document, which will display the fields and their current data structure.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
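As an illustration of how documents are written, the sketch below inserts and reads back one document with pymongo using the URL, database, collection, and fields described above; all names and values are hypothetical placeholders.

```python
# Sketch of inserting one document using the MongoDB URL, database name,
# collection, and fields gathered above. All values are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://db.example.com:27017")      # MongoDB URL
collection = client["telemetry_db"]["vehicle_readings"]     # database / collection

collection.insert_one({"device_id": "abc-123", "speed_kmph": 57.2})
print(collection.find_one({"device_id": "abc-123"}))         # confirm the document landed
```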
A popular open-source relational database management system (RDBMS) used for storing and managing data. It functions as an output connector within Condense Pipeline for IoT applications.
Find the Official Documentation from MySQL here: MySQL Documentation
Links to Download MySQL Connectors: Download MySQL Connectors
The server address where the MySQL database is hosted.
Check the configuration files of your application.
Ask your system administrator.
Review hosting provider documentation.
The specific database within the MySQL server you are connecting to.
Query the MySQL server: SHOW DATABASES;
Check the configuration files of your application.
Ask your database administrator.
MySQL Port
The network port the MySQL server is listening on, typically 3306.
Check the configuration files of your MySQL server.
Use the command: SHOW VARIABLES LIKE 'port';
Ask your system administrator.
The username is used to authenticate and connect to the MySQL server.
Check configuration files of your application.
Query the MySQL server: SELECT user FROM mysql.user;
Ask your database administrator.
The password is associated with the MySQL user account.
Check configuration files of your application.
Ask your database administrator.
Note: For security reasons, passwords are usually not stored in plaintext and should be handled securely.
The specific table within the MySQL database you are interacting with.
Query the MySQL database: SHOW TABLES;
Check configuration files of your application.
Ask your database administrator.
The columns or fields within the specified MySQL table.
Query the MySQL table: DESCRIBE table_name;
Check the documentation or schema design of your database.
Ask your database administrator.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
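Using the host, port, user, password, database, table, and fields described above, the sketch below inserts one row with the mysql-connector-python package; all values are placeholders.

```python
# Sketch of inserting one row using the MySQL host, port, user, password,
# database, table, and fields gathered above. All values are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.com",
    port=3306,
    user="condense_writer",
    password="change-me",
    database="telemetry_db",
)
cursor = conn.cursor()
cursor.execute(
    "INSERT INTO vehicle_readings (device_id, speed_kmph) VALUES (%s, %s)",
    ("abc-123", 57.2),
)
conn.commit()
cursor.close()
conn.close()
```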
Microsoft SQL Server (MSSQL) is a widely used relational database management system (RDBMS) known for its reliability, scalability, and robust feature set. In the context of Condense Pipelines for Internet of Things (IoT) applications, MSSQL functions as an output connector.
Find the Official Documentation from Microsoft here: MSSQL Documentation
This is the hostname or IP address of the server where your Microsoft SQL Server instance is running.
Login to the SQL Server Management Studio (SSMS).
Check the server name at the top of the Object Explorer panel. This is your MSSQL Host.
If you are working in a local environment, it could be localhost or 127.0.0.1.
The name of the database where your data resides.
Open SQL Server Management Studio.
Connect to your SQL Server instance.
Expand the "Databases" node in the Object Explorer panel to see the list of databases.
The port number that SQL Server is listening on.
Open SQL Server Configuration Manager.
Navigate to SQL Server Network Configuration > Protocols for [INSTANCE_NAME].
Right-click on TCP/IP and select "Properties".
Go to the "IP Addresses" tab and scroll down to the "IPAll" section to see the port number in the "TCP Port" field, usually it is 1433
The username you use to connect to the SQL Server database.
This is typically created during the SQL Server installation or can be created via SQL Server Management Studio.
To view or create a user, open SQL Server Management Studio, and go to Security > Logins.
The password associated with the MSSQL User.
This is set by the database administrator during user setup.
If you forget it, you may need to contact your database administrator to reset it.
The name of the table within your database that contains the data you are interested in.
Open SQL Server Management Studio and connect to your database.
Expand the "Databases" node and then the specific database.
Expand the "Tables" folder to see the list of tables.
These are the column names or fields within your table that you need to interact with.
Open SQL Server Management Studio.
Navigate to the table of interest.
Right-click the table and select "Design" to view all the fields (columns) in the table.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
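For reference, the sketch below inserts one row into an MSSQL table with pyodbc using the host, port, database, user, password, table, and fields described above; the connection string values are placeholders, and the ODBC driver name may differ on your system.

```python
# Sketch of inserting one row using the MSSQL host, port, database, user,
# password, table, and fields gathered above. All values are placeholders and
# the ODBC driver name may differ on your machine.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=db.example.com,1433;"
    "DATABASE=telemetry_db;"
    "UID=condense_writer;PWD=change-me;"
    "TrustServerCertificate=yes;"
)
cursor = conn.cursor()
cursor.execute(
    "INSERT INTO vehicle_readings (device_id, speed_kmph) VALUES (?, ?)",
    "abc-123", 57.2,
)
conn.commit()
conn.close()
```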
InfluxDB is a time-series database designed to handle high write and query loads. It is useful for handling metrics, events, and time-series data.
Find the Official Documentation from InfluxDB here: InfluxDB Documentation
The InfluxDB URL is the endpoint used to connect to your InfluxDB instance.
Identify the InfluxDB instance you want to connect to.
If using a local or self-hosted InfluxDB:
Default URL: http://localhost:8086
If using InfluxDB Cloud:
Log in to your InfluxDB Cloud account.
Navigate to your organization and select the desired instance.
The URL provided in the instance details will be your InfluxDB URL. InfluxDB OSS is accessed at localhost:8086 by default, but you can also customize your InfluxDB host and port.
The organization name is the identifier for different user groups within the InfluxDB instance, helping to segregate and manage resources and permissions.
Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.
Navigate to the "Organizations" tab.
Note the organization name listed or create a new one if necessary.
The authentication token is used to access the InfluxDB API securely. It grants permissions based on the roles assigned and is necessary for any interaction with the database.
Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.
Navigate to the "Tokens" or "API Tokens" section under the "Data" or "Settings" tab.
Generate a new token or use an existing one. Ensure it has the necessary permissions for the required operations.
Copy the token for use in your configurations.
Buckets in InfluxDB are logical containers for time-series data, similar to tables in traditional databases. Data is written to and queried from buckets.
Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.
Navigate to the "Buckets" section.
View the list of existing buckets or create a new one as needed.
Note the name of the bucket for use in your configurations.
This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
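Combining the URL, organization, token, and bucket described above, the sketch below writes one point with the influxdb-client package for InfluxDB 2.x; all values are placeholders.

```python
# Sketch of writing one time-series point using the InfluxDB URL, organization,
# token, and bucket gathered above. All values are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-api-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = Point("vehicle_telemetry").tag("device_id", "abc-123").field("speed_kmph", 57.2)
write_api.write(bucket="telemetry", record=point)
client.close()
```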