Pre-Built Connectors

Condense provides pre-built connectors that users can leverage to build data pipelines. These connectors are primarily classified into three categories: Vehicle Telematics, Stream Connectors, and Store Connectors.

Vehicle Telematics

This category includes connectors specifically designed for vehicle telematics and mobility-related data. Telematics devices act as sensors, constantly gathering data from their environment. Condense establishes secure communication channels with these devices, enabling the real-time flow of telemetry data. This data could include GPS location, engine performance metrics, sensor readings, or user activity data, depending on the specific device and application.

Condense acts as a telematics gateway. It receives, parses, registers, and stores the data coming from connected devices. Condense offers device configuration for these telematics device manufacturers:

  • Teltonika

  • iTriangle

  • Jimi Concox

  • Condense Edge (Zeliot)

How can you start using these connectors on the Condense App?

  1. Device Type: You select the type of device you want to connect from a predefined list of supported devices offered by Condense.

  2. Communication Protocol: Based on the device type you choose, Condense automatically defines the communication protocol it uses. Condense currently supports two communication protocols:

    • TCP/IP: Standard Transmission Control Protocol/Internet Protocol for device communication.

    • MQTT: Message Queuing Telemetry Transport, a lightweight messaging protocol for machine-to-machine communication.

  3. Data Processing: When a device connects and sends data, Condense uses its configured port mapping to route the data for internal processing. This involves data transformation (e.g., converting raw sensor readings to human-readable units).

  4. Output Topic: This parameter defines the topic where the processed data from the device will be published within Condense.

  5. Customizable Output Topic: You can create and configure a customizable output topic to subscribe to an input topic, process the incoming data (if required), and publish the data to the newly configured output topic.
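
To make the flow concrete, the selections from these steps could be captured as in the sketch below; the field names are hypothetical and do not necessarily match the Condense App's exact labels.

# Hypothetical summary of the selections made in the steps above;
# field names are illustrative, not the Condense App's exact keys.
device_connector = {
    "device_type": "Teltonika",            # step 1: chosen from the supported-device list
    "protocol": "TCP/IP",                  # step 2: set automatically from the device type
    "output_topic": "teltonika-parsed",    # step 4: topic where processed data is published
    "custom_output_topic": {               # step 5: optional, subscribes to an input topic
        "input_topic": "teltonika-parsed",
        "output_topic": "fleet-events",
    },
}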

Condense Edge

Condense Edge is a modular, low-memory-footprint firmware developed by Zeliot, designed to enable the collection and transfer of rich data generated from vehicles. It also facilitates Over-the-Air (OTA) updates for vehicle Electronic Control Units (ECUs). Condense Edge is hardware-agnostic and supports edge computing, allowing data processing at the device level. This capability optimizes the memory footprint for cloud computing by performing edge-specific computations directly on the device.

The Condense Edge plugin is a special input connector in Condense that works with Zeliot’s Condense Edge firmware.

The plugin enables Condense Edge to interface with the server for the following functionalities:

  1. Transmission over TCP/IP with TLS.

  2. Configuration-based data parsing of custom CAN parameters and alerts.

  3. Cloud-to-device command interface.

Condense Edge and Condense form a closed ecosystem and work well with each other. This plugin has the same deployment flow as the other input telematics device connectors.

Stream Connectors

Stream connectors handle the continuous flow of data while focusing on the reliable delivery of individual records. They play a vital role in moving and managing data within your Condense Pipeline for IoT applications, and they can be used as either input connectors or output connectors.

Condense currently supports these stream connectors:

Real-time Data Ingestion:

Kinesis (AWS), Pub/Sub (GCP)

These are designed to manage high-volume, real-time data streams. They are ideal for processing continuous data flows, such as sensor readings, logs, or machine-generated data, making them well-suited for handling real-time data from various sources and applications.

HTTPS

These protocols are ideal for ingesting data from the web or real-time data feeds exposed over the internet. This could be sensor data transmitted from an IoT device using a web interface or data retrieved from an external service.

SQS (AWS)

It allows you to buffer incoming data streams from devices, ensuring reliable delivery even with temporary network issues. This buffering can be helpful for situations where real-time processing might not be critical.

MQTT (Message Queuing Telemetry Transport)

It is a lightweight, publish-subscribe network protocol designed for constrained devices and low-bandwidth, high-latency, or unreliable networks.

How can you start using these connectors on the Condense App?

HTTPS: Output Connector

The HTTPS connector integrates Condense with external systems over HTTPS (HyperText Transfer Protocol Secure), allowing data to be transmitted securely from Condense to external systems, or from external systems to Condense, using HTTPS POST/GET requests.

Configurations

The HTTPS Stream Connector integrates Condense with external systems through HTTPS endpoints, enabling real-time data exchange from Kafka topics via HTTPS POST requests.

HTTPS_URL

The HTTPS URL where data is sent. Typically used for secure communication over the web.

How to obtain the URL

a. Determine the server endpoint where your application needs to send the data.

b. Ensure the URL starts with https:// to use secure communication.

c. Obtain HTTPS URLs from the web service documentation or API endpoint being used.

KAFKA_TOPIC

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
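
Conceptually, the HTTPS output connector consumes records from KAFKA_TOPIC and POSTs each one to HTTPS_URL. Below is a minimal sketch of that flow, assuming the kafka-python and requests libraries; the broker address, topic, and endpoint are placeholders.

import json
import requests
from kafka import KafkaConsumer

HTTPS_URL = "https://example.com/ingest"      # placeholder endpoint
KAFKA_TOPIC = "vehicle-telemetry"             # placeholder topic name

# Read records from the Kafka topic; offsets are tracked per consumer group.
consumer = KafkaConsumer(
    KAFKA_TOPIC,
    bootstrap_servers="localhost:9092",       # placeholder broker address
    group_id="https-output-connector",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Forward each record to the external system over HTTPS POST.
    response = requests.post(HTTPS_URL, json=message.value, timeout=10)
    response.raise_for_status()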

Pub Sub: Output connector

The Pub/Sub Connector integrates Kafka with Google Cloud Pub/Sub, facilitating message exchange between Kafka topics and Pub/Sub topics

How to obtain these Configurations?

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.

PUB_SUB_PROJECT_ID

Available in the Google Cloud Console under your project’s details.

PUB_SUB_TOPIC_NAME / Subscription name for input connector

Obtain the topic name by creating a topic in Google Cloud Pub/Sub or selecting an existing one.

PUB_SUB_SERVICE_ACCOUNT_CREDENTIAL

  • Create or obtain service account credentials from the Google Cloud Console under IAM & Admin > Service Accounts.

  • Download the JSON key file for the service account.

KAFKA_ERROR_TOPIC

Configured based on the Kafka topic designated for error handling in your system.
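
As a rough sketch of how these values are used, the snippet below publishes a single message to the configured Pub/Sub topic with the google-cloud-pubsub client and the downloaded JSON key; the project ID, topic name, key file, and sample record are placeholders.

import json
from google.cloud import pubsub_v1
from google.oauth2 import service_account

PUB_SUB_PROJECT_ID = "my-gcp-project"             # placeholder project ID
PUB_SUB_TOPIC_NAME = "condense-output"            # placeholder topic name
KEY_FILE = "service-account.json"                 # downloaded JSON key file

credentials = service_account.Credentials.from_service_account_file(KEY_FILE)
publisher = pubsub_v1.PublisherClient(credentials=credentials)
topic_path = publisher.topic_path(PUB_SUB_PROJECT_ID, PUB_SUB_TOPIC_NAME)

# Publish a single JSON record; in the connector this would be each Kafka message.
record = {"vehicle_id": "V123", "speed_kmph": 54}
future = publisher.publish(topic_path, json.dumps(record).encode("utf-8"))
print("Published message ID:", future.result())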

Kinesis: Output Connector

Amazon Kinesis Streams is a fully managed service for real-time data streaming at scale. It is used to collect and process large streams of data records in real-time. The service allows developers to build applications that can continuously ingest and process large, streaming data in real-time.

How to obtain these Configurations?

KINESIS_ACCESS_KEY_ID

The access key ID is required to authenticate and access AWS Kinesis.

  • Log in to the AWS Management Console.

  • Navigate to IAM (Identity and Access Management).

  • Select Users and then your user.

  • Choose Security credentials tab.

  • Create a new access key or use an existing one.

KINESIS_SECRET_KEY

The secret access key is required to authenticate and access AWS Kinesis.

  • Go to the AWS Management Console.

  • Navigate to IAM (Identity and Access Management).

  • Select Users and then your user.

  • Choose Security credentials tab.

  • Create a new access key or use an existing one.

KINESIS_REGION_NAME

The AWS region where the Kinesis stream is located.

  • Go to the AWS Management Console.

  • Navigate to Kinesis.

  • Check the region setting in the top-right corner.

KINESIS_STREAM_NAME

The name of the Kinesis stream to which data will be sent.

  • Go to the AWS Management Console.

  • Navigate to Kinesis.

  • List and select the stream name from the dashboard.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
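
For reference, this is roughly how the values above map onto a Kinesis put_record call with boto3; the keys, region, stream name, and sample record are placeholders, not the connector's internal code.

import json
import boto3

KINESIS_ACCESS_KEY_ID = "AKIA..."        # placeholder access key ID
KINESIS_SECRET_KEY = "..."               # placeholder secret access key
KINESIS_REGION_NAME = "us-east-1"        # placeholder region
KINESIS_STREAM_NAME = "condense-stream"  # placeholder stream name

kinesis = boto3.client(
    "kinesis",
    region_name=KINESIS_REGION_NAME,
    aws_access_key_id=KINESIS_ACCESS_KEY_ID,
    aws_secret_access_key=KINESIS_SECRET_KEY,
)

# Write one record; PartitionKey controls how records are spread across shards.
record = {"vehicle_id": "V123", "engine_temp_c": 88.5}
kinesis.put_record(
    StreamName=KINESIS_STREAM_NAME,
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["vehicle_id"],
)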

SQS: Output connector

Simple Queue Service (SQS) is a fully managed message queuing service offered by Amazon Web Services (AWS). It enables the decoupling and scaling of microservices, distributed systems, and serverless applications by allowing the asynchronous transmission of messages between components.

How to obtain these Configurations?

SQS_QUEUE_NAME

  • Select the Queues from the navigation pane.

  • Choose the queue you want to use.

  • The queue's details page will display the Queue Name and the Amazon Resource Name (ARN).

SQS_SECRET_KEY and SQS_ACCESS_KEY

  • In the navigation pane, choose Users.

  • Select the user account for which you want to create or view access keys.

  • On the user details page, choose the Security credentials tab.

  • In the Access keys section, choose Create access key to generate a new key pair if none exists, or view the existing access keys.

  • Note: Keep the Secret Access Key secure and do not share it publicly.

SQS_REGION_NAME

  • Go to the AWS Management Console.

  • Navigate to SQS.

  • Check the region setting in the top-right corner.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
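
A comparable boto3 sketch for SQS, showing how the queue name, access keys, and region are used to send one message; all values are placeholders.

import json
import boto3

SQS_ACCESS_KEY = "AKIA..."          # placeholder access key ID
SQS_SECRET_KEY = "..."              # placeholder secret access key
SQS_REGION_NAME = "us-east-1"       # placeholder region
SQS_QUEUE_NAME = "condense-output"  # placeholder queue name

sqs = boto3.client(
    "sqs",
    region_name=SQS_REGION_NAME,
    aws_access_key_id=SQS_ACCESS_KEY,
    aws_secret_access_key=SQS_SECRET_KEY,
)

# Resolve the queue URL from its name, then send one message.
queue_url = sqs.get_queue_url(QueueName=SQS_QUEUE_NAME)["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"vehicle_id": "V123", "status": "moving"}),
)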

MQTT

MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe network protocol designed for constrained devices and low-bandwidth, high-latency, or unreliable networks. As a stream output connector, MQTT allows clients to publish messages to a broker, which then routes those messages to subscribing clients. MQTT is optimized for minimal bandwidth and device resource usage, making it ideal for IoT (Internet of Things) applications where resources are often limited.

How to obtain these Configurations?

In addition to KAFKA_TOPIC_NAME (the Kafka topic from which the connector reads data), the MQTT connector requires the following broker details (see the sketch after this list):

  • Topic Name

  • Broker URL

  • Username

  • Password

  • Client ID
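
A minimal publishing sketch with the paho-mqtt library (1.x-style constructor); the broker host, port, topic, credentials, and client ID are placeholders.

import json
import paho.mqtt.client as mqtt

BROKER_URL = "broker.example.com"   # placeholder broker host
BROKER_PORT = 8883                  # common TLS port; 1883 for plain TCP
TOPIC_NAME = "condense/telemetry"   # placeholder MQTT topic
USERNAME = "condense-user"          # placeholder username
PASSWORD = "..."                    # placeholder password
CLIENT_ID = "condense-mqtt-output"  # placeholder client ID

client = mqtt.Client(client_id=CLIENT_ID)   # paho-mqtt 1.x constructor style
client.username_pw_set(USERNAME, PASSWORD)
client.tls_set()                    # enable TLS when using port 8883
client.connect(BROKER_URL, BROKER_PORT)

# Publish one JSON payload; QoS 1 asks the broker to acknowledge delivery.
payload = json.dumps({"vehicle_id": "V123", "fuel_pct": 62})
client.publish(TOPIC_NAME, payload, qos=1)
client.disconnect()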

Store Connectors

Store-type connectors are primarily classified as output connectors for integrating the Condense Pipeline with data storage solutions, giving you control over how you manage the information collected from your IoT devices or other input connectors. This ensures the data is persisted for later analysis, visualization, or use in applications.

Condense currently supports the following store connectors:

NoSQL Databases (Highly Scalable for Large, Unstructured Data)

Bigtable: Well-suited for storing and analyzing massive datasets from real-time IoT applications.

Cassandra: A distributed NoSQL database designed for high availability and scalability, ideal for handling large-scale unstructured data.

MongoDB: A flexible, document-oriented NoSQL database that stores data in JSON-like formats, perfect for unstructured or semi-structured data.

Relational Databases (Structured Data with Defined Formats)

MySQL: For storing structured data with timestamps and values, like temperature readings or device status.

Microsoft SQL Server: A robust relational database management system (RDBMS) that supports structured data and SQL queries for enterprise applications.

Cloud-Based Data Warehouses (Scalable Analytics Platforms)

BigQuery: A powerful tool for large-scale data analysis of historical IoT data stored in Google Cloud Platform.

Time-Series Databases (Optimized for Sensor Data)

InfluxDB: Specifically designed to handle high volumes of time-series data with timestamps, making it a great fit for sensor data from IoT devices.

ClickHouse: A high-performance, columnar database optimized for real-time analytics and large-scale data processing, often used in cloud environments.

Timescale: A time-series database built on PostgreSQL, optimized for storing and querying time-series data like sensor metrics and logs.

How can you start using these connectors on the Condense App?

Bigtable: Store connector

Google Cloud Bigtable is a fully managed, scalable NoSQL database service designed for large analytical and operational workloads. It is ideal for applications that require high read and write throughput and low latency.

The Bigtable Connector facilitates the integration between Condense and Bigtable. This connector allows for efficient data ingestion from Kafka topics directly into a Bigtable instance, enabling real-time analytics and storage of streaming data.

How to obtain these Configurations?

BIGTABLE_PROJECT_ID

The ID of the Bigtable project.

  • Select your project from the project selector drop-down.

  • The Project ID is displayed in the project info panel.

BIGTABLE_INSTANCE_ID

The Instance ID of the Google Bigtable project.

  • In the Google Cloud Console, go to the Bigtable Instances page.

  • Select your Bigtable instance to view its details.

  • The Instance ID will be visible in the instance summary.

BIGTABLE_TABLE_ID

The table ID of the Google Bigtable.

  • Go to the Bigtable section in the Google Cloud Console.

  • Open your instance and navigate to the Tables section.

  • Select the table you want to use; the Table ID will be listed.

SERVICE_ACCOUNT_CREDENTIAL

The Bigtable service account credential, provided in base64 format.

  • Create a service account for Bigtable with appropriate roles (Bigtable Admin, Bigtable User).

  • Download the JSON key file for the service account.

KAFKA_TOPIC

The Kafka topic from which the data will be read to be written to the specified Bigtable instance.
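
As a sketch of how these values fit together, the snippet below base64-encodes the downloaded JSON key (the form the SERVICE_ACCOUNT_CREDENTIAL field expects) and writes one test row with the google-cloud-bigtable client; the project, instance, and table IDs, the row key, and the "telemetry" column family are placeholders.

import base64
from google.cloud import bigtable

BIGTABLE_PROJECT_ID = "my-gcp-project"    # placeholder project ID
BIGTABLE_INSTANCE_ID = "condense-bt"      # placeholder instance ID
BIGTABLE_TABLE_ID = "telemetry"           # placeholder table ID

# The connector expects the service account key as a base64 string.
with open("service-account.json", "rb") as f:
    SERVICE_ACCOUNT_CREDENTIAL = base64.b64encode(f.read()).decode("ascii")

# Write one row directly; assumes GOOGLE_APPLICATION_CREDENTIALS points at the same JSON key.
client = bigtable.Client(project=BIGTABLE_PROJECT_ID)
table = client.instance(BIGTABLE_INSTANCE_ID).table(BIGTABLE_TABLE_ID)

row = table.direct_row(b"vehicle#V123#2024-01-01T00:00:00Z")
row.set_cell("telemetry", "speed_kmph", b"54")  # "telemetry" column family is hypothetical
row.commit()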

Cassandra: Store connector

Cassandra is a highly scalable and distributed NoSQL database known for its ability to handle large amounts of data across many servers, offering high availability without a single point of failure.

How to obtain these Configurations?

Cassandra URL

The endpoint address that allows applications to connect to the Cassandra database.

  • Open your Apache Cassandra configuration file (cassandra.yaml).

  • Locate the rpc_address setting which contains the URL.

  • Alternatively, consult your system administrator for the connection URL.

Cassandra PORT

The network port through which Cassandra accepts connections, typically defaulting to 9042.

  • Open the Cassandra configuration file (cassandra.yaml).

  • Find the native_transport_port setting; the default is usually 9042.

  • Verify with your system administrator if a custom port is used.

Cassandra Keyspace

A keyspace is the top-level namespace in Cassandra that groups related tables and defines their replication settings, similar to a database in a relational system.

  • Use a CQL (Cassandra Query Language) client to connect to your Cassandra instance.

  • Execute the command DESCRIBE KEYSPACES; to list all keyspaces.

  • Choose the keyspace relevant to your project.

Cassandra Table

A structure within a keyspace that organizes data into rows and columns, functioning like a table in a relational database.

  • Connect to your Cassandra instance using a CQL client.

  • Use the command USE keyspace_name; to switch to the desired keyspace.

  • Run DESCRIBE TABLES; to view all tables within that keyspace.

Cassandra Replication Strategy

This refers to the method used to distribute data replicas across nodes. Popular strategies include SimpleStrategy for single data centre use and NetworkTopologyStrategy for multiple data centres.

  • Within a CQL client, execute DESCRIBE KEYSPACE keyspace_name;.

  • Review the output for the replication settings which detail the strategy being used.

Cassandra Replication Factor

The number of data copies maintained across the Cassandra cluster. It determines how many nodes will store copies of the same data.

  • In the CQL client, after running DESCRIBE KEYSPACE keyspace_name;, check the replication_factor in the description output.

  • This number indicates how many copies of the data are maintained across the cluster.

Cassandra Fields

These are the columns in a Cassandra table, each defined by a name and a data type.

  • Connect using a CQL client and select the desired keyspace and table.

  • Execute DESCRIBE TABLE table_name; to see all fields (columns) and their data types in the table.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
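
To sanity-check these values, here is a short cassandra-driver sketch that connects using the URL and port, switches to the keyspace, and inserts one row; the contact point, keyspace, and the vehicle_readings table and its columns are hypothetical.

from datetime import datetime
from cassandra.cluster import Cluster

CASSANDRA_URL = "10.0.0.5"        # placeholder contact point (rpc_address)
CASSANDRA_PORT = 9042             # default native transport port
CASSANDRA_KEYSPACE = "telemetry"  # placeholder keyspace

cluster = Cluster([CASSANDRA_URL], port=CASSANDRA_PORT)
session = cluster.connect(CASSANDRA_KEYSPACE)

# Insert one row; "vehicle_readings" and its columns are hypothetical fields.
session.execute(
    "INSERT INTO vehicle_readings (vehicle_id, ts, speed_kmph) VALUES (%s, %s, %s)",
    ("V123", datetime.utcnow(), 54),
)
cluster.shutdown()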

MongoDB

A flexible, document-oriented NoSQL database that stores data in JSON-like formats, perfect for unstructured or semi-structured data.

How to obtain these Configurations?

MongoDB URL

This is the connection string used to connect to your MongoDB server, typically including the server's address and port.

  • Access your MongoDB configuration or consult the system administrator.

  • The URL is often formatted like mongodb://<hostname>:<port>.

MongoDB Database Name

The specific database within your MongoDB instance that you want to access or manage.

  • Once you're connected to the MongoDB server, you can retrieve the list of databases by using the show dbs command.

MongoDB Collection Name

Collections in MongoDB are equivalent to tables in relational databases and hold the actual data records.

  • Connect to your desired database using a MongoDB client.

  • Execute show collections in the MongoDB shell to list all collections within the database.

MongoDB Fields

These are the individual pieces of data stored within documents in a collection, similar to columns in a SQL table.

  • Use a MongoDB client to access the database and collection.

  • Run a sample query findOne() to view a document, which will display the fields and their current data structure.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
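
A minimal pymongo sketch that connects with the MongoDB URL, selects the database and collection, and inserts one document; the connection string, names, and sample document are placeholders.

from pymongo import MongoClient

MONGODB_URL = "mongodb://10.0.0.6:27017"  # placeholder connection string
DATABASE_NAME = "telemetry"               # placeholder database name
COLLECTION_NAME = "vehicle_readings"      # placeholder collection name

client = MongoClient(MONGODB_URL)
collection = client[DATABASE_NAME][COLLECTION_NAME]

# Insert one document; the fields mirror whatever the pipeline produces.
collection.insert_one({"vehicle_id": "V123", "speed_kmph": 54})
print(collection.find_one({"vehicle_id": "V123"}))  # verify the write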

MySQL: Store connector

A popular open-source relational database management system (RDBMS) used for storing and managing data. It functions as an output connector within the Condense Pipeline for IoT applications.

How to obtain these Configurations?

Mysql Host

The server address where the MySQL database is hosted.

  • Check the configuration files of your application.

  • Ask your system administrator.

  • Review hosting provider documentation.

Mysql Database Name

The specific database within the MySQL server you are connecting to.

  • Query the MySQL server: SHOW DATABASES;

  • Check the configuration files of your application.

  • Ask your database administrator.

Mysql Port

The network port the MySQL server listens on, typically 3306.

  • Check the configuration files of your MySQL server.

  • Use the command: SHOW VARIABLES LIKE 'port';

  • Ask your system administrator.

Mysql User

The username used to authenticate and connect to the MySQL server.

  • Check configuration files of your application.

  • Query the MySQL server: SELECT user FROM mysql.user;

  • Ask your database administrator.

Mysql Password

The password associated with the MySQL user account.

  • Check configuration files of your application.

  • Ask your database administrator.

  • Note: For security reasons, passwords are usually not stored in plaintext and should be handled securely.

Mysql Table Name

The specific table within the MySQL database you are interacting with.

  • Query the MySQL database: SHOW TABLES;

  • Check configuration files of your application.

  • Ask your database administrator.

Mysql Fields

The columns or fields within the specified MySQL table.

  • Query the MySQL table: DESCRIBE table_name;

  • Check the documentation or schema design of your database.

  • Ask your database administrator.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
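
The same values plug into a mysql-connector-python sketch like the one below, which connects and inserts one row; the host, credentials, and the vehicle_readings table and its columns are placeholders.

import mysql.connector

conn = mysql.connector.connect(
    host="10.0.0.7",            # Mysql Host (placeholder)
    port=3306,                  # Mysql Port
    user="condense",            # Mysql User (placeholder)
    password="...",             # Mysql Password (placeholder)
    database="telemetry",       # Mysql Database Name (placeholder)
)

cursor = conn.cursor()
# "vehicle_readings" and its columns stand in for Mysql Table Name / Fields.
cursor.execute(
    "INSERT INTO vehicle_readings (vehicle_id, speed_kmph) VALUES (%s, %s)",
    ("V123", 54),
)
conn.commit()
cursor.close()
conn.close()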

MS SQL: Store connector

Microsoft SQL Server (MSSQL) is a widely used relational database management system (RDBMS) known for its reliability, scalability, and robust feature set. In the context of Condense Pipelines for Internet of Things (IoT) applications, MSSQL functions as an output connector.

How to obtain these Configurations?

Mssql Host

This is the hostname or IP address of the server where your Microsoft SQL Server instance is running.

  1. Login to the SQL Server Management Studio (SSMS).

  2. Check the server name at the top of the Object Explorer panel. This is your MSSQL Host.

  3. If you are working in a local environment, it could be localhost or 127.0.0.1.

Mssql Database Name

The name of the database where your data resides.

  1. Open SQL Server Management Studio.

  2. Connect to your SQL Server instance.

  3. Expand the "Databases" node in the Object Explorer panel to see the list of databases.

Mssql Port

The port number that SQL Server is listening on.

  1. Open SQL Server Configuration Manager.

  2. Navigate to SQL Server Network Configuration > Protocols for [INSTANCE_NAME].

  3. Right-click on TCP/IP and select "Properties".

  4. Go to the "IP Addresses" tab and scroll down to the "IPAll" section to see the port number in the "TCP Port" field; it is usually 1433.

Mssql User

The username you use to connect to the SQL Server database.

  1. This is typically created during the SQL Server installation or can be created via SQL Server Management Studio.

  2. To view or create a user, open SQL Server Management Studio, and go to Security > Logins.

Mssql Password

The password associated with the MSSQL User.

  1. This is set by the database administrator during user setup.

  2. If you forget it, you may need to contact your database administrator to reset it.

Mssql Table Name

The name of the table within your database that contains the data you are interested in.

  1. Open SQL Server Management Studio and connect to your database.

  2. Expand the "Databases" node and then the specific database.

  3. Expand the "Tables" folder to see the list of tables.

Mssql Fields

These are the column names or fields within your table that you need to interact with.

  1. Open SQL Server Management Studio.

  2. Navigate to the table of interest.

  3. Right-click the table and select "Design" to view all the fields (columns) in the table.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
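
For MSSQL, the equivalent sketch uses pyodbc with the Microsoft ODBC driver; the host, port, database, credentials, and table below are placeholders.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=10.0.0.8,1433;"        # Mssql Host and Port (placeholders)
    "DATABASE=telemetry;"          # Mssql Database Name (placeholder)
    "UID=condense;"                # Mssql User (placeholder)
    "PWD=...;"                     # Mssql Password (placeholder)
)

cursor = conn.cursor()
# "vehicle_readings" and its columns stand in for Mssql Table Name / Fields.
cursor.execute(
    "INSERT INTO vehicle_readings (vehicle_id, speed_kmph) VALUES (?, ?)",
    ("V123", 54),
)
conn.commit()
conn.close()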

InfluxDB: Store connector

InfluxDB is a time-series database designed to handle high write and query loads. It is useful for handling metrics, events, and time-series data.

How to obtain these Configurations?

InfluxDB URL

The InfluxDB URL is the endpoint used to connect to your InfluxDB instance.

  1. Identify the InfluxDB instance you want to connect to.

  2. If using a local or self-hosted InfluxDB:

    Default URL: http://localhost:8086

  3. If using InfluxDB Cloud:

    • Log in to your InfluxDB Cloud account.

    • Navigate to your organization and select the desired instance.

    • The URL provided in the instance details will be your InfluxDB URL. InfluxDB OSS is accessed at localhost:8086 by default, but you can also customize your InfluxDB host and port.

InfluxDB Organization Name

The organization name is the identifier for different user groups within the InfluxDB instance, helping to segregate and manage resources and permissions.

  1. Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.

  2. Navigate to the "Organizations" tab.

  3. Note the organization name listed or create a new one if necessary.

InfluxDB Authentication Token

The authentication token is used to access the InfluxDB API securely. It grants permissions based on the roles assigned and is necessary for any interaction with the database.

  1. Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.

  2. Navigate to the "Tokens" or "API Tokens" section under the "Data" or "Settings" tab.

  3. Generate a new token or use an existing one. Ensure it has the necessary permissions for the required operations.

  4. Copy the token for use in your configurations.

InfluxDB Bucket Name

Buckets in InfluxDB are logical containers for time-series data, similar to tables in traditional databases. Data is written to and queried from buckets.

  1. Log in to your InfluxDB Cloud account or self-hosted InfluxDB web interface.

  2. Navigate to the "Buckets" section.

  3. View the list of existing buckets or create a new one as needed.

  4. Note the name of the bucket for use in your configurations.

KAFKA_TOPIC_NAME

This key defines the name of the Kafka topic from which the connector will read data. It also serves as the reference point for recording the progress of the consumer within the topic (known as offsets). Offsets track which messages have already been processed, ensuring the connector doesn't re-consume them.
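
These four values map onto the influxdb-client (v2 API) as in the sketch below, which writes one point to the configured bucket; the URL, organization, token, bucket, and measurement names are placeholders.

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

INFLUXDB_URL = "http://localhost:8086"   # placeholder (default OSS endpoint)
ORG_NAME = "condense-org"                # placeholder organization name
AUTH_TOKEN = "..."                       # placeholder authentication token
BUCKET_NAME = "telemetry"                # placeholder bucket name

client = InfluxDBClient(url=INFLUXDB_URL, token=AUTH_TOKEN, org=ORG_NAME)
write_api = client.write_api(write_options=SYNCHRONOUS)

# Write one time-series point; measurement, tag, and field names are illustrative.
point = Point("vehicle_readings").tag("vehicle_id", "V123").field("speed_kmph", 54.0)
write_api.write(bucket=BUCKET_NAME, record=point)
client.close()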

Reference Documentation

  • Google Pub/Sub: Official Documentation from Google Cloud

  • Amazon Kinesis: Official Documentation and Developer Documentation from Amazon

  • Amazon SQS: Official Documentation and Developer Documentation from Amazon; the Amazon SQS console is at https://console.aws.amazon.com/sqs/ and the IAM console is at https://console.aws.amazon.com/iam/

  • MQTT: Official Documentation

  • Google Bigtable: Official Documentation, API Documentation, and Developer Documentation from Google, plus the Google Cloud Console

  • Apache Cassandra: Official Documentation

  • MongoDB: Official Documentation

  • MySQL: Official Documentation and links to download MySQL Connectors

  • Microsoft SQL Server (MSSQL): Official Documentation

  • InfluxDB: Official Documentation