# Bridge device data to Confluent Cloud using Data Integrations

In this article, we will simulate temperature and humidity data, report it to EMQX Cloud over the MQTT protocol, and then use EMQX Cloud Data Integrations to bridge the data into Confluent Cloud.

Before you start, make sure you have completed the following:

  • You have created a deployment (an EMQX cluster) on EMQX Cloud.

  • This feature is available for Professional deployments.

  • There are three types of Confluent Cloud clusters you can choose from:

    • For Basic and Standard clusters, please enable the NAT gateway first.
    • For Dedicated clusters, please complete the peering connection creation first. All IPs mentioned below refer to the intranet IP of the resource.

# Confluent Cloud Configuration

# Create a cluster

  • Log in to the Confluent Cloud console and create a cluster.

  • In this example, we select a Dedicated cluster.


  • Select the region/zones (make sure the EMQX Cloud deployment region matches the Confluent Cloud region).


  • Select VPC Peering for networking, so that the cluster can be accessed only through a VPC peering connection.


  • Specify a CIDR block for the cluster and click Continue.

  • Choose how to manage the encryption key based on your needs.


  • After binding your payment card, you are ready to launch the cluster.

# Manage the cluster using Confluent Cloud CLI

Now that you have a cluster up and running in Confluent Cloud, you can manage it using the Confluent Cloud CLI. Here are some basic commands you can use with the Confluent Cloud CLI.

# Install the Confluent Cloud CLI

```bash
curl -L --http1.1 https://cnfl.io/ccloud-cli | sh -s -- -b /usr/local/bin
```
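
To confirm that the CLI is installed, you can check its version:

```bash
ccloud version
```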

If you already have the CLI installed, you can update it with:

```bash
ccloud update
```

# Log in to your account

```bash
ccloud login --save
```
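
The `--save` flag stores your login credentials in a local `.netrc` file, so subsequent commands won't prompt you to log in again.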

# Select the environment

```bash
ccloud environment use env-v9y0p
```
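
If you don't know the environment ID, you can list all available environments first:

```bash
ccloud environment list
```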

# Select the cluster

```bash
ccloud kafka cluster use lkc-djr31
```
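
Similarly, you can list the clusters in the current environment to find the cluster ID:

```bash
ccloud kafka cluster list
```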

# Use an API key and secret

If you have an existing API key that you'd like to use, add it to the CLI with:

```bash
ccloud api-key store --resource lkc-djr31
Key: <API_KEY>
Secret: <API_SECRET>
```

If you don't have an API key and secret, you can create one with:

```bash
ccloud api-key create --resource lkc-djr31
```
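
The key and secret are displayed only once at creation time, so store them in a safe place.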

After adding them to the CLI, you can select the API key and secret to use with:

```bash
ccloud api-key use <API_KEY> --resource lkc-djr31
```

# Create a topic

```bash
ccloud kafka topic create topic-name
```

You can check the topic list with:

```bash
ccloud kafka topic list
```
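
For this tutorial, you can create the `emqx` topic that the data integration will write to later:

```bash
ccloud kafka topic create emqx
```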

# Produce messages to the topic

```bash
ccloud kafka topic produce topic-name
```

# Consume messages from the topic

```bash
ccloud kafka topic consume -b topic-name
```
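
The `-b` flag consumes from the beginning of the topic. As a quick end-to-end check, you could pipe a record into the producer in one terminal and watch it arrive on the consumer in another (assuming the producer reads newline-delimited records from stdin):

```bash
echo '{"hello": "world"}' | ccloud kafka topic produce topic-name
```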

# Build VPC Peering Connection with the deployment

After the cluster has been created, we need to add a peering connection.

  • Go to the Networking section of the Cluster settings page and click on the Add Peering button.


  • Fill in the VPC information. (You can get this information from the VPC Peering section of the deployment console.)


  • When the connection status is Inactive, go back to the deployment console to accept the peering request. Fill in the VPC information of the Confluent Cloud cluster and click Confirm. When the VPC status turns to Running, the VPC peering connection has been created successfully.


# Deployment Data Integrations Configuration

Go to the Data Integrations page of the deployment console.

  1. Create a Kafka resource and verify that it is available.

    On the Data Integrations page, click Kafka resources, fill in the Kafka connection details, and then click Test. If the test fails, please check the Kafka service.

  2. Click the New button after the test has passed, and you will see the Create Resource Successfully message.


  3. Create a new rule

    Put the following SQL statement in the SQL input field. The rule SQL will retrieve the message report time (up_timestamp), client ID, and message body (payload) from the temp_hum/emqx topic, and read the device's ambient temperature and humidity from the message body.

    ```sql
    SELECT
    timestamp as up_timestamp,
    clientid as client_id,
    payload.temp as temp,
    payload.hum as hum
    FROM
    "temp_hum/emqx"
    ```
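    For example, with a hypothetical payload of `{"temp": 25.6, "hum": 62.1}` published to `temp_hum/emqx`, the rule output would look like the following (the timestamp and client ID values are illustrative):

    ```
    # payload published to temp_hum/emqx
    {"temp": 25.6, "hum": 62.1}

    # rule SQL output
    {"up_timestamp": 1665372000000, "client_id": "temp_hum_device_1", "temp": 25.6, "hum": 62.1}
    ```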

  4. Rule SQL Testing

    To see if the rule SQL fulfills our requirements, click SQL test and fill in the test payload, topic, and client information.


  5. Add Action to Rule

    Once the SQL test succeeds, click Next to add a Kafka forwarding action to the rule. To demonstrate how to bridge the data reported by the device to Kafka, we'll use the following Kafka topic and message template.

    ```bash
    # kafka topic
    emqx

    # kafka message template
    {"up_timestamp": ${up_timestamp}, "client_id": ${client_id}, "temp": ${temp}, "hum": ${hum}}
    ```
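
    Each `${...}` placeholder in the template is filled with the corresponding field selected by the rule SQL, so for the hypothetical payload above, the record written to the `emqx` topic would be:

    ```
    {"up_timestamp": 1665372000000, "client_id": "temp_hum_device_1", "temp": 25.6, "hum": 62.1}
    ```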


  6. After successfully binding the action to the rule, click View Details to see the rule SQL statement and the bound actions.


  7. To see the created rules, go to Data Integrations/View Created Rules. Click the Monitor button to see the detailed match data of the rule.


# Test

  1. Use MQTTX to simulate temperature and humidity data reporting.

    You need to replace broker.emqx.io with the connection address of your deployment, and add the client authentication information in the EMQX Dashboard.
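
    If you prefer a command-line client, here is a minimal sketch using `mosquitto_pub` (the address, port, and credentials are placeholders for your deployment's values):

    ```bash
    mosquitto_pub -h <deployment-address> -p 1883 \
      -u <username> -P <password> \
      -t temp_hum/emqx \
      -m '{"temp": 25.6, "hum": 62.1}'
    ```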

  2. View data bridging results

    ```bash
    # Go to the Kafka instance and view the emqx topic
    $ docker exec -it mykafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <broker IP>:9092 --topic emqx --from-beginning
    ```
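
    Since the cluster is hosted on Confluent Cloud, you can also consume the topic directly with the CLI commands introduced earlier:

    ```bash
    ccloud kafka topic consume -b emqx
    ```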
