Migrating from HiveMQ to EMQX
This guide explains how to migrate an existing HiveMQ deployment to EMQX. It focuses on the common enterprise pattern where devices connect over TLS (port 8883) with either X.509 client certificates or username/password credentials managed by the HiveMQ Enterprise Security Extension (ESE). The objective is to replicate the connectivity, authentication, and data-integration behavior defined in config.xml and related HiveMQ extension files using EMQX’s HOCON-based configuration and dynamic rule engine.
Migration at a Glance
The migration can be treated as three phases:
- Inventory HiveMQ Assets – Collect the TLS keystores, config.xml, ESE files, and extension properties that define listeners, authentication, clustering, and data pipelines.
- Configure EMQX – Translate HiveMQ settings into EMQX HOCON, convert keystores to PEM, recreate listener and cluster settings, and set up the authentication chain and rule engine.
- Update Devices & Integrations – Point devices to the EMQX endpoint, deploy the EMQX server CA, validate client identities, and migrate downstream integrations such as Kafka or Prometheus.
| Parameter / Artifact | HiveMQ (Example) | EMQX (Example) | Notes |
|---|---|---|---|
| Endpoint hostname | mqtt.internal.example.com (as configured in load balancer / Control Center) | mqtt.example.com (EMQX load balancer / VIP) | Update device firmware or deployment manifests. |
| TLS assets | conf/hivemq.jks | /etc/emqx/certs/server-cert.pem, /etc/emqx/certs/server-key.pem | Convert JKS/PKCS12 to PEM with keytool + openssl. |
| Client authentication | ESE file realm (credentials.xml) | authentication = [{mechanism = password_based, backend = built_in_database}] | Import user list via REST API or Dashboard. |
| Client certificates | Stored as PEM in device fleet, validated by HiveMQ when mTLS enabled | Same device certs, EMQX listener ssl_options.cacertfile = "device-ca.pem" | No reprovisioning needed if devices already use your CA. |
| Cluster discovery | DNS or extensions/*-discovery*/*.properties | cluster.discovery_strategy = dns (or static, etcd, k8s) | Replace extension-based discovery with native EMQX strategy. |
| Kafka integration | extensions/hivemq-kafka-extension/kafka-configuration.xml | EMQX connectors + rules + actions (SELECT ... FROM "device/+/data") | Use EMQX Data Bridge instead of Java transformers. |
| Rate limiting / restrictions | <restrictions> block + Overload Protection | listeners.*.max_connections, messages_rate, bytes_rate, limiter.* | Configure per-listener quotas and global limiters. |
Phase 1: Inventory HiveMQ Configuration Artifacts
1. Collect TLS Keystores and Convert Them
- Locate the keystore referenced in <tls-tcp-listener> (for example, /opt/hivemq/conf/hivemq.jks).
- Export the server certificate and key:
keytool -importkeystore \
-srckeystore /opt/hivemq/conf/hivemq.jks \
-destkeystore /tmp/hivemq.p12 \
-deststoretype PKCS12
openssl pkcs12 -in /tmp/hivemq.p12 -nodes -nokeys -out /tmp/server-cert.pem
openssl pkcs12 -in /tmp/hivemq.p12 -nodes -nocerts -out /tmp/server-key.pem
- Copy the resulting PEM files into /etc/emqx/certs/ (or your container secret mount). Keep the device CA (device-ca.pem) that HiveMQ trusted for client certificates; EMQX will reuse it.
2. Export HiveMQ Configuration Files
- conf/config.xml: Listeners, restrictions, clustering, persistence, Control Center users.
- conf/logback.xml: Logging targets (translate to the EMQX log section).
- extensions/<name>/conf/*.xml or .properties: Discovery, Kafka, Prometheus, custom auth.
- extensions/hivemq-enterprise-security-extension/enterprise-security-extension.xml: Authentication realms and pipelines.
- Any credentials.xml or custom user stores referenced by the ESE.
Store these artifacts in version control for traceability. Highlight environment-variable placeholders (e.g., ${ENV:HIVEMQ_PORT}) so they can be remapped to EMQX’s double-underscore environment override syntax (EMQX_LISTENERS__TCP__DEFAULT__BIND=0.0.0.0:1883).
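EMQX's double-underscore override name can be derived mechanically from a HOCON key path. The small helper below is illustrative (it is not part of EMQX) and may be useful when remapping HiveMQ's ${ENV:...} placeholders in bulk:

```python
def to_emqx_env_var(hocon_path: str, value: str) -> str:
    """Convert a HOCON key path to EMQX's double-underscore env override.

    Dots become double underscores, the name is upper-cased, and the
    whole thing is prefixed with EMQX_.
    """
    name = "EMQX_" + hocon_path.replace(".", "__").upper()
    return f"{name}={value}"

print(to_emqx_env_var("listeners.tcp.default.bind", "0.0.0.0:1883"))
# EMQX_LISTENERS__TCP__DEFAULT__BIND=0.0.0.0:1883
```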
3. Classify Authentication Modes
Determine which of the following patterns you are using:
- Username/password via file realm or SQL realm.
- X.509 client certificates (mTLS) with CN = client ID.
- Hybrid (e.g., TLS + SASL plugin).
Each path maps to a specific EMQX authenticator chain.
Phase 2: Configure EMQX to Mirror HiveMQ Baseline
2.1 Recreate MQTT Listeners
Translate each <tcp-listener>, <tls-tcp-listener>, <websocket-listener> and <tls-websocket-listener> element into HOCON.
For example, given this HiveMQ configuration that defines three listeners:
<hivemq>
<listeners>
<tcp-listener>
<port>1883</port>
<bind-address>0.0.0.0</bind-address>
</tcp-listener>
<tls-tcp-listener>
<port>8883</port>
<bind-address>0.0.0.0</bind-address>
<tls>
<keystore>
<path>/opt/hivemq/conf/keystore.jks</path>
<password>password</password>
<private-key-password>pkpassword</private-key-password>
</keystore>
<truststore>
<path>/opt/hivemq/conf/truststore.jks</path>
<password>password</password>
</truststore>
<client-authentication-mode>NONE</client-authentication-mode>
</tls>
</tls-tcp-listener>
<tls-websocket-listener>
<port>8084</port>
<bind-address>0.0.0.0</bind-address>
<path>/mqtt</path>
<subprotocols>
<subprotocol>mqttv3.1</subprotocol>
<subprotocol>mqtt</subprotocol>
</subprotocols>
<tls>
<keystore>
<path>/opt/hivemq/conf/keystore.jks</path>
<password>hivemq</password>
</keystore>
<truststore>
<path>/opt/hivemq/conf/truststore.jks</path>
<password>hivemq</password>
</truststore>
</tls>
</tls-websocket-listener>
</listeners>
</hivemq>

Translate to this EMQX configuration snippet:
listeners.tcp.default {
bind = "0.0.0.0:1883"
}
listeners.ssl.default {
bind = "0.0.0.0:8883"
ssl_options {
certfile = "/etc/certs/server-cert.pem"
keyfile = "/etc/certs/server-key.pem"
}
}
listeners.wss.default {
  bind = "0.0.0.0:8084"
mqtt_path = "/mqtt"
ssl_options {
certfile = "/etc/certs/server-cert.pem"
keyfile = "/etc/certs/server-key.pem"
}
}

To convert truststore.jks and keystore.jks to PEM, follow the steps in Phase 1, Step 1.
2.2 Map MQTT Configuration Options
HiveMQ defines MQTT protocol limits in the <mqtt> block of config.xml. For example:
<queued-messages>
<max-queue-size>1000</max-queue-size>
<strategy>discard</strategy>
</queued-messages>
<topic-alias>
<enabled>true</enabled>
<max-per-client>5</max-per-client>
</topic-alias>
<message-expiry>
<max-interval>4294967296</max-interval>
</message-expiry>
<session-expiry>
<max-interval>4294967295</max-interval>
</session-expiry>
<packets>
<max-packet-size>268435460</max-packet-size>
</packets>
<receive-maximum>
<server-receive-maximum>10</server-receive-maximum>
</receive-maximum>
<quality-of-service>
<max-qos>2</max-qos>
</quality-of-service>
<wildcard-subscriptions>
<enabled>true</enabled>
</wildcard-subscriptions>
<shared-subscriptions>
<enabled>true</enabled>
</shared-subscriptions>
<subscription-identifier>
<enabled>true</enabled>
</subscription-identifier>
<retained-messages>
<enabled>true</enabled>
</retained-messages>

Corresponding EMQX configuration:
mqtt {
max_mqueue_len = 1000
mqueue_priorities = disabled
max_topic_alias = 5
message_expiry_interval = infinity # 4294967296 in HiveMQ
session_expiry_interval = infinity
max_packet_size = "256MB"
max_inflight = 10
max_qos_allowed = 2
wildcard_subscription = true
shared_subscription = true
retain_available = true
# subscription_identifier is enabled by default
}

2.3 Map the <restrictions> Block
HiveMQ collects global limits under <restrictions>. EMQX splits these values between the global mqtt section and each listener.
Here is an example HiveMQ configuration:
<restrictions>
<max-client-id-length>65535</max-client-id-length>
<max-connections>-1</max-connections>
<incoming-bandwidth-throttling>0</incoming-bandwidth-throttling>
<no-connect-idle-timeout>10000</no-connect-idle-timeout>
</restrictions>

And the corresponding EMQX configuration snippet:
listeners.ssl.default {
bind = "0.0.0.0:8883"
max_connections = infinity
bytes_rate = "0" # 'incoming-bandwidth-throttling'
bytes_burst = "0"
}
mqtt {
max_clientid_len = 65535
idle_timeout = "10s" # no-connect-idle-timeout
}

2.4 Configure Clustering
Replace HiveMQ's discovery extensions and other discovery methods with EMQX's native strategies.
For example, this cluster configuration:
<cluster>
<enabled>true</enabled>
<transport>
<tcp>
<bind-address>127.0.0.1</bind-address>
<bind-port>7800</bind-port>
</tcp>
</transport>
<discovery>
<static>
<node>
<host>127.0.0.1</host>
<port>7800</port>
</node>
<node>
<host>127.0.0.1</host>
<port>7801</port>
</node>
</static>
</discovery>
</cluster>

This corresponds to the following EMQX configuration:
cluster {
discovery_strategy = static
static {
seeds = [
"emqx1@127.0.0.1",
"emqx2@127.0.0.1"
]
}
}

EMQX automatically assigns the Erlang distribution port when more than one node runs on the same machine; there is no need to select a bind-port manually.
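If you script the translation, the seeds list can be derived from HiveMQ's static node list. The helper below is a sketch that assumes the default emqx<N>@host node-name convention; adjust it to your actual EMQX node names:

```python
def hivemq_nodes_to_seeds(hosts: list[str]) -> list[str]:
    """Build EMQX static-discovery seeds (name@host) from HiveMQ
    static-discovery hosts, assuming nodes are named emqx1, emqx2, ..."""
    return [f"emqx{i}@{host}" for i, host in enumerate(hosts, start=1)]

print(hivemq_nodes_to_seeds(["127.0.0.1", "127.0.0.1"]))
# ['emqx1@127.0.0.1', 'emqx2@127.0.0.1']
```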
For alternative discovery methods (etcd, Kubernetes, static files, etc.), see Create and Manage Cluster.
2.5 Translate Authentication & Authorization
HiveMQ manages security through the Enterprise Security Extension (ESE), which defines Realms (data sources) and Pipelines (logic), or through legacy plugins. EMQX uses Authentication Chains (ordered backends) and Authorization sources (ACLs).
| HiveMQ ESE Component | EMQX Equivalent | Migration Strategy |
|---|---|---|
| File Realm (credentials.xml) | Built-in Database | Export users from HiveMQ, import via EMQX REST API. |
| SQL Realm (JDBC) | MySQL / PostgreSQL | Configure password-based authentication with the MySQL or PostgreSQL backend. Reuse existing user tables. |
| LDAP Realm / AD | LDAP | Configure password-based authentication with the LDAP backend. Map HiveMQ DN patterns to EMQX filter templates. |
| OAuth / JWT | JWT | Configure the JWT authentication mechanism with public keys or a JWKS endpoint. |
| HTTP / Webhooks | HTTP Server | Configure password-based authentication with the HTTP backend to delegate credentials to your external auth service. |
| X.509 Certs | X.509 / mTLS | Use TLS listener and mutual (two-way) authentication, reuse existing CA and client certificates. |
2.5.1. Migrating File Realm Users
Source: HiveMQ conf/credentials.xml (encrypted/hashed). Destination: EMQX Built-in Database.
- Export: Extract users from the HiveMQ File Realm (credentials.xml). This file typically contains hashed passwords and salts. You will need to parse this XML to generate a JSON or CSV import file for EMQX.
- Import: Use the EMQX REST API to create users. EMQX supports bulk import of users with password hashes (e.g., bcrypt, pbkdf2). See Importing Users for file format details.
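The export step can be scripted. The exact XML layout of credentials.xml varies by ESE version, so the sketch below assumes a simplified, hypothetical <users>/<user> layout and only illustrates the shape of the transformation into EMQX's bulk-import fields (user_id, password_hash, salt, is_superuser); adapt the tag names to your actual file:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical, simplified credentials.xml layout for illustration only.
XML = """
<users>
  <user><name>device-001</name><password-hash>c2FsdA==:aGFzaA==</password-hash></user>
  <user><name>device-002</name><password-hash>c2FsdA==:aGFzaA==</password-hash></user>
</users>
"""

def to_emqx_users(xml_text: str) -> list[dict]:
    """Parse the simplified XML above into EMQX bulk-import records."""
    root = ET.fromstring(xml_text)
    users = []
    for user in root.findall("user"):
        # Assumed "salt:hash" encoding; split into EMQX's separate fields.
        salt_b64, hash_b64 = user.findtext("password-hash").split(":")
        users.append({
            "user_id": user.findtext("name"),
            "password_hash": hash_b64,
            "salt": salt_b64,
            "is_superuser": False,
        })
    return users

print(json.dumps(to_emqx_users(XML), indent=2))
```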
# Example: Import a user with a plain password
curl -u admin:public -X POST \
  http://emqx-node:18083/api/v5/authentication/password_based:built_in_database/users \
  -H "Content-Type: application/json" \
  -d '{"user_id":"device-001","password":"StrongPass!"}'

2.5.2. Migrating External Integrations (SQL, LDAP, HTTP)
Translate your enterprise-security-extension.xml pipelines into EMQX HOCON authentication blocks.
Example: SQL Realm to EMQX MySQL
HiveMQ uses a fixed database schema for its SQL realm. In contrast, EMQX allows you to define your own schema and queries. This means you do not need to modify your existing MySQL or PostgreSQL database. The EMQX configuration example below uses a query (SELECT password_hash, salt ...) that is specifically adapted to work with the standard HiveMQ users table structure.
Use the following EMQX configuration to authenticate against your existing HiveMQ MySQL database without modifying the schema:
authentication = [
{
mechanism = "password_based"
backend = "mysql"
server = "127.0.0.1:3306"
database = "mqtt"
username = "root"
password = ""
query = "SELECT password_hash, salt FROM users WHERE username = ${username}"
password_hash_algorithm {
name = "sha256"
salt_position = "suffix"
}
}
]

Example: LDAP Realm
authentication = [
{
mechanism = "password_based"
backend = "ldap"
server = "ldap.example.com:636"
ssl {
enable = true
}
method {
type = bind
bind_password = "${password}"
}
username = "root"
password = "root password"
base_dn = "uid=${username},ou=testdevice,dc=emqx,dc=io"
filter = "(objectClass=mqttUser)"
}
]

2.5.3. Migrating Authorization (ACLs)
HiveMQ defines access policies in enterprise-security-extension.xml (File Realm) or external databases. EMQX uses a flexible Authorization Chain supporting multiple backends simultaneously (File, Redis, MySQL, PostgreSQL, MongoDB, HTTP, etc).
HiveMQ XML Policy:
<permission>
<topic>device/${clientid}/#</topic>
<activity>ALL</activity>
</permission>

EMQX Equivalent:
- File (acl.conf): {allow, all, all, ["device/${clientid}/#"]}. (HiveMQ's ALL activity maps to the acl.conf action all.)
- Built-in Database: Configure rules via Dashboard or API based on Client ID, Username, or Topic.
- MySQL: SELECT action, permission, topic, ipaddress, qos, retain FROM mqtt_acl WHERE clientid = ${clientid} AND ipaddress = ${peerhost}.
- PostgreSQL: SELECT action, permission, topic, ipaddress, qos, retain FROM mqtt_acl WHERE clientid = ${clientid} AND ipaddress = ${peerhost}.
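A policy like the one above can be translated mechanically. The helper below is an illustrative sketch (the ALL → all mapping is assumed from acl.conf's action names publish, subscribe, and all), useful if you have many <permission> entries to convert:

```python
import xml.etree.ElementTree as ET

# Assumed mapping from HiveMQ <activity> values to acl.conf actions.
ACTIVITY_TO_ACTION = {"ALL": "all", "PUBLISH": "publish", "SUBSCRIBE": "subscribe"}

def permission_to_acl(xml_text: str) -> str:
    """Render one HiveMQ <permission> element as an acl.conf entry."""
    perm = ET.fromstring(xml_text)
    topic = perm.findtext("topic")
    action = ACTIVITY_TO_ACTION[perm.findtext("activity")]
    return f'{{allow, all, {action}, ["{topic}"]}}.'

print(permission_to_acl(
    "<permission><topic>device/${clientid}/#</topic>"
    "<activity>ALL</activity></permission>"
))
# {allow, all, all, ["device/${clientid}/#"]}.
```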
For more information, please refer to the Authorization documentation.
2.6 Configure Data Integration
HiveMQ relies on individual extensions (e.g., Kafka Extension) for data integration. In EMQX, all data integrations are built-in and enabled out of the box.
Before configuring specific bridges, familiarize yourself with EMQX's core data-integration concepts: Connectors (connection details for the external system), Rules (SQL-based message processing), and Actions/Sources (how processed data is written to or read from the external system).
2.6.1 Example: Migrating Kafka Extension
HiveMQ Kafka Extension Configuration
<kafka-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="config.xsd">
<kafka-clusters>
<kafka-cluster>
<id>cluster01</id>
<bootstrap-servers>127.0.0.1:9092</bootstrap-servers>
</kafka-cluster>
</kafka-clusters>
<mqtt-to-kafka-mappings>
<mqtt-to-kafka-mapping>
<id>mapping01</id>
<cluster-id>cluster01</cluster-id>
<mqtt-topic-filters>
<mqtt-topic-filter>#</mqtt-topic-filter>
</mqtt-topic-filters>
<kafka-topic>emqx</kafka-topic>
</mqtt-to-kafka-mapping>
</mqtt-to-kafka-mappings>
<kafka-to-mqtt-mappings>
<kafka-to-mqtt-mapping>
<id>mapping02</id>
<cluster-id>cluster01</cluster-id>
<kafka-topics>
<kafka-topic>topic1</kafka-topic>
<kafka-topic>topic2</kafka-topic>
</kafka-topics>
</kafka-to-mqtt-mapping>
</kafka-to-mqtt-mappings>
</kafka-configuration>

EMQX Equivalent
connectors {
kafka_producer {
cluster01 {
bootstrap_hosts = "127.0.0.1:9092"
enable = true
}
}
kafka_consumer {
cluster01 {
bootstrap_hosts = "127.0.0.1:9092"
enable = true
}
}
}
actions {
kafka_producer {
mapping01 {
connector = "cluster01"
enable = true
parameters {
message {
value = "${.}"
}
topic = "emqx"
}
}
}
}
rule_engine {
rules {
mqtt-to-kafka-mapping-mapping01 {
sql = "SELECT * FROM '#'"
actions = [
"kafka_producer:mapping01"
]
enable = true
}
kafka-to-mqtt-mapping-mapping02 {
actions = [
{
args {
topic = "kafka"
}
function = "republish"
}
]
enable = true
sql = "SELECT * FROM '$bridges/kafka_consumer:cluster01-topic1','$bridges/kafka_consumer:cluster01-topic2'"
}
}
}
sources {
kafka_consumer {
cluster01-topic1 {
connector = "cluster01"
parameters {
topic = "topic1"
}
enable = true
}
cluster01-topic2 {
connector = "cluster01"
parameters {
topic = "topic2"
}
enable = true
}
}
}

2.7 Configure Observability
2.7.1 Prometheus
HiveMQ uses the "Prometheus Monitoring HiveMQ Extension". EMQX comes with native Prometheus support.
The endpoint for Prometheus to scrape metrics is enabled by default: http://emqx-node:18083/api/v5/prometheus/stats.
If you want to use Pushgateway, it can be configured as follows:
prometheus {
push_gateway {
enable = true
url = "http://127.0.0.1:9091"
}
}

Check out the Integrate with Prometheus guide for more details.
2.7.2 Logging
HiveMQ uses logback.xml (Java standard). EMQX uses a built-in logging facility configured in HOCON.
HiveMQ (logback.xml):
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%-30(%d %level)- %msg%n%ex</pattern>
</encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${hivemq.log.folder}/hivemq.log</file>
<append>true</append>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${hivemq.log.folder}/hivemq.%d{yyyy-MM-dd}.log</fileNamePattern>
<!-- keep 30 days' worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%-30(%d %level)- %msg%n%ex</pattern>
</encoder>
</appender>

EMQX Equivalent:
log {
file {
default {
enable = true
level = warning
path = "/var/log/emqx/emqx.log"
rotation_count = 30
rotation_size = "50MB"
}
}
console {
enable = true
level = warning
}
}See Logs for configuring log levels, rotation, and formatters (text/JSON).
2.7.3 Tracing
HiveMQ's "Trace Recordings" are often used to debug specific client sessions. EMQX provides a built-in Trace feature (Dashboard or CLI) to filter logs for specific Client IDs, Topics, or IPs in real-time.
Start a trace for a specific client:
emqx ctl trace start client device-001 trace.log

See Trace for advanced debugging.
Phase 3: Update Devices and Integrations
3.1 Deploy EMQX Server CA to Devices
- If EMQX uses an internal CA, install emqx-server-ca.pem on each device (system trust store or application bundle).
- If EMQX uses a public CA (e.g., Let’s Encrypt), no device action is needed.
3.2 Update Device Connection Parameters
Example (mqtt-cli)
# Before (HiveMQ)
mqtt pub -h mqtt.internal.example.com -p 8883 \
-u device-001 -pw StrongPass! \
--cafile AmazonRootCA1.pem --topic device/001/data --message test
# After (EMQX)
mqtt pub -h mqtt.example.com -p 8883 \
-u device-001 -pw StrongPass! \
--cafile emqx-server-ca.pem --topic device/001/data --message test

Example (Python paho-mqtt with mTLS)
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="device-001")
client.tls_set(
    ca_certs="certs/emqx-server-ca.pem",
    certfile="certs/device-001.cert.pem",
    keyfile="certs/device-001.key.pem",
    tls_version=ssl.PROTOCOL_TLS_CLIENT
)
client.connect("mqtt.example.com", 8883)

Only the endpoint hostname and server CA file change. Device certificates and private keys continue to work if they were signed by the same CA referenced in EMQX ssl_options.cacertfile.
3.3 Validate Integrations
- Ensure Kafka topics receive messages by checking EMQX rule metrics (emqx ctl rules list).
- Update monitoring dashboards to scrape EMQX metrics.
- Reconfigure alerting systems (Splunk, ELK) to parse EMQX log format.
Advanced Migration Scenarios
Retained Messages and Sessions
HiveMQ persistence files cannot be imported directly. Use a migration script:
- Keep HiveMQ running temporarily.
- Run a bridge client that subscribes to # on HiveMQ and republishes retained messages to EMQX.
- For queued QoS 1/2 messages, complete in-flight transactions before switching DNS.
Shared Subscriptions
HiveMQ’s $share/group/topic syntax is fully supported by EMQX. If you previously used $queue/topic, map it to $share/queue/topic. Tune broker.shared_subscription_strategy (e.g., round_robin, hash_clientid) to mimic the load-balancing behavior your consumers expect.
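When regenerating device subscription lists, the $queue-to-$share rewrite can be automated. A minimal sketch:

```python
def migrate_shared_topic(topic: str) -> str:
    """Rewrite the legacy $queue/ prefix to EMQX's $share/queue/ form;
    standard $share/group/topic filters pass through unchanged."""
    prefix = "$queue/"
    if topic.startswith(prefix):
        return "$share/queue/" + topic[len(prefix):]
    return topic

print(migrate_shared_topic("$queue/device/data"))
# $share/queue/device/data
```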
HTTP/API-Driven Configuration
HiveMQ relies on static XML plus extension-specific reload semantics. EMQX offers dynamic configuration APIs:
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-type: application/json" \
-X PUT "http://emqx-node:18083/api/v5/listeners/ssl:default" \
-d '{"type": "ssl", "bind": "0.0.0.0:8883", "id": "ssl:default", "max_connections": 200000}'This writes to data/configs/cluster.hocon. Decide whether to keep configuration immutable (only emqx.conf) or adopt EMQX’s dual-layer model for per-environment overrides.
Validation Checklist
- All EMQX listeners report running (emqx ctl listeners).
- TLS handshakes succeed for valid clients and fail when no client certificate is provided (for mTLS devices).
- Device IDs in EMQX sessions match the original HiveMQ client IDs.
- ACLs enforce the same topic access you enforced in HiveMQ.
- Cluster nodes auto-heal after simulated network partitions.
- Kafka integration receives data without transformation regressions.
- Metrics are visible in Prometheus.
Conclusion
Migrating from HiveMQ to EMQX is primarily a configuration translation exercise: convert Java-centric artifacts (XML, JKS, extensions) into EMQX’s HOCON configuration, flexible Authentication Chains, and the Data Integration framework. By following the three phases—inventory, configure, and update—you can preserve device credentials, topic structures, and integration flows while gaining EMQX’s high-concurrency Erlang runtime and dynamic configuration capabilities. Plan the cutover, validate each listener and integration, and execute the migration with confidence.