meethigher - Configure Debezium Connector for Oracle


Configure Debezium Connector for Oracle

1. Pitfall Collection

Environment

  1. Operating system: CentOS 7.9
  2. Oracle version: oracle-database-ee-19c-1.0-1.x86_64.rpm
  3. Zookeeper version: apache-zookeeper-3.7.0-bin.tar.gz
  4. Kafka version: kafka_2.12-2.7.0.tgz

Reference Article

  1. Debezium Connector for Oracle :: Debezium Documentation
  2. Apache Kafka
  3. The framework for real-time monitoring synchronized database changes is magic
  4. Introduction to Debezium | Baeldung
  5. debeziumEmbedded: Self-written access to database using debezium
  6. docker installation oracle19c
  7. The PDB database for oracle 12c is not open
  8. Completion Log for oracle - Supplemental Logging
  9. oracle archive log mode and non-archive log mode
  10. Migration of Oracle database's non-archive mode to archive mode
  11. Oracle login ORA-01033: Solutions for ORACLE initializing or closing
  12. Debezium grabs data from oracle to kafka_
  13. Kafka Connect
  14. Introduction and deployment of Kafka connection
  15. Keyword: Oracle lrm-00109: could not open parameter file'/opt/oracle - adodo1
  16. Kafka uses Debezium to synchronize Oracle data in real time | BlackC
  17. ORA-00942: No solution exists for tables or views
  18. oracle - Maven including ocijdbc19 in java.library.path - Stack Overflow
  19. JDBC drives the difference between oci and thin
  20. Error while fetching metadata with correlation id: Handle posture correctly
2. Monitoring Oracle

Debezium provides two ways to monitor a database, corresponding to two different ways of connecting to Oracle.

  • LogMiner: essentially the JDBC thin driver; pure Java, platform independent.
  • XStream API: essentially the JDBC OCI driver, implemented by calling the OCI client's C dynamic library.

Quoting the official description:

The JDBC Thin driver is a pure Java, Type IV driver that can be used in applications and applets. It is platform-independent and does not require any additional Oracle software on the client-side. The JDBC Thin driver communicates with the server using SQL*Net to access Oracle Database.

The JDBC Thin driver allows a direct connection to the database by providing an implementation of SQL*Net on top of Java sockets. The driver supports the TCP/IP protocol and requires a TNS listener on the TCP/IP sockets on the database server.

The JDBC OCI driver is a Type II driver used with Java applications. It requires an Oracle client installation and, therefore, is Oracle platform-specific. It supports all installed Oracle Net adapters, including interprocess communication (IPC), named pipes, TCP/IP, and Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX).

The JDBC OCI driver, written in a combination of Java and C, converts JDBC invocations to calls to OCI, using native methods to call C-entry points. These calls communicate with the database using SQL*Net.

The JDBC OCI driver uses the OCI libraries, C-entry points, Oracle Net, core libraries, and other necessary files on the client computer where it is installed.
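As a concrete illustration of the difference, the two drivers use different JDBC URL schemes. The sketch below only builds the URL strings (the host and service name are the ones used later in this article; adjust them for your environment):

```java
public class JdbcUrlForms {

    // Thin driver URL: pure Java, talks SQL*Net directly over TCP sockets,
    // no Oracle client software needed on the machine running Java.
    static String thinUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }

    // OCI driver URL: delegates to the native OCI client libraries, which must
    // be installed (e.g. Oracle Instant Client) and visible via LD_LIBRARY_PATH.
    static String ociUrl(String host, int port, String service) {
        return "jdbc:oracle:oci:@//" + host + ":" + port + "/" + service;
    }

    public static void main(String[] args) {
        System.out.println(thinUrl("192.168.10.132", 1521, "ORCLCDB"));
        System.out.println(ociUrl("192.168.10.132", 1521, "ORCLCDB"));
    }
}
```

This is why the LogMiner adapter works anywhere Java runs, while the XStream adapter additionally requires the Instant Client setup described in section 2.2.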

The following steps assume Oracle 19c is already installed; for installation, see Centos8 Install Oracle19c.

2.1 LogMiner

Switch to user oracle

su - oracle

Connect to Oracle and change the sys password so that it matches the one used in the Debezium documentation examples.

sqlplus / as sysdba
connect / as sysdba
alter user sys identified by top_secret;
exit;

Enable archive log mode for the database

sqlplus / as sysdba
connect sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now show "Database log mode: Archive Mode"
archive log list
exit;

Switch to the root user, create the db_recovery_file_dest folder, grant permissions, and switch back to the oracle user

su root
mkdir /opt/oracle/oradata/recovery_area
chmod 777 /opt/oracle/oradata/recovery_area
su oracle

7 denotes the r (read), w (write), and x (execute) permissions combined.

777 grants rwx permissions to the file owner, users in the same group, and all other users.

Enable minimal supplemental logging at the database level; the first statement below enables it and the second removes it.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

If you only want to enable supplemental logging for a single table (such as STUINFO), use the following.

If the ALTER statement fails even though the table exists, first run select * from C##TEST.STUINFO to confirm the table is accessible; if Oracle reports that the table does not exist, double-check the schema and table name.

ALTER TABLE C##TEST.STUINFO ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Create the user and grant permissions

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba

CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE USER c##dbzuser IDENTIFIED BY dbz
  DEFAULT TABLESPACE logminer_tbs
  QUOTA UNLIMITED ON logminer_tbs
  CONTAINER=ALL;

GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL;
GRANT LOGMINING TO c##dbzuser CONTAINER=ALL;
GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL;
GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT ALTER ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL;
exit;

Create a table and enable supplemental logging on it

sqlplus / as sysdba
conn c##dbzuser/dbz;
CREATE TABLE STU (
  "s_id" INT PRIMARY KEY,
  "s_name" VARCHAR ( 255 )
);
ALTER TABLE C##DBZUSER.STU ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
exit;

With the steps above complete, you can monitor the database using either the Java API or a kafka-connector. Of the two, using the Java API directly is much simpler.

Java API

Create SpringBoot Project

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.1.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.debezium</groupId>
            <artifactId>debezium-api</artifactId>
            <version>1.6.2.Final</version>
        </dependency>
        <dependency>
            <groupId>io.debezium</groupId>
            <artifactId>debezium-embedded</artifactId>
            <version>1.6.2.Final</version>
        </dependency>
        <dependency>
            <groupId>io.debezium</groupId>
            <artifactId>debezium-connector-mysql</artifactId>
            <version>1.6.2.Final</version>
        </dependency>
        <dependency>
            <groupId>io.debezium</groupId>
            <artifactId>debezium-connector-oracle</artifactId>
            <version>1.6.2.Final</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.ojdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <version>19.3.0.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Create logback.xml under resources

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <Target>System.out</Target>
        <encoder>
            <pattern>%-5p [%d][%mdc] %C:%L - %m %n</pattern>
            <charset>utf-8</charset>
        </encoder>
        <!-- For development: only a threshold level is configured; the console
             outputs log entries at or above this level -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>
    <root level="info">
        <!-- Remove the stdout appender in production -->
        <appender-ref ref="stdout"/>
    </root>
</configuration>

Create OracleDebezium_19c Class

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;
import io.debezium.relational.history.FileDatabaseHistory;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OracleDebezium_19c {

    public static void main(String[] args) {
        // 1. Build the configuration
        Properties props = genProps();
        // 2. Business-logic part: build the engine
        DebeziumEngine<ChangeEvent<String, String>> engine = engineBuild(props);
        // 3. Run it
        runSoftware(engine);
    }

    // Generate the configuration for connecting to Oracle
    private static Properties genProps() {
        Properties props = new Properties();
        props.setProperty("name", "oracle-engine-0033");
        props.setProperty("connector.class", "io.debezium.connector.oracle.OracleConnector");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        // File in which offsets are stored
        props.setProperty("offset.storage.file.filename", "D:\\temp\\oracle4.txt");
        // Interval at which offsets are flushed to disk
        props.setProperty("offset.flush.interval.ms", "6000");
        // Database connection information
        props.setProperty("database.hostname", "192.168.10.132");
        props.setProperty("database.port", "1521");
        props.setProperty("database.user", "C##DBZUSER");
        props.setProperty("database.password", "dbz");
        props.setProperty("database.server.id", "85701");
        props.setProperty("table.include.list", "C##DBZUSER.STU");
        props.setProperty("database.history", FileDatabaseHistory.class.getCanonicalName());
        props.setProperty("database.history.file.filename", "D:\\temp\\oracle4.txt");
        // Must be unique; change it for each run
        props.setProperty("database.server.name", "my-oracle-connector-0023");
        // Instance name (CDB mode)
        props.setProperty("database.dbname", "ORCLCDB");
        // Whether to output schema information
        props.setProperty("key.converter.schemas.enable", "false");
        props.setProperty("value.converter.schemas.enable", "false");
        props.setProperty("database.serverTimezone", "UTC"); // time zone
        props.setProperty("database.connection.adapter", "logminer"); // adapter mode
        // Kafka-related configuration
        /*props.setProperty("database.history.kafka.bootstrap.servers", "192.168.131.130:9092");
        props.setProperty("database.history.kafka.topic", "oracle.history");*/
        return props;
    }

    // Start running the engine
    public static void runSoftware(DebeziumEngine<ChangeEvent<String, String>> engine) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }

    // Build the engine
    public static DebeziumEngine<ChangeEvent<String, String>> engineBuild(Properties props) {
        // Use the JSON format
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine
                .create(Json.class)
                .using(props)
                .notifying(record -> {
                    // The record carries the operation type (insert/update/delete) and the data
                    System.out.println("record.key() = " + record.key());
                    System.out.println("record.value() = " + record.value());
                })
                .using((success, message, error) -> {
                    // Strongly recommended: this callback makes error information visible
                    if (!success && error != null) {
                        System.out.println("----------error------");
                        System.out.println(message);
                        error.printStackTrace();
                    }
                })
                .build();
        return engine;
    }
}

Start the project. If the startup steps in the log complete without errors, the engine started successfully.

In the database, insert a row into the table being monitored.

A change event is then printed to the log, which indicates that monitoring is working.
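Inside the notifying callback, record.value() carries the change as JSON. Debezium marks the operation type in the op field: c (create), u (update), d (delete), r (snapshot read). A minimal, dependency-free sketch of picking that field out is shown below; a real application would use a JSON library such as Jackson, and the sample payload is illustrative (real events also carry a full source block):

```java
public class ChangeEventOps {

    // Crude extraction of the one-character "op" field from a Debezium JSON value.
    // Illustrative only; use a proper JSON parser in real code.
    static String extractOp(String json) {
        int i = json.indexOf("\"op\":\"");
        if (i < 0) return null;
        int start = i + "\"op\":\"".length();
        return json.substring(start, start + 1);
    }

    public static void main(String[] args) {
        // Rough shape of a value produced with value.converter.schemas.enable=false
        String insertEvent = "{\"before\":null,\"after\":{\"s_id\":1,\"s_name\":\"tom\"},"
                + "\"op\":\"c\",\"ts_ms\":1633957890000}";
        System.out.println(extractOp(insertEvent)); // prints "c" (an insert)
    }
}
```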

kafka-connector

Extract Java, Zookeeper, and Kafka to /opt/module; Java also needs its environment variables configured.

Go to the Maven Central Repository (io/debezium/debezium-connector-oracle) and download the plugin version you need, such as debezium-connector-oracle-1.6.2.Final-plugin.tar.gz.

Create a folder to store the kafka-connector plugin

mkdir /opt/kafka-plugin

Unzip the downloaded plugin and copy all of its contents into both /opt/kafka-plugin and Kafka's libs directory.

Go to Oracle Instant Client Downloads and download the Basic Package (ZIP) for your operating system.

Unzip it and copy ojdbc8.jar into Kafka's libs directory.

Configure kafka-connector

cd /opt/module/kafka_2.12-2.7.0/
vi config/connect-distributed.properties

Set plugin.path to the kafka-plugin directory you just created, then save.

plugin.path=/opt/kafka-plugin

With that, the configuration is done.

Enter the Zookeeper directory, copy the sample configuration file, and start Zookeeper.

cp conf/zoo_sample.cfg conf/zoo.cfg
bin/zkServer.sh start

Enter the Kafka directory and start Kafka first; once it is up, start Kafka Connect.

bin/kafka-server-start.sh config/server.properties
bin/connect-distributed.sh config/connect-distributed.properties

Open a browser or Postman and send a GET request to port 8083; version information should be returned.

Send a POST request to ip:8083/connectors with a configuration JSON to register a connector.

{
  "name": "stu2",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.server.name": "server2",
    "database.hostname": "192.168.10.132",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.dbname": "ORCLCDB",
    "table.include.list": "C##DBZUSER.STU2",
    "database.history.kafka.bootstrap.servers": "192.168.10.132:9092",
    "database.history.kafka.topic": "schema-changes.stu2"
  }
}

The kafka-connector automatically generates a Kafka topic, typically named serverName.schemaName.tableName. Characters that Kafka does not allow in topic names, such as #, are converted to _, so server2.C##DBZUSER.STU2 becomes server2.C__DBZUSER.STU2. You can confirm the actual topic name by watching the log carefully when registering the connector.
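Kafka only permits letters, digits, dots, underscores, and hyphens in topic names. The renaming above can be sketched as a simple character substitution (this is an illustration of the observed behavior, not Debezium's actual implementation):

```java
public class TopicNameSanitizer {

    // Replace every character Kafka does not allow in a topic name with '_'.
    // Illustration only; not Debezium's actual code.
    static String sanitize(String name) {
        return name.replaceAll("[^a-zA-Z0-9._-]", "_");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("server2.C##DBZUSER.STU2")); // server2.C__DBZUSER.STU2
    }
}
```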

Go to the Kafka directory and list all topics in Kafka.

bin/kafka-topics.sh --list --zookeeper 192.168.10.132:2181

Consume the topic to verify that database changes are being captured.

bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.132:9092 --topic server2.C__DBZUSER.STU2

If change data appears in the consumer output, monitoring is working!

2.2 XStream API

Switch to user oracle

su - oracle

Connect to Oracle and change the sys password so that it matches the one used in the Debezium documentation examples.

sqlplus / as sysdba
connect / as sysdba
alter user sys identified by top_secret;
exit;

Turn on archiving mode

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 5G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now show "Database log mode: Archive Mode"
archive log list
exit;

Switch to the root user, create the db_recovery_file_dest folder, grant permissions, and switch back to the oracle user

su root
mkdir /opt/oracle/oradata/recovery_area
chmod 777 /opt/oracle/oradata/recovery_area
su oracle

7 denotes the r (read), w (write), and x (execute) permissions combined.

777 grants rwx permissions to the file owner, users in the same group, and all other users.

Enable minimal supplemental logging at the database level as follows.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

If you only want to enable supplemental logging for a single table (such as STUINFO), use the following.

If the ALTER statement fails even though the table exists, first run select * from C##TEST.STUINFO to confirm the table is accessible; if Oracle reports that the table does not exist, double-check the schema and table name.

ALTER TABLE C##TEST.STUINFO ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Configure XStream admin user

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_adm_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba

CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_adm_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE USER c##dbzadmin IDENTIFIED BY dbz
  DEFAULT TABLESPACE xstream_adm_tbs
  QUOTA UNLIMITED ON xstream_adm_tbs
  CONTAINER=ALL;

GRANT CREATE SESSION, SET CONTAINER TO c##dbzadmin CONTAINER=ALL;

BEGIN
  DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee                 => 'c##dbzadmin',
    privilege_type          => 'CAPTURE',
    grant_select_privileges => TRUE,
    container               => 'ALL'
  );
END;
/
exit;

Create XStream User

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba

CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE USER c##dbzuser IDENTIFIED BY dbz
  DEFAULT TABLESPACE xstream_tbs
  QUOTA UNLIMITED ON xstream_tbs
  CONTAINER=ALL;

GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
exit;

Create XStream Outbound Server

sqlplus c##dbzadmin/dbz@//localhost:1521/ORCLCDB

DECLARE
  tables  DBMS_UTILITY.UNCL_ARRAY;
  schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
  tables(1)  := NULL;
  schemas(1) := 'debezium';
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name  => 'dbzxout',
    table_names  => tables,
    schema_names => schemas);
END;
/
exit;

Configure the XStream user account to connect to the XStream outbound server

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

BEGIN
  DBMS_XSTREAM_ADM.ALTER_OUTBOUND(
    server_name  => 'dbzxout',
    connect_user => 'c##dbzuser');
END;
/
exit;

Create a table and enable supplemental logging on it

sqlplus / as sysdba
conn c##dbzuser/dbz;
CREATE TABLE STU (
  "s_id" INT PRIMARY KEY,
  "s_name" VARCHAR ( 255 )
);
ALTER TABLE C##DBZUSER.STU ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
exit;

With the steps above complete, you can monitor the database using either the Java API or a kafka-connector. Of the two, using the Java API directly is much simpler.

Java API

To be completed

kafka-connector

Inexplicable result: no data was captured, and no errors were reported.

Extract Java, Zookeeper, and Kafka to /opt/module; Java also needs its environment variables configured.

Go to the Maven Central Repository (io/debezium/debezium-connector-oracle) and download the plugin version you need, such as debezium-connector-oracle-1.6.2.Final-plugin.tar.gz.

Create a folder to store the kafka-connector plugin

mkdir /opt/kafka-plugin

Unzip the downloaded plugin and copy all of its contents into both /opt/kafka-plugin and Kafka's libs directory.

Go to Oracle Instant Client Downloads and download the Basic Package (ZIP) for your operating system.

Unzip it and copy ojdbc8.jar and xstream.jar into Kafka's libs directory.

Configure the decompressed Instant Client directory as an environment variable.

vim /etc/profile.d/

Export LD_LIBRARY_PATH and save

export LD_LIBRARY_PATH=/opt/instantclient_19_12

Refresh environment variables

source /etc/profile

Configure kafka-connector

cd /opt/module/kafka_2.12-2.7.0/
vi config/connect-distributed.properties

Set plugin.path to the kafka-plugin directory you just created, then save.

plugin.path=/opt/kafka-plugin

With that, the configuration is done.

Enter the Zookeeper directory, copy the sample configuration file, and start Zookeeper.

cp conf/zoo_sample.cfg conf/zoo.cfg
bin/zkServer.sh start

Enter the Kafka directory and start Kafka first; once it is up, start Kafka Connect.

bin/kafka-server-start.sh config/server.properties
bin/connect-distributed.sh config/connect-distributed.properties

Open a browser or Postman and send a GET request to port 8083; version information should be returned.

Send a POST request to ip:8083/connectors with a configuration JSON to register a connector.

{
  "name": "stu2",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.server.name": "server6",
    "database.hostname": "192.168.10.131",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.dbname": "ORCLCDB",
    "table.include.list": "C##DBZUSER.STU",
    "database.history.kafka.bootstrap.servers": "192.168.10.131:9092",
    "database.history.kafka.topic": "schema-changes.stu2",
    "database.connection.adapter": "xstream",
    "database.out.server.name": "dbzxout"
  }
}

The kafka-connector automatically generates a Kafka topic, typically named serverName.schemaName.tableName. Characters that Kafka does not allow in topic names, such as #, are converted to _, so server2.C##DBZUSER.STU2 becomes server2.C__DBZUSER.STU2. You can confirm the actual topic name by watching the log carefully when registering the connector.

Go to the Kafka directory and list all topics in Kafka.

bin/kafka-topics.sh --list --zookeeper 192.168.10.132:2181

Consume the topic to verify that database changes are being captured.

bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.132:9092 --topic server2.C__DBZUSER.STU2

12 October 2021, 13:01 | Views: 2693
