Kafka Core API - AdminClient API

The Five Kinds of Kafka Client: Roles and Differences

The previous article described how to set up a Kafka service, so how do we access and integrate Kafka in development? This requires the Kafka client APIs described in this article. The official documentation provides a diagram illustrating the types of clients that can integrate with Kafka:

These clients integrate with Kafka through APIs. Kafka's five client API types are as follows:

  • AdminClient API: Allows managing and inspecting Topics, brokers, and other Kafka objects, similar to Kafka's own script commands
  • Producer API: Publishes messages to one or more Topics; the API needed by producers or publishers
  • Consumer API: Subscribes to one or more Topics and processes the resulting messages; the API needed by consumers or subscribers
  • Stream API: Efficiently transforms input streams into output streams; often used in stream processing scenarios
  • Connector API: Pulls data from a source system or application (such as a database) into Kafka

Create a project

The following sections demonstrate the specific use of the AdminClient API; the remaining APIs will be covered in subsequent articles. First, we create a Spring Boot project in IDEA. The contents of its pom.xml file are as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.0.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.zj.study</groupId>
    <artifactId>kafka-study</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>kafka-study</name>
    <description>Kafka study project for Spring Boot</description>

    <properties>
        <java.version>11</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- Kafka Client Dependency -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

Create the AdminClient

Obviously, the precondition for any operation with the AdminClient API is to create an AdminClient instance. Code example:

/**
 * Configure and create AdminClient
 */
public static AdminClient adminClient() {
    Properties properties = new Properties();
    // Configure the access address and port number of the Kafka service
    properties.setProperty(AdminClientConfig.
            BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");

    // Create an AdminClient instance
    return AdminClient.create(properties);
}
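Note that AdminClient implements AutoCloseable and holds network connections, so in real code it should be closed when no longer needed; the demo methods in this article create a fresh instance each time and omit the close for brevity. A minimal sketch of the try-with-resources form (the method name is illustrative):

/**
 * A sketch of using AdminClient with try-with-resources so that the
 * underlying connections and threads are always released
 */
public static void withAdminClient() throws ExecutionException, InterruptedException {
    try (AdminClient adminClient = adminClient()) {
        // Any of the operations shown in later sections can run here
        System.out.println(adminClient.listTopics().names().get());
    } // close() is invoked automatically here
}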

Once we have created an AdminClient instance, we can operate on Kafka using the methods it provides. The common ones are as follows:

  • createTopics: Creates one or more Topics
  • listTopics: Queries the list of Topics
  • deleteTopics: Deletes one or more Topics
  • describeTopics: Queries the description information of Topics
  • describeConfigs: Queries the configuration items of Topics, Brokers, and so on
  • alterConfigs: Modifies the configuration items of Topics, Brokers, and so on (deprecated in newer versions)
  • incrementalAlterConfigs: Also modifies configuration items of Topics, Brokers, and so on, but is more functional and flexible, replacing alterConfigs
  • createPartitions: Adjusts the number of Partitions of a Topic; the count can only be increased, never reduced, i.e. the new Partition count must be greater than the current one

Tips:

  • describeTopics and describeConfigs are mainly used for monitoring; many components that monitor Kafka rely on these two APIs, because they provide detailed information about a Topic and its surroundings.

Create Topic

A Topic can be created using the createTopics method; the parameters passed in are the same as those of the kafka-topics.sh command script. Code example:

/**
 * Create topic
 */
public static void createTopic() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    // Topic name
    String name = "MyTopic3";
    // Number of Partitions
    int numPartitions = 1;
    // Replication factor (number of replicas)
    short replicationFactor = 1;
    NewTopic topic = new NewTopic(name, numPartitions, replicationFactor);
    CreateTopicsResult result = adminClient.createTopics(List.of(topic));
    // Brief pause so the creation completes before the client connection goes away
    Thread.sleep(500);
    // Get the number of Partitions set for the Topic
    System.out.println(result.numPartitions(name).get());
}
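NewTopic also supports per-topic configuration overrides through its configs method, which plays the same role as the --config flag of kafka-topics.sh. A minimal sketch (the topic name and config values here are purely illustrative):

/**
 * A sketch of creating a Topic with per-topic configuration overrides
 */
public static void createTopicWithConfig() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    NewTopic topic = new NewTopic("MyCompactedTopic", 1, (short) 1)
            // Overrides applied to this Topic only; names and values are illustrative
            .configs(Map.of(
                    "cleanup.policy", "compact",
                    "retention.ms", "86400000"
            ));
    // Block until the broker confirms the creation
    adminClient.createTopics(List.of(topic)).all().get();
}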

View Topic List

The listTopics method queries the list of Topics; optional settings can be supplied through the ListTopicsOptions parameter. Code example:

/**
 * Query Topic List
 */
public static void topicLists() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    ListTopicsResult result1 = adminClient.listTopics();
    // Print Topic Name
    System.out.println(result1.names().get());
    // Print Topic information
    System.out.println(result1.listings().get());

    ListTopicsOptions options = new ListTopicsOptions();
    // Whether to also list Kafka's internal Topics
    options.listInternal(true);
    ListTopicsResult result2 = adminClient.listTopics(options);
    System.out.println(result2.names().get());
}

About the listInternal option:

The listInternal option is not available in Kafka 0.x, because in 0.x Kafka stored consumer offsets in Zookeeper. Since Zookeeper synchronized consumer offsets slowly, later versions migrated them into an internal Kafka Topic (__consumer_offsets), which also improved throughput and performance.
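Each entry returned by listings() is a TopicListing, whose isInternal method tells internal Topics such as __consumer_offsets apart from user-created ones. A minimal sketch:

/**
 * A sketch that prints each Topic together with its internal flag
 */
public static void printInternalFlag() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    ListTopicsOptions options = new ListTopicsOptions();
    // Include internal Topics in the listing
    options.listInternal(true);
    for (TopicListing listing : adminClient.listTopics(options).listings().get()) {
        System.out.println(listing.name() + ", internal: " + listing.isInternal());
    }
}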

Delete Topic

The deleteTopics method deletes one or more Topics. Code example:

/**
 * Delete Topic
 */
public static void delTopics() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    DeleteTopicsResult result = adminClient.deleteTopics(List.of("MyTopic1"));
    System.out.println(result.all().get());
}
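Deleting a Topic that does not exist causes the returned future to fail: get() throws an ExecutionException whose cause is an UnknownTopicOrPartitionException (from org.apache.kafka.common.errors). A minimal sketch of handling this case (the method name is illustrative):

/**
 * A sketch of deleting a Topic while tolerating the not-found case
 */
public static void delTopicSafely(String name) throws InterruptedException {
    AdminClient adminClient = adminClient();
    try {
        adminClient.deleteTopics(List.of(name)).all().get();
    } catch (ExecutionException e) {
        // The broker-side error arrives wrapped in an ExecutionException
        if (e.getCause() instanceof UnknownTopicOrPartitionException) {
            System.out.println("Topic " + name + " does not exist");
        } else {
            throw new RuntimeException(e.getCause());
        }
    }
}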

View Topic Description

A Topic has its own description information, such as the number of Partitions, the number of replicas, whether it is an internal Topic, and so on. AdminClient provides the describeTopics method to query this information. Code example:

/**
 * Query Topic's Description Information
 */
public static void describeTopics() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    DescribeTopicsResult result = adminClient.describeTopics(List.of("MyTopic"));
    Map<String, TopicDescription> descriptionMap = result.all().get();
    descriptionMap.forEach((key, value) ->
            System.out.println("name: " + key + ", desc: " + value));
}

The output is as follows:

name: MyTopic, desc: (name=MyTopic, internal=false, partitions=(partition=0, leader=127.0.0.1:9092 (id: 0 rack: null), replicas=127.0.0.1:9092 (id: 0 rack: null), isr=127.0.0.1:9092 (id: 0 rack: null)), authorizedOperations=null)
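Rather than printing the whole TopicDescription, its fields can also be read individually: partitions() returns one TopicPartitionInfo per Partition, exposing the leader, replicas, and in-sync replicas. A minimal sketch:

/**
 * A sketch that reads individual fields from a TopicDescription
 */
public static void printTopicDetail() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    TopicDescription desc = adminClient.describeTopics(List.of("MyTopic"))
            .all().get().get("MyTopic");
    System.out.println("internal: " + desc.isInternal());
    for (TopicPartitionInfo p : desc.partitions()) {
        System.out.println("partition: " + p.partition()
                + ", leader: " + p.leader()
                + ", isr: " + p.isr());
    }
}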

View Topic Configuration

In addition to Kafka's own configuration, each Topic within Kafka also has many configuration items of its own. We can retrieve a Topic's configuration through the describeConfigs method. Code example:

/**
 * Query Topic's configuration information
 */
public static void describeConfig() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    ConfigResource configResource = new ConfigResource(
            ConfigResource.Type.TOPIC, "MyTopic"
    );
    DescribeConfigsResult result = adminClient.describeConfigs(List.of(configResource));
    Map<ConfigResource, Config> map = result.all().get();
    map.forEach((key, value) ->
            System.out.println("name: " + key.name() + ", desc: " + value));
}

The output is as follows; since every configuration item is printed, there is quite a lot of content:

name: ConfigResource(type=TOPIC, name='MyTopic'), desc: Config(entries=[ConfigEntry(name=compression.type, value=producer, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=leader.replication.throttled.replicas, value=, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=message.downconversion.enable, value=true, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=min.insync.replicas, value=1, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=segment.jitter.ms, value=0, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=cleanup.policy, value=delete, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=flush.ms, value=9223372036854775807, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=follower.replication.throttled.replicas, value=, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=segment.bytes, value=1073741824, source=STATIC_BROKER_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=retention.ms, value=604800000, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=flush.messages, value=9223372036854775807, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=message.format.version, value=2.5-IV0, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=max.compaction.lag.ms, value=9223372036854775807, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=file.delete.delay.ms, value=60000, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=max.message.bytes, value=1048588, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=min.compaction.lag.ms, value=0, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=message.timestamp.type, value=CreateTime, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=preallocate, value=false, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=min.cleanable.dirty.ratio, value=0.5, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=index.interval.bytes, value=4096, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=unclean.leader.election.enable, value=false, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=retention.bytes, value=-1, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=delete.retention.ms, value=86400000, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=segment.ms, value=604800000, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=message.timestamp.difference.max.ms, value=9223372036854775807, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ConfigEntry(name=segment.index.bytes, value=10485760, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[])])
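Most of these entries are broker defaults. Each ConfigEntry carries its source, so the output can be narrowed to the items that were explicitly set, for example by filtering out ConfigSource.DEFAULT_CONFIG. A minimal sketch:

/**
 * A sketch that prints only the non-default configuration items of a Topic
 */
public static void describeNonDefaultConfig() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    ConfigResource configResource = new ConfigResource(
            ConfigResource.Type.TOPIC, "MyTopic"
    );
    Config config = adminClient.describeConfigs(List.of(configResource))
            .all().get().get(configResource);
    config.entries().stream()
            // Skip entries whose value is just the cluster-wide default
            .filter(entry -> entry.source() != ConfigEntry.ConfigSource.DEFAULT_CONFIG)
            .forEach(entry -> System.out.println(entry.name() + " = " + entry.value()));
}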

Modify Topic Configuration

In addition to viewing a Topic's configuration items, AdminClient also provides methods to modify their values. In earlier versions, the alterConfigs method was used for this. Code example:

/**
 * Modify Topic's configuration information
 */
public static void alterConfig() throws Exception {
    // Specify the type and name of ConfigResource
    ConfigResource configResource = new ConfigResource(
            ConfigResource.Type.TOPIC, "MyTopic"
    );
    // Configuration item exists as ConfigEntry
    Config config = new Config(List.of(
            new ConfigEntry("preallocate", "true")
    ));

    AdminClient adminClient = adminClient();
    Map<ConfigResource, Config> configMaps = new HashMap<>();
    configMaps.put(configResource, config);
    AlterConfigsResult result = adminClient.alterConfigs(configMaps);
    System.out.println(result.all().get());
}

public static void main(String[] args) throws Exception {
    alterConfig();
    describeConfig();
}

Execute the above code; in the console output you can see that the value of the preallocate configuration item has been successfully changed to true.

In newer versions, the incrementalAlterConfigs method is used instead to modify a Topic's configuration items. It is slightly more complex to use than alterConfigs, but correspondingly more flexible. Code example:

/**
 * Modify Topic's configuration information
 */
public static void incrementalAlterConfig() throws Exception {
    // Specify the type and name of ConfigResource
    ConfigResource configResource = new ConfigResource(
            ConfigResource.Type.TOPIC, "MyTopic"
    );
    // Configuration items are still expressed as ConfigEntry, now wrapped in an
    // AlterConfigOp that adds an operation type; multiple configuration items can
    // be changed at once, which makes this API more functional and flexible
    Collection<AlterConfigOp> configs = List.of(
            new AlterConfigOp(
                    new ConfigEntry("preallocate", "false"),
                    AlterConfigOp.OpType.SET
            )
    );

    AdminClient adminClient = adminClient();
    Map<ConfigResource, Collection<AlterConfigOp>> configMaps = new HashMap<>();
    configMaps.put(configResource, configs);
    AlterConfigsResult result = adminClient.incrementalAlterConfigs(configMaps);
    System.out.println(result.all().get());
}

public static void main(String[] args) throws Exception {
    incrementalAlterConfig();
    describeConfig();
}

  • Tips: In some versions, the incrementalAlterConfigs method may have problems, such as poor support for single-instance Kafka, where configuration items cannot be modified successfully; in that case the alterConfigs method can be used instead, which is why both methods are demonstrated here.

Execute the above code; in the console output you can see that the value of the preallocate configuration item has been successfully changed back to false.
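Besides SET, AlterConfigOp defines other operation types: DELETE removes a Topic-level override so the item falls back to its default, and APPEND and SUBTRACT operate on list-valued configuration items. A minimal sketch of DELETE (the value is ignored; only the config name matters):

/**
 * A sketch of reverting a configuration item to its default via DELETE
 */
public static void resetConfigToDefault() throws Exception {
    ConfigResource configResource = new ConfigResource(
            ConfigResource.Type.TOPIC, "MyTopic"
    );
    Collection<AlterConfigOp> configs = List.of(
            new AlterConfigOp(
                    // Only the config name matters for a DELETE operation
                    new ConfigEntry("preallocate", ""),
                    AlterConfigOp.OpType.DELETE
            )
    );
    AdminClient adminClient = adminClient();
    adminClient.incrementalAlterConfigs(Map.of(configResource, configs)).all().get();
}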

Adjust the Number of Partitions of a Topic

We set the number of Partitions when creating a Topic, but if the initial number turns out to be too small, the createPartitions method can be used to adjust it. Note that in Kafka the number of Partitions can only be increased, never decreased. Code example:

/**
 * Increasing the number of Partitions, currently Kafka does not support deleting or reducing Partitions
 */
public static void incrPartitions() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    Map<String, NewPartitions> newPartitions = new HashMap<>();
    // Adjust the number of Partitions of MyTopic to 2
    newPartitions.put("MyTopic", NewPartitions.increaseTo(2));
    CreatePartitionsResult result = adminClient.createPartitions(newPartitions);
    System.out.println(result.all().get());
}

public static void main(String[] args) throws Exception {
    incrPartitions();
    describeTopics();
}  

Execute the above code; in the console output you can see that a Partition has been successfully added to the Topic.

  • Tips: Partition indexes start at 0, so the first Partition has partition=0 and the second has partition=1.
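An overload of increaseTo also lets us choose which brokers the new Partitions are placed on. A minimal sketch, assuming the single-broker setup from earlier (broker id 0):

/**
 * A sketch of increasing Partitions with an explicit replica assignment
 */
public static void incrPartitionsWithAssignment() throws ExecutionException, InterruptedException {
    AdminClient adminClient = adminClient();
    Map<String, NewPartitions> newPartitions = new HashMap<>();
    // One inner list per newly added Partition, listing the broker ids of its
    // replicas; the first id is the preferred leader
    newPartitions.put("MyTopic", NewPartitions.increaseTo(3, List.of(List.of(0))));
    CreatePartitionsResult result = adminClient.createPartitions(newPartitions);
    System.out.println(result.all().get());
}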
