Serialization and deserialization of Kafka messages

The parameters a Kafka producer must configure when sending messages are bootstrap.servers, key.serializer, and value.serializer. Serialization is performed after the interceptors run and before partitions are assigned. First of all, let's see how a Kafka producer writes messages under normal circumstances through some example code: public class ProducerJava ...
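Since the excerpt cuts off before the code, here is a minimal sketch of such a producer, assuming a local broker at localhost:9092 and a topic named demo-topic (both illustrative, not from the article):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerJava {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The three mandatory producer settings named above.
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record passes through the interceptors, then the serializers,
            // then the partitioner, before it is sent to the broker.
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello kafka"));
        }
    }
}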

Posted on Mon, 11 Nov 2019 18:18:34 -0500 by nadeem14375

Flume configuration TailDirSource, FileChannel, HDFSSink and KafkaSink stand-alone test

Article directory: version selection; technology selection; stand-alone configuration (zookeeper, kafka, flume); startup (hadoop, zookeeper, kafka, flume); test. Version selection, component / version number: Scala 2.11.x, Hadoop 2.6.0-cdh5.7.0, Kafka (apache) 2.11-0.10.2.2, Flume 1.6.0-cdh5.7.0, Zookeeper 3. ...

Posted on Sun, 03 Nov 2019 00:07:28 -0400 by catalin.1975

zabbix monitoring kafka consumption

I. Several indicators to monitor for Kafka: 1. lag: how many messages have not been consumed, lag = logsize - offset; 2. logsize: the total number of messages stored by Kafka; 3. offset: the number of messages that have already been consumed.   II. View the zookeeper configuration: cat /home/app/zookeeper/zookeeper/conf/zoo.cfg | egrep -v "^$|^#" clientPort=2181   III. v ...
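As a rough illustration of the lag formula (not the Zabbix script from the article), the per-partition lag can be computed with the Kafka AdminClient, assuming offsets are committed to Kafka rather than ZooKeeper; the broker address and the group name my-group are placeholders:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerLag {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("group.id", "lag-checker");

        try (AdminClient admin = AdminClient.create(props);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // offset: positions already committed by the monitored group ("my-group" is assumed).
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("my-group")
                     .partitionsToOffsetAndMetadata().get();
            // logsize: the current end offset of each partition.
            Map<TopicPartition, Long> logSize = consumer.endOffsets(committed.keySet());
            for (Map.Entry<TopicPartition, OffsetAndMetadata> e : committed.entrySet()) {
                long lag = logSize.get(e.getKey()) - e.getValue().offset();
                System.out.printf("%s lag=%d%n", e.getKey(), lag);
            }
        }
    }
}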

Posted on Fri, 01 Nov 2019 18:35:46 -0400 by Dev

flume+springboot+kafka+sparkStream integration

Following the previous chapter, flume+springboot+kafka integration, this article also integrates Spark Streaming. Acting as Kafka's consumer, Spark Streaming receives Kafka's data and performs real-time computation over the error and warning data in the logs. (1) The environment is the same as in the previous article. Only one sparkStream ...
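A minimal sketch of the Spark Streaming side under these assumptions (the broker address, the topic name app-log, and the simple contains() check are all illustrative): it consumes the Kafka topic and counts ERROR/WARN lines per batch.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class LogLevelCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("LogLevelCount");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "log-level-count");

        JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(
                    Collections.singletonList("app-log"), kafkaParams)); // assumed topic

        // Keep only ERROR/WARN lines and count them per 5-second batch.
        stream.map(ConsumerRecord::value)
              .filter(line -> line.contains("ERROR") || line.contains("WARN"))
              .count()
              .print();

        jssc.start();
        jssc.awaitTermination();
    }
}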

Posted on Tue, 29 Oct 2019 17:32:40 -0400 by zushiba

In-depth understanding of RocketMQ ordinary and ordered messages: usage, principles and optimization

1. Background Recently we have been running stress tests on the system and fixing the problems they exposed, and we have gained a lot of useful optimization experience from them. The following articles will focus on this work. One of the gains from this round of stress testing was the optimization of RocketMQ. At the beginning, our compan ...

Posted on Mon, 28 Oct 2019 23:50:04 -0400 by EPCtech

Create a connection pool with BoneCP and write Spark Streaming data to MySQL

Preparation: First, go to the Maven repository to find the BoneCP dependency and add it to pom.xml. My Kafka version is 0.10 and my Spark version is 2.4.2. Because this is an experiment, I don't package it for Linux but run it directly in IDEA. Start-up: First, I create a new table kafka_test_tbl in the G6 database ...
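For reference, a minimal sketch of the BoneCP connection pool side; the JDBC URL, credentials, and column names are placeholders, not the article's actual values:

import java.sql.Connection;
import java.sql.PreparedStatement;
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public class MySQLPool {
    private static BoneCP pool;

    // Lazily build one shared pool; a streaming job would call getConnection()
    // inside foreachPartition instead of opening a connection per record.
    public static synchronized Connection getConnection() throws Exception {
        if (pool == null) {
            Class.forName("com.mysql.jdbc.Driver");
            BoneCPConfig config = new BoneCPConfig();
            config.setJdbcUrl("jdbc:mysql://localhost:3306/g6"); // placeholder URL
            config.setUsername("root");                          // placeholder user
            config.setPassword("secret");                        // placeholder password
            config.setPartitionCount(2);
            config.setMaxConnectionsPerPartition(5);
            pool = new BoneCP(config);
        }
        return pool.getConnection();
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO kafka_test_tbl(word, cnt) VALUES (?, ?)")) { // placeholder columns
            ps.setString(1, "hello");
            ps.setInt(2, 1);
            ps.executeUpdate();
        }
    }
}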

Posted on Mon, 28 Oct 2019 14:27:34 -0400 by Mr P!nk

Flink learning from 0 to 1 -- Flink Data transformation

Preface In the first Flink article, Learn Flink from 0 to 1: introduction to Apache Flink, the structure of a Flink program was already covered. The Flink application structure is shown in the figure above: 1. Source: the data source. Flink has four types of source in stream processing and batch processing: local collection based source, fi ...
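As a quick taste of what the transformations look like, a minimal DataStream sketch in the Java API using a local-collection-based source (the numbers and operations are made up for illustration):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransformationDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Local-collection-based source, one of the source types listed above.
        env.fromElements(1, 2, 3, 4, 5)
           .map(n -> n * 10)       // transformation: multiply each element
           .returns(Types.INT)     // declare the result type so Flink need not infer it from the lambda
           .filter(n -> n > 20)    // transformation: keep only values above 20
           .print();               // sink: write the results to stdout

        env.execute("transformation demo");
    }
}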

Posted on Sun, 27 Oct 2019 05:15:45 -0400 by DeX

Flink learning from 0 to 1 -- Introduction to Data Sink

Preface First of all, what does Sink mean? You can probably guess: a data sink is roughly where data is stored (dropped). As shown in the figure above, Source is the source of the data, and the Compute stage in the middle is what Flink actually does; it can perform a series of operations, and after those operations the calculated result is sunk to some destination (it can be MySQL ...
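A minimal sketch of that Source -> Compute -> Sink shape, using a custom SinkFunction that just prints (a stand-in for a real MySQL or Kafka sink; the data is illustrative):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class SinkDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("flink", "data", "sink")            // Source
           .map(new MapFunction<String, String>() {          // Compute
               @Override
               public String map(String value) {
                   return value.toUpperCase();
               }
           })
           .addSink(new SinkFunction<String>() {             // Sink
               @Override
               public void invoke(String value, Context context) {
                   // A real sink would write to MySQL, Elasticsearch, Kafka, etc.
                   System.out.println("sinking: " + value);
               }
           });

        env.execute("sink demo");
    }
}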

Posted on Sun, 27 Oct 2019 04:37:40 -0400 by treppers

Flink learning from 0 to 1 -- how to customize a Data Source?

Preface In Learn Flink from 0 to 1: introduction to Data Source, I briefly introduced the Flink Data Source and user-defined Data Sources. In this article, I will introduce them in more detail and write a demo to help you understand. Flink Kafka source Preparation Let's take a look at the demo for Flink to get data from a Kafka topic. F ...
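A minimal sketch of the Kafka part, assuming the universal flink-connector-kafka dependency (older releases use version-specific classes such as FlinkKafkaConsumer011); the broker address, group id, and topic name are placeholders:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaSourceDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "flink-demo");               // assumed consumer group

        // Read the topic as plain strings and print every record.
        env.addSource(new FlinkKafkaConsumer<>("demo-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("kafka source demo");
    }
}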

Posted on Fri, 25 Oct 2019 05:50:50 -0400 by stretchy

Kafka learning notes - Kafka consumer API

I. Kafka consumer overview Kafka's design differs from other message systems by adding a group layer on top of the consumer. A Consumer Group is a scalable and fault-tolerant consumption mechanism provided by Kafka. Consumers in the same group can consume messages of the same topic in parallel, but consumers of the same group will not consume the ...
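To make the group concept concrete, a minimal consumer sketch: every instance started with the same group.id splits the topic's partitions among themselves, while instances with a different group.id each receive the full stream. The broker address and topic name are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "demo-group");               // members of this group share the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}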

Posted on Wed, 16 Oct 2019 11:49:32 -0400 by mchip