[Stability Platform] GoReplay traffic recording and playback in practice

Introduction to GoReplay
As an application grows more complex, the work required to test it grows exponentially. GoReplay offers a simple idea: reuse existing traffic for testing. GoReplay is a lightweight traffic recording tool written in Go. It supports filtering, rate limiting, amplification, rewriting, and more. GoReplay is completely non-intrusive to your code, requires no changes to your production infrastructure, and is language independent. It is not a proxy; instead, it listens directly to the traffic on the network interface.
How GoReplay works: the listener server captures traffic and either sends it to the replay server, saves it to a file, or writes it to Kafka. The replay server then forwards the traffic to the configured target address.
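The flow described above can be sketched as a minimal Go pipeline (a deliberately simplified toy model, not GoReplay's actual code): a capture goroutine pushes requests into a channel, and a replay loop drains them into the configured output.

```go
package main

import "fmt"

// replayPipeline is a toy model of GoReplay's flow: an input source
// (network interface, file, or Kafka) feeds messages into a channel,
// and an output sink (the replay target) drains them in order.
func replayPipeline(captured []string, send func(string)) {
	messages := make(chan string)
	// Capture side: emit each recorded request, then close the channel.
	go func() {
		defer close(messages)
		for _, m := range captured {
			messages <- m
		}
	}()
	// Replay side: forward each message to the configured output.
	for m := range messages {
		send(m)
	}
}

func main() {
	var replayed []string
	replayPipeline([]string{"GET /a", "GET /b"}, func(m string) {
		replayed = append(replayed, m)
	})
	fmt.Println(replayed) // prints "[GET /a GET /b]"
}
```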
Usage process
Requirement: after receiving a request from the algorithm team, we need to record real production traffic and replay it to any environment at any time.
Because some services on the algorithm side are written in non-Java languages, the existing traffic recording platform cannot support them for now. A new recording component was needed to meet the load testing requirements, so GoReplay was chosen.
GoReplay supports saving recorded data to a local file and reading it back from the file during replay. Given the complexity of storing and distributing files for every recording and replay, we wanted a more convenient way to manage the data.
GoReplay also supports writing recorded data to Kafka, but we found this has a major limitation out of the box: when Kafka is used to store the data, traffic must be replayed while it is being recorded. The architecture is as follows:
Processes 1-4 cannot be split apart and can only run at the same time.
This makes the recording and replay feature much weaker. We need to replay recorded data at any time, and to replay the same recording multiple times. Since the traffic data is already stored in Kafka, we considered modifying GoReplay to support our needs.
Traffic recording and playback architecture after the modification:
In the figure, stages 1-2 and 3-5 are independent of each other.
In other words, the recording process and the replay process can be decoupled. By simply noting the Kafka offsets at the start and end of a recording, we know exactly which data that recording task contains. We can easily organize each recording into a recording task and then replay the traffic whenever needed.
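The bookkeeping this relies on is small. A sketch of what a recording task could look like (hypothetical types; the real platform would also persist this per topic partition):

```go
package main

import "fmt"

// RecordingTask marks a slice of a Kafka topic as one recording:
// every message with StartOffset <= offset < EndOffset belongs to it.
type RecordingTask struct {
	Topic       string
	StartOffset int64 // offset noted when recording started (inclusive)
	EndOffset   int64 // offset noted when recording stopped (exclusive)
}

// Contains reports whether a message offset belongs to this recording.
func (t RecordingTask) Contains(offset int64) bool {
	return offset >= t.StartOffset && offset < t.EndOffset
}

func main() {
	task := RecordingTask{Topic: "gor-traffic", StartOffset: 100, EndOffset: 250}
	fmt.Println(task.Contains(100), task.Contains(249), task.Contains(250)) // prints "true true false"
}
```

The same task can then be replayed any number of times by starting a replay bounded by these two offsets.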
Modification and integration
Kafka offset support
Brief process:
Definition of InputKafkaConfig in the source code:

```go
type InputKafkaConfig struct {
	producer sarama.AsyncProducer
	consumer sarama.Consumer
	Host     string `json:"input-kafka-host"`
	Topic    string `json:"input-kafka-topic"`
	UseJSON  bool   `json:"input-kafka-json-format"`
}
```
Definition of the modified InputKafkaConfig:

```go
type InputKafkaConfig struct {
	producer sarama.AsyncProducer
	consumer sarama.Consumer
	Host        string `json:"input-kafka-host"`
	Topic       string `json:"input-kafka-topic"`
	UseJSON     bool   `json:"input-kafka-json-format"`
	StartOffset int64  `json:"input-kafka-offset"`
	EndOffset   int64  `json:"input-kafka-end-offset"`
}
```
The fragment in the source code that reads data from Kafka. As you can see, it consumes from OffsetNewest:

```go
for index, partition := range partitions {
	consumer, err := con.ConsumePartition(config.Topic, partition, sarama.OffsetNewest)
	go func(consumer sarama.PartitionConsumer) {
		defer consumer.Close()
		for message := range consumer.Messages() {
			i.messages <- message
		}
	}(consumer)
}
```
Modified fragment that reads data from Kafka:

```go
for index, partition := range partitions {
	consumer, err := con.ConsumePartition(config.Topic, partition, config.StartOffset)
	offsetEnd := config.EndOffset - 1
	go func(consumer sarama.PartitionConsumer) {
		defer consumer.Close()
		for message := range consumer.Messages() {
			// Compare the message offset; once it passes the end of this
			// batch, signal shutdown and stop consuming.
			if offsetFlag && message.Offset > offsetEnd {
				i.quit <- struct{}{}
				break
			}
			i.messages <- message
		}
	}(consumer)
}
```
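Note the boundary here: `offsetEnd := config.EndOffset - 1` combined with `message.Offset > offsetEnd` means consumption stops as soon as a message with offset >= EndOffset arrives, so EndOffset itself is excluded from the replay. The cut-off check in isolation (a standalone sketch, not part of GoReplay):

```go
package main

import "fmt"

// shouldStop mirrors the replay loop's cut-off: with offsetEnd = endOffset-1,
// the condition offset > offsetEnd is equivalent to offset >= endOffset,
// so the message at endOffset itself is never replayed.
func shouldStop(offset, endOffset int64) bool {
	offsetEnd := endOffset - 1
	return offset > offsetEnd
}

func main() {
	fmt.Println(shouldStop(198, 200), shouldStop(199, 200), shouldStop(200, 200)) // prints "false false true"
}
```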
Now the Kafka offset range can be specified when starting a replay task, which achieves exactly the effect we wanted.
Integration into the load testing platform
The user simply fills in and selects options on a page, and the platform generates the startup command, replacing lengthy hand-written commands:
```java
StringBuilder builder = new StringBuilder("nohup /opt/apps/gor/gor");
// Concatenate the parameters into the startup command
builder.append(" --input-kafka-host ").append("'").append(kafkaServer).append("'");
builder.append(" --input-kafka-topic ").append("'").append(kafkaTopic).append("'");
builder.append(" --input-kafka-start-offset ").append(record.getStartOffset());
builder.append(" --input-kafka-end-offset ").append(record.getEndOffset());
builder.append(" --output-http ").append(replayDTO.getTargetAddress());
builder.append(" --exit-after ").append(replayDTO.getMonitorTimes()).append("s");
if (StringUtils.isNotBlank(replayDTO.getExtParam())) {
    builder.append(" ").append(replayDTO.getExtParam());
}
builder.append(" > /opt/apps/gor/replay.log 2>&1 &");
String completeParam = builder.toString();
```
The load testing platform controls starting and stopping the GoReplay process through an interface exposed by a Java agent:

```java
String sourceAddress = replayDTO.getSourceAddress();
String[] split = sourceAddress.split(COMMA);
for (String ip : split) {
    String uri = String.format(HttpTrafficRecordServiceImpl.BASE_URL + "/gor/start",
            ip, HttpTrafficRecordServiceImpl.AGENT_PORT);
    // Build the request object
    GoreplayRequest request = new GoreplayRequest();
    request.setConfig(replayDTO.getCompleteParam());
    request.setType(0);
    try {
        restTemplate.postForObject(uri, request, String.class);
    } catch (RestClientException e) {
        LogUtil.error("start gor fail, please check it!", e);
        MSException.throwException("start gor fail, please check it!", e);
    }
}
```