How to use wrk and JMeter for performance stress testing?

When we start a new project, we need performance indicators for our services, such as SLA (how many 9s we need to reach), QPS, and TPS, because these quantitative figures give us a better understanding of our system.

How do we run a stress test? In my view, there are two scenarios.

The first is that we already know the target, and we want to see whether a large amount of concurrency reaches it. If not, we reach it through horizontal scaling and performance optimization.

The second is that we don't know the target. Through stress testing, we learn the maximum performance of a single machine or single service under a fixed configuration, gaining a thorough understanding of it. This prepares us better for future goals, or lets us compare with the industry level to see how big the gap is.
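For the first scenario, once the single-machine ceiling has been measured, the number of instances needed to meet a known target is simple arithmetic. A minimal sketch with assumed numbers (the target QPS, single-instance QPS, and headroom ratio are all hypothetical):

```python
# Hypothetical capacity estimate for the first scenario: the business target
# QPS and the measured single-instance ceiling are both assumed numbers.
import math

target_qps = 50000        # assumed business target
single_qps = 9000         # assumed single-instance ceiling from a stress test
headroom = 0.7            # run each instance at no more than 70% of its ceiling

instances = math.ceil(target_qps / (single_qps * headroom))
print(instances)  # 8 instances needed
```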

How to use wrk for stress testing?

GitHub address: . It is an open source project that draws a lot of attention, with 30.4k stars. I asked colleagues around me, and many of them use it. It is written mainly in C.

git clone

make
# Copy the wrk binary into a directory on PATH
cp wrk /usr/sbin/wrk
Stress test script

rm ./report_mock.${1}.txt 2> /dev/null
for k in $(seq 1 10); do
    bf=`expr ${k} \* 100`
    for len in 512k; do
        echo "start length ${len}" >> ./report_mock.${1}.txt
        ./wrk -c${bf} -t16 -d3m --timeout 2m --latency -s ./post_${len}.lua http://${1}/press/${len} >> ./report_mock.${1}.txt
        echo "----------------------------------------------------" >> ./report_mock.${1}.txt
    done
    sleep 10
done

Lua script post_512k.lua

wrk.method = "POST"
wrk.body = '{"key":"value"}'
wrk.headers["Content-Type"] = "application/json"
wrk.headers["X-Forwarded-For"] = ""

This is a general-purpose script; roughly, it does the following:

  • Delete the previously generated report file
  • Loop 10 times, stress testing with concurrency from 100 to 1000 in sequence
  • The response size is fixed at 512K; match the Response to your own Request as needed
  • -t16: start 16 threads
  • -d3m: each round runs for 3 minutes
  • --timeout 2m: requests time out after 2 minutes
  • post_${len}.lua: the Lua script that constructs the Request whose Response is 512K
  • report_mock.${1}.txt: results are appended to this file
  • ${1}: the host+port to which the request is sent

In summary, these durations need to be adjusted to your own server's performance. Otherwise the measured data may come out empty, because the Response is not returned before the timeout.
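One way to see why the timeout must be generous: with large responses, the network alone can push per-request time up as concurrency grows. A back-of-envelope sketch, where the connection count, response size, and link speed are all assumed numbers:

```python
# Rough timeout sizing, all numbers assumed: with many concurrent 512 KB
# responses sharing one NIC, per-request transfer time grows with concurrency,
# which is why the timeout needs to be set generously.
conns = 1000              # concurrent connections at the top of the ramp
resp_mb = 0.5             # 512 KB response body
link_mb_per_s = 125       # roughly a 1 Gbps NIC

# Worst case: every connection is transferring a response at the same time.
per_request_s = conns * resp_mb / link_mb_per_s
print(per_request_s)  # 4.0 seconds per wave of responses
```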

Parameter analysis
Usage: wrk <options> <URL of the HTTP service under test>
    -c, --connections <N>  Number of TCP connections to open and keep to the server
    -d, --duration    <T>  Duration of the test
    -t, --threads     <N>  Number of threads to use
    -s, --script      <S>  Path of the Lua script to run
    -H, --header      <H>  Add an HTTP header to each request
        --latency          Print latency statistics after the test
        --timeout     <T>  Request timeout
    -v, --version          Print version details of the wrk in use
  <N> is a numeric argument and supports SI suffixes (1k, 1M, 1G)
  <T> is a time argument and supports time units (2s, 2m, 2h)
Results of execution
start length 512k
Running 3m test @ http://{ip+port}/press/512k # {ip+port} is your own
  16 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   145.17ms  942.97ms   0.86m    98.34%
    Req/Sec     1.39k   114.27     3.58k    70.07%
  Latency Distribution
     50%   36.58ms
     75%   48.78ms
     90%  185.51ms
     99%    1.78s 
  3994727 requests in 3.00m, 65.25GB read
  Non-2xx or 3xx responses: 3863495
Requests/sec:  22181.33
Transfer/sec:    371.00MB

wrk gives a very useful latency distribution: 50%, 75%, 90%, 99%.
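Also note the Non-2xx line in the output above: most responses in this run were errors, so the headline Requests/sec overstates the useful throughput. A quick sanity check of the reported numbers:

```python
# Sanity-checking the wrk summary above: in this run most responses were
# errors, and the Transfer figure is roughly consistent with only the
# successful 512 KB responses being counted.
total = 3994727        # "3994727 requests in 3.00m"
errors = 3863495       # "Non-2xx or 3xx responses"
ok = total - errors

print(ok)                              # 131232 successful responses
print(round(ok / total * 100, 1))      # 3.3 percent success rate
print(round(ok * 512 / 1024 / 1024))   # ~64 GB, close to the 65.25GB read
```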

However, it is hard to get an intuitive picture from this much raw output, so here is a Python script that parses the logs and extracts the important figures:

def ms(fr):
    fr = fr.lower()
    if fr[-2:] == 'ms':
        to = float(fr[:-2])
    elif fr[-1:] == 's':
        to = float(fr[:-1]) * 1000
    elif fr[-1] == 'm':
        to = float(fr[:-1]) * 1000 * 60
    return to

def mb(fr):
    fr = fr.lower()
    if fr[-2:] == 'gb':
        to = float(fr[:-2]) * 1000
    elif fr[-2:] == 'mb':
        to = float(fr[:-2])
    elif fr[-2:] == 'kb':
        to = float(fr[:-2]) / 1000
    elif fr[-1] == 'b':
        to = float(fr[:-1]) / 1000 / 1000
    elif fr[-1] == 'k':
        to = float(fr[:-1]) / 1000
    elif fr[-1] == 'm':
        to = float(fr[:-1])
    return to

def parse_one(one):
    ret = {}
    for l in one.split('\n'):
        if l.find('test @ http://') > 0:
            ret['host'] = l.split('://')[1].split('/')[0].strip()
        elif l.find('start length') == 0:
            ret['size'] = l.split(' ')[-1].strip()
        elif l.find('threads and') > 0:
            ret['threads'] = int(l.split('threads and')[0].strip())
            ret['conns'] = int(l.split('threads and')[1].split('connections')[0].strip())
        elif l.find('    Latency') == 0:
            ret['l_avg'] = ms(l[len('    Latency'):].lstrip().split(' ')[0])
        elif l.find('     50%') == 0:
            ret['l_50'] = ms(l[len('     50%'):].strip())
        elif l.find('     90%') == 0:
            ret['l_90'] = ms(l[len('     90%'):].strip())
        elif l.find('     99%') == 0:
            ret['l_99'] = ms(l[len('     99%'):].strip())
        elif l.find('Requests/sec:') == 0:
            ret['qps'] = float(l[len('Requests/sec:'):].strip())
        elif l.find('Transfer/sec:') == 0:
            ret['mbps'] = mb(l[len('Transfer/sec:'):].strip())
    return ret

with open('/Users/chenyuan/Desktop/report_mock.') as f:
    all = f.read()
    out = []
    for one in all.split('----------------------------------------------------\n'):
        r = parse_one(one)
        if not r:  # skip empty trailing chunks
            continue
        out.append((r['host'], r['size'], r['conns'], r['l_avg'], r['l_50'], r['l_90'], r['l_99'], r['qps'], r['mbps']))
    for o in out:
        print('\t'.join([str(i) for i in o]))

Finally, you can use column editing to paste the tab-separated output into Excel and tidy it up, which makes it easy to report and share with others~

How to use JMeter for stress testing?

Brief introduction

Apache JMeter is a Java-based stress testing tool developed by the Apache Software Foundation. It was originally designed for Web application testing, but was later extended to other testing areas. It can be used to test static and dynamic resources, such as static files, Java applets, CGI scripts, Java objects, databases, FTP servers, and so on. JMeter can simulate huge loads on a server, network or object to test its strength and analyze overall performance under different stress categories. In addition, JMeter can perform functional/regression tests on applications, and by creating scripts with assertions it can verify that your program returns the results you expect. For maximum flexibility, JMeter lets you create assertions using regular expressions.

JMeter is also widely used for stress testing, and its graphical interface is very friendly. Let's walk through a simple demo.

Download and install

Official website:

After downloading and unpacking, you get the directory structure below. Add bin to your PATH so that you can launch the software directly with the jmeter command.

➜  jmeter pwd
➜  jmeter ll
total 64
-rw-rw-r--@   1 chenyuan  staff    15K  1  2  1970 LICENSE
-rw-rw-r--@   1 chenyuan  staff   167B  1  2  1970 NOTICE
-rw-rw-r--@   1 chenyuan  staff   9.6K  1  2  1970
drwxrwxr-x@  43 chenyuan  staff   1.3K  1  2  1970 bin
drwxr-xr-x@   6 chenyuan  staff   192B  1  2  1970 docs
drwxrwxr-x@  22 chenyuan  staff   704B  1  2  1970 extras
drwxrwxr-x@ 104 chenyuan  staff   3.3K  1  2  1970 lib
drwxrwxr-x@ 104 chenyuan  staff   3.3K  1  2  1970 licenses
drwxr-xr-x@  19 chenyuan  staff   608B  1  2  1970 printable_docs

# It can be run directly from here, but configuring PATH is more convenient later on
➜  jmeter ./bin/jmeter
Don't use GUI mode for load testing !, only for Test creation and Test debugging.
For load testing, use CLI Mode (was NON GUI):
   jmeter -n -t [jmx file] -l [results file] -e -o [Path to web report folder]
& increase Java Heap to meet your test requirements:
   Modify current env variable HEAP="-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m" in the jmeter batch file
Check :
# This means it has started. Do not close the terminal window while it is running.
Create test

Let's first look at an overall picture to get a more complete understanding of it:

We can follow it step by step. I won't repeat too much here, because it is simpler than wrk.

Finally, click Run to execute a single test. Generally, we adjust the number of threads and the sending frequency to stress test and observe the results.
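When choosing the thread count and frequency, Little's law gives a quick estimate of the load a closed-loop tool like JMeter can drive: each thread waits for a response before sending again. A sketch with assumed numbers:

```python
# Little's law sketch with assumed numbers: JMeter threads form a closed loop,
# so the throughput you can drive is roughly threads / average response time.
threads = 100             # assumed thread group size
avg_latency_s = 0.050     # assumed 50 ms average response time

est_qps = threads / avg_latency_s
print(est_qps)  # 2000.0 requests per second at best
```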

Assertions are where we can do a lot, because they define which results count as correct and which count as failures. To do this, you need to extract key fields and values from the Response body and Headers for the logic. This can be done by writing scripts, which is a relatively advanced operation; we can go deeper on it later~

Finally, we can see our performance and SLA figures in the summary report.
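Turning a summary report into an SLA verdict is mechanical once the latency percentiles are extracted. A minimal sketch, where the meets_sla helper and the percentile dictionary layout are illustrative (the numbers come from the wrk run earlier):

```python
# Hypothetical SLA check: given latency percentiles (in ms) parsed from a run,
# decide whether a target like "99% of requests under 200 ms" is met.
def meets_sla(percentiles, target_pct, limit_ms):
    """percentiles: dict such as {'50': 36.58, '90': 185.51, '99': 1780.0}"""
    return percentiles[str(target_pct)] <= limit_ms

run = {'50': 36.58, '90': 185.51, '99': 1780.0}  # from the wrk run above
print(meets_sla(run, 90, 200))   # True  (p90 = 185.51 ms)
print(meets_sla(run, 99, 200))   # False (p99 = 1.78 s)
```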

As a back-end developer, it is necessary to run a very comprehensive stress test on the services you write. You should be able to quote the QPS, TPS and SLA of your system off the top of your head, know where the performance bottlenecks are, and then find the corresponding scheme to optimize them. Many times, performance may be the biggest risk, one that can paralyze our services and make them unavailable as a whole. These things are likely linked to our KPI and bonus~

Posted on Wed, 03 Nov 2021 00:13:30 -0400 by hexguy