Using Apache Bench for Simple Load Testing

If you have access to a Mac or Linux server, chances are you already have a very simple HTTP load-generating tool installed called Apache Bench, or ab. If you are on Windows and have Apache installed, you may also have ab.exe in your apache/bin folder.

Suppose we want to see how fast this server can handle 100,000 requests, with a maximum of 1,000 requests running concurrently:

ab -n 100000 -c 1000 https://setupalinuxserver.com/

It will then generate output as follows:

admin@vmi778522:~$ ab -n 100000 -c 1000 https://setupalinuxserver.com/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking setupalinuxserver.com (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        LiteSpeed
Server Hostname:        setupalinuxserver.com
Server Port:            80

Document Path:          /
Document Length:        54678 bytes

Concurrency Level:      1000
Time taken for tests:   9.082 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      5493200000 bytes
HTML transferred:       5467800000 bytes
Requests per second:    11010.61 [#/sec] (mean)
Time per request:       90.822 [ms] (mean)
Time per request:       0.091 [ms] (mean, across all concurrent requests)
Transfer rate:          590658.94 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   17  21.0     16    1020
Processing:    25   74  12.5     73     129
Waiting:        0   17   7.5     15      91
Total:         36   91  24.4     89    1113

Percentage of the requests served within a certain time (ms)
  50%     89
  66%     94
  75%     97
  80%     99
  90%    105
  95%    112
  98%    124
  99%    133
 100%   1113 (longest request)
admin@vmi778522:~

As you can see, this is very useful information: the server completed all 100,000 requests in about 9 seconds with no failures, a rate of roughly 11,010 requests per second.
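If you want to chart these numbers rather than just read the summary, ab can also write the results out to files using the -e and -g options shown in the usage listing further down. A minimal sketch, where the file names are just placeholders:

# Write the percentile table to CSV and the raw per-request timings in gnuplot format
ab -n 10000 -c 100 -e percentiles.csv -g timings.tsv https://setupalinuxserver.com/

The CSV gives you the "percentage served within a certain time" table, and the gnuplot file can be fed straight into gnuplot or a spreadsheet for graphing.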

So the next time you are tempted to whip out cfloop and GetTickCount to do some benchmarking on a piece of code, give ab a try; it’s easy to use and will yield much more realistic results.

Because ab supports concurrency, it has two big advantages over cfloop. The main one is that it lets you test how your code runs concurrently, which can help you identify possible race conditions or locking issues. Concurrent requests are also a more natural simulation of load than loops.
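A simple way to see this in practice is to benchmark the same page at different concurrency levels and compare the requests-per-second figures; a sharp drop at higher concurrency often points to contention or locking. A rough sketch, using a local test page as a placeholder URL:

# Baseline: one request at a time
ab -n 500 -c 1 http://127.0.0.1:8300/test.cfm

# Same page under 50 concurrent requests
ab -n 500 -c 50 http://127.0.0.1:8300/test.cfm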

Suppose you want to test multiple URLs concurrently as well. You can do this by creating a shell script with multiple ab calls. At the end of each line, place an &; this makes the command run in the background and lets the next command start executing. You will also want to redirect the output to a file for each URL using > filename. For example:

#!/bin/sh

ab -n 100 -c 10 http://127.0.0.1:8300/test.cfm > test1.txt &
ab -n 100 -c 10 http://127.0.0.1:8300/scribble.cfm > test2.txt &
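Because both ab commands are backgrounded, the script itself exits right away while the tests keep running. If you would rather have it block until both runs finish (for example, so a wrapper script can read test1.txt and test2.txt afterwards), you can add a wait at the end, something like:

#!/bin/sh

ab -n 100 -c 10 http://127.0.0.1:8300/test.cfm > test1.txt &
ab -n 100 -c 10 http://127.0.0.1:8300/scribble.cfm > test2.txt &

# Block until both background ab runs have completed
wait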

The usage info from the ab version installed on my Mac (v2.3) is listed below. As you can see, there are many useful options for outputting results and for sending additional data with the request.

Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make
    -t timelimit    Seconds to max. wait for responses
    -b windowsize   Size of TCP send/receive buffer, in bytes
    -p postfile     File containing data to POST. Remember also to set -T
    -T content-type Content-type header for POSTing, eg.
        'application/x-www-form-urlencoded'
        Default is 'text/plain'
    -v verbosity    How much troubleshooting info to print
    -w              Print out results in HTML tables
    -i              Use HEAD instead of GET
    -x attributes   String to insert as table attributes
    -y attributes   String to insert as tr attributes
    -z attributes   String to insert as td or th attributes
    -C attribute    Add cookie, eg. 'Apache=1234'. (repeatable)
    -H attribute    Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
        Inserted after all normal header lines. (repeatable)
    -A attribute    Add Basic WWW Authentication, the attributes
        are a colon separated username and password.
    -P attribute    Add Basic Proxy Authentication, the attributes
        are a colon separated username and password.
    -X proxy:port   Proxyserver and port number to use
    -V              Print version number and exit
    -k              Use HTTP KeepAlive feature
    -d              Do not show percentiles served table.
    -S              Do not show confidence estimators and warnings.
    -g filename     Output collected data to gnuplot format file.
    -e filename     Output CSV file with percentages served
    -r              Don't exit on socket receive errors.
    -h              Display usage information (this message)
    -Z ciphersuite  Specify SSL/TLS cipher suite (See openssl ciphers)
    -f protocol     Specify SSL/TLS protocol (SSL2, SSL3, TLS1, or ALL)
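As an example of sending additional data with the request, several of these options can be combined. A rough sketch with placeholder file names and a placeholder local URL, where postdata.txt would contain a url-encoded body such as name=foo&value=bar:

# POST url-encoded form data from a file, with a cookie and HTTP keep-alive
ab -n 100 -c 10 -k \
   -p postdata.txt -T 'application/x-www-form-urlencoded' \
   -C 'CFID=1234' \
   http://127.0.0.1:8300/form.cfm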
