Introduction
Benchmarking tools usually perform stress tests on a piece of software to see how it behaves under pressure or heavy load. You can then optimize your source code or server configuration based on the results of the benchmarking tests. There are a number of HTTP server benchmarking tools available, including ApacheBench, Apache JMeter, curl-loader, openSTA, HttTest and httperf. Today I will be posting about ApacheBench, known simply as ab.
ApacheBench is a rather simple and basic tool, but that also makes it quite easy to use. I installed it on my Ubuntu setup using apt-get:
$ sudo apt-get install apache2-utils
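Once installed, you can confirm that ab is available and see which version you have by asking it to print its version information:
$ ab -V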
Let's take a look at a basic test and its results:
$ ab -n 100 http://localhost/mysite/index.php
The above test basically makes the same request 100 times to the specified URL. How long the test takes obviously depends on the number of requests you specify, the rendering speed and output size of your site, as well as the speed of your server/PC and your connection. The site I tested is a PHP-based site hosted on my own PC, and the results showed the following:
Server Software: Apache/2.2.14
Server Hostname: localhost
Server Port: 80
Document Path: /mysite/index.php
Document Length: 6937 bytes
Concurrency Level: 1
Time taken for tests: 6.492 seconds
Complete requests: 100
Failed requests: 98
(Connect: 0, Receive: 0, Length: 98, Exceptions: 0)
Write errors: 0
Total transferred: 737279 bytes
HTML transferred: 693479 bytes
Requests per second: 15.40 [#/sec] (mean)
Time per request: 64.915 [ms] (mean)
Time per request: 64.915 [ms] (mean, across all concurrent requests)
Transfer rate: 110.91 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 10 65 318.0 20 2895
Waiting: 10 65 318.0 20 2895
Total: 10 65 318.0 20 2895
Percentage of the requests served within a certain time (ms)
50% 20
66% 22
75% 24
80% 27
90% 41
95% 61
98% 1407
99% 2895
100% 2895 (longest request)
The most useful results here are the "Time taken for tests", "Requests per second" and "Time per request" readings. I can, for example, see that the complete run took 6.492 seconds and that, on average, each request took 64.915 ms (milliseconds).
However, notice that the results also show that 98 of the 100 requests were failed requests. Does this really mean that almost all our requests failed? Luckily for us it doesn't; the Failed requests figure is a little misleading at first. If you look closely it actually tells us that all 98 failures were Length errors: (Connect: 0, Receive: 0, Length: 98, Exceptions: 0)
All this means is that, after the initial HTTP response, subsequent responses contained differently sized HTML documents. Since the site I was testing generates dynamic content this is bound to happen, so there is no reason for me to worry about this reading and the actual test results are still valid.
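As a side note, if your version of ab is new enough (the -l flag was added in the ab bundled with Apache 2.4.7), you can tell it to accept variable-length responses, which silences these Length errors for dynamic pages:
$ ab -l -n 100 http://localhost/mysite/index.php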
Concurrent Testing
Unless you only ever have 1 user visiting your site (kind of like my blog) it is quite meaningless to test your site this way. We need to add some concurrency to simulate multiple users accessing the site at the same time. To do this we use the -c flag to specify a number of concurrent connections, for example:
$ ab -c 10 -n 100 http://localhost/mysite/index.php
This still means we will only be performing 100 test requests, but we will be making 10 requests at a time instead of 1.
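Once you are comfortable reading the results, you can scale the test up. The request count and concurrency below are just example values I picked for illustration, so adjust them to something your machine and site can realistically handle:
$ ab -c 50 -n 1000 http://localhost/mysite/index.php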
Using KeepAlive
There are some catches you will need to look out for when using ab; one of them is that the KeepAlive option is turned off by default. This means that every request sent to the server is made over a new connection, which is terribly slow and affects your test results. If your own site is configured to handle multiple requests over the same connection by using KeepAlive, then it makes sense to turn on KeepAlive for your benchmarking tests as well. This can be done using the -k flag. Example:
$ ab -kc 10 -n 100 http://localhost/mysite/index.php
or
$ ab -k -c 10 -n 100 http://localhost/mysite/index.php
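If you are not sure whether your Apache server actually has KeepAlive enabled, you can check its configuration. On my Ubuntu setup the directive lives in /etc/apache2/apache2.conf; the path may differ on other distributions:
$ grep -i "^KeepAlive" /etc/apache2/apache2.conf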
Other useful options
Let's take a look at some of the other useful options as well (a combined example follows the list):
- -A auth-username:password - Supply BASIC Authentication credentials to the server. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the server needs it (i.e., whether it has sent a 401 Authentication Required response).
- -e csv-file - Write a comma-separated values (CSV) file which contains, for each percentage (from 1% to 100%), the time (in milliseconds) it took to serve that percentage of the requests.
- -p POST-file - File containing data to POST. Remember to also set -T to specify the Content-Type of the data.
- -t timelimit - Maximum number of seconds to spend benchmarking. This implies -n 50000 internally. Use this to benchmark the server within a fixed total amount of time. By default there is no time limit.
- -w - Print out results in HTML tables. The default table is two columns wide, with a white background.
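To tie a few of these together, here is what a combined run might look like. The credentials, file names and URL are just placeholders for this sketch, so substitute your own; note that -p needs -T so the server knows the Content-Type of the posted data:
$ ab -n 200 -c 10 -k -A admin:secret -e results.csv -p post-data.txt -T 'application/x-www-form-urlencoded' http://localhost/mysite/index.php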
Happy Benchmarking!