The reason I ask is this: I've seen steady growth in my blog's traffic. In its first month of existence I had around 10k hits for the whole month, mainly due to a syndicated article I wrote. Now that my blog has been up for 7 months or so, I've been syndicated and linked to more, and I've had over 200k hits this month so far.
What I fear is that I'll get something akin to a Slashdotting and the blog will crumple. I'm moving to a new host to address the hardware side; what I need to know now is the software side. Has anyone benchmarked s9y before? If so, would you share the results?
I'll share one of mine. What concerns me is that there are 934 failed requests. I installed the Zend Optimizer and that dropped to 480, but it's odd considering that the same benchmark against a phpBB forum produces 0 errors. Anyone know why that would be? That's a bit extreme; I would have chalked it up to a settings thing had phpBB not completely taken the bench and done great with it.
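From what I understand of ab (and I may be off here), a "Length" failure just means a response body whose size differs from the first response ab received. A dynamic page that renders slightly differently on each request will rack these up even though every response is a perfectly valid page. A quick sketch of that rule, with made-up body sizes:

```shell
# ab flags a response as a "Length" failure when its body size differs
# from the size of the FIRST response it saw. Illustration with
# hypothetical sizes (not taken from any real run):
first=30581                    # size of the first response, in bytes
failures=0
for size in 30581 30574 30581 30600; do   # made-up sizes for later responses
  if [ "$size" -ne "$first" ]; then
    failures=$((failures + 1))
    echo "Length failure: expected $first bytes, got $size"
  fi
done
echo "total Length failures: $failures"
```

If that's what is happening here, most of those "errors" may just be fine responses that vary in length, and phpBB's pages may simply be more uniform between requests.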
As promised, the bench below:
Code: Select all
%> ab -n 1000 -c 200 http://linuxblog.sytes.net/index.php?/archives/74-Experiment-Update-Installed-Ubuntu-5.04-Hoary.html
Server Software: Apache
Server Hostname: linuxblog.sytes.net
Server Port: 80
Document Path: /index.php?/archives/74-Experiment-Update-Installed-Ubuntu-5.04-Hoary.html
Document Length: 30581 bytes
Concurrency Level: 200
Time taken for tests: 164.314321 seconds
Complete requests: 1000
Failed requests: 934
(Connect: 0, Length: 934, Exceptions: 0)
Write errors: 0
Total transferred: 13545284 bytes
HTML transferred: 13088749 bytes
Requests per second: 6.09 [#/sec] (mean)
Time per request: 32862.863 [ms] (mean)
Time per request: 164.314 [ms] (mean, across all concurrent requests)
Transfer rate: 80.50 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1938 8051.4 0 92997
Processing: 2043 23744 20871.1 15577 151285
Waiting: 1933 23558 20655.1 15519 151274
Total: 2061 25683 24209.0 15707 160284
Percentage of the requests served within a certain time (ms)
50% 15707
66% 31222
75% 35796
80% 38812
90% 53616
95% 79462
98% 107644
99% 110537
100% 160284 (longest request)
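For what it's worth, ab's derived figures in the run above check out against the raw numbers; this is just how they're computed (a sanity check, nothing s9y-specific):

```shell
# Recomputing ab's derived figures from n (-n), c (-c), and total time:
n=1000      # total requests
c=200       # concurrency level
t=164.314   # time taken for tests, in seconds
awk -v n=$n -v c=$c -v t=$t 'BEGIN {
  printf "Requests per second: %.2f\n", n / t                   # ~6.09
  printf "Time per request (mean): %.1f ms\n", c * t * 1000 / n # ~32862.8
  printf "Across all concurrent requests: %.3f ms\n", t * 1000 / n
}'
```

So the throughput numbers are internally consistent; it's the failure count that's the puzzle.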
With a different concurrency level (100 instead of 200):
Code: Select all
%> ab -n 1000 -c 100 http://linuxblog.sytes.net/index.php?/archives/74-Experiment-Update-Installed-Ubuntu-5.04-Hoary.html
Server Software: Apache
Server Hostname: linuxblog.sytes.net
Server Port: 80
Document Path: /index.php?/archives/74-Experiment-Update-Installed-Ubuntu-5.04-Hoary.html
Document Length: 33014 bytes
Concurrency Level: 100
Time taken for tests: 257.979613 seconds
Complete requests: 1000
Failed requests: 48
(Connect: 0, Length: 48, Exceptions: 0)
Write errors: 0
Total transferred: 32082140 bytes
HTML transferred: 31570600 bytes
Requests per second: 3.88 [#/sec] (mean)
Time per request: 25797.961 [ms] (mean)
Time per request: 257.980 [ms] (mean, across all concurrent requests)
Transfer rate: 121.44 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 9.0 0 80
Processing: 244 24731 10039.6 24135 126324
Waiting: 243 24233 9827.4 23942 126323
Total: 244 24733 10038.8 24135 126333
Percentage of the requests served within a certain time (ms)
50% 24135
66% 25854
75% 27672
80% 28888
90% 34705
95% 39083
98% 44959
99% 48018
100% 126333 (longest request)
Odd. Is there a way to improve this?