Common Performance Testing Mistakes

At my lab we have recently been benchmarking our applications, most of which deal with throughput in distributed systems. I often see people making the same mistakes when running these tests, so I decided to blog about it.

  1. Never test from the same machine: your testing program takes significant resources away from the application under test!
  2. Test concurrency. Whether you need to depends on your goals, but for most server applications the answer is YES (see the first sketch after this list).
  3. If you are testing for real-world conditions, network latency is a HUGE factor and affects applications in many different scenarios. Some quick tips:
    1. Never test over a wireless network unless wireless is explicitly part of your tests!
    2. Make sure you are not hitting a bandwidth cap and that your packets are not being shaped or modified by your internet provider.
    3. If you are testing a cloud service, do not run the test from inside the same service, since there may be little or no network delay. Several factors can cause this; the test tool might even end up on the same virtual machine host as the application!
    4. If you are using a reverse proxy for spoon-feeding (buffering responses for slow clients), make sure you test both with and without it!
  4. Most importantly, calculate performance from real measured values instead of approximations. Most of the time approximations are NOT true, because they are extrapolated to higher loads (see the second sketch below).
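Here is a minimal sketch of the kind of concurrent test points 2 and 3 are about, using only the Python standard library. The target URL, concurrency level, and request count are placeholder assumptions; run it from a separate machine so the load generator does not steal resources from the application under test.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical target: point this at the service under test,
# running on a *different* machine than this script (see point 1).
URL = "http://app-under-test.example:8080/"
CONCURRENCY = 20   # simultaneous in-flight requests (assumed value)
REQUESTS = 500     # total requests to issue (assumed value)

def timed_request(_):
    """Issue one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    t0 = time.perf_counter()
    latencies = list(pool.map(timed_request, range(REQUESTS)))
    elapsed = time.perf_counter() - t0

print(f"throughput: {REQUESTS / elapsed:.1f} req/s at concurrency {CONCURRENCY}")
print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
```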
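And a second sketch for point 4: report latency percentiles computed from the raw samples collected above, rather than extrapolating an average measured at low load to a higher one. The nearest-rank percentile used here is just a simple illustrative choice.

```python
# Reuses the raw `latencies` list from the previous sketch.
latencies.sort()

def percentile(samples, p):
    """Nearest-rank percentile of a pre-sorted list of samples."""
    idx = min(len(samples) - 1, int(len(samples) * p / 100))
    return samples[idx]

# Real measured values at each percentile, not an extrapolated mean.
for p in (50, 90, 95, 99):
    print(f"p{p}: {percentile(latencies, p) * 1000:.1f} ms")
```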

On the web, performance is extremely important. If you don't think so, ask Google: http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html

Marissa started with a story about a user test they did. They asked a group of Google searchers how many search results they wanted to see. Users asked for more, more than the ten results Google normally shows. More is more, they said.

So, Marissa ran an experiment where Google increased the number of search results to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%.

Ouch. Why? Why, when users had asked for this, did they seem to hate it?

After a bit of looking, Marissa explained that they found an uncontrolled variable. The page with 10 results took 0.4 seconds to generate. The page with 30 results took 0.9 seconds.

Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.