Whose Performance is it anyway?
Last month, we wrapped up an awesome vodQA event at our Thoughtworks Pune office. It was encouraging to see so much interest in performance testing during the event, but slightly disappointing that almost all of those discussions gravitated towards “Load Testing”.
“Load Testing” typically gives us insight into server-side performance, but there is a lot more to performance testing than just measuring the server side. Back in 2010, Steve Souders wrote about frontend SPOF, and he makes an important observation:
“… we’ve learned that 80% of the time users wait for a web page to load is the responsibility of the frontend.”
Fast forward to 2014, and the world has been moving towards ever more distributed applications. Networks are being re-written, and there is already huge momentum behind the Internet of Things (IoT). Our application’s performance matters, and it will matter even more in the coming years. But if “load tests” are the only performance tests we run, we have a slim chance of identifying performance issues upfront. Server-side performance is important, but so is the performance of our client-side code, network, infrastructure and even the third-party libraries that we use. On this journey to understand application performance, I would like to start with some very basic questions:
1. What really is well-performing code?
The answer to this question led me back to the basics of computer science, and I came across some excellent examples in Princeton University’s Analysis of Algorithms. How do we quantify code performance? Well, the simplest measure of a program’s performance is its “running time”. And as the book puts it:
The total running time for a program is determined by two primary factors:
- The cost of executing each statement. (A property of the system)
- The frequency of execution of each statement. (A property of the algorithm)
This means that the performance of our code depends as much on the underlying system as on its algorithmic efficiency. And really, the underlying system is not merely a single computer anymore: these days it is a complex mix of multi-layered software sitting on top of a variety of hardware, all interconnected via different network media. To see the algorithmic side in isolation, here is a minimal sketch (my own illustrative Python example, not from the Princeton text): both functions below compute the same result on the same system, but their running times differ purely because of how often their statements execute.
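```python
# Illustrative sketch: the same result computed two ways. On a given system,
# the cost of each statement is roughly fixed, so the frequency of execution
# (a property of the algorithm) dominates the running time.
import timeit

def sum_loop(n):
    total = 0
    for i in range(n):       # this statement executes n times
        total += i
    return total

def sum_formula(n):
    return n * (n - 1) // 2  # this statement executes once

for n in (10_000, 100_000, 1_000_000):
    loop_time = timeit.timeit(lambda: sum_loop(n), number=10)
    formula_time = timeit.timeit(lambda: sum_formula(n), number=10)
    print(f"n={n}: loop={loop_time:.4f}s, formula={formula_time:.6f}s")
```

And that leads me to the question that has been bugging me for a while…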
2. Whose performance are we trying to test?
I feel this is an important question to soak in before we begin any performance testing exercise. With the complex, multi-layered systems we work with today, there are many different things that could hurt an application’s performance. Have a look at a sample web application architecture pictured below. And then it’s time for some examples…
Examples:
- Server-side code is sub-optimal but functionally working fine. Going by the picture above, our server-side code has a lot of computing power at its disposal, so it is unlikely to struggle to survive. A good load testing exercise should point this out (a crude version of such a probe is sketched right after this list). But unless there are major defects, we might just decide to live with the problem and throw some additional computing power at it.
- Server-side code is super awesome, but client-side code sucks. Client-side code runs on each one of the individual clients, typically with limited computing resources, so this is where we should be shifting the focus of our performance tests. Tools like YSlow and PageSpeed should get us started; the second sketch below shows one way to read the browser’s own timing data.
- All our code is super awesome, but our servers are in a different country than our target audience. I feel the real question here is: how early can we detect such issues? A simple web page test should point this out, but ideally we should run continuous probes from across the target locations to keep our web application performing well.
- Again, all our code is super awesome, but our audience only uses wireless networks to access the web application. Is my website optimised for wireless access? With the mobile revolution around us, this is another important aspect of web application performance. Tools like Mobitest are relatively new on the block, but surely worth a try.
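As promised above, here is a crude load-probe sketch for the first example. It is purely illustrative (the URL, request count and concurrency below are hypothetical placeholders), and no substitute for a proper load testing tool:

```python
# Minimal load-probe sketch: fire concurrent requests at an endpoint and
# summarise the observed response times. Purely illustrative; real load
# tests belong in a dedicated tool (JMeter, Gatling, etc.).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # hypothetical endpoint under test
CONCURRENCY = 10               # simultaneous requests
REQUESTS = 50                  # total requests

def timed_get(_):
    start = time.time()
    urlopen(URL).read()        # fetch the full response body
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    timings = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"min={timings[0]:.3f}s  "
      f"median={timings[len(timings) // 2]:.3f}s  "
      f"max={timings[-1]:.3f}s")
```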
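And for the second example, the browser itself can tell us where the time went. This sketch assumes Selenium and a Chrome driver are installed, and the URL is again a placeholder; it reads the W3C Navigation Timing data to split page load into backend and frontend time, echoing Souders’ 80% observation:

```python
# Sketch: use the browser's Navigation Timing API to split page load time
# into backend (network + server) and frontend (parsing, rendering) parts.
# Assumes Selenium with a matching Chrome driver; the URL is hypothetical.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/")  # hypothetical page under test

timing = driver.execute_script("return window.performance.timing.toJSON()")
total = timing["loadEventEnd"] - timing["navigationStart"]
backend = timing["responseEnd"] - timing["navigationStart"]
frontend = total - backend

print(f"total={total}ms backend={backend}ms frontend={frontend}ms")
driver.quit()
```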
To summarise: it is really important that we look at all aspects of application performance before we try to measure it. And a good way to get started is to ask some simple questions:
1. Are we trying to test the performance of the piece of code that we just wrote?
2. Do we want to test the code’s performance when it is deployed in an application or web server?
3. How do I get alerted when a small code change I make alters the overall application performance?
4. Are we trying to tweak our web or application server settings for faster response times?
5. Do we want to come up with performance figures for capacity planning?
6. How much peak load can the application handle? How does it scale?
7. Are we trying to monitor the performance of our live application?
8. Are we checking for bottlenecks in the application infrastructure?
9. Do we want to check for performance problems from an end-user perspective?
   - Validate that the application’s response times are within the required threshold
   - Validate that the application’s response times are similar across our target (audience) locations
   - Validate that the application’s response times are similar when accessed from different network types
These are all important questions, each addressing a different aspect of application performance. They should help us design specific testing experiments and, in turn, help uncover performance problems. Essentially, I would classify them into 3 categories:
- Lower-level tests that are closer to the code (Questions 1-3)
- Higher-level tests that look at the system as a whole (Questions 4-6)
- Real User Measurements (Questions 7-9)
In my next post, I plan to dig deeper into each of these aspects and come up with strategies and tooling. But if you’ve read this far, I would love to hear from you. Did I miss something? Do you have better suggestions? Please do leave your comments…