The following post has been co-authored by Luca Chiodi and Roberto Brusa.

When to carry out performance testing?

Usually, there are different approaches to performance testing. Very often (too often) performance testing is only considered after a critical disruption has occurred in production, causing costly service losses. Effort, time and resources are then spent to find the root cause and make sure it never happens again. In this scenario, performance testing does not follow a regular, planned schedule, but is driven by application or infrastructure failures. It is rather simple to implement such a process, but there is a limit: testing is only involved in emergency situations, with the aim of reducing service downtime (i.e. the mean time to repair, MTTR). Despite the relatively low effort required by this approach, waiting for the next disaster could be a risky bet.

At the opposite end of the spectrum, performance testing is included as an integral part of Application Lifecycle Management (ALM). All new applications and releases are tested, ensuring that no performance regression occurs with respect to a previous baseline or the expected service levels. A previous article from our colleague David Furnari showed the importance of developing a maturity model aimed at improving a customer's awareness of performance management topics. The focus is to evolve from a simple issue-resolution approach to a comprehensive performance optimization process.

Who should carry out performance testing? A factory approach

For the reasons above, a dedicated team of performance engineers is required to implement a centralized performance management model. In two words, a Performance Factory. In this context, a performance factory approach shows a number of advantages:

  • Efficiency. The test team's engagement and response times are reduced, and communication between different areas is smoother (faster analysis and sharing of results); for each application a “test repository” is kept, acting as a performance baseline history against which future releases will be compared (a minimal sketch of such a comparison follows this list).
  • Economies of scale. Tasks are divided among the test team, and each member focuses on their own duties and skills; testing processes and tools are standardized.
  • Service catalog. Activity requirements are implemented more efficiently: a shared approach eases the test definition phase.
  • Specific responsibilities. Highly specialized tasks free the test team from all preliminary activities, e.g. software releases or environment set-up; roles are clearly defined, testing is performed on time, and the quality of the results is guaranteed.
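
As an illustration of the “test repository” mentioned above, here is a minimal sketch, in Python, of how a new release's results could be compared against a stored baseline to flag performance regressions. The metric names, file layout and 10% tolerance are assumptions made for the example, not a description of any specific tool.

```python
# Minimal sketch: compare a new test run against a stored baseline.
# Metric names, the 10% tolerance and the file layout are illustrative assumptions.
import json

TOLERANCE = 0.10  # flag changes worse than 10% of the baseline value


def load_metrics(path):
    """Load a flat {metric_name: value} dictionary from a JSON file."""
    with open(path) as f:
        return json.load(f)


def find_regressions(baseline, current, higher_is_worse=("avg_response_time_ms", "error_rate")):
    """Return the metrics where the current run is worse than the baseline beyond TOLERANCE."""
    regressions = {}
    for metric, base_value in baseline.items():
        if metric not in current or base_value == 0:
            continue
        change = (current[metric] - base_value) / base_value
        worse = change > TOLERANCE if metric in higher_is_worse else change < -TOLERANCE
        if worse:
            regressions[metric] = (base_value, current[metric])
    return regressions


if __name__ == "__main__":
    # Hypothetical repository layout: one JSON file per release.
    baseline = load_metrics("baseline/release_1.0.json")
    current = load_metrics("results/release_1.1.json")
    for metric, (old, new) in find_regressions(baseline, current).items():
        print(f"Regression on {metric}: baseline={old}, current={new}")
```

In a factory setting, a script along these lines would simply be re-run for every release, with the repository growing into the baseline history that new results are checked against.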

Moreover, a Performance Factory supports all the other service lifecycle phases, spanning the whole performance management process:

  • Application Delivery provides not only performance testing & tuning, but also availability testing, failover testing and infrastructure sizing.
  • Service Management supports Availability & Performance monitoring and Incident & Problem Management, and cooperates with Service Level Management.
  • Continuous Service Improvement facilitates the exchange of application and infrastructure knowledge and supports Capacity Planning; application changes driven by performance activities can lead to new developments in Functional Testing.

Again, this means that strong cooperation between all the different teams has to be achieved. We strongly recommend that everyone involved in the performance analysis works together throughout the whole performance process, from the initial stage to the final discussion and acceptance of the results.

Moviri Performance Factory

Who should carry out performance testing? A non-factory approach

However, working as a factory may simply not be enough. To improve the quality of the results and broaden the range of the analysis that can be performed, a different approach could be required.

In an effort to increase the value and quality of performance testing, the focus on Service Offering concepts represents an evolution of the test factory model. A Service Offering consists of categorizing an activity along dimensions that represent its key features; a minimal sketch of this idea follows the list below.

In the performance testing area, typical dimensions are:

  • Test frequency
  • Technology involved
  • Test objectives
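
To make these dimensions concrete, here is a minimal sketch in Python that categorizes a single test activity along the three dimensions above. All the class, enum and value names are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch: categorizing a performance test activity along the
# three service-offering dimensions listed above. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Frequency(Enum):
    ONE_OFF = "one-off"
    PER_RELEASE = "per release"
    CONTINUOUS = "continuous"


class Objective(Enum):
    REGRESSION_CHECK = "regression check"
    TUNING = "tuning"
    CAPACITY_ASSESSMENT = "capacity assessment"


@dataclass
class TestActivity:
    name: str
    frequency: Frequency          # test frequency
    technology: list[str]         # technology involved
    objectives: list[Objective]   # test objectives


# Example: a per-release regression check on a hypothetical Java/Oracle stack.
activity = TestActivity(
    name="Checkout service load test",
    frequency=Frequency.PER_RELEASE,
    technology=["Java", "Oracle"],
    objectives=[Objective.REGRESSION_CHECK],
)
print(activity)
```

Classifying each incoming request along dimensions like these is one way a service offering can be described as a catalog rather than as a single, undifferentiated testing service.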

In a pure factory approach, it can be difficult to identify these dimensions correctly. In a context where the test team is committed to running dozens (if not hundreds) of tests in the short term, there is virtually no time to approach the activity in such an articulated manner. Or, more simply, the test factory usually deals with the very same kind (dimensions) of tests: constant frequency, same technology, standardized objectives.

But what if we are NOT part of a permanent performance team and need to deal with very different test activities? In a world where context and technology evolve fast, performance engineers constantly face novel challenges in implementing performance testing. Anyone who regularly steps into different customer accounts, each with its own specific issues and needs, knows very well that no two performance test activities are ever exactly the same.

In this scenario, the factory model would show its design flaws. Basically, a factory-organized performance team acts like an assembly line, where each step of performance optimization is executed in a standardized sequence and few exceptions are allowed. This model perfectly suits an environment where the performance process is well on track and strongly integrated into the service lifecycle: test scheduling is predictable, the team is perfectly familiar with the technology involved, and test objectives reflect standardized requirements.

But when we are asked to deal with new contexts and address new performance issues, usually on very short notice and with time as a strong constraint, we need to adapt (and expand) our skills and knowledge. The Performance Factory model has to become less rigid in favor of a more flexible approach, whose ultimate goal is to broaden the range of the service offering and increase the capability of our testing portfolio.

In this case, a number of factors are involved in developing a valuable service offering approach. Each performance testing activity requires specific considerations, and even customizations, that are hard (if not impossible) to take into account in a performance factory organization. A completely different testing approach is developed: the performance activity process is driven by the specific requirements of that particular context, and performance experts operate in a less ‘standardized’ manner, digging into each testing aspect with much more detail and customization.

The planning and design phases have to comply with new, detailed objectives and/or specific agreements (Service Level Agreements, SLAs) to be met during the execution of the test. The defined goals have to fit the specific environment, with its own features, capacity and purpose: a run in a testing environment aimed at pinpointing performance issues and suggesting tuning tweaks is very different from a run in production performed to assess the maximum throughput available to end users. Depending on these design differences, specific monitoring of resources and infrastructure may be required each time; a sketch of how such environment-specific goals could be captured is shown below.
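
As an example of writing these design decisions down, the sketch below (in Python) contrasts a tuning-oriented run in a test environment with a throughput assessment in production. Every environment name, threshold and SLA value here is invented for illustration and would come from the actual agreements in a real engagement.

```python
# Hypothetical test-design sketch: goals and monitoring differ per environment.
# All thresholds and SLA values below are invented for illustration only.
TEST_DESIGNS = {
    "test": {
        "purpose": "pinpoint performance issues and suggest tuning tweaks",
        "goals": {"p95_response_time_ms": 800},                      # looser, tuning-oriented target
        "monitoring": ["JVM heap", "GC pauses", "DB wait events"],   # deep, intrusive monitoring
    },
    "production": {
        "purpose": "assess the maximum throughput available to end users",
        "goals": {"throughput_rps": 500, "error_rate": 0.01},        # SLA-driven targets
        "monitoring": ["CPU", "end-user response times"],            # lightweight monitoring
    },
}


def meets(metric, value, target):
    # For throughput, higher is better; for response times and error rates, lower is better.
    return value >= target if metric == "throughput_rps" else value <= target


def check_goals(environment, measured):
    """Return the goals of the given environment that the measured metrics do not meet."""
    goals = TEST_DESIGNS[environment]["goals"]
    return {m: (measured[m], target) for m, target in goals.items()
            if m in measured and not meets(m, measured[m], target)}


# Example: the production run fell short of the assumed throughput target.
print(check_goals("production", {"throughput_rps": 450, "error_rate": 0.005}))
```

Keeping the design in one explicit structure like this makes it easier to see, run after run, which goals belong to which environment and which monitoring has been agreed upon.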

The execution phase is directly connected with the technology involved and has to be conducted accordingly, while the analysis phase is strictly related to the test scope and objectives, which shape the structure and content of the final outputs, results and documents.

In this scenario the expertise of the performance team is definitely stronger and more reliable than the experience developed in a Performance Factory, and the ability to adapt is also much more developed. The focus is not only on technical skills, but also on organizational and even relational skills, since rapidly evolving contexts require test experts to expand their knowledge both while performing their tasks and when involving other people and teams.

From theory to practice

Of course, implementing this “Swiss-army knife” performance approach does not come for free. Like any piece of craftsmanship, this kind of performance testing requires additional effort and work, along with significant time spent on the initial setup. Unlike in a factory, it is difficult to achieve economies of scale or to apply a recurring, standardized process template or activity framework while dealing with such different scenarios.

Moreover, an additional risk hiding behind this kind of approach is that it could lead to small, individual test activities, confined to very specific contexts and performed on an irregular basis (when not just one-offs), which do not bring enough visibility to the performance process, regardless of all the effort spent. This means that performance maturity and awareness, whose importance has been pointed out above, could be compromised, or at least would not improve, leading to a potential failure to recognize the value of performance testing.

Conclusions

There is no general, clear-cut rule to determine which approach best fits every testing activity. Each approach, factory and non-factory, has its own advantages and disadvantages, strengths and weaknesses. Depending on the context, test objectives, requirements, constraints and expected results, it is imperative to determine how to organize the performance activity and how to achieve the maximum efficiency and the best results: relying on the rigid but systematic framework of a Performance Factory, or tackling a performance issue in a custom environment, sparing no effort but finally achieving the most detailed analysis and the most valuable results. In a word, an all-round performance engineer should balance these factors, taking advantage of each approach while adapting to each specific testing scenario.