When designing virtual users to be run during load test activities, there are some points to take into account in order to collect realistic and reliable performance measurements:

1. Realistic work path

Load tests do not have to include the entire feature set of the application, only the most representative parts: applying a Pareto analysis (cf. Wikipedia), you will probably find that only 20% of the available functionalities are usually responsible for 80% of the workload composition. In practice, this means you can usually design a limited set of navigational paths with considerable significance.
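As a sketch of this selection, the snippet below picks the smallest set of features covering 80% of the traffic from usage statistics (for example, extracted from access logs). The feature names and request counts are purely illustrative:

```python
# Hypothetical requests-per-day per feature; illustrative numbers only.
usage = {
    "search": 52_000,
    "view_item": 31_000,
    "login": 9_000,
    "checkout": 5_000,
    "download_report": 2_000,
    "edit_profile": 1_000,
}

def pareto_paths(usage, coverage=0.8):
    """Return the smallest set of features covering `coverage` of traffic."""
    total = sum(usage.values())
    selected, cumulative = [], 0
    for name, hits in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        cumulative += hits
        if cumulative / total >= coverage:
            break
    return selected

print(pareto_paths(usage))  # ['search', 'view_item']
```

With these numbers, 2 of the 6 features (about a third) already account for 83% of the daily traffic, so only their navigational paths would need to be scripted.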

Application owners often ask you to include well-known problematic steps in the performance test business processes. This may increase design complexity and analysis effort, and may lead to unrealistic performance measurements. Always try to redirect this type of request to the right activity:

  • if some steps are known to have functional problems, even unsystematic ones, they should be analyzed and taken into account during functional testing;
  • if some steps have evident performance problems but are not traffic-intensive (e.g. a report download invoked by 5% of real users), there is no need to include them in the load test (which aims to measure and understand performance under concurrent usage); it is better to plan a specific performance drill-down analysis to understand why the functionality is slow.

2. Think time

You need to estimate think time based on how confident and fast real users are when navigating the application under test. Be aware that if virtual user concurrency is one of your KPIs, think times play a key role in driving this metric. Assuming, for example, that the application's response time is 1 second, 150 virtual users waiting an average think time of 5 seconds produce more throughput (150 / (1 + 5) = 25 requests/s) than 250 virtual users waiting 10 seconds of think time (250 / (1 + 10) ≈ 22.7 requests/s).
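These figures follow from Little's Law for a closed system, where throughput X = N / (R + Z) for N virtual users, response time R, and think time Z. A minimal sketch:

```python
def throughput(virtual_users, response_time, think_time):
    """Little's Law for a closed system: X = N / (R + Z) requests per second."""
    return virtual_users / (response_time + think_time)

# 150 VUs, 1 s response time, 5 s think time -> 25.0 requests/s
print(throughput(150, 1.0, 5.0))
# 250 VUs, 1 s response time, 10 s think time -> ~22.7 requests/s
print(throughput(250, 1.0, 10.0))
```

This also works in reverse: given a target throughput and a measured response time, you can estimate how many virtual users you need for a given think time.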

3. Data parameterization

One of the most common issues when designing virtual users is obtaining test data. Starting from user credentials, to contain the overall team effort, only a few data sets are usually provided to the performance test team. Working with limited data sets may produce totally unpredictable measurements. Think about accessing the application concurrently with 500 users sharing the same credentials. You may find that the data used to build a customized welcome dashboard are cached in the database, making the page build incredibly fast, yet unrealistic; or you may measure misleadingly slow login times due to database locks caused by concurrent updates of the last-login date on the same primary key (the same user id).
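A minimal sketch of per-user data assignment: each virtual user receives its own credential row, and the script fails loudly when the pool is exhausted instead of silently reusing data. The embedded CSV and the usernames are hypothetical stand-ins for a file prepared with the application team:

```python
import csv
import io

# Hypothetical credential pool; in practice this would be read from a
# CSV file supplied by the application team.
CREDENTIALS_CSV = """username,password
user001,s3cret1
user002,s3cret2
user003,s3cret3
"""

credentials = list(csv.DictReader(io.StringIO(CREDENTIALS_CSV)))

def credentials_for(vuser_id):
    """Give each virtual user (0-based id) its own unique credential row."""
    if vuser_id >= len(credentials):
        # Refusing to wrap around avoids the shared-credential pitfalls
        # described above (cache hits, row-lock contention on login).
        raise ValueError("credential pool too small: extend the data set")
    return credentials[vuser_id]

print(credentials_for(1))  # {'username': 'user002', 'password': 's3cret2'}
```

Most load-testing tools offer an equivalent built-in mechanism (per-user parameter files with a "unique" allocation policy); the point is to size the data set to the planned concurrency before the test run.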

4. Transactions

Choosing transactions is an easy step: the usual best practice is to define a specific transaction for each click that causes a data exchange with the frontend server.

  • Ajax server requests are used to increase the speed of web applications. Do not forget to wrap them in transactions, so you'll be able to pinpoint the specific sub-steps with performance issues.
  • If you are running multiple business processes with common steps (such as the login phase), reuse the same transaction ids: this way you will be able to combine them and collect performance data from a functional point of view.