Friday, February 7, 2014

Mobile Performance Testing Checklist

Smartphones and tablets are now mainstream, and large numbers of people use them for business applications, entertainment, social networking, healthcare, and more. It is essential for businesses to assess the performance of mobile apps before releasing them to the public, and responsiveness is one of the biggest factors in capturing the market. Below is a list of things to consider that are specific to mobile performance testing.

Have you considered:

Client side considerations
  • Application performance against different mobile phones in the market?
    • CPU utilization
    • Memory utilization
    • I/O
    • Cache size availability
    • Rendering (2D / 3D)
  • Application performance against different mobile phones & browsers in the market?
    • JS Engine processing
    • Number of threads executing the requests
  • Memory leakage by the application?
  • Battery consumption by the application?
  • Internet data usage by the application?
  • Offline data usage by the application?
  • Different sizes of images for different mobile phones?
  • Analyzing the waterfall chart for mobile traffic (see the sketch after this list)?
    • Compressed data
    • Caching (HTML5 Web Storage)
    • Fewer round trips
    • Image size
    • Consolidated resources
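
A quick way to start on several of the waterfall items above is to inspect the response headers and transfer size of key resources. Below is a minimal sketch in Python, assuming a hypothetical URL and a representative mobile user agent; a real audit would use browser dev tools or a service like WebPageTest across all resources on the page.

    # Check compression, cacheability, and transfer size for one resource.
    # URL and user agent are placeholders, not from the original checklist.
    from urllib.request import Request, urlopen

    URL = "https://example.com/"  # hypothetical page under test
    MOBILE_UA = "Mozilla/5.0 (Linux; Android 4.4; Nexus 5) AppleWebKit/537.36"

    req = Request(URL, headers={"User-Agent": MOBILE_UA,
                                "Accept-Encoding": "gzip"})
    with urlopen(req, timeout=10) as resp:
        body = resp.read()  # raw (possibly gzip-compressed) bytes on the wire
        print("Content-Encoding:", resp.headers.get("Content-Encoding"))
        print("Cache-Control:   ", resp.headers.get("Cache-Control"))
        print("Transfer size:   ", len(body), "bytes")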

Server side considerations
  • Analyzing the impact on server resources (CPU, memory, I/O) of slow data transfer rates from server to mobile devices?
  • Simulating scenarios where the user moves from Wi-Fi to 3G, 4G, etc.?
  • Simulating scenarios where connections get dropped?
  • Simulating slow sessions and always-connected clients?
  • Simulating a large number of active sessions over a long period?
  • Simulating unidirectional server updates?
  • Simulating large data transfers between server and client?
  • Static resources being served from CDNs or proxies?
  • Simulating traffic from different regions/cities?
  • Simulating different network characteristics (see the sketch after this list)?
    • Latency
    • Packet loss
    • Jitter
    • Bandwidth (2G, 3G, Wi-Fi, etc.)
  • Simulating web-based application scenarios?
  • Simulating native application scenarios?
  • Simulating hybrid application scenarios?
  • Simulating a secure web-based application?
  • Simulating a secure native application?
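
Network characteristics are usually emulated with an OS-level shaper such as Linux tc/netem, but the idea can be approximated directly in a test client. Below is a minimal sketch in Python, assuming a hypothetical URL; the latency and bandwidth figures roughly mimic a 3G profile and are illustrative only.

    # Emulate latency plus a downlink bandwidth cap while downloading.
    import time
    from urllib.request import urlopen

    URL = "https://example.com/"   # hypothetical endpoint
    LATENCY_S = 0.2                # ~200 ms connection latency (assumed 3G-like)
    BANDWIDTH_BPS = 750_000 / 8    # ~750 kbit/s downlink, in bytes/second
    CHUNK = 4096

    time.sleep(LATENCY_S)          # emulate connection latency
    start = time.perf_counter()
    received = 0
    with urlopen(URL, timeout=30) as resp:
        while chunk := resp.read(CHUNK):
            received += len(chunk)
            # sleep so the average rate never exceeds the bandwidth cap
            expected = received / BANDWIDTH_BPS
            elapsed = time.perf_counter() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

    print(f"{received} bytes in {time.perf_counter() - start:.2f}s")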

Monday, July 29, 2013

Identification of Performance Objectives / Goals

Most of the time, stakeholders are not clear on what they actually want from a performance run. After the run, the performance test engineer prepares a report full of graphs and analysis and shares it with the stakeholders, who are then confused or overwhelmed by its size. The root cause is that they were never sure which answers they expected, which makes it very difficult for the performance tester to walk them through the report. It is therefore very important to agree, before conducting any performance run(s), on which question(s) the run is meant to answer. Below are some of the questions I ask before conducting a performance run; the test strategy and report are prepared accordingly.


Client Side Goals
  • Do you want to know if the application can support a load of X concurrent users?
  • How are your most important transactions and pages performing in terms of response time during normal and peak load?
  • Would you like to know if X transactions can be completed in Y minutes?
  • Is the new version of the software adversely affecting response time (create a performance baseline)?
  • Among all key transactions and pages/screens, which are the top 5 slowest that could be optimized in the next release?
  • Do you want to know if the response times of all important transactions and pages are less than X seconds (see the sketch after this list)?
  • Do you want to know whether the size of a web page/screen or the number of objects on the page is impacting performance for end users? If yes, which objects affect it most, and what are the alternatives?
  • Do you want to know if your most important transactions and pages are reliable (error rates less than 5%) during normal and peak load?
  • Are you worried that your most important transactions and pages/screens may be unavailable to customers during peak load?
  • Might the performance (response time and error percentage) of your most important transactions and pages/screens degrade after the application has been up and running for a long duration (say, 8 hours)?
  • Would you like to know whether, after being stressed, the application returns to a normal state by itself once requests stop coming?
  • At what point does the performance degradation occur?
  • Would you like to know how many concurrent users and transactions per second will crash the system?
  • Would you like to know the performance behavior of the application during a spike (all concurrent users ramped up in a very short time)?
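
Goals like these are easiest to verify when written down as concrete pass/fail checks. Below is a minimal sketch, assuming response-time samples exported from a load-test tool; the sample data, percentile, and thresholds are placeholders.

    # Check a p90 response-time goal and an error-rate goal from samples.
    # (response_time_seconds, succeeded) pairs - placeholder data.
    samples = [(0.8, True), (1.2, True), (2.9, True), (1.0, True), (2.2, True)]

    MAX_P90_SECONDS = 3.0   # "response time less than X sec" goal
    MAX_ERROR_RATE = 0.05   # "error rate less than 5%" goal

    times = sorted(t for t, _ in samples)
    p90 = times[int(0.9 * (len(times) - 1))]   # simple nearest-rank percentile
    error_rate = sum(1 for _, ok in samples if not ok) / len(samples)

    print(f"p90 response time: {p90:.2f}s (target < {MAX_P90_SECONDS}s)")
    print(f"error rate: {error_rate:.1%} (target < {MAX_ERROR_RATE:.0%})")
    assert p90 < MAX_P90_SECONDS and error_rate < MAX_ERROR_RATE, "goals not met"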

Server Side Goals
  • Do you want to measure resource utilization (memory, CPU, disk I/O, network I/O) of the servers (web server, application server, and database server) during normal and peak load (see the sketch after this list)?
  • Would you like to know if the application server has a memory leak?
  • Would you like to know which parts of the system perform poorly, and under what conditions?
  • Would you like to know which configuration (garbage collection, connection pool, thread pool, etc.) provides the best performance?
  • Would you like to find the recommended configuration for the IIS and database servers for peak-load traffic?
  • Would you like to know how many transactions per second (throughput) are being handled by the different servers during peak load?
  • What kinds of errors are generated on the application server during peak load?
  • Would you like to know where threads are waiting in the application server when responses are very slow?
  • Would you like to know what is being processed when the application is found to be slow?
  • Would you like to know which code snippet is slowest during peak load on the application server?
  • Would you like to know the top X slowest SQL queries during peak load?
  • Would you like to know the root cause of the performance degradation?
  • Would you like to know the performance of the most important transactions and pages at various database sizes?
  • Would you like to know which layer is performing slowly?
  • Do you want to know when it is time to add better or additional hardware to satisfy the future user base and its load?
  • Would you like to determine if the application can handle future growth?
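
Resource utilization is typically collected with tools like perfmon, sar, or an APM agent, but a lightweight sampler is easy to run alongside a test. Below is a minimal sketch using the third-party psutil package (pip install psutil); the sampling interval and duration are illustrative.

    # Sample CPU and memory utilization on a server during a performance run.
    import time
    import psutil

    def sample_utilization(duration_s=60, interval_s=5):
        """Print CPU and memory utilization every interval_s seconds."""
        end = time.time() + duration_s
        while time.time() < end:
            cpu = psutil.cpu_percent(interval=interval_s)  # avg over interval
            mem = psutil.virtual_memory().percent
            print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")

    if __name__ == "__main__":
        sample_utilization()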

Technology or Stakeholder Specific Goals
  • Would you like to stress every web method separately and create a baseline (see the sketch after this list)?
  • Would you like to conduct stress testing of database stored procedures?
  • Would you like to know how caches are being utilized on the different servers?
  • Would you like to know the performance behavior of the application under various connection behaviors (persistent, retrying, etc.)?
  • Would you like to know the performance behavior of the application when all users abandon the site without logging out?
  • Would you like to know the performance behavior of the application when accessed from different locations?
  • Would you like to know the performance behavior of the application when accessed over different network bandwidths (home users, office users, etc.)?
  • Which types of users (home, administrator, business, etc.) are affected most during peak load?
  • Would you like to know the performance (response time) of a page for its usable components only, or for the full page load?
  • Would you like to know which request (among thousands) took the most time, and its breakdown?
  • Do you want to know whether any third-party providers, such as ratings, reviews, ads, news feeds, e-commerce engines, or CDNs, are hurting performance?
  • Would you like to reproduce some performance issues in the system? What would they be?
  • Do you have a specific performance issue that you want to solve? What would that be?
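
Stressing a single web method separately is straightforward to script. Below is a minimal sketch, assuming a hypothetical endpoint; the concurrency and request counts are placeholders, and a real test would use a proper load tool (JMeter, LoadRunner, etc.).

    # Hit one web method with concurrent requests and record a baseline.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/api/method-under-test"  # hypothetical endpoint
    CONCURRENCY = 20
    REQUESTS = 200

    def timed_call(_):
        start = time.perf_counter()
        try:
            with urlopen(URL, timeout=10) as resp:
                resp.read()
            ok = True
        except Exception:
            ok = False
        return time.perf_counter() - start, ok

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(timed_call, range(REQUESTS)))

    times = sorted(t for t, _ in results)
    errors = sum(1 for _, ok in results if not ok)
    print(f"median={times[len(times)//2]:.3f}s  max={times[-1]:.3f}s  errors={errors}")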

Friday, May 17, 2013

Best Practices for Unit Test Case Automation


Make each test orthogonal (i.e., independent) to all the others
  • Any given behavior should be specified in one and only one test; otherwise, if you later change that behavior, you’ll have to change multiple tests
Don’t make unnecessary assertions
  • Which specific behavior are you testing? It’s counterproductive to Assert() anything that’s also asserted by another test
  • It just increases the frequency of pointless failures without improving unit test coverage at all
  • Have only one logical assertion per test
  • Unit tests are a design specification of how a certain behavior should work, not a list of observations of everything the code does
Test only one code unit at a time
  • Architecture must support testing units (i.e., classes or very small groups of classes) independently, not all chained together
  • If you can’t do this, then your architecture is limiting your work’s quality – consider using Inversion of Control
Mock out all external services and state (see the sketch after this list)
  • You’ve definitely taken a wrong turn if you have to run your tests in a specific order, or if they only work when your database or network connection is active
  • Behavior in those external services overlaps multiple tests, and shared state means that different unit tests can influence each other’s outcome
  • If you can’t mock everything out, at least make sure each test resets the relevant statics to a known state before it runs
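
Here is a minimal sketch of the idea using Python's standard unittest.mock; the Checkout and price-service names are hypothetical, not from the original post.

    # The test never touches a real price service: the mock supplies canned
    # replies, so the test is fast, deterministic, and order-independent.
    import unittest
    from unittest.mock import Mock

    class Checkout:
        """Code under test: depends on an external price service."""
        def __init__(self, price_service):
            self.price_service = price_service

        def total(self, items):
            return sum(self.price_service.price_of(i) for i in items)

    class CheckoutTests(unittest.TestCase):
        def test_total_ifTwoItems_sumsTheirPrices(self):
            service = Mock()                       # stands in for the real service
            service.price_of.side_effect = [3, 7]  # canned replies, no I/O
            self.assertEqual(Checkout(service).total(["a", "b"]), 10)

    if __name__ == "__main__":
        unittest.main()
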
Avoid unnecessary preconditions
  • Avoid having common setup code that runs at the beginning of lots of unrelated tests
Don’t unit-test configuration settings
  • By definition, your configuration settings aren’t part of any unit of code
Name your unit tests clearly and consistently
  • Avoid non-descriptive unit tests names such as Purchase() or OutOfStock()
  • One naming convention could be Action_ConditionsToTest_ExpectedResult, e.g. ProductPurchaseAction_IfStockIsZero_RendersOutOfStockView()
Follow a consistent test structure
  • Setup -> Action -> Assert -> TearDown (see the sketch below)
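
As a closing illustration, below is a minimal sketch of that structure in Python's unittest, also applying the Action_ConditionsToTest_ExpectedResult naming idea from above; the Inventory class is hypothetical.

    import unittest

    class Inventory:
        """Code under test."""
        def __init__(self):
            self.stock = {}

        def add(self, item, qty):
            self.stock[item] = self.stock.get(item, 0) + qty

        def is_out_of_stock(self, item):
            return self.stock.get(item, 0) == 0

    class InventoryTests(unittest.TestCase):
        def setUp(self):
            # Setup: a fresh, known state for every test (keeps tests independent)
            self.inventory = Inventory()

        def test_isOutOfStock_ifStockIsZero_returnsTrue(self):
            # Action
            result = self.inventory.is_out_of_stock("widget")
            # Assert: one logical assertion per test
            self.assertTrue(result)

        def tearDown(self):
            # TearDown: release resources (none needed for this in-memory case)
            self.inventory = None

    if __name__ == "__main__":
        unittest.main()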