Posts

IIS 8.0 Performance Sensitive Configuration Parameters

IIS 8.0 is the version that ships as part of Windows Server 2012. It uses a process model similar to that of IIS 6.0: a kernel-mode web driver (http.sys) receives and routes HTTP requests, and can satisfy requests from its response cache. Worker processes register for URL subspaces, and http.sys routes each request to the appropriate process (or set of processes for application pools). Http.sys is responsible for connection management and request handling; a request can be served from the http.sys cache or passed to a worker process for further handling. Some web application platforms, such as ASP.NET, provide mechanisms to allow dynamic content to be cached in the kernel-mode cache. The static file handler in IIS 8.0 automatically caches frequently requested files in http.sys. Because the web server has both kernel-mode and user-mode components, both must be tuned for optimal performance. Kernel-Mode Driver / Http.sys Caching - Configurable attribu…
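One of the configurable attributes alluded to above is kernel-mode output caching, which can be enabled per content type in a site's web.config. A minimal sketch under the `system.webServer/caching` schema (the `.html` extension and the chosen policies are illustrative, not a recommendation):

```xml
<configuration>
  <system.webServer>
    <!-- enableKernelCache turns on the http.sys response cache -->
    <caching enabled="true" enableKernelCache="true">
      <profiles>
        <!-- cache .html responses in kernel mode until the file changes -->
        <add extension=".html"
             policy="CacheUntilChange"
             kernelCachePolicy="CacheUntilChange" />
      </profiles>
    </caching>
  </system.webServer>
</configuration>
```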

Network Emulation in Performance Testing

In today's interconnected world, it makes little sense to conduct performance testing on a LAN with excellent bandwidth, low latency, and zero corruption, and then release the application directly onto a WAN. Nor is it a good idea to discover network-related issues only at the end of the release cycle by running performance tests from the cloud or a data center. If performance testing is to be part of the development cycle, emulating WAN-like conditions becomes very important.

Different connection types that might need simulation:
- VoIP/RoIP
- Satellite
- Cloud
- DSL/ADSL/XDSL
- Microwave
- OC-3
- 2G/3G/4G
- T1/T3/E1/E3
- Wifi/WiMax
- GPRS
- Dial up

Different network conditions that might need simulation:
- Busy peak times
- Limited bandwidth
- Long distance
- Information corruption
- Jittery connections
- De-sequencing of data
- Duplication of data
- Network traffic bursts
- Fragmentation of data
- Partial or total outage
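Several of these conditions (latency, jitter, loss, duplication, de-sequencing) can be sketched with a toy in-process emulator before investing in a full WAN-emulation tool. The model below is illustrative only; the function name and parameter defaults are assumptions, not any particular tool's API:

```python
import random

def emulate(packets, latency_ms=200, jitter_ms=50, loss=0.02,
            duplicate=0.01, seed=42):
    """Toy WAN emulator: assigns each packet a delivery delay with jitter,
    drops some (loss), duplicates some, then delivers in delay order,
    which naturally de-sequences jittery packets."""
    rng = random.Random(seed)
    delivered = []
    for seq, payload in enumerate(packets):
        if rng.random() < loss:
            continue  # simulate information loss / partial outage
        delay = latency_ms + rng.uniform(-jitter_ms, jitter_ms)
        delivered.append((delay, seq, payload))
        if rng.random() < duplicate:
            # duplicated packet arrives a little later
            delivered.append((delay + rng.uniform(0, jitter_ms), seq, payload))
    delivered.sort(key=lambda t: t[0])  # arrival order, not send order
    return [(seq, payload) for _, seq, payload in delivered]
```

Feeding a scripted load through such a shim makes cache behavior, retry logic, and timeout handling visible long before a cloud test run.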

Mobile Performance Testing Checklist

Smartphones and tablets are now a reality, and a large mass of people are moving to them for business applications, entertainment, social networking, healthcare applications, and more. It is mandatory for businesses to assess the performance of mobile apps before releasing them to the public; the responsiveness of a mobile app is one of the big factors in capturing the market. Below is a list of things to consider that are specific to mobile performance testing. Have you considered:

Client-side considerations
- Application performance across the different mobile phones in the market?
  - CPU utilization
  - Memory utilization
  - I/O
  - Cache size availability
  - Rendering (2D / 3D)
- Application performance across the different mobile phones and browsers in the market?
  - JS engine processing
  - Number of threads executing the requests
- Memory leakage by the application?
- Battery consumption by the application?
- Internet data usage by the a…

Identification of Performance Objectives / Goals

It is common to find that stakeholders are not aware of, or clear about, what they want from a performance run. After the run, the performance test engineer prepares a report containing many graphs, analysis, and more, and shares it with the stakeholders, who are then confused or overwhelmed by the size of the report. The main reason is that they were never sure what answers they expected from the report, and it becomes very difficult for the performance tester to explain or walk through it. It is therefore very important, before conducting performance run(s), to set the expectation at the beginning about which question(s) the run will answer. Below are some of the questions I ask before conducting a performance run, which shape the test strategy and report accordingly.

Client-side goals
- Do you want to know whether the application can support a load of X concurrent users?
- How your most imp…
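When the stated goal is "support X concurrent users", one quick sanity check is Little's Law, N = X × (R + Z): concurrent users equal throughput times (response time plus think time). A minimal sketch, with illustrative numbers:

```python
def concurrent_users(throughput_per_sec, avg_response_s, think_time_s=0.0):
    """Little's Law: N = X * (R + Z), where X is throughput in requests/sec,
    R is average response time, and Z is user think time."""
    return throughput_per_sec * (avg_response_s + think_time_s)

# 50 requests/sec, 2 s responses, 8 s think time -> 500 concurrent users
print(concurrent_users(50, 2.0, 8.0))
```

Running the arithmetic both ways (users to throughput, or throughput to users) exposes early whether the stakeholder's concurrency goal and throughput goal are even mutually consistent.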

Best Practices for Unit Test Case Automation

- Make each test orthogonal (i.e., independent) to all the others. Any given behavior should be specified in one and only one test; otherwise, if you later change that behavior, you’ll have to change multiple tests.
- Don’t make unnecessary assertions. Which specific behavior are you testing? It’s counterproductive to Assert() anything that’s also asserted by another test; it just increases the frequency of pointless failures without improving unit test coverage at all.
- Have only one logical assertion per test. Unit tests are a design specification of how a certain behavior should work, not a list of observations of everything the code does.
- Test only one code unit at a time. Your architecture must support testing units (i.e., classes or very small groups of classes) independently, not all chained together. If you can’t do this, then your architecture is limiting your work’s quality – consider using Inversion of Control.
- Mock out all external services and state. Yo…
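The practices above can be illustrated with a small Python unittest sketch. `Checkout` and its tax service are hypothetical names, used only to show orthogonal tests, one logical assertion per test, and mocking of an external service:

```python
import unittest
from unittest.mock import Mock

class Checkout:
    """Hypothetical unit under test; depends on an external tax service."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, subtotal):
        return round(subtotal * (1 + self.tax_service.rate_for("default")), 2)

class CheckoutTest(unittest.TestCase):
    # Each test specifies exactly one behavior, with one logical assertion.
    def test_total_applies_tax_rate(self):
        tax = Mock()                    # mock out the external service
        tax.rate_for.return_value = 0.10
        self.assertEqual(Checkout(tax).total(100.0), 110.0)

    def test_total_queries_rate_once(self):
        tax = Mock()
        tax.rate_for.return_value = 0.0
        Checkout(tax).total(50.0)
        tax.rate_for.assert_called_once_with("default")

if __name__ == "__main__":
    unittest.main()
```

Note that neither test asserts anything the other one already covers, so a behavior change breaks exactly one test.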

Memory Utilization & Garbage Collection Analysis

The goal should be to optimize garbage collection (GC) in such a way that its impact on application response time or CPU usage is minimized.
- Monitor the utilization of the different memory pools (young, survivor, and old). Memory shortage is the number-one reason for increased GC activity.
- The young generation should be big enough to ensure that all temporary objects die there.
- If overall memory utilization keeps growing despite garbage collection, it is an indication of a memory leak and requires heap analysis.
- Monitor the churn rate (objects allocated per interval) in the young generation. A high number of young collections can increase response time and cause unnecessary copying of objects from the young generation to the old generation.
- Free tools that can be used for GC analysis include JConsole, jStat, Java VisualVM, JRockit Mission Control, and the -verbose:gc flag of the JVM.
- High GC activity generally h…
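As a sketch of this kind of analysis, the snippet below parses simplified, hypothetical -verbose:gc-style lines (real formats differ between JVMs and versions) to estimate GC pause overhead and to track the live-set size after full GCs; a steadily rising post-full-GC floor would suggest a leak:

```python
import re

# Hypothetical, simplified GC log lines for illustration only.
LOG = """\
[GC (Allocation Failure) 512M->128M(1024M), 0.0153 secs]
[GC (Allocation Failure) 640M->130M(1024M), 0.0171 secs]
[Full GC (Ergonomics) 900M->620M(1024M), 0.4210 secs]
"""

def gc_overhead(log_text, wall_clock_secs):
    """Return (fraction of wall-clock time spent in GC pauses,
    list of heap sizes in MB remaining after each full GC)."""
    pauses = [float(m.group(1))
              for m in re.finditer(r"(\d+\.\d+) secs", log_text)]
    full_gc_floor = [int(m.group(1))
                     for m in re.finditer(r"Full GC.*?->(\d+)M", log_text)]
    return sum(pauses) / wall_clock_secs, full_gc_floor
```

For example, over a 60-second window the lines above give roughly 0.75% pause overhead and a post-full-GC live set of 620 MB.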

Checklist for Designing Better Performance Test Scripts

- Make sure the script doesn’t contain incorrect or extraneous URLs, and that the URLs appear in the correct sequence. While recording, the script writer may have browsed to an unrelated favorite website; this can be caught by using the test tool’s “playback” feature to confirm what the script actually does.
- Identify all dynamic data (returned in server responses) present in the script and correlate it. It can usually be found by recording the script twice and comparing the two recordings.
- Parameterize scripts to support a dynamic data set. With parameterized data, every simulated user can exercise the same path while avoiding cached responses and exercising database interactions properly.
- Use proper checkpoint(s) / assertion(s) for all steps involved in the transaction. The absence of checkpoints can report misleadingly good response times when a page is not downloaded completely / correctl…
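Correlation and parameterization can be sketched in a few lines of Python. The `csrf_token` field name and the credential pool are hypothetical, standing in for whatever dynamic data a comparison of two recordings reveals:

```python
import itertools
import re

def correlate(html):
    """Extract a server-generated token (hypothetical hidden-field name) so
    the next request can echo it back instead of replaying a recorded,
    now-stale value."""
    m = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return m.group(1) if m else None

def parameterized_users(credentials):
    """Cycle through a data set so each virtual user sends distinct data,
    defeating caches and exercising real database interactions."""
    pool = itertools.cycle(credentials)
    return lambda: next(pool)
```

A load script would call `correlate()` on each response before building the next request, and draw each virtual user's login from `parameterized_users()` rather than hard-coding the recorded account.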