Posts

Identification of Performance Objectives / Goals

Most of the time, stakeholders are not clear on what they actually want out of a performance run. After the run, the performance test engineer prepares a report containing many graphs, analyses and more, and shares it with the stakeholders, who end up confused or overwhelmed by the size of the report. The main reason is that they were never sure which answers they expected from it, and it becomes very difficult for the performance tester to explain or walk them through the report. Therefore, before conducting performance run(s), it is very important to set the expectation up front about which question(s) the run will answer. Below are some of the questions I ask before conducting a performance run, and I prepare the test strategy and report accordingly. Client Side Goals: Do you want to know whether the application is able to support a load of X concurrent users? …
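To make such goals concrete before the run, it can help to write them down as explicit, checkable numbers. Here is a minimal Java sketch of that idea; the class name, thresholds and measured values are assumptions for illustration, not from the original post.

```java
// Hypothetical illustration: capture the agreed client-side goals as explicit,
// checkable numbers before the run, so the report can answer them directly.
public class ClientSideGoals {
    // All names and values below are assumed placeholders.
    static final int    TARGET_CONCURRENT_USERS = 500;   // "X concurrent users"
    static final long   MAX_P90_RESPONSE_MS     = 2_000; // agreed response-time goal
    static final double MAX_ERROR_RATE_PERCENT  = 1.0;   // agreed error-rate goal

    // The verdict the report should state up front: were the agreed goals met?
    static boolean goalsMet(long observedP90Ms, double observedErrorRatePct) {
        return observedP90Ms <= MAX_P90_RESPONSE_MS
            && observedErrorRatePct <= MAX_ERROR_RATE_PERCENT;
    }

    public static void main(String[] args) {
        // Example: numbers measured at the target concurrency during the run.
        System.out.println("Goals met at " + TARGET_CONCURRENT_USERS + " users: "
                + goalsMet(1_750, 0.4));
    }
}
```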

Best Practices for Unit Test Case Automation

- Make each test orthogonal (i.e., independent) to all the others. Any given behavior should be specified in one and only one test; otherwise, if you later change that behavior, you'll have to change multiple tests.
- Don't make unnecessary assertions. Which specific behavior are you testing? It's counterproductive to Assert() anything that's also asserted by another test; it just increases the frequency of pointless failures without improving unit test coverage at all.
- Have only one logical assertion per test. Unit tests are a design specification of how a certain behavior should work, not a list of observations of everything the code does.
- Test only one code unit at a time. Architecture must support testing units (i.e., classes or very small groups of classes) independently, not all chained together. If you can't do this, then your architecture is limiting your work's quality – consider using Inversion of Control.
- Mock out all external services and state (see the sketch after this list). …
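To make a couple of these points concrete, here is a minimal JUnit 5 / Mockito sketch; the ExchangeRateService and PriceConverter types are invented purely for illustration. It tests one unit, mocks out the external service, and makes a single logical assertion.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical collaborators, defined here only to keep the sketch self-contained.
interface ExchangeRateService {            // external service to be mocked
    double rateFor(String currency);
}

class PriceConverter {                     // the single unit under test
    private final ExchangeRateService rates;
    PriceConverter(ExchangeRateService rates) { this.rates = rates; }
    double toLocal(double amountUsd, String currency) {
        return amountUsd * rates.rateFor(currency);
    }
}

class PriceConverterTest {
    @Test
    void convertsUsingTheCurrentExchangeRate() {
        // Mock out the external service: no network calls, no shared state.
        ExchangeRateService rates = mock(ExchangeRateService.class);
        when(rates.rateFor("EUR")).thenReturn(0.5);

        PriceConverter converter = new PriceConverter(rates);

        // One logical assertion: this test specifies conversion behavior only.
        assertEquals(5.0, converter.toLocal(10.0, "EUR"));
    }
}
```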

Memory Utilization & Garbage Collection Analysis

The goal should be to optimize garbage collection (GC) in such a way that its impact on application response time or CPU usage is minimized.
- Monitor the utilization of the different memory pools (young, survivor and old). Memory shortage is the number-one reason for increased GC activity.
- The young generation should be big enough to ensure that all temporary objects die there.
- If overall memory utilization keeps growing despite garbage collection, it is an indication of a memory leak and requires heap analysis.
- Monitor the churn rate (number of object allocations per interval) in the young generation. A high number of young collections can cause high response times and unnecessary copying of objects from the young generation to the old generation.
- Free tools that can be used for GC analysis include JConsole, jStat, Java VisualVM, JRockit Mission Control and the -verbose:gc flag of the JVM (see the sketch after this list for a programmatic alternative).
- A high GC activity generally …
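The same figures the tools above display can also be sampled from inside the JVM. A minimal sketch using the standard java.lang.management MXBeans follows; pool and collector names vary by JVM and garbage collector.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: read memory-pool utilization and GC counts from the standard
// platform MXBeans (the same data JConsole and VisualVM display).
public class GcSnapshot {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            if (usage == null) continue;   // pool may not be valid
            System.out.printf("%-30s used=%,d max=%,d%n",
                    pool.getName(), usage.getUsed(), usage.getMax());
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-30s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```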

Checklist for Designing Better Performance Test Scripts

- Make sure the script doesn't contain incorrect or extraneous URLs, and that the URLs are in the correct sequence. While recording, the script writer might have browsed to an unrelated popular website; this can be validated by using the test tool's "playback" feature to confirm what the script actually does.
- Identify all dynamic data (returned in responses from the server) present in the script and correlate it. Usually it can be found by recording the script twice and comparing the two recordings.
- Parameterize scripts to support dynamic data sets. With dynamic data, every simulated user exercises the exact same path but avoids cached responses and exercises the database interactions properly.
- Use proper checkpoint(s) / assertion(s) for all steps involved in the transaction (see the sketch after this list). The absence of checkpoints might result in misleadingly better response times when a page is not downloaded completely / correctly. …
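As a concrete illustration of a content checkpoint, here is a minimal Java sketch using the JDK's built-in HttpClient; the URL and the marker text are placeholders, and a real load-test tool would express the same idea with its own assertion facility.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of a checkpoint: the step passes only if the page actually
// contains the expected marker text, not merely because HTTP 200 came back.
// The URL and marker string are hypothetical placeholders.
public class CheckpointExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/orders")).build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        boolean statusOk  = response.statusCode() == 200;
        boolean contentOk = response.body().contains("Order Summary"); // checkpoint

        if (!(statusOk && contentOk)) {
            throw new AssertionError("Checkpoint failed: page incomplete or incorrect");
        }
        System.out.println("Checkpoint passed");
    }
}
```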

DW Testing - Test Ideas

Data Completeness
- Is all expected data being loaded?
- Is your DW handling updates over time?
- Is a delta pull being tested in addition to the full load?
- What happens when an error occurs during the ETL process? Is the application able to recover after the error? Does the ETL have the ability to resume from the point of failure?
- Are all data sources being considered for testing?
- Is the row count from source to target as per the requirements (see the sketch after this list)?
- Is there any extra data present in the target system?
- Are the unique values of key fields the same between the source data and the target data?
- Is data being truncated before it is put into the data warehouse? For example, the source column width might be 30 characters, whereas the destination might have only 20.
- Have the boundaries of each field been tested? For example, for a decimal(3) field include values of -99 and 999, and for date fields include the entire range of dates expected.
Data Transformation
- Is all data being transformed correctly according to the business …
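One way to automate the simplest of these completeness checks is to compare source and target row counts directly. A minimal JDBC sketch follows; the connection URLs and table names are placeholders, not from the original post.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of a data-completeness check: compare source and target row
// counts over plain JDBC. Connection URLs and table names are placeholders.
public class RowCountCheck {
    static long count(Connection conn, String table) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection source = DriverManager.getConnection("jdbc:source-db-url");
             Connection target = DriverManager.getConnection("jdbc:target-dw-url")) {
            long sourceRows = count(source, "sales_orders");        // assumed source table
            long targetRows = count(target, "fact_sales_orders");   // assumed target table
            if (sourceRows != targetRows) {
                throw new AssertionError("Row count mismatch: source=" + sourceRows
                        + " target=" + targetRows);
            }
            System.out.println("Row counts match: " + sourceRows);
        }
    }
}
```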

QA Metrics - Testing Progress Metrics

Start tracking before Test Execution
User Stories, Features or Requirements Coverage Progress
- Are test cases being authored for all required stories, features or requirements?
- Are test cases for important features or requirements being targeted first?
- Is the number of test cases (for stories, features or requirements) proportional to the effort required for their development?
- Are any features, stories or requirements being left over?
- Is the team on target in covering all required features, stories or requirements?
Test Case Readiness Progress
- When will all the test cases be ready to run?
- Will all the test cases be ready to run by the end of the iteration?
- How many test cases must the team still write and review?
- How many test cases are ready to run?
Test Automation Progress
- How many test cases have been automated? (A small percentage-calculation sketch follows this list.)
- How many test cases are being automated on a regular basis?
- Is the team on track in automating the test cases?
Start tracking during Test Execution …
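A couple of these counts are usually reported as percentages. Here is a minimal sketch of that arithmetic; all the numbers are made-up placeholders.

```java
// Minimal sketch of deriving readiness and automation progress percentages
// from raw test-case counts. All numbers are assumed placeholders.
public class TestingProgressMetrics {
    public static void main(String[] args) {
        int plannedTestCases = 400; // total test cases planned for the release
        int readyToRun       = 260; // authored and reviewed
        int automated        = 120; // automated so far

        double readinessPct  = 100.0 * readyToRun / plannedTestCases;
        double automationPct = 100.0 * automated  / plannedTestCases;

        System.out.printf("Test case readiness : %.1f%%%n", readinessPct);
        System.out.printf("Automation progress : %.1f%%%n", automationPct);
    }
}
```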

Manual Testing Core Activities

What are the core activities of testing, especially for manual testers, and how should we plan testing efforts? Most of the time I have seen the effort for review activities and for retesting of defects being heavily ignored, and the schedule gets hit because of that. The following core test activities should be considered while planning:
- Test Ideas (Not Reviewed): a test idea is a test case with only a summary and no details.
- Test Ideas (Reviewed L1): test ideas reviewed by a peer team member.
- Test Ideas (Reviewed L2): test ideas reviewed by the Test Lead / Test Manager.
- Test Ideas (Signed-off): test ideas reviewed by the client and signed off.
- Test Cases (Not Reviewed): a test case is an extension of a test idea, adding more detail around it (descriptive, detailed and clear steps for execution, test data information, etc.).
- Test Cases (Reviewed L1): test cases reviewed by a peer team member.
- Test Cases (Reviewed L2): test cases reviewed by the test lead / test manager. …