
Monday, July 17, 2017

Where is Manual Testing in Continuous Deployment?

The transformation from Waterfall to Agile is almost over. Most companies are embracing Agile for their release management; some have perfected it, and others are catching up. A few have moved forward and taken Agile to the next step by adopting Continuous Delivery / Continuous Deployment. All these changes suggest a clear pattern: end users' expectations are being valued and served quickly, with quality delivery. The day might not be far off when users can track delivery of their favorite features or changes much like packages being tracked via FedEx (or any other courier service) :). Does this suggest that there won't be any place for manual testing in the future? In this blog, we will try to explore the answer.

Before we delve into details, it is important to understand the difference between Continuous Delivery and Continuous Deployment. In Continuous Delivery, the decision to release to production is manual, and a release can be delayed depending on the risk or on who is taking the decision. In Continuous Deployment, the build is automatically deployed to production once the code is checked in. In Continuous Delivery, there is still time for manual or exploratory testing on staging environment(s) before taking the build to production.

How about manual testing in “Continuous Deployment (CD)”? When is it done, or does it happen at all? Is it done by the developer alone, or together with a tester before code check-in? Are defects found and raised directly by users? Won't the CD approach affect the brand name of the company offering the services? Testing of critical scenarios, scenarios with financial impact, usability & compliance, a few crazy scenarios etc. can't be left to bots alone and needs to be accompanied by manual testing. I was pursuing these questions when another question popped up: are there companies who release their code using the Continuous Deployment (NOT Continuous Delivery) approach? Below is a list of a few (not ALL) such companies and their approach to conducting manual testing in CD.

Continuous Deployment at IMVU: Doing the impossible fifty times a day
In praise of continuous deployment: The story
Intercom - Why continuous deployment just keeps on giving
Etsy - Quantum of Deployment
WealthFront - DevOps Cafe on Continuous Deployment and Operations Dashboards

How & when is manual testing conducted?
In the current world, end user applications are getting complex and depend on many other services or applications. Testing all scenarios manually is not practical, but testing critical scenarios directly in production (TiP) is feasible. Does this mean the quality of the service or application is being compromised? No. In reality (in TiP), a new feature or change does not have to be visible to end users as soon as the new build is released to production. There are ways to keep some features or changes hidden from end users until they are validated by testers in production. Initially, the testers in the company validate the changes, which are then propagated to beta users. Once they are satisfied, the feature or change is released slowly to all users. In the meantime, if defects are found, the release can be rolled back without affecting ALL users.
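The "hidden until validated" mechanism described above is usually implemented with feature flags and a staged rollout. Below is a minimal Python sketch of the idea; the class name, user tiers and hash-bucketing scheme are illustrative, not any specific vendor's API:

```python
import hashlib

class FeatureFlag:
    """Gate a deployed-but-unreleased feature to internal testers,
    then beta users, then a growing percentage of all users."""

    def __init__(self, name, internal_users=(), beta_users=(), rollout_pct=0):
        self.name = name
        self.internal_users = set(internal_users)
        self.beta_users = set(beta_users)
        self.rollout_pct = rollout_pct  # 0-100, for the general population

    def is_enabled(self, user_id):
        if user_id in self.internal_users:   # company testers validate first
            return True
        if user_id in self.beta_users:       # then beta users
            return True
        # Deterministic bucketing: the same user always lands in the same
        # bucket, so a partial rollout stays stable across requests.
        bucket = int(hashlib.sha256(
            f"{self.name}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < self.rollout_pct

flag = FeatureFlag("new-checkout", internal_users={"qa1"}, beta_users={"beta7"})
print(flag.is_enabled("qa1"))          # prints True: internal tester sees it
print(flag.is_enabled("random-user"))  # prints False: rollout_pct is still 0
```

Rolling back is then just setting `rollout_pct` back to 0, without redeploying the build.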

Does this mean that, even after releasing to production, it can take a long time before changes are propagated to users?
In CD, regression testing gets done in almost all phases. The developer does manual testing before checking in their code, followed by automated static validation, unit testing and automated regression tests. The build is deployed and validated in multiple environments in an automated fashion before being deployed to the production environment. In all phases / environments, monitors are in place that automatically test for reliability and other details. There are bots in the test system that keep performing real transactions using real credit / debit cards etc., making it difficult for defects to get through to production. Monitors watch for any regression and give alerts in advance. Missed scenarios are added to the automation test suite quickly for the next set of releases.
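The monitors and transaction bots mentioned above can be as simple as scheduled synthetic checks that exercise a real user path and raise an alert on failure or latency regression. A sketch under that assumption (the check functions and the latency threshold are hypothetical):

```python
import time

def run_synthetic_check(name, check_fn, max_latency_s=2.0):
    """Run one synthetic transaction and classify the result.
    Returns (name, status) where status is 'ok', 'slow' or 'failed'."""
    start = time.monotonic()
    try:
        check_fn()
    except Exception:
        return (name, "failed")          # alert: functional regression
    elapsed = time.monotonic() - start
    if elapsed > max_latency_s:
        return (name, "slow")            # alert: reliability regression
    return (name, "ok")

def broken_checkout():
    # Stand-in for a real transaction bot hitting a production endpoint
    # with a test credit/debit card; here it simulates a server error.
    raise RuntimeError("HTTP 500")

checks = {
    "login":    lambda: None,            # passes instantly
    "checkout": broken_checkout,         # fails
}

results = [run_synthetic_check(n, fn) for n, fn in checks.items()]
alerts = [n for n, status in results if status != "ok"]
print(alerts)  # prints ['checkout']
```

A real setup would run these on a schedule against production and wire the alerts into paging/rollback automation.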

As most of the testing gets done in an automated way, only a few critical scenarios are left for manual testing, and even that depends on the criticality of the feature / changes.

Why can't all services be deployed using CD?
In my opinion, CD is comparatively easy to apply to services where introduced defects will not create havoc: no loss of life, no hazard to health and no direct monetary loss. Many banks have also started applying CD, but not for features / changes that might involve monetary loss. In all these cases, CD can be adopted, but releasing changes to users can be delayed until the team is confident of the quality. Not every existing design architecture supports CD, but it can be tweaked over a period of time so that small changes or new features can coexist with other design elements and be toggled on / off as required, as with microservices.

Tuesday, July 11, 2017

How QA can benefit when an application is deployed using Docker / Containers?

What is Docker Container?
"A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment."
What is a Container?

What is the difference between a Virtual Machine and a Container?
"VMs and Containers differ on quite a few dimensions, but primarily because containers provide a way to virtualize an OS in order for multiple workloads to run on a single OS instance, whereas with VMs, the hardware is being virtualized to run multiple OS instances. Containers’ speed, agility and portability make them yet another tool to help streamline software development."
Docker containers vs. virtual machines: What’s the difference?

Hidden benefits of Docker / Container for QA
Quite a few companies have started deploying containers in the production environment lately, and the numbers are growing steadily. With this trend of companies going the Docker way, the day is not far off when QA will need to tweak their testing strategy for applications deployed in containers. We will discuss that test strategy some other time; in this blog, we will explore the hidden benefits of Docker / Containers for QA. One key benefit for QA is that it eases the setup of a clean test environment and drastically reduces the difference between development and production environments. Other benefits / scenarios are as follows -

»  A Docker image file can be shared with developers for a bug that is not reproducible on their machine / environment
»  Easier to catch configuration related issues while doing system testing
»  Reduces regression execution time, as subsets of the test suite can be executed against multiple containers in parallel
»  Testing of an application’s different configurations can be automated by bringing different Docker image files up and down
»  Similarly, an application can be tested against different flavours of databases (MySQL, SQL Server, Oracle etc.) programmatically
»  Negative scenarios like unavailability of databases, services etc. can be easily automated / tested by bringing containers up or down
»  An emulator or service virtualization can be containerized and shared across different environments when needed
»  QA professionals can set up a production-like environment on their laptop and execute tests against it
»  Scaling up web, app and database servers can be done programmatically with ease and less cost, which in turn helps in validating the reliability and scalability of the application earlier in the test cycle
»  Quick testing by peer(s) can be done for a freshly developed component without understanding the nitty-gritty of setting up the environment
»  Many testers can do exploratory testing in parallel w/o worrying about changing the state of the application for other testers
»  Helps in conducting bug bashes or crazy testing w/o fear of bringing the test environment down
»  A Docker image file can be sent to a remote testing team for early feedback (before they start their work the next day)
»  Test tool setups (like Selenium Grid) can also be containerized for easy & fast access by team members
»  Crowdsourced testing becomes simpler and can be conducted on a large scale with less cost and more ease
»  Workshops & trainings can be arranged easily w/o a mandatory need for internet access or fear of unavailability of the test environment
The hidden benefits of Docker for QA
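Several of the bullets above (parallel execution, multiple database flavours, negative testing by stopping a container) come down to scripting the Docker CLI. The sketch below only builds the commands rather than executing them; the image names, ports and the `testdb-` naming convention are illustrative assumptions, and a real harness would run the commands via `subprocess`:

```python
DB_IMAGES = {
    "mysql":    ("mysql:8.0", 3306),
    "postgres": ("postgres:15", 5432),
    # SQL Server / Oracle images would slot in the same way.
}

def docker_run_cmd(flavour, host_port):
    """Build a `docker run` command for a throwaway test database.
    --rm removes the container when stopped, keeping environments clean."""
    image, container_port = DB_IMAGES[flavour]
    return ["docker", "run", "-d", "--rm",
            "--name", f"testdb-{flavour}",
            "-p", f"{host_port}:{container_port}", image]

def docker_stop_cmd(flavour):
    """Stopping the container simulates 'database unavailable'
    for negative-scenario testing."""
    return ["docker", "stop", f"testdb-{flavour}"]

# One container per flavour, on consecutive host ports, so the same
# test suite can run against each database in parallel.
cmds = [docker_run_cmd(f, 15000 + i) for i, f in enumerate(DB_IMAGES)]
print(cmds[0][:3])  # prints ['docker', 'run', '-d']
```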

Sunday, July 2, 2017

A Brief on DevOps

What is DevOps?
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity. This speed enables organizations to better serve their customers and compete more effectively in the market. DevOps moves the focus from development to delivery—a subtle but important distinction.
What is DevOps?

History of DevOps
At the Agile 2008 conference, Andrew Shafer and Patrick Debois presented on "Agile Infrastructure". The term DevOps was popularized through a series of "devopsdays" events starting in 2009 in Belgium. Since then, devopsdays conferences have been held in many countries worldwide.
History of DevOps
The Incredible True Story of How DevOps Got Its Name

Practices for Effective DevOps
To realize the goals of effective collaboration & smoother operation, the practices below can be adopted:
»  Active participation of all stakeholders (developers, operations staff, testers, support people)
»  Automated regression testing, extending to TDD & BDD
»  Integrated Configuration Management, where developers not only focus on solution level but also should consider production configuration issues and rest of organization’s infrastructures (test environments)
»  Integrated Change Management, where development team should work closely with Operation & testing team to understand the implication of any technology changes at organization level
»  Implementing Continuous Integration with automated regression testing and static code analysis
»  Continuous deployment planning along with operation, testing and support team
»  Continuous deployment from one sandbox environment to next one until production environment
»  Support from developers during critical production issues, from the beginning
»  Application monitoring other than infrastructure monitoring
»  Real time insights to organization’s governance team through automated dashboard
Top 10 Practices for Effective DevOps

Continuous Delivery Tools Chain
The key to an effective DevOps implementation is understanding your tools and implementing an effective continuous delivery tool chain.

»  Orchestration and Deployment Pipeline Visualization — Deployment pipeline steps can be built and visualized by integrating the entire existing tool chain; it helps the team detect delays & wait times between steps. Tools: ElectricCommander, CA LISA, IBM UrbanCode, XebiaLabs XL
»  Version Control — Source code and configuration files are version controlled; all text-based files changed by team members should be added into version control. Tools: Git, Mercurial, Perforce, Subversion, TFS, Bazaar, CVS
»  Continuous Integration — Integrate new code with the stable main release line and alert stakeholders if it causes issues in the final product. Tools: Jenkins, Travis CI, ThoughtWorks GO, CircleCI, TeamCity, Bamboo, Gitlab CI
»  Continuous Inspection — Automatic & continuous audit of code quality in terms of maintainability, coding standards, future bugs etc. Tools: SonarQube, CheckStyle, JavaNCSS, CPD, FindBugs, PMD
»  Artifact Management — Focus is on packaged artifacts like application assets, virtual images, configuration data, infrastructure code etc. Artifacts are identifiable, versioned and immutable; package metadata identifies how and when each artifact was tested and against which environment. Tools: Nexus, Artifactory, Archiva
»  Test Automation — The goal should be to automate the regression suite completely, except scenarios which can’t be automated. Tools: JMeter, Selenium/WebDriver, Cucumber (BDD), RSpec (BDD), SpecFlow (BDD), LoadUI (Performance), PageSpeed (Performance), Netem (Network Emulation), SoapUI (Web Services)
»  Environment Automation — Test environments can be brought up on demand using automation tools, which can provision VMs and apply configuration templates. Tools: Vagrant, Docker, Packer
»  Server Configuration and Deployment Management — Tools are used to deploy binaries into the required environment(s); the team needs to ensure the process is fully automated and capable of rolling back to the previous stable version w/o any issues. Tools: Ansible, Chef, Puppet, SaltStack
»  Monitoring and Reporting — Log files from all systems can be aggregated at a centralized location, indexed and made searchable from a web browser; real-time insights to the organization’s governance team through automated dashboards. Tools: Application: New Relic, Dynatrace, AppDynamics; Infrastructure: Nagios, Sensu; Logs: Splunk, Fluentd, Heapster, Logstash, Prometheus, WeaveScope

The Continuous Delivery Toolchain

Monday, June 12, 2017

Roles & Responsibilities of a Test Manager

The main role of a test manager is to make the testing efforts in an organization successful, and to achieve that, he/she carries multiple responsibilities. In this article, I have tried to list the major activities a test manager is responsible for. Ultimately, the responsibilities and activities depend on the organization's goals. Test managers depend on their team to carry out these activities, so composing and building the team is one of the factors for their success.

An effective test manager not only understands the discipline of testing but also has the capability to manage and implement a testing process in the organization. They have skills in leading a team, communicating and advocating quality, and can show the ROI of the testing team to the organization.

A test manager is majorly responsible for -
» Implementing Testing Discipline
» Planning & Management
» Team Building
» Measuring, Monitoring & Improvement
» Positive Communication / Diplomacy

Implementing Testing Discipline
Implementing the testing process in the organization. The testing process includes -
» Planning, monitoring & controlling of testing activities
» Analysis of "what" test conditions (features, functionalities etc.) to be tested
» Design "how" to test the identified test conditions
» Implementation of test schedule
» Test prioritization and its execution
» Documentation and reporting of test progress
» Evaluation of exit criteria
» Implementation of test closure activities (test completion check, test artifacts handover, lessons learned, archiving of results, logs, reports and other documents)
Selecting, customizing & implementing appropriate templates & standards for the work products for project(s) or organization — Templates for test plan, test case, test report, test strategy, test scenario, traceability matrix, checklists etc.
Defining, implementing and evolving an appropriate metrics and measurements framework — Metrics to track project cost, product quality, process efficiency and people productivity
Making a (test) plan, tracking progress and reporting to stakeholders — Plan for scope, strategy, environment, schedule, deliverables, entry / suspension / resumption criteria, exit criteria, roles & responsibilities and risk mitigation
Creating estimates for all test process activities, using all applicable estimation techniques — Some of the estimation techniques are intuition, guesses & past experience, Work Breakdown Structure (WBS), team estimation sessions, company standards and norms, percentage of overall project estimation, organizational history and metrics, industry averages etc.
Developing a test execution schedule, which is complete and consistent with respect to the test objectives, test strategy and test plan — Use risks, prioritization, test environment, data dependencies and constraints to come up with the schedule
Establishing test object delivery processes and ensuring version control during test execution — Define and implement configuration management, release management and change management
Incorporating test design techniques into the project and test processes — Test case design techniques like Combinatorial Testing (e.g. pairwise testing), Boundary Value Analysis, Equivalence Partitioning, Decision Table, Cause Effect Graphing, State Transition, Classification Tree, etc.
Implementing retrospective notes, post project closure — To evaluate processes and discover areas of improvement
Implementing a defect management process for an organization and explaining it to all stakeholders — Includes the defect report workflow, states and transitions, and the level of detail necessary to capture
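To make the estimation responsibility above concrete, here is a sketch of three-point (PERT-style) estimation, a common formalization in the same family as the team-estimation and past-experience techniques listed; the activities and numbers are made up:

```python
def three_point_estimate(optimistic, likely, pessimistic):
    """PERT weighted estimate and its standard deviation.
    The 'likely' case is weighted 4x relative to the extremes."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical test activities estimated in person-days
# as (optimistic, likely, pessimistic) triples.
activities = {
    "design test cases": (3, 5, 10),
    "environment setup": (1, 2, 6),
    "regression run":    (2, 4, 7),
}

total = sum(three_point_estimate(*t)[0] for t in activities.values())
print(round(total, 2))  # prints 12.17
```

The per-activity standard deviations give the manager a way to report a confidence range to stakeholders rather than a single number.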

Planning & Management
Determining the usefulness of different standards for the specific context in which testing is occurring or will occur, as per the organization’s policy — Standards can provide a reference to proven best practices and a basis for organizing the test process. Some industry standards are ISO 25000, IEEE 829 & IEEE 1028, BS 7925-2, PMI, PRINCE2, ITIL etc.
Analyzing the project risks and coming up with different risk management options — Options like mitigation, contingency, transfer or acceptance
Identification of the activities and resources required to implement objectives identified in the test strategy — Activities like test case design, environment setup, automation, test execution, reporting etc. and resources like people, software, hardware, infrastructure etc.
Identification of methods for gathering and tracking the metrics — Data should be gathered and tracked across software development life cycle phases
Identification of test tools — Tools for test management, defect management, test automation, test execution, test result tracking, reporting etc.
Identification of correct strategies for the testing project / releases / organization — Test strategies like risk based, reactive, requirement based, model-based, methodical (a predetermined set of test conditions from various quality standards), process / standard compliance, consultative, regression-averse and checklist based
Doing a cost-benefit analysis before acquiring or building a tool and presenting the return on investment (ROI) details to management for approval of the budget — Should consider both recurring and nonrecurring costs for tool evaluation, selection and owning the license
Understanding and analyzing the licensing scheme of the selected tool — General Public License (GNU), licensed per named user, licensed per installation, licensed per client, floating license, etc.
Checking and analyzing the legal ramifications of redistributing the software in the organization — Most of the time, when an open source tool is used, the resultant software is bound by the same license it was received under
Considering all the negative & positive issues while developing custom tools for the project or organization — Custom tools are needed when no vendor or open source tools exist, e.g. for a proprietary system
Analyzing the stakeholders, circumstances and needs of a software program / test needs and identifying the optimal testing activities
Analyzing and calculating the ROI of testing and providing business value to the organization
Identification of non-functional tests to perform based on risk and constraints — Non-functional tests like performance, security, compliance, usability, compatibility, accessibility etc.
Requirements engineering and management — Needs to consider requirements during the scoping and estimation of test effort
Managing and growing the testing assets — Like team members, test tools, test frameworks and the test process

Team Building
Setting up the skill gap framework — To analyze the strengths and weaknesses of team members related to the system under test, domain, automation, business knowledge, software testing, communication etc.
Analysis of the skills assessment spreadsheet and setting up trainings and skill development programs — Training options like in-house, cross training, self training, external training, custom training, mentoring
Setting performance goals for individuals for improving their skill set
Hiring the right resources for the company and not only for the project
Scheduling & arranging relevant trainings for team members — Trainings for process improvement, skills development (automation, domain etc.), test tools
Understanding the factors required for successful use of staffing strategies — Test team staffing strategies can be distributed, outsourced or insourced
Retaining skilled test professionals

Measuring, Monitoring & Improvement
Assessing existing process capability & maturity and bringing improvement(s) — Improve the test process using industry accepted techniques / models like TMMi, STEP, CTP, TPI Next, etc.
Reviewing testing work products using the appropriate techniques and with the appropriate participants and stakeholders — Review techniques can be informal, technical, inspections or walkthroughs
Ensuring smooth functioning of tools throughout their life cycle — At the same time, assessing the ROI of these tools
Collection of accurate and timely information during the test process — To support accurate reporting and evaluation against existing criteria
Using a requirements traceability matrix to check completeness and consistency of designed test cases with respect to the defined test conditions
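The traceability-matrix check in the last item lends itself to automation: map requirements to the test cases covering them, then flag uncovered requirements (incomplete coverage) and orphan tests (inconsistent design). A sketch with hypothetical IDs:

```python
def traceability_gaps(requirements, test_cases):
    """requirements: iterable of requirement IDs.
    test_cases: dict mapping test-case ID -> list of requirement IDs covered.
    Returns (uncovered requirements, orphan test cases), both sorted."""
    covered = {req for reqs in test_cases.values() for req in reqs}
    uncovered = sorted(set(requirements) - covered)
    orphans = sorted(tc for tc, reqs in test_cases.items()
                     if not set(reqs) & set(requirements))
    return uncovered, orphans

reqs = ["REQ-1", "REQ-2", "REQ-3"]
tests = {
    "TC-10": ["REQ-1"],
    "TC-11": ["REQ-1", "REQ-3"],
    "TC-12": ["REQ-9"],   # traces to a requirement that no longer exists
}

print(traceability_gaps(reqs, tests))
# prints (['REQ-2'], ['TC-12']): REQ-2 is untested, TC-12 is an orphan
```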

Communication / Diplomacy
Putting the test team in the respectable position in the organization
Motivating an individual in a testing position
Communication with multiple audiences, including users, project team members, management, external testing groups and customers
Defining the process of data collection and classification throughout the defects management process
Explaining the defect statistics report to management — And evaluating the current capability of the existing testing process and software development process
Advocating the proper use of process and selection of a defect management tool
Facilitating successful management reviews and audits — To focus on the existing software process capabilities and improvement areas, particularly with respect to testware products
Implementing efficient (technical) reviews like requirement review, design review, code review etc. in projects and demonstrating the benefits of the review exercise
Explaining different test strategies and their effects on testing activities
Working with Development Managers to coordinate the delivery of test objects, including content and dates of each test release, as well as participating in defect management
Working with the development manager to ensure that testers are integrated into and aligned with activities in agile development
Analyzing the reported test results and providing insights and recommendations to enable project stakeholders to make decisions
Working with the Technical Support Manager to analyze production failures, in order to implement process improvements
Working with the Technical Support Manager to ensure proper delivery of test results during test closure — So that those involved in supporting the product after release are aware of known failures and workarounds
Understanding the precise nature of the stakeholder relationship with testing and how the test team serves the needs of the stakeholders — Without this, the testing process might not achieve optimal effectiveness and efficiency
Creating a well defined and convincing business case for testing in the organization, or optimizing testing to deliver good business value — Cost can be evaluated using categories like cost of prevention, cost of detection, cost of internal failure and cost of external failure
Defining and implementing the roles testing plays within the organization

Tuesday, March 8, 2016

Test Data Management (TDM)

“Data isn't information, any more than fifty tons of cement is a skyscraper” - Clifford Stoll

What is Test Data Management (TDM)?
TDM consists of managing the provisioning of required test data efficiently and effectively, while at the same time ensuring compliance with regulatory and organizational standards. Below are some building blocks of TDM
·         Data Subset – the process of slicing a part of the production database and loading it into the test DB
·         Data Masking – the process of masking sensitive fields in the complete data set
·         Data Archive – the process of storing a data snapshot so it can be restored later, per build / release / cycle
·         Test Data Refresh – the process of loading / refreshing the test data with the latest data from production
·         Test Data Ageing – a process required for time-based testing; depending on the scenario that needs testing, the given date is backdated or front-dated
·         Gold Copy – the baseline version of data that can be used for future releases
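Two of the building blocks above, Data Masking and Test Data Ageing, are easy to illustrate in code. This is a simplified sketch: the field names and masking rules are illustrative, and real TDM tools typically apply format-preserving, referentially consistent masking across whole databases:

```python
from datetime import date, timedelta

SENSITIVE = {"name", "card_number"}  # assumed sensitive-field list

def mask_record(record):
    """Replace sensitive fields; keep the last 4 card digits so the
    data still 'looks' real to the application under test."""
    masked = dict(record)
    for field in SENSITIVE & masked.keys():
        value = str(masked[field])
        if field == "card_number":
            masked[field] = "*" * (len(value) - 4) + value[-4:]
        else:
            masked[field] = "MASKED"
    return masked

def age_record(record, days):
    """Shift every date field forward (positive) or backward (negative)
    for time-based testing scenarios."""
    aged = dict(record)
    for field, value in aged.items():
        if isinstance(value, date):
            aged[field] = value + timedelta(days=days)
    return aged

row = {"name": "Alice", "card_number": "4111111111111111",
       "order_date": date(2017, 7, 1)}
print(mask_record(row)["card_number"])     # prints ************1111
print(age_record(row, -30)["order_date"])  # prints 2017-06-01
```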

Why do we need TDM?
Research shows that projects cancelled due to poor data quality are 15 percent more costly than successful projects of the same size and type. It has been noticed that over 10% of the defects raised in production are due to data issues that could easily have been caught during the various testing phases.
·         To create “right-sized” test databases that accurately reflect E2E business processes
·         To enable developers to correct defects early in the life cycle
·         To allow execution of comprehensive non-functional tests
·         To create realistic and manageable test databases by applying data sub-setting techniques
·         To safeguard customer privacy / security by applying data privatization techniques
·         To quickly and easily refresh data in test environments
·         To empower test teams to select and book test data sets
·         To ensure the data used is available for reproducing any reported bug

What are some of the indicators that your project needs TDM?
·         Testing deadlines slipping due to data related outages and / or data synchronization issues
·         Testers wasting more time preparing test data than doing the actual testing
·         Testers depending a lot on BAs to provide meaningful test data
·         High risk and penalties associated with not adhering to compliance and / or data privacy laws
·         Lots of false defects due to data related issues
·         Testers complaining about the complexity of creating test data for consumption
·         Test data as voluminous as production, hindering performance
·         Test data not being reused and every time being created from scratch (using the same process)
·         Big delays in providing test data while waiting for another system to get ready
·         As projects grow, the team complaining about managing the test data
·         Outsourced and / or off-shored testing services having access to the customer’s PII data

What are some of the major activities of TDM?
·         Acquiring an initial understanding of the test data landscape like a list of test regions, applications, types of data stores, frequency of data requests for each application etc.
·         Carrying out data profiling exercise for each of the individual data stores across the enterprise
·         Identify
o   Data types
o   Data dependencies
o   Data sources and providers
o   Tools for data extraction, masking, creating, loading and so on
o   Who needs test data, a tester, a developer or a vendor
o   When to refresh the test data and when to clean
o   In which phase of the cycle the test data will be used: unit, integration, system or UAT?
·         Assigning a version number to existing data
·         Identify test region(s) where data need to be loaded or refreshed
·         Restore “used” data to original “unused” state
·         Carrying out masking
·         Test data preparation
o    Cloning production databases
o    Generating synthetic data
o    Sub setting production data
·         Distribute unused data from other projects
·         Load data dump (masked or unmasked) to target region
·         Take a back-up of the new data (both databases & files) once the data is set up
·         Assign a version number to the backup and catalog it with a proper description
·         Refresh with data dumps (production slice or other regions)

Test Data Management Challenges
·         Data Requirements
o    How to synchronize and share test data among multiple applications and teams?
o    How to resolve contention of environments?
o    How to analyze existing data if it is not profiled properly?
o    How to handle sudden and immediate requests for test data during test execution?
o    How to ensure proper data distribution so as to prevent redundant or unused data?
o    How to ensure data reuse?
·         Data Validity & Consistency
o    How can it be ensured that the data has not ‘aged’ and has not become obsolete?
o    How are you planning to refresh test data on a regular basis to avoid poor data quality and data integrity?
o    How to manage complex and heterogeneous system coupled with different file formats having multiple touch points?
o    What is your strategy for proper versioning of data?
o    How to enable traceability of the end-to-end business process?
o    How to maintain traceability between test data to test cases to business requirements?
·         Data Privacy
o    How to mask sensitive personal information before migrating it to test environment(s)?
o    Are you aware about different government mandates and regulations in place that stipulate the data must be masked, de-identified or encrypted?
o    How to enable auditing of data?
·         Data Selection & Subsetting
o    How to plan a smaller subset of data in a scaled down, non production environment without risking coverage (of test data)?
o    How to plan subset of data in different format for different teams (DW, Performance, Functional, System etc.) without resulting in long test cycles?
·         Data Storage & Safety
o    Is your company ready for high storage, license and maintenance cost when copies of full production data are required in a test environment?
o    How many test environments require copies of full production data?
o    What is the policy for version control, access-security and backup mechanisms?
·         Data Refresh
o    How to manage impact of data refresh on ongoing projects?
·         Effort
o    DBA-like skills are required for the team managing TDM
o    Is there any separate team for data engineering, data provisioning, data mocking etc.?
o    Managing & maintaining referential integrity & data quality during data generation
o    What is the time taken in copying huge volume of production data to different environments?
o    How to strategize test data identification, extraction and conditioning?
o    Coordination with multiple stakeholders

Test Data Management strategy
Quality data is a must for testing business functionality in the test environment. However, managing data quality is often challenging due to complex relationships, limited infrastructure, sensitivity of data, and the lack of data conforming to business rules. A good test data management strategy not only ensures greater development and testing efficiency, but also helps organizations identify and correct defects early in the development process, when they are cheapest and easiest to fix. Any test data management strategy must deliver a steady stream of relevant test data to support ever-tightening development cycles, while avoiding testing bottlenecks.
·         Gathering & Analyzing test data
o    Does relevant production data exist that can be used as test data?
o    Test cases not covered by production data must be covered by newly created test data
·         Data Generation
o    Have you outlined a set of criteria to automatically generate the quality of data required?
o    Is the generated data reusable, or does it need to be generated every time?
o    Is the data generated from scratch, or is it a copied subset of production data?
·         Data de-identification
o    Mask corporate, client, employee, etc. information
o    Supports compliance with government and industry regulations
o    Consistently mask complete business objects (e.g. Customer Order)
o    Who will have access to this data? All internal team members or vendors doing testing?
o    Does the data need to be encrypted?
·         Data Planning
o    Capture E2E business process and the associated data for the testing
o    How to select a subset of data? How do you ensure the selected data is relevant?
o    Do we need 5x data for stress environment?
o    If cloning or migration of production data to test environments is required, should we clone the full data set or a portion (e.g. 60%)? What should be the periodicity of migration / cloning?
o    What is the amount of changes in the production database and amount of application changes?
·         Subset production data from multiple data sources
o    Subsetting creates realistic test databases small enough to support rapid test runs, but large enough to reflect the variety of production data
o    Create test data to force error and boundary conditions
·         Data Reuse
o    Have you labeled test data to correlate them to specific test cases?
o    Are test data labeled for release / build / cycle?
o    Can we categorize test data according to different testing stages like functional, stress?
·         Data Maintenance
o    What should be the schedule and frequency of refreshing the test data?
o    What is your plan for storing the data?
o    How often is it migrated to the test environment?
·         Data Refresh
o    Accommodate changing test requirements
o    Is it possible to automate data refresh?
·         Data Auditing
o    Can you trace the workflow from end to end?
o    Can you analyze the data from audit logs, and is it fit for purpose?
·         Cleaning up test environment post testing completion
o    How and when should test data be cleaned up after testing completes?
o    Are there any instances where altered test data cannot be cleaned up?
·         Automate test data result comparison
o    Automate identification of data anomalies and inconsistencies
·         Use of central repository with version control
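As one illustration of the de-identification points above, here is a minimal Python sketch (hypothetical field names; not any specific TDM tool's API) of deterministic masking: the same input always yields the same token, so masked keys still join consistently across tables and the complete business object survives masking.

```python
import hashlib

def mask_value(value, salt="tdm-demo-salt"):
    """Deterministically pseudonymize a sensitive value: the same input
    always maps to the same token, so joins across tables still work."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"CUST_{digest}"

customers = [{"cust_id": "alice@example.com", "tier": "gold"}]
orders = [{"cust_id": "alice@example.com", "amount": 120}]

masked_customers = [{**c, "cust_id": mask_value(c["cust_id"])} for c in customers]
masked_orders = [{**o, "cust_id": mask_value(o["cust_id"])} for o in orders]

# Referential integrity survives masking: the join key still matches.
assert masked_customers[0]["cust_id"] == masked_orders[0]["cust_id"]
```

Note that a production-grade approach would also manage the salt as a secret and consider format-preserving masking where downstream systems validate field formats.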

What are some pros and cons of cloning production databases?
·         Pros: It is relatively simple to implement
·         Cons
o    Expensive in terms of hardware, license and support cost
o    Time consuming – Increases the time required to run test cases due to large data volumes
o    Not agile: Developers, testers and QA staff can’t refresh the test data
o    Inefficient: Developers and testers can’t create targeted test data sets for specific test cases or validate data after test runs
o    Not collaborative between DBA and testing teams
o    Not scalable across multiple data sources or applications
o    Laborious: Production systems are typically large
o    Risky: Nonproduction environments might be compromised or misused (developers, testers and QA staff need realistic data to do their jobs—but they do not have a valid business reason to access sensitive data such as corporate secrets, revenue projections or customer information)

What are some challenges of using Production Data in a Test Environment (Production Cloning)?
·         Data security is one of the most crucial challenges, as production data can contain a lot of sensitive information like real customer details, vendor names, etc.  It can be overcome by data masking.
·         The data volume that needs to be dealt with is huge.  Think about 100K customers doing 5 transactions per hour: that is 500K transactions per hour, or roughly 12,000K transaction records added in a single day.  Just imagine the scale of data that needs to be loaded into the test environment.  It can be overcome by data subsetting.
·         Data can come from various sources like flat files, different relational databases, Excel, etc., and can be in various formats.  Maintaining data relationships and data integrity is another challenge.
·         Production cloning might force you to provision production-like infrastructure, which means higher costs
·         The additional cost of storing production data (e.g. 50TB) in different test environments
·         Increased load time from production to the test environment leaves less time for real testing
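As a sanity check on the volume figures, a quick back-of-the-envelope script (assuming the transaction load runs around the clock):

```python
# Back-of-the-envelope sizing for daily transaction growth.
customers = 100_000
tx_per_customer_per_hour = 5
hours_per_day = 24

tx_per_hour = customers * tx_per_customer_per_hour   # 500,000 transactions/hour
tx_per_day = tx_per_hour * hours_per_day             # 12,000,000 rows/day
print(f"{tx_per_hour:,} tx/hour -> {tx_per_day:,} rows/day")
```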

What are some pros and cons of generating synthetic data?
·         Pros: Safe
·         Cons
o    Resource-intensive: Requires a huge commitment from highly skilled DBAs with deep knowledge of the underlying database schema, as well as knowledge of implicit relationships that might not be formally detailed in the schema
o    Tedious: DBAs must intentionally include errors and set boundary conditions within the synthetic data set to ensure a robust testing process, which adds time to the test data creation process
o    Challenging: Despite the time and effort put forth by the DBA to generate synthetic test data, testers find it challenging to work with because synthetic test data doesn’t always reflect the integrity of the original data set or retain the proper context
o    Time-consuming: Process is slower and can be error-prone
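The "Tedious" point above, deliberately seeding errors and boundary conditions into synthetic data, can be sketched as follows (a toy generator with hypothetical fields, not the output of any real tool):

```python
import random

def synthetic_orders(n, seed=7):
    """Generate synthetic order rows, deliberately mixing in boundary
    and error values so validation logic actually gets exercised."""
    random.seed(seed)
    boundary_amounts = [0, -1, 0.01, 10**9]  # edge cases to force error paths
    rows = []
    for i in range(n):
        if i % 5 == 0:
            amount = random.choice(boundary_amounts)  # every 5th row is an edge case
        else:
            amount = round(random.uniform(1, 500), 2)
        rows.append({"order_id": i + 1,
                     "amount": amount,
                     "currency": random.choice(["USD", "EUR"])})
    return rows

sample = synthetic_orders(10)
```

The hard part a real DBA faces, and the reason this remains resource-intensive, is reproducing the implicit relationships between tables, which this single-table toy does not attempt.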

What are some pros and cons of Subsetting production databases?
·         Pros: Less expensive compared to cloning or generating synthetic test data
·         Cons
o    Skill-intensive: Without an automated solution, requires highly skilled resources to ensure referential integrity and protect sensitive data

What are some challenges in Data Subsetting?
·         Maintaining referential integrity is the biggest challenge.  Imagine fetching only 100 customer order records from 1 million customer order records without losing any context.
·         Maintaining data integrity of the subset of the data.  Imagine the customers' records living in an Oracle database while the customer order records live in SQL Server.
·         Maintaining data relationships across multiple sources.  For example, a vendor might provide a data feed in flat-file format for all customers' orders.
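The referential-integrity challenge above can be sketched in miniature (toy in-memory tables with hypothetical column names): subset the parent table first, then pull only the child rows that reference a kept parent, so no order ends up pointing at a missing customer.

```python
def subset_with_integrity(customers, orders, keep_ids):
    """Subset customers first, then keep only the orders that reference
    a kept customer, so no child row points at a missing parent."""
    kept_customers = [c for c in customers if c["id"] in keep_ids]
    kept_orders = [o for o in orders if o["customer_id"] in keep_ids]
    return kept_customers, kept_orders

customers = [{"id": 1}, {"id": 2}, {"id": 3}]
orders = [
    {"order_id": 10, "customer_id": 1},
    {"order_id": 11, "customer_id": 2},
    {"order_id": 12, "customer_id": 1},
]
subset_customers, subset_orders = subset_with_integrity(customers, orders, keep_ids={1})

# Every surviving order still has its parent customer in the subset.
assert all(o["customer_id"] in {c["id"] for c in subset_customers}
           for o in subset_orders)
```

Real schemas have many levels of foreign keys spanning multiple databases, which is exactly why the document notes that doing this without an automated tool is skill-intensive.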

Key features of TDM tool
TDM is about automating the provisioning of masked and synthetically generated data to meet the needs of test, development & QA teams. TDM is needed to minimize the risk of a data breach.  TDM helps in using production data safely in a test or development environment.  TDM can be deployed on premises, in the cloud, or via hybrid cloud configurations. Some of the tools in the TDM space are Datamaker, Optim, HP TDM, etc. Key features of a TDM tool should be:
·         Automatic discovery of sensitive data (locations) across databases
·         Ability to create synthetic data where production data can’t be used or doesn’t exist
·         Should be able to connect to distributed databases
·         Conformance and compliance team should be able to verify its functionality
·         Capability of masking data in place or while copying to test, support or outsource environment
·         Provision for smaller set of data requirements
·         Support for packaged applications
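The first feature, automatic discovery of sensitive data, can be sketched with simple pattern scanning over sample rows (toy regexes and hypothetical column names; real TDM tools ship far richer classifiers and scan database metadata as well):

```python
import re

# Toy patterns for illustration only; real classifiers are much richer.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def discover_sensitive_columns(table):
    """Scan sample rows and report which columns look sensitive."""
    hits = set()
    for row in table:
        for column, value in row.items():
            for label, pattern in PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    hits.add((column, label))
    return sorted(hits)

rows = [{"name": "Ann", "contact": "ann@example.com", "tax_id": "123-45-6789"}]
found = discover_sensitive_columns(rows)
print(found)  # [('contact', 'email'), ('tax_id', 'ssn')]
```

Columns flagged this way become the input to the masking step, which is how discovery and masking fit together in a TDM pipeline.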