What is statistical testing?
A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.
Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.
Testing of software used to convert data from existing systems for use in replacement systems.
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
It is an approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.
A white box test design technique in which test cases are designed to execute branches.
An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
A test case that cannot be executed because the preconditions for its execution are not fulfilled.
A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.
Features of coverage measurement tools include support for:
i. Identifying coverage items (instrumenting the code).
ii. Calculating the percentage of coverage items that were exercised by a suite of tests.
iii. Reporting coverage items that have not been exercised as yet.
iv. Identifying test inputs to exercise as yet uncovered items (test design tool functionality).
v. Generating stubs and drivers (if part of a unit test framework).
A coverage tool is a tool that provides objective measures of what structural elements eg. statements, decisions, or branches have been exercised by a test suite.
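Feature i above (identifying which coverage items a test exercised) can be sketched in miniature with Python's built-in tracing hook. This is only an illustrative sketch: `classify` and `trace_coverage` are invented names, and a real coverage tool instruments code far more robustly.

```python
import sys

def trace_coverage(func, *args):
    """Run func under a line tracer and return the set of executed line numbers."""
    executed = set()

    def tracer(frame, event, arg):
        # Record 'line' events, but only for the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):           # made-up function under test
    if x > 0:
        return "positive"  # executed only when x > 0
    return "non-positive"  # executed only when x <= 0

# Different inputs exercise different statements, so the covered sets differ:
covered_pos = trace_coverage(classify, 5)
covered_neg = trace_coverage(classify, -1)
```

Lines present in one set but not the other correspond to statements a coverage report would flag as "not yet exercised".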
Features of test comparator include support for: i. Dynamic comparison of transient events that occur during test execution. ii. Post-execution comparison of stored data eg. in files or databases. iii. Masking or filtering of subsets of actual and expected results.
Test comparator is a test tool to perform automated test comparison. Automated test comparison is a process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
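The masking/filtering feature mentioned above can be sketched as follows; the timestamp pattern and the `compare` helper are assumptions for illustration, whereas real comparators let you configure masks for any volatile field:

```python
import re

# Volatile fields to ignore during comparison (here: a date-time stamp).
MASKS = [re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")]

def mask(text):
    """Replace every masked field with a fixed token."""
    for pattern in MASKS:
        text = pattern.sub("<MASKED>", text)
    return text

def compare(actual, expected):
    """Post-execution comparison with volatile subsets masked out."""
    return mask(actual) == mask(expected)

# The differing timestamps are not of interest to the test, so this passes:
assert compare("Report generated 2024-01-05 10:30:00 OK",
               "Report generated 2023-12-31 23:59:59 OK")
```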
Features of test harness and unit test framework tools include support for
i. Supplying inputs to the software being tested.
ii. Receiving outputs generated by the software being tested.
iii. Executing a set of tests within the framework or using the test harness.
iv. Recording the pass/fail results of each test (framework tools).
v. Storing tests (framework tools).
vi. Support for debugging (framework tools).
vii. Coverage measurement at code level (framework tools).
Unit test framework tool provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. The tool also provides other support for the developer, such as debugging capabilities.
Test harness is a test environment comprising the stubs and drivers needed to execute a test.
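A minimal sketch of the framework features above, using Python's `unittest` with a `Mock` standing in as a stub for a called dependency; `get_discount` and the rate service are hypothetical names:

```python
import unittest
from unittest.mock import Mock

def get_discount(customer_id, rate_service):
    """Component under test: depends on a rate service we stub out."""
    rate = rate_service.lookup(customer_id)  # call into the dependency
    return round(100 * rate, 2)

class DiscountTest(unittest.TestCase):
    def test_discount_uses_stubbed_rate(self):
        stub = Mock()                    # stub replacing the real service
        stub.lookup.return_value = 0.15  # canned answer
        self.assertEqual(get_discount(42, stub), 15.0)
```

Run with `python -m unittest`; the framework supplies the inputs, receives the outputs, and records the pass/fail result of each test, as listed in the features above.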
Features of test execution tools include support for
i. Capturing (recording) test inputs while tests are executed manually.
ii. Storing an expected result in the form of a screen or object to compare to the next time the test is run.
iii. Executing tests from stored scripts and optionally data files accessed by the script (if data driven or keyword driven scripting is used).
iv. Dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values.
v. Ability to initiate post execution comparison.
vi. Logging results of tests run (pass/fail, differences between expected and actual results).
vii. Masking or filtering of subsets of actual and expected results, eg. excluding the screen displayed current date and time which is not of interest to a particular test.
viii. Measuring timings for tests.
ix. Synchronizing inputs with the application under test, eg. waiting until the application is ready to accept the next input, or inserting a fixed delay to represent human interaction speed.
x. Sending summary results to a test management tool.
Capture and playback tool is a type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
Features of test design tools include support for: i. Generating test input values from: requirements, design models (state, data or object), code, graphical user interfaces, or test conditions. ii. Generating expected results, if an oracle is available to the tool.
Test design tool is a tool that supports the test design activity by generating test inputs from a specification that may be held in a Computer Aided Software Engineering (CASE) tool repository eg. a requirements management tool, or from specified test conditions held in a tool itself or from code.
Testware are the artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Features of configuration management tools include support for i. Storing information about versions and builds of the software and testware. ii. Traceability between software and testware and different versions or variants. iii. Keeping track of which versions belong with which configurations (OS, libraries, browsers). iv. Build and release management. v. Baseline. vi. Access control (checking in and out).
Configuration management tool provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
Features of defect management tools include support for i. Storing information about the attributes of incidents (eg. severity). ii. Storing attachments (eg. screenshots). iii. Prioritization of incidents. iv. Assigning actions to people (fix, confirmation test, etc.). v. Status of incident (eg. open, rejected, duplicate, deferred, closed). vi. Reporting of statistics/metrics about incidents (eg. number of defects with each status, total number of defects raised, open, closed).
Defect management tool is a tool that facilitates the recording or status tracking of incidents or defects. They often have workflow oriented facilities to track and control the allocation, correction and re-testing of incidents (confirmation testing) and provide reporting facilities.
Features of requirements management tools include support for:
i. Recording requirements and their attributes (eg. priority, person responsible).
ii. Traceability through layers of requirements.
iii. Requirements change management.
iv. Static analysis, such as consistency checking and checking for violations of pre-defined requirements rules.
Requirements management tool is a tool that supports the recording of requirements and requirements attributes (eg. priority, person responsible) and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and checking for violations of pre-defined requirements rules.
Features of test management tools include support for:
i. Testware management.
ii. Scheduling of tests.
iii. Logging of results.
iv. Progress tracking.
v. Incident management.
vi. Test reporting.
Test management tool provides support to the test management and control part of a test process. Information in this tool can be used to monitor the testing process and to decide what actions to take. The tool also gives information about the component or system being tested. Test management tools help to gather, organize and communicate information about the testing on a project.
Decision table is a table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases. Decision table testing is a black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
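Decision table testing can be sketched as below. The loan approval rule and its condition/action names are invented for illustration; each entry in the table (one combination of causes) becomes one test case:

```python
# Hypothetical decision table: (employed, good_credit) -> expected action.
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
    (False, False): "reject",
}

def decide(employed, good_credit):
    """The rule under test (a made-up implementation)."""
    if employed and good_credit:
        return "approve"
    if employed or good_credit:
        return "refer"
    return "reject"

# One test case per combination of inputs shown in the decision table:
for (employed, good_credit), expected in DECISION_TABLE.items():
    assert decide(employed, good_credit) == expected
```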
Testing techniques may be broadly classified into static testing and dynamic testing.
Static testing techniques do not execute the code and are generally used before any tests are executed on the software. Reviews, walkthroughs and inspection constitute static testing techniques.
Dynamic testing techniques are subdivided into three categories: specification-based (black box) techniques, structure-based (white box) techniques, and experience-based techniques.
Compiler is a software tool that translates programs expressed in a high order language into their machine language equivalents.
Back-to-back testing is testing in which two or more variants of a component or system are executed with the same inputs; the outputs are compared and analyzed in case of discrepancies.
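A small sketch of back-to-back testing: two variants of the same computation (here, a made-up example of an arithmetic mean vs. a running-mean rewrite) are fed the same random inputs and their outputs compared:

```python
import math
import random

def mean_v1(xs):
    """Variant 1: the straightforward implementation."""
    return sum(xs) / len(xs)

def mean_v2(xs):
    """Variant 2: incremental (running) mean, eg. an optimised rewrite."""
    m = 0.0
    for i, x in enumerate(xs, 1):
        m += (x - m) / i
    return m

# Execute both variants with the same inputs and compare the outputs.
random.seed(0)
for _ in range(100):
    data = [random.uniform(-1e3, 1e3) for _ in range(random.randint(1, 50))]
    assert math.isclose(mean_v1(data), mean_v2(data),
                        rel_tol=1e-9, abs_tol=1e-9), data
```

Any discrepancy flagged by the assertion would then be analyzed to find which variant is at fault.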
Top-down testing is an incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
State table is a grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
State diagram depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another.
A black box test design technique in which test cases are designed to execute valid and invalid transitions.
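State transition testing can be sketched from a state table as below. The document workflow, its states and events are invented for illustration; pairs missing from the table are the invalid transitions:

```python
# Hypothetical state table: (state, event) -> next state.
STATE_TABLE = {
    ("draft",     "submit"):  "review",
    ("review",    "approve"): "published",
    ("review",    "reject"):  "draft",
    ("published", "archive"): "archived",
}

def step(state, event):
    """Apply one transition; invalid transitions raise an error."""
    try:
        return STATE_TABLE[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Valid-transition test case: walk a path through the state diagram.
state = "draft"
for event in ["submit", "reject", "submit", "approve", "archive"]:
    state = step(state, event)
assert state == "archived"

# Invalid-transition test case: the event must be rejected, not accepted.
try:
    step("draft", "approve")
    raise AssertionError("invalid transition was accepted")
except ValueError:
    pass
```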
The capability of the software product to be upgraded to accommodate increased loads is scalability. Testing to determine the scalability of the software product is called scalability testing.
Audit trail is the path by which the original input to a process (eg. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.
Audit is an independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
1. The form or content of the products to be produced.
2. The process by which the products shall be produced.
3. How compliance to standards or guidelines shall be measured.
Bottom-up testing is an incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.
Big-bang testing is a type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.
The capability of the software product to use appropriate amounts and types of resources, eg. the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. The process of testing to determine the resource utilization of a software product is called resource utilization testing.
Release note is a document that identifies test items, their configuration, current status and other delivery information. It is delivered by development to users, to testing, and possibly other stakeholders, at the start of a test execution phase.
Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
CMMI is a framework that describes the key elements of an effective product development and maintenance process. CMMI covers best practices for planning, engineering, and managing product development and maintenance. CMMI is designated successor of the CMM.
CMM is a five level staged framework that describes the key elements of an effective software process. CMM covers best practices for planning, engineering, and managing software development and maintenance.
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format is known as off-the-shelf software.
A test case with concrete (implementation level) values for input data and expected results. Logical operators from high-level test cases are replaced by actual values that correspond to the objectives of the logical operators.
A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.
A defect in a program's dynamic store allocation logic that causes it to fail to release memory after it has finished using it, eventually causing the program to fail due to lack of memory.
Tests aimed at showing that a component or system does not work. Negative testing is related to the tester's attitude rather than a specific test approach or test design technique, eg. testing with invalid input values or exceptions.
Agile development is an iterative type of software development life cycle model. Extreme programming (XP) is currently one of the most well-known agile development life cycle models. Some of the important characteristics of XP or extreme programming are:
1. Demands an onsite customer for continual feedback and to define and carry out functional acceptance testing.
2. Promotes pair programming and shared code ownership amongst the developers.
3. States that component test scripts shall be written before the code is written and that those tests should be automated.
4. States that integration and testing of the code shall happen several times a day. With XP there are numerous iterations, each requiring testing. XP is not about doing extreme activities during the development process; it is about doing known value-adding activities in an extreme manner.
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions is called robustness. Testing to determine the robustness of the software product is robustness testing.
RAD is formally a software development life cycle model where parallel development of functions and subsequent integration takes place. Components or functions are developed in parallel as if they were mini projects, the developments are time boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development is possible using this methodology. An early business focused solution in the market place gives an early return on investment(ROI) and can provide valuable marketing information for the business.
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Eg. management review, informal review, technical review, inspection and walkthrough. Reviews help to detect defects at an early stage thereby reducing rework costs and improving quality. Review is a type of static testing.
The capability of the software product to interact with one or more specified components or systems is called interoperability. The process of testing to determine the interoperability of a software product is called interoperability testing.
Test approach is the implementation of test strategy for a specific project. It typically includes the decisions made based on the project goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations is termed as reliability. The process of testing to determine the reliability of a software product is called reliability testing.
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment is called maintenance. Testing the changes to an operational system or the impact of a changed environment to an operational system is maintenance testing.
The ease with which the software product can be transferred from one hardware or software environment to another is called portability. The process of testing to determine the portability of a software product is called portability testing.
Stub : A skeletal or special purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
Driver : A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner.
A stub is called from the software component to be tested; a driver calls a component to be tested.
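That calling relationship can be sketched as follows; `price_with_tax` and the tax rate component are hypothetical names used only to illustrate the roles:

```python
def tax_rate_stub(country):
    """Stub: skeletal stand-in for the called (not yet available) rate component."""
    return 0.20  # canned answer, regardless of country

def price_with_tax(net, country, rate_lookup):
    """The component under test; it CALLS the rate component via rate_lookup."""
    return round(net * (1 + rate_lookup(country)), 2)

def driver():
    """Driver: replaces the real caller of price_with_tax and exercises it."""
    assert price_with_tax(100.0, "XX", tax_rate_stub) == 120.0
    assert price_with_tax(0.0, "XX", tax_rate_stub) == 0.0

driver()
```

The stub is called *by* the component under test; the driver *calls* the component under test, exactly as the definition above states.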
Component testing is the testing of individual software components. Also known as unit, module or program testing, it searches for defects in, and verifies the functioning of, software items (eg. modules, programs, objects, classes etc.) that are separately testable.
RTM or traceability is important in the following most common scenarios taking place in software testing:
1. Requirements for a given function or feature have changed; some of the fields now have different ranges that can be entered, so test cases have to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if traceability is followed.
2. A set of tests that has run OK in the past has started to have serious problems. Traceability between the tests and the requirements being tested enables the functions or features affected to be identified more easily.
3. Before delivering a new release, we want to know whether or not we have tested all the specified requirements in the specification document. With traceability one can quickly see which tests have passed and whether every requirement was tested.
Common test exit criteria include:
1. Critical or key test cases successfully completed. Certain test cases, even if they fail, may not be show stoppers.
2. Functional coverage, code coverage, meeting the client requirements to certain point.
3. Defect rates fall below certain specified level and High priority bugs are resolved.
4. Project progresses from Alpha, to beta and so on.
5. The testing budget of the project has been depleted, or the cost of continued testing does not justify the project cost.
6. Project deadline and test completion deadline.
An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
A flaw in a component or system that can cause the component or system to fail to perform its required function, eg. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Quality is the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
In the waterfall model, tasks are executed in sequential fashion. The waterfall model starts with a feasibility study and flows down through the various project tasks, finishing with implementation. Design flows into development, which in turn flows into build, and finally into test. User requirements > System requirements > Global design > Detailed design > Implementation > Testing. Testing tends to happen towards the end of the project life cycle, so defects are detected close to the live implementation date. The V model was developed to address some of the problems experienced with the traditional waterfall model; it illustrates how testing activities can be integrated into each phase of the SDLC.
Internal factors that influence the decision about which technique to use are:
1. Models used - The testing technique depends on which model is being used, eg. if the specification contains a state transition diagram, state transition testing would be a good technique to use.
2. Tester knowledge/experience - How much testers know about the system and about testing techniques will clearly influence their choice of testing techniques.
3. Likely defects - Knowledge of likely defects could be gained through experience of testing a previous version of the system and from previous levels of testing on the current version.
4. Test objective - If the objective is thorough testing, then more rigorous and detailed techniques should be chosen. If the objective is only to gain confidence, then use cases would be a sensible approach.
5. Documentation - Whether or not documentation exists and whether or not it is up to date will affect the choice of testing technique.
6. Life cycle model - A sequential life cycle model will lend itself to the use of more formal techniques whereas an iterative life cycle model may be better suited to using an exploratory testing approach.
External factors that influence the decision about which technique to use are:
1. Risk - The greater the risk, the greater the need for more thorough and more formal testing.
2. Customer/contractual requirements - Sometimes contracts specify which techniques are to be used.
3. Type of system - The type of system eg. Embedded, graphical, financial etc. will influence choice of testing technique.
4. Regulatory requirements - Some industries have regulatory standards or guidelines that govern the testing techniques used.
5. Time and budget - Ultimately, the available time and budget will affect the choice of testing technique.
Template for test case specification includes:
1. Test case specification identifier.
2. Test items.
3. Input specifications.
4. Output specifications.
5. Environmental needs.
6. Special procedural requirements.
7. Intercase dependencies.
A document specifying a set of test cases (objective, inputs, test actions, expected results and execution preconditions) for a test item.
Template for test design specification includes:
1. Test design specification identifier.
2. Features to be tested.
3. Approach refinements.
4. Test identification.
5. Feature pass/fail criteria.
Test design specification is a document specifying the test conditions (coverage items) for a test item, the detailed test approach and associated high level test cases.
A black box test design technique in which test cases are designed to execute user scenarios.
A use case is a description of a particular use of the system by an actor (a user of the system). Each use case describes the interactions the actor has with the system in order to achieve a specific task. Use cases are a sequence of steps that describe the interactions between the actor and the system.
Configuration Management is the process of identifying and defining the items in the system, controlling the change of these items throughout their lifecycle, recording and reporting the status of items and change requests, and verifying the completeness and correctness of items.
Configuration management usually includes:
1. Identify configuration items eg. source code, test scripts, third party software, hardware, test documentation.
2. Version control of configuration items.
3. Release management.
4. Build management.
5. Controlling changes.
6. Tracking status.
Some of the common risks are:
1. Excessive change to the product that invalidates test results or requires an update to test cases, expected results and environments.
2. Insufficient or unrealistic test environments that yield misleading results.
3. Organizational issues such as shortage of resources, skills or training, problems with communicating and responding to test results, complexity of project team.
4. Hardware crunch, i.e. hardware shortages or failures resulting in time constraints.
5. Technical problems related to ambiguous, conflicting or non-prioritized requirements etc.
6. Software items not getting installed in test environment.
Risk mitigation involves actions implemented to reduce the impact and likelihood of a risk happening.
Risk is a factor that could result in future negative consequences, usually expressed as likelihood and impact. A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats. Risk management involves the strategy employed to prevent potential risks.
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
A test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. It is a hands-on approach involving minimal planning and maximum test execution.
Keyword driven testing is a scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
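A toy sketch of that arrangement: `Calculator` is a stand-in for the real application under test, and the keyword names are invented. Each keyword maps to a supporting script; the control script at the bottom interprets rows of (keyword, argument) test data:

```python
class Calculator:
    """Hypothetical application under test."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def clear(self):
        self.value = 0

calc = Calculator()

# Supporting scripts: one per keyword.
def kw_enter(arg):
    calc.add(int(arg))

def kw_clear(arg):
    calc.clear()

def kw_verify(arg):
    assert calc.value == int(arg), f"expected {arg}, got {calc.value}"

KEYWORDS = {"enter": kw_enter, "clear": kw_clear, "verify": kw_verify}

# Control script: interprets the test data, which would normally
# live in an external data file rather than inline.
test_table = [
    ("enter", "5"), ("enter", "7"), ("verify", "12"),
    ("clear", ""),  ("verify", "0"),
]
for keyword, arg in test_table:
    KEYWORDS[keyword](arg)
```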
Data driven testing is a scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
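A minimal data-driven sketch using the classic triangle-classification example (the function and the table contents are illustrative, not from any particular tool); one control loop runs every row of the table:

```python
import csv
import io

def triangle_type(a, b, c):
    """Made-up function under test."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# In practice this table would live in a spreadsheet or CSV file;
# it is inlined here so the sketch is self-contained.
table = io.StringIO("""a,b,c,expected
3,3,3,equilateral
3,3,4,isosceles
3,4,5,scalene
""")

# A single control script executes all the tests in the table:
for row in csv.DictReader(table):
    actual = triangle_type(int(row["a"]), int(row["b"]), int(row["c"]))
    assert actual == row["expected"], row
```

Adding a test then means adding a row of data, not writing a new script.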
BVA is a black box test design technique in which test cases are designed based on boundary values. A boundary value may be an input or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge. For example, if an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b.
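The rule in the example above can be sketched directly (`boundary_values` is a made-up helper; a step of 1 assumes integer inputs):

```python
def boundary_values(a, b, step=1):
    """Boundary values for a range bounded by a and b: the edges
    themselves plus the values just below and just above each edge."""
    return sorted({a - step, a, a + step, b - step, b, b + step})

# Range bounded by 4 and 10:
assert boundary_values(4, 10) == [3, 4, 5, 9, 10, 11]
```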
In equivalence partitioning, the input data of a program is divided into different partitions so that test cases can be designed for each partition. The objective of equivalence partitioning is to derive test cases that uncover errors and can be carried out more efficiently. This is a black box technique. Eg. if an input condition specifies a range of 4 to 10, then one valid equivalence class (any value between 4 and 10, say 6) and two invalid equivalence classes (values less than 4, and values greater than 10) are defined.
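The 4-to-10 example above can be sketched as follows; `partition` and the class labels are invented names, and one representative value is tested per equivalence class:

```python
def partition(value, lo=4, hi=10):
    """Classify an input against the range 4..10 from the example."""
    if value < lo:
        return "invalid-below"
    if value > hi:
        return "invalid-above"
    return "valid"

# One representative test value per equivalence class:
representatives = {"valid": 6, "invalid-below": 2, "invalid-above": 15}
for expected_class, value in representatives.items():
    assert partition(value) == expected_class
```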
Dynamic testing involves the execution of the software application or system and finding defects. Dynamic testing may include different testing types, eg. unit testing, functionality testing, system testing, integration testing etc.
Testing of a system at specification or implementation level without execution of that software eg. reviews, walkthroughs or static code analysis.
Localization refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale-specific files for a specific region or language. This type of testing ensures that all the text present on the application's GUI, any text/messages that the application produces (including error messages/warnings), and the help/documentation have been localized. This type of testing is abbreviated as L10n, where 10 stands for the number of letters between the first l and last n in localization. Globalization is the term used for the combination of internationalization and localization.
Internationalization is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales. This type of testing is abbreviated as i18n where 18 stands for the number of letters between the first i and last n in internationalization.
Requirement traceability matrix is an important document to ensure that test case coverage is 100%. It is possible that, due to human error, there is no test case for an important requirement. In the RTM, requirements are mapped to test cases. The template of the matrix may contain requirement details like Req. ID/Status, Release Reference, Architecture Document Element ID, Test Case IDs, Program Name/ID, Dependent Requirement etc. The tester should ensure that all requirement IDs have been covered in the requirement traceability matrix.
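The coverage check an RTM supports can be sketched as a simple mapping; the requirement and test case IDs below are made up:

```python
# Hypothetical RTM reduced to: requirement ID -> test case IDs covering it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # the human error the text warns about: no test case
}

# Flag requirements that no test case traces back to.
uncovered = [req for req, tests in rtm.items() if not tests]
coverage = 100 * (len(rtm) - len(uncovered)) / len(rtm)
print(f"coverage {coverage:.0f}%, uncovered: {uncovered}")
```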
Severity of a defect is the impact the defect has on the application. It is possible that a high severity defect has low priority and vice versa. It is usually categorized as high, medium and low. Example: High severity and low priority – An application supports both a UI and a command line interface. A command line switch is not working correctly, so the defect has high severity. If the customers are not using this switch and prefer the UI, then it will have low priority.
Priority of a defect is how urgent it is that the defect should be fixed. Usually categorized as urgent, high, medium and low. Example: High priority and low severity – A missing company logo or a spelling mistake in the company name on the product is a high priority defect, but it has no impact on the functionality of the application, so the severity is low.
Testing methodology defines the process, set of rules and principles followed by the group concerned with testing the application. The following steps help ensure a proper process is being followed to create an application with minimum defects:
i. Test requirement analysis - Study of requirements.
ii. Test plan - Includes which features will be tested and which will not, types of testing, resources and responsibilities, assumptions, risk analysis, deliverables, tools to be used, which OS to be used, how defects will be tracked, which tool will be used to track defects, etc.
iii. Test design - Which types of testing will be carried out, eg. unit, system, integration, performance testing etc.
iv. Test execution - Preparing and executing test cases and the requirements traceability matrix, to ensure that no requirement is missed and to ensure good test coverage for critical requirements.
v. Defect tracking - Tracking of defects.
vi. Test automation - Implementing automation using tools like QTP, Silk Test etc.
vii. Test maintenance - Updating test cases and testing the software whenever it undergoes changes.
STP, Software Test Plan includes:
01. Scope of testing :: Which features will be tested and which features will not be tested.
02. Test entrance and exit criteria.
03. Test environment :: Information regarding hardware, software, environment configuration like browser compatibility, localization and globalization.
04. Test strategy :: Testing process: identify the requirements, test design, preparation of test cases, test data, test setup, establishing the test environment, test execution and test reporting. It also contains information regarding the types of testing to be used, like functionality testing, load testing, parallel testing etc., and the build process to be followed.
05. Information regarding defect tracking tool.
06. Roles and responsibilities :: Which resources will be working on the project and their roles and responsibilities, for proper communication with clients.
07. Test schedule :: Contains the detailed test schedule.
08. Assumptions :: Any assumptions made by testers while designing test cases, recorded to avoid miscommunication.
09. Risk management :: Risks may include a resource crunch, hardware unavailability, lack of domain knowledge etc. Mitigation is the plan for how testers will avoid or handle the identified risks.
10. Dependencies and constraints :: Information about internal or external dependencies in executing test cases.
11. Test deliverables :: Information about the documents (like STP, Test cases, BAR, TSR) with planned dates and responsible person.
12. Defect classification :: Defines the severity and priority levels for defects.
After alpha testing the product enters the beta stage. In beta testing the product is released at the customer's site and end users test it in a live environment.
As its name suggests, soak testing is the process of keeping the application under load for a prolonged period of time. This is useful to ensure the application runs as expected even when its resources are at maximum usage for a long time. Memory leaks and CPU usage are also monitored during soak testing. Compared to load testing, soak testing is performed over a much longer period of time.
Testing performed to check the behaviour of the application when the back end, front end or an important module of the application is not working. The application should be able to handle the situation and recover any lost data.
Alpha testing is done by customers at the developer's site, but outside the development area. Though alpha testing takes place in a controlled environment, this stage is useful for determining whether the product is stable and ready to release.
Testing performed to ensure that the interfaces between the back end and front end function correctly is called intersystems testing. Since the front end (VB, VC, CPP etc.) and back end (Oracle, Sybase, SQL etc.) are written in different languages, such testing is called intersystems testing. The interfaces between these applications should be able to talk to each other efficiently and correctly.
Compliance testing checks whether the product follows the standards agreed upon. This type of testing verifies whether configuration standards and all other agreed standards are met.
Testing performed on different platforms or different system configurations, to verify that the application is functional on all of them so that different types of users can use it, is called compatibility testing. The best example of compatibility testing is checking whether a particular site works in all browsers like Mozilla Firefox, IE6, IE8 etc.
Testing the smallest unit of the product is unit testing. It involves testing whether the units individually are functioning properly or not. Unit testing is usually carried out by developers.
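As a minimal sketch of unit testing with Python's `unittest` module (the `add` function here is a hypothetical unit under test, not from the text):

```python
import unittest

# Hypothetical unit under test: a small function tested in isolation.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the suite programmatically; developers would normally use
# `python -m unittest` from the command line.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method exercises the unit with one scenario, which keeps failures easy to localize to a single function.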
User interface testing is the way to test how successful is the product in interacting with the end user. It is graphical representation of all the options and text that helps guide the user to use the product.
Usability testing is to check whether end user is able to use the product easily. Usability needs good user interface so that interaction with the end user is productive and user is able to use the product without any intervention.
Sanity testing is a subset of regression testing in which the tester tests only a few areas of the application to verify that its functionality is intact. It is just a cursory check of the affected areas.
Smoke testing is performed by the tester before accepting a build for further testing. The tester exercises the basic, major functionalities of the application and checks whether the build is stable enough for deeper testing. The tester has the authority to reject the build if basic or major functionality appears broken. The term "smoke" comes from the electronics field, where a circuit is powered up to check that it does not break down and emit smoke.
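A smoke run can be sketched as a handful of fast checks over major functionality, with the build rejected if any check fails. The check functions below (`check_login` etc.) are hypothetical stand-ins for real entry-point checks:

```python
# Hypothetical build-level checks; in practice these would hit real entry
# points such as login, main page load, and database connectivity.
def check_login():
    return True

def check_homepage():
    return True

def check_db_connection():
    return True

def smoke_test(checks):
    """Run each basic check; reject the build if any major check fails."""
    failures = [check.__name__ for check in checks if not check()]
    if failures:
        return "REJECT BUILD", failures
    return "ACCEPT BUILD", []

status, failed = smoke_test([check_login, check_homepage, check_db_connection])
```

The point is breadth over depth: every major area gets a shallow touch before any deep testing begins.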
White box testing is a method where the tester tests the structure of the code and looks for defects in it, in addition to the functionality of the application. The tester is aware of the coding standards and the structure of the code.
Black box testing is a method where the tester tests the functionality of the application, i.e. whether the application meets the given specifications. In this method, the tester does not look into the code.
Integration testing verifies that the product functions correctly after all the modules have been integrated. This type of testing requires knowledge of how the modules depend on each other and of their functionality, and involves testing the interfaces between the modules.
The application is subjected to peak load conditions until the break point is reached. Stress testing checks whether the application under test can sustain this breaking point and how the condition is handled. In stress testing, system resources are under maximum stress.
Load testing examines the performance of the application under heavy load: the response time of the application with concurrent users over a given amount of time. Load testing establishes the average load the application can sustain before its performance starts degrading.
Testing under maximum load conditions to evaluate the performance of the application. It tests the response time of the application under heavy load with concurrent users for a given amount of time. System resources, i.e. hard disk, memory usage and CPU usage, are also monitored to find memory leak defects.
Testing that the new version of the application functions the same as the older version that is known to work correctly. Ensure that the data used for both versions is identical so that the results can be compared accurately.
A group of test activities that are organized and managed together, for example component, integration, system and acceptance testing. Test levels can be combined or reorganized depending on the nature of the project or the system architecture.
The V-model is a framework that describes the life cycle of software development, from requirements specification through to maintenance. It shows how testing activities can be integrated into each phase of the software development life cycle.
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
If a small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures, this is called defect clustering.
A reason or purpose for designing and executing a test.
Test case is a set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
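The parts of that definition can be made concrete by representing a test case as a small data structure; the login scenario below is a hypothetical illustration, not from the text:

```python
from dataclasses import dataclass, field

# A test case, per the definition above, bundles input values,
# execution preconditions, an expected result and postconditions
# for one particular objective or test condition.
@dataclass
class TestCase:
    objective: str
    preconditions: list
    inputs: dict
    expected_result: str
    postconditions: list = field(default_factory=list)

tc_login = TestCase(
    objective="Verify login with valid credentials",
    preconditions=["User account exists", "Login page is reachable"],
    inputs={"username": "alice", "password": "s3cret"},
    expected_result="User is redirected to the dashboard",
    postconditions=["A session token is created"],
)
```

Keeping preconditions and postconditions explicit is what makes a test case repeatable rather than a one-off check.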
A requirement is a condition or capability needed by a user to solve a problem or achieve an objective, which must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
All documents from which the requirement of a component or system can be inferred. The document on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Exhaustive or complete testing is a test approach in which the test suite comprises all combinations of input values and preconditions.
Error/mistake is a human action that produces an incorrect result.
Risk is a factor that could result in future negative consequences, usually expressed as impact and likelihood.
A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specific coverage criterion will be achieved.
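An illustrative sketch of such a structure-derived suite, here targeting branch coverage: a two-branch function and one test per branch, so 100% of the criterion is achieved. The coverage bookkeeping is done by hand for clarity; a real project would use a tool such as coverage.py.

```python
# Manual branch-coverage bookkeeping (a real tool would instrument the code).
covered_branches = set()

def classify(n):
    if n >= 0:
        covered_branches.add("non-negative")
        return "non-negative"
    else:
        covered_branches.add("negative")
        return "negative"

# Two test cases derived from the internal structure: one per branch.
assert classify(5) == "non-negative"
assert classify(-3) == "negative"

# Both of the two branches were exercised, so branch coverage is 100%.
branch_coverage = len(covered_branches) / 2 * 100
```

Deriving the tests from the code's structure (rather than its specification) is what makes this a white-box suite.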
A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process.
In Back-to-back testing, two or more variants of a component or system are executed with the same inputs. The outputs are compared. Whenever a difference is observed it is investigated and, if necessary, a correction is applied.
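A minimal back-to-back sketch: two independently written variants of the same computation (both hypothetical) are fed identical inputs and their outputs compared, with any disagreement flagged for investigation.

```python
# Variant 1: built-in sum.
def mean_v1(values):
    return sum(values) / len(values)

# Variant 2: independently written accumulation loop.
def mean_v2(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def back_to_back(inputs):
    """Run both variants on the same inputs; return inputs where they disagree."""
    discrepancies = []
    for values in inputs:
        if abs(mean_v1(values) - mean_v2(values)) > 1e-9:
            discrepancies.append(values)
    return discrepancies

diffs = back_to_back([[1, 2, 3], [10.5, 20.5], [0, 0, 0, 4]])
```

An empty discrepancy list does not prove both variants are correct, only that they agree; a non-empty list pinpoints inputs worth investigating.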
Ad hoc testing is carried out informally. It is performed without planning or documentation. There is no formal test preparation and no recognized test design technique is used. There are no expectations for results, and unpredictability guides the test execution activity.
A test oracle is a source used to determine the expected results to compare with the actual results of the software under test. This may be the SRS/PRD or a knowledge base.
A test condition is an item or event of a component or system that can be verified by one or more test cases.
Accessibility testing is done to test the ease by which users with disabilities can use the software.
A good test engineer has a "test to break" attitude, an ability to take the point of view of the customer, a strong desire for quality, intuition, and attention to detail.
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the users, customers or other authorized entity to determine whether or not to accept the system. This testing is also called as User Acceptance Testing, UAT.
Following are the common problems in software development:
Software testing is the process of verifying the functionality of an application under normal and abnormal conditions. Testing aims to find defects in the application. A tester should take a negative approach while testing, trying to break the functionality by considering negative scenarios. Software testing is necessary to prevent losses that defects in the application may cause.
Following are different stages of software testing:
Testing of an application includes the following stages:
A peer group discussion activity that focuses on achieving agreement on the technical approach to be taken.
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.
It is a step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. Within a walkthrough the author does most of the preparation; participants are not required to do a detailed study.
Validation ensures that functionality, as defined in requirements (by client or customer), is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection meetings.