Acceptance Testing:
Testing performed to verify that a product meets customer-specified requirements. A customer usually performs this type of testing on a product that was developed externally.
By performing acceptance tests on an application, the customers/clients can judge how the application will perform in production. There may also be legal and contractual requirements for acceptance of the system.
Alpha Testing:
This is the first stage of testing and is performed within the teams (developer and QA teams). Unit testing, integration testing and system testing, when combined, are known as alpha testing. During this phase, the following will be tested in the application:
- Spelling Mistakes
- Broken Links
- Unclear Directions
The application will be tested on machines with the lowest specifications to test loading times and any latency problems.
This is formal testing performed by end-users at the development site, and it is the stage preceding beta testing.
Accessibility or 508 Compliance Testing:
This is a subset of usability testing, verifying that the software product is accessible to users with disabilities (deaf, blind, cognitively impaired, etc.). Accessibility evaluation is generally more formalized than usability testing. The end goal of both usability and accessibility testing is to discover how easily people can use a web site and to feed that information back into improving future designs and implementations.
Ad Hoc Testing:
This is a non-methodical approach in which testing is performed, in general, without planning or documentation. The tester tries to ‘break’ the system by randomly exercising its functionality. This includes negative testing as well. See Monkey Testing also.
Aging Testing:
This is a type of performance testing carried out by running the software for a long duration, such as weeks or months, and checking for performance variation (any sign of degradation). This is also called Longevity or Soak Testing.
Agile Testing:
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
API Testing:
Each of the software’s APIs is tested individually against its specification, or APIs are pipelined to complete a functional flow. This can be performed as White-Box or Black-Box Unit Testing.
Automated Testing:
Testing that employs software tools to execute tests without manual intervention. It can be applied to GUI, performance, API and other kinds of testing.
It involves the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Baseline Testing:
This is a type of non-functional testing in which the documents and specifications used to create test cases are validated. The validation of the requirements specification is baseline testing. A majority of the issues that could crop up during the development phase are resolved by this testing: reviewing, brainstorming and identifying gaps in the requirements before the requirement specification is signed off.
Generally, the development, QA and BA teams work together on this during the requirements phase. Requirements gathering may be iterated until the gaps are bridged and consensus is reached among the teams. Every amendment is performed keeping the client in the loop and with their consent. The requirements are then frozen for sign-off.
Basis Path Testing:
A white-box test case design technique that uses the algorithmic flow of the program to design tests for every execution path of code.
Backward Compatibility Testing:
Performed to check that a newer version of the software works when installed over the previous version, and that the newer version works correctly with the table structures, data structures and files created by the previous version of the software. See Forward Compatibility Testing also.
Benchmark Testing:
Tests that use representative sets (or different versions) of programs and data designed to evaluate the performance of software in a given configuration (hardware and operating system). See Comparison Testing also.
Beta Testing:
Formal testing conducted by end customers before the software is released to market. Successful completion of beta testing confirms customer acceptance of the software.
This test is performed after alpha testing has been completed successfully. In beta testing, a sample of the intended audience tests the application; beta testing is also known as pre-release testing. Beta versions of software are ideally distributed to a wide audience on the Web, partly to give the program a “real-world” test and partly to provide a preview of the next release. In this phase the audience will be testing the following:
- Users will install, run the application and send their feedback to the project team.
- Typographical errors, confusing application flow, and even crashes.
- With this feedback, the project team can fix the problems before releasing the software to the actual users.
- The more issues you fix that solve real user problems, the higher the quality of your application will be.
- Having a higher-quality application when you release to the general public will increase customer satisfaction.
Binary Portability Testing:
Testing an executable application for portability across system platforms and environments, usually for conformation to an Application Binary Interface (ABI) specification.
Big Bang Testing:
This is a type of Integration Testing in which almost all modules, components or sub-systems are completely developed, integrated and coupled, and testing is performed on the intactness of each sub-system in conjunction with the co-existence of the other sub-systems. See Integration Testing also.
Black Box Testing:
The technique of testing based on an analysis of the specification of a piece of software without having any knowledge of the interior workings of the application is Black Box testing. The goal is to test how well the component conforms to the published requirements for the component.
The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system’s user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
Software APIs are also tested in a black-box way by passing appropriate inputs (both positive and negative) and verifying the expected outputs and expected exceptions. This needs a little coding to call the API and pass the input, but it does not require knowledge of the internal structure of the API. A few APIs can also be pipelined to perform a full or partial functional flow and test it. This kind of testing is called Black-Box API/Unit Testing. See White-Box API/Unit Testing also.
Note: For every event happening through the UI (front-end), there is a corresponding API call happening in the back-end. So any functionality tested through the UI can also be tested by pipelining the respective APIs to achieve the same functional outcome, and the two can be benchmarked against each other for correctness.
Bottom-Up Integration Testing:
An approach to integration testing in which the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested. See Integration Testing also.
Boundary Value Testing:
Tests based on the principle that “errors aggregate at boundaries”, focusing on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.) If a field accepts values from 1 to 100, then testing is performed for the values 0, 1, 2, 99, 100 and 101. Here the test should fail for 0 and 101 and pass for the rest.
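The 1-to-100 field above can be sketched as a short boundary-value check; `accepts` is a hypothetical validator standing in for the field's rule:

```python
def accepts(value):
    """Return True if value is within the accepted range 1..100 (hypothetical field rule)."""
    return 1 <= value <= 100

# Boundary values: just below, on, and just above each limit.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary check failed for {value}"
print("all boundary cases passed")
```

Any off-by-one error in the validator (e.g., `<` instead of `<=`) fails at 1 or 100, which is exactly why the on-boundary values are included.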
Branch Testing:
This is a white-box approach that tests that all branches in the program source code are exercised at least once during unit testing.
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail.
Browser Compatibility Testing:
This is a subset of Compatibility Testing wherein web applications are tested with combination of different browsers and operating systems. This is performed to ensure the Web UI Objects are correctly rendered and functional.
Certification Testing:
This is a subset of compatibility testing in which the product (hardware, software, OS, browser, etc.) is certified as fit for use.
Clear Box Testing:
This is an alias for White box testing. Refer to White-Box Testing for more details.
Code-Driven Testing:
Unit testing (black- or white-box) performed using frameworks like JUnit, NUnit, xUnit, TestNG, DBUnit, HTTPUnit, etc. See Unit Testing, White-Box Testing and Black-Box Unit Testing also.
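Python's standard library ships `unittest`, an xUnit-family framework in the JUnit mold, so a minimal code-driven test can be sketched without third-party tools; `multiply` is a hypothetical unit under test:

```python
import unittest

def multiply(a, b):
    """Trivial unit under test (hypothetical)."""
    return a * b

class MultiplyTests(unittest.TestCase):
    """JUnit-style test case using Python's stdlib xUnit framework."""
    def test_positive(self):
        self.assertEqual(multiply(2, 3), 6)
    def test_zero(self):
        self.assertEqual(multiply(7, 0), 0)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MultiplyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```

The same test case shape maps directly onto JUnit's `@Test` methods or NUnit's `[Test]` attributes.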
Code Coverage Testing:
This is an analysis method that determines which parts of the software have been executed (covered) by the test cases and which parts have not, and therefore may require additional test cases.
Comparison Testing:
The software is tested to compare its features, strengths and weaknesses against competitors’ products or against different versions of the same product. This helps the business intelligence and product marketing teams. See Benchmark Testing also.
Compatibility Testing:
This checks whether the software can run on different hardware, operating systems, databases, web servers, application servers, bandwidths, hardware peripherals, emulators, processors, configurations and browsers (including different versions). Since it is usually impractical to test every combination of hardware, OS, database and browser, the Orthogonal Array Method is used to get optimal coverage of the different topologies. See Forward Compatibility Testing and Backward Compatibility Testing also.
Compliance Testing:
This is a part of usability testing that ensures the software meets the required standards, government laws and regulations, company policies, etc. It is generally performed and certified by an external standards body or agency; sometimes testers perform the testing themselves and obtain certification from the external agency. See Accessibility Testing also.
Component Testing:
This is closely related to unit testing and is carried out after unit testing is complete. It tests a group of units together as a whole, producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
Concurrency Testing:
This is a part of performance testing: multi-user testing geared towards determining the effects of accessing the same application code, module or database records simultaneously. It identifies and measures the level of locking, deadlocking and the use of single-threaded code and locking semaphores.
Condition Coverage Testing:
This is also called condition/decision coverage testing. It is a unit testing technique used to test all condition and decision statements, such as if-then, if-then-else, switch-case, while, do-while and for constructs.
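A minimal sketch of decision coverage: `shipping_fee` is a hypothetical routine with two decisions, and the checks below force each decision to evaluate both true and false at least once:

```python
def shipping_fee(weight_kg, express):
    """Hypothetical routine with two decisions to cover."""
    if weight_kg <= 0:                    # decision 1
        raise ValueError("weight must be positive")
    if express:                           # decision 2
        return 15.0
    else:
        return 5.0

# Decision coverage: each decision must take both outcomes across the suite.
assert shipping_fee(2.0, express=True) == 15.0    # decision 1 False, decision 2 True
assert shipping_fee(2.0, express=False) == 5.0    # decision 2 False
try:
    shipping_fee(0, express=False)                # decision 1 True
except ValueError:
    pass
print("both outcomes of each decision exercised")
```

Dropping any one of the three calls leaves a decision outcome untested, which a coverage tool would flag.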
Configuration Testing:
This is used as part of performance testing for performance tuning: finding the optimal configuration settings that make the software perform at its best for the given hardware and OS.
Conformance Testing:
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing:
The context-driven school of software testing is a flavor of agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing:
Testing of the programs or procedures used to convert data from existing systems for use in replacement systems.
Data Driven Testing:
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
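The data-driven pattern above can be sketched with the stdlib `csv` module; here the externally maintained data file is simulated with an in-memory string, and `add` is a hypothetical function under test:

```python
import csv
import io

# Hypothetical externally maintained test data (would normally live in a .csv file).
test_data = io.StringIO("a,b,expected\n2,3,5\n-1,1,0\n10,20,30\n")

def add(a, b):
    """Hypothetical function under test."""
    return a + b

# The test action is fixed; only the data rows parameterize it.
for row in csv.DictReader(test_data):
    result = add(int(row["a"]), int(row["b"]))
    assert result == int(row["expected"]), f"failed for row {row}"
print("all data-driven cases passed")
```

Adding a new test case is then a one-line change to the data file, with no change to the test code.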
Dependency Testing:
Examines whether the configuration, input and output sent by the base system to dependent sub-systems are as per specification. It examines the application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing:
A test that exercises a feature of a product in full detail.
Destructive Testing:
This is intended to find the failure points of the software through input of invalid or corrupt data, incorrectly formatted data, huge volumes or heavy load, etc.
Documentation Testing:
This is a type of static testing in which project documents are verified and validated for thoroughness and completeness. It can range from spell checks, grammar checks and readability from the end-user perspective to appropriateness of content, correctness of screenshots, ambiguity of meaning, coverage of technical details within scope, and simplicity of installation steps. Documents covered include EULAs (End User License Agreements), user guides, admin guides, installation guides, porting guides, upgrade guides, etc.
Testing the core or critical functionality of the software rather than executing all test cases, to ensure the core functionality is working correctly. See Smoke Testing and Sanity Testing also.
Downward Compatibility Testing:
This refers to Backward Compatibility Testing.
Dry Run Testing (DRT):
This is a form of testing in which the effects of a possible failure are intentionally mitigated. In computer programming, it is a mental or paper-based run of a program, scrutinizing the source code step by step to determine what it will do when run.
Dynamic Testing:
Testing the software through execution of its code, e.g., unit testing, functional testing, regression testing, system testing, performance testing, etc. See Static Testing also.
Endurance Testing:
This is a performance testing technique, also called Soak or Longevity Testing. It checks for memory leaks or other performance issues that may occur with prolonged execution, either under the expected load on a continual basis or while staying idle without load.
End-to-End Testing:
Testing the software for end-to-end functional flows in a complete application environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications or systems where appropriate.
Equivalence Partitioning Testing:
This is also known as Equivalence Class Partitioning. It is a software testing technique rather than a type of testing, used in black-box and grey-box testing. The technique classifies test data into positive and negative equivalence classes, ensuring that both positive and negative scenarios are tested.
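A quick sketch of the idea, assuming a hypothetical rule that valid ages are 18 to 60 inclusive: one representative value per equivalence class is enough to cover the partition.

```python
def classify_age(age):
    """Hypothetical rule under test: valid ages are 18..60 inclusive."""
    if age < 18:
        return "too_young"
    if age > 60:
        return "too_old"
    return "valid"

# One representative from each equivalence class:
#   negative classes: below 18, above 60; positive class: 18..60.
assert classify_age(10) == "too_young"   # representative of <18
assert classify_age(35) == "valid"       # representative of 18..60
assert classify_age(75) == "too_old"     # representative of >60
print("one representative per class covered all partitions")
```

Equivalence partitioning picks the three values above; boundary value testing would additionally probe 17, 18, 60 and 61.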
Error Handling Testing:
This focuses on the error-handling capabilities of the software: its response to exceptions and the messages it renders to users in case of expected and unexpected errors. This is also called Yellow-Box Testing.
Exhaustive Testing:
Testing which covers all combinations of input values and preconditions for an element of the software under test.
Exploratory Testing:
Exploratory testing is an informal type of testing conducted to learn the software while simultaneously looking for errors or application behavior that seems non-obvious. It is usually done by testers, but can also be done by other stakeholders, such as business analysts, developers and end users, who are interested in learning the functions of the software.
Fault Injection Testing:
Faults or errors are induced in the application and existing tests are executed to capture the induced errors. If the errors are caught, the test cases are good enough and robust, else additional test cases are written to capture the same. This type of testing is very important when it comes to zero-tolerance applications like healthcare, defense, aerospace, finance or any other critical software.
Forward Compatibility Testing:
This is done to check whether the software works as expected when it is downgraded from a higher version to a lower version. It provides an option for users to downgrade the software to a lower version if they are not comfortable with the higher version or experience issues with the upgraded version. See Backward Compatibility Testing also.
Functional Testing:
Testing the features and operational behavior of a product to ensure they correspond to its specifications (design documents, use cases, functional specifications and requirements documents).
Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
This is a type of black-box testing based on the specifications of the software under test. The application is tested by providing input, and the results are examined to confirm they conform to the intended functionality. Functional testing of the software is conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. See Black Box Testing also.
There are five steps involved when testing an application for functionality:
- Determining the functionality that the intended application is meant to perform.
- Creating test data based on the specifications of the application.
- Determining the expected output based on the test data and the specifications of the application.
- Writing test scenarios and executing test cases.
- Comparing actual and expected results based on the executed test cases.
An effective testing practice applies the above steps to the testing policies of every organization, ensuring that the organization maintains the strictest standards of software quality.
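The five steps can be sketched end to end; `apply_discount` is a hypothetical specification (10% off orders over $100) standing in for the application under test:

```python
# Step 1: the intended functionality (hypothetical spec): 10% discount over $100.
def apply_discount(total):
    return round(total * 0.9, 2) if total > 100 else total

# Step 2: test data derived from the specification.
# Step 3: expected output for that data.
cases = [(50.0, 50.0), (100.0, 100.0), (150.0, 135.0)]

# Steps 4 and 5: execute the scenarios and compare actual vs. expected.
for amount, expected in cases:
    actual = apply_discount(amount)
    assert actual == expected, f"{amount}: expected {expected}, got {actual}"
print("functional checks passed")
```

Note the case at exactly 100.0: it pins down which side of the boundary the specification's "over $100" falls on.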
Fuzz Testing:
This involves testing with random inputs while the software is monitored for failures and error messages thrown due to the erroneous inputs.
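A minimal fuzzing loop can be sketched as follows; `parse_quantity` is a hypothetical parser under test, and the harness distinguishes expected rejections (`ValueError`) from unexpected crashes:

```python
import random
import string

def parse_quantity(text):
    """Hypothetical parser under test: expects digits, returns an int."""
    return int(text.strip())

random.seed(42)   # fixed seed so the random run is reproducible
failures = []
for _ in range(1000):
    # Generate random strings mixing digits, letters, punctuation and whitespace.
    candidate = "".join(random.choices(string.printable, k=random.randint(1, 10)))
    try:
        parse_quantity(candidate)
    except ValueError:
        pass                      # expected rejection of malformed input
    except Exception as exc:      # anything else is a finding worth logging
        failures.append((candidate, exc))

print(f"unexpected failures: {len(failures)}")
```

Real fuzzers such as AFL or libFuzzer add coverage feedback to steer input generation, but the crash-vs-expected-rejection distinction is the same.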
Glass Box Testing:
This is otherwise called White-Box Testing or Structural Testing. It is based on an analysis of the internal workings and structure of a piece of software, and includes techniques such as branch testing and path testing.
Globalization (G11N) Testing:
This detects problems within the application design related to the use of different languages and character sets. The term “G11N” is used because there are 11 letters between the “G” and “N” in the word “Globalization”.
Golden Path Testing:
This is also known as Happy Path Testing; it focuses on selective execution of tests that do not exercise the software for negative or error conditions.
Gorilla Testing:
This is testing one particular module or functionality heavily or exhaustively.
Green Box Testing:
Primarily for hardware – people who work with data centers, communications and enterprise networking equipment (routers, switches, server chassis, etc.) know the method of green-box (GB) testing well. This technique is used to check multi-gigabit serial links and is a procedure that determines the settings for optimum transmitter equalization.
Green-box testing is applied in order to test systems with hundreds of high-speed channels. It defines the transmitter settings that ensure the system will meet its BER (bit error rate) target. Nowadays this technique is necessary, as it ensures successful data transmission over different channel media.
An equalization circuit includes a transmitter and a receiver, each with its own special components that ensure successful data transmission. Usually, receivers implement:
- CTLE – continuous-time linear equalizer,
- DFE – decision-feedback equalizer (which automatically adapts to the incoming signals).
Transmitters typically utilize an FIR (finite-impulse-response) filter that has one pre-cursor and one post-cursor tap.
For software – it is a release testing technique that exercises a software system’s coexistence with other systems or sub-systems, taking multiple integrated systems that have passed system testing as input and testing their required interactions to ensure the software coexists cleanly with its environment and meets its functional requirements.
Grey Box Testing:
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Grey-box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the saying “the more you know, the better” carries a lot of weight when testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application’s user interface, in grey box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.
GUI Testing:
The software’s GUI is tested against the specification given in the GUI mockups or wireframes and in the detailed design document. This ensures the GUI elements meet expectations as per the UI and functional specifications.
Happy Path Testing:
This is also known as Golden Path Testing, this focuses on selective execution of tests that do not exercise the software for negative or error conditions.
Headless Browser Testing:
A headless browser is a web browser without a graphical user interface. Headless browsers provide automated control of a web page in an environment similar to popular web browsers, but are driven via a command-line interface or network communication. They are suitable for general command-line based testing, within a pre-commit hook, and as part of a continuous integration system.
Note that headless browsers are not necessarily faster than real browsers, they can make tests harder to write and debug, and they are not what customers actually use; real browsers can also be run headlessly.
Heuristic Testing (originally “Heuristic Evaluation”, proposed by Nielsen and Molich, 1990) is a discount method for quick, cheap and easy evaluation of a user interface.
The process requires that a small set of testers (or “evaluators”) examine the interface, and judge its compliance with recognised usability principles (the “heuristics”). The goal is the identification of any usability issues so that they can be addressed as part of an iterative design process.
Heuristic Testing is popular in Web development.
Hybrid Integration Testing:
This is integration testing involving both the top-down and bottom-up techniques together, and hence is also called Sandwich Testing. It gives comprehensive coverage in integration testing.
Integration Testing:
Integration testing is the testing of combined parts of an application to determine whether they function correctly together. It is usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
The following are the methods of doing integration testing:
- Bottom-Up Integration: testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
- Top-Down Integration: the highest-level modules are tested first, and progressively lower-level modules are tested after that.
- Hybrid or Sandwich Integration: a blend of bottom-up and top-down integration that provides comprehensive coverage.
In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter on customers’ computers, systems and networks.
Internationalization (I18N) Testing:
This verifies that the application can function correctly across different languages, regional settings and character sets without requiring code changes. The term “I18N” is used because there are 18 letters between the “I” and “N” in the word “Internationalization”.
Installation Testing:
This finds defects pertaining to the installation of software. It involves invoking the software installer in different modes, such as “Express” or “Custom”, on different types of OS and in different environments such as virtual machines. It also involves updating the software configuration files and installing patches such as bug fixes and security patches.
This helps confirm that the installer under test recovers from expected or unexpected events without loss of data or functionality. Events can include a shortage of disk space, unexpected loss of communication, or power-out conditions.
It also involves checking that the files are extracted correctly to the respective directories, in the defined structure, under the location chosen by the end-user performing the installation.
On Windows, the registry entries are also checked. See Un-Installation Testing also.
Keyword-Driven Testing:
Keyword-driven testing is more of an automated software testing approach than a type of testing itself. It is also known as action-driven or table-driven testing.
Load Testing:
The process of putting demand on a system or device and measuring its response to determine the system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application, as well as any bottlenecks, and to determine which element is causing degradation.
A process of testing the behavior of the software by applying maximum load, in terms of the software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time.
Most of the time, load testing is performed with the help of automated tools such as LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, Visual Studio Load Test, etc.
Virtual users (VUsers) are defined in the automated testing tool, and the script is executed to perform the load test on the software. The number of users can be increased or decreased, concurrently or incrementally, based upon the requirements.
Localization (L10N) Testing:
This term refers to making software specifically designed for a particular locality. Localization testing is a type of software testing in which the software is expected to adapt to a particular locale: it should support the locale/language in terms of display, accepting input in that locale, fonts, date and time formats, currency, etc.
The term “L10N” is coined because there are 10 letters between the “L” and “N” in the word “Localization”.
For example, many web applications allow a choice of locale such as English, French, German or Japanese. Once a locale is defined or set in the configuration of the software, the software is expected to work correctly with the set language/locale.
Longevity Testing:
This is a type of performance testing carried out by running the software for a long duration, such as weeks or months, and checking for performance variation (any sign of degradation). This is also called Aging or Soak Testing.
Loop Testing:
A white-box testing technique that exercises program loops.
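Loop testing typically exercises a loop zero times, exactly once, and many times; a minimal sketch, with `sum_first_n` as a hypothetical loop under test:

```python
def sum_first_n(values, n):
    """Loop under test: sums the first n items of values."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

# Exercise the loop body zero times, once, and a typical number of times.
assert sum_first_n([5, 6, 7], 0) == 0     # zero iterations
assert sum_first_n([5, 6, 7], 1) == 5     # exactly one iteration
assert sum_first_n([5, 6, 7], 3) == 18    # many iterations
print("loop exercised at 0, 1 and n iterations")
```

The zero-iteration case is the one most often missed, and it is where off-by-one and uninitialized-accumulator defects surface.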
Monkey Testing:
Testing a system or an application on the fly without any specific tests in mind, i.e., just a few tests here and there to ensure the system or application does not crash. A monkey test tries to break the software by entering incorrect data such as the date 31-Feb-2012, long strings of text or numbers, special characters, etc.
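The 31-Feb-2012 probe above can be sketched directly; `is_valid_date` is a hypothetical validator for DD-Mon-YYYY input built on the stdlib `datetime` parser:

```python
from datetime import datetime

def is_valid_date(text):
    """Hypothetical system under test: date validator for DD-Mon-YYYY input."""
    try:
        datetime.strptime(text, "%d-%b-%Y")
        return True
    except ValueError:
        return False

# Monkey-style probes: deliberately malformed or impossible values.
assert is_valid_date("28-Feb-2012") is True
assert is_valid_date("31-Feb-2012") is False   # impossible calendar date
assert is_valid_date("x" * 1000) is False      # absurdly long string
assert is_valid_date("!!@@##") is False        # special characters
print("monkey probes survived without a crash")
```

The point of the monkey test is not the specific assertions but that none of the probes crash the system outright.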
Mutation Testing:
Testing done on the application where bugs are purposely added to it.
Negative Testing:
Testing aimed at showing that the software does not work, also known as “test to fail” or the “attitude to break”. These are functional and non-functional tests intended to break the software by entering incorrect data, such as an incorrect date, time or string, uploading a binary file where a text file is expected, or entering a huge text string in input fields.
N+1 Testing:
This is a variation of regression testing: testing conducted in multiple cycles, in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See Regression Testing also.
Non-Functional Testing:
This section covers testing of the application’s non-functional attributes. Non-functional testing involves testing the software against requirements that are non-functional in nature but nevertheless important, such as performance, security and user interface.
Some of the important and commonly used non-functional testing types are mentioned as follows:
Operational Acceptance Testing (OAT):
See Operational Readiness Testing (ORT).
Operational Readiness Testing (ORT):
Operational readiness testing, also known as pre-go-live testing, is usually performed by software testers. As the name suggests, it intends to validate the production environment after a new version of the software is deployed there. Software testers test the existing and new functionality to certify that the software is ready to be used by end users.
Orthogonal Array Testing:
This is a black-box testing technique. Orthogonal array testing is a statistical and systematic way of testing software. The technique helps to minimize the number of test cases and maximize test coverage by grouping sets of test conditions. It is effective for GUI testing and configuration testing, where there are multiple input parameters and testing one parameter at a time would lead to a large number of test cases.
Pair Testing:
This is a software testing technique performed by two people paired together, one to test and the other to monitor and record test results. Pair testing can also be performed in tester-developer, tester-business analyst or developer-business analyst combinations. Pairing testers and developers helps to detect defects faster, identify the root cause, and fix and test the fix.
Pair-wise Testing:
This is black-box testing in which inputs are tested in pairs, helping to verify that the software works as expected across all possible input combinations.
In the following example, one operator is replaced with another and tested with the same set of inputs to verify the functionality.
E.g., consider the code z = a * b * c, as per the specification.
Input set: a=1, b=2, c=3. The alternative operator used for comparison is +.
So the statement z = a*b*c = 1*2*3 = 6.
Similarly, replacing the operator with +, the statement becomes z = a+b+c = 1+2+3 = 6.
Though the output for both operators is the same, it does not mean the software is working correctly.
Hence pair-wise testing intends to dig out such corner cases, where the output can be correct even though the functionality is goofed up.
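The masking effect in the a=1, b=2, c=3 example can be reproduced in a short sketch; `faulty_sum` plays the role of a hypothetical defect (the wrong operator) that the first input set fails to expose:

```python
def product(a, b, c):
    """Implementation as specified: z = a * b * c."""
    return a * b * c

def faulty_sum(a, b, c):
    """Hypothetical defect: the operator was mistakenly coded as +."""
    return a + b + c

# For the input set (1, 2, 3) both produce 6, masking the defect:
assert product(1, 2, 3) == faulty_sum(1, 2, 3) == 6

# A second input set exposes the corner case:
assert product(2, 3, 4) == 24
assert faulty_sum(2, 3, 4) == 9
print("input set (1, 2, 3) masks the defect; (2, 3, 4) exposes it")
```

This is why a single passing input set is never evidence of correctness: the test data must be varied enough to separate the specified behavior from plausible defects.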
Parallel Testing:
This is a software testing technique in which two or more versions of the software, the current version and previous version(s), are tested together to see the differences in existing functionality. Generally, this kind of testing is performed in product development companies whose products (Java, C#, VB, Excel, etc.) are used for further software development.
Path Testing:
This is a white-box testing approach in which all paths in the program source code are exercised at least once. It is a software testing technique used as part of the white-box approach, typically applied by developers while performing unit testing.
Penetration Testing:
This is a type of security testing, also known as a pentest for short. Penetration testing is done to test how secure the software and its environment (hardware, operating system and network) are when subjected to attack by an external or internal intruder. The intruder can be a human/hacker or a malicious program. A pentest uses methods to intrude forcibly (by brute-force attack) or to exploit a weakness (vulnerability) to gain access to software, data or hardware, with the intent of exposing ways to steal, manipulate or corrupt data, software files or configuration. Penetration testing is a form of ethical hacking: an experienced penetration tester will use the same methods and tools that a hacker would use, but with the intention of identifying vulnerabilities and getting them fixed before a real hacker or malicious program exploits them.
Performance Testing:
Testing conducted to evaluate the compliance of a system or component with specified performance requirements.
It is mostly used to identify bottlenecks or performance issues, rather than to find bugs in the software. Different causes can contribute to lowering the performance of software:
- Network delay.
- Client side processing.
- Database transaction processing.
- Load balancing between servers.
- Data rendering.
Performance testing is considered as one of the important and mandatory testing type in terms of following aspects:
- Speed (i.e. Response Time, data rendering and accessing)
Performance testing can be either a qualitative or a quantitative activity and can be divided into sub-types such as load testing and stress testing.
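As a minimal sketch of a quantitative performance check (the timed operation, run count, and summary statistics here are stand-ins, not from any specific tool), response times can be sampled and summarized like this:

```python
import time
import statistics

def measure_response_times(operation, runs=100):
    """Time repeated calls to an operation and summarize the samples."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()                      # the system under test
        samples.append(time.perf_counter() - start)
    return {
        "mean": statistics.mean(samples),
        "p95": sorted(samples)[int(runs * 0.95) - 1],
        "max": max(samples),
    }

# Usage: time a stand-in operation and compare it against a budget.
stats = measure_response_times(lambda: sum(range(1000)), runs=50)
assert stats["mean"] <= stats["max"]
```

In a real performance test the operation would be a network request or database transaction, and the percentiles would be compared against agreed response-time targets.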
Portability testing verifies that the software is reusable and can be moved from one environment to another. The following strategies can be used for portability testing:
- Transferring installed software from one computer to another.
- Building an executable (.exe) to run the software on different platforms.
Portability testing can be considered one of the sub-parts of system testing, as it covers the overall testing of software with respect to its usage across different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing. The following are some preconditions for portability testing:
- Software should be designed and coded, keeping in mind Portability Requirements.
- Unit testing has been performed on the associated components.
- Integration testing has been performed.
- Test environment has been established.
Testing aimed at showing software works. Also known as “test to pass”.
Continuously raising an input until the system breaks down. This type of testing is conducted to check the response of the software to a constant increase in workload. Ramp testing evaluates the software's ability to sustain a gradual increase in workload.
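The ramp idea can be sketched in a few lines; the handler below is a hypothetical stand-in that fails past a fixed load, used only to illustrate finding the breaking point:

```python
def ramp_test(handler, start_load=1, step=1, max_load=100):
    """Increase the workload step by step until the handler fails,
    returning the last load level it sustained."""
    sustained = 0
    load = start_load
    while load <= max_load:
        try:
            handler(load)          # apply 'load' units of work
        except Exception:
            break                  # breaking point reached
        sustained = load
        load += step
    return sustained

# Usage: a stand-in handler that fails beyond 10 units of work.
def fragile_handler(load):
    if load > 10:
        raise RuntimeError("overloaded")

print(ramp_test(fragile_handler))  # 10 -- the last load level handled
```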
Testing that confirms the program recovers to its contextual base-state from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Acceptance testing, also called red-box testing, is conducted to enable a user/customer to determine whether to accept a software product. It is normally performed to validate that the software meets a set of agreed acceptance criteria.
Re-testing a previously tested program following modification (bug fix or enhancement or new feature introduction) to ensure that faults have not been introduced or uncovered as a result of the changes made.
Whenever a change is made to a software application, it is quite possible that other areas of the application have been affected by that change. Regression testing verifies that a fixed bug has not broken another piece of functionality or violated a business rule. The intent of regression testing is to ensure that a change, such as a bug fix, does not cause another fault to surface in the application.
Regression testing is important for the following reasons:
- It minimizes gaps in testing when an application with changes has to be tested.
- It verifies that the new changes did not affect any other area of the application.
- It mitigates risk when performed on the application.
- Test coverage is increased without compromising timelines.
- It increases speed to market for the product.
Regression Testing types are functional regression tests by testers and Unit regression tests by developers.
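A regression suite in miniature might look like the following sketch, where `apply_discount` is a hypothetical fixed function and the suite re-runs both the case that exposed the old bug and the behavior the fix must not break:

```python
def apply_discount(price, percent):
    """Fixed version: an earlier build returned a negative total
    for a 100% discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

def regression_suite():
    """Re-run after every change: the case that exposed the old bug
    plus unchanged behavior that the fix must not break."""
    results = {
        "full discount": apply_discount(50.0, 100) == 0.0,   # the old bug
        "no discount": apply_discount(50.0, 0) == 50.0,      # unchanged
        "partial discount": apply_discount(80.0, 25) == 60.0,
    }
    return [name for name, passed in results.items() if not passed]

print(regression_suite())  # [] -- no regressions detected
```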
This deals with testing the software's ability to function under given environmental conditions for a particular amount of time. Software reliability is the probability that the software will work properly in a specified environment for a given time, where
Probability = Number of successful cases / Total number of cases under consideration
To achieve satisfactory results from reliability testing, one must take care of certain reliability characteristics.
For example, Mean Time Between Failures (MTBF) = Mean Time To Failure (MTTF) + Mean Time To Repair (MTTR).
MTTF is the average operating time between two consecutive failures. It is measured in terms of three factors: operating time, number of on/off cycles, and calendar time.
MTTR is the time required to fix the failure.
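The relationship above can be computed directly from observed failure data; the figures in the usage line are illustrative, not from the text:

```python
def reliability_metrics(failure_times, repair_durations):
    """Compute MTTF, MTTR, and MTBF from observed failure data.
    failure_times: operating hours accumulated before each failure.
    repair_durations: hours spent fixing each failure."""
    mttf = sum(failure_times) / len(failure_times)
    mttr = sum(repair_durations) / len(repair_durations)
    return mttf, mttr, mttf + mttr   # MTBF = MTTF + MTTR

# Usage: three failures after 100, 120, and 80 hours of operation,
# each taking 2, 4, and 3 hours to repair.
mttf, mttr, mtbf = reliability_metrics([100, 120, 80], [2, 4, 3])
print(mttf, mttr, mtbf)  # 100.0 3.0 103.0
```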
Types of Reliability Tests are Feature Testing, Load Testing, Stress Testing and Regression Testing.
This is a type of testing carried out as part of defect-fix verification. For example, suppose a tester is verifying a defect fix and three test cases failed because of that defect. Once the tester verifies the defect fix as resolved, the tester retests the same functionality by re-executing the test cases that failed earlier.
This is a type of testing carried out by the performance engineering team to assess how stable the software is when subjected to incorrect data, large workloads, and greater volumes of data to be processed.
This is a type of testing performed by software testers. Recovery testing checks how soon and how efficiently software can recover from software crashes, operating system crashes, and hardware failures. Recovery testing is not the same as failover testing or reliability testing.
Risk based Testing:
This is a type of software testing and a different approach to testing software. In risk-based testing, the requirements and functionality of the software to be tested are prioritized as critical, high, medium, and low. In this approach, all critical and high priority tests are executed first, followed by medium ones. Low priority or low risk functionality is tested at the end, or may not be tested at all, depending on the time available for testing.
This is integration testing involving both top-down and bottom-up techniques together. It is also called hybrid testing. This gives comprehensive coverage in integration testing.
It is a quick and brief evaluation of the major functional elements of a piece of software to determine whether it is basically operational and whether the software environment as a whole is stable enough to proceed with extensive testing.
Scalability Testing is a non-functional test intended to test one of the software quality attributes i.e. “Scalability”. It is a Performance testing focused on ensuring the application under test gracefully handles increases in work load. It is focused on performance of software as a whole. Scalability testing is usually done by performance engineering team.
The objective of scalability testing is to test the ability of the software to scale up with increased users, increased transactions, growth in database size, etc. The software's performance does not necessarily increase with a higher hardware configuration; scalability tests help find out how much additional workload the software can support as the user base, transactions, and data storage expand.
Security testing involves the testing of Software in order to identify any flaws and gaps from security and vulnerability point of view. Security Testing is generally carried out by specialized team of software testers. Objective of security testing is to secure the software from external or internal threats from humans and malicious programs. Following are the main aspects which Security testing should ensure:
Confidentiality: A security measure that protects against the disclosure of information to parties other than the intended recipient; this is by no means the only way of ensuring security.
Integrity: A measure intended to allow the receiver to determine that the information provided by the system is correct. In addition to confidentiality, this involves additional information and requires algorithmic checks rather than mere encoding of the information.
Authentication: This involves confirming the identity of a person, or tracing the origins of an artifact, to ensure that requested information comes from a trusted computer/program.
Authorization: This is the process of determining that a requester is allowed to receive a service or perform an operation; access control is an example of authorization.
Availability: Assuring that information and services are ready for use and kept available to authorized users as and when they need them.
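As one small, hedged illustration of the integrity aspect (one common mechanism among many, not tied to any specific product), a receiver can recompute a cryptographic digest to detect tampering:

```python
import hashlib

def fingerprint(message: bytes) -> str:
    """Return a SHA-256 digest that a receiver can recompute to
    detect tampering in transit."""
    return hashlib.sha256(message).hexdigest()

# Sender computes the digest alongside the message...
message = b"transfer 100 to account 42"
digest = fingerprint(message)

# ...and the receiver recomputes it to verify integrity.
assert fingerprint(message) == digest                          # unmodified
assert fingerprint(b"transfer 900 to account 42") != digest    # tampered
```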
Cross-site Scripting (XSS)
Cross-site scripting uses known vulnerabilities in web-based applications, their servers, or plug-in systems on which they rely. By exploiting one of these vulnerabilities and finding ways to inject malicious scripts into web pages, an attacker can gain elevated access privileges to sensitive page content, session cookies, and a variety of other information the browser maintains on behalf of the user. Cross-site scripting attacks are therefore a special case of code injection.
There are 3 types of XSS – Stored, Reflected and DOM-based.
The Stored XSS vulnerability is the most powerful kind of XSS attack. A Stored XSS vulnerability exists when data provided to a web application by a user is first stored persistently on the server (in a database, filesystem, or other location), and later displayed to users in a web page without being encoded using HTML entity encoding.
A real life example of this would be the Samy MySpace Worm found on MySpace in October of 2005.
These vulnerabilities are the most significant of the XSS types because an attacker can inject the script just once. This could potentially hit a large number of other users with little need for social engineering, or the web application could even be infected by a cross-site scripting virus.
The Reflected XSS vulnerability is by far the most common and well-known type. These holes show up when data provided by a web client is used immediately by server-side scripts to generate a page of results for that user. If unvalidated user-supplied data is included in the resulting page without HTML encoding, this will allow client-side code to be injected into the dynamic page.
A classic example of this is in site search engines: if one searches for a string which includes some HTML special characters, often the search string will be redisplayed on the result page to indicate what was searched for, or will at least include the search terms in the text box for easier editing.
If all occurrences of the search terms are not HTML entity encoded, an XSS hole will result.
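The fix described above, HTML entity encoding the reflected search term, can be sketched like this (the page template is a made-up example):

```python
import html

def render_search_results(query: str) -> str:
    """Build the results page with the search term HTML-entity
    encoded so injected markup is displayed, not executed."""
    safe = html.escape(query, quote=True)
    return f"<p>You searched for: {safe}</p>"

# A reflected-XSS probe is neutralized by the encoding.
page = render_search_results('<script>alert("xss")</script>')
print(page)
# <p>You searched for: &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```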
The DOM-based XSS vulnerability, the third type, lives in client-side code rather than in anything the server sends. For example, if an attacker hosts a malicious website containing a link to a vulnerable page on a client's local system, a script could be injected and would run with the privileges of that user's browser on their system. This bypasses the entire client-side sandbox, not just the cross-domain restrictions that are normally bypassed with XSS exploits.
Non-repudiation: This is a way to ensure that a transferred message has been sent and received by the parties claiming to have sent and received it, by creating a handshake between the parties through dispatch and delivery acknowledgements.
From a digital security perspective, this guarantees that the message sender cannot later deny having sent the message and that the recipient cannot deny having received the message.
This testing confirms that the program can restrict access to authorized personnel and that authorized personnel can access the functions available to their security level. Security testing essentially checks how good the software's authorization mechanism is, how strong the authentication is, how the confidentiality and integrity of data are maintained, and how available the software remains in the event of a malicious attack by hackers or malicious programs. Security testing requires good knowledge of the application, technology, networking, and security testing tools.
This is also used to discover potential vulnerabilities through version detection that may highlight deprecated versions of software or firmware.
Performing a periodic Security / Risk Audit and Review can be of great help to identify the gaps and resolve the security or risk-based issues.
See Penetration Testing and Vulnerability Testing also.
Server testing is primarily stress-oriented testing that includes client/server I/O, network stress, CPU consumption, and memory consumption. The specific tests you must run depend on the features that you implement on the server. Several kinds of stress tests are run against a server, including basic system functionality, system stress, and shutdown/restart tests.
System Functionality Test – The system functionality tests are individual tests of the capabilities of the system. Some tests are run for every system, and some tests only run if the capability exists in the system.
System Stress Test – The system stress test consists of several server scenario workloads, operating from the user-level address space, that are applied to the system to exercise the system hardware, system-specific devices and drivers, network and storage adapters and drivers, and any filter drivers that might be part of the system configuration (such as multipath storage drivers, storage or file system filter drivers, or intermediate-layer network drivers). The workloads applied are SQL I/O simulation, local storage I/O, disk stress with verification, client-server storage I/O, and network traffic. These workloads automatically scale to the number of network and storage adapters in the system that have connected clients or storage devices, respectively.
Shutdown / Restart Test – The server test also includes a shutdown and restart test. This test signals the system to shut down and restart. The test records the event log information related to shutting down and restarting the system, such as vetoes that prevent shutdown, the startup event, and any driver errors that are received after restarting the system. This test makes sure that all device drivers in the system comply with system shutdown, do not veto, and cleanly restart in the system without conflicting with other drivers.
Server Virtualization Validation (SVV) Test – Two kinds of virtualization tests are run against a server: virtual machine functionality tests and SVV system functionality tests. The system can be a standalone server or a virtual machine. The virtual machine functionality tests are individual tests of the capabilities of the product's virtual machine implementation. The SVV system functionality tests validate the following virtual machine functionality – virtual PCI I/O, virtual BIOS, virtual timers, and virtual Plug-n-Play functions.
Server systems might have additional functionality beyond that which is required for Server Certification. The additional features for which a system can test and qualify are as follows:
Fault Tolerant Test – Confirms the ability of fault-tolerant system hardware, devices, and drivers to experience a hardware failure and continue to operate without impacting clients that are connected to the server over the network.
Power Management Test – Validates that the system supports the CPU-related feature flags, processor states, and other functionality needed for the server to manage the power of the system.
In addition to the above, there are various other tests like Hardware Certification Kit Harness Tests, Boot/Secure-Boot/ReBoot Tests, Debug Capability Test, Recovery Test, Robustness Test, Disk Stress Test, Timer Tests, PCI Hardware Compliance Test, Plug-n-Play Tests (with and without I/O devices), USB related tests, DVD Drive Tests, Memory Tests, Stability Tests, Reliability Tests, Network Connectivity Tests, Wireless Connectivity Tests, Domain Controller Test, Utilization Tests, etc.
NOTE: The above said tests are commonly applicable for almost all types of servers. Depending upon the type of the server and its intended functionality additional specific tests are designed to meet the requirements.
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Smoke testing is a type of testing carried out by software testers to check whether a new build provided by the development team is stable enough, i.e., whether its major functionality is working as expected, in order to carry out further or detailed testing. Smoke testing is intended to find "show stopper" defects that can prevent testers from testing the application in detail. A smoke test carried out for a build is also known as a build verification test.
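A build verification pass can be as simple as the following sketch; the entry points checked here are hypothetical placeholders for whatever counts as a show stopper in a real application:

```python
def smoke_suite(app):
    """Run a handful of show-stopper checks before detailed testing;
    'app' is a hypothetical build exposing these entry points."""
    checks = {
        "starts": lambda: app["start"]() == "ok",
        "login page loads": lambda: app["login"]() == "ok",
        "home page loads": lambda: app["home"]() == "ok",
    }
    failures = [name for name, check in checks.items() if not check()]
    return failures  # a non-empty list means: reject the build

# Usage: a stand-in build where every entry point responds.
build = {"start": lambda: "ok", "login": lambda: "ok", "home": lambda: "ok"}
print(smoke_suite(build))  # [] -- build is stable enough to test further
```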
Soak testing is a type of performance testing wherein running software is subjected to a high load over a prolonged period of time. Soak testing may go on for a few days or even a few weeks.
Soak testing is conducted to find errors that result in the deterioration of software performance with continued usage. It is done extensively for electronic devices, which are expected to run continuously for days, months, or years without restarting or rebooting. With the growth of web applications, soak testing has gained significant importance, as web application availability is critical for the sustainability and success of a business.
For example, running the software repeatedly over an entire day (or night) with a far greater volume of transactions than expected on a busy day, in order to identify performance problems that appear only after a large number of transactions has been executed.
This testing is also called Aging or Longevity Testing.
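A soak loop in miniature might look like this sketch, which runs a stand-in workload for a fixed interval and records peak memory as a crude degradation signal:

```python
import time
import tracemalloc

def soak_test(operation, duration_seconds=2.0):
    """Run an operation repeatedly for a fixed duration and report
    iterations completed plus peak memory, to spot degradation."""
    tracemalloc.start()
    iterations = 0
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        operation()
        iterations += 1
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return iterations, peak

# Usage: soak a stand-in workload for a short interval.
iterations, peak_bytes = soak_test(lambda: "x".join(["y"] * 100), 0.5)
print(iterations, peak_bytes)
```

A real soak run would last days rather than seconds, and would track throughput and memory trends over time rather than a single peak.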
A set of activities conducted with the intent of finding errors in software.
This is a type of performance testing performed by the performance engineering team. The objective of spike testing is to check how the software responds to workloads that arrive in very short bursts and are not constant over time.
This is a non-functional test intended to test one of the software quality attributes, i.e. “Stability”. Stability testing focuses on testing how stable the software is when subjected to loads at acceptable levels, peak loads, loads generated in spikes, and larger volumes of data to be processed. Stability testing involves performing different types of performance tests such as load testing, stress testing, spike testing, and soak testing.
Analysis of a program carried out without executing the program. Static Testing is a form of testing wherein approaches like reviews and walkthroughs are employed to evaluate the correctness of the deliverable.
In static testing the software code is not executed; instead, it is reviewed for syntax, commenting, naming conventions, coding standards, the size of functions and methods, etc.
Static testing usually has check lists against which deliverables are evaluated. Static testing can be applied for requirements, design, test case, user manual, installation documents by using approaches like reviews or walkthroughs.
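An automated static check can be sketched with Python's `ast` module; the rule below (flag functions longer than a threshold) is just an illustrative checklist item, not a standard:

```python
import ast

def long_functions(source: str, max_lines: int = 20):
    """Flag functions longer than max_lines without running the code,
    a simple automated static check."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, length))
    return findings

# Usage: review a snippet for an oversized function.
code = "def tiny():\n    return 1\n"
print(long_functions(code))  # [] -- nothing exceeds the limit
```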
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing is a type of performance testing, in which software is subjected to peak loads and even to a break point to observe how the software would behave at breakpoint. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Load Testing but employing a very high level of simulated load.
Stress testing also tests the behavior of the software with insufficient resources like CPU, Memory, Network bandwidth, Disk space etc. Stress testing enables to check some of the quality attributes like robustness and reliability.
This testing type includes the testing of Software behavior under abnormal conditions. Taking away the resources, applying load beyond the actual load limit is Stress testing.
The main intent is to test the Software by applying the load to the system and taking over the resources used by the Software to identify the breaking point. This testing can be performed by testing different scenarios such as:
- Shutdown or restart of Network ports randomly.
- Turning the database on or off.
- Running other processes that consume server resources such as CPU and memory.
Testing based on an analysis of internal workings and structure of a piece of software.
Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This testing attempts to discover defects of the entire system rather than of its individual components.
System testing includes multiple software testing types that together validate the software system as a whole (software, hardware and network) against the requirements for which it was built. Different types of tests (GUI testing, functional testing, regression testing, smoke testing, load testing, stress testing, security testing, ad-hoc testing, etc.) are carried out to complete system testing.
System testing is important for the following reasons:
- It is the first stage in the software development life cycle in which the application is tested as a whole.
- The application is tested thoroughly to verify that it meets the functional and technical specifications.
- The application is tested in an environment that is very close to the production environment where it will be deployed.
- It enables us to test, verify and validate both the business requirements and the application's architecture.
System Integration Testing (SIT):
This is a type of testing conducted by the software testing team. As the name suggests, the focus of system integration testing is to find errors related to integration among different applications, services, third party vendor applications, etc. As part of SIT, end-to-end scenarios are tested that require the software to interact (send or receive data) with other upstream or downstream applications, services, third party application calls, etc.
Transparent Box Testing:
This is an alias for White box testing. Refer to White-Box Testing for more details.
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top down integration testing is an incremental testing approach for integration testing where in testing of top level modules are done first before moving on to testing of branch modules. The component at the top of the hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. This helps in finding design issues in the initial stages of the Integration test. See Integration Testing also.
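The stub idea can be illustrated with a mock standing in for an unfinished lower-level component; `order_total` and `fetch_prices` are hypothetical names invented for this sketch:

```python
from unittest.mock import Mock

def order_total(order_service):
    """Top-level module under test: depends on a lower-level
    pricing component that is not yet integrated."""
    prices = order_service.fetch_prices()   # lower-level call
    return sum(prices)

# The lower-level component is simulated by a stub so the top
# level can be tested first.
stub = Mock()
stub.fetch_prices.return_value = [10.0, 2.5, 7.5]

assert order_total(stub) == 20.0
stub.fetch_prices.assert_called_once()
```

Once the real pricing component is implemented and individually tested, it replaces the stub and the same test is repeated against the integrated pair.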
This finds defects pertaining to the un-installation of the software. The testing involves invoking the software installer to uninstall the software on different types of OS and in different environments, such as virtual machines. It also covers removal of the software's configuration files, registry entries (for Windows), and installed patches such as bug fixes and security patches.
Also, any folders created but not removed during un-installation should be reported to the user upon completion of the uninstallation, with a prompt to delete them. Also check whether a reboot of the machine is required for complete removal; if so, prompt the user accordingly. See Installation Testing also.
From a usability perspective, the uninstaller can ask the user whether the configuration files should be retained for a future installation or removed completely, and act based on the response.
Also try the exceptional case of deleting all the installed files from the computer and then attempting to un-install the software. Any exceptions thrown should be handled appropriately, giving the installer robust behavior.
This type of testing is performed by developers before the build is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.
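A minimal unit test in this spirit might look like the following sketch, exercising one isolated function with `unittest` (the function under test is invented for illustration):

```python
import unittest

def is_leap_year(year: int) -> bool:
    """The unit under test: a single function checked in isolation."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```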
Limitations Of Unit Testing
Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing.
There is a limit to the number of scenarios and amount of test data that a developer can use to verify the source code. After exhausting those options, there is no choice but to stop unit testing and merge the code segment with other units.
This test is carried out after a hardware upgrade or OS upgrade or Software upgrade. Upgrade testing is a type of testing that is carried out to ensure application features are not broken due to the upgrade. See Installation Testing also.
Upward Compatibility Testing:
This refers to Forward Compatibility Testing.
Testing the ease with which users can learn and use a product.
This section covers different concepts and definitions of usability testing from a software point of view. It is a black-box technique used to identify errors and improvements in the software by observing users during their usage and operation of it.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. In his view, the usability of a product is good and the system is usable if it possesses these factors.
Nigel Bevan and Macleod considered that Usability is the quality requirement which can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled and the end user will be satisfied if the intended goals are achieved effectively with the use of proper resources.
In 2000, Molich stated that user friendly system should fulfill the following five goals i.e. Easy to Learn, Easy to Remember, Efficient to Use, Satisfactory to Use and Easy to Understand.
In addition to different definitions of usability, there are some standards and quality models and methods which define the usability in the form of attributes and sub attributes such as ISO-9126, ISO-9241-11, ISO-13407 and IEEE std.610.12 etc.
Usability testing is a type of software testing performed to understand how user friendly the software is. The objective is to let end users use the software, observe their behavior and emotional response (whether they liked it, whether they were stressed while using it, etc.), collect their feedback on how the software can be made more usable or user friendly, and incorporate the changes that make it easier to use.
UI VS Usability Testing
UI testing involves testing the graphical user interface of the software. It ensures that the GUI conforms to requirements in terms of color, alignment, size, and other properties.
On the other hand Usability testing ensures that a good and user friendly GUI is designed and is easy to use for the end user. UI testing can be considered as a sub part of Usability testing.
Accessibility or 508 Compliance Testing
This is a subset of usability testing, verifying the software product is accessible to the users under consideration having disabilities (deaf, blind, mentally disabled etc.). Accessibility evaluation is more formalized than usability testing generally. The end goal, in both usability and accessibility, is to discover how easily people can use a web site and feed that information back into improving future designs and implementations.
Myths, Misconceptions, & Confusion:
Is 508 compliance the same as accessibility?
‘No’ is the answer. 508 compliance refers to the law the federal government wrote to begin setting standards for electronic and information technology products. It provides the minimum standards for what is deemed acceptable, and ‘minimum’ really does not make a web site fully accessible. Sure, you can make the effort to be 508 compliant, but it is very broad in meaning. Make the effort to be accessible, not just 508 compliant.
Does accessible mean the design isn’t as pretty?
This is a serious misconception. There isn’t anything about being accessible that necessarily makes a design look ugly. Anything that you can do with web standards and other best practices can be done accessibly, and that makes for a lot of great design potential.
Is providing alt tags all you really need to do to be accessible?
Adding descriptive alt text is the very least of what you can do to improve accessibility. There are numerous simple things one can implement in the design.
1) If your main navigation is long and spans across pages, “skip navigation” proves very useful for those using the assistance of a screen reader. What is skip navigation? It allows the user to avoid having to hear the same navigation over and over every time they navigate to other pages on the same site.
2) Provide true semantic headers (h1, h2, h3) and clear labels on forms to help those using screen readers. This will help improve the user’s experience when navigating through a site.
3) When marking up your content or providing additional options in a layout, consider font sizing (which is built into most modern browsers now) for those who may need to see text at a larger size.
4) Be mindful of color contrast for those who are colorblind or have a hard time determining different colors.
User Acceptance Testing:
A formal product evaluation performed by a customer as a condition of purchase. User Acceptance testing is performed by clients/end users of the software. User Acceptance testing allows Subject matter experts (SMEs) from client side to test the software with their actual business or real-world scenarios and to check if the software meets their business requirements.
Volume testing is one of the types of performance testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Volume testing is non-functional testing carried out by the performance engineering team. It is carried out to find the response of the software to different sizes of data being received or processed by it.
Example – 1: If you were to be testing Microsoft word, volume testing would be to see if word can open, save and work on files of different sizes (10 to 100 MB).
Example – 2: If you were to test any email service for its attachment capabilities, volume testing would be to see if a 100 MB file (image or document or binary) can be attached.
Example – 3: If you were to test any email service for its attachment capabilities, volume testing would be to see if 50 various file types and sizes can be attached to a single email.
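A volume check along the lines of these examples can be sketched as follows; the processor and payload sizes are stand-ins invented for illustration:

```python
def volume_check(process, sizes_mb=(10, 50, 100)):
    """Feed the system payloads of increasing size and record
    which sizes it handles without error."""
    results = {}
    for size in sizes_mb:
        payload = b"\x00" * (size * 1024 * 1024)
        try:
            process(payload)
            results[size] = "ok"
        except MemoryError:
            results[size] = "failed"
    return results

# Usage: a stand-in processor that just measures its input.
print(volume_check(len, sizes_mb=(1, 2)))  # {1: 'ok', 2: 'ok'}
```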
This involves identifying and exposing software, hardware, or network vulnerabilities that can be exploited by hackers and other malicious programs like viruses or worms. Vulnerability testing is key to software security and availability. With the increasing number of hackers and malicious programs, vulnerability testing is critical to the success of a business.
White Box Testing:
This is also known as Structural Testing, Glass Box Testing, Clear Box Testing, or Transparent Box Testing. Contrast with Black Box Testing.
White box testing is the detailed investigation of internal logic and structure of the code. White box testing is also called glass testing or open box testing. White box testing intends to execute code and test statements, branches, path, decisions and data flow within the program being tested. In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code. The tester needs to have a look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
White-box testing and black-box testing complement each other, as each approach has the potential to uncover specific categories of errors.
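As a minimal illustration of the white-box approach, the tests below are derived by reading the source of the function under test (a hypothetical example) and picking one input per branch, so that every branch is executed:

```python
def classify(n):
    """Hypothetical function under test: three branches."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# White-box tests: one case per branch, chosen by inspecting the source.
assert classify(-5) == "negative"   # covers the n < 0 branch
assert classify(0) == "zero"        # covers the n == 0 branch
assert classify(7) == "positive"    # covers the else branch
```

A black-box tester, by contrast, would choose inputs only from the specification, without knowing where the branch boundaries lie in the code.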
Cyclomatic or Cyclometric or Conditional Complexity
This is a software metric used to express the complexity of source code. There is a strong correlation between a program's cyclomatic complexity and its maintainability and testability: the higher the complexity, the harder the code is to maintain and test, and the higher the probability of introducing errors when refactoring, enhancing or fixing it.
This technique is used in white-box testing. The complexity number is the number of linearly independent paths through the program, and hence the minimum number of test cases required to cover all of its flows. It also helps in deciding whether to split/modularize a complex program. Generally the accepted maximum complexity is 15.
Cyclomatic Complexity = E – N + 2P
E – number of edges of the control-flow graph
N – number of nodes of the control-flow graph
P – number of connected components (one per program/routine)
For example, consider a single program containing if-then, for-next, do-while and switch-case constructs. A practical counting shortcut is:
E = total number of "if" (including "elseif") conditions + total number of "For-Next" loops + total number of "Do-While" loops + total number of cases (ignoring the default case) in the "Switch-Case"
N = total number of "Return" statements
P = number of programs = 1 in this case.
Hence Cyclomatic Complexity = E – N + 2P = (if's + loops + cases) – (returns) + 2 × 1. For a routine with a single return, this reduces to the familiar rule: decision points + 1.
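The counting shortcut above can be automated. The sketch below is a simplified counter, assuming a single routine and counting only `if`/`for`/`while` decision points; it applies the decisions-plus-one rule, which agrees with E – N + 2P for a single-exit routine. It uses Python's `ast` module to walk the parse tree.

```python
import ast
import textwrap

def cyclomatic_complexity(source):
    """Simplified cyclomatic complexity: count if/for/while decision
    points and add 1 (equivalent to E - N + 2P for a single-entry,
    single-exit routine)."""
    tree = ast.parse(textwrap.dedent(source))
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While))
        for node in ast.walk(tree)
    )
    return decisions + 1

sample = """
def grade(score):
    if score >= 90:
        result = "A"
    elif score >= 60:
        result = "B"
    else:
        result = "F"
    return result
"""
print(cyclomatic_complexity(sample))  # 2 decision points (elif parses as a nested if) -> 3
```

A production-grade counter would also count boolean operators, exception handlers and ternary expressions, each of which adds a branch.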
| Complexity | Risk Evaluation |
| --- | --- |
| 1-10 | A simple module without much risk |
| 11-20 | A more complex module with moderate risk |
| 21-50 | A complex module of high risk |
| 51 and greater | An untestable program of very high risk |
Scripted end-to-end testing that duplicates the specific workflows expected to be used by the end user.
This focuses on the error-handling capabilities of the software: its response to exceptions, and the messages it renders to users in the case of expected and unexpected errors. This is called Error-Handling Testing.
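A minimal sketch of the idea, using a hypothetical `withdraw` routine: error-handling tests assert the failure behaviour (which exception is raised, and for which input) just as deliberately as the happy path.

```python
def withdraw(balance, amount):
    """Hypothetical routine under test: raises ValueError on invalid input."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def raises_value_error(fn, *args):
    """Helper: return True if the call raises ValueError."""
    try:
        fn(*args)
    except ValueError:
        return True
    return False

assert withdraw(100, 30) == 70                 # happy path still works
assert raises_value_error(withdraw, 100, -5)   # invalid amount is rejected
assert raises_value_error(withdraw, 100, 500)  # overdraft is rejected
```

In a test framework such as pytest, the helper would typically be replaced by `pytest.raises`, and the tests would also check the message rendered to the user.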
Black Box Vs Grey Box Vs White Box

| Black Box Testing | Grey Box Testing | White Box Testing |
| --- | --- | --- |
| The internal workings of the application are not required to be known | Some knowledge of the internal workings is required | The tester has full knowledge of the internal workings of the application |
| Also known as closed box testing, data-driven testing and functional testing | Also known as translucent testing, as the tester has limited knowledge of the insides of the application | Also known as clear box testing, structural testing or code-based testing |
| Performed by end users and also by testers and developers | Performed by end users and also by testers and developers | Normally done by testers and developers |
| Testing is based on external expectations; the internal behavior of the application is unknown | Testing is done on the basis of high-level database diagrams and data flow diagrams | Internal workings are fully known, and the tester can design test data accordingly |
| The least time-consuming and exhaustive | Partly time-consuming and exhaustive | The most exhaustive and time-consuming type of testing |
| Not suited to algorithm testing | Not suited to algorithm testing | Suited for algorithm testing |
| Can only be done by trial and error | Data domains and internal boundaries can be tested, if known | Data domains and internal boundaries can be better tested |
NOTE: In case any types of testing are missing, please do not hesitate to add them in the comments so that they can be incorporated into this post.