Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output an integrated system ready for system testing.[1]
Purpose
Integration testing exists to identify issues among modules that individually pass unit tests but, when combined, behave inconsistently with the expected behavior. The inconsistencies may stem from operations executing in an unanticipated order, or simply from differences in how two components handle the same object.[2]
The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major design items. These "design items" (i.e., assemblages or groups of units) are exercised through their interfaces using black-box testing. Success and error cases are simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested and individual subsystems are exercised through their input interface. Test cases are constructed to test whether all the components within an assemblage interact correctly, such as across procedure calls, and are performed after testing individual modules through unit tests. The overall approach is to use "building blocks", in which verified assemblages are added to a verified base and then used to support the integration testing of further assemblages.
Software integration testing is performed according to the software development life cycle (SDLC) after unit and functional tests. The cross-dependencies for software integration testing are: the schedule for integration testing, the strategy and selection of the tools used for integration, the cyclomatic complexity of the software and its architecture, the reusability of modules, and life-cycle and versioning management.[3]
Integration Patterns
The foundational integration testing patterns are big-bang, top-down, and bottom-up.
Big Bang Integration
In the big-bang approach, most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. This method can save considerable time in the integration testing process. However, if the test cases and their results are not recorded properly, a failed test will not indicate the source of the issue, and the entire integration process becomes more complicated because the issue may lie anywhere in a large body of code. This lack of clarity may prevent the testing team from achieving the goal of all integration tests passing.[3]
A type of big-bang integration commonly used in both software and hardware integration testing is usage model testing. The idea behind this type of integration testing is to run user-like workloads in integrated, user-like environments. In testing this way, the environment is proofed, while the individual components are proofed indirectly through their use. Usage model testing takes an optimistic approach, because it expects to find few problems with the individual components. The strategy relies heavily on the component developers to do the isolated unit testing for their product; its goal is to avoid redoing the testing done by the developers and instead to flush out problems caused by the interaction of the components in the environment. For integration testing, usage model testing can be more efficient and provide better test coverage than traditional focused functional integration testing. To be efficient and accurate, care must be taken in defining the user-like workloads so that the scenarios exercising the environment are realistic. This gives confidence that the integrated environment will work as expected for the target customers.[4]
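As a minimal sketch of the big-bang approach, the hypothetical `InMemoryDatabase` and `OrderService` classes below stand in for modules that have each passed unit testing; the test assembles all of them at once and exercises the system as a whole, rather than testing any pairing in isolation:

```python
# Hypothetical modules, each assumed to have passed its own unit tests.
class InMemoryDatabase:
    def __init__(self):
        self.rows = {}

    def save(self, key, value):
        self.rows[key] = value

    def load(self, key):
        return self.rows[key]


class OrderService:
    def __init__(self, db):
        self.db = db

    def place_order(self, order_id, item):
        # Interaction under test: the service writes through the
        # database module and reads the result back.
        self.db.save(order_id, item)
        return self.db.load(order_id)


def test_big_bang():
    # Big bang: wire all real components together at once and
    # test the assembled system end to end.
    db = InMemoryDatabase()
    service = OrderService(db)
    assert service.place_order(1, "book") == "book"
```

Note that a failure in `test_big_bang` could originate in either component or in their interaction, which illustrates the error-isolation drawback described above.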
Bottom-Up Integration
Bottom-up testing is an approach to integration testing in which the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy has been tested.
Before integration testing can begin, all the bottom or low-level modules need to pass their respective unit tests. The modules are then grouped together for integration testing. After a level passes integration testing, the next level of modules may be grouped for further integration testing. The hope is that once a layer of integrated modules passes integration testing, a failed test at the next level has a reduced scope, as the previously passing modules may be assumed to be free of issues. The bottom-up approach is helpful only when all or most of the modules at the same development level are ready. Bottom-up integration testing also helps determine how much of the software has been developed, making it easier to report testing progress as a percentage.[3]
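The level-by-level progression can be sketched with two hypothetical layers: a low-level `parse_price` function is tested first, and only after it passes is the real (already verified) function used to test the higher-level `total` aggregate built on top of it:

```python
# Lowest-level module: tested first in bottom-up integration.
def parse_price(text):
    return float(text.strip("$"))


# Higher-level module that depends on the lower-level one.
def total(prices):
    return sum(parse_price(p) for p in prices)


# Level 1: verify the bottom module in isolation.
def test_parse_price():
    assert parse_price("$3.50") == 3.5


# Level 2: once level 1 passes, the real parse_price is used to
# test the aggregate, so a failure here has a reduced scope.
def test_total():
    assert total(["$1.00", "$2.50"]) == 3.5
```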
Top-Down Integration
Top-down testing is an approach to integration testing in which the top-level integrated modules are designed and tested before lower-level modules. When a system's design is viewed as a tree structure, top-down integration begins by testing the root node and proceeds layer by layer until the leaves are reached. It differs from big-bang integration in that the modules below the layer currently under test have no functionality; they are simply stub methods. Thus, if an error is encountered during testing, the lower layers cannot be responsible: any errors most likely arise from code in the layer currently under test. The benefits of top-down integration are generally seen early in the project's life cycle, as it focuses on higher-level workflows and defers most of the testing complexity until later.[5]
Top-down integration suffers from a long startup time because all the stubs must be set up first. Once the stubs are in place, however, it allows for parallel development: lower-level code may be developed, but it does not enter integration testing until the higher levels have passed. Because testing does not depend on completion of all the code, it can begin as soon as the top layers are functional.[3]
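A small sketch of top-down stubbing, using hypothetical functions: the top-level `can_ship` is under test, while its lower-level inventory dependency is not yet integrated and is replaced by a hand-written stub, so any failure must come from `can_ship` itself:

```python
# Top-level module under test. Its lower-level dependency is passed
# in as a parameter, which makes substituting a stub trivial.
def can_ship(item, quantity, fetch_inventory):
    return fetch_inventory(item) >= quantity


# Stub standing in for the not-yet-integrated lower layer: it has
# no real functionality, it just returns a fixed value.
def stub_fetch_inventory(item):
    return 10


def test_can_ship_with_stub():
    # The lower layer is only a stub, so an error encountered here
    # must originate in the layer currently under test.
    assert can_ship("book", 3, stub_fetch_inventory)
    assert not can_ship("book", 11, stub_fetch_inventory)
```

Later, when the real inventory module passes its own tests, it replaces `stub_fetch_inventory` and the next layer down enters integration testing.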
Other Integration Patterns
Most other patterns build on at least one of the foundational patterns. Other patterns that may be followed include:
- collaboration integration: Split the application into possibly overlapping, task-dependent components called collaborations. Test one collaboration at a time until all have passed integration testing.
- backbone integration: Isolate a backbone from the application, where the backbone is a group of components without which the application could not run. Big-bang testing is performed on the backbone, while top-down testing is performed on application components not part of the backbone.
- layer integration: The application is split logically into layers, and each layer is tested in isolation. Once each layer has passed its tests, layer integration can begin via top-down or bottom-up integration of the layers.
- client/server integration: Tests the interface between nodes (clients and servers), where each node has already been tested individually; this prevents the strategy from degenerating into big-bang testing on the whole network.
- high-frequency integration: Integration tests (often following the top-down pattern) are run frequently to test the integration of new code. Continuous integration is often used to automate the process.[3]
- sandwich integration: Combines top-down and bottom-up integration into a single pattern. Bottom-up testing proceeds from the middle layer upward, while top-down testing proceeds from the middle layer downward.
- risky-hardest integration: Testing that prioritizes the riskiest and hardest software modules first. This is similar to backbone integration in that individual modules are evaluated based on their importance to the overall application.[4]
Implementations
Integration testing frameworks provide an interface for developers to test the interaction of components that have previously passed unit testing. Because unit testing and integration testing share a similar set-up and tear-down pattern, most unit testing frameworks can also be used to facilitate integration testing. When using such a framework, each test should check the integrity of interactions among individually passing modules. Third-party testing frameworks exist for most languages.
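As an illustration of reusing a unit testing framework for integration testing, the sketch below uses Python's standard `unittest` module; the `write_config`/`read_config` pair is a hypothetical example of two components whose interaction (agreement on an on-disk format) is under test, with the familiar set-up and tear-down hooks managing the shared environment:

```python
import os
import tempfile
import unittest


# Hypothetical component pair: a writer and a reader that must agree
# on the on-disk format. Each is assumed to have passed unit testing.
def write_config(path, settings):
    with open(path, "w") as f:
        for key, value in settings.items():
            f.write(f"{key}={value}\n")


def read_config(path):
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f)


class ConfigIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Shared environment for the interacting components.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def tearDown(self):
        os.remove(self.path)

    def test_round_trip(self):
        # Exercise the interaction: what the writer produces,
        # the reader must interpret identically.
        write_config(self.path, {"mode": "fast"})
        self.assertEqual(read_config(self.path), {"mode": "fast"})
```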
Limitations
In a given application, there are multiple paths a user may take. Coverage is a common metric for evaluating the effectiveness of a test: the more possibilities evaluated (methods, calls, decisions), the better the coverage. Integration tests often provide poor coverage because they consider large chunks of the application rather than individual methods or classes, which also results in poor error isolation when an integration test fails. Furthermore, the coarse granularity of integration tests makes them comparatively slower than unit tests. However, integration tests are still useful for identifying errors that would otherwise be masked during unit testing.[6]
| | Unit | Functional | Integration |
| --- | --- | --- | --- |
| What is tested | Method/class | Methods/classes | Parts of the application |
| Run time | Very fast | Fast | Slow |
| Coverage | Excellent | Moderate | Poor |
| Error localization | Excellent | Moderate | Poor |
References
- ^ Martyn A Ould & Charles Unwin (eds.), Testing in Software Development, BCS (1986), p. 71. Accessed 31 October 2014.
- ^ Beizer, Boris (1990). Software Testing Techniques. New York, New York: Van Nostrand Reinhold. pp. 21–22. ISBN 0-442-20672-0.
- ^ a b c d e Binder, Robert V.: Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison Wesley 1999. ISBN 0-201-80938-9
- ^ a b "Big Bang Integration Testing". ProfessionalQA. April 19, 2016. Retrieved February 20, 2017.
- ^ "What is Integration Testing?". ISTQBExamCertification. Retrieved February 20, 2017.
- ^ Fox, Armando; Patterson, David (2014). Engineering Software as a Service: An Agile Approach Using Cloud Computing. Strawberry Canyon LLC. pp. 279–283.