Systems Engineering Guide

Assess Integration Testing Approaches

Definition: When components of a system are developed in isolation, all the pieces must be brought together to ensure that the integrated system functions as intended in its operational configuration. Integration testing should exercise key interfaces between system components to ensure that they have been designed and implemented correctly. In addition, the total operational architecture, including all the segments of the system that are already fielded, should be included in an end-to-end test to verify system integration success.

Keywords: end-to-end testing, integration testing, operational architecture, system-of-systems testing

MITRE SE Roles and Expectations: MITRE systems engineers (SEs) are expected to understand the purpose and role of integration (or system-of-systems) testing in the acquisition process, where it occurs in systems development, and the benefits and risks of employing it. MITRE systems engineers are also expected to understand and recommend when integration testing, or system-of-systems testing, is appropriate within a program development. They should be able to take a broader look at the system acquisition within the context of its intended operational environment, beyond simply the core piece of equipment or software that is being developed, to the overarching operational architecture. MITRE systems engineers should develop and recommend integration testing strategies and processes that encourage and facilitate active participation of end users and other stakeholders in the end-to-end testing process. They are expected to monitor and evaluate contractor integration testing and the acquisition program's overall testing processes, and recommend changes when warranted.

Background

From a software development perspective, system integration testing (SIT) is defined as the activities involved with verifying the proper execution of software components and proper interfacing between components within the solution. The objective of SIT is to validate that all software module dependencies are functionally correct and that data integrity is maintained between separate modules for the entire solution. While functional testing is focused on testing all business rules and transformations and ensuring that each "black box" functions as it should, SIT is principally focused on testing all automated aspects of the solution and integration touch points [1].
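The distinction between functional ("black box") testing and SIT can be made concrete with a small sketch. The two modules and their message format below are invented for illustration: each unit test treats one module in isolation, while the integration test exercises the interface between them and verifies data integrity across it.

```python
# Hypothetical sketch: Module A produces messages, Module B consumes them.
# The unit tests check each module alone; the integration test checks the
# interface (the agreed wire format) that both must honor.

import json

# --- Module A: produces track observations for downstream consumers ---
def encode_track(track_id: int, lat: float, lon: float) -> str:
    """Serialize a track observation to the agreed wire format."""
    return json.dumps({"id": track_id, "lat": lat, "lon": lon})

# --- Module B: consumes track observations from Module A ---
def decode_track(message: str) -> dict:
    """Parse a track observation from the wire format."""
    record = json.loads(message)
    return {"id": record["id"], "lat": record["lat"], "lon": record["lon"]}

# Unit test of A alone: its output is well-formed and carries the id.
assert json.loads(encode_track(7, 38.9, -77.1))["id"] == 7

# Unit test of B alone: a hand-built message parses correctly.
assert decode_track('{"id": 7, "lat": 38.9, "lon": -77.1}')["lat"] == 38.9

# Integration test: data passed across the A-to-B interface survives intact.
original = {"id": 42, "lat": 38.87, "lon": -77.11}
assert decode_track(encode_track(42, 38.87, -77.11)) == original
```

Both unit tests can pass while the integration test fails (for instance, if the modules disagree on a field name), which is precisely the class of defect SIT exists to find.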

Modern systems provide great value through multi-functionality. However, for the systems engineer, the multi-functionality brings the challenge of increased complexity. Humans deal with complexity by partitioning the challenge into smaller pieces—sometimes called components or modules, although at times these are full systems in and of themselves. The downside of partitioning the problem into manageable pieces is that the pieces have to be put together (integration) and shown to work together. This integration is best achieved through a disciplined systems engineering approach containing good architecture, interface definitions, and configuration management.

In most cases, systems being acquired through the government's acquisition process are not complete, stand-alone entities. The newly acquired system will almost always need to fit into a larger operational architecture of existing systems and/or operate with systems that are being separately acquired. To be completely effective and suitable for operational use, the newly acquired system must interface correctly with the other systems that are a part of the final operational architecture. Integration testing, or system-of-systems testing, verifies that the building blocks of a system will effectively interact, and the system as a whole will effectively and suitably accomplish its mission. This article expands the strict software-focused definition of system integration testing to a broader look at complete systems and the integration, or system-of-systems, testing that should be conducted to verify the system has been "assembled" correctly.

The conundrum that a MITRE systems engineer, or any independent party charged with assessing a system's integration test strategy, will encounter in attempting to recommend or develop integration test strategies is the lack of requirements written at a system-of-systems or operational architecture level. By way of example, although the Department of Defense (DoD) Joint Capabilities Integration and Development System (JCIDS) was developed to address shortfalls in the DoD requirements generation system, including "not considering new programs in the context of other programs" [2], operational requirements documents continue to be developed without a system-of-systems focus. A typical Capabilities Development Document will provide requirements for a system, including key performance parameters, but will not provide requirements at the overarching architecture level. As a result, to develop a recommendation for integration testing, some creativity and a great deal of pulling information from diverse sources are required. Once the test is developed, the task of advocating and justifying the test's need within the system development process will be the challenge at hand.

The following discussion provides examples of systems-of-systems, the recommended integration testing that should be conducted, and both good and bad integration testing examples. Note that best practices and lessons learned are generally interspersed throughout the article. A few cautionary remarks are also listed at the end.

Systems-of-Systems: Definition and Examples

While the individual systems constituting a system-of-systems can be very different and operate independently, their interactions typically deliver important operational properties. In addition, the dependencies among the various systems are typically critically important to effective mission accomplishment. The interactions and dependencies must be recognized, analyzed, and understood [3]. Then the system-of-systems test strategy can be developed to ensure that the integration of the individual systems has been accomplished successfully to deliver a fully effective and suitable operational capability.

The examples used in this article are drawn from a particular domain. But most MITRE systems engineers should see a great deal of similarity in the essentials of the following examples, regardless of the sponsor or customer they support.

The Global Positioning System (GPS) is an example of a system-of-systems. Typical GPS users—ranging from a hiker or driver using a GPS receiver to navigate through the woods or the local streets, to a USAF pilot using GPS to guide a munition to its target—don't usually consider all the components within the system-of-systems required to guide them with GPS navigation. The constellation of GPS satellites is only a small piece, albeit an important one, within the system-of-systems required to deliver position, navigation, and timing information to the GPS user. Other essential pieces include the ground command and control network needed to maintain the satellites' proper orbits; the mission processing function needed to process the raw collected data into usable information for the end user; the external communication networks needed to disseminate the information to the end user; and the user equipment needed for the end user to interface with the system and use its information. The dependencies and interfaces among all these elements are just as critical to accomplishing the user's goal as is the proper functioning of the constellation of GPS satellites.

A second system-of-systems example is an interoperable and information assurance (IA) protected cross-boundary information sharing environment where federal government users from different departments and agencies, commercial contractors, allies, and coalition members can share information on a global network. Multiple separate but interrelated products comprise the first increment suite of information technology services, including Enterprise Collaboration, Content Discovery and Delivery, User Access (Portal), and a Service-Oriented Architecture Foundation to include Enterprise Service Management.

Finally, consider a more loosely coupled system-of-systems (SoS): a surveillance SoS for which no single government program office is responsible for acquiring and sustaining the entire system. The surveillance network comprises a number of sensors that contribute observations to a central processing center (CPC), which uses the sensor-provided observations to maintain a database of the locations of all objects being monitored. The CPC is updated and maintained by one organization, while each type of contributing sensor has its own heritage and acquisition/sustainment tail. A new sensor type for the surveillance network is currently being acquired. While it will be critically important for this new sensor type to integrate seamlessly into the overall surveillance network and preserve data integrity within it, the road to SoS integration testing is fraught with difficulty, primarily because there are no overarching requirements at the surveillance-network level to ensure adequate integration of the new sensor.

System-of-Systems Testing

Although they are challenging to plan and execute, system-of-systems tests for programs in which a single government program office is responsible for the entire SoS are generally well accomplished as part of the system acquisition process. If nothing else, a final system integration test is typically planned and executed by the development contractor prior to turning the system over for operational testing. Then the operational test community plans, executes, and reports on an operationally realistic end-to-end system test as part of the system's Congressionally mandated Title 10 Operational Test and Evaluation.

A good example of an integration/SoS test is that being done to inform some GPS upgrades. As the new capability is fielded within the GPS constellation, the development community will combine their Integrated System Test (IST) with the operational test community's Operational Test and Evaluation into an integrated test that will demonstrate the end-to-end capability of the system. This SoS/end-to-end test will include the full operational process, from user request for information, through command generation and upload to the constellation, to user receipt of the information through the user's GPS receiver. During the final phase of the IST, a number of operational vignettes will be conducted to collect data on the end-to-end system performance across a gamut of operational scenarios.
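The shape of such an end-to-end test can be illustrated with a toy harness. Everything below is hypothetical: the four stages merely stand in for the user-request, command-generation, constellation, and user-receipt steps described above, and the single assertion is the end-to-end check that information survives every interface in the chain.

```python
# Illustrative only: a toy end-to-end "vignette" harness. The stage names
# are invented; a real integrated test exercises actual ground, space,
# and user segments, not functions.

def user_request(info: str) -> dict:
    """User segment: request information from the system."""
    return {"request": info}

def generate_command(request: dict) -> dict:
    """Ground segment: turn the request into an upload command."""
    return {"command": "UPLOAD:" + request["request"]}

def constellation_process(command: dict) -> dict:
    """Space segment stand-in: upload, on-orbit processing, broadcast."""
    return {"broadcast": command["command"][len("UPLOAD:"):]}

def user_receive(broadcast: dict) -> str:
    """User segment: receive the broadcast through the receiver."""
    return broadcast["broadcast"]

def run_vignette(info: str) -> str:
    """Drive one scenario through every segment, end to end."""
    return user_receive(constellation_process(generate_command(user_request(info))))

# The end-to-end check: what the user receives is what was requested,
# after traversing every interface in the chain.
assert run_vignette("ephemeris-update") == "ephemeris-update"
```

Note that no single stage's unit test could make this guarantee; only driving the full chain does, which is the point of an operational vignette.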

Lessons Learned

  1. Integration testing should include scenarios that demonstrate the capability to perform mission essential tasks across the SoS segments.
  2. Don't assume integration testing will happen or be adequate just because the full SoS is under the control of a single program office. There are examples of single-program-office SoS acquisitions comprising a number of different products and segments in which each product was tested only separately.
  3. Failure to conduct adequate SoS integration testing can lead to potentially catastrophic failures. If the new sensor type in the surveillance network example provides the quality and quantity of data anticipated, there is a real possibility that it will overwhelm the CPC's processing capability, degrading the accuracy and timeliness of the surveillance database.
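Lesson 3 suggests a concrete pre-fielding check: a capacity analysis or load test of the central processor against the new sensor's anticipated data rate. The sketch below is purely illustrative, with invented names and rates, but it captures the failure mode to test for.

```python
# Hypothetical capacity check: before fielding a new sensor, verify the
# central processing center (CPC) can keep up with the combined data rate.
# All rates and durations here are invented for illustration.

def cpc_backlog(obs_per_sec: float, cpc_capacity_per_sec: float,
                duration_sec: float) -> float:
    """Observations left unprocessed after a sustained period at a given rate."""
    return max(0.0, (obs_per_sec - cpc_capacity_per_sec) * duration_sec)

# Legacy sensors alone: well within capacity, no backlog accumulates.
assert cpc_backlog(obs_per_sec=800, cpc_capacity_per_sec=1000,
                   duration_sec=60) == 0.0

# New sensor added: demand exceeds capacity and a backlog grows without
# bound, degrading database accuracy and timeliness -- exactly the
# catastrophic failure mode an SoS-level load test should expose.
assert cpc_backlog(obs_per_sec=1500, cpc_capacity_per_sec=1000,
                   duration_sec=60) == 30000.0
```

A real test would drive the CPC with recorded or simulated observation streams rather than arithmetic, but the pass/fail criterion is the same: sustained throughput at or above the anticipated SoS-wide demand.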

Summary and Conclusions

A strict software-development view of integration testing defines it as a logical extension of unit testing [4]. In integration testing's simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between them. This method reduces the number of possibilities to a far simpler level of analysis.
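A minimal example of why this localization works: in the invented sketch below, each unit passes its own test, yet combining them exposes an interface error (a units-of-measure mismatch). Because both units were already verified in isolation, the fault can be attributed to the interface rather than to either unit.

```python
# Illustration of error localization through unit-then-integration testing.
# Both functions are invented; each is correct in isolation, but they
# disagree about units of measure at their interface.

def sensor_altitude_m() -> float:
    """Unit A: reports altitude in meters."""
    return 1000.0

def display_altitude_ft(altitude_ft: float) -> str:
    """Unit B: formats an altitude given in feet."""
    return f"{altitude_ft:.0f} ft"

# Each unit passes its own test in isolation.
assert sensor_altitude_m() == 1000.0
assert display_altitude_ft(3281.0) == "3281 ft"

# Naively wiring A into B would display "1000 ft" for a 1000 m altitude.
# The integration test of the combined component catches the mismatch,
# and prior unit testing tells us the fix belongs at the interface:
METERS_TO_FEET = 3.28084
combined = display_altitude_ft(sensor_altitude_m() * METERS_TO_FEET)
assert combined == "3281 ft"
```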

This article has focused on making the logical extension of this definition to a full-up system, and expanding the integration testing definition to one of system-of-systems testing. MITRE systems engineers charged with assessing integration testing approaches should ensure a system-of-systems view of the program, and develop and advocate for full end-to-end testing of capabilities within the complete operational architecture.

References & Resources

  1. MIKE 2.0, System Integration Testing, accessed March 5, 2010.
  2. Joint Capabilities Integration and Development System, Wikipedia, accessed March 5, 2010.
  3. System of Systems, Wikipedia, accessed March 5, 2010.
  4. DISA/JITC, October 2008, Net-Centric Enterprise Services Initial Operational Test and Evaluation Plan, p. i.

Additional References & Resources

MSDN, Integration Testing, accessed March 5, 2010.

Wikipedia contributors, "United States Space Surveillance Network," Wikipedia, accessed May 25, 2010.

SMC/GPSW, September 9, 2009, Integrated System Test (IST) 2-4 Test Plan draft, pp. 2-21–2-22.

Page last updated: May 3, 2012
Copyright © 1997-2013, The MITRE Corporation. All rights reserved.