Test and Evaluation of Systems of Systems


Definition: A system of systems (SoS) is "a collection of systems, each capable of independent operation, that interoperate together to achieve additional desired capabilities [1]." Test & Evaluation (T&E) is the process by which an SoS and/or its constituents are compared against capability requirements and specifications.

Keywords: system of systems (SoS), SoS test, SoS test and evaluation, SoS testing, SoS validation, SoS verification

MITRE SE Roles and Expectations: MITRE systems engineers (SEs) are expected to understand the characteristics of systems of systems (SoS) and the implications for systems engineering in an SoS environment, including SoS test and evaluation. SEs are expected to develop systems engineering and T&E plans for systems that are constituents of SoS as well as for SoS themselves.

Background

This article addresses unique aspects of T&E of SoS and outlines strategies and techniques for handling them.

Systems of systems (SoS) differ from traditional systems in a number of ways. As a result, the application of systems engineering to SoS must be tailored to address the particular characteristics of SoS. Likewise, the distinctive characteristics of SoS have implications for the application of test and evaluation (T&E). This discussion specifically addresses "acknowledged SoS," a type of SoS that is growing in significance in the Department of Defense (DoD). Acknowledged SoS have recognized objectives, a designated manager, and resources. However, the constituent systems (those that interoperate with each other to achieve the SoS capabilities) retain their independent ownership, objectives, funding, development, and sustainment approaches. Changes in the constituent systems are based on collaboration between the SoS and system levels.

SoS raise unique development challenges as a consequence of their far-reaching capability objectives, the lack of SoS control over the constituent systems, and the dependence of SoS capability on leveraging already fielded systems to address user and SoS needs. Further, SoS are often not formal programs of record but instead depend on changes made through acquisition programs or through operations and maintenance of fielded systems. As a result, the question addressed here is not simply how to implement T&E for an SoS, but rather what it means to test and evaluate an SoS.

SoS Characteristics Impacting Test and Evaluation

Table 1 summarizes key differentiating characteristics between systems and acknowledged SoS. Most of the differences are a result of the independence of the SoS's constituent systems. The constituent systems may evolve in response to user needs, technical direction, funding, and management control independent of the SoS. SoS evolution, then, is achieved through cooperation among the constituent systems, instead of direction from a central authority, by leveraging the constituent systems' efforts to improve their own individual capabilities.

Table 1. Comparing Systems and Acknowledged Systems of Systems

Management & Oversight

  • Stakeholder Involvement. System: clearer set of stakeholders. Acknowledged SoS: stakeholders at both the system and SoS levels (including system owners), with competing interests and priorities; in some cases, the system stakeholder has no vested interest in the SoS; all stakeholders may not be recognized.

  • Governance. System: aligned program manager and funding. Acknowledged SoS: added levels of complexity due to management and funding for both the SoS and individual systems; the SoS does not have authority over all the systems.

Operational Environment

  • Operational Focus. System: designed and developed to meet operational objectives. Acknowledged SoS: called upon to meet a set of operational objectives using systems whose objectives may or may not align with the SoS objectives.

Implementation

  • Acquisition. System: aligned to acquisition category milestones, documented requirements, and systems engineering (SE). Acknowledged SoS: added complexity due to multiple system life cycles across acquisition programs, involving legacy systems, systems under development, new developments, and technology insertion; stated capability objectives are typically given up front and may need to be translated into formal requirements.

  • Test & Evaluation. System: test and evaluation of the system is generally possible. Acknowledged SoS: testing is more challenging due to the difficulty of synchronizing across multiple systems' life cycles, given the complexity of all the moving parts and the potential for unintended consequences.

Engineering & Design Considerations

  • Boundaries and Interfaces. System: focuses on boundaries and interfaces for the single system. Acknowledged SoS: focuses on identifying the systems that contribute to the SoS objectives and enabling the flow of data, control, and functionality across the SoS while balancing the needs of the systems.

  • Performance & Behavior. System: performance of the system to meet specified objectives. Acknowledged SoS: performance across the SoS that satisfies SoS user capability needs while balancing the needs of the systems.

An SoS will face T&E challenges that stem from the independence of its constituent systems:

  • Independent development cycles mean that deliveries of system upgrades to meet SoS needs occur asynchronously and are bundled with other changes made in response to needs beyond those of the SoS.
  • The number and variability of the systems that influence SoS results means that large SoS, in particular, are complex and that interactions among the constituents may lead to unintended effects or emergent behavior.

Test and Evaluation in the SoS SE Process

The SoS Guide [1] presents seven core SoS SE elements. Four are critical to T&E of the SoS. More detail on these elements can be found in the above reference; this article summarizes their key aspects as shown in Figure 1. The discussion that follows shows how T&E activities fit into the SoS SE core elements and the challenges SoS pose for T&E.

Figure 1: SoS SE Core Elements and Their Relationships to T&E

(1) Capability objectives of an SoS are often stated at a high level, particularly when the need for an SoS is first established.

Translating capability objectives into high-level SoS requirements is a core element in the SoS SE process. In most cases, SoS capability objectives are framed in high-level language that needs to be interpreted into high-level requirements to serve as the foundation of the engineering process.

"SoS objectives are typically couched in terms of needed capabilities, and the systems engineer is responsible for working with the SoS manager and users to translate these into high-level requirements that provide the foundation for the technical planning to evolve the capability over time [1, p. 18]."

These objectives establish the capability context for the SoS, which grounds the assessment of current SoS performance. In most cases, SoS do not have requirements per se; they have capability objectives or goals that provide the starting point for specific requirements that drive changes in the constituent systems to create increments of SoS evolution.

(2) Requirements are specified at the level of the system for each SoS upgrade cycle.

In the SoS SE core element, "assessing requirements and solution options," increments of SoS improvements are planned collaboratively by managers and systems engineers at the SoS and system levels. Typically, there are specific expectations for each increment about system changes that will produce an anticipated overall effect on the SoS performance. While it may be possible to confidently define specifications for the system changes, it is more difficult to do this for the SoS, which is, in effect, the cumulative result of the changes in the systems.

"It is key for the systems engineer to understand the individual systems and their technical and organizational context and constraints when identifying viable options to address SoS needs and to consider the impact of these options at the systems level. It is the SoS systems engineer's role to work with requirements managers for the individual systems to identify the specific requirements to be addressed by appropriate systems (that is to collaboratively derive, decompose, and allocate requirements to systems) [1, p. 20]."

As a result, most SoS requirements are specified at the system level for each upgrade cycle, which provides the basis for assessing system-level performance. As discussed below, T&E of system changes is typically done by the systems as part of their processes.

(3) Systems implement changes as part of their own development processes.

The main source of T&E challenges arises from SoS upgrades that are the product of changes both in independently operating constituent systems and in the SoS itself. The SoS SE team needs to work with the systems' SE teams to plan and track the system changes that will contribute to meeting the SoS capability objectives:

"Once an option for addressing a need has been selected, it is the SoS systems engineer's role to work with the SoS sponsor, the SoS manager, the constituent systems' sponsors, managers, systems engineers, and contractors to fund, plan, contractually enable, facilitate, integrate, and test upgrades to the SoS. The actual changes are made by the constituent systems' owners, but the SoS systems engineer orchestrates the process, taking a lead role in the synchronization, integration, and test across the SoS and providing oversight to ensure that the changes agreed to by the systems are implemented in a way that supports the SoS [1, p. 20]."

(4) Systems level T&E validates implementation of system requirements.

Consequently, T&E is implemented as part of this element at both the system and SoS levels. It is fairly straightforward to assess whether the systems have made the changes specified in the plan; it is less clear how the results of these changes at the SoS level are to be tested and evaluated.

"Throughout orchestration, the systems are implementing changes according to the negotiated plans, and they are following their own SE and T&E processes. The SoS systems engineer works with the SE teams of the constituent systems to enable SoS insight into progress of the system developments as laid out in the SoS plan. The SoS SE team members are responsible for integration and for verification and validation of the changes across the suite of system updates under an SoS increment, including T&E tailored to the specific needs of the increments. Their efforts may result in both performance assessments and a statement of capabilities and limitations of the increment of SoS capability from the perspectives of SoS users and users of the individual systems. These assessments may be done in a variety of venues, including distributed simulation environments, system integration laboratories, and field environments. The assessments can take a variety of forms, including analysis, demonstration, and inspection. Often SoS systems engineers leverage system-level activities that are underway in order to address SoS issues [1, p. 68]."

There are significant challenges in creating an end-to-end test environment sufficient to assess whether the needs of the SoS capability have been met. These can be mitigated by conducting T&E on a subset of systems prior to fielding the entire SoS increment, though at the expense of some T&E validity. Contingency plans should be prepared for situations in which the SoS T&E results do not reflect the expected improvements even though the systems are ready to be fielded based on system-level test results and owners' needs.

(5) Constituent system development processes are typically asynchronous.

The asynchronous nature of the constituent systems development schedules presents a challenge to straightforward T&E at the SoS level. While it is obviously desirable to coordinate the development plans of the systems and synchronize the delivery of upgrades, as a practical matter, this is often difficult or impossible. Even when it is possible to plan synchronous developments, the result may still be asynchronous deliveries due to the inevitable issues that lead to development schedule delays, particularly with a large number of systems or when the developments are complex.

"SoS SE approaches based on multiple, small increments offer a more effective way to structure SoS evolution. Big-bang implementations typically will not work in this environment; it is not feasible with asynchronous independent programs. Specifically, a number of SoS initiatives have adopted what could be termed a "bus stop," spin, or block-with-wave type of development approach. In this type of approach, there are regular time-based SoS "delivery" points, and systems target their changes for these points. Integration, test, and evaluation are done for each drop. If systems miss a delivery point because of technical or programmatic issues, they know that they have another opportunity at the next point (there will be another bus coming to pick up passengers in 3 months, for instance). The impact of missing the scheduled bus can be evaluated and addressed. By providing this type of SoS battle rhythm, discipline can be inserted into the inherently asynchronous SoS environment. In a complex SoS environment, multiple iterations of incremental development may be under way concurrently."

"Approaches such as this may have a negative impact on certification testing, especially if the item is software related to interoperability and/or safety issues (such as Air Worthiness Release). When synchronization is critical, considerations such as this may require large sections of the SoS, or the entire SoS, to be tested together before any of the pieces are fielded [1, pp. 68-69]."

As these passages indicate, the asynchronous nature of system developments frequently leaves some SoS constituents unprepared to test with earlier-delivering systems, complicating end-to-end testing. Moreover, as autonomous entities, constituent systems expect to field based on the results of their own independent testing, apart from the larger impact on SoS capability. Holding some systems back until all are ready to test successfully is impractical and undesirable in most cases. These dependencies form a core impediment to mapping traditional T&E onto SoS.
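The "bus stop" delivery rhythm quoted above can be sketched as a simple scheduling rule: each system change is assigned to the first fixed SoS delivery point on or after the date the change is ready. This is a minimal illustration only; the function, cadence, and system names are hypothetical, not from the SoS Guide.

```python
from datetime import date, timedelta

def next_delivery_point(ready: date, start: date, cadence_days: int = 90) -> date:
    """Assign a system change to the first SoS delivery point ("bus stop")
    on or after the date the constituent system is ready. A system that
    misses one drop simply catches the next one."""
    if ready <= start:
        return start
    elapsed = (ready - start).days
    # Round the elapsed time up to a whole number of cadence intervals.
    drops = -(-elapsed // cadence_days)  # ceiling division
    return start + timedelta(days=drops * cadence_days)

# Hypothetical increment plan: first drop on 2024-01-01, quarterly cadence.
start = date(2024, 1, 1)
for system, ready in [("Radar upgrade", date(2024, 2, 15)),
                      ("C2 interface", date(2024, 1, 1)),
                      ("Datalink patch", date(2024, 4, 2))]:
    print(system, "->", next_delivery_point(ready, start))
```

The point of the sketch is the discipline it encodes: a late system does not slip the drop, it waits for the next one, so the impact of missing the "bus" can be evaluated and addressed at the SoS level.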

(6) SoS performance is assessed in various settings.

SoS typically have broad capability objectives rather than specific performance requirements as is usually the case with independent systems. These capability objectives provide the basis for identifying systems as candidate constituents of an SoS, developing an SoS architecture, and recommending changes or additions to constituent systems.

In an SoS environment there may be a variety of approaches to addressing objectives. This means that the SoS systems engineer needs to establish metrics and methods for assessing performance of the SoS capabilities which are independent of alternative implementation approaches. A part of effective mission capability assessment is to identify the most important mission threads and focus the assessment effort on end-to-end performance. Since SoS often comprise fielded suites of systems, feedback on SoS performance may be based on operational experience and issues arising from operational settings, including live exercises as well as actual operations. By monitoring performance in the field or in exercise settings, systems engineers can proactively identify and assess areas needing attention, emergent behavior in the SoS, and impacts on the SoS of changes in constituent systems [1, pp. 18-19].

This suggests the necessity of generating metrics that define end-to-end SoS capabilities for ongoing benchmarking of SoS development. Developing these metrics and collecting data to assess the state of the SoS is part of the SoS SE core element "assessing the extent to which SoS performance meets capability objectives over time." This element provides the capability metrics for the SoS, which may be collected from a variety of settings as input on performance, including new operational conditions [1, p. 43]. Hence, assessing SoS performance is an ongoing activity that goes beyond assessment of specific changes in elements of the SoS.
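The ongoing benchmarking described above can be illustrated with a minimal sketch: record an end-to-end capability metric for each SoS increment and check the trend over time. The metric, increment names, and values are hypothetical assumptions for illustration.

```python
# Hypothetical end-to-end SoS metric observations per increment, e.g.,
# median time (minutes) from sensor detection to engagement decision.
benchmark = {
    "increment_1": 14.2,
    "increment_2": 11.8,
    "increment_3": 9.5,
}

def is_improving(history: dict, lower_is_better: bool = True) -> bool:
    """Check that successive increments show improvement in the metric,
    providing an ongoing benchmark of SoS capability over time."""
    values = list(history.values())
    pairs = zip(values, values[1:])
    return all((b < a) if lower_is_better else (b > a) for a, b in pairs)

print(is_improving(benchmark))  # each increment reduced decision time
```

In practice such data would come from a variety of venues (exercises, operations, simulations), but the principle is the same: the metric is defined independently of any particular implementation approach and tracked across increments.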

T&E objectives, particularly key performance parameters, are used as the basis for making a fielding decision. In addition, SoS metrics, as discussed above, provide an ongoing benchmark for SoS development which, when assessed over time, should show an improvement in meeting user capability objectives. Because SoS typically comprise a mix of fielded systems and new developments, there may not be a discrete SoS fielding decision; instead, the various systems are deployed as they are ready, at some point reaching the threshold that enables the new SoS capability.

In some circumstances, the SoS capability objectives can be effectively modeled in simulation environments that can be used to identify changes at the system levels. If the fidelity of the simulation is sufficient, it may provide validation of the system changes needed to enable SoS-level capability. In those cases, the fidelity of the simulation may also be able to provide for the SoS evaluation.

In cases where simulation is not practical, other analytical approaches may be used for T&E. Test conditions that validate the analysis must be carefully chosen to balance test preparation and logistics constraints against the need to demonstrate the objective capability under realistic operational conditions.

Best Practices and Lessons Learned

Approach SoS T&E as an evidence-based means of addressing risk. Full conventional T&E before fielding may be impractical for incremental changes to an SoS because the constituent systems may have asynchronous development paths. In addition, explicit test conditions at the SoS level may not be feasible, given the difficulty of bringing all constituent systems together to set up meaningful test conditions. Thus, an incremental, risk-based approach to identifying key T&E issues is recommended.

For each increment, a risk-based approach identifies areas critical to success and areas that could adversely impact user missions, followed by pre-deployment T&E. Risk is assessed using evidence from a range of sources, including live test. In some circumstances, the evidence can be based on activity at the SoS level; in others, on roll-ups of activity at the constituent-system level. The activity can include explicit verification testing, results of models and simulations, use of linked integration facilities, and results of system-level operational test and evaluation.

Finally, these risks must be factored into SoS and system development plans so that, if T&E results indicate a change will have a negative impact, the change can be discarded without jeopardizing system update deliveries to users. The results can also be used to provide end-user feedback in the form of "capabilities and limitations."
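A risk-based roll-up of evidence of the kind described above might be sketched as follows. The evidence sources, venue weights, and confidence values are hypothetical assumptions; a real assessment would be far richer and would be negotiated among the SoS and system stakeholders.

```python
# Evidence about a critical area for an SoS increment comes from several
# venues; confidence is in [0, 1], and each venue is weighted by how
# representative it is of operational conditions (assumed values).
SOURCE_WEIGHTS = {"live_test": 1.0, "integration_lab": 0.7, "simulation": 0.5}

def residual_risk(evidence: dict) -> float:
    """Weighted roll-up: high-confidence evidence from realistic venues
    drives residual risk down; missing or weak evidence leaves it high."""
    total = sum(SOURCE_WEIGHTS.values())
    covered = sum(SOURCE_WEIGHTS[src] * conf for src, conf in evidence.items())
    return round(1.0 - covered / total, 2)

# Hypothetical increment: strong lab and simulation results, partial live test.
print(residual_risk({"live_test": 0.5, "integration_lab": 0.9, "simulation": 1.0}))
```

The residual-risk number would then feed the fielding decision and the "capabilities and limitations" statement for the increment, rather than serving as a pass/fail gate.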

Encourage development of analytic methods to support planning and assessment. Analytical models of the SoS can serve as effective tools to assess system-level performance against SoS operational scenarios. They may also be used to validate the requirements allocations to systems and provide an analytical framework for SoS-level capability verification. Such models may be used to develop reasonable expectations for SoS performance. Relevant operational conditions should be developed with end-user input and guided by "design of experiments" principles, so as to explore a broad range of conditions.
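A minimal way to apply "design of experiments" thinking to operational conditions is to enumerate a full factorial of the factors end users care about and then sample it within test-logistics limits. The factors and levels below are hypothetical, chosen only to make the mechanics concrete.

```python
import itertools
import random

# Hypothetical operational factors and levels elicited from end users.
factors = {
    "network_load": ["nominal", "degraded"],
    "threat_density": ["low", "medium", "high"],
    "weather": ["clear", "adverse"],
}

# Full factorial: every combination of factor levels (2 * 3 * 2 = 12 runs).
full_factorial = [dict(zip(factors, combo))
                  for combo in itertools.product(*factors.values())]

# Logistics rarely allow all runs; draw a reproducible subset to execute.
random.seed(42)
test_runs = random.sample(full_factorial, k=6)
print(len(full_factorial), len(test_runs))  # prints "12 6"
```

Formal design-of-experiments methods (fractional factorials, orthogonal arrays) would select the subset more deliberately than random sampling; this sketch only shows the trade between coverage and test logistics.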

Address independent evaluation of networks that support multiple SoS. Based on the government vision of enabling distributed net-centric operations, the "network" has assumed a central role as a unique constituent of every SoS. Realistic assessment of SoS performance demands evaluation of both network performance and potential for degradation under changing operational conditions. Because government departments and agencies seek to develop a set of network capabilities for a wide range of applications, consideration should be given to developing an approach to network assessment independent of particular SoS applications as part of SoS planning and T&E.

Employ a range of venues to assess SoS performance over time. For SoS, evaluation criteria may be end user metrics that assess the results of loosely defined capabilities. While these may not be expressly timed to the development and fielding of system changes to address SoS capability objectives, these data can support periodic assessments of evolving capability and provide valuable insight to developers and users.

Assessment opportunities should be both planned and spontaneous. For spontaneous opportunities, T&E needs to be organized in a way that facilitates responding flexibly as they arise.

Establish a robust process for feedback once fielded. Once deployed, continuing evaluation of the fielded SoS can be used to identify operational problems and make improvements. This continuous evaluation can be facilitated through system instrumentation and data collection that provide feedback on constraints, warnings of incipient failures, and unique operational conditions.

By establishing and exercising robust feedback mechanisms among field organizations and their operations and the SoS SE and management teams, SoS T&E can provide a critical link to the ongoing operational needs of the SoS. Feedback mechanisms include technical and organizational dimensions. An example of the former is instrumenting systems for feedback post-fielding. An example of the latter is posting a member of the SoS SE and management team to the SoS operational organization.

References & Resources

  1. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD AT&L), August 2008, Systems Engineering Guide for Systems of Systems, Washington, DC.

Additional References & Resources

Dahmann, J., G. Rebovich, J. Lane, R. Lowry, and J. Palmer, 2010, "Systems of Systems Test and Evaluation Challenges," 5th IEEE International Conference on System of Systems Engineering.

The MITRE Institute, September 1, 2007, "MITRE Systems Engineering (SE) Competency Model, Version 1," Section 2.6.
