Special Considerations for Conditions of Uncertainty: Prototyping and Experimentation
Definition: Prototyping and experimentation are two closely related methods that can help systems engineers (SEs) drive requirements uncertainty out of the requirements process.
Keywords: CONOPS, experimentation, exploration, prototyping, requirements, uncertainty
MITRE SE Roles and Expectations: MITRE systems engineers are expected to identify uncertainty in requirements and actively take steps to manage and mitigate it, including considering uncertainty in associated areas such as operational concepts and others (see MITRE SEG Concept Development topic). MITRE SEs are expected to understand the range and styles of prototyping and experimentation, and the potential impact of each when applied during requirements engineering. SEs are expected to understand the value in having MITRE execute a prototyping activity as opposed to (or in conjunction with) a contractor. They are also expected to be aware of experimental venues, events, and laboratories that exist to support these activities.
Successfully developing systems or capabilities to meet customers' needs requires the ability to manage uncertainty when defining requirements. For example, how will analytical assessments of performance or functional requirements match the reality of their implementation when the system or capability is fielded? What unintended technical, operational, or performance issues are likely to occur? Will technology essential to meet a requirement perform as expected when using realistic user data, or when injected in operational environments and contexts? Are the user concepts of operations really supportable given new technical capabilities? Prototyping and experimentation are two methods that can help address these issues.
Prototyping is a practice in which an early sample or model of a system, capability, or process is built to answer specific questions about, give insight into, or reduce uncertainty or risk in many diverse areas, including requirements. This includes exploring alternative concepts and technology maturity assessments as well as requirements discovery or refinement. It is a part of the SE's toolkit of techniques for managing requirements uncertainty and complexity and mitigating their effects.
The phase of the systems engineering life cycle and the nature of the problem the prototype is intended to address influence the use and type of prototyping. Prototyping may be identified immediately after a decision to pursue a materiel solution to meet an operational need. In this situation, prototypes are used to examine alternative concepts as part of the analysis of alternatives, exploring the requirements space to determine whether other approaches can better meet the requirements. Prototyping to explore and evaluate the feasibility of high-level conceptual designs may be performed early in technology development as part of government activities to assess and increase technology maturity, discover or refine requirements, or develop a preliminary design. A prototype may even be developed into a reference implementation—a well-engineered example of how to implement a capability, often based upon a particular standard or architecture—and provided to a commercial contractor for production as a way of clarifying requirements and an implementation approach.
For more information on prototyping, see the Guide article on Competitive Prototyping.
Experimentation adds a component of scientific inquiry to the above, often supported by realistic mission/domain context. Performing experiments in a realistic context allows the evaluator to test hypotheses about concepts of operations (CONOPS), technology feasibility, integration with other systems, services, and data, and other concepts that support requirements refinement. Experimentation environments, or laboratories, allow acquisition personnel, real-world users, operators, and technologists to collaboratively evaluate concepts and prototypes using combinations of government, open source, and commercial-off-the-shelf products. In these environments, stakeholders can evolve concepts and approaches—in realistic mission contexts—and quickly find out what works and what doesn't, ultimately reducing risks by applying what they've learned to the acquisition process.
It is useful to consider three broad stages of experimentation, which form a pipeline:
- Lightweight Exploration: Driven by operator needs, this stage is distinctive for its quick brainstorming and rapid assembly of capabilities with light investment requirements. It allows for a "first look" insight into new concepts and newly integrated capabilities that can support requirements generation. MITRE's ACME (Agile Capability Mashup Environment) Lab is an example of a lightweight experimentation venue.
- Low/Medium-Fidelity Experimentation: This stage involves significant engagement with users, operators, and stakeholders. It is typified by human-in-the-loop simulations—and possibly real-world capabilities and data—conducted under an experimental design that attempts to control independent variables. MITRE's Collaborative Experimentation Environment (CEE) and iLab-based "Warfighter Workshops" are two examples of low/medium-fidelity experimentation venues. These venues allow concept exploration and evaluation of alternatives that can support requirements clarification.
- High-Fidelity Experimentation: These experiments are planned in conjunction with sponsors to refine existing concepts of operation that can support requirements refinement. They often feature highly realistic models and simulations of entities, timing, sensors, and communication networks along with some real-world applications. MITRE's Naval C4ISR Experimentation Lab (NCEL) is an example of a high-fidelity experimentation venue.
Prototype solutions from any of the above experimental stages can be used to support the requirements management process. Generally, the products from the lightweight end of the venue spectrum support the early stages of requirements management (CONOPS, concept development, etc.), while those at the high-fidelity end tend to support refinement of relatively mature requirements. These solutions can also be transitioned, given appropriate circumstances, to operators in the field, to industry, or to other parties for evaluation or use.
Best Practices and Lessons Learned
- Be opportunistic. The acquisition process is structured, linear, and can, in some circumstances, seem to stifle innovation. Embrace requirements uncertainty as an opportunity to inject innovation and think freely.
- Act early in the acquisition life cycle. Prototyping early in the acquisition life cycle reduces requirements risk and may be of particular interest to program managers seeking to avoid late-life-cycle change (e.g., requirements creep), especially when requirements are uncertain.
- Seek early/frequent collaboration among the three critical stakeholders. Purely technical prototyping risks operational irrelevance. It is vital to involve technologists, operators, and acquirers/integrators early and often in prototyping and experimentation dialogues. Operator involvement is particularly critical, especially in solidifying requirements, yet it is often deferred or neglected. Three of the four recommendations from the 2007 Department of Defense Report to Congress on Technology Transition address this collaboration:
- ...early and frequent collaboration is required among the developer, acquirer, and user. This early planning can then serve to mitigate the chasm between Technology Readiness Level (TRL) 5 and TRL 7 by identifying technical issues, resource requirements/sources, avoiding unintended consequences, and ultimately gaining the most yield for the science and technology (S&T) investment.
- ...if the program manager were to conduct early and frequent communication with the developer about user requirements and companion acquisition plans, much of the development risk could be addressed earlier in the process.
- ...the pace at which new technologies are discovered/innovated/developed/deployed in the private sector is staggering, and at odds with the linear, deliberate nature of some government acquisitions...Finding ways to include these innovators in our process could serve both the government and America's economic competitiveness in the world market.
- Use realistic data. Prototypes using unrealistic data often result in failure to address the requirements uncertainty or complexity the prototype was designed to examine. MITRE systems engineers should take every opportunity to capture real data from their work and help build a repository of this data that can be used across MITRE's activities.
- Use loose couplers and open standards to isolate requirements changes. Providing loosely coupled data integration points across components in the prototype allows changes in one area to explore some aspects of the requirements space while holding others constant. Use open standards, including RESTful services, whenever possible. (See SEG Enterprise Engineering articles in this area on design patterns, composable capabilities on demand, and open source and standards.)
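One way a prototype can realize such a loose coupler is a small interface contract between components. The following sketch is purely illustrative (the TrackSource interface and both implementations are invented names, not part of any MITRE system): consumers are written once against the contract, so swapping a simulated data source for a live one exercises a new part of the requirements space without touching the consumer.

```python
import json
from abc import ABC, abstractmethod

class TrackSource(ABC):
    """Loose coupler: components depend on this contract, not on each other."""
    @abstractmethod
    def get_tracks(self) -> list:
        ...

class SimulatedTrackSource(TrackSource):
    """Low-fidelity stand-in used while requirements are still uncertain."""
    def get_tracks(self):
        return [{"id": 1, "lat": 38.9, "lon": -77.2}]

class LiveFeedTrackSource(TrackSource):
    """Later swap-in backed by real data (here, JSON); callers are unchanged."""
    def __init__(self, raw_json: str):
        self._raw = raw_json

    def get_tracks(self):
        return json.loads(self._raw)

def render_picture(source: TrackSource) -> int:
    """Consumer written once against the interface; returns the track count."""
    return len(source.get_tracks())

# Swapping sources varies one part of the prototype while holding the
# consumer constant, isolating the effect of the change.
print(render_picture(SimulatedTrackSource()))
print(render_picture(LiveFeedTrackSource('[{"id": 2}, {"id": 3}]')))
```

Using an open wire format (JSON over a RESTful service, say) at the same seam would let other organizations' components plug in behind the same contract.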
- Develop scalable prototypes. Consider the intended operational or systematic use of the prototype being developed. Does the technology scale? Does the operator workload scale? The performance? Do the results collected from the prototype provide the necessary insight to ensure that modifications to requirements are appropriate for the actual full-scale system?
- Prefer rapid increments. Execute quick experimental iterations or spirals of prototype capability development with operator involvement to ensure adequate feedback flow and requirements refinement for future spirals.
- Look beyond MITRE for resources. MITRE resources on any given program are limited. If the MITRE project resources cannot support a prototyping or experimentation activity by MITRE staff, look to other mechanisms, such as government teaming, the SBIR (Small Business Innovation Research) process, willing industry participants, and others.
- Look beyond MITRE for venues. Consider holding an experiment on-site at a contractor facility, in an operational setting, at a sponsor training facility, or at other locations to reduce real or perceived barriers to participation and to promote awareness of particular stakeholder contexts and points of view.
References & Resources
- Deputy Under Secretary of Defense, August 2007, DoD Technology Transition Report to Congress.