Establishing a Quality Assurance Program in the Systems Acquisition or Government Operational Organization


Definition: Quality Assurance (QA) is "a planned and systematic means for assuring management that the defined standards, practices, procedures, and methods of the process are applied [1]."

Keywords: continuous improvement, process improvement, quality, standards

MITRE SE Roles & Expectations: MITRE systems engineers (SEs) are expected to recommend how to establish a QA program in the systems acquisition or the government operational organization. They are expected to propose plans to resource, implement, and manage a QA program to enable a positive, preventive approach to managing the systems acquisition. They are expected to participate in integrated teams to create directives and plans that establish QA standards, processes, procedures, and tools [2].

Background

MITRE assists the government in preparing contract requirements for the acquisition of large systems from major information technology contractors. With few exceptions, these contracts must comply with mandatory provisions in the Federal Acquisition Regulation (FAR).

The definition of quality for government contracts is stated in the FAR Part 46.101: "Contract quality requirements means the technical requirements in the contract relating to the quality of the product or service and those contract clauses prescribing inspection, and other quality controls incumbent on the contractor, to assure that the product or service conforms to the contractual requirements." Thus, the government contract interpretation of quality (control) in most contracts for major systems is the manufacturing-based perspective: conformance to specifications.

MITRE's Quality Guidelines and Standards

MITRE's QA efforts should focus on the use of appropriate processes to assure that the right product is being built (customer-driven quality), that the product being built will meet its specified requirements (product-based quality), and that the product is suitable for its intended use (user-based quality). This aligns with the view of systems engineering quality in the "Systems Engineering Quality at MITRE" white paper, which states: "1) Degree to which results of SE [systems engineering] meet the higher level expectations for our FFRDCs [federally funded research and development centers]—resulting in usability and value for end recipients; 2) Degree to which results of SE meet expectations of our immediate customers—service and performance [3]." MITRE's QA efforts are particularly appropriate in the design and development phases of the product, especially for software and one- or few-of-a-kind systems, rather than concentrating on quality control (QC) in the production phase. See the Quality Assurance and Measurement topic of the SE Guide for additional perspectives on quality.

Best Practices and Lessons Learned

  • Use project and portfolio reviews. MITRE project reviews can provide project leaders with additional perspectives and assistance from elsewhere in MITRE when necessary; this is a form of leveraging the corporation. Portfolio reviews help maintain a focus on a sponsor's most important problems and provide an opportunity for cross-portfolio synergy among the projects in the portfolio.
  • Establish watchlists. Watchlists in various forms (e.g., major issues, significant risks, external influences) provide a vehicle to keep project leaders focused on issues that are likely to have a critical impact on their programs. They also keep MITRE and senior government managers aware of project issues that may require escalation. If maintained at an enterprise level, watchlists can keep individual programs aware of enterprise issues where the programs can make a difference.
  • Perform Engineering Risk Assessments (ERAs). ERAs are constructive engineering reviews that identify and resolve issues or risks that might preclude program success. In the Department of Defense, ERAs are performed on Acquisition Category II (ACAT II) and below programs. ERAs focus on solution appropriateness, SE progress health, and SE process health. In doing so, an ERA considers all aspects of systems engineering in acquisition, including engineering to establish sound technical baselines that support program planning and program cost estimation, technical resource planning, engineering management methods and tools, engineering performance metrics, engineering basis of estimate and earned value management, system design appropriateness, system design for operational effectiveness (SDOE), and other areas. The ERA methodology provides a tailorable framework for conducting ERAs that assists program managers and appropriate decision makers in preparing for milestone decisions and other reviews.
  • Perform independent assessments. Independent assessments can include Gold Teams, Blue Teams, Red Teams, Gray Beard Visits, or process assessments such as the Standard Capability Maturity Model Integration Appraisal Method for Process Improvement (SCAMPI). All of these involve external subject matter experts who provide an objective opinion on the health of a program or organization and its processes. An independent assessment can be used at any point in the program life cycle to provide insight into progress and risks, for example, to review a preliminary design or to evaluate the product as it enters integration and test. Independent assessments are typically proactive, intended to surface potential problems early enough to take action and avoid adverse impact to the program. One of the best opportunities for an independent assessment is during the management turnover of a program or organization: the incoming manager receives a documented assessment of the organization, including the mistakes and improvements made by prior management, along with a set of recommended improvements. See the MITRE FFRDC Independent Assessments topic of the Systems Engineering Guide for additional information.
  • Conduct peer reviews of deliverables. An external peer review of a formal deliverable ensures that the product makes sense and represents a MITRE position, rather than an individual's opinion, before it is provided to our sponsor. This is particularly important on small projects located in a sponsor's facility, and it keeps our staff objective in providing advice to our sponsors.
  • Perform after-action reviews. After participating in the development of a critical deliverable, briefing, or position for or with our sponsors, meet with the MITRE participants and discuss what was executed well and what could have been done better. This allows us to continuously improve how we serve the customer and to capture lessons learned for others engaged in similar activities for their sponsors.
  • Identify key requirements. Not all requirements are created equal, even though "they are all important" may be the first response when you ask the priority question. Whether it is the key performance parameters in an operational requirements document, the critical performance parameters in a system performance specification, or the evaluation factors for award in a request for proposal, identify the most important measures that will impact the final decision.
  • Use Technical Performance Measurement (TPM) only in conjunction with risky performance requirements. If a given performance requirement is within the state of the art and there is little doubt that the developer will be able to meet it, do not use TPM. Focus on the "risky" performance requirements where it is important to monitor progress in "burning down" the risk (a minimal TPM tracking sketch follows this list).
  • Identify program risks and problem areas, and then identify metrics that can help. Do not take a "boilerplate" approach when specifying metrics to monitor your program. Identify the significant programmatic and technical issues and risks on the program, then select metrics that provide insight into the handling or mitigation of the issues and risks.
  • Do not identify all metrics at contract award; specify them when you need them. As program issues and risks change over time, the metrics used to monitor their handling or mitigation should change as well. At the front end of a program, developer ramp-up is something to monitor, but it ceases to matter once the program is staffed. Defect density is important in the testing phase (see the second sketch after this list), but not particularly important at the front end of a program.
  • Do not require measurement data unless you have an analysis capability. Do not implement a set of metrics unless you have staff able to analyze the resulting data and inform program decisions. Software metrics and Earned Value Management (EVM) data are two examples of data that are frequently requested without program staff qualified to analyze them (the second sketch after this list illustrates the kind of analysis involved).
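
The following is a minimal sketch of TPM tracking for a single risky performance requirement. The parameter, milestones, planned profile, and threshold values are hypothetical and serve only to make the bookkeeping concrete: each current estimate is compared against the planned burn-down profile and the not-to-exceed threshold.

```python
# Minimal, illustrative TPM tracking sketch (all values hypothetical).
# A TPM compares the current estimate of a risky performance parameter
# against a planned profile and a not-to-exceed threshold, so the program
# can see whether the risk is actually "burning down" over time.

THRESHOLD = 12000.0  # assumed not-to-exceed value (e.g., empty weight in lb)

# (milestone, planned value, current measured/estimated value)
tpm_history = [
    ("SRR", 11900.0, 11950.0),
    ("PDR", 11800.0, 11900.0),
    ("CDR", 11700.0, 11850.0),
]

for milestone, planned, actual in tpm_history:
    margin = THRESHOLD - actual      # remaining margin against the threshold
    variance = actual - planned      # deviation from the planned profile
    status = "BREACH" if actual > THRESHOLD else ("WATCH" if actual > planned else "OK")
    print(f"{milestone}: actual={actual:.0f}, planned={planned:.0f}, "
          f"margin={margin:.0f}, variance={variance:+.0f} -> {status}")
```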

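The second sketch, referenced from the metrics bullets above, shows the kind of analysis capability a program should have in place before requesting EVM data or software metrics. The formulas (schedule and cost variance, SPI, CPI, defect density) are standard, but the input values and function names are illustrative assumptions, not a prescribed toolset.

```python
# Minimal sketch of analyzing requested measurement data (illustrative values).

def evm_indices(bcws, bcwp, acwp):
    """Compute standard EVM variances and indices from
    BCWS (planned value), BCWP (earned value), and ACWP (actual cost)."""
    return {
        "SV": bcwp - bcws,   # schedule variance
        "CV": bcwp - acwp,   # cost variance
        "SPI": bcwp / bcws,  # schedule performance index (<1.0 means behind schedule)
        "CPI": bcwp / acwp,  # cost performance index (<1.0 means overrunning cost)
    }

def defect_density(defects_found, ksloc):
    """Defects per thousand source lines of code, a typical test-phase metric."""
    return defects_found / ksloc

print(evm_indices(bcws=500.0, bcwp=450.0, acwp=520.0))
print(f"Defect density: {defect_density(defects_found=96, ksloc=120.0):.2f} defects/KSLOC")
```
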
References & Resources

  1. Software Engineering Institute, Carnegie Mellon, "CMMI for Development, Version 1.2," accessed February 11, 2010.
  2. The MITRE Institute, September 1, 2007, "MITRE Systems Engineering (SE) Competency Model," Version 1, Section 3.7, p. 45.
  3. Metzger, Dr. L., May 2009, "Systems Engineering Quality at MITRE."

Additional References & Resources

Acquisition Community Connection, August 16, 2005, Acting USD(AT&L) Policy Memo: Performance Based Logistics: Purchasing Using Performance Based Criteria.

Acquisition Community Connection, February 7, 2006, Air Force Smart Operations for the 21st Century CONOPS and Implementation Plan, Version 4.

CMMI Product Team, February 2009, "CMMI for Services, Version 1.2 (CMMI-SVC, V1.2)," Software Engineering Institute, accessed February 11, 2010.

Department of Defense Directive, May 15, 2008, "DoD-Wide Continuous Process Improvement (CPI)/Lean Six Sigma (LSS) Program," DoD 5010.42.

Evans, J. R., and W. M. Lindsay, 2008, Managing for Quality and Performance Excellence, 7th Edition, Thomson South-Western.

International Organization for Standardization, 2000, ISO 9000:2000, Quality Management Systems, Fundamentals and Vocabulary, Second Edition.

International Organization for Standardization, 2000, ISO 9001:2000, Quality Management Systems, Third Edition.

Joint Chiefs of Staff, May 1, 2007, Operation of the Joint Capabilities Integration and Development System, CJCSM 3170.01C.

Electronic Industries Alliance, June 1998, "Earned Value Management Systems," ANSI/EIA-748-A Standard.

OSD Earned Value Management, accessed February 15, 2010.

OSD Technical Performance Measurement, accessed February 15, 2010.

Practical Software and System Measurement, accessed February 15, 2010.

Software Engineering Institute, Carnegie Mellon, "CMMI for Acquisition, Version 1.2," accessed February 11, 2010.

The American Society for Quality (ASQ), accessed February 11, 2010.
