Data-Driven Contractor Evaluations and Milestone Reviews


Definition: Data-driven contractor evaluations and milestone reviews provide an objective assessment of contractor performance at technical milestone reviews. The technical reviews and the content they must address are typically prescribed by government agency or department mandates that are available to MITRE staff and other project members prior to the actual milestone.

Keywords: empirical data, independent technical assessments, metrics, milestone reviews, performance assessments, technical reviews

MITRE SE Roles & Expectations: MITRE systems engineers (SEs) are expected to provide technical thought leadership and assessment throughout an entire government program life cycle. While ongoing insight is needed to quickly grasp and respond to program risks and opportunities, its importance peaks at event-driven milestones when key government decisions are made. At those times, MITRE systems engineers are expected to lead and participate in teams reviewing the contractor proposed technical approach. MITRE systems engineers analyze design review content against milestone entry and exit criteria to ensure that the contractor delivers quality products on time and within budget. They are expected to assess the contractor's technical and programmatic approaches, work packages, prototypes, and deliverables before and during reviews to identify issues and ensure that decision makers are provided with data-driven recommendations during technical and program milestone reviews [1].

Introduction

MITRE systems engineers can assume many roles at technical milestone reviews. Depending on the size and complexity of the program, there may be many MITRE staff supporting the same technical review or, on some programs, only one or two. Staff typically perform as Subject Matter Experts (SMEs) for specific technical areas (e.g., adequacy of requirements capture, maturity of the architecture) to be reviewed; they provide informal and formal assessments to the government sponsor. It is also not uncommon for MITRE to develop an overall assessment of the entire technical review. This assessment may include aggregating the input from MITRE staff and other program office contractor support. Whatever the scope, focus, or size of the MITRE review effort, the overall assessment must be based largely on empirical data, metrics, the trends they indicate, and demonstrated system performance. During reviews, MITRE staff need to be prepared, inquisitive, confident, technically competent, thorough, current with program progress, tactful in dealing with the contractor, and convincing in their overall assessments. Finally, MITRE's assessment of and recommendation on whether the technical review "passed" or "failed" can have a significant impact on whether the program meets its schedule or experiences long and costly delays.

Government Interest and Use

The government has myriad guidelines and mandates that define how systems should be acquired, developed, delivered, and sustained. To track the progress of a system development, the government has also defined a set of technical reviews to be conducted at various phases of development. Conducting these reviews successfully requires insight into contractor progress. Although it is a government responsibility to formally sign off on the final assessment of a technical review, MITRE is relied on heavily to provide convincing and credible technical evidence to support that assessment.

Independent, fact-based engineering analysis is essential to government program managers (PMs) in making their assessment of whether a program meets its technical review criteria.

Among the most critical times for MITRE to provide unbiased and technically substantiated assessments is when supporting technical milestone reviews. We need to work with the contractor to ensure that the government PM is presented with empirical data and metrics that characterize system progress and performance as accurately as possible. This increases the likelihood that the government PM will make the right decision, because it will be grounded in objective data that supports the overall assessment.

It is important to ensure that technical recommendations are not influenced by the natural, collective desire of program stakeholders for the program to be viewed as a success and to move forward. Because of program pressures to succeed, technical assessments that indicate program problems may not be immediately embraced. In rare cases, it may be necessary to provide a formal, independent message of record to the PM documenting the technical assessment, the rationale for the perceived risk to the program (i.e., the likelihood and impact of not meeting technical objectives, schedule, or cost), what may happen if the situation is not addressed, and recommended steps to mitigate the risk. The PM should be made personally aware of such a message and its contents before it is issued. While such a communication may not be welcomed in the short term, in the long run it maintains the high standard that our customers expect of us.

Best Practices and Lessons Learned

  • Ensure consensus-based entry/exit criteria. The name, purpose, and general requirements of each technical review in standard acquisition processes are usually well defined in department or agency regulations [2]. What is often neglected, but is essential for a coordinated and successful technical review, is ensuring that the government team and contractor have documented formal entry and exit criteria and have reached consensus on their content. If these criteria do not exist, MITRE staff should ensure that they are created, taking responsibility for defining them if required. The entry/exit criteria should be tailored to the needs of each program. This is an area where MITRE can contribute by emphasizing criteria (e.g., data, prototypes, metrics) that can be objectively assessed. Sample entry/exit criteria for many reviews are contained in the Mission Planning Technical Reviews [3].
  • Prepare, prepare, prepare. The backgrounds, skill sets, and experiences of the systems engineering team supporting the government at a technical review can vary widely. Depending on our role in the supported program, MITRE can and should instigate and lead government preparation meetings to ensure that entry/exit criteria are known, responsibilities of each SME are defined ahead of time, pre-review artifact/contract data requirements lists exist, and government leadership attending has been "prepped" on the contractor's strengths and weaknesses and on where they should weigh in. It is also beneficial to conduct technical review "dry runs" with the contractor prior to the review. At the same time, be sensitive to the demands that dry runs place on the contractor. Structure them to be less formal and intrusive while still achieving the insight they provide. The benefits of these dry runs are:
    • They require the contractor to prepare for the review earlier and reduce the possibility of them creating "just-in-time" charts for the major review that may have disappointing content from the government perspective. If the content falls short of expectations, there is time for them to correct it.
    • They allow more people to attend a version of the review and have their questions answered, since attendance is spread across two smaller meetings. While key PM and technical team members will attend both the dry run and the final review, others are likely to attend only one.
    • They provide a graceful way to reschedule the review if the contractor is not ready by the dry run. This is especially important for programs that are under substantial scrutiny.
  • Divide and conquer. No one can know all aspects of a contractor's effort, regardless of how able the staff is, how long they have been on the program, or how technically competent they are. A program's systems engineering staff may also be weighted toward a particular discipline (e.g., software engineers, radar engineers, network specialists). Program technical reviews are all-encompassing. They must address user requirements, risk identification and mitigation, performance, architecture, security, testing, integration, and more. If staff resources are limited, it is advisable to assign SMEs who are strong in one discipline (e.g., software engineering) the secondary responsibility of another discipline (e.g., risk identification) at the technical review. This ensures that all disciplines are covered at some level during the review and provides the opportunity to train staff in secondary systems engineering disciplines, broadening their skill sets and helping the government in the long run.
  • Gauge "ground truth" for yourself. Be aware of the true program progress well ahead of the review. Know the "real" workers responsible for day-to-day development, who may be different from those presenting progress reports at reviews. This will allow you to more accurately gauge progress. This requires advanced preparation, including meetings with programmers, attending contractor in-house peer reviews, reviewing development metrics, witnessing early prototype results, observing in-house testing, and spending time in the contractor's facility to know fact from fiction.
  • Assess when fresh. Recognize that technical reviews can be long, tedious, information-packed, and physically and mentally draining events. As difficult as it may be, attempt to conduct a government team caucus at the end of each day to review what was accomplished and to gather preliminary team feedback. These meetings do not have to be long; a half hour can be sufficient. Gathering the impressions of team members is valuable because it can quickly confirm the review's formal presentations or uncover differences. Use the entry/exit criteria to identify what was "satisfactory" and what was not. Finally, when it is time to aggregate all input for the entire review, the daily summaries streamline the assembly of the formal assessment.
  • Use mostly data, part "gut feeling." While it is desirable for technical reviews to be civil, "just the facts" affairs, there may be times when exchanges become contentious and relationships between government and contractor representatives become strained. Personalities can get involved and accusations may be made that are driven more by defensive instincts than by impartial assessment of data. This is the time to make maximum use of objective data to assess contractor progress and solution development maturity, while refraining from over-reliance on anecdotal information and subjective assertions. Metrics and the trends they illuminate should be used as the basis for questions during the review and assessments after the review. Metrics that demonstrate software size, progress, and quality should be assessed. (For software-intensive systems, it may be advisable to compare productivity/defect rates to other industries [4], other military systems [5], or CMMI maturity level standards [6].) Preliminary data that indicate system performance, reliability, and user satisfaction should be examined and challenged if necessary. Staffing metrics can be used to corroborate the sufficiency of assigned resources. Testing metrics should be reviewed as well. Don't ignore "gut feelings," but use them selectively. When the data says one thing and your intuition says another, intensify your efforts to obtain additional fact-based evidence to reconcile the disparity. (A minimal sketch of this kind of criteria-versus-data check appears after this list.)
  • Search for independence. Regardless of how knowledgeable organic project staff are about all phases of your acquisition and the technologies responsible for the most prominent program risks, it is advisable to call on independent SMEs for selected technical reviews. In fact, Department of Defense (DoD) guidance for the development of systems engineering plans, as well as the Defense Acquisition Guidebook (DAG), calls out the need for independent SMEs. This is excellent advice. For large, critical, and high-visibility programs undergoing oversight by their respective department or agency acquisition authority, conducting an Independent Technical Assessment (ITA) to assess the maturity of the program at a major technical review (e.g., PDR, CDR) can help develop objective evidence to inform the final assessment. It may also be advisable to include an SME from a large, respected technical organization on the ITA to provide advice in their areas of special expertise (e.g., the Carnegie Mellon Software Engineering Institute [SEI] on Capability Maturity Model issues). It may be advantageous to use a qualified, senior-level MITRE technical SME to lead the ITA, as a way of bringing the corporation to bear. It is also advisable to include a senior manager from the prime contractor being reviewed, as long as this person is NOT in the direct management chain of the program leadership. This can open many doors with the prime contractor that may have seemed closed in the past. Recognize that bringing on independent SMEs for a review has a distinct cost (e.g., organic staff will need to bring SME members up to speed). However, judiciously done, it can be worthwhile.
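
The emphasis above on objectively assessable criteria and metric trends can be made concrete with a small script. The following sketch, written in Python, is illustrative only: the criteria names, thresholds, and sample metric values are hypothetical and are not drawn from any particular program. In practice, they would come from the program's documented entry/exit criteria and the contractor's metrics deliverables.

    """Illustrative sketch: scoring contractor metrics against documented exit criteria.

    All criteria, thresholds, and sample values below are hypothetical; real values
    would come from the program's agreed entry/exit criteria and the contractor's
    periodic metrics deliverables.
    """

    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class ExitCriterion:
        """A single, objectively assessable exit criterion."""
        name: str
        description: str
        passes: Callable[[List[float]], bool]   # evaluates a metric history
        history: List[float]                    # oldest -> newest observations

        def evaluate(self) -> bool:
            return self.passes(self.history)


    def below_threshold(limit: float) -> Callable[[List[float]], bool]:
        """Latest observation must be at or below a fixed limit."""
        return lambda history: history[-1] <= limit


    def at_least_and_improving(floor: float) -> Callable[[List[float]], bool]:
        """Latest observation meets the floor and the trend is not declining."""
        return lambda history: history[-1] >= floor and history[-1] >= history[0]


    # Hypothetical criteria with invented sample data (rates and percentages).
    criteria = [
        ExitCriterion(
            name="Defect density",
            description="Open major defects per KSLOC at or below 0.5",
            passes=below_threshold(0.5),
            history=[1.2, 0.9, 0.7, 0.6],
        ),
        ExitCriterion(
            name="Requirements verified",
            description="At least 90% of requirements verified, trend improving",
            passes=at_least_and_improving(90.0),
            history=[70.0, 78.0, 85.0, 88.0],
        ),
        ExitCriterion(
            name="Key positions staffed",
            description="At least 95% of planned key positions filled",
            passes=at_least_and_improving(95.0),
            history=[92.0, 96.0, 97.0, 97.0],
        ),
    ]

    if __name__ == "__main__":
        shortfalls = []
        for c in criteria:
            status = "SATISFIED" if c.evaluate() else "NOT SATISFIED"
            print(f"{c.name:22s} {status:14s} latest={c.history[-1]:6.1f}  ({c.description})")
            if status == "NOT SATISFIED":
                shortfalls.append(c.name)

        # The roll-up is advisory: it flags criteria needing discussion, not a verdict.
        if shortfalls:
            print("\nCriteria requiring attention:", ", ".join(shortfalls))
        else:
            print("\nAll sampled exit criteria satisfied on current data.")

In practice, a spreadsheet or the program's existing metrics tooling serves the same purpose. What matters is the discipline of tying each criterion to an objectively assessable measure and its trend, so that a "satisfied/not satisfied" call can be traced to data rather than to assertion.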

References & Resources

  1. MITRE Systems Engineering (SE) Competency Model, Version 1, September 1, 2007, p. 38.
  2. Defense Acquisition Guidebook, Chapter 4.
  3. 951st Electronic Systems Group, April 2007, "Mission Planning Technical Reviews," accessed January 29, 2010.
  4. Jones, C., April 2008, "Applied Software Measurement: Global Analysis of Productivity and Quality," Third Edition, McGraw-Hill Osborne Media.
  5. Reifer, J., July 2004, "Industry Software Cost, Quality and Productivity Benchmarks," The DoD Software Tech News, Vol. 7, Number 2, accessed January 29, 2010.
  6. Croxford, M. and R. Chapman, May 2005, "Correctness by Construction: A Manifesto for High-Integrity Software," Crosstalk: The Journal of Defense Software Engineering.

Additional References & Resources

Defense Acquisition Guidebook, Chapter 4, "Systems Engineering" (System Engineering activities to support technical reviews), accessed January 29, 2010.

Neugent, B., August 2006, "How to Do Independent Program Assessments," The MITRE Corporation.

Jones, Capers, May 2000, "Software Assessments, Benchmarks, and Best Practices," Addison-Wesley Professional.

The MITRE Corporation, "Activity 12: Monitor, Manage, & Report on Project Execution," MITRE Project Leadership Handbook.

The MITRE Institute, TSE501, "Leading an Independent Review Team," John Kennedy and Linda Rosa.
