How to Develop a Measurement Capability
Definition: A performance measure is an indicator of progress toward achieving a goal; it is a "you are here" on the map of progress. The Government Accountability Office defines performance measurement as follows:
Performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly progress toward preestablished goals. It is typically conducted by program or agency management. Performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). A "program" may be any activity, project, function, or policy that has an identifiable purpose or set of objectives.
Keywords: evaluation, Government Performance and Results Act, logic model, measurement, outcome measures, outcomes, performance management, performance measurement, performance reference model, strategic planning
MITRE SE Roles & Expectations: MITRE systems engineers (SEs) are expected to understand the general principles and best practices of performance measurement methods and systems. They are expected to assist sponsors in developing a measurement capability in the systems acquisition and/or the operational organization. They assist in collecting and using performance measures to assess progress toward achieving strategic goals and objectives and to inform decisions about resource allocation.
Background
Congress has required performance measures of all federal agencies since 1999. The legislation containing those requirements is the Government Performance and Results Act (GPRA), passed in 1993. The only federal legislation that requires strategic planning and performance measurement, GPRA requires each agency to develop a five-year strategic plan (updated at least every three years), an annual performance plan, and an annual performance report. In specifying what must be included in those documents, GPRA requires that a strategic plan show the link between strategic goals and performance goals and that it contain an evaluation plan that includes performance measures.
GPRA designates the Office of Management and Budget (OMB) as the agency responsible for its execution. OMB's process for evaluating agencies is the Program Assessment Rating Tool (PART), two of whose four sections examine performance measures. Although PART is likely to change somewhat, the current Administration has announced that the fundamentals of the process will remain unchanged. Agencies must report performance, showing results (the "R" in GPRA). The Administration is also increasing its emphasis on evaluation, which is a way to make sure that what matters is measured, that what is measured is really what is intended to be measured, and that the results reported are credible.
Congress and the Administration are placing increased emphasis on performance and results for a simple reason: improving performance is the best way to achieve success when money is tight. Unless performance improves, the remaining options are highly unpopular: (a) raise taxes, (b) cut programs, or (c) increase debt. MITRE can expect to see more requests for assistance with performance.
The Single Most Common Problem (and Its Solution)
At MITRE we are often asked to develop performance measures for a government program or other initiative. The most common problem with program performance cited in research, and the one we see most often at MITRE, is that the program's goals/objectives have not been identified. It is impossible to develop measures of progress if we do not know where we are trying to go.
The first step in developing measures of progress is to identify the desired end point or goal. One of the most useful tools for identifying goals, and for developing performance measures, is the logic model. A logic model is a map, a one-page bridge between planning and performance.
The logic model shown below in Figure 1 should be read from left to right.
- The problem that the program was created to solve is identified on the left.
- The agency's strategic priorities—its goals/objectives—are next and should directly relate to the problem.
- The next three columns are basic input-process-output. Inputs are people, funding, and other resources. Processes (or activities) are what the program does with those inputs. Outputs are the results of those processes or activities. Output measures answer the question: "How do you know they really did that?" Outputs are usually expressed in numbers of units produced or units of service provided.
Figure 1. Defining Performance Measures with Logic Models
- Outcomes are all about impact. They are answers to the question: "So what?" What difference did your product or service make? An initial outcome is softer, usually near-term, and might be measured by before/after tests of understanding if a training service were the output. Customer satisfaction is a common short-term outcome measure. An intermediate outcome might include changes in behavior, and it might be measured by finding out how many of those who received training are actually using their new skills. (Note: Often, short-term and intermediate outcomes are combined as intermediate outcomes.) Long-term outcomes are the conditions the program/agency is trying to change and should be a mirror image of the problem on the left of the logic model. Thus measures of long-term outcomes can be relatively easy to identify. A program established to address the problem of homelessness among veterans, for example, would have an outcome measure that looks at the number and percent of veterans who are homeless. (Defining "homeless" may be a separate issue to be addressed in a later implementation of data collection and reporting.) A sketch after this list shows how such a model might be captured for this example.
- Environmental factors can influence all stages of a program and need to be identified in agencies' strategic plans.
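To make the columns of the logic model concrete, the following is a minimal sketch of how the veterans-homelessness example above might be captured as a simple data structure. It is illustrative only: the class, field names, program details, and measures are assumptions invented for this example, not drawn from any actual agency plan.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One pass through Figure 1: problem, priorities, inputs, activities, outputs, outcomes."""
    problem: str
    strategic_priorities: list
    inputs: list
    activities: list
    outputs: dict              # output -> output measure ("How do you know they really did that?")
    initial_outcomes: dict     # near-term outcome -> measure
    intermediate_outcomes: dict
    long_term_outcomes: dict   # should mirror the problem on the left
    environmental_factors: list = field(default_factory=list)

# Hypothetical example based on the veterans-homelessness program mentioned above.
veterans_program = LogicModel(
    problem="Homelessness among veterans",
    strategic_priorities=["Reduce the number of homeless veterans"],
    inputs=["Case managers", "Housing voucher funding", "Partner shelters"],
    activities=["Outreach", "Case management", "Housing placement"],
    outputs={"Veterans placed in housing": "Number of veterans placed per quarter"},
    initial_outcomes={"Veterans remain housed": "Percent still housed at 90 days"},
    intermediate_outcomes={"Veterans gain stable income": "Percent employed or receiving benefits at one year"},
    long_term_outcomes={
        # Mirror image of the problem on the left of the model.
        "Reduced homelessness among veterans": "Number and percent of veterans who are homeless"
    },
    environmental_factors=["Local housing market", "Overall economy"],
)

# Walking the model left to right makes the traceability from activities to impact explicit.
for stage in ("outputs", "initial_outcomes", "intermediate_outcomes", "long_term_outcomes"):
    for item, measure in getattr(veterans_program, stage).items():
        print(f"{stage:22} {item}  --  measured by: {measure}")
```

Even in this toy form, the structure shows why the long-term outcome measure falls out of the problem statement almost automatically, while the output and interim outcome measures require deliberate choices.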
The extreme left and extreme right of the model are the easiest to define. The hard part is developing the measures in between: output measures (the easier of the two) and especially outcome measures. How would you know you are making progress toward achieving your long-term goal before you get to that goal? What would tell you that you are on the right or wrong track? How would you know whether you need to make course corrections to get to the destination you want? In developing outcome measures, keep asking, like a four-year-old child, "Why? ... Why? ... Why?"
The further from outputs you measure, the more likely it is that conditions outside the agency's or program's control are affecting the results observed. Factors such as the economy or the weather can affect long-term outcomes. That is where third-party evaluation can help: analyzing the performance data, along with other quantitative and qualitative information, to assess the impact of the agency or program on the outcomes.
The benefits of using a logic model are numerous:
- It is the strategic plan on a page. The measures can be derived directly from a logic model.
- A logic model can be a highly effective tool for communicating with stakeholders and for making sure that the activities, outputs, and outcomes accurately reflect the program's mission and business. Program people seem to "get it," and they help refine the model very quickly.
- It makes the connection between inputs, activities, outputs, and outcomes transparent and traceable.
- Most important, it shows in a nutshell where you want to go.
Related Problems and Pitfalls
- Clients tend to focus on outputs, not outcomes. Output measures are much easier, and they are under the agency's control. People know what they do, and they are used to measuring it: "I produced 2,500 widgets last year" or "On average, we provided a two-second turnaround time." They find it harder to answer the question: "So what?" They are not used to looking at the outcomes, or impact, of what they do. We need to keep asking "So what?" or "Why?" and move toward what would show impact or progress toward solving the problem the agency, program, or project was created to address.
- Goals and objectives are often lofty, and not really measurable. "The goal is to conduct the best census ever." How do you measure that? Make the goals concrete enough that we can know whether they have been achieved and whether we are making progress toward them.
- There are tons of reports with measures that are ignored because no one knows how to use them; there is no plan to actually use the measures to make decisions about resource allocation. This is where agencies need to move from performance measurement to performance management: using the performance data, together with other credible evidence (evaluations, analyses of agency culture, new directives from Congress or higher levels of the Administration, etc.), to make resource allocation decisions.
- The client wants to use measures that they already produce, regardless of whether those are actually useful, meaningful, or important. This is the argument that "we already report performance data and have been doing it for years." These are probably outputs, not outcomes, and even so, they need to be reviewed in light of the strategic goals/objectives to determine whether they show progress toward achieving end outcomes.
- They want to identify a budget as an output or outcome. A budget is always an input. Just don't let the conversation go there.
Best Practices and Lessons Learned
- You need clear goals/objectives to even begin developing performance measures. Without clear goals, you can only measure activities and outputs. You can show, for example, how many steps travelers have taken along a path, how much food was consumed, and how long they have been traveling. But you cannot know whether they are any nearer their destination unless you know the destination. They might be walking in circles.
- Performance measures are derived from strategic plans. If the agency does not have a plan, it needs to develop one; ample guidance and many examples are available on developing a plan.
- Complete a logic model for the whole program. You can develop outcomes or measures as you go or wait until the end, but the measures help keep the goals/objectives and outcomes real.
- To the maximum extent possible, ground the logic model in bedrock. Bedrock includes the following, in the priority listed: Legislation, Congressional committee reports, executive orders, regulations, agency policies, and agency guidance. Legislation is gold. The Constitution is platinum (e.g., the requirement for a decennial census).
- Long-term outcomes, or impacts, are relatively straightforward to identify. They should reflect the problem that the program, agency, or project was created to solve; that is what you are trying to measure progress toward. If your program was created to address the problem of homelessness, the long-term outcome is a reduction in homelessness, regardless of how you decide to measure it.
- Use caution in interpreting what the measures show. Performance measures tell you what is happening; they do not tell you why something is happening. You need to plan for periodic evaluations to get at causality. It is possible that your program kept things from being worse than they appear or that the results measured might have happened even without your program.
- Fewer is better; avoid a shotgun approach to creating measures. Agencies tend to want to measure everything they do rather than focus on the most important few things. Goals might need to be prioritized to emphasize the most important things to measure.
- Look at similar agencies and programs for examples of performance measures. Two types of outcomes are particularly difficult to measure: (a) prevention and (b) research and development. How do you measure what did not happen, and how do you measure what might be experimental with a limited scope? The solution for the first is to find a proxy, and the best place to look might be at similar programs in other agencies. The Department of Health & Human Services does a lot of prevention work and is a good place to look for examples. The solution to the second often takes the form of simply finding out whether anyone anywhere is using the results of the research.
- The organization responsible for an agency's performance should be closely aligned with the organization responsible for its strategic planning. Otherwise, strategic plans and/or performance reports tend to be ignored. Performance management operationalizes an organization's strategic plan.
- More frequent reporting tends to be better than less frequent. Agencies often have a hard time getting their performance reports done on an annual basis, and the data are so out of date that they are not helpful for resource allocation decisions. The current OMB Director is calling for performance reporting more often than weekly, which seems daunting for agencies that have trouble reporting annually, but it could actually become easier if processes are put in place to streamline reporting the few most important data items. This frequency is already being required for reporting under the Recovery Act.
- Efficiency is output divided by input; effectiveness is outcome divided by input. Both of these common measures of program performance require input as the denominator. Efficiency is about producing more output with less input, but efficient does not always mean effective. Effectiveness is about results and therefore uses outcome measures. The logic model helps show those relationships clearly. (A brief worked example follows this list.)
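As a small worked illustration of these two ratios, the sketch below computes both for a hypothetical job-training program; the figures and variable names are invented for the example.

```python
# Hypothetical figures for a job-training program (illustrative only).
budget_dollars = 2_000_000        # input
trainees_completed = 500          # output: units of service delivered
trainees_employed = 300           # outcome: participants employed six months later

efficiency = trainees_completed / budget_dollars    # output per dollar of input
effectiveness = trainees_employed / budget_dollars  # outcome per dollar of input

print(f"Efficiency:    {efficiency * 1000:.2f} completions per $1,000 spent")
print(f"Effectiveness: {effectiveness * 1000:.2f} placements per $1,000 spent")
```

A program could score well on efficiency (many completions per dollar) yet poorly on effectiveness if few trainees actually find work, which is the sense in which efficient does not always mean effective.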
Refer to the articles Earned Value Management and Acquisition Management Metrics in the Acquisition Systems Engineering section of the SE Guide for related information.
References & Resources
- Advanced Performance Institute, Strategic Performance Management in Government and Public Sector Organizations.
- MITRE-supported performance measurement products.
- Steinhardt, Bernice, July 24, 2008, Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results, Testimony (statement) before the Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Committee on Homeland Security and Governmental Affairs, U.S. Senate, U.S. Government Accountability Office, GAO-08-1026T.
- The MITRE Institute, September 1, 2007, MITRE Systems Engineering (SE) Competency Model, Version 1, p. 46.
- U.S. Commodity Futures Trading Commission, Commodity Futures Trading Commission Strategic Plan 2007-2012, accessed February 24, 2010.
- U.S. General Accounting Office (now Government Accountability Office)/General Government Division, May 1997, Agencies' Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review, Version 1, GAO/GGD-10.1.16.
- U.S. Government Accountability Office, May 2003, Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity, GAO-03-454.
- U.S. Government Accountability Office, May 2005, Performance Measurement and Evaluation: Definitions and Relationships, GAO-05-739SP.
- W.K. Kellogg Foundation, Logic Model Development Guide.
Not all references and resources are publicly available. Some require corporate or individual subscriptions, some are not in the public domain, and some are located within MITRE and are accessible to MITRE employees only.