“Not everything that can be counted counts, and not everything that counts can be counted.”
– Albert Einstein, 1879-1955, German-American physicist
What outputs should be delivered from this step?
- A performance monitoring system designed to measure indicators of the performance (success or failure) of marine spatial management actions and the overall marine plan;
- Information on the performance of marine spatial management actions that will be used for evaluation; and
- Periodic reports to decision makers, stakeholders, and the public about the overall performance of the marine spatial management plan.
Introduction
Information on which to base evaluations of MSP performance can come from many sources, but performance monitoring has a particularly important contribution to make in providing the basic data that should underpin any evaluation.
DEFINITION. Monitoring of performance is a continuous management activity that uses the systematic collection of data on selected indicators to provide managers and stakeholders with measures of the extent of progress toward the achievement of management goals and objectives.
Because of the importance of performance monitoring and evaluation (M&E) to adaptive management, IOC/UNESCO published A Guide to Evaluating Marine Spatial Plans in 2014. The description of this step updates the original step-by-step guide with new material from that evaluation guide.
At least two types of monitoring are relevant to marine spatial planning: (1) monitoring that assesses the state of the system, e.g., “What is the status of current general conditions in the marine management area?”; and (2) monitoring that measures the performance of management actions, i.e., “Are the management actions we have taken producing the outcomes we desire?” These two types of monitoring are closely related—and both are important.
Sound performance monitoring program design depends on several factors:
- The objectives of the monitoring program need to be clearly articulated in terms that pose questions that are meaningful to the public and that provide the basis for measurement;
- Not only must data be gathered, but attention must be paid to their management, analysis, synthesis, and interpretation;
- Adequate resources are needed not only for data collection, but for analysis and evaluation over the long term;
- Monitoring programs should be sufficiently flexible to allow for their modification where changes in conditions or new information suggests the need; and
- Provision should be made to ensure that performance monitoring information is reported to all interested parties in a form that is useful to them.
Do not overstate the usefulness of performance monitoring programs. The marine environment is complex and variable. Separating the effects of human activities from natural variability is difficult (Carneiro 2013). This difficulty and others do not argue against monitoring performance of management actions, but they do make the case for realistic expectations, careful design, periodic evaluations, and a sustained commitment of resources.
TIP!
Some background reading
Several “classic” and comprehensive introductions to performance monitoring and evaluation (M&E) have already been written, including Measures of Success: Designing, Managing, and Monitoring Conservation and Development Projects (Margoluis & Salafsky 1998), Ten Steps to a Results-based Monitoring and Evaluation System (Kusek & Rist 2004), and Performance Measurement (Hatry 2006).
If you are just beginning to think about or to develop an MSP performance evaluation system, have a look at these important references for basic ideas, definitions, and detailed discussions of methods before you begin or when you get stuck.
Another more recent document you should have on your reference shelf is Open Standards for the Practice of Conservation (Conservation Measures Partnership 2013) available at: www.conservationmeasures.org. The Conservation Measures Partnership is a consortium of conservation organisations whose mission is to advance the practice of conservation by developing, testing, and promoting principles and tools to credibly assess and improve the effectiveness of conservation management actions.
Marine spatial planning (MSP) is a continuing, adaptive process that should include performance monitoring and evaluation as essential elements of the overall management process (Ehler and Douvere 2009). Rather than waiting until a spatial management plan has been developed, you should begin thinking about monitoring and evaluation at the very beginning of the planning process, not at the end.
Most marine planning efforts throughout the world claim to endorse adaptive management—simply defined as “learning by doing”. Which management actions work, which do not, and why? An adaptive approach to marine spatial planning and management is indispensable to deal with uncertainty about the future and to incorporate various types of change, including global change (climate change), as well as technological, economic, and political change. For example, the 2010 Final Recommendations of the [US] Interagency Ocean Policy Task Force stated that… “CMSP objectives and progress toward those objectives would be evaluated in a regular and systematic manner, with public input, and adapted to ensure that the desired environmental, economic, and societal outcomes are achieved” (see Key MSP Documents).
Climate change will certainly influence the location of important biological and ecological areas and species over the next 30–100 years and beyond, while technological change (and climate change) will considerably alter the exploitation of previously inaccessible marine areas such as the Arctic or the High Seas. Goals and objectives of MSP, and management plans and actions will inevitably have to be modified to respond to those changes—or plans quickly become ineffective, uneconomic, infeasible, and ultimately—irrelevant.
One of the 10 principles for MSP defined in the European Union “Roadmap for MSP” (see Key MSP Documents), for example, is the “incorporation of monitoring and evaluation in the planning process”, recognising that “… planning needs to evolve with knowledge” (European Commission 2008). Consistent with these MSP policy requirements, the marine spatial plans of Massachusetts (USA), Germany, and Norway, often held up as models of good practice, each include references to an adaptive approach or to monitoring and evaluation as essential elements of an adaptive approach.
However, despite the importance of an adaptive approach to MSP, few efforts have been made to define what such an approach really entails (Douvere & Ehler 2010). An adaptive approach requires monitoring and evaluation of the performance of marine spatial plans, but little research has been conducted on how such performance monitoring and evaluation can lead to meaningful results and whether current MSP initiatives have the essential features, e.g., measurable objectives, to allow it. The latter, however, is crucial as more and more countries attempt to learn from existing MSP practice and some countries (Belgium, the Netherlands, Norway) have completed their “second- or third-generation” marine spatial plans.
How will you recognize “success” in MSP?
“Successful” MSP has often been defined in practice as simply the adoption of a management plan (an output) or the implementation of new spatial management actions (also outputs). Sometimes meeting the objectives and targets of the management plan is the definition of success. According to an analysis of 16 marine spatial planning examples in practice (Collie et al. 2013) undertaken by the Ecosystem Science and Management Working Group for the National Oceanic and Atmospheric Administration (NOAA), successful MSP is defined along a continuum. Most American plans, e.g., Massachusetts and Rhode Island, consider success to be the adoption of the plan, while meeting the objectives of the management plan denotes success in many European marine plans, i.e., the plan is not an end in itself but a process to meet objectives and produce desired results (outcomes). The report for NOAA found that most marine planning efforts incorporate some level of monitoring. Several plans stated that they would use existing monitoring programs, but only a few plans have tied objectives and management actions to specific performance indicators. Of the plans that had performance indicators, only a few had pre-identified background or reference levels.
Performance monitoring and evaluation will be successful if progress is being made toward achieving management objectives through the MSP process. A few additional criteria are relevant:
- Stakeholders are actively involved and committed to the MSP process. Stakeholder involvement in problem identification, specification of MSP goals and objectives, selection of management actions, and monitoring and evaluation build support for the overall MSP process;
- Progress is being made toward the achievement of management goals and objectives. Since MSP is a multi-objective planning process, achieving the outcome of one objective may involve trade-offs with the outcomes of other objectives. In the absence of at least some indication of progress over a reasonable period of time, there is little justification for continuing the MSP process as originally designed;
- Results from performance monitoring and evaluation are used to adjust and improve management actions; and
- Implementation of the marine spatial plan is consistent with other applicable authorities. If not, disruptions in the planning and implementation process are inevitable, and a breakdown of trust among stakeholders is likely, possibly followed by withdrawal of stakeholder support, loss of funding, and litigation.
If stakeholders do not endorse the MSP process and its outputs and outcomes, the process has not been successful. If performance monitoring and evaluation results are not used to inform revisions of future plans, the process has not been successful either.
TIP!
The power of measuring results
If you do not measure results, you cannot tell success from failure.
If you cannot see success, you cannot reward it.
If you cannot reward success, you are probably rewarding failure.
If you cannot see success, you cannot learn from it.
If you cannot recognise failure, you cannot correct it.
If you can demonstrate results, you can win public support.
– Osborne & Gaebler, 1992, American management consultants
Identifying the need for performance monitoring and evaluation
Before designing and implementing a performance monitoring and evaluation process, it is important to determine who wants the results that evaluation can provide. What is driving the need for evaluation: is it required by legislation, is it a condition of funding, or do high-level executives and administrators want information on which to base future decisions? Is there a champion in the executive or legislative branches of government who wants to use evaluation information? Who will benefit from evaluation: administrators, legislators, auditors, non-governmental organizations, the public? Who will not benefit from evaluation? Who will carry out the evaluation?
Identifying who should be on the performance monitoring and evaluation team
An early step is to form the Performance Monitoring and Evaluation Team. The overall manager of the MSP process or a senior professional evaluator should lead the team. In addition, the team could include:
- Members of the MSP professional staff, including both natural and social scientists;
- Representatives of agencies responsible for MSP;
- A measurement expert, either from one of the agencies responsible for MSP, or an outside contractor (preferably familiar with the MSP process); and
- An information-processing expert.
Your Performance Monitoring and Evaluation Team should be no larger than 10-12 members. Team members should commit to the process for about one to two years, meeting frequently and regularly. You should be flexible about adding members and expertise to the team as needed.
TASK 1. DEVELOPING A PERFORMANCE MONITORING & EVALUATION PLAN
Once you have assembled your team, begin an initial planning or scoping phase to clarify the nature and scope of the performance monitoring and evaluation process. During this task, the main purpose of the monitoring and evaluation, the stakeholders to be consulted, and the time frame for the results should be established. This is an exploratory period. Key issues are identified from the perspective of management partners and other stakeholders, a review of existing documentation, and related management actions that may influence the program. The assumptions underlying the evaluation should be identified.
At the end of the initial scoping, there should be enough knowledge of the context for the evaluation that a general approach can be decided. The heart of evaluation planning is the evaluation design phase, which culminates in the evaluation plan. It is generally good practice to present and discuss the overall design with the management partners and other key stakeholders before completing the Performance Monitoring and Evaluation Plan. There should be no surprises, and early discussion builds buy-in and support for the evaluation.
Action 1. Re-confirming the MSP objectives
An effective performance monitoring system begins with a clear set of well-specified planning objectives. Since spatial planning objectives may have been modified during the MSP process (Steps 4-7), they should be re-confirmed with stakeholders and decision makers and, if necessary, updated before monitoring begins.
Action 2. Agreeing on outcomes to measure
Outcomes are the most important results for governments and stakeholders to measure. A focus on outcomes helps to build the knowledge base of the types of management actions that work, that do not work, and why. It can help build transparency and accountability into the MSP process.
DEFINITION. An outcome is an anticipated result of the implementation of a marine spatial management action.
Action 3. Identifying key performance indicators to monitor
The main purpose of establishing indicators is to measure, monitor, and report on progress toward meeting the goals and objectives of MSP. Indicators have numerous uses and considerable potential for improving management: they make it possible to monitor and assess conditions and trends, to forecast changes (for example, by providing early-warning information), and to evaluate the effectiveness of management actions. Each management action you identify should have a performance indicator.
DEFINITION. A performance indicator is a measure, quantitative or qualitative, of how close we are to achieving what we set out to achieve, i.e., our objectives and outcomes. The three main functions of indicators are simplification, quantification, and communication.
The selection of relevant and practical (i.e., measurable) indicators is one of the most important components of an outcome-based planning approach (see Step 3, Organizing the process through pre-planning). The table below identifies some characteristics of good indicators.
CHARACTERISTICS OF GOOD INDICATORS

| Characteristic | Description |
| --- | --- |
| Readily measurable | Measurable on the time scales needed to support MSP, using existing instruments, monitoring programs, and available analytical tools |
| Cost-effective | Monitoring resources are usually limited; how can effective monitoring be accomplished at least cost? |
| Concrete | Indicators that are directly observable and measurable, rather than those reflecting abstract properties, are desirable because they are more readily interpretable and accepted by diverse stakeholder groups |
| Interpretable | Indicators should reflect properties of concern to stakeholders; their meaning should be understood by as wide a range of stakeholders as possible |
| Grounded in theory | Indicators should be based on well-accepted scientific theory, rather than on inadequately defined or poorly validated theoretical links |
| Sensitive | Indicators should be sensitive to changes in the properties being monitored, e.g., able to detect trends in the properties or impacts |
| Responsive | Indicators should be able to measure the effects of management actions to provide rapid and reliable feedback on their performance and consequences |
| Specific | Indicators should respond to the properties they are intended to measure rather than to other factors, i.e., it should be possible to distinguish the effects of other factors from the observed response |
Be cautious about defining too many indicators. Choosing the right indicators is often a trial-and-error process and may take several iterations. Indicators can be changed, but not too often.
What are the principal types of indicators?
Marine spatial management indicators can be organized into three types:
- Governance indicators measure the performance of phases of the MSP process, e.g., the status of marine spatial management planning and implementation, stakeholder participation, compliance and enforcement, as well as the progress and quality of management actions and of the marine spatial management plan itself; governance indicators are particularly important at the beginning of the MSP process before real outcomes can be measured (Ehler 2003);
- Socio-economic indicators reflect the state of the human component of coastal and marine ecosystems, e.g., the level of economic activity, and are an essential element in the development of MSP plans. They help measure the extent to which MSP succeeds in managing the pressures of human activities in ways that deliver not only an improved natural environment, but also an improved quality of life in coastal and marine areas (for example, the number of jobs gained or lost) and sustainable socio-economic benefits; and
- Ecological or environmental indicators reflect trends in the characteristics of the marine environment. They are descriptive in nature, characterizing the state of the environment in relation to a particular issue, e.g., eutrophication, loss of biodiversity, or overfishing.
For examples of these types of indicators, see the IOC Guide to Evaluating Marine Spatial Plans (2014).
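In practice, many planning teams keep their indicators in a simple registry (a spreadsheet or database) so that each indicator is explicitly tied to an objective, a type, units, a baseline, and a target. The following Python sketch is purely illustrative; the class and field names are assumptions made for this example, not structures prescribed by the IOC guide:

```python
from dataclasses import dataclass
from enum import Enum


class IndicatorType(Enum):
    """The three types of MSP indicators described above."""
    GOVERNANCE = "governance"
    SOCIO_ECONOMIC = "socio-economic"
    ECOLOGICAL = "ecological"


@dataclass
class PerformanceIndicator:
    """One measurable indicator tied to a management objective."""
    name: str                       # what is measured
    indicator_type: IndicatorType   # governance, socio-economic, or ecological
    objective: str                  # the MSP objective the indicator tracks
    units: str                      # e.g., "km^2", "jobs", "% compliance"
    baseline: float | None = None   # value before plan implementation
    target: float | None = None     # desired value (the intended outcome)


# A hypothetical ecological indicator, invented for illustration:
seagrass = PerformanceIndicator(
    name="Area of seagrass habitat within protected zones",
    indicator_type=IndicatorType.ECOLOGICAL,
    objective="Protect biologically important areas",
    units="km^2",
    baseline=120.0,
    target=200.0,
)
print(seagrass.name, seagrass.indicator_type.value)
```

A registry like this makes it easy to check that every management action has an indicator, and that every indicator has the baseline and target it will need for later evaluation.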
Action 4. Establishing a baseline for selected indicators
One of the key questions of MSP is: Where are we now? Defining and describing where we are now is critical for both the analysis and evaluation of individual management actions before their implementation as well as for performance monitoring and evaluation after implementation of the marine spatial plan.
Establishing baseline data on indicators is critical for determining current conditions and measuring future performance. Measurements against the baseline will help decision makers determine whether they are on track with respect to achieving objectives. Baseline data can be collected from reports, direct observations, one-time surveys, interviews with experts, and direct field experiments, depending on the time and other resources available. A baseline of information about each of the indicators selected in the previous step is necessary before actual monitoring of the indicators begins.
DEFINITION. A baseline is the situation before a marine spatial management plan begins; it is the starting point for performance monitoring and evaluation of each performance indicator.
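As a concrete sketch of what establishing a baseline can involve, the fragment below summarizes pre-implementation observations of a single indicator into a baseline value and a measure of its natural variability. The data and function name are hypothetical, invented for illustration:

```python
from statistics import mean, stdev


def establish_baseline(observations: list[float]) -> dict[str, float]:
    """Summarize pre-plan observations of one indicator.

    Records the spread of the data as well as the mean, so later
    measurements can be judged against natural variability rather
    than against a single number.
    """
    return {
        "baseline": mean(observations),
        "variability": stdev(observations),
        "n_observations": len(observations),
    }


# Hypothetical pre-plan survey values for one ecological indicator (km^2):
pre_plan_surveys = [118.0, 122.5, 119.3, 121.1, 120.4]
print(establish_baseline(pre_plan_surveys))
```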
TIP!
Data collection
It’s important to collect only the data that will be used in the performance evaluation. After all, performance information should be a management tool—and there is no need to collect information that managers are not going to use.
“As a rule of thumb, only collect baseline information that relates directly to the performance questions and indicators that you have identified. Do not spend time collecting other information.” (IFAD 2002)
TIP!
The right number of indicators
Since each indicator implies an explicit data-collection strategy for measuring it, the key questions about data collection and management should be considered for each indicator before it is adopted.
Too many indicators can be difficult to track and may be a drain on available resources. Reducing the number of indicators is always preferable to trying to include too many.
TASK 2. EVALUATE PERFORMANCE MONITORING DATA
Evaluation is the element of management in which the greatest learning should occur. Ideally, it should be a continuous process in which measures or indicators of performance are defined and systematically compared with program goals and objectives. Evaluation should be undertaken periodically during the lifetime of a program. While evaluation is widely recognised as an essential element of management, few examples exist. One of the few is the Great Barrier Reef Marine Park Authority’s monitoring and evaluation activities related to its Representative Areas Program (See Key MSP Documents).
DEFINITION. Evaluation is a management activity that assesses achievement against some predetermined criteria, usually a set of standards or management objectives.
As already discussed, MSP initiatives often have goals and objectives that are very vague or general and thus are not easily measured. In these cases it is difficult, if not impossible, to determine the extent to which goals and objectives are being achieved. Evaluations, if undertaken at all, tend to fall back on indicators that measure effort (input) rather than results (outputs or outcomes). For example, the number of permits granted or denied might be used as an indicator of the performance of a MSP program rather than the number of use conflicts avoided or area of biologically important marine areas protected.
Meaningful evaluations can be conducted only if the objectives of the MSP program were stated in unambiguous terms and if indicators for assessing progress were identified in the planning phase, and monitored afterward. Baseline data are essential. Many evaluations yield ambiguous results because these preconditions for assessing performance do not exist.
Evaluation should be seen as a normal part of the MSP process. Integrated and adaptive MSP is based on a circular or iterative, rather than linear, management process that allows information about the past to feed back into and improve the way management is conducted in the future. Evaluation helps management adapt and improve through a “learning process”.
Evaluation consists of reviewing the results of actions taken and assessing whether these actions have produced the desired results (outcomes). It is something that most good managers already do where the link between actions and outcomes can be simply observed. But the link between action and outcome is often not obvious. Faced with the daily demands of their jobs, many managers are not able to monitor systematically and review the results of their efforts. In the absence of such reviews, however, money and other resources can be wasted on programs that are not achieving management objectives.
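Where objectives, baselines, and targets have been specified, the comparison at the heart of evaluation can be stated very simply: how much of the gap between the baseline and the target has been closed? The helper below is a minimal sketch of that rule; the thresholds and status labels are assumptions made for illustration, not part of any MSP standard:

```python
def evaluate_indicator(baseline: float, target: float, current: float) -> str:
    """Classify progress of one indicator toward its target.

    Assumes the target differs from the baseline; progress is the
    fraction of the baseline-to-target gap that has been closed.
    """
    progress = (current - baseline) / (target - baseline)
    if progress >= 1.0:
        return "target achieved"
    if progress >= 0.5:
        return "on track"
    if progress > 0.0:
        return "some progress"
    return "no progress, or moving away from target"


# Illustrative values: baseline 120 km^2, target 200 km^2, latest survey 165 km^2.
# 45 of the 80 km^2 gap (about 56%) has been closed, so this prints "on track".
print(evaluate_indicator(120.0, 200.0, 165.0))
```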
TASK 3. REPORTING RESULTS OF PERFORMANCE EVALUATION
Performance data should be reported in comparison to earlier data and to the baseline. In analyzing and reporting data, the more measurements there are, the more certain one can be of trends, directions, and results.
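One way to make that point concrete is a simple trend estimate over the monitoring time series. The sketch below fits an ordinary least-squares slope to equally spaced measurements of one indicator; the data are invented for illustration, and a real evaluation would also account for measurement error and natural variability:

```python
def trend_slope(values: list[float]) -> float:
    """Ordinary least-squares slope of equally spaced measurements.

    A positive slope suggests the indicator is improving over time;
    the more measurements, the more reliable the estimate.
    """
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    numerator = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    denominator = sum((x - x_mean) ** 2 for x in range(n))
    return numerator / denominator


# Hypothetical annual measurements of one indicator since the baseline year:
annual_values = [120.0, 128.0, 141.0, 150.0, 165.0]
print(f"Average change per year: {trend_slope(annual_values):+.1f}")  # +11.2
```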
A good communications strategy is essential for disseminating and sharing information with key stakeholders. Sharing information with stakeholders helps bring them into the business of government and can help generate trust and support.
TIP!
The number of key messages
The number of key messages to be communicated about an evaluation should be limited to between three and five. Limit the complexity of your key messages, and vary the message depending on the audience. Keep your key messages consistent and make sure everyone on the evaluation team is communicating the same messages. Avoid jargon and acronyms and keep the messages short and concise.
Good practices in evaluating the performance of MSP management actions
Monitoring and evaluation should be considered at the very beginning of the MSP process, not added at the end as an afterthought.
For example, the plans of Germany, Norway, and the state of Massachusetts in the USA all refer to an adaptive approach or to the importance of monitoring and evaluation in moving toward an adaptive approach. However, the German plans state only that existing national and international monitoring programs will be used to monitor the implementation of the North Sea and Baltic Sea marine plans. The three Norwegian plans stipulate the introduction of an integrated monitoring system with indicators, reference systems, and action thresholds, all as part of existing monitoring and research programs. The Massachusetts Oceans Act requires the revision of its marine plan at least every five years. Despite well-developed marine spatial plans, however, none of these countries has developed a monitoring system that would allow monitoring and evaluation of the performance of the spatial and temporal management actions of its marine plans.
An adaptive approach to MSP ultimately relies on clear, measurable objectives from which indicators can be derived that, in turn, inform monitoring and evaluation of the performance of MSP plans. Without clear, measurable objectives, the outcomes of plans cannot be monitored and reviewed systematically, and it is impossible to know whether MSP is actually working. Specifying SMART (specific, measurable, achievable, relevant, time-bound) objectives remains a challenge for MSP everywhere.
Useful documents include IOC’s A Guide to Evaluating Marine Spatial Plans (2014) and the English Marine Management Organisation’s Review of the Marine Planning Monitoring and Evaluation Framework and Development of Baselines (2016), both available in the Key MSP Documents section of this website.
Useful references:
Carneiro, G., 2013. Evaluation of marine spatial planning. Marine Policy 37, 214-229.
Collie, J.S., W.L. Adamowicz, M.W. Beck, B. Craig, T.E. Essington, D. Fluharty, J. Rice, and J.N. Sanchirico, 2013. Marine spatial planning in practice. Estuarine, Coastal and Shelf Science 117, 1-11.
Conservation Measures Partnership, 2013. Open Standards for the Practice of Conservation. Version 3.0. 47 p.
Douvere, F., and C. Ehler, 2010. The importance of monitoring and evaluation in maritime spatial planning. Journal of Coastal Conservation 15(2), 305-311.
Ehler, C., 2014. A Guide to Evaluating Marine Spatial Plans. IOC Manuals and Guides 70. Paris: UNESCO. 97 p.
Hatry, H.P., 2006. Performance Measurement: Getting Results. Second edition. Washington, DC: Urban Institute Press. 326 p.
Kusek, J.Z., and R.C. Rist, 2004. Ten Steps to a Results-based Monitoring and Evaluation System. The World Bank: Washington, DC. 247 p.
Margoluis, R., and N. Salafsky, 1998. Measures of Success: Designing, Managing, and Monitoring Conservation and Development Projects. Island Press: Washington, DC. 362 p.