E. M&E Reporting

Monitoring and evaluation are essential components of the NSDS cycle. Although some of these tasks are carried out at specific points in time, together they form a continuous process that feeds information to management throughout the NSDS process. Monitoring, in particular, requires a standing organisation of activities to track progress and alert management to potential problems. Reporting, too, is a task that must be taken seriously.

 
Regardless of the care taken in its design, the strategy will only succeed if its implementation is rigorously planned. Monitoring and evaluation of implementation constitute an essential, continuous process. It is important to know, at any point, whether or not one is deviating from the desired path, and if so, to take appropriate corrective measures. By the same token, implementation planning relies on a regular and diversified reporting mechanism.

As the creation of an NSDS must always start from what already exists, the activities to be completed in the first year of implementation are largely pre-determined. They generally consist either of the continuation of activities already under way (regular or otherwise), aiming for instance to improve the quantity or quality of statistics in order to meet as yet unsatisfied requirements, or of the commencement of new activities for which financing is already available from national and/or external resources. The first year's monitoring of an NSDS action plan therefore has a specific character, if only because the necessary resources are, in principle, available.

 

1. RESULTS CHAIN
To give an overview of the NSDS, a logical framework is prepared, highlighting the relationships between the different elements (vision, goals, expected results, activities, and means) and the underlying assumptions for a successful implementation of the strategy.

A results chain describes the following sequence, and the interrelations within it: means, activities, outputs, results (outcomes), and goals.
The means outline the resources necessary for the implementation of activities: material, human, and financial resources, the legal and institutional framework, etc. The activities can take various forms: preparation of methodological guides to improve the quality and dissemination of data, evaluation of the NSS's organisational framework, organisation of training workshops, surveys and censuses, etc. Outputs may be statistical publications made available, improved statistical series, the number of trained statisticians, etc. Results (outcomes) correspond to changes such as increased capacity to produce statistics, improved dialogue between users and producers, increased demand for statistics, etc. Examples of an NSDS results chain are given in the table below; a short sketch of how such a chain might be represented follows the table.

NSDS results chain - examples

Typical goals in an NSDS
1. Increase the relevance of official statistics
2. Improve evidence-based management
3. Enhance public confidence in official statistics
4. Increase funding

Typical outcomes in an NSDS
5. Increased national statistical capacity
6. Increased user satisfaction with official statistics
7. Increased accessibility of official statistics
8. Increased timeliness of official statistics
9. Increased demand for official statistics
10. Improved institutional capacity of the national statistical system
11. Improved user-producer dialogue
12. Availability of statistics

Typical outputs in an NSDS
13. Statistics produced
14. Number of data sets that have followed a quality protocol
15. Statistics produced with sex disaggregation
16. Adoption of an open data licence
17. Statistics produced under international standards
18. Ethics, e.g. a code of conduct
19. Improved coordination mechanisms

Typical activities/components of an NSDS
20. Training of NSS staff
21. Development of methodologies
22. Training of data users
23. Development of statistical regulations and guidelines
24. Installation or development of IT equipment and software
25. Raising financing for statistical development
26. Development of a code of conduct
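To make the chain concrete, the sketch below links a few of the elements above as simple records. This is a minimal illustration in Python under our own assumptions; the structure and names are ours, not an official PARIS21 tool.

```python
from dataclasses import dataclass, field

@dataclass
class ChainElement:
    """One element of a results chain (illustrative structure)."""
    level: str                      # "means", "activity", "output", "outcome" or "goal"
    description: str
    contributes_to: list = field(default_factory=list)  # links to higher-level elements

# Example links drawn from the table above: an activity feeds an output,
# which feeds an outcome, which supports a goal.
goal = ChainElement("goal", "Increase the relevance of official statistics")
outcome = ChainElement("outcome", "Increased national statistical capacity", [goal])
output = ChainElement("output", "Statistics produced under international standards", [outcome])
activity = ChainElement("activity", "Training of NSS staff", [output])

def trace(element: ChainElement) -> None:
    """Walk the chain upwards, printing each level in turn."""
    print(f"{element.level}: {element.description}")
    for parent in element.contributes_to:
        trace(parent)

trace(activity)  # activity -> output -> outcome -> goal
```

Tracing any activity up to the goals it supports is what the underlying assumptions of the logical framework make explicit: if an assumption fails, a link in this chain breaks.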

 

The Philippines has put in place a streamlined version of this results chain, which is worth mentioning.

2. MONITORING AND EVALUATION
One can define “monitoring” as a continuous process of collecting and analysing information to judge the quality of the implementation of an NSDS. This process regularly informs managers and other stakeholders of progress and difficulties in achieving results, compares achievements with those expected at the outset, and enables parties to take any necessary corrective measures.

Strong monitoring requires a plan laid out as follows: after defining the main goals to be achieved, one specifies the indicators that will be used to monitor progress and collects basic information on each indicator to establish a baseline. The means, frequency, and person responsible for compiling each indicator must be clearly defined. The compiled indicators are then evaluated and reports prepared to sketch out trends and to reach consensus on the changes to be made to inputs and activities, as well as to results and goals. Experience shows that, when indicators are identified, not enough attention is paid to their feasibility and regular availability. Another common limitation is the failure to precisely identify, and adequately sensitise, the people in charge of compiling the data.
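A minimal sketch of what one row of such a monitoring plan might look like is given below (illustrative Python; the field names and the single example entry are assumptions, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class MonitoringPlanEntry:
    """One row of a hypothetical NSDS monitoring plan."""
    indicator: str     # what is measured
    means: str         # how the value is compiled
    frequency: str     # e.g. "annual" or "six-monthly"
    responsible: str   # the person or unit in charge of compilation
    baseline: float    # value established at the start of the NSDS

plan = [
    MonitoringPlanEntry(
        indicator="Proportion of microdata datasets documented to the DDI standard",
        means="Inventory of the survey archive",
        frequency="annual",
        responsible="Data archive unit",
        baseline=40.0,  # percent
    ),
]

# Echoing the caveat above: every entry must name a responsible party and a
# feasible means of compilation, otherwise the indicator will never be produced.
for entry in plan:
    assert entry.responsible and entry.means, f"incomplete entry: {entry.indicator}"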

The evaluation will judge the relevance, performance, and success of the NSDS. It reveals to what extent the NSDS achieved its goals. Monitoring and evaluation constitute two inextricably linked processes. Monitoring focuses on activity implementation and output delivery. Evaluation concerns the achievement of results, the effects and impacts on the global goal of the NSDS. It helps draw lessons and capitalise on experience for a future NSDS.

The system of evaluation must incorporate the flexibility necessary to take account of the inevitable changes that will occur during the strategy's implementation period. These changes may result in more or less significant adjustments to the selected strategic goals and/or expected results, which in turn require changes to the schedule of activities and to the activities themselves. Evaluation must identify which expected results have not been achieved, and why, in order to re-direct the strategy. Evaluation generally includes two key exercises: a mid-term evaluation and a final evaluation. The mid-term evaluation enables analysis of the differences between recorded and expected results so that the necessary changes can be made, including to the schedule of activities for the second half of the strategy period. The final evaluation allows lessons to be learned and progress to be built on for a future NSDS.

 

In practice

Who and When
Monitoring and evaluation of NSDS implementation are primarily the responsibility of managers at different levels of the national statistical system, beginning with those responsible for carrying out various activities and those managing resources (human, material and financial). The frequency of monitoring will depend on the country’s preference. As a general rule, the frequency is annual or even six-monthly. More rarely, it can be undertaken quarterly or monthly. 

Evaluation takes place during the design phase (assessment of the NSS) then later during implementation (at the midway point or end of the NSDS).

Monitoring is necessarily linked with the preparation of annual programmes of work for each unit and for the whole NSS.

How
A good monitoring and evaluation framework should be based on internationally recognized standards and practices. The selected indicators should be measurable. Each indicator has a baseline, a unit of measure, and a target.

Examples of indicators used in NSDS monitoring and evaluation are listed in the table below:

Indicator and unit of measure:
1. Time lag between survey production and dissemination of results (time)
2. Proportion of microdata datasets documented according to the DDI standard (percentage)
3. Number of datasets that have been quality-controlled and designated as Official Statistics (number)

Add a baseline:
1. Time lag between survey production and dissemination of results: 6 months
2. Proportion of microdata datasets documented according to the DDI standard: 40 percent
3. Number of datasets that have been quality-controlled and designated as Official Statistics: 100

Add a target and an expected date for its completion:
1. Time lag between survey production and dissemination of results: 4 months (by March 2014)
2. Proportion of microdata datasets documented according to the DDI standard: 80 percent (by June 2014)
3. Number of datasets that have been quality-controlled and designated as Official Statistics: 500 (by March 2017)
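The baseline/target pairs above lend themselves to a simple progress calculation at review time. The sketch below reuses the three indicators from the table; the mid-term values and the progress formula are illustrative assumptions, not part of any prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    unit: str
    baseline: float
    target: float
    target_date: str

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target distance covered.

        0.0 means still at the baseline, 1.0 means the target is reached;
        the formula also works when the target is below the baseline,
        as with the time-lag indicator.
        """
        return (current - self.baseline) / (self.target - self.baseline)

indicators = [
    Indicator("Time lag between survey production and dissemination of results",
              "months", baseline=6, target=4, target_date="March 2014"),
    Indicator("Microdata datasets documented according to the DDI standard",
              "percent", baseline=40, target=80, target_date="June 2014"),
    Indicator("Quality-controlled datasets designated as Official Statistics",
              "datasets", baseline=100, target=500, target_date="March 2017"),
]

# Hypothetical current values observed at a mid-term review.
for ind, current in zip(indicators, [5, 55, 220]):
    print(f"{ind.name}: {ind.progress(current):.0%} of the way to "
          f"{ind.target:g} {ind.unit} (due {ind.target_date})")
```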

 

PARIS21 has produced indicators that can be used to measure statistical capacity. They include quantitative and qualitative indicators to enable countries to undertake a self-assessment or a peer review of the level of development of their statistical system. They comprise:

  • System-wide indicators which outline the statistics that a country publishes, their reference year, and their source;
  • Quantitative indicators which relate to statistical agencies. To enable comparison, it is recommended to include at the very least the agencies responsible for calculating Gross Domestic Product (GDP), population figures, and household income/expenditure;
  • Qualitative indicators which concern statistical series. Here again, it is recommended to include GDP, population figures, and household income/expenditure to enable international comparison.

The World Bank and the UN Economic Commission for Africa have also developed statistical capacity indicators.

Normally, NSDSs are evaluated at the midway point and at the end of implementation. Independent consultants are often tasked with carrying out these evaluations with well-defined terms of reference. The terms of reference can be considered an evaluation management tool in the sense that:

- they describe the evaluation goals, which can differ depending on whether it is a mid-term or final evaluation;
- they are useful in determining to what extent the evaluation was undertaken successfully;
- they legitimise the task allocated to the evaluation consultant.

Given their role as an evaluation management tool, the terms of reference are generally discussed and approved by a working group or steering committee. They are annexed to the consultant’s contract.

 

3. PEER REVIEWS
Peer reviews are increasingly used to evaluate a national statistical system, an official statistical body, or an NSDS. They are a friendly exercise relying mostly on mutual trust between countries and a common confidence in the process. They are conducted by “peers,” in other words by NSS managers who collaborate with their counterparts in another country. The methodological framework can be the UN Fundamental Principles of Official Statistics or their equivalents at the continental or regional level: European Statistics Code of Practice, African Charter on Statistics, etc.

The peer review evaluates the functioning of all aspects of an NSS (institutional, organisational, statistical production mechanism, etc.), identifies strong and weak points, makes recommendations on improving performance, and helps share good practices. Peer review evaluation reports are in principle made public.

Peer review implies that the evaluators are NSS managers, which means that they are generally statisticians. Yet the points of view of other stakeholders are strongly needed in an evaluation, and users, including those from outside the NSS, should be involved in the process to diversify perspectives.

 

In practice

Who and When
In contrast to the evaluation of an NSDS or, more generally, of a national statistical system or a public statistics organisation, which can be provided for in the relevant laws, a peer review occurs at the initiative of the responsible persons directly affected (e.g. the Director of the central statistical office). A peer review of an NSDS can thus take place during its preparation, preferably coinciding with the assessment of the NSS, so that the observations, analysis and recommendations of an experienced team can inform the preparation of capacity-improvement strategies on the most objective basis possible.

How
In addition to peers in the true sense of the term (namely senior managers in other national statistical structures), the team responsible for the peer review can draw on independent consultants (ensuring that users are duly represented) who, unlike the peer reviewers, will be paid. This was the case for the peer reviews conducted in Africa with the support of the PARIS21 Secretariat. As part of the structure adopted, a guide for the peer reviewers was prepared to describe the main factors to be taken into consideration. The methodological reference framework consists of the United Nations' Fundamental Principles of Official Statistics and the African Charter on Statistics, which represents their implementation in Africa.

Peer review takes the form of a series of interviews with the senior management of the central statistical body and with those of a sample of other services producing public statistics, national users of statistics, and technical and financial partners. Based on the guide, these interviews cover: the legal and regulatory framework governing statistical activities; the statistics production structure (including human, material and financial resources); and the analysis, distribution, storage and archiving of statistical data.

Particular attention is paid to the professional independence of statisticians (independence in the choice of concepts, definitions, nomenclatures and methods; integrity and impartiality of public statistics services), the quality of data, the satisfaction of users, and, where appropriate, the processes of preparing, implementing, monitoring and evaluating the NSDS in progress. Based on its observations, the examining team analyses strengths, weaknesses, opportunities and threats, and develops recommendations. A report on the peer review covering all these elements is published at the end of the process.

 

4. REPORTING SYSTEM
Reporting is an integral part of any monitoring and evaluation framework. Its main goal is to provide and publish comprehensive and regular information on the implementation of the NSDS or any other programme. Over the several years of NSDS implementation (generally three to six), changes of varying importance can arise which require modifications to the goals, expected results, and activities. These modifications are decided by NSS managers, or even by political authorities, who must receive timely information and analysis in order to take informed decisions. It is the role of the reporting system to provide this information.

 

 In practice 

Who and When
Different countries will have different levels of sophistication in their reporting system. In all systems, however, the central statistical body should produce (or supervise/co-ordinate the production of) annual reports at the national level to provide an update on progress in NSDS implementation, a summary of difficulties encountered, and proposed solutions. A more sophisticated reporting system may also include semi-annual, quarterly, or even monthly reports, generally produced by the different agencies directly involved in implementing the NSDS.

How
Annual reports are compiled from the reports produced by the various services and agencies that produce official statistics, following a standard model to facilitate their combination. They are then examined by the official coordinating body of the NSS (the national statistical council or equivalent). In some countries the conclusions and recommendations resulting from this examination are then submitted to the government.

Half-yearly, quarterly or even monthly reports are examined at a level closer to the statistics-producing services and agencies, for example by ministerial or sectoral committees made up of technicians directly involved in the production of statistics.
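In both cases, a standard reporting model is what makes combination across agencies mechanical. The sketch below assumes a minimal common format; the field names and the example reports are illustrative, not drawn from any particular country's system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgencyReport:
    """A progress report from one NSS agency, following a standard model."""
    agency: str
    period: str                                             # e.g. "2014-Q1" or "2014"
    progress: List[str] = field(default_factory=list)       # work completed
    difficulties: List[str] = field(default_factory=list)   # problems encountered
    solutions: List[str] = field(default_factory=list)      # proposed remedies

def annual_report(reports: List[AgencyReport], year: str) -> dict:
    """Combine the agency reports for one year into a national summary."""
    in_year = [r for r in reports if r.period.startswith(year)]
    return {
        "year": year,
        "agencies": sorted({r.agency for r in in_year}),
        "progress": [p for r in in_year for p in r.progress],
        "difficulties": [d for r in in_year for d in r.difficulties],
        "solutions": [s for r in in_year for s in r.solutions],
    }

reports = [
    AgencyReport("Central statistical office", "2014-Q1",
                 progress=["Household survey fieldwork completed"],
                 difficulties=["Funds released late"],
                 solutions=["Earmark a specific budget line"]),
    AgencyReport("Ministry of Health", "2014-Q2",
                 progress=["Health statistics yearbook published"]),
]
print(annual_report(reports, "2014"))
```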

 

5. DIFFICULTIES ENCOUNTERED BY DEVELOPING COUNTRIES

It follows from the foregoing that the quality of monitoring and evaluation systems (including reporting) depends largely on the institutional arrangements in place for NSDS design and implementation monitoring. It is at this level that developing countries face one of the main difficulties. Indeed, the institutional arrangements in place in these countries have many shortcomings:

  • While the Boards of Directors and Councils provided for in statistical legislation meet regularly and report to the appointing authority, other co-ordination and consultation bodies, e.g. user-producer committees, tend to meet infrequently for a number of reasons, including the following: the composition of the committees is often inappropriate; meeting agendas are not interesting, especially to data users; invitations to meetings are not sent out in good time; some NSS managers show little interest; and (financial) resources are sometimes lacking.
  • In some cases statistical laws do establish a body for official co-ordination, such as a National Statistical Council, but the implementing regulations are not always suited to efficient functioning: a National Statistical Council chaired by the Prime Minister and composed of ministers often has difficulty meeting as scheduled. Likewise, a National Statistical Council chaired by the minister responsible for statistics and composed of other ministers will struggle to function optimally when most members are absent.

Another major difficulty lies simply in the absence of a relevant monitoring and evaluation system. When such a system is mentioned in the NSDS, it is often only very briefly described. In particular, one does not find the full battery of indicators relative to goals, results, outputs, and activities, which makes any regular monitoring and reliable evaluation impossible.

To overcome the major difficulties mentioned above, it is important to allocate sufficient resources to the co-ordination and consultation bodies, either through a specific budget line or through resources from the National Fund for Statistical Development in those countries where such a fund already exists or is being set up. Lastly, it is important, in the countries concerned, to revise and/or complete the legislative and regulatory provisions in force.