4 Assessing



Before elaborating an NSDS, it is desirable to carry out an in-depth assessment of the current status of the statistical system in order to set the priorities for statistics development. The assessment is aimed at answering the question “Where are we?” through a full description of the National Statistical System (NSS). It should lead to an understanding of the data demands, adequacy and quality of the statistical outputs, strengths and challenges, and the organisation and management of the NSS as a whole.

The results need to be actionable in order to aid the process of setting strategic objectives. The assessment should therefore be realistic, objective, and critical. Two areas need to be considered in order to understand how the NSS is performing: a) the extent to which its products meet users’ needs and b) its statistical capacity. If a previous assessment has been undertaken, it is important to compare the results to gauge any improvements attained.


On the first, an analysis of users’ needs should be carried out to establish whether the data needed to produce demanded statistical indicators are available and to identify data gaps. Most gaps arise from a mismatch between data demand and supply or between products and results (the effectiveness of the system from the user’s point of view): gaps between what data are desired and what is available, and gaps in the quality of available data and of the service provided.
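The demand/supply comparison described above can be sketched as a simple set difference between demanded and available indicators. This is an illustrative sketch only; the indicator names and the function are hypothetical, not part of any official NSDS tool.

```python
# Hypothetical sketch of a demand/supply gap analysis:
# indicators demanded by users minus indicators the NSS already produces.

def find_data_gaps(demanded, available):
    """Return demanded indicators for which no data are currently produced."""
    return sorted(set(demanded) - set(available))

# Illustrative indicator lists (hypothetical)
demanded = ["poverty_rate", "literacy_rate", "u5_mortality", "gdp_growth"]
available = ["gdp_growth", "literacy_rate"]

print(find_data_gaps(demanded, available))  # ['poverty_rate', 'u5_mortality']
```

In practice the same comparison would be run against a full inventory of official statistics, and a second pass would flag quality gaps among the indicators that are nominally available.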

The assessment will also highlight the performance of the statistical system (efficiency - capacity/products), and present its main internal strengths and weaknesses, as well as the external threats that can affect the way in which it evolves and the opportunities to be seized. Success will be evaluated against the starting situation. In fact, for countries which already have an NSDS, the assessing phase is equivalent to the evaluation phase of the previous NSDS. It should be noted that a thorough analysis of statistical capacity is crucial in designing strategies that help reduce data gaps and improve the performance of the statistical system.

A useful assessment uses best practices and benchmarks against international standards and frameworks as appropriate. There are currently several tools available for countries to assess their NSS. In 2017 PARIS21 conducted a review of international assessments and their use, which will be discussed in the following sections.

Specifically, the assessment should lead to an understanding of the following dimensions:

  • Data demands/needs and user satisfaction
  • NSS capacity
  • NSS statistical outputs
In practice

Before conducting an assessment, it is of utmost importance to decide on the evaluator (e.g. national experts, international experts, a national commission), the focal point, the participants in the process, and the assessment tool to use. The list of those to be involved may include data producers (at what levels and in which agencies), higher authorities responsible for the NSS, including ministers, statistics board members or senior officials in parent ministries, current data users, potential users, data providers, development partners, research organisations, academia and regional bodies.


Preparation for the overall assessment requires a thorough review of existing documentation and findings of any earlier assessments. Very few countries will be starting strategic planning from scratch (some countries are designing their second NSDS) and the purpose will normally be to improve an existing national statistical system.

The assessment should start with a review of relevant policy documents to identify priority areas and necessary indicators. These documents are likely to include national development policy frameworks and their reviews (e.g., sustainable development, poverty reduction, sectoral, and subnational strategies), as well as sub-regional and international development policy documents (e.g., the 2030 Agenda for Sustainable Development, international agreements, and country SDG reports). It might also be appropriate at this stage to read the country policy/programme documents of potential donors.

Analysis of all relevant documents, including existing reports on the statistical situation, will provide a first general picture of the country’s statistical development. Any fruitful analysis of the NSS must be undertaken and owned by the country itself.  There are several international tools to aid in this process; this document will provide guidance on how to select them.

PARIS21 (2017: 15) has elaborated guidelines on how to choose the assessment tool. The NSDS design team could use the following guide questions to identify the tool most appropriate to national needs:

  • Who needs to be convinced by the assessment? Who is to receive and act on the results?
  • Who should carry out the assessment? Self-assessment with support from an international expert, or an independent assessment by a development partner?
  • Will those who need to be influenced by the results, in order to bring about the required capacity improvements, trust and value the results?
  • Is there an issue or sector which requires special attention?
  • How urgent is the assessment (some are rapid, others take several months)?
  • Is there an interested development partner?
  • Who should be consulted in the assessment?


Users’ demand for statistics varies greatly according to their purpose and their capacity, literacy and sophistication in the use of statistics. Therefore, a review of both current and potential data demand should be undertaken. User needs cannot be properly met unless they have been properly identified, synthesised, understood and prioritised, and unless users are fully aware of existing statistical production and management practices. It is important to emphasise that users invariably have a long list of statistical needs, and every effort should be made to guide them in identifying their priorities. Establishing the link between user needs and national development plans and/or specific national programmes will be crucial when taking final decisions about which statistical outputs to prioritise. Moreover, user needs and priorities are always changing, and keeping track of these changes requires that consultation and dialogue with users be an ongoing activity.

Consultations and discussions with users should aim to answer the following main questions:

  • Who are the main users of statistics? How do they use statistics in their own operations?
  • To what extent are required statistics available, and how have users been constrained by their absence? Does the existing system produce the appropriate set of indicators to monitor national development goals and meet international requirements (e.g. the SDGs) and regional commitments?
  • Do the statistics produced contribute to better accountability and transparency of government?
  • How do government and non-government users assess the adequacy of existing statistics in terms of relevance, accuracy, consistency, completeness, timeliness, level of disaggregation (geographic, gender, etc.), presentation or readability of publications, practices with respect to the revision of preliminary data, and accessibility of data, metadata and microdata?
  • What are their relationships with the main producers of statistics, and how do they perceive their own role in the development of the NSS?
  • Are the current advocacy strategies sufficient to raise public awareness of the importance of the data produced? Does the system provide adequate training to assist users to interpret data, develop indicators and make best use of statistics? Does the system offer tailored or on-demand studies?
  • What are their current and perceived future statistical needs and priorities? Are their needs linked to specific national programmes or development plans?
  • How do they think their needs can best be met in the context of the NSDS?

Statistical capacity

PARIS21 has developed the Capacity Development 4.0 (CD 4.0) Framework to analyse capacity in statistical systems through a new lens. In this context, statistical capacity refers to the ability of a country’s national statistical system, its organisations and individuals to collect, produce, analyse and disseminate high quality and reliable data to meet users’ needs (Eurostat, 2014; World Bank, 2017).  The framework was developed by a Task Team established in 2017 to analyse statistical capacity in the context of the new data ecosystem. The Task Team involved a large group of actors, including national statistical offices, development partners, multilateral and bilateral organisations and private sector providers.

The CD 4.0 Framework identifies three main levels for assessing capacity, as in various existing frameworks (e.g. Denney and Mallet, 2017). Official statistics are produced by the NSS, which is composed of several institutions or agencies that interact with each other; these are in turn made up of people who interact with others. For this reason, statistical capacity has to be assessed at three levels: a) the individual (e.g., statistics employees, including prospective ones); b) the organisational (e.g., the NSO); and c) the system as a whole, which comprises the individuals and organisations, plus the links and interactions between them.

There are five targets to consider in this framework: resources, skills and knowledge, management, politics and power (i.e., governance), and incentives. Skills are needed to make use of resources, while management puts them to the best use towards achieving specific goals and objectives. Setting objectives is a matter of political interactions, which are guided by incentives. All five need to be aligned for a functioning NSS.

The Capacity Development 4.0 framework covers the following targets:

  • Resources: professional background; human resources; legislation, principles and institutional setting; funds and infrastructure; plans (NSDS, sectoral…); existing data
  • Skills and knowledge: technical skills; statistical production processes; data literacy; work ‘know-how’; quality assurance and codes of conduct; problem solving and creative thinking; knowledge sharing
  • Management: time management and prioritisation; strategic planning and monitoring and evaluation; NSS co-ordination mechanisms; organisational design; data ecosystem co-ordination; HR management; change management; advocacy strategy; fundraising strategies
  • Politics and power: teamwork and collaboration; relationship between producers; communication and negotiation skills; relationship with users; workplace politics; relationship with political authorities; strategic networking; relationship with data providers
  • Incentives: career expectations; compensation and benefits; stakeholders’ interests; income and social status; organisational culture; political support; work ethic and self-motivation

To date, the availability of performance indicators to measure the CD 4.0 dimensions is limited. As data ecosystems become more complex, NSS assessments will gradually need to integrate other performance indicators of capacity, including individual skills beyond technical competencies, new organisational practices and emerging actors (e.g. private providers, citizen-generated data initiatives). This will allow the assessment to take a forward-looking approach and identify areas for further development within the NSS.

Statistical outputs

Existing and already planned outputs of the NSS will be assessed. This should be further complemented by an assessment of potential new outputs to be produced in response to identified priorities. Each key output should be gauged against agreed criteria, such as international standards and frameworks, and recommended methodologies, among others. 

Data quality assurance frameworks and dissemination standards such as the IMF Data Quality Assessment Framework (DQAF) and the General Data Dissemination System (GDDS) are still considered gold standards for assessing statistical outputs and are thus taken into account in the NSDS in most countries. A large number of countries currently subscribe to the GDDS and have therefore already carried out many of the steps required to develop a strategic approach to statistical development. Those countries not yet participating would find the GDDS an important early step.

In order to assess statistical outputs, consider the following questions:

  • What statistics are available (inventory of supply), what are their sources, and how quickly are they made available to users (publication, dissemination, and communication policies and processes)?
  • What is the quality of the statistics, and how are they produced (production processes, methods and procedures, use of international standards, constraints and problems), processed, analysed and archived (IT policies, databases, anonymisation, etc.)?
  • What improvements in data management systems and processes would facilitate efficient data production, i.e. reduce duplication of effort and fill gaps in the system?
  • Are there clear definitions of all data produced? Are they archived so that they can be accessed by all relevant users and producers throughout the NSS and beyond? Does the system produce the appropriate set of indicators to assess sector performance?
In practice
The NSDS design team, in close coordination with all other units (see PREPARATION) and with possible support from consultants, will carry out the assessment. Prior analysis of existing information, classification of statistical outputs and an inventory of all the NSS units will facilitate this exercise.

Assessment of user needs can be done through various approaches. As with assessment generally, it is likely that the design team will be able to build on existing processes or a benchmark assessment of user needs (quality principles related to official statistics) already conducted in the past. 

One approach is to identify those who are interested in particular data sets according to their preferred areas and arrange contact with these users. The list prepared through the PREPARATION process (see PREPARATION - Identification of stakeholders) is a good starting point in identifying the main data users. Selected institutions from each of the main user groups (see ADVOCATING) should be included in the consultation and discussions held with them, either individually or in small groups, whilst others might be invited to contribute in writing. The process should ensure that policy and decision makers as well as technical staff in user institutions are consulted. 

A second approach to user involvement that has met with success in a number of countries is to organise a country workshop bringing together data compilers, data users, and donor agencies. The workshops deal with specific statistical topics of interest to participants and in addition encourage dialogue among groups of compilers and users. The workshops have proven useful in sensitising participants to the importance of statistics, providing progress reports on data improvements, and discussing new issues. 

The views of the various users will be taken into account and compared with the inventory of official statistics. Information can also be obtained by means of a questionnaire or by visiting stakeholders and interviewing them. The latter approach is usually preferred, as questionnaires often suffer from low response rates.

During the NSDS preparation process, the sector committees to be created will constitute an adequate framework for the participation of the users and the identification of their needs. 

For assessing statistical capacity and outputs, there are numerous international tools. A study conducted by PARIS21 (2018), in which the questions and indicators of fourteen of the most popular international assessments were categorised using the classification of the Capacity Development 4.0 Framework, found that 40% of all questions focus on skills and knowledge at the organisational level, mainly on statistical production processes and quality assurance frameworks.
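The categorisation exercise described above amounts to tagging each assessment question with a CD 4.0 target and level and tallying the shares. The following is a minimal sketch under assumed data: the question tags and their counts are hypothetical, not the actual PARIS21 (2018) dataset.

```python
from collections import Counter

# Hypothetical mini-inventory of assessment questions, each tagged with a
# CD 4.0 target and level; the tags below are illustrative only.
questions = [
    {"target": "skills and knowledge", "level": "organisational"},
    {"target": "skills and knowledge", "level": "organisational"},
    {"target": "resources", "level": "system"},
    {"target": "management", "level": "organisational"},
    {"target": "politics and power", "level": "system"},
]

# Count questions per (target, level) cell and report each cell's share.
counts = Counter((q["target"], q["level"]) for q in questions)
total = len(questions)
for (target, level), n in counts.most_common():
    print(f"{target} / {level}: {n / total:.0%}")
```

With the illustrative tags above, "skills and knowledge / organisational" accounts for 40% of questions, mirroring the headline finding of the PARIS21 study.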

There are four types of international tools: A) those used by national authorities to inform national statistical planning and strategic processes, B) tools for partners who want to ‘invest’ in statistics to inform project design and monitoring, C) tools for international monitoring of statistical performance and D) data quality assessments/compliance with codes of practice. (PARIS21, 2017: 7-8)

Type A tools are “usually conducted under the auspices of the national planning or statistical authorities, but it is generally undertaken by reputable international experts or peers working alongside the national authorities (…) [They] are usually carried out over a period of weeks or months and involve a large number of consultations with national, regional and international stakeholders”.

Type B tools are usually “conducted by experts from, or hired by, the development partners’ own organisation (…) The diagnostic period extends over a period of weeks and involves consultations with stakeholders” (PARIS21, 2017: 7)

Such assessments are meant to identify the strengths and weaknesses of the statistical system and quality constraints.

Type C tools “do not involve a visit to the country, but rely on publicly available information from national statistical websites” (PARIS21, 2017: 8). Finally, Type D tools are implemented to observe international organisations’ standards; they “are normally compulsory and designed to check compliance with the body’s own statistical codes of practice and standards”.