Capacity for Forestry Research in the Southern African Development Community

G.S. Kowero and M.J. Spilsbury


[Chapter 1]
Introduction

[Chapter 2]
Previous Forestry Capacity-related Work in the SADC Region

[Chapter 3]
Methodology

Survey of Methodologies

Study Methodology

Limitations of Study Methodology

[Chapter 4]
Results and Discussion

Research Resources

Research Environment

[Chapter 5]
Conclusions and Recommendations

Conclusions

Recommendations

References

Annex 1. Methodology and Indicators of Research Capacity

Annex 2. Forestry Research Manpower in the SADC Region

Annex 3. Values for Research Indicators by Institutes

Annex 4. Institutes by Research Capacity Indicators

Annex 5. Overview of Physical Resources by Institute

Annex 6. Institutions Visited and those which Mailed Information


List of Figures

Figure 1. Distribution of forestry-related researchers in the SADC region

Figure 2. Distribution, by country, of researchers with M.Sc. or Ph.D. and more than 4 years' experience

Figure 3. Researchers, by institution, with M.Sc. or Ph.D. and at least 4 years' experience

Figure 4. Number of research staff by institute and budget per researcher


List of Tables

Table 1. Some positive and negative aspects of regional approaches

Table 2. Distribution of research operational expenses in some institutions (%)

Table 3. Research support facilities in sample institutions

Table 4. Research interactions and their perceived value

Table 5. Interactions with educational institutions and users of research results

Table 6. Salary and non-salary incentives

Table 7. Use of formal and informal evaluations

METHODOLOGY

Survey of methodologies

A review of the literature and discussions with a number of people identified the following alternative approaches that have been used to evaluate research capacity.

  • Use of an external review team. This approach appears to be favoured by funding agencies and some individual research institutions, and is probably the most widely used research evaluation approach (Bengston et al. 1988). Under this approach the capacity to execute research is usually one of the items examined. The level of detail depends on several factors, including the competency of team members in appraising research capacity and the priority accorded to such capacity in the evaluation. In this respect, Ruttan (1978) notes that a research review should not limit itself to assessing the quality and value of the programme in force; rather, it should engage in a dialogue with the staff and management to find ways of increasing efficiency and contributing to the evolution of a highly productive programme. The survey reported in Burley et al. (1989) is one example of an external review team, in this case commissioned by the World Bank to provide information to guide Bank decisions in its agricultural support programme for the region. CIFOR has also adopted this approach in an internally commissioned external review undertaken in 1995. In both cases the methodology adopted, which is client-driven, considered many more variables than those specific to the capacity to undertake research.
  • It is also common to use checklists which give an indication of the research capacity of an individual institution. Checklists can be used either on their own or in combination with other approaches. Bengston et al. (1988) report that checklists developed by Schweitzer and Long (1979) have been used for investigations of science and technology institutions in Nigeria, Malaysia and Colombia. The checklists were used to guide the structure of interviews with the institutions studied.
  • Though not strictly a research capacity evaluation approach, ex post evaluation of the impacts of various forestry programmes implemented in a particular locality can provide information on research capacity. The impact of research is partly a function of the capacity available to produce results and to make them known and adopted by potential users. Example evaluations include those by Ridker (1994) and Someshwar (1994). Alvez (1984), Elz (1984) and Bengston et al. (1988) share the view that impact studies tend to be useful in justifying past actions and expenditures, can serve to support new research proposals, and help to build credibility and/or political support, but are unlikely to be of direct use in improving the organisation and management of a research institute; things which are important in improving research capacity. The authors do not fully concur with this view: research effort that leads to large positive impacts will always be favoured over research that makes no measurable difference. Analyses that attempt to investigate how research efforts yield an impact, as well as attempting to quantify it, can contribute much to the improved management of research. This category of analysis is a great improvement on the quantification of outputs because the emphasis is on the quantification of outcomes; i.e. the extent to which research has not only succeeded in providing a solution to a problem, but also the extent to which solutions are adopted and benefits accrue to the users. Since the definition of research capacity focuses on the resolution of problems through research, research outcome should be an important consideration in the evaluation of research outputs. Nevertheless, impact assessments per se are not the most efficient means of estimating institutional research capacity.
  • Bengston et al. (1988) report on an approach proposed by Clark (1980) in which the deviation of a research institution from 'optimal' behaviour serves as a measure of performance and research policy. The six characteristics constituting optimal behaviour of a research institution were identified as: well-developed internal and external technical communications; socio-economic communications (or 'adequate liaison with the productive sector'); a programme-centred research approach as opposed to a research organisation along lines of scientific disciplines; employment of economists in the institution to assess proposed research projects and help guide project selection; decision making structures consisting of a series of research committees within the institution to review proposed projects and decide on renewal of existing projects; and finally, the priorities of a research institution should be linked to the country's overall development plans or national objectives. Clark suggested a number of proxy variables to make the evaluation quantitative, but did not test this approach in the field and the authors of this document have yet to find a study which tests this approach using empirical data.

With the exception of that proposed by Clark (1980), all the other approaches mentioned earlier tend to concentrate on a descriptive account of the variables under investigation with very little attempt at showing how they feature within the institution or in relation to other similar institutions. Quantification of the relationships between the variables under study is also minimal.

Study methodology

The approach adopted in this study is based on checklists and a modification of the methodology used by Bengston et al. (1988) in their study of forestry research capacity in the Asia-Pacific region. The approach is based on the analysis of certain indicators within the institutions' external and internal environments which are assumed to be related to research capacity, in addition to evaluating available research support within the institutions.

External environment

Within the external environment of each institution, three indicators were identified:

  • Scientific interactions with other research institutions. Such interactions are perceived to be instrumental in overcoming the phenomenon of 'research isolation', in addition to facilitating the development and sharing of resources, research methods and findings. Interactions among institutions can help to create the 'critical mass' of scientists required for some tasks, which individual institutions might not have on their own. They contribute to confidence building among researchers and institutions, and would appear to be a prerequisite for developing national and international collaboration in research.
  • Interactions with educational/training institutions. Such interactions may benefit research institutions through increased opportunities for training their staff, as well as possibilities for sharing resources such as staff, libraries, laboratories, software, computers and other equipment.
  • Interactions with users of research outputs. Research should be responsive and relevant to local needs. One indicator of this is the extent to which researchers interact with their clients. Forestry research in the region is becoming increasingly demand-driven, and therefore researcher-client interactions are a necessity if research is to fulfil its function and if the research outputs are to lead to successful outcomes (impact).

Internal environment

Three indicators were identified for the institutional internal environment.

  • Salary and related incentives. The monthly disposable income of researchers, comprised of net salaries and other monetary benefits, is one of the driving forces behind employment in research institutions. How researchers fare financially in relation to colleagues with similar qualifications in other institutions may influence the rate of staff turnover, the development and stability of research programmes, and staff morale in an organisation.
  • Non-salary incentives. These may be instrumental in increasing an institution's capacity to attract and retain researchers, to increase their productivity and to motivate individual staff. Where an institution is not competitive with respect to salary, or in economies where base salary is heavily taxed, a good incentive scheme may adequately compensate.
  • Use of formal and informal evaluations in decision making. The capacity to do research is also related to how an institution decides to manage its research resources. Formal and informal evaluations on completed and on-going research can provide information useful for better management of research.

Indicator for research support

Support to scientific staff, in the form of technicians within an institution, was considered an important input to research. The availability of technicians allows researchers to spend less time on routine technical matters and more time on scientific issues, thus increasing effective research time.

Annex 1 details the means of quantifying the indicators and the underlying assumptions.
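As an illustration of how such a support indicator might be quantified, the sketch below computes a technician-to-researcher ratio per institution. The institute names and staffing figures are invented for illustration only; the study's actual scoring rules are those given in Annex 1.

```python
def support_ratio(technicians: int, researchers: int) -> float:
    """Technicians per researcher; 0.0 for an institute with no researchers."""
    return technicians / researchers if researchers else 0.0

# Hypothetical institutes: (number of technicians, number of researchers)
institutes = {
    "Institute A": (12, 8),
    "Institute B": (5, 10),
}

for name, (tech, res) in institutes.items():
    print(f"{name}: {support_ratio(tech, res):.2f} technicians per researcher")
```

A higher ratio is read here as stronger research support, although, as noted under the limitations below, the ratio says nothing about what the optimum level for a given institution would be.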

Indicator for research outputs

There are many forms of research output other than those that appear in published format, for example software, demonstration research plots and oral presentations of research results to user groups. Research output is a key indicator of the capacity of a research institution, and should be expressed in proportion to the number of research staff.

Limitations of study methodology

There are a number of limitations associated with the study in general, and more specifically with the methodology adopted. An appreciation of these is important in evaluating the results. These include:

  • limitations associated with the individual indicators in capturing expected capacity aspects. For example, using the ratio of technical staff to research staff to gauge the extent of research support, without first establishing the optimum ratio for the institution, may lead to inconclusive results. Research funding could also have been used as an indicator, and would have strengthened the assessment of research support, had all institutions supplied this information; instead, only a modest overview is made using the limited data available.
  • the number of indicators chosen for the study may not give a full picture of the research capacity in all institutions. For example, organisational aspects within institutions may not be adequately quantified through this approach. Some indicators relying, for example, on research funding were omitted from the study for lack of adequate data.
  • survey data may be biased by respondents providing information on behalf of an institution, in which case a distorted picture of the institution's capacity for research could emerge.
  • coverage of institutions involved in forestry-related research was incomplete. The survey gave emphasis to the major players in individual countries. Only 19 of the 28 national institutions contacted constitute the sample for this study.
  • there are various economic and social factors influencing the performance of the institutions surveyed, and hence their capacity to conduct research; factors for which indicators were not assigned, but for which a qualitative assessment was given. These included level of economic development of individual countries, endowment with forest resources, role of forestry in the socio-economic development of individual countries and forestry development in public and private sectors.

These limitations notwithstanding, the methodology chosen is simple to understand, provides useful information and has potential for improvement. Although it does not lend itself to the determination of optimum or absolute values of capacity for each institution, it has the merit of determining relative research capacity, i.e. how the capacity of one institution relates to that of another. It is also capable of highlighting some aspects of institutional comparative advantage which are useful for the development of collaborative research among institutions. The approach is also relatively efficient in summarising a large body of information relevant to a particular institution.
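A minimal sketch of how relative, rather than absolute, capacity can be derived: each indicator is rescaled across the sampled institutions, so a score only expresses how one institution compares with another, not how far it is from any optimum. The indicator names, values and equal weighting below are hypothetical assumptions; the study's own indicator definitions and values appear in Annexes 1, 3 and 4.

```python
def rescale(values):
    """Min-max rescale a list of indicator values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw indicator scores for three institutions
indicators = {
    "scientific interactions": [3, 7, 5],
    "salary incentives":       [2, 2, 6],
    "research support":        [1.5, 0.5, 1.0],
}
names = ["Institute A", "Institute B", "Institute C"]

# Relative capacity = mean of rescaled indicators (equal weighting assumed)
rescaled = [rescale(vals) for vals in indicators.values()]
scores = [sum(col) / len(col) for col in zip(*rescaled)]
for name, score in sorted(zip(names, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```

Because every score depends on the minimum and maximum observed in the sample, adding or removing an institution changes all the scores; this is consistent with the method yielding only relative, sample-dependent comparisons.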