Capacity for Forestry Research in the Southern African Development Community

G.S. Kowero and M.J. Spilsbury


Chapter 1. Introduction

Chapter 2. Previous Forestry Capacity-related Work in the SADC Region

Chapter 3. Methodology

Survey of Methodologies

Study Methodology

Limitations of Study Methodology

Chapter 4. Results and Discussion

Research Resources

Research Environment

Chapter 5. Conclusions and Recommendations

Conclusions

Recommendations

References

Annex 1. Methodology and Indicators of Research Capacity

Annex 2. Forestry Research Manpower in the SADC Region

Annex 3. Values for Research Indicators by Institutes

Annex 4. Institutes by Research Capacity Indicators

Annex 5. Overview of Physical Resources by Institute

Annex 6. Institutions Visited and those which Mailed Information


List of Figures

Figure 1. Distribution of forestry-related researchers in the SADC region

Figure 2. Distribution, by country, of researchers with M.Sc. or Ph.D. and more than 4 years experience

Figure 3. Researchers, by institution, with M.Sc. or Ph.D. and at least 4 years experience

Figure 4. Number of research staff by institute and budget per researcher


List of Tables

Table 1. Some positive and negative aspects of regional approaches

Table 2. Distribution of research operational expenses in some institutions (%)

Table 3. Research support facilities in sample institutions

Table 4. Research interactions and their perceived value

Table 5. Interactions with educational institutions and users of research results

Table 6. Salary and non-salary incentives

Table 7. Use of formal and informal evaluations

ANNEX 1

METHODOLOGY AND INDICATORS OF RESEARCH CAPACITY

The methodology attempts to capture the most important aspects of research capacity by means of quantitative indicators or proxies. The comparison of indicators between institutions allows determination of the relative research capacities. It does not, however, yield optimum or absolute values. The indicators have the advantage of being simple to understand and the data required can, generally, be collected quickly and efficiently.

SURVEY METHODOLOGY

Data collection was largely by means of structured interviews with the heads of forestry research institutes or with senior forestry-related researchers in other organisations. The interviews were conducted informally and the aims and background to the study were explained. In some institutions the full complement of data required (e.g., financial information, publications and staff breakdowns) was not readily available; in most of these cases the missing data were provided at a later date by mail or facsimile.

The data collected were tailored to the requirements of the methodology for quantifying the set of indicators described in this annex; additional qualitative information was captured through further discussion and visits to institute facilities and field sites. Bengston et al. (1988) developed a methodology that served as the starting point for this study. Of the indicators developed by Bengston et al., the following have been adopted without modification:

  • Salary incentives
  • Non-salary incentives
  • Use of evaluation
  • Research support

These are further described below. The survey data yielded values of the indicators for each of the institutes. These were processed in a simple spreadsheet, and a graph was produced for each institute showing the quartile values for each of the indicators, standardised to a uniform scale. The value of each indicator was then plotted against the sample quartile values, providing a measure of relative research capacity with respect to the indicators used.
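
To illustrate this processing step, the short Python sketch below computes sample quartiles for each indicator across institutes and assigns each institute's value to a quartile. The indicator names and figures are hypothetical and serve only to show the calculation.

```python
# Hypothetical illustration of the quartile standardisation described above.
import numpy as np

indicator_values = {
    "HR": {"Institute A": 42, "Institute B": 18, "Institute C": 65, "Institute D": 30},
    "RI": {"Institute A": 6.0, "Institute B": 2.5, "Institute C": 9.0, "Institute D": 4.0},
}

for indicator, by_institute in indicator_values.items():
    values = np.array(list(by_institute.values()), dtype=float)
    q1, q2, q3 = np.percentile(values, [25, 50, 75])  # sample quartiles
    print(f"{indicator}: Q1={q1:.1f}, Q2={q2:.1f}, Q3={q3:.1f}")
    for institute, value in by_institute.items():
        # Quartile 1..4 depending on how many quartile boundaries are exceeded
        quartile = 1 + sum(value > q for q in (q1, q2, q3))
        print(f"  {institute}: value = {value}, quartile = {quartile}")
```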

HUMAN RESOURCES (HR)

Effective scientific manpower is the single most important factor affecting research capacity. Most studies, including Bengston et al. (op. cit.), rely on total staff numbers to reflect the resource available. In this study an indicator that attempts to reflect staff experience and qualifications has been developed.

HR i = Σ j (G j + 2q j) + 4E

where:

i = ith research institution

G = the length of service of the jth staff member, with:

1 = less than four years, 2 = four to ten years, 3 = over ten years

q = the highest qualification of the jth researcher, with:

0 = B.Sc., 1 = M.Sc. and 2 = Ph.D.

E = the total number of expatriate research staff in the institute

This expression reflects the following table, in which the relative 'worth' of researchers to a research institute has been arbitrarily quantified with respect to each researcher's qualifications and duration of service within the institute.

Table A1.1 Weightings applied to human resources indicator.

                          Duration of Service
Highest Qualification     < 4 years     4-10 years     > 10 years
Ph.D.                         5              6              7
M.Sc.                         3              4              5
B.Sc.                         1              2              3
Expatriate                    4              4              4

This 'relative worth' implies several assumptions that may not adequately reflect reality:

  1. New recruits to a research organisation are assumed to be less effective than longer-serving staff, and the effectiveness of staff is assumed to increase over time irrespective of the duties performed or previous experience outside the institute.
  2. Highest qualification is directly proportional to the level of competence in conducting all research-related duties.
  3. The 'relative worth' of a staff member is independent of the duties performed.
  4. Expatriates are assumed to have already reached their maximum potential upon entry and, since this category can have a variety of qualifications, the median 'worth' value was adopted.
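
As an illustration, the Python sketch below computes HR i from the weights in Table A1.1 for a hypothetical staff list; the staff composition and the labels for service bands and qualifications are assumptions made for the example.

```python
# Hypothetical illustration of the HR indicator using the Table A1.1 weights.

SERVICE_SCORE = {"<4 years": 1, "4-10 years": 2, ">10 years": 3}   # G
QUALIFICATION_SCORE = {"B.Sc.": 0, "M.Sc.": 1, "Ph.D.": 2}          # q

def human_resources_indicator(national_staff, expatriates):
    """national_staff: list of (qualification, service band) pairs;
    expatriates: number of expatriate researchers, each weighted 4."""
    total = sum(SERVICE_SCORE[service] + 2 * QUALIFICATION_SCORE[qualification]
                for qualification, service in national_staff)
    return total + 4 * expatriates

# One Ph.D. with >10 years (7), two M.Sc. with 4-10 years (4 each),
# one new B.Sc. recruit (1) and one expatriate (4): HR = 20
staff = [("Ph.D.", ">10 years"), ("M.Sc.", "4-10 years"),
         ("M.Sc.", "4-10 years"), ("B.Sc.", "<4 years")]
print(human_resources_indicator(staff, expatriates=1))
```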

INDICATORS FOR EXTERNAL ENVIRONMENT

The three indicators that attempt to capture interactions in the external research environment have the limitation of not linking the interactions to particular beneficial outcomes, e.g., new technologies, publications, or collaborative research arrangements.

Research Interactions (RI)

Scientific interactions with other research organisations are thought to be essential in overcoming the phenomenon of 'research isolation' and in developing and sharing research methods and findings. They are also a pre-requisite for developing national and regional research co-operation. The extent of interactions with other research institutions is quantified by the following indicator:

RI i = w(aF + bM + cO)

where:

i = ith research institution

F = frequency of interaction with other forestry research institutions in the same country

M = frequency of interaction with non-forestry research organisations within the same country

O = frequency of interaction with research institutions outside the country

The frequencies of interaction (F, M and O) may take the following values:

0 = never interacts, 1 = interactions are occasional, 2 = interactions are frequent

a, b, and c represent the perceived benefits of the interactions defined in F, M and O respectively, and may take the following values:

1 = no real benefit, 2 = moderate benefit, 3 = high benefit

w = number of research staff in the ith institute/mean number of research staff per institute from the sample

This indicator includes a weighting not used by Bengston et al. The rationale for the inclusion of a weight is that the total extent of interactions by a research institute is likely to be proportional to the number of research staff able to interact. Thus, the indicator takes into account the frequency of, and benefit derived from, various interactions adjusted by a weighting related to the number of staff in the institute.
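
A minimal Python sketch of the RI calculation follows; the frequencies, benefit scores and staff numbers are hypothetical. The EI indicator in the next section can be computed in exactly the same way, using only the two educational terms (dD and eQ).

```python
# Hypothetical illustration of the RI indicator.

def research_interactions(F, a, M, b, O, c, staff, mean_staff):
    """F, M, O: interaction frequencies (0, 1 or 2) with national forestry,
    national non-forestry and foreign research institutions respectively;
    a, b, c: the corresponding perceived benefits (1, 2 or 3);
    w = staff / mean_staff for the sample."""
    w = staff / mean_staff
    return w * (a * F + b * M + c * O)

# Institute with 12 researchers (sample mean 8): occasional, moderately
# useful national forestry links; no non-forestry links; frequent, highly
# beneficial foreign links -> RI = 1.5 * (2*1 + 1*0 + 3*2) = 12.0
print(research_interactions(F=1, a=2, M=0, b=1, O=2, c=3, staff=12, mean_staff=8))
```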

Educational Interactions (EI)

Interaction with educational institutions is assumed to enhance research capacity in several ways, including training of research staff, exposure to new ideas and, perhaps, access to current literature. The interactions between the institutions surveyed and educational establishments are quantified by the following expression:

EI i = w(dD + eQ)

where:

i = ith research institution

D = frequency of interaction with national educational institutions

Q = frequency of interaction with educational institutions outside the country

d = a measure of the value of the perceived benefits from the in-country educational interactions

e = a measure of the value of the perceived benefits from foreign educational interactions

w = number of research staff in the ith institute/mean number of research staff per institute from the sample

The weighting is applied for the same reasons as in the RI indicator above.

User Interactions (UI)

The leverage obtained from research funding is enhanced if research is 'demand-driven', i.e., a clear need is fulfilled by the research activity. The extent of interaction with users or potential users of research outputs is taken as a proxy for the extent to which research is targeted to potential users. The indicator is based on the premise that the 'extent and effectiveness' of interactions can be quantified from the time and money an institute allocates to these activities:

UI i = B + wT

where:

i = ith research institution

B = proportion, in %, of annual budget associated with technical transfer and extension of results

T = proportion, in %, of staff time associated with technical transfer and extension of results

w = number of research staff in the ith institute/mean number of research staff per institute from the sample

One shortcoming of this indicator is that the percentage of the annual budget allocated to extension/user interactions may partly 'double count' the staff time allocation, which also features in the indicator. The staff time component has been weighted by the number of staff in the institute as a ratio of the sample mean. Again, the rationale is that the total extent of 'user interaction' is the product of the mean time per researcher and the number of researchers, in addition to the financial resources available to facilitate transfer of research results.
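
The UI calculation can be sketched in Python as follows, using hypothetical figures.

```python
# Hypothetical illustration of the UI indicator.

def user_interactions(B, T, staff, mean_staff):
    """B: % of annual budget on technology transfer/extension of results;
    T: % of staff time on technology transfer/extension of results;
    w = staff / mean_staff for the sample."""
    w = staff / mean_staff
    return B + w * T

# 10% of budget and 20% of staff time on extension, 6 researchers
# against a sample mean of 8: UI = 10 + 0.75 * 20 = 25.0
print(user_interactions(B=10, T=20, staff=6, mean_staff=8))
```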

The indicator has a number of obvious weaknesses; it takes no account of the means by which results are transferred to users, nor does it attempt to assess the relative merit of the different approaches (e.g., workshops, demonstration trials, stakeholder participation in research design). It does not highlight the extent to which research findings may be transferred by a third party that services the research institute through extension activities, nor does it capture the extent to which user needs feature in establishing research priorities. These are all important aspects in ensuring that research outputs yield successful outcomes for the targets of research.

INDICATORS FOR INTERNAL ENVIRONMENT

Salary and related Incentives (SI)

Salary incentives for researchers are very important in attracting and maintaining the key research resource. The indicator captures the remuneration available to researchers relative to similarly qualified professionals in the same country:

SI i = C i

where:

Ci is the percentage advantage (or disadvantage if negative) in the researchers' income (salaries and allowances) for the ith institute over that of colleagues with equivalent qualifications and experience in other sectors of the economy in the same country.

In calculating this index from the data, the absolute value of the largest negative C value in the sample (-100) was added to each institute's value to standardise the values so that none was negative. Relative ranking remained unaffected.

For government research organisations comparisons were made with private sector or parastatal organisations. The interpretation of the results from this indicator requires care. The indicator reflects the within country competitiveness of the institute in terms of salary incentives, not the total remuneration relative to the sample as a whole.
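
The standardisation step described above can be sketched as follows; the C values are hypothetical.

```python
# Hypothetical illustration of the SI standardisation: shift all C values by
# the absolute value of the most negative one so that none is negative,
# leaving the ranking unchanged.
salary_advantage = {"Institute A": -100, "Institute B": -25, "Institute C": 15}
shift = -min(salary_advantage.values())  # 100 in this sample
standardised = {name: c + shift for name, c in salary_advantage.items()}
print(standardised)  # {'Institute A': 0, 'Institute B': 75, 'Institute C': 115}
```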

Non-Salary Incentives (NSI)

Non-salary incentives for researchers are considered to be very important in retaining the key resource of well-qualified, highly motivated research workers. Non-salary incentives can often compensate for poor base salaries through, for example, housing and transport allowances, and inadequate non-salary incentives increase the likelihood of staff turnover or depletion. The indicator is defined as:

NSI i = Σ j r j R j

where:

i = ith research institution

R = a measure of the frequency with which the various forms of rewards are used

r = a measure of the effectiveness of the rewards in contributing to researcher productivity

j = types of incentives offered to researchers

The frequency of use for each of the rewards used is ranked as follows:

1 = used occasionally, 2 = used frequently, 3 = always used (i.e. built into the system)

The effectiveness of each of the rewards in stimulating researcher productivity is ranked as:

0 = not effective, 1 = slightly effective, 2 = moderately effective, 3 = very effective

The types of incentive j include:

Peer recognition awards, housing and transport allowances, travel to other countries, career development (schedule of service), professional responsibility, sabbaticals/internships, consultancies/training, and award of additional research funding.
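
A minimal Python sketch of the NSI calculation follows; the incentive types used and their scores are hypothetical.

```python
# Hypothetical illustration of the NSI indicator.

def non_salary_incentives(incentives):
    """incentives: dict mapping incentive type -> (frequency R in 1..3,
    effectiveness r in 0..3); incentives never offered are simply omitted."""
    return sum(R * r for R, r in incentives.values())

example = {
    "housing and transport allowances": (3, 2),  # always used, moderately effective
    "travel to other countries": (1, 3),         # occasional, very effective
    "consultancies/training": (2, 1),            # frequent, slightly effective
}
print(non_salary_incentives(example))  # 3*2 + 1*3 + 2*1 = 11
```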

Use of Formal and Informal Evaluations (EV)

The use of evaluation in research decision making is assumed to be linked to the capacity to effectively manage research, another important component of research capacity. An evaluation index (EV) based on formal and informal evaluations was quantified as follows:

EV i = Σ j (U j + V j)

where:

i = ith research institution

U = the frequency with which a formal evaluation is used

V = the frequency of use of informal evaluations

The frequency of use of either formal or informal evaluations U and V is ranked as:

0 = never used, 1 = seldom used, 2 = occasionally used, 3 = frequently used, 4 = always used (i.e. built into the system)

j = types of evaluation conducted: justifying past expenditure, in support of funding or budget requests, choosing among research projects, monitoring ongoing research activity
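
A minimal Python sketch of the EV calculation, with hypothetical frequency scores:

```python
# Hypothetical illustration of the EV indicator.

EVALUATION_TYPES = ["justifying past expenditure", "funding or budget requests",
                    "choosing among research projects", "monitoring ongoing research"]

def evaluation_index(formal, informal):
    """formal, informal: dicts mapping evaluation type -> frequency score (0-4)."""
    return sum(formal.get(j, 0) + informal.get(j, 0) for j in EVALUATION_TYPES)

formal = {"funding or budget requests": 4, "monitoring ongoing research": 2}
informal = {"choosing among research projects": 3, "monitoring ongoing research": 3}
print(evaluation_index(formal, informal))  # 4 + 2 + 3 + 3 = 12
```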

This indicator implies that formal and informal evaluation methods are of equal merit. The indicator does not attempt any standardisation in relation to the methodology or processes involved in the evaluation activities considered. Therefore, evaluations based on well-structured, relevant information and defined procedures will 'score' the same as inadequately designed and poorly implemented evaluation procedures. Also, the indicator does not capture the importance of the evaluation to the internal management of the institute, nor does it record the extent to which evaluations serve only externally imposed requirements. These deficiencies were ameliorated, to some extent, by the collection of qualitative information on evaluation procedures.

During data collection it became apparent that many respondents had difficulty with the classification of 'frequency of use' of evaluation procedures. Future surveys should consider the use of three categories should this indicator be used in the same form: 0 = never used, 1 = occasionally used, 3 = always used.

TECHNICAL SUPPORT (TS)

Technical support is also an important factor in research capacity: its provision releases more 'effective research time' for researchers. The optimum ratio of technicians to researchers will depend on the type of research being conducted.

TS i = S / P

where:

i = ith research institution

S = number of technicians in the ith institution

P = number of researchers in the ith institution

This expression implies that the higher the ratio of technicians to researchers, the better. Optimum levels for each institute are not known, nor is the opportunity cost of allocating too many resources to the provision of technicians. Results must therefore be interpreted carefully, as the institutes with the greatest number of technicians per researcher may be making inefficient use of research funds by allocating excessive resources to technical support.

RESEARCH OUTPUTS (RO)

Although research output is a retrospective indicator of research capacity, it can provide insights into the productivity of a research institute, and should be expressed in proportion to the number of research staff. The indicator used can be expressed as:

RO i = (H - I + 3I) / P = (H + 2I) / P

where:

i = ith research institution

H = total number of publications for the ith institution in the preceding year

I = total number of publications for the ith institution appearing in refereed journals in the preceding year

P = number of researchers in the ith institution

Clearly, the indicator gives refereed papers an arbitrary weight three times that of unrefereed material. Although the magnitude of the weight is arbitrary, the indicator implies that refereed material has greater 'value', because its dissemination is likely to be wider and the 'quality control' of the research findings more reliable. The indicator fails to address several aspects of research: the time (scientist-year equivalents) required to conduct different kinds of forestry research, e.g., tissue culture experiments versus tree provenance/site selection trials; and other forms of research output or product, e.g., equipment, software and practical techniques that are not readily described in the format of a scientific paper. All of these are excluded from the indicator, yet may represent major research efforts.
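
As a worked illustration, the Python sketch below computes RO for a hypothetical institute.

```python
# Hypothetical illustration of the RO indicator.

def research_outputs(H, I, P):
    """H: total publications in the preceding year; I: publications appearing
    in refereed journals; P: number of researchers.
    RO = (H - I + 3I) / P = (H + 2I) / P."""
    return (H + 2 * I) / P

# 15 publications, of which 5 refereed, by 10 researchers: RO = 2.5
print(research_outputs(H=15, I=5, P=10))
```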