A method for estimating mortality rates in humanitarian emergencies using previous birth history

 

Mark Myatt & Anna Taylor

June 2002


Introduction

 

Traditionally, prevalence (e.g. the prevalence of undernutrition) and incidence (e.g. mortality) have been measured using two quite different epidemiological methods. These are:

 

Prevalence : Cross-sectional surveys such as the modified EPI 30 cluster survey commonly used to estimate the prevalence of undernutrition.

 

Incidence : Surveillance (monitoring) systems such as monitoring of burial places, routine reports from (e.g.) street leaders in refugee camps, routine reports of deaths in hospital from curative services.

 

Surveillance systems usually require a reasonably stable situation and reliable population estimates. They also take a considerable time to establish and need to run for some time before data can be meaningfully analysed. These factors make them unsuitable for estimating mortality in emergency assessments. It is possible to estimate cumulative incidence retrospectively using a cross-sectional survey, and this is currently the recommended method for estimating mortality in emergencies. There are, however, problems with this approach when it is used to estimate mortality rates:

 

Manipulation : Any emergency assessment is prone to manipulation by an aid-savvy population or regime. Such manipulation will, generally, lead to an overestimation of incidence.

 

Taboo : In some cultures death is a taboo subject. This makes asking questions about deaths problematic and will lead to an underestimation of mortality.

 

Unreliability : Many handbooks on emergency assessment mention the importance of estimating mortality rates but provide scant detail on exactly how this should be done. Whilst reviewing reports of emergency assessments we found that a variety of methods were used. Many of these assessments committed one or more gross methodological blunders, the most common of which was nesting the mortality survey within a nutrition survey, thereby excluding households in which all children under five years of age had died and so underestimating mortality. In general, the methods used lacked standardised procedures for defining households, enumerating household members, selecting the principal informant, and ascertaining whether identified household members were living at home during the survey period; they also failed to define live births and did not use a standardised question set. This lack of standardisation is likely to lead to large within- and between-observer variation within a single survey and, perhaps more importantly, to large variations between surveys that are due to methodological problems and inconsistencies rather than to differences in underlying mortality rates.

 



 

Lack of guidance on sample size calculations and data-analysis procedures : Current editions of handbooks on emergency assessment do not provide details of how sample sizes should be calculated and offer conflicting advice on minimum sample sizes, which is couched in terms of a minimum number of individuals or households rather than in units of person-time-at-risk (i.e. the product of the number of individuals followed up and the duration of the follow-up period). Key analytical procedures, such as the calculation of a confidence interval on an estimated rate, are also not covered in these handbooks.

 

Given these problems with the way mortality is currently estimated in emergency assessments, SCF(UK) decided to design and undertake preliminary testing of a method that might overcome these problems.

 


Desirable attributes of a method

 

The first step in designing the new method was to decide on a set of desirable attributes. After some deliberation, the following list was arrived at:

 

Familiar sampling method : The new method must be able to use proximity sampling of households as is used in most variants of the EPI 30 cluster method because most workers in the field are already familiar with this method (e.g. it is a commonly used method for assessing the nutritional status of a population in emergency situations). Other methods (e.g. simple random sampling, systematic sampling, stratified sampling, and adaptations of the EPI method) may also be used.

 

Reliable : The new method must use a standard validated question set applied to a single informant with a single relationship to the deceased.

 

Low overhead : The new method must have low resource overheads. It must be possible for data to be collected by a single enumerator. The data must also be simple to collect. The method should not require entry of large volumes of data onto computer. The data must be simple to analyse and must not require the use of specialist computer software.

 

Resistance to manipulation and taboo : The intent of a mortality survey using the new method must not be obvious (i.e. it must not be obvious that data is being collected on recent deaths). The question set used must avoid any mention of death.

 

In addition, it was decided that simple tools for sample-size calculation and the calculation of a confidence interval on an estimated rate should be developed and placed in the public domain. It was decided that these tools should be general to the problem of estimating a single rate rather than being tied to the new method.

 

 


Which mortality rate to estimate?

 

These considerations led to the decision to estimate under five years mortality rather than all-age mortality (crude mortality, CMR). Estimating under five years mortality has the following advantages:

 

A single informant with a single relationship to the deceased may be used (i.e. mothers).

 

A standard validated question set (the UNICEF 'previous birth history' (PBH) method) is already available. This question set makes no mention of death and has low data collection and analysis overheads. The PBH question set is shown in box 1. The flow of questions in the PBH question set is illustrated in figure 1.

 

An additional rationale for estimating under five years mortality rather than all-age mortality is that the under five years population is an early warning population (i.e. mortality is likely to rise in this population before it rises in the general population).

 

One disadvantage with the proposed method is that maternal orphans are excluded by the requirement that only living mothers are interviewed. It might be expected that the survival probabilities of maternal orphans are considerably lower than those of children whose mothers are still alive. This will cause any method based on the PBH question set to underestimate mortality. The degree of underestimation will depend upon the maternal mortality rate. Underestimation may be a particular problem in situations of exceptionally high maternal mortality coupled with high under five years mortality due to (e.g.) HIV / AIDS or malaria epidemics in areas of unstable malaria endemicity. This problem was not considered in the development and testing of the method reported here.

 

Most emergency handbooks concentrate on collecting data to estimate both crude mortality and under five years mortality. This approach is superficially attractive but is subject to the problems of manipulation, taboo, and unreliability mentioned earlier. Estimates of under five years mortality from such surveys are also likely to lack precision due to inadequate sample sizes.

 

It should be noted that under five years mortality is not an appropriate indicator for initial assessments undertaken where considerable under five years mortality has occurred prior to the start of the follow-up period (e.g. initial assessments undertaken very late in an unameliorated nutritional emergency) or in situations where mortality is likely to be highest in the adult or elderly population.


Data arising from the PBH question set

 

The PBH question set yields three variables per mother. These are:

 

The number of children at risk

 

The number of new births in the survey period

 

The number of new deaths in the survey period

 

Such a small number of variables allows data collected from each mother to be summed by hand. Cluster or community level tallies can also be summed by hand. It is even possible to sum the cluster level tallies and calculate mortality rates directly, although the calculation of confidence intervals is complicated if a multi-stage (e.g. cluster) sample is used. Hand calculation of mother and cluster tallies reduces the data-entry overhead to just three items per cluster (i.e. 90 data items for a 30 cluster survey).
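
As an illustration of how little data handling is involved, the following sketch (in Python, with illustrative record and variable names that are not part of the method itself) sums a handful of per-mother tallies into the three cluster-level totals:

# One record per mother: (children at risk, new births, new deaths).
# The numbers below are illustrative only.
cluster_mothers = [
    (2, 0, 0),
    (1, 1, 0),
    (3, 0, 1),
]

children_at_risk = sum(m[0] for m in cluster_mothers)
new_births = sum(m[1] for m in cluster_mothers)
new_deaths = sum(m[2] for m in cluster_mothers)

# These three totals are the only data items entered per cluster,
# i.e. 3 x 30 = 90 items for a 30 cluster survey.
print(children_at_risk, new_births, new_deaths)   # prints: 6 1 1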

 

 

Analysing the PBH data

 

Survey level totals plug directly into the standard mortality estimation formula:

 

                  new deaths

---------------------------------------------- * rate multiplier

children at risk - ½ new deaths + ½ new births

 

The rate multiplier is the reference population (e.g. per 1,000, per 10,000) divided by the length of the follow-up period in the required time unit (e.g. 90 for a 90-day follow-up period and a rate expressed per day).
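
For readers who prefer code to formulae, a minimal Python sketch of this calculation follows; the function and argument names are illustrative rather than part of the method:

def mortality_rate(new_deaths, children_at_risk, new_births,
                   follow_up_days, reference_population=10_000):
    # Standard mortality estimation formula given above, with the rate
    # multiplier expressed per reference_population per day.
    mid_period_population = children_at_risk - 0.5 * new_deaths + 0.5 * new_births
    rate_multiplier = reference_population / follow_up_days
    return (new_deaths / mid_period_population) * rate_multiplier

# Illustrative example: 15 new deaths among 830 children at risk with
# 25 new births over a 90 day follow-up period.
print(round(mortality_rate(15, 830, 25, 90), 2))   # prints: 2.0 (per 10,000 / day)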

 

Calculation of confidence intervals relies on:

 

                  new deaths

----------------------------------------------

children at risk - ½ new deaths + ½ new births

 

being a proportion or period prevalence. Confidence intervals for a proportion from a two-stage cluster sampled survey may be calculated using the standard formula:

 

     95% CI = p ± 1.96 * sqrt( Σ (pi - p)² / ( k (k - 1) ) )

Where:

 

            p          =          proportion observed in whole sample

            pi            =          proportion observed in cluster i

            k          =          number of clusters

 



 

Use of this formula accounts for the loss of precision (i.e. the design effect) due to the use of a two-stage sampling method. The format of the data and the equations required to calculate rates and confidence intervals are simple enough for all calculations to be performed using standard spreadsheet packages. Figure 2 shows an example spreadsheet created using Microsoft Excel. This spreadsheet is available (in Microsoft Excel ’95 format) from:

 

http://www.myatt.demon.co.uk/samplerate.htm

 

This is a general tool and may be used to calculate rates and confidence intervals on count data collected using a two-stage cluster sampled survey.
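
For those who prefer not to use a spreadsheet, the same calculation can be written as a short script. The Python sketch below assumes one row of the three tallies per cluster and applies the formulae given above; the data and names are illustrative only:

import math

# One (children at risk, new births, new deaths) tally per cluster.
# Four clusters are shown for brevity; a real survey would have 30 or more.
clusters = [
    (210, 6, 4),
    (195, 8, 2),
    (220, 5, 6),
    (205, 7, 3),
]

def period_prevalence(children_at_risk, new_births, new_deaths):
    # Deaths as a proportion of the mid-period population
    return new_deaths / (children_at_risk - 0.5 * new_deaths + 0.5 * new_births)

# Whole-sample proportion (p) from the summed tallies
risk = sum(c[0] for c in clusters)
births = sum(c[1] for c in clusters)
deaths = sum(c[2] for c in clusters)
p = period_prevalence(risk, births, deaths)

# Cluster-level proportions (pi) and the two-stage cluster sample standard error
pi = [period_prevalence(*c) for c in clusters]
k = len(clusters)
se = math.sqrt(sum((x - p) ** 2 for x in pi) / (k * (k - 1)))

# Express the proportion and its 95% confidence limits per 10,000 per day
multiplier = 10_000 / 90    # 90 day follow-up period
print("rate  :", p * multiplier)
print("95% CI:", (p - 1.96 * se) * multiplier, (p + 1.96 * se) * multiplier)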

 


Sample size calculation

 

Required sample sizes can be calculated using the standard formula:

 

              m
     n  =  -------
              e²

 
Where:

 

m          =          rate

e          =          required size of standard error

 

For example, the sample size required to estimate a mortality rate of 2 / 10,000 persons / day with a 95% confidence interval of ± 1 / 10,000 persons / day using simple random sampling is:

 

         0.0002

n = ---------------- = 76,832 person-days-at-risk (PDAR)

    (0.0001 / 1.96)²

 

This person-days-at-risk (PDAR) figure may be expressed as the number of children for which survival data should be collected by dividing the PDAR by the length of the follow-up period:

 

                                  PDAR

     n(children) = ----------------------------------

                   length of follow-up period in days

 

For example, with a follow-up period of 90 days, data on the survival of:

 

                   76,832

     n(children) = ------ = 854

                     90

 

children is required. This may be expressed as the number of mothers that should be interviewed by dividing this figure by an estimate of the average number of children under-five per mother alive at any time during the follow-up period:

 

                                    n(children)

     n(mothers) = -----------------------------------------------

                  average number of children < 5 years per mother

 

More complex sampling strategies (e.g. the EPI 30 cluster method) can be accommodated by multiplying the calculated sample size by the expected design effect (usually estimated to be 2.0).
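
The whole chain of calculations described above (PDAR, children, mothers, and the design effect adjustment) can be reproduced in a few lines of code. The Python sketch below uses the worked figures from this section; the variable names are illustrative:

# Sample size chain using the formulae above (illustrative inputs).
expected_rate = 2 / 10_000            # m : rate to estimate (per person per day)
precision = 1 / 10_000                # required 95% CI half-width
standard_error = precision / 1.96     # e : required size of standard error

pdar = expected_rate / standard_error ** 2   # person-days-at-risk (~76,832)

follow_up_days = 90
children = pdar / follow_up_days             # children to follow up (~854)
children_per_mother = 2.0                    # assumed local average
mothers = children / children_per_mother     # mothers to interview

design_effect = 2.0                          # expected design effect for a cluster sample
mothers_cluster_sample = mothers * design_effect

print(round(pdar), round(children), round(mothers), round(mothers_cluster_sample))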

 

A sample-size calculator that implements the PDAR calculation has been developed and placed in the public domain. The sample size calculator is available from:

 

http://www.myatt.demon.co.uk/samplerate.htm

 

The sample-size calculator is a general tool for use with any survey that estimates a single rate.

 


Experiences in the field

 

A preliminary test of the proposed method was undertaken in four food economy zones (FEZ) in Northern Darfur, Sudan in January and February 2002. The start of Ramadan was used as the start of the recall period. Data was collected using a two-stage cluster sample. Sample size requirements were calculated as follows (a short arithmetic check of these figures is given after the list):

 

            Rate to estimate :                                  2 / 10,000 persons / day
            95% CI (±) :                                        1 / 10,000 persons / day
            Expected design effect :                            2.0
            PDAR :                                              153,664
            Average length of follow-up period (days) :         80
            Sample size (children) :                            1,921   (i.e. 153,664 ÷ 80)
            Average number of children < 5 years per mother :   2.0
            Sample size (mothers) :                             961     (i.e. 1,921 ÷ 2.0)
            Cluster size (mothers) :                            26
            Minimum number of clusters of size 26 required :    37      (i.e. 961 ÷ 26)
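
The arithmetic in this list can be checked in a few lines of Python; the sketch below rounds up at each step, as in the figures above:

import math

pdar = (2 / 10_000) / ((1 / 10_000) / 1.96) ** 2 * 2.0   # design effect 2.0 -> ~153,664 PDAR
children = math.ceil(pdar / 80)                          # 80 day follow-up period -> 1,921
mothers = math.ceil(children / 2.0)                      # 2 children < 5 years per mother -> 961
clusters = math.ceil(mothers / 26)                       # clusters of 26 mothers -> 37
print(round(pdar), children, mothers, clusters)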

 

For each test survey, data was collected using 38 clusters of 26 mothers (one extra cluster was sampled to ensure that the sample size requirement was met even if one or two clusters were located in communities with fewer than 26 mothers). The required sample size was met in three of the four food economy zones (table 1). The expected design effect of 2.0 that was used to calculate the required sample size proved to be an overestimate and the sample sizes collected were sufficient to estimate the expected rate with a precision better than the specified ± 1 / 10,000 persons / day.

 

The following procedures and definitions were used in the surveys:

 

All women in the reproductive age range in a selected household were questioned.

 

Births were defined as live births. A distinction was made between live births and still births or miscarriages. Live born children were defined as those born alive even if the child died immediately after birth. A baby who cried or breathed, if only for a few minutes, counted as a live birth.

 

The results of these surveys are summarised in table 1. It was not possible to validate these results by comparison with surveillance data but the mortality rates reflect the findings of nutrition surveys undertaken at the same time in the same villages (i.e. the Tombac area was characterised by higher prevalence of undernutrition by MUAC than by weight-for-height z-scores and a high prevalence of oedema).

 

Data proved both easy and rapid to collect with enumerators taking no longer than the teams measuring children for concurrent nutrition surveys. Data analysis was also straightforward.

 


Further work

 

The results of the initial testing are promising but further work is required to:

 

Validate the estimates arising from the proposed method : This may be easiest to do in a refugee camp where routine monitoring of deaths is undertaken. Validation should be a relatively rapid process given that random or systematic samples may be used. Other overheads (e.g. travel costs) should also be low.

 

Establish reasonable design effects for use in sample size calculations : The pilot surveys used an expected design effect of 2.0 in order to calculate the required sample size for each survey. This produced estimates with a reasonable degree of precision. The test surveys presented here yielded negligible design effects. It is possible that, as experience with the method grows, the expected design effect may be revised downwards, thus reducing costs.

 

Establish benchmark values for the interpretation of under five years mortality : The benchmarks that are currently used to interpret under five years mortality are derived by doubling those used to interpret crude mortality (table 2). Under five years mortality may be subject to higher regional variation than crude mortality and simple global benchmarks may prove inappropriate. This problem may be assessed by a desk review of mortality data and appropriate benchmarks developed.

 

Use of the method with alternative sampling methods : The EPI survey method is limited to producing an overall estimate of mortality for a survey area. If estimates of mortality are required at village level then other sampling strategies (e.g. sequential sampling plans) could be used. At present there is no experience of using the proposed method with alternative sampling methods.

 

 


Box 1 : The PBH question set

 

 

            Have you ever given birth?

 

                        If NO, then STOP

 

            When was your most recent birth?

 

                        If more than 5 years ago, then STOP

 

            Was this before or after [START DATE]?

 

                        If after [START DATE], then ADD 1 TO NEW BIRTHS

 

            Where is this child now?

 

                        If ALIVE, then ADD 1 TO KIDS AT RISK

 

                        If DEAD, then:

 

                                    Did this child die before or after [START DATE]?

 

                                    If child died after [START DATE], then:

 

                                                ADD 1 TO NEW DEATHS, ADD 1 TO KIDS AT RISK

 

            Did you have a birth before this child?

 

                        If NO, then STOP

 

                        If YES ... REPEAT for previous birth
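
The flow of questions in Box 1 maps directly onto a simple tallying loop. The Python sketch below is illustrative only: it assumes each mother's births are listed most recent first, with a date of birth, a live/dead status, and a date of death where relevant; none of these names form part of the question set itself.

from datetime import date

def pbh_tally(births, start_date, survey_date):
    # Walk one mother's previous birth history (most recent birth first)
    # and return (kids at risk, new births, new deaths).
    kids_at_risk = new_births = new_deaths = 0
    for birth in births:
        # Stop when the most recent remaining birth is more than five years old
        if (survey_date - birth["born"]).days > 5 * 365:
            break
        if birth["born"] > start_date:
            new_births += 1
        if birth["alive"]:
            kids_at_risk += 1
        elif birth["died"] > start_date:
            new_deaths += 1
            kids_at_risk += 1
    return kids_at_risk, new_births, new_deaths

# Illustrative data: one mother with two births in the last five years
births = [
    {"born": date(2001, 12, 20), "alive": True,  "died": None},
    {"born": date(1999, 3, 4),   "alive": False, "died": date(2002, 1, 10)},
]
print(pbh_tally(births, start_date=date(2001, 11, 17), survey_date=date(2002, 2, 1)))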

 

 

 

 

 


Figure 1 : The flow of the PBH question set

 


Figure 2 : Example spreadsheet for calculating a 95% confidence interval

 


Figure 3 : Sample size calculator

 

 


Table 1 : Results of four retrospective mortality surveys in North Darfur, February 2002

 

 

 

FEZ        Child days   Percentage of     Rate / 10,000 / day   Design   Interpretation
           at risk      sample size met   (95% CI)              effect

Goz        162,891      106%              0.92 (0.40, 1.44)     1.11     Normal
Tombac     166,536      108%              3.78 (3.07, 4.49)     0.77     Elevated - possibly serious situation
Pastoral   178,432      116%              0.23 (0.02, 0.43)     0.91     Normal
Non-wadi   139,435      91%               0.65 (0.25, 1.05)     0.95     Normal

 


Table 2 : Benchmarks for the interpretation of mortality rates

 

CMR                       U5MR                      Interpretation
(deaths / 10,000 / day)   (deaths / 10,000 / day)

0.5                       1                         Normal rate
< 1                       < 2                       Elevated, cause for concern
1 - 2                     2 - 4                     Elevated, serious situation
> 2                       > 4                       Elevated, very serious situation
> 5                       > 10                      Elevated, major catastrophe