Quantifying lead-time bias in risk factor studies of cancer through simulation

Ann Epidemiol. 2013 Nov;23(11):735-41. doi: 10.1016/j.annepidem.2013.07.021. Epub 2013 Aug 27.

Abstract

Purpose: Lead time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk-factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it.

Methods: Surveillance, Epidemiology, and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the distribution of preclinical durations. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust the observed OR in the actual study.
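The logic of the null simulation described above can be sketched in a few lines. This is a hypothetical toy model, not the authors' actual SEER-calibrated simulation: all parameters (the exposure-screening correlation, the sojourn-time log-normal, the rising onset density standing in for age-specific incidence, and the case-ascertainment window) are illustrative assumptions. It shows how, under a joint null in which the risk factor affects neither preclinical incidence nor screening performance, lead time alone can push the risk-factor OR away from 1 when exposure is correlated with screening uptake.

```python
import math
import random

random.seed(42)

N = 200_000
MU, SIGMA = 1.0, 0.8        # assumed log-normal sojourn (preclinical) time, years
CASE_WINDOW = (55.0, 65.0)  # assumed age window for case ascertainment

def simulate_person():
    """One person under the joint null: exposure has NO effect on onset."""
    exposed = random.random() < 0.5
    # Assumption: exposure is correlated with screening uptake (70% vs 30%),
    # mimicking, e.g., physically active people screening more often.
    screened = random.random() < (0.7 if exposed else 0.3)
    # Onset density rises linearly with age on [40, 80), a crude stand-in
    # for age-specific preclinical incidence increasing with age.
    onset = 40.0 + 40.0 * math.sqrt(random.random())
    sojourn = random.lognormvariate(MU, SIGMA)
    # Screen detection advances diagnosis by part of the sojourn (lead time).
    dx_age = onset + (0.5 * sojourn if screened else sojourn)
    return exposed, dx_age

cases = {True: 0, False: 0}
controls = {True: 0, False: 0}
for _ in range(N):
    exposed, dx_age = simulate_person()
    if CASE_WINDOW[0] <= dx_age < CASE_WINDOW[1]:
        cases[exposed] += 1
    else:
        controls[exposed] += 1

# The risk factor is truly null, so this OR is pure lead-time bias.
bias_or = (cases[True] * controls[False]) / (cases[False] * controls[True])

# As described above, the null-study OR serves as an adjustment factor
# for an observed OR (1.30 here is a made-up example value).
observed_or = 1.30
adjusted_or = observed_or / bias_or
print(f"bias OR = {bias_or:.3f}, adjusted OR = {adjusted_or:.3f}")
```

With these assumed parameters the null-study OR comes out above 1 even though the exposure has no causal effect, because screen-detected (exposed-enriched) people are diagnosed earlier against a rising age-incidence curve.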

Results: For this particular study design, as average preclinical duration increased, the bias in the total-physical activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null.

Conclusions: The finding of nontrivial bias in effect estimates for fixed risk factors demonstrates the importance of quantitatively evaluating this bias in susceptible studies.

Keywords: Bias; Cancer screening; Case-control study; Early diagnosis of cancer; Mathematical model; Methodological study.

MeSH terms

  • Bias
  • Case-Control Studies
  • Early Detection of Cancer / statistics & numerical data
  • Humans
  • Incidence
  • Male
  • Minnesota / epidemiology
  • Models, Theoretical*
  • Odds Ratio
  • Prostatic Neoplasms / diagnosis*
  • Prostatic Neoplasms / epidemiology*
  • Risk Factors
  • SEER Program
  • Time Factors
  • Wisconsin / epidemiology