Can we improve pregnancy outcomes by simply encouraging and supporting mothers to adopt a healthy lifestyle during pregnancy? This is the question the Finnish Gestational Diabetes Prevention Study (RADIEL) set out to answer.
More than 250 mothers were recruited from among high-risk women, that is, women with a high BMI and a history of gestational diabetes. They were randomly assigned either to receive counselling during their pregnancy, aimed at limiting weight gain through exercise and a healthier diet, or to a control group.
The RADIEL team in Finland recently published the results in leading international journals – see Koivusalo et al. (2016). As part of the DynaHEALTH project, the Finnish team kindly granted me and my PhD supervisor, Gabriella Conti, access to the data they collected. We are re-evaluating the effect of the experimental intervention on a wider range of outcomes, including the mother’s post-pregnancy weight, mental health, and self-rated health. In particular, we want to know whether the effect of the intervention differs for higher-risk mothers.
We need to take some important methodological issues into account:
Issue: The sample is relatively small, especially if we want to investigate effects in subsamples. Solution: We use randomisation (permutation) inference, a resampling technique that remains a valid approximation in finite samples. It also respects the strata used in the randomisation process, because treatment labels are re-randomised within blocks.
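To make the idea concrete, here is a minimal Python sketch of a within-block permutation test on toy data (our actual implementation is in Stata; the function name, the two-strata toy dataset, and the difference-in-means test statistic are all illustrative assumptions):

```python
import numpy as np

def block_permutation_pvalue(y, treat, block, n_perm=2000, seed=0):
    """Randomisation-inference p-value for a difference in means,
    re-randomising treatment labels only WITHIN each stratum."""
    rng = np.random.default_rng(seed)
    y, treat, block = (np.asarray(a) for a in (y, treat, block))
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    hits = 0
    for _ in range(n_perm):
        t = treat.copy()
        for b in np.unique(block):
            idx = np.flatnonzero(block == b)
            t[idx] = rng.permutation(t[idx])  # shuffle within the block
        diff = y[t == 1].mean() - y[t == 0].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    # add-one correction keeps the p-value strictly positive
    return (hits + 1) / (n_perm + 1)

# toy data: two randomisation strata, a genuine treatment effect of 1.0
rng = np.random.default_rng(42)
block = np.repeat([0, 1], 60)
treat = np.tile([0, 1], 60)
y = 1.0 * treat + 0.5 * block + rng.normal(size=120)
p = block_permutation_pvalue(y, treat, block)
```

Because the null distribution is built by repeating the experiment's own randomisation scheme, the p-value does not rely on any large-sample approximation.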
Issue: Despite randomisation, the sample might be imbalanced with respect to some pre-randomisation characteristics. Solution: We implement a linear conditioning approach, i.e. we estimate the treatment effect in a regression that adjusts for baseline covariates.
Issue: There is a non-negligible amount of attrition, i.e. mothers dropping out of the study, and some of it is non-random: for example, younger mothers are more likely to leave. Solution: We reweight the analysis using inverse-probability weights, so that mothers who remain in the study but whose characteristics resemble those of the dropouts are given more weight.
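The logic of inverse-probability weighting can be sketched as follows. For clarity, this toy example uses the true retention probabilities; in practice they must be estimated, for example with a logistic regression of retention on baseline characteristics. The dropout mechanism and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
age = rng.uniform(20, 40, n)
y = 0.1 * age + rng.normal(size=n)        # outcome depends on age

# Younger mothers are more likely to drop out of the study.
p_stay = 1 / (1 + np.exp(-(-9 + 0.35 * age)))   # retention probability
stayed = rng.random(n) < p_stay

# Inverse-probability weights: mothers who resemble the dropouts
# (here: younger ones) get more weight, undoing the selection.
w = 1 / p_stay[stayed]
unweighted = y[stayed].mean()              # biased towards older mothers
weighted = np.average(y[stayed], weights=w)  # recovers the full-sample mean
```

The unweighted mean over-represents older mothers and is biased upwards; the weighted mean recovers the mean of the full (pre-attrition) sample, whose true value here is 3.0.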
Issue: When testing many hypotheses at once, you are bound to “stumble” on something that is statistically significant. At a 5% significance level, each true null hypothesis has a one-in-twenty chance of being rejected by pure chance, so with enough tests some spurious rejections are all but guaranteed. Solution: We programmed a state-of-the-art stepdown adjustment of our p-values, which takes the multiplicity of hypotheses into account and guards against such spurious findings.
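As a flavour of how stepdown adjustments work, here is Holm's stepdown procedure, a simple classical cousin of the resampling-based stepdown we implement (the example p-values are made up):

```python
def holm_stepdown(pvals):
    """Holm's stepdown adjustment: visit p-values from smallest to
    largest, multiply the k-th smallest by (m - k), cap at 1, and
    enforce monotonicity so adjusted p-values never decrease."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # monotonicity step
        adjusted[i] = running_max
    return adjusted

raw = [0.001, 0.012, 0.030, 0.050, 0.20]
adj = holm_stepdown(raw)
```

The smallest p-value faces the harshest penalty (multiplied by the full number of tests), and the penalty steps down from there; resampling-based stepdown methods follow the same logic but also exploit the correlation between test statistics, gaining power.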
All of the above is neatly packaged in a Stata routine, which is now being tested. Hopefully I’ll find the time to translate it into R, which would make it more accessible!
Results will be submitted for publication soon. Stay tuned!
BONUS: Hear me talk about our results on Vimeo!