Updated: July 2019
The childbearing experience has always been unpredictable and potentially dangerous. In response, humans have sought ways to create a sense of control and minimise danger. Practices (actions) aimed at creating a sense of control reflect the culture from which they arise. Historically, women relied on a spiritual connection to the Goddess/es, rituals (rites of passage and rites of protection), wisewomen, and remedies from nature. The current approach emerges from Science and Research (the new religion?) and sustains a technocratic approach to birth. Risk and danger are considered to be located within the woman (rather than her environment or others), and practices aim to identify danger and control it from the outside. This new approach claims to be rational, effective and underpinned by research evidence.
From evidence-based practice to research-based practice
By the end of the 1990s, ‘evidence based practice’ was an established concept in medicine and health care in general. However, it was never meant to be purely ‘research based practice’: “Evidence based medicine is not restricted to RCT and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions” (Sackett et al 1996). Whilst the emphasis was on ‘external’ evidence, it also involved taking account of the individual ‘patient’ and the experience and skill of the practitioner.
Along with the shift towards research based practice came an increasing emphasis on quantitative research. If you are unsure about the difference between quantitative and qualitative research, see this summary and this cartoon.
Quantitative research purports to be objective and is underpinned by Popperian philosophy and principles. Popper’s (1961) philosophy of science maintains that scientific knowledge develops in an incremental and linear fashion whereby ‘truths’ are systematically tested. Truths or hypotheses cannot be proven, only falsified, and scientific theories can be objectively tested to measure how much truth or falsity they contain. How neat and tidy!
In keeping with this idea of being able to objectively measure humans and their experiences the Pyramid of Evidence was devised. This pyramid illustrates the hierarchy of ‘quality’ relating to research evidence. As you can see the more (allegedly) objective the research, the greater the quality and the greater the weight given to the findings. However, reality is a lot more complex and subjectivity and bias permeate all research:
“Research is carried out within paradigms of knowledge. Everything from the research question; the research framework; the methodology; the interpretation of findings; and the implementation of the findings into practice is influenced by the paradigm of knowledge in which the research is conducted.” (Kuhn 1970)
This blog post takes a critical look at quantitative research – the purported foundation of modern maternity care. There are also issues with qualitative research; however, qualitative philosophy and methodology acknowledge the element of bias as part of the research design.
1. Choosing a research topic
Researchers do not generally carry out research in their spare time, funded out of the goodness of their hearts. Research requires money to pay for time and resources. Competitive grants are offered by a variety of organisations – government, charity and industry. Research topics are influenced by the requirements and criteria dictated by grant-providing organisations. For example, government funded grants will usually focus on government health priorities such as the treatment of diabetes, heart disease, etc. Therefore, it would be much easier to obtain a grant to study the management of gestational diabetes than to study psychosocial outcomes of maternity care. Government health priorities reflect culture (and health lobby groups eg. industry). For example, the joint leading cause of maternal death in Australia is psychosocial morbidity (cardiovascular disease is the other). There is a clear link between how women are treated by care providers and mental health outcomes (Reed, Sharman & Inglis 2017). However, women’s mental health is not a cultural priority in Australia.
Another source of funding for research comes from the industries that develop and sell interventions (eg. technology and medications). This has resulted in major issues in the area of pharmacology (see Ben Goldacre’s book). In the maternity context there are no companies offering grants to find out about the benefits of birth without medication or intervention – there is no profit in physiological birth.
Research plans and grant applications require the identification of a ‘problem’ – there is usually a section subtitled ‘describe the research problem’. This creates a focus on pathology rather than on wellness. For example, a study aiming to investigate why ‘x’ institution has a high rate of physiological births and great outcomes is less likely to get a grant than a study aimed at trialling a medical intervention to reduce the high PPH rates in another institution. However, the former study may result in important findings that could help to improve the latter.
Most government grants require a government health employee to be listed as a researcher on the grant application. Frequently this results in a manager from the institution becoming a named ‘researcher’ (although sometimes they contribute nothing to the research process). This looks great on the manager’s CV and allows the rest of the research team to access samples (women) and data (whatever information they are collecting/measuring). However, as a representative and employee of the institution, they may have a vested interest in ensuring that the research topic and findings do not reflect badly on the institution. This can influence the research topic because some topics (the interesting ones) will be off the agenda of the institution… more on this later re. disseminating the findings.
2. Formulating a research question
Once a research topic/problem has been identified, a research question is created. Again, the question that arises reflects the cultural paradigm in which the research is taking place. A study by Phipps, Charlton and Dietz (2007; 2009) provides a perfect example of this. The problem: high rates of intervention due to first time mothers being unable to push their babies out within the (non-evidence based) hospital prescribed timeframe. The question arising from this problem became ‘can women be taught how to push more effectively’ and women were randomly allocated to antenatal education sessions aimed at teaching them how to push effectively. This reflects a paradigm in which women’s bodies are considered the problem. An alternative paradigm would have resulted in examining the problem of using prescriptive timeframes to define individual birth processes.
3. Designing the research
Physiology as experimental
Usually in quantitative research the control group is the group that does not get an intervention. This control group is compared to the experimental group that gets the intervention. However, this is usually the opposite in maternity care, reflecting a culture in which intervention is the norm. Initially routine interventions during birth were introduced as part of the general medicalisation of childbirth, without any supporting research evidence (Donnison 1988). These interventions continue to be carried out until research is conducted to support a change in practice. Therefore, research in maternity care is often carried out to support not performing an intervention that was initially introduced without research evidence. For example, women were routinely subjected to vulval shaving, enemas and episiotomies until research demonstrated it was safe to not abuse women this way. In such studies, the control group is the group subjected to the intervention, with the experimental group not receiving the intervention.
Confounding factors in complex human experiences
Research is often conducted with the assumption of simplicity as a framework. The origin of this assumption is Descartes’ concept of dualism, that the body could be studied as a separate entity to the social, psychological and spiritual aspects of a person. This approach ignores the complexity of cause and effect in individual human subjects and varying situations.
Confounding variables are factors that influence the relationship between x and y. Research design aims to reduce confounding variables. This is easier when carrying out research in laboratory conditions where you can control the environment and any interactions with the subject (eg. bacteria in a petri dish). However, pregnancy, birth, breastfeeding, mothering, and maternity care are incredibly complex. In most cases it is impossible to limit confounding factors. For example, when designing research comparing active vs physiological management of placental birth, it is not possible to isolate the effect of administering (or not administering) an oxytocic medication. The ‘management’ is being carried out on a complex human, by a complex human, in a complex environment, all of which may influence the outcome. For example, a practitioner who is used to active management but now has to carry out physiological management may find this challenging… their approach and interactions are likely to be influenced by their feelings. This partly explains the different outcomes in different studies with different participants and settings (see this post).
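The way a confounder can mislead is easy to show with a toy simulation. All the numbers below are invented purely for illustration (they are not from any study): suppose harder labours are both more likely to receive active management and more likely to end in PPH. A naive comparison of PPH rates then makes active management look harmful, even though in this model the management choice has no effect on the outcome at all.

```python
import random

random.seed(0)

# Invented numbers for illustration only – not real clinical data.
records = []
for _ in range(10000):
    hard_labour = random.random() < 0.3  # confounder: a harder labour
    # Active management is chosen more often for hard labours...
    active = random.random() < (0.9 if hard_labour else 0.3)
    # ...and PPH risk is driven by labour difficulty, NOT by management here.
    pph = random.random() < (0.20 if hard_labour else 0.02)
    records.append((active, pph))

def pph_rate(records, active_flag):
    """PPH rate within the group that did (or did not) get active management."""
    group = [pph for active, pph in records if active == active_flag]
    return sum(group) / len(group)

print(f"PPH rate with active management:        {pph_rate(records, True):.3f}")
print(f"PPH rate with physiological management: {pph_rate(records, False):.3f}")
# Active management looks roughly four times worse, despite having
# no effect whatsoever in this model – the confounder did all the work.
```

A randomised trial breaks this link by making the management choice independent of labour difficulty, which is exactly why observational comparisons in maternity care are so hard to interpret.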
RCTs, blinding and ethics
It can be argued that the gold standard of research – randomised controlled trials – is often unethical in maternity care. For example, it would be unethical to randomly allocate a woman to a particular birth setting (and her feelings about her birth setting would alter her outcomes). Considering what we know about placental transfusion immediately after birth, and the importance of adequate blood volume for newborns, it would be unethical to randomly allocate newborns to have premature clamping of their cords (and many mothers would refuse consent).
It is also considered good research design for both the practitioner (person administering the intervention) and the subject (person getting the intervention – or not) to be ‘blind’ to this ie. not know. This works well in the case of medications ie. neither the doctor nor the patient knows whether the pill is the experimental medication or a placebo. However, it is virtually impossible to ‘blind’ practitioners or women to interventions. Women and their care providers will know if an intervention is carried out or not eg. active management of the placenta, episiotomy, premature cord clamping.
4. Interpreting the results
The cultural paradigm also influences how researchers and the media interpret the results of studies. In particular, links are created between factors assumed by cultural understandings to be linked ie. correlations are presented as causations. The classic example of this is the relationship between ice-cream sales and shark attacks. There is a correlation between increased ice-cream sales and increased shark attacks. However, ice-cream does not cause shark attacks – both of these factors are influenced by how the weather affects human behaviour (eating ice-cream and swimming in the ocean).
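The ice-cream/shark relationship can be demonstrated with a quick simulation (again, every number here is invented for illustration): hot weather drives both ice-cream sales and swimming, so the two outcomes end up strongly correlated even though neither causes the other.

```python
import random

random.seed(1)

# Invented numbers for illustration only.
days = []
for _ in range(1000):
    hot = random.random() < 0.5                      # confounder: the weather
    sales = random.gauss(300 if hot else 100, 20)    # ice-cream sales
    swimmers = random.gauss(500 if hot else 50, 20)  # people in the ocean
    attacks = swimmers * 0.001                       # attacks depend on swimmers only
    days.append((sales, attacks))

def correlation(pairs):
    """Pearson correlation coefficient, computed by hand."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

r = correlation(days)
print(f"correlation between ice-cream sales and shark attacks: {r:.2f}")
# Strongly positive – yet sales never enter the attack calculation at all.
```

Nothing in the model lets ice-cream influence sharks; the correlation exists only because both variables share a common cause.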
In relation to maternity care, identifying cause and effect is even more difficult due to the complex nature of the issues. For example, there is a general consensus that obesity is associated with poor outcomes for women and babies and that the solution is to reduce BMI. However, this raises further questions: is obesity the direct cause of poor outcomes? Is obesity a symptom of some other health-related disorder that is the actual cause of the poor outcomes? Is the treatment of obese women the cause of poor outcomes (increased stress/shaming, surveillance, intervention)?
5. Recommendations arising from the findings
Once a study has been concluded the researchers offer recommendations arising from their study. Again, these recommendations are influenced by the cultural paradigm. An example of this is the recommendations resulting from research into early labour (discussed in this post). Women admitted to hospital in early labour = increased intervention and decreased normal birth. The recommendation is therefore to limit the time a woman is exposed to the hospital system… not change the hospital system to better accommodate the needs of women in early labour.
6. Dissemination of findings
The aim of research is to publish the findings and contribute to the evidence-base for practice. However, whether, and how, research findings are disseminated is influenced and controlled by a number of factors. In particular the ‘interested parties’ (see above re. research partners) can prevent or manipulate publication. For example, to access data held by an organisation the researcher is likely to have signed an agreement that publications relating to that data must be approved by the organisation. I know more than one researcher who has been unable to publish interesting results because they have been blocked by the organisation they had an agreement with. The Union of Concerned Scientists have published a report detailing how corporations obstruct, distort and suppress research. Techniques include: terminating and suppressing research; intimidating or coercing scientists; ghost writing scientific articles; publication bias ie. only allowing certain results to be published (Ben Goldacre also discusses this in his book).
Good journals use a peer review process to ensure quality research dissemination. However, peer reviewers are humans and are also influenced by the cultural paradigm and their own emotions. An article that is not aligned with the philosophy/views of a journal or a particular reviewer will be more likely to be rejected. For example, an article reporting findings that demonstrate midwifery continuity of care resulted in poor outcomes (I am making this up) would be more likely to be published in a medical journal than a midwifery journal. And some topics are difficult to publish anywhere.
7. Implementing recommendations into practice and decision making
Evidence based practice?
The final step – implementing evidence into practice – is perhaps the least successful step in maternity care research. The discipline of obstetrics was awarded the ‘wooden spoon’ by Archie Cochrane in 1979. In response, Iain Chalmers et al. published the first edition of ‘Effective Care in Pregnancy and Childbirth’ in 1989. However, it seems that not much has changed.
“Despite claims of EBP, practices are underpinned by an established hierarchy of understanding and practice, rather than by research.” (McCourt 2009)
It is easier to introduce and maintain culturally based practices that lack evidence, than to introduce evidence based practices that challenge the cultural norm. For example (and I have stuck to the ‘gold standard’ of Cochrane reviews relating to ‘normal’ birth here):
- Common practices that lack evidence include routine vaginal examinations; amniotomy to shorten labour; routine antibiotics for rupture of membranes; use of a partogram; admission CTG and CTG during labour (I could go on).
- Uncommon practices supported by evidence include midwife-led continuity of care; warm compresses to reduce perineal trauma; skin-to-skin contact; optimal cord clamping (aka ‘delayed’); warm water immersion (I could go on).
Organisations and the staff working in them rely on clinical guidelines to guide practice. However, so called ‘evidence based guidelines’ are anything but. If you take a look at most clinical guidelines and follow the reference trail you will find that they cite another clinical guideline, which cites another clinical guideline… and you end up at a dead-end with no actual research in sight. Prusova et al. (2014) published an article about this situation, finding that only 9-12% of recommendations in the RCOG ‘Green-top Guidelines’ were based on Grade A evidence. Whilst the article focuses on the RCOG, this is widespread across maternity care guidelines.
Evidence based decision making?
When it comes to how individual women make decisions about their maternity care – research is also fairly low on the list. Below is a quote from a previous post that is relevant here:
Many factors influence decision making, and the information a midwife provides is only one piece of the puzzle. Humans are active seekers and interpreters of information. We pick and choose, using and discarding information according to internal and external constraints and considerations. Embodied knowledge, personal experiences and other people’s experiences influence the selective designation of knowledge as authoritative or not. We often start with a conclusion, then rationalise it with evidence. We surround ourselves with people who have beliefs and opinions aligned with our own. The internet has increased our access to information and people who will reinforce our beliefs and choices.
A way forward?
I am not advocating discarding research. I am a researcher myself and believe that this type of evidence can, and does, shift practice. Midwives need to contribute to, and understand, the evidence-base for practice from the woman’s perspective. The International Confederation of Midwives position statement ‘The Role of the Midwife in Research’ provides guidance on this:
- …all midwives have a role and a responsibility in advancing knowledge within the midwifery profession and the effectiveness of midwifery practice…
- Research on the childbearing cycle maintains a holistic approach that includes the physiological, psycho-social, cultural and spiritual aspects of the health of women and babies
- Midwives design/participate in studies that support and promote holistic care as well as evaluating the effects of using technology as an intervention during pregnancy and birth
We also need to be able to discuss research with women – not just quote statistics. More importantly we need to acknowledge and respect all the other forms of evidence that operate when a woman makes a decision about what is best for her – in particular her own embodied knowledge.
Cochrane AL (1989). Foreword. In: Chalmers I, Enkin M, Keirse MJNC, eds. Effective care in pregnancy and childbirth. Oxford: Oxford University Press.
Donnison, J 1988, Midwives and medical men: a history of the struggle for the control of childbirth, 2nd edn, Historical Publications, London.
Kuhn, TS 1970, The structure of scientific revolutions, University of Chicago Press, Chicago.
Phipps, H, Charlton, S & Dietz, HP 2007, ‘Can antenatal education influence how women push in labour? A pilot randomised control trial on maternal antenatal teaching for pushing in the second stage of labour (PUSH STUDY)’, paper presented to Big Bold & Beautiful: Australian College of Midwives 15th National Conference, Canberra, Australia, 25-28 September 2007.
Phipps, H, Charlton, S & Dietz, HP 2009, ‘Can antenatal education influence how women push in labour? A pilot randomised control trial on maternal antenatal teaching for pushing in the second stage of labour (PUSH STUDY)’, Australian and New Zealand Journal of Obstetrics and Gynaecology, vol. 49, pp. 274-278.
Popper, KR 1961, The logic of scientific discovery, Basic Books, New York.
Prusova, K, Tyler, A, Churcher, L & Lokugamage, A 2014, ‘Royal College of Obstetricians and Gynaecologists guidelines: how evidence based are they?’, Journal of Obstetrics and Gynaecology, DOI: 10.3109/01443615.2014.920794
Sackett, DL, Rosenberg, WMC, Gray, JAM, Haynes, RB & Richardson, WS 1996, ‘Evidence-based medicine: what it is and what it is not’, British Medical Journal, vol. 312, pp. 71-72.