Weighing up evidence in medical research


By Paul Chinnock | Thursday 16 November 2017

Most people first hear about the findings of new medical research from a report in the media. But researchers rarely contact the media as a first step. Instead, they submit details of their work to specialist medical journals.

These studies are reviewed by experts, and we might be tempted to think this means the conclusions of published research can be treated as established facts. But science doesn’t work in quite this way. Published studies can present conflicting findings, and it can be difficult to work out which information to pay attention to.

Part of this is just the cut and thrust of the scientific process. Disagreements arise when researchers use different methods to try to answer a scientific question. But sometimes there can be issues relating to the publishing industry, a field I know well through a career spanning many years.

The world of medical research publishing is totally unfamiliar to most people. There are thousands of medical research journals, many of them publishing studies that concern dementia.

The processes these journals use when considering research for publication are mysterious and indeed open to criticism.

In this blog I want to draw attention to just two issues about medical publishing. The first is what is called publication bias.

It is vital that all research should be published. If this does not happen, then the data that is available will be biased. For example, if ten research groups test a potential new treatment for dementia and only one study produces encouraging results, there is a danger that only the ‘successful’ group will report their work. But those seemingly positive results may be due to chance alone; only when we bring together all the results from research into a potential new treatment can we be more confident.

Sadly, there continue to be many instances where unsuccessful results go unpublished, while more positive findings appear. Leading medical journals usually only want to publish studies that seem to be major advances; positive findings are therefore more ‘visible’ (and more likely to be picked up by the mainstream media) than the less exciting results, which usually appear in lesser-known journals. As a result, it is all too easy to form a biased picture of what is going on.

My second point is that everyone should be aware that research varies in the quality of the evidence it generates. For example, data obtained from the experience of a single patient (a ‘case study’) can be interesting, but it is regarded as very weak evidence. A case study counts as ‘poor study design’ and sits at the bottom of the ‘hierarchy of evidence’. Likewise, anecdotes and expert opinion, while important in advancing medical research, do not in themselves give us good evidence.

We start to move up the hierarchy when studies involve bigger numbers of patients and where there are control groups – people who have not received the treatment that is under investigation but are otherwise comparable to those who have. But studies described as ‘observational’ (i.e. the researcher is an observer with no control over whether each patient is receiving a treatment or not) are regarded as inferior to experimental studies, in which the researchers control who receives the treatment. Of particular importance are studies where patients are randomly assigned to the treatment group or a control group. A good randomised controlled trial (RCT) includes various procedures intended to improve quality still further; for example, ‘blinding’ is used so that no one knows which treatment each patient is getting.

The very best evidence is considered to be the ‘systematic review’, in which a search is conducted with the aim of bringing together all relevant data from high-quality studies.

The Hierarchy of Evidence Pyramid

I have helped to review health information for Alzheimer’s Research UK, although from the perspective of someone with personal experience of dementia, and not with my professional hat on. I know that the Information Services team carefully considers this ‘hierarchy of evidence’ when selecting information to include in dementia information leaflets. With the help of input from dementia experts and reviewers like me, they have put together a range of accessible and evidence-based publications that provide high-quality information about different aspects of dementia.

So to sum up – we need to reach conclusions that are based on all the research evidence and to remember that some types of evidence are stronger than others. The next time you read about a new study, try to bear both points in mind!

Find out more about different types of research and how you can get involved.



About the author

Paul Chinnock

Paul Chinnock has spent most of his career in medical publishing. Some of it was as an editor of a magazine for doctors working in Africa. Later he worked for the Cochrane Collaboration, which publishes reviews of all the data available on the effectiveness of specific health interventions, and the leading medical journal PLOS Medicine. He has also worked for the World Health Organization and other bodies concerned with global health. Paul has personal experience of dementia and has volunteered for Alzheimer’s Research UK, helping to review our health information and providing support to the Dementia Research Infoline.