How can your gut microbiome affect risk of cancer?

Dr Kaitlin H. Wade1,2,3

1 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, BS8 2BN

2 Medical Research Council (MRC-IEU), University of Bristol, Bristol, BS8 2BN

3 Cancer Research UK (CRUK) Integrative Cancer Epidemiology Programme (ICEP), University of Bristol, Bristol, BS8 2BN

The causes of cancer are often preventable

Cancer has a profound impact on the lives of individuals all over the world, and its burden is ever increasing. And yet, evidence indicates that over 40% of all cancers are likely explained by preventable causes. One of the main challenges is identifying so-called ‘modifiable risk factors’ for cancer – aspects of our environment that we can change to reduce the incidence of disease.

Photo by Chloe Russell for ‘Up Your A-Z’, an encyclopaedia of gut bacteria

The gut microbiome could influence cancer risk

The gut microbiome is a system of microorganisms that helps us digest food and produce essential molecules, and protects us against harmful infections. There is growing evidence supporting a relationship between the human gut microbiome and risk of cancer, including lung, breast, bowel and prostate cancers. For example, experiments have shown that changing the gut microbiome (e.g., by using pre- or probiotics) may reduce the risk of developing colorectal cancer. Research also suggests that people with colorectal cancer have lower microbiota diversity and different types of bacteria within their gut compared to those without a diagnosis.

As the gut microbiome can have a substantial impact on its host’s metabolism and immune response, there are many biological mechanisms by which it could influence cancer development and progression. However, we don’t yet know how the gut microbiome does this.

Human studies in this context have used small samples of individuals and measured both the microbiome and disease at the same time. These factors make it difficult to tease apart correlation from causation – i.e., does variation in the gut microbiome change someone’s risk of cancer, or does the existence of cancer lead to variation in the gut microbiome? This is an important question because the main aim of such research is to understand the causes of cancer and how we can prevent the disease. We want to fully understand whether altering the gut microbiome can reduce the burden of cancer at a population level or whether it is simply a marker of cancer itself.

I’m inviting feedback on your knowledge and understanding of the gut microbiome and cancer – please take this 5-minute survey (click here for survey) to contribute your thoughts.

People are interested in their gut microbiome

Even though we don’t yet know much about the causal relevance of the gut microbiome, there is a growing market for commercial initiatives targeting the microbiome as a consumer-driven intervention. This usually involves companies obtaining a small number of faecal samples from consumers and prescribing “personalised” nutritional information for a “healthier microbiome”. However, these initiatives are very controversial given the uncertainty about the relationships between the gut microbiome, nutrition and various diseases. What these activities do highlight is the demand for such information at a population level. This shows there is an opportunity to improve understanding of the causal role played by the gut microbiome in human health and disease.

Photo by Chloe Russell for ‘Up Your A-Z’, an encyclopaedia of gut bacteria

Microbiome and variation in our genes

Using information about our genetics can help us work out whether the gut microbiome changes the risk of cancer, or whether cancer changes the gut microbiome. Genetic variation cannot be influenced by either the gut microbiome or disease. Therefore, if people who are genetically predisposed to having a higher abundance of certain bacteria within their gut also have a lower risk of, say, prostate cancer, this would strongly suggest a causal role of those bacteria in prostate cancer development. This approach of using human genetic information to discern correlation from causation is called Mendelian randomization.
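To illustrate the logic in the simplest possible terms, the sketch below computes a single-variant Mendelian randomization (“Wald ratio”) estimate from summary statistics. The numbers and the bacterium/cancer framing are hypothetical, purely to show the arithmetic, and are not results from this research.

```python
# Minimal sketch of a Wald-ratio Mendelian randomization estimate.
# All numbers are hypothetical and purely illustrative.

# Per-allele association of a genetic variant with the exposure
# (e.g. standardised abundance of a gut bacterium) and its standard error.
beta_gx, se_gx = 0.20, 0.03

# Per-allele association of the same variant with the outcome
# (e.g. log odds of prostate cancer) and its standard error.
beta_gy, se_gy = -0.04, 0.01

# Wald ratio: causal effect of the exposure on the outcome implied by the variant.
wald_ratio = beta_gy / beta_gx

# First-order (delta method) approximation of its standard error.
se_wald = se_gy / abs(beta_gx)

print(f"Estimated effect per unit of exposure: {wald_ratio:.3f} (SE {se_wald:.3f})")
```

In practice, analyses combine many variants and include extensive sensitivity checks, but the core idea of dividing the gene–outcome association by the gene–exposure association stays the same.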

Studies relating human genetic variation to the gut microbiome have proliferated in recent years. They have provided evidence for genetic contributions to features of the gut microbiome, including the abundance, or likelihood of presence (vs. absence), of specific bacteria. This knowledge has provided the opportunity to apply Mendelian randomization to better understand the causal impact of gut microbiome variation on health outcomes, including cancer. There are, however, many important caveats and complications to this work. Specifically, there is a (currently unmet) requirement for careful examination of how human genetic variation influences the gut microbiome, and for careful interpretation of the causal estimates derived from Mendelian randomization in this field.

This is exactly what I will be looking at in my new research funded by Cancer Research UK. For more details on the nuances of this work, please see my research feature for Cancer Research UK and paper discussing these complexities.

What’s next for this research?

This research has already shown promise in the application of Mendelian randomization to improve our ability to discern correlation from causation between the gut microbiome and cancer. Importantly, it has highlighted the need for interdisciplinary collaboration between population health, genetic and basic sciences. Thus, with support from my team of experts in microbiology, basic sciences and population health sciences, this Fellowship will set the scene for the integration of human genetics and causal inference methods in population health sciences with microbiome research. This will help us understand the causal role played by the gut microbiome in cancer. Such work is a new and important step towards evaluating and prioritising potential treatments or protective factors for cancer prevention.

Acknowledgements

The research conducted as part of this Cancer Research UK Population Research Postdoctoral Fellowship will be supported by the following collaborators: Nicholas Timpson, Caroline Relton, Jeroen Raes, Trevor Lawley, Lindsay Hall and Marc Gunter, and my growing team of interdisciplinary PhD students and postdoctoral researchers. I’d also like to thank the following individuals for comments on this feature: Tom Battram, Laura Corbin, David Hughes, Nicholas Timpson, Lindsey Pike and Philippa Gardom. Additional thanks go to Chloe Russell, a brilliant photographer with whom I collaborated to create “Up Your A-Z” as part of Creative Reactions 2019, who provided the photos for this webpage.

About the author

Dr Wade’s academic career has focused on the integration of human genetics with population health sciences to strengthen causal inference within epidemiological studies. Focusing on relationships across the life course, her work uses comprehensive longitudinal cohorts, randomized controlled trials and causal inference methods (particularly Mendelian randomization and recall-by-genotype designs). Kaitlin’s research has focused on understanding the relationships between adiposity and dietary behaviours as risk factors for cardiometabolic diseases and mortality. Having been awarded funding from the Elizabeth Blackwell Institute and Cancer Research UK, Kaitlin’s work uses these methods to understand the causal role played by the human gut microbiome in various health outcomes, such as obesity and cancer. Since pursuing a career in this field, Kaitlin has led, and been key in, several fundamental studies that will pave the way to resolving – or at least quantifying – complex relationships between genetic variation, the gut microbiome and human health. In addition to her research, Kaitlin is actively involved in organising and delivering teaching and public engagement activities, and holds many mentorship and supervisory roles within and beyond the University of Bristol.

Key publications:

Hughes, D.A., Bacigalupe, R., Wang, J. et al. Genome-wide associations of human gut microbiome variation and implications for causal inference analyses. Nat Microbiol 5, 1079–1087 (2020). https://doi.org/10.1038/s41564-020-0743-8.

Kurilshikov, A., Medina-Gomez, C., Bacigalupe, R. et al. Large-scale association analyses identify host factors influencing human gut microbiome composition. Nat Genet 53, 156–165 (2021). https://doi.org/10.1038/s41588-020-00763-1.

Wade KH and Hall LJ. Improving causality in microbiome research: can human genetic epidemiology help? [version 3; peer review: 2 approved]. Wellcome Open Res 4, 199 (2020). https://doi.org/10.12688/wellcomeopenres.15628.3.


Do development indicators underlie global variation in the number of young people injecting drugs?

Dr Lindsey Hines, Sir Henry Wellcome Postdoctoral Fellow in The Centre for Academic Mental Health & the Integrative Epidemiology Unit, University of Bristol

Dr Adam Trickey, Senior Research Associate in Population Health Sciences, University of Bristol

Follow Lindsey on Twitter

Injecting drug use is a global issue: around the world an estimated 15.6 million people inject psychoactive drugs. People who inject drugs tend to begin doing so in adolescence, and countries with larger numbers of adolescents who inject drugs may be at risk of emerging epidemics of blood-borne viruses unless they take urgent action. We mapped the global differences in the proportion of adolescents who inject drugs, but found that we may be missing the vital data we need to protect the lives of vulnerable young people. If we want to prevent HIV, hepatitis C, and overdose from sweeping through a new generation of adolescents, we urgently need many countries to scale up harm reduction interventions, and to collect accurate data that can inform public health and policy.

People who inject drugs are engaging in a behaviour that can expose them to multiple health risks such as addiction, blood-borne viruses, and overdose, and are often stigmatised. New generations of young people are still starting to inject drugs, and young people who inject drugs are often part of other vulnerable groups.

Much of the research into the causes of injecting drug use focuses on individual factors, but we wanted to explore the effect of global development on youth injecting. A recent systematic review showed wide country-level variation in the proportion of young people among people who inject drugs. By considering variation between countries, we hoped to be able to inform prevention and intervention efforts.

It’s important to note that effective interventions can reduce the harms of injecting drug use. Harm reduction programmes provide clean needles and syringes to reduce transmission of blood borne viruses. Opiate substitution therapy seeks to tackle the physical dependence on opiates that maintains injecting behaviour and has been shown to improve health outcomes.

What we did

Through a global systematic review and meta-analysis, we aimed to find data on injecting drug use from published studies and from public health and policy documents from every country. We used these data to estimate the global percentage of people who inject drugs who are aged 15-25 years, and also estimated this for each region and country. We wanted to understand what might underlie variation in the number of young people in populations of people who inject drugs, and so we used data from the World Bank to identify markers of a country’s wealth, equality, and development.

What we found

Our study estimated that, globally, around a quarter of people who inject drugs are adolescents and young adults. Applied to the global population, we can estimate that approximately 3.9 million young people inject drugs. As a global average, people start injecting drugs at 23 years old.
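As a rough, hypothetical illustration of the arithmetic behind these figures (the country-level numbers below are invented, and the actual study used a formal random-effects meta-analysis rather than the simple fixed-effect pooling shown here), a pooled proportion can be scaled to the estimated 15.6 million people who inject drugs worldwide:

```python
import math

# Hypothetical country-level estimates of the proportion of people who
# inject drugs who are young (proportion, sample size) - illustration only.
studies = [(0.31, 420), (0.18, 950), (0.27, 300), (0.22, 610)]

# Fixed-effect inverse-variance pooling of proportions (a simplification of
# the random-effects meta-analysis used in the actual study).
weights, estimates = [], []
for p, n in studies:
    var = p * (1 - p) / n          # binomial variance of each proportion
    weights.append(1 / var)
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Scaling the pooled proportion to the global population of people who inject
# drugs (as in the text, ~25% of 15.6 million gives roughly 3.9 million).
global_young = pooled * 15.6e6
print(f"Pooled proportion: {pooled:.2f} (SE {se_pooled:.3f}); "
      f"implied number of young people injecting: {global_young / 1e6:.1f} million")
```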

Estimated percentage of young people amongst those who inject drugs in each country

We found huge variation in the percentage of young people in each country’s population of people who inject drugs. Regionally, Eastern Europe had the highest proportion of young people amongst their populations who inject drugs, and the Middle Eastern and North African region had the lowest. In both Russia and the Philippines, over 50% of the people who inject drugs were aged 25 or under, and the average age of the populations of people who inject drugs was amongst the lowest observed.

Average age of the population of people who inject drugs in each country

In relation to global development indicators, people who inject drugs were younger, and had been injecting drugs for a shorter period, in countries with lower wealth (indicated by Gross Domestic Product per capita). In rapidly urbanising countries (indicated by urbanisation growth rate), people were likely to start injecting drugs at later ages than people in countries with a slower current rate of urbanisation. We didn’t find any relationships between the age of people who inject drugs and a country’s youth unemployment, economic equality, or level of provision of opiate substitution therapy.

However, many countries were missing data on injecting age and behaviours, or injecting drug use in general, which could affect these results.

What this means

1. The epidemic of injecting drug use is being maintained over time.

A large percentage of people who inject drugs are adolescents, meaning that a new generation are being exposed to the risks of injecting – and we found that this risk was especially high in less wealthy countries.

2. We need to scale up access to harm reduction interventions

There are highly punitive policies towards drug use in the countries with the largest numbers of young people in their populations of people who inject drugs. Since 2016, thousands of people who use drugs in the Philippines have died at the hands of the police. In contrast, Portugal has adopted a public health approach to drug use and addiction for decades, taking the radical step of diverting people caught with drugs for personal use into addiction services rather than prisons. The rate of drug-related deaths and HIV infections in Portugal has since plummeted, as has the overall rate of drug use amongst young people: our data show that Portugal has a high average age for its population of people who inject drugs. If we do not want HIV, hepatitis C, and drug overdoses to sweep through a new generation of adolescents, we urgently need to see more countries adopting the approach pioneered by Portugal, and scaling up access to harm reduction interventions to the levels recommended by the WHO.

3. We need to think about population health, and especially mental health, alongside urban development.

Global development appears to be linked to injecting drug use, and the results suggest that countries with higher urbanisation growth are seeing new, older populations beginning to inject drugs. It may be that changes in environment are providing opportunities for injecting drug use that people hadn’t previously had. It’s estimated that almost 70% of the global population will live in urban areas by 2050, with most of this growth driven by low and middle-income countries.

4. We need to collect accurate data

Despite the health risks of injecting drug use, and the urgent need to reduce risks for new generations, our study has revealed a paucity of data monitoring this behaviour. Most concerning, we know the least about youth injecting drug use in low- and middle-income countries: areas likely to have the highest numbers of young people in their populations of people who inject drugs. Due to the stigma and the illicit nature of injecting drug use it is often under-studied, but by failing to collect accurate data to inform public health and policy we are risking the lives of vulnerable young people.

Contact the researchers

Lindsey.hines@bristol.ac.uk

Lindsey is funded by the Wellcome Trust.

stopWatch – a smartwatch system that could help people quit smoking

Dr Andy Skinner and Chris Stone

Follow Andy and Chris on twitter


October sees the return of Stoptober, a Public Health England initiative to encourage smokers to quit. Campaigns like this and many others have been effective in reducing smoking in the UK over a number of decades. However, on average, about 15% of the UK’s population still smoke, and this costs the NHS more than £2.5bn each year.

To help address this, the NHS Long Term Plan has identified a range of measures to encourage healthier behaviours, including the need to speed up the introduction of innovative new health interventions based on digital technologies.

Here in the MRC IEU we’ve been working on a new wearable system that could help people stop smoking; stopWatch is a smartwatch-based system that automatically detects cigarette smoking. Because the system can detect when someone is smoking a cigarette, it can trigger the delivery of interventions to help that person quit smoking at precisely the time the interventions will be most effective.

Hand and wrist wearing stopWatch and holding a cigarette
The stopWatch could help people to stop smoking

What is stopWatch, and how does it work?

stopWatch is an application that runs on a commercially available Android smartwatch. Smartwatches now come equipped with motion sensors, just like the ones in smartphones that measure step counts and activity levels. As smartwatches are attached to the wrist, the motion sensors in a smartwatch can tell us how a person’s hand is moving. stopWatch takes data from the smartwatch’s motion sensors and applies machine learning methods to look for the particular pattern of hand movements that is unique to smoking a cigarette.
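To give a flavour of the general approach (this is a simplified sketch, not the stopWatch implementation itself; the window length, features and classifier are assumptions made for illustration), windowed accelerometer data can be summarised into simple features and passed to an off-the-shelf classifier:

```python
# Illustrative sketch (not the actual stopWatch code): classifying fixed-length
# windows of wrist accelerometer data as "smoking gesture" or "other movement".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 250  # samples per window, e.g. 5 s at 50 Hz (assumed values)

def features(window):
    """Simple summary features for one (WINDOW, 3) block of x/y/z acceleration."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([
        window.mean(axis=0),          # mean acceleration per axis
        window.std(axis=0),           # variability per axis
        [magnitude.mean(), magnitude.std(), magnitude.max()],
    ])

def windows_to_features(raw):
    """raw: (n_windows, WINDOW, 3) accelerometer blocks -> feature matrix."""
    return np.array([features(w) for w in raw])

# Train on labelled example windows (placeholder random data stands in for
# real recordings of hand-to-mouth smoking gestures vs other movement).
rng = np.random.default_rng(0)
raw_train = rng.normal(size=(200, WINDOW, 3))
labels_train = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(windows_to_features(raw_train), labels_train)

# On the watch, each new window of sensor data would be scored in real time.
new_window = rng.normal(size=(1, WINDOW, 3))
print("Smoking gesture detected:", bool(clf.predict(windows_to_features(new_window))[0]))
```

A real deployment would be trained on labelled recordings of genuine smoking and non-smoking movement, and would need to run efficiently on the watch itself.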

How can we use stopWatch to help people quit smoking?

It’s estimated that about a third of UK smokers try to stop each year, but only a fifth of those who try manage to succeed. For most smokers, an attempt to stop smoking ends with a lapse (having just one cigarette), which can quickly lead to a full relapse to smoking. As stopWatch can detect the exact moment a smoker lapses and has a cigarette, it can trigger the precise delivery of an intervention aimed specifically at helping prevent the lapse turning into a full relapse back to smoking.

Will the intervention work?

A recent article highlighted the potential for using mobile and wearable technologies, like stopWatch, to deliver these kinds of ‘just-in-time’ interventions for smoking. To develop our smoking relapse intervention we will be using the person-based approach, which has an excellent track record of delivering effective health behaviour change interventions. We will also be engaging the highly interdisciplinary cohort of PhD students in the new EPSRC Centre for Doctoral Training in Digital Health and Care, which brings together students with backgrounds in health, computer science, design and engineering.

However, that same article also pointed out that these types of intervention are still new, and that there has been little formal evaluation of them so far. So we don’t yet know how effective these will be, and it’s important interventions of this kind are subject to a thorough evaluation.

We will be working closely with colleagues in NIHR’s Applied Research Collaboration (ARC) West and Bristol Biomedical Research Centre who have expertise in developing and, importantly, evaluating interventions. We will also be working with the CRUK-funded Integrative Cancer Epidemiology Programme at the University of Bristol, collaborating with researchers who have detailed knowledge of developing interventions for specific patient groups.

The stopWatch display
On average, stopWatch detected 71% of cigarettes smoked and of the events stopWatch thought were cigarette smoking, 86% were actually cigarette smoking.

How good is stopWatch at detecting cigarette smoking?

In any system designed to recognise behaviours there is a trade-off between performance and cost/complexity. Other systems that use wearables to detect smoking are available, but these require the wearable be paired with a smartphone and need a data connection to a cloud-based platform in order to work properly. stopWatch is different in that it runs entirely on a smartwatch. It doesn’t need to be paired with a smartphone, and doesn’t need a data connection. This makes it cheaper and simpler than the other systems, but this also means its performance isn’t quite as good.

We recently validated the performance of stopWatch by asking thirteen participants to use it for a day as they went about their normal lives. On average, stopWatch detected 71% of cigarettes smoked (the system’s sensitivity), and of the events stopWatch thought were cigarette smoking, 86% were actually cigarette smoking (its positive predictive value). This compares with 82% and 97% respectively for the systems that require smartphones and data networks.
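For readers unfamiliar with these metrics, the short sketch below shows how they are calculated from detection counts; the counts are invented simply to reproduce percentages close to those quoted, and are not the study’s data.

```python
# Worked sketch of the two detection metrics (counts invented for illustration).
true_positives = 71    # smoking events correctly detected
false_negatives = 29   # smoking events missed
false_positives = 12   # non-smoking events flagged as smoking

# Sensitivity: proportion of actual cigarettes smoked that were detected.
sensitivity = true_positives / (true_positives + false_negatives)

# Positive predictive value: proportion of flagged events that really were smoking.
ppv = true_positives / (true_positives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}, positive predictive value: {ppv:.0%}")
```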

When will stopWatch and the smoking relapse intervention be available and what will they cost?

The stopWatch system itself is available for research purposes to academic partners now, free of charge. We’re open to discussions with potential commercial partners – please get in touch if you’d like to discuss this (contact details below).

We aim to begin work on the smoking relapse intervention based on stopWatch next year, and we expect development and evaluation to take between 18 and 24 months. The cost of the intervention has yet to be determined. That will depend on many factors, including the partnerships we form to take the intervention forward.

What’s next?

We’re currently putting stopWatch through its paces in some tough testing in occupational settings. This will stress the system so that we can identify any weaknesses, find out how to improve the system, and develop recommendations for optimising the use of stopWatch in future studies and interventions.

We’re also developing a new smartwatch-based system for the low burden collection of self-report data called ‘dataWatch’. This is currently undergoing feasibility testing in the Children of the 90s study.

Contact the researchers

Dr Andy Skinner Andy.Skinner@bristol.ac.uk 

Institutionalising preventive health: what are the key issues for Public Health England?

Professor Paul Cairney, University of Stirling

Dr John Boswell, University of Southampton

Richard Gleave, Deputy Chief Executive and Chief Operating Officer, Public Health England

Dr Kathryn Oliver, London School of Hygiene and Tropical Medicine

The Green Paper on preventing ill health was published earlier this week, and many have criticised its proposals for not going far enough. Our guest blog explores some of the challenges that Public Health England faces in providing evidence-informed advice. Read on for reflections from a recent workshop on using evidence to influence local and national strategy, and their implications for academic engagement with policymakers.

On the 12th June, at the invitation of Richard Gleave, Professor Paul Cairney and Dr John Boswell led a discussion on ‘institutionalising’ preventive health with senior members of Public Health England (PHE).

It follows a similar event in Scotland, to inform the development of Public Health Scotland, and the PHE event enjoyed contributions from key members of NHS Health Scotland.

Cairney and Boswell drew on their published work – co-authored with Dr Kathryn Oliver and Dr Emily St Denny (University of Stirling) – to examine the role of evidence in policy and the lessons from comparable experiences in other public health agencies (in England, New Zealand and Australia).

This post summarises their presentation and reflections from the workshop (gathered using the Chatham House rule).

The Academic Argument

Governments face two major issues when they try to improve population health and reduce health inequalities:

  1. Should they ‘mainstream’ policies – to help prevent ill health and reduce health inequalities – across government and/or maintain a dedicated government agency?
  2. Should an agency ‘speak truth to power’ and seek a high profile to set the policy agenda?

Our research provides three messages to inform policy and practice:

  1. When governments have tried to mainstream ‘preventive’ policies, they have always struggled to explain what prevention means and to reform services to make them more preventive than reactive.
  2. Public health agencies could set a clearer and more ambitious policy agenda. However, successful agencies keep a low profile and make realistic demands for policy change. In the short term, they measure success according to their own survival and their ability to maintain the positive attention of policymakers.
  3. Advocates of policy change often describe ‘evidence based policy’ as the answer. However, a comparison between (a) specific tobacco policy change and (b) very general prevention policy shows that the latter’s ambiguity hinders the use of evidence for policy. Governments use three different models of evidence-informed policy. These models are internally consistent but they draw on assumptions and practices that are difficult to mix and match. Effective evidence use requires clear aims driven by political choice.

Overall, they warn against treating any response – (a) the idiom ‘prevention is better than cure’, (b) setting up a public health agency, or (c) seeking ‘evidence based policy’ – as a magic bullet.

Major public health changes require policymakers to define their aims, and agencies to endure long enough to influence policy and encourage the consistent use of models of evidence-informed policy. To do so, they may need to act like prevention ninjas, operating quietly and out of the public spotlight, rather than seeking confrontation and speaking truth to power.


Image by Takver from Australia [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

The Workshop Discussion

The workshop discussion highlighted an impressive level of agreement between the key messages of the presentation and the feedback from most members of the PHE audience.

One aspect of this agreement was predictable, since Boswell et al’s article describes PHE as a relative success story and bases its analysis of prevention ‘ninjas’ on interviews with PHE staff.

However, this strategy is subject to frequent criticism. PHE has to manage the way it communicates with multiple audiences, which is a challenge in itself. One key audience is the public health profession, in which most people see their role as being to provoke public debate, shine a light on corporate practices (contributing to the ‘commercial determinants of health’), and criticise government inaction. In contrast, PHE often seeks to ensure that quick wins are not lost, must engage with a range of affected interests, and uses quiet diplomacy to help maintain productive relationships with senior policymakers. Four descriptions of this difference in outlook and practice stood out:

  1. Walking the line. Many PHE staff gauge how well they are doing in relation to the criticism they receive. Put crudely, they may be doing well politically if they are criticised equally by proponents of public health intervention and vocal opponents of the ‘nanny state’.
  2. Building and maintaining relationships. PHE staff recognise the benefit of following the rules of the game within government, which include not complaining too loudly in public if things do not go your way, expressing appreciation (or at least a recognition of policy progress) if they do, and being a team player with good interpersonal skills rather than simply an uncompromising advocate for a cause. This approach may be taken for granted by interest groups, but it is tricky for public health researchers who seek a sense of critical detachment from policymakers.
  3. Managing expectations. PHE staff recognise the need to prioritise their requirements from government. Phrases such as ‘health in all policies’ often suggest the need to identify a huge number of crucial, and connected, policy changes. However, a more politically feasible strategy is to identify a small number of discrete priorities on which to focus intensely.
  4. Linking national and local. PHE staff who work closely with local government, the local NHS, and other partners, described how they can find it challenging to link ‘place-based’ and ‘national policy area’ perspectives. Local politics are different from national politics, though equally important in implementation and practice.

There was also high agreement on how to understand the idea of ‘evidence based’ or ‘evidence informed’ policymaking (EBPM). Most aspects of EBPM are not really about ‘the evidence’. Policy studies often suggest that improving evidence use requires advocates to:

  • find out where the action is, and learn the rules and language of debate within key policymaking venues, and
  • engage routinely with policymakers, to help them understand their audience, build up trust based on an image of scientific credibility and personal reliability, and know when to exploit temporary opportunities to propose policy solutions.

To this we can add the importance of organisational morale and a common sense of purpose, to help PHE staff operate effectively while facing unusually high levels of external scrutiny and criticism. PHE staff are in the unusual position of being (a) part of the meetings with ministers and national leaders, and (b) active at the front-line with professionals and key publics.

In other words, political science-informed policy studies, and workshop discussions, highlighted the need for evidence advocates to accept that they are political actors seeking to win policy arguments, not objective scientists simply seeking the truth. Scientific evidence matters, but only if its advocates have the political skills to know how to communicate and when to act.

Although there was high agreement, there was also high recognition of the value of internal reflection and external challenge. In that context, one sobering point is that, although PHE may be relatively successful now (it has endured for some time), we know that government agencies are vulnerable to disinvestment and major reform. This vulnerability underpins the need for PHE staff to recognise political reality when they pursue evidence-informed policy change. Put bluntly, they often have to strike a balance between two competing pressures – being politically effective or insisting on occupying the moral high ground – rather than assuming that the latter always helps the former.

This blog post was originally published on the PolicyBristol blog.

Drinking in pregnancy: lasting effects of low-level alcohol use?

Kayleigh Easey, a PhD student and member of the Tobacco and Alcohol Research Group (TARG) at the School of Psychological Science at the University of Bristol, takes a look at a recent systematic review investigating effects of parental alcohol use and offspring mental health.

Follow Kayleigh and TARG on Twitter

It’s generally well known that drinking large amounts of alcohol during pregnancy is linked to Foetal Alcohol Syndrome (FAS) and to negative outcomes such as premature birth and an increased risk of miscarriage. However, less is known about the effects of low to moderate alcohol use during pregnancy on offspring outcomes after birth, and even less about mental health outcomes in the child, particularly internalising disorders such as depression. Despite government guidelines being updated by the Department of Health in January 2016, advising pregnant women that the safest approach is to abstain from drinking alcohol altogether throughout pregnancy, there remains uncertainty amongst the public as to whether a ‘drink or two’ is harmful or not.

Is a ‘drink or two’ harmful during pregnancy?

Researchers within the field mostly agree that abstinence from alcohol during pregnancy is the safest approach, but the evidence to support this is relatively weak, often due to study design and sample limitations. A previous meta-analysis highlighted that there are relatively few studies investigating low levels of alcohol use in pregnancy, and its analyses mainly focused on pregnancy outcomes such as gestational diabetes and on childhood outcomes linked to FAS, such as behavioural problems. Until now, a comprehensive review had not been undertaken on the effects of light to moderate drinking in pregnancy on offspring mental health.

Our research sought to review and summarise the literature currently available on drinking alcohol in pregnancy and offspring mental health outcomes. Overall, we found that over half of the analyses included in the review reported an association between drinking in pregnancy and offspring mental health problems, specifically anxiety, depression, total problems and conduct disorder. By excluding FAS samples we could be more confident that the findings we were reporting were representative of lower levels of drinking in pregnant women. However, we can’t rule out that many of the included studies still captured higher levels of alcohol use in pregnancy, and potentially children with undiagnosed foetal alcohol spectrum disorders – a known problem in research on prenatal alcohol use.

Our review also highlights the differences across studies in how drinking in pregnancy and offspring mental health were measured, with all but four studies using a different measure of drinking alcohol in pregnancy, making comparison difficult. This means it is difficult to establish across studies whether there is a ‘cut-off’ level for what is potentially a hazardous level of alcohol exposure during pregnancy.

Image by Jill Wellington from Pixabay

Abstinence seems to be the safest approach

The associations we find do not provide evidence of a causal effect on their own, which can be difficult to demonstrate. However, it is important for women to understand what the current evidence shows, to allow them to make informed decisions about drinking during pregnancy. Women should be able to use this information to inform their choices, and to avoid potential risks from alcohol use, both during pregnancy and as a precautionary measure when trying to conceive.

As such, people may take from this that the current advice of abstaining from alcohol during pregnancy is certainly sensible, at least until evidence is available to indicate otherwise. We suggest that future work using large cohort studies is needed to investigate whether light to moderate alcohol use in pregnancy may be harmful to different mental health outcomes in children, which is exactly what I am currently doing within my PhD research using the Children of the 90s study.

This blog post was originally posted on the Alcohol Policy UK website.

Can lifestyle changes prevent prostate cancer or delay the need for treatment?

This men’s health week, National Institute for Health Research Biomedical Research Centre and Integrative Cancer Epidemiology Programme PhD student Meda Sandu outlines findings from research on how we can best prevent and treat prostate cancer.

Follow Meda on twitter

Prostate cancer (PCA) is the second most common cancer in the adult population in the UK, with over 47,000 new cases diagnosed each year. Over 400,000 people assigned male at birth live with or after a diagnosis of PCA. Localised PCA is cancer that is confined to the prostate gland and has not spread to other organs. Localised PCA often grows slowly, may not cause any symptoms and may not require treatment. However, occasionally this type of cancer is aggressive in nature, can spread fast and will require treatment. Patients who undergo radical treatment, such as surgery and radiotherapy, often report side effects which significantly impact their wellbeing and enjoyment of life, such as urinary and bowel incontinence, low libido, erectile dysfunction, fatigue and mood swings. Our research looks at lifestyle interventions that may help prevent PCA, as well as identifying cases where treatment may be delayed, in an attempt to best support outcomes for people with PCA.

Image by pixel2013 from Pixabay

How does treatment choice affect survival?

In the ProtecT study, people were randomised to various treatments or to active surveillance, where no treatment is given but the patient is regularly followed up. The study found similar chances of surviving localised PCA across the groups, with those under active surveillance having only a very small decrease in survival compared to the other groups. This has spurred research into identifying modifiable factors which could delay PCA progression and therefore avoid unnecessary treatment.

One of the aims of our research is to look at dietary and lifestyle changes that are acceptable and achievable to PCA patients, which could reduce PCA risk and progression. These could prevent PCA or, for those who have already been diagnosed with localised disease, delay treatment and therefore avoid the associated side effects.

Dietary and physical activity interventions in people with PCA

In the PrEvENT  feasibility randomised controlled trial, patients who underwent prostate removal surgery were randomly assigned to both a dietary and a physical activity intervention. The dietary intervention consisted of either:

  • a lycopene supplement – an antioxidant found in tomatoes that has previously been suggested to be protective against PCA
  • eating 5 portions of fruit and vegetables per day and replacing dairy milk with a vegan option (soy, almond, oat, coconut, etc.)
  • continuing as normal.

The physical activity intervention asked participants to do 30 minutes of brisk walking five times a week.

Image by PublicDomainArchive from Pixabay

For each intervention, we looked at the change in metabolites, which are very small molecules found in blood that reflect metabolic patterns – a very useful measure in diseases where there are metabolic changes, such as cancer. We found that eating more fruit and vegetables and decreasing dairy changed blood metabolite levels. Of particular interest was the change in pyruvate levels, a metabolite used as fuel in the pathway of cancer cell proliferation. This suggests that our interventions could lead to less energy being available for the proliferation of cancerous cells, which could lead to lower PCA risk and progression.

Can we predict which PCA cases do not need treatment?

A second aim of my research is to identify blood markers that can help distinguish disease that is unlikely to cause problems from aggressive disease that will advance rapidly and require treatment. We looked at the ProtecT trial and provisionally identified metabolites that could help predict PCA progression. This would allow clinicians to more accurately decide if the patient should take up treatment or if active surveillance would be appropriate.

Patients diagnosed with localised PCA have a 96% chance of surviving 15 years after diagnosis. However, PCA risk factors have yet to be conclusively identified. In addition, the diagnostic techniques are invasive and there is uncertainty around which localised cases are likely to advance. More research is therefore needed to establish potential risk factors which could help prevent PCA, and blood-based markers that could predict the aggressiveness of localised PCA cases. We are also looking at the relationship between PCA risk and progression and blood DNA methylation markers, which allow cells to control the expression of genes, have previously been suggested to respond to both environmental factors and causes of cancer, and could help us better understand the aetiology of PCA.

So what does this mean for people with prostate cancer?

Although more research is required, as our studies were small and did not aim to provide definitive answers, we did find some evidence to suggest that some lifestyle changes – namely increased fruit and vegetable consumption, replacing milk with non-dairy options, and walking 30 minutes a day, 5 times a week – are acceptable to PCA patients, and that these interventions may have promising effects on blood metabolites. Identifying lifestyle factors which may have a protective role could help prevent PCA cases. Our research also identified metabolites which may help predict the aggressiveness of PCA, which could help patients diagnosed with localised PCA avoid serious side effects by not undertaking unnecessary treatment.

Further reading and resources

Find out more about prostate cancer

Find out more about the treatments available for prostate cancer

Journal paper: Hamdy et al (2016) 10 year outcomes after monitoring, surgery, or radiotherapy for localised prostate cancer (the ProtecT trial)

Journal paper: Hackshaw-McGeagh et al (2016) Prostate cancer – evidence of exercise and nutrition trial (PrEvENT): study protocol for a randomised controlled feasibility trial.


Why are people who stay in school longer less likely to get heart disease?

Alice Carter, PhD researcher at the IEU, outlines the key findings from a paper published in BMJ today.

Follow Alice on twitter


Heart disease remains the leading cause of death globally, causing over 17.5 million deaths annually. Whilst death rates from heart disease are decreasing in high income countries, the most socioeconomically deprived individuals remain at the greatest risk of developing heart disease. Socioeconomic causes and the wider determinants of health (including living and working conditions, health care services, housing and a number of other wider factors) are suggested to be the most important drivers of health. Behavioural and lifestyle factors, such as smoking, alcohol consumption, diet and exercise, are the second most important contributor to health and disease.

Why does education matter?

Staying in school for longer has been shown to lead to better lifelong health, including a reduced risk of heart disease (cardiovascular disease) and dementia. We also know that those who stay in school are more likely to adopt healthy behaviours. For example, they are less likely to smoke, and more likely to eat a healthy diet and take part in physical activity. These factors can, in turn, reduce the risk of heart disease, for example by reducing body mass index (BMI) or blood pressure. We wanted to understand whether these risk factors (BMI, systolic blood pressure and lifetime smoking behaviour) could explain why those who stay in school for longer are less likely to get heart disease, and how much of the effect they explained.

What did we find?

We found that, individually, BMI, systolic blood pressure and smoking behaviour explained up to 18%, 27% and 34% of the effect of education on heart disease, respectively. When we looked at all three risk factors together, they explained around 40% of the effect. This means that up to 40% of the protective effect of staying in school on heart disease can be explained by the fact that those who stay in school tend to lead healthier lives. In this work we looked at four outcomes: coronary heart disease (the gradual build-up of fatty deposits in arteries), stroke, myocardial infarction (heart attack) and all subtypes of heart disease combined. For all the outcomes we looked at, we found similar results. Notably, the 40% combined effect is smaller than the amount estimated by simply summing the individual effects, which suggests there is overlap between the three risk factors in how they cause heart disease.
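For readers curious about how a “proportion of the effect explained” is obtained, here is a deliberately simplified sketch using the difference method with made-up effect sizes; the analysis in the paper is more involved, but the underlying idea is similar.

```python
# Illustrative sketch of the "proportion mediated" idea using the difference method.
# Effect sizes below are hypothetical, not figures from the paper.

total_effect = -0.30   # effect of more education on heart disease (e.g. log odds)
direct_effect = -0.18  # effect remaining after accounting for BMI, blood pressure
                       # and smoking together

indirect_effect = total_effect - direct_effect        # part acting via the mediators
proportion_mediated = indirect_effect / total_effect

print(f"Proportion of the education effect explained: {proportion_mediated:.0%}")
# -> 40% in this toy example, matching the combined estimate quoted in the text;
#    the individual proportions do not simply add up because the mediators overlap.
```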

How did we do this?

In our work, we used a few different methods and data sources to answer our questions.

  • We started by looking at observational data (that is, data self-reported by the participants of the study) in UK Biobank – a large population cohort study of around 500,000 individuals. Of these, almost 220,000 people were eligible to be in our analysis.
  • We looked at how their education affected their risk of four types of heart disease. We then looked at how the intermediate factors, BMI, blood pressure and smoking, could help explain these results.
  • Secondly, we replicated these analyses using two types of Mendelian randomisation analyses (a form of genetic instrumental variable analysis, see below), firstly in the UK Biobank group and secondly by using summary data from other studies in the area.

Why use genetic data?

Typically, epidemiologists collect data by asking people to report their behaviours, lifestyle characteristics and any diagnoses from a doctor. Alternatively, people in a study may have been to a clinic where their BMI or blood pressure is measured. However, this type of data can lead to inaccuracies in analyses.  This could be because:

  • measures are not reported (or measured) accurately. For example, it can be difficult to get an accurate measure of blood pressure, which changes throughout the day, and even just going to a clinic can result in higher blood pressure.
  • there may be other variables associated with both the exposure and outcome (confounding). One example of this is suggesting that grey hair causes cancer. Really, age is responsible for i) leading to grey hair and ii) leading to cancer. Without accounting for age, we might suggest a false association exists (see Figure 1, and the short simulation after it). In our study of education, this could be ethnicity, for example, which influences both staying in school and the risk of heart disease.
  • or an individual with ill health may have been advised to change their lifestyle (reverse causality). For example, an individual with a high BMI may have had a heart attack and have been advised by their doctor to lose weight to avoid having a second heart attack.
Diagram showing a picture of grey hair with an arrow linking to cancer, and a third variable - age - above, which explains both.
Figure 1: Does grey hair really cause cancer?
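To make the grey hair example concrete, here is a tiny, purely illustrative simulation in which age drives both grey hair and cancer, grey hair has no causal effect on cancer, and yet a naive comparison shows a clear association that disappears once age is taken into account:

```python
# Tiny simulation of confounding: age causes both grey hair and cancer risk,
# grey hair has no causal effect on cancer, yet the two appear associated.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

age = rng.uniform(20, 90, n)
grey_hair = (rng.uniform(size=n) < (age - 20) / 80).astype(int)  # more likely with age
cancer = (rng.uniform(size=n) < 0.001 * age).astype(int)         # risk rises with age only

# Crude (confounded) comparison: cancer risk by grey-hair status.
crude_diff = cancer[grey_hair == 1].mean() - cancer[grey_hair == 0].mean()
print(f"Crude risk difference (grey vs not): {crude_diff:.3f}")   # clearly positive

# Comparing people within a narrow age band removes the confounding by age.
band = (age >= 60) & (age < 65)
adjusted_diff = (cancer[band & (grey_hair == 1)].mean()
                 - cancer[band & (grey_hair == 0)].mean())
print(f"Risk difference within ages 60-64: {adjusted_diff:.3f}")  # approximately zero
```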


One way to overcome these limitations is to use Mendelian randomisation. This method uses the genetic variation in an individual’s DNA to help understand causal relationships. Every individual has their own unique genetic make-up, which is determined, and fixed, at the point of conception.

We are interested in single changes to the DNA sequence, called single nucleotide polymorphisms (or SNPs). For all of our risk factors of interest (education, BMI, blood pressure and smoking), there are a number of SNPs that contribute towards the observed measures and that are not influenced by factors later in life. This means Mendelian randomisation estimates are unlikely to be affected by biases such as confounding, reverse causality or measurement error, as we might expect when we rely on observational data. By using these genetic variants, we can improve our understanding of whether, and how, a risk factor truly causes an outcome, or whether the association might be spurious.
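As a simplified illustration of how several SNPs are combined into a single Mendelian randomisation estimate (the SNP-level associations below are hypothetical, and real analyses add a range of sensitivity analyses), the standard inverse-variance-weighted estimate is just a weighted average of per-SNP ratios:

```python
# Minimal inverse-variance-weighted (IVW) Mendelian randomisation sketch.
# SNP-level associations below are hypothetical, for illustration only.
import math

# (SNP effect on exposure, SNP effect on outcome, SE of the outcome effect)
snps = [
    (0.12, 0.030, 0.010),
    (0.08, 0.015, 0.008),
    (0.20, 0.055, 0.012),
    (0.05, 0.010, 0.009),
]

# Per-SNP Wald ratios and their (first-order) inverse variances.
ratios, weights = [], []
for beta_gx, beta_gy, se_gy in snps:
    ratios.append(beta_gy / beta_gx)
    weights.append((beta_gx / se_gy) ** 2)   # 1 / variance of the ratio

# IVW estimate: weighted average of the per-SNP ratios.
ivw = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
se_ivw = math.sqrt(1 / sum(weights))

print(f"IVW causal estimate: {ivw:.3f} (SE {se_ivw:.3f})")
```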

What else might be important?

Although we find that BMI, blood pressure and smoking behaviour explain a substantial amount of the effect, over 50% of the effect of education on heart disease remains unexplained. In some small sensitivity analyses we looked at the role of diet and exercise as intermediate risk factors; however, these risk factors did not contribute anything beyond the three main risk factors we looked at. Other social factors may be involved. For example, education is linked to higher income and lower levels of workplace stress, but these factors may also be related to those we’ve looked at in this analysis.

One further suggestion for what may be responsible is medication prescribing and subsequent adherence (or compliance). For example, individuals with higher education may be more likely to be prescribed statins (cholesterol lowering drugs) compared to someone who left school earlier, but with the same requirement for medication. Subsequently, of those who are prescribed statins for example, perhaps those with higher education are more likely to take them every day, or as prescribed. We have work ongoing to see whether these factors play a role.

What does this mean for policy?

Past policies that increased the duration of compulsory education have improved health, and such endeavours must continue. However, intervening directly in education is difficult to achieve without social and political reforms.

Although we did not directly look at the impact of interventions in this area, our work suggests that by intervening on these three risk factors, we could reduce the number of cases of heart disease attributable to lower levels of education. Public health policy typically aims to improve health by preventing disease across the population. However, perhaps a targeted approach is required to reduce the greatest burden of disease.

In order to achieve maximum reductions in heart disease we now need to i) identify what other intermediate factors may be involved and ii) work to understand how effective interventions could be designed to reduce levels of BMI, blood pressure and smoking in those who leave school earlier. Additionally, our work looked at predominantly European populations, therefore replicating analyses on diverse populations will be important to fully understand the population impact.

We hope this work provides a starting point for considering how we could reduce the burden of heart disease in those most at risk, and work to reduce health inequalities.

Read the full paper in the BMJ