stopWatch – a smartwatch system that could help people quit smoking

Dr Andy Skinner and Chris Stone

Follow Andy and Chris on Twitter


October sees the return of Stoptober, a Public Health England initiative to encourage smokers to quit. Campaigns like this have helped reduce smoking in the UK over several decades. However, about 15% of the UK’s population still smokes, and this costs the NHS more than £2.5bn each year.

To help address this, the NHS Long Term Plan has identified a range of measures to encourage healthier behaviours, including the need to speed up the introduction of innovative new health interventions based on digital technologies.

Here in the MRC IEU we’ve been working on a new wearable system that could help people stop smoking: stopWatch is a smartwatch-based system that automatically detects cigarette smoking. Because it can detect the moment someone smokes a cigarette, it can trigger the delivery of interventions to help that person quit at precisely the time those interventions are likely to be most effective.

[Image: a hand and wrist wearing stopWatch and holding a cigarette]
The stopWatch could help people to stop smoking

What is stopWatch, and how does it work?

stopWatch is an application that runs on a commercially available Android smartwatch. Smartwatches now come equipped with motion sensors, just like the ones in smartphones that measure step counts and activity levels. Because a smartwatch is worn on the wrist, its motion sensors can tell us how a person’s hand is moving. stopWatch takes data from these sensors and applies machine learning methods to look for the particular pattern of hand movements that is unique to smoking a cigarette.
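To give a flavour of how this kind of detector can be built (an illustrative sketch only, not stopWatch’s actual implementation, with synthetic data throughout), the usual recipe is to slice the accelerometer stream into short windows, summarise each window with a few features, and train a classifier on labelled examples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarise one window of tri-axial accelerometer data (n_samples x 3 axes)."""
    return np.concatenate([
        window.mean(axis=0),                           # average wrist orientation
        window.std(axis=0),                            # movement intensity per axis
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # how jerky the movement is
    ])

# Synthetic stand-in for labelled training data: 200 windows of 50 samples each,
# labelled 1 (hand-to-mouth smoking gesture) or 0 (any other movement).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, 2, size=200)

X = np.array([extract_features(w) for w in windows])
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# Classify a new window of wrist movement.
new_window = rng.normal(size=(50, 3))
print(clf.predict([extract_features(new_window)]))  # 1 = looks like smoking
```

A real detector would likely also smooth predictions across consecutive windows, since smoking a cigarette is a sequence of puffs rather than a single movement.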

How can we use stopWatch to help people quit smoking?

It’s estimated that about a third of UK smokers try to stop each year, but only a fifth of those who try succeed. For most smokers an attempt to stop smoking ends with a lapse (having just one cigarette) that can quickly lead to a full relapse. As stopWatch can detect the exact moment a smoker lapses and has a cigarette, it can trigger the precise delivery of an intervention aimed specifically at preventing that lapse from turning into a full relapse.

Will the intervention work?

A recent article highlighted the potential for using mobile and wearable technologies, like stopWatch, to deliver these kinds of ‘just-in-time’ interventions for smoking. To develop our smoking relapse intervention we will be using the person-based approach, which has an excellent track record of delivering effective health behaviour change interventions. We will also be engaging the highly interdisciplinary cohort of PhD students in the new EPSRC Centre for Doctoral Training in Digital Health and Care, which brings together students with backgrounds in health, computer science, design and engineering.

However, that same article also pointed out that these types of intervention are still new, and that there has been little formal evaluation of them so far. So we don’t yet know how effective they will be, and it’s important that interventions of this kind are subjected to thorough evaluation.

We will be working closely with colleagues in NIHR’s Applied Research Collaboration (ARC) West and Bristol Biomedical Research Centre, who have expertise in developing and, importantly, evaluating interventions. We will also be working with the CRUK-funded Integrative Cancer Epidemiology Unit at the University of Bristol, collaborating with researchers who have detailed knowledge of developing interventions for specific patient groups.

[Image: the stopWatch display]
On average, stopWatch detected 71% of cigarettes smoked, and of the events stopWatch thought were cigarette smoking, 86% were actually cigarette smoking.

How good is stopWatch at detecting cigarette smoking?

In any system designed to recognise behaviours there is a trade-off between performance and cost/complexity. Other systems that use wearables to detect smoking are available, but these require the wearable be paired with a smartphone and need a data connection to a cloud-based platform in order to work properly. stopWatch is different in that it runs entirely on a smartwatch. It doesn’t need to be paired with a smartphone, and doesn’t need a data connection. This makes it cheaper and simpler than the other systems, but this also means its performance isn’t quite as good.

We recently validated the performance of stopWatch by asking thirteen participants to use stopWatch for a day as they went about their normal lives. On average, stopWatch detected 71% of cigarettes smoked (the system’s sensitivity), and of the events stopWatch classified as cigarette smoking, 86% actually were (its precision, or positive predictive value). This compares with a reported sensitivity of 82% and specificity of 97% for the systems that require smartphones and data networks.
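To make those two figures concrete, here is a minimal worked example with made-up confusion-matrix counts (not the study’s data):

```python
# Hypothetical counts from a day of wear:
true_positives = 30   # cigarettes smoked and detected
false_negatives = 12  # cigarettes smoked but missed
false_positives = 5   # non-smoking movements flagged as smoking

sensitivity = true_positives / (true_positives + false_negatives)  # 30/42 ~ 0.71
precision = true_positives / (true_positives + false_positives)    # 30/35 ~ 0.86
print(f"sensitivity = {sensitivity:.2f}, precision = {precision:.2f}")
```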

When will stopWatch and the smoking relapse intervention be available and what will they cost?

The stopWatch system itself is available for research purposes to academic partners now, free of charge. We’re open to discussions with potential commercial partners – please get in touch if you’d like to discuss this (contact details below).

We aim to begin work on the smoking relapse intervention based on stopWatch next year, and we expect development and evaluation to take between 18 and 24 months. The cost of the intervention has yet to be determined. That will depend on many factors, including the partnerships we form to take the intervention forward.

What’s next?

We’re currently putting stopWatch through its paces in some tough testing in occupational settings. This will stress the system so that we can identify any weaknesses, find out how to improve the system, and develop recommendations for optimising the use of stopWatch in future studies and interventions.

We’re also developing a new smartwatch-based system, called ‘dataWatch’, for the low-burden collection of self-report data. This is currently undergoing feasibility testing in the Children of the 90s study.

Contact the researchers

Dr Andy Skinner: Andy.Skinner@bristol.ac.uk

Social media in peer review: the case of CCR5

Last week IEU colleague Dr Sean Harrison was featured on BBC’s Inside Science, discussing his role in the CCR5-mortality story. Here’s the BBC’s synopsis:

‘In November 2018 news broke via YouTube that He Jiankui, then a professor at Southern University of Science and Technology in Shenzhen, China had created the world’s first gene-edited babies from two embryos. The edited gene was CCR5 delta 32 – a gene that conferred protection against HIV. Alongside the public, most of the scientific community were horrified. There was a spate of correspondence, not just on the ethics, but also on the science. One prominent paper was by Rasmus Nielsen and Xinzhu Wei of the University of California, Berkeley. They published a study in June 2019 in Nature Medicine that found an increased mortality rate in people with an HIV-preventing gene variant. It was another stick used to beat Jiankui – had he put a gene in these babies that was not just not helpful, but actually harmful? However it now turns out that the study by Nielsen and Wei has a major flaw. In a series of tweets, Nielsen was notified of an error in the UK Biobank data and his analysis. Sean Harrison at the University of Bristol tried and failed to replicate the result using the UK Biobank data. He posted his findings on Twitter and communicated with Nielsen and Wei, who have now requested a retraction. UCL’s Helen O’Neill is intimately acquainted with the story and she chats to Adam Rutherford about the role of social media in the scientific process of this saga.’

Below, we re-post Sean’s blog, which outlines how the story unfolded and the analysis he ran.

Follow Sean on Twitter

Listen to Sean on Inside Science

*****

“CCR5-∆32 is deleterious in the homozygous state in humans” – is it?

I debated for quite a long time whether to write this post. I had said pretty much everything I’d wanted to say on Twitter, but I’ve since done some more analysis, and a post might be clearer than another Twitter thread.

To recap, a couple of weeks ago a paper by Xinzhu (April) Wei & Rasmus Nielsen of the University of California was published, claiming that a deletion in the CCR5 gene increased mortality (in white people of British ancestry in UK Biobank). I had some issues with the paper, which I posted here. My tweets got more attention than anything I’d posted before. I’m pretty sure they got more attention than my published papers and conference presentations combined. ¯\_(ツ)_/¯

The CCR5 gene is topical because, as the paper states in the introduction:

In late 2018, a scientist from the Southern University of Science and Technology in Shenzhen, Jiankui He, announced the birth of two babies whose genomes were edited using CRISPR

To be clear, gene-editing human babies is awful. Selecting zygotes that don’t have a known, life-limiting genetic abnormality may be reasonable in some cases, but directly manipulating the genetic code is something else entirely. My arguments against the paper did not stem from any desire to protect the actions of Jiankui He; rather, I wanted to a) highlight a peer review process that was actually pretty awful, b) encourage better use of UK Biobank genetic data, and c) refute an analysis that seemed likely to be biased.

This paper has received an incredible amount of attention. If it is flawed, then poor science is being heavily promoted. Apart from the obvious problems with promoting something that is potentially biased, others may try to do their own studies using this as a guideline, which I think would be a mistake.


I’ll quickly recap the initial problems I had with the paper (excluding the things that were easily solved by reading the online supplement), then go into what I did to try to replicate the paper’s results. I ran some additional analyses that I didn’t post on Twitter, so I’ll include those results too.

Full disclosure: in addition to tweeting to me, Rasmus and I exchanged several emails, and they ran some additional analyses. I’ll try not to talk about any of these analyses as it wasn’t my work, but, if necessary, I may mention pertinent bits of information.

I should also mention that I’m not a geneticist. I’m an epidemiologist/statistician/evidence synthesis researcher who for the past year has been working with UK Biobank genetic data in a unit that is very, very keen on genetic epidemiology. So while I’m confident I can critique the methods for the main analyses with some level of expertise, and have spent an inordinate amount of time looking at this paper in particular, there are some things where I’ll say I just don’t know what the answer is.

I don’t think I’ll write a formal response to the authors in a journal – if anyone is going to, I’ll happily share whatever information you want from my analyses, but it’s not something I’m keen to do myself.

All my code for this is here.

The Issues

Not accounting for relatedness

Not accounting for relatedness (i.e. related people in a sample) is a problem. It can bias genetic analyses through population stratification or familial structure, and can be easily dealt with by removing related individuals from a sample (or by fancy analysis techniques, e.g. BOLT-LMM). The paper ignored this and used everyone.

Quality control

Quality control (QC) is also an issue. When the IEU at the University of Bristol was QCing the UK Biobank genetic data, they looked for sex mismatches, sex chromosome aneuploidy (having sex chromosomes different to XX or XY), and participants with outliers in heterozygosity and missing rates (yeah, ok, I don’t have a good grasp on what this means, but I see it as poor data quality for particular individuals). The paper ignored these too.

Ancestry definition

The paper states it looks at people of “British ancestry”. Judging by the number of participants in the paper and the reference they used, the authors meant “white British ancestry”. I feel this should have been picked up in peer review, since the terms are different. The Bycroft article referenced uses “white British ancestry”, so it would certainly have been clearer to stick to that.

Covariable choice

The main analysis should also have been adjusted for all principal components (PCs) and centre (where participants went to register with UK Biobank). This helps to control for population stratification, and we know that UK Biobank has problems with population stratification. I thought choosing variables to include as covariables based on statistical significance was discouraged, but apparently I was wrong. Still, I see no plausible reason to do so in this case – principal components represent population stratification, population stratification is a confounder of the association between SNPs and any outcome, so adjust for them. There are enough people in this analysis to take the hit.

The analysis


I don’t know why the main analysis was a ratio of the crude mortality rates at 76 years of age (rather than a Cox regression), and I don’t know why there are no confidence intervals (CIs) on the estimate. The CI exists; it’s in the online supplement. Peer review should have had problems with this. It is unconscionable that any journal, let alone a top-tier journal, would publish a paper when the main result doesn’t have any measure of the variability of the estimate. A P value isn’t good enough here: when the error distribution is non-symmetrical, you can’t back-calculate the standard error from it.

So why is the CI buried in an additional file when it would have been so easy to put it into the main text? The CI is from bootstrapping, whereas the P value is from a log-rank test, and the CI of the main result crosses the null. The main result is non-significant and significant at the same time. This could be a reason why the CI wasn’t in the main text.

It’s also noteworthy that although the deletion appears strongly to be recessive (it only has an effect if both chromosomes have the deletion), the main analysis reports delta-32/delta-32 against +/+, which surely has less power than delta-32/delta-32 against +/+ or delta-32/+. The CI might have been significant otherwise.


I think it’s wrong to present one-sided P values (in general, but definitely here). The hypothesis should not have been that the CCR5 deletion would increase mortality; it should have been ambivalent, like almost all hypotheses in this field. The whole point of the CRISPR was that the babies would be more protected from HIV, so unless the authors had an unimaginably strong prior that CCR5 was deleterious, why would they use one-sided P values? Cynically, but without a strong reason to think otherwise, I can only imagine because one-sided P values are half as large as two-sided P values.

The best analysis, I think, would have been a Cox regression. Happily, the authors did this after the main analysis. But the full analysis that included all PCs (but not centre) was relegated to the supplement, for reasons that are baffling since it gives the same result as using just 5 PCs.
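For readers who want to see what such a model looks like, here is a minimal sketch of a Cox regression of mortality on a recessive genotype indicator, adjusted for PCs and centre, written in Python with the lifelines library. All file and column names are hypothetical, and this is neither the paper’s code nor mine (my analyses below were run in Stata):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per participant, with
#   genotype         1 for delta-32/delta-32, 0 otherwise (recessive coding)
#   follow_up_years  time from baseline to death or censoring
#   died             1 if the participant died, 0 if censored
#   pc1..pc40        principal components
df = pd.read_csv("analysis_dataset.csv")
df = pd.get_dummies(df, columns=["centre"], drop_first=True)  # centre as dummies

covariates = [c for c in df.columns if c != "id"]

cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="follow_up_years", event_col="died")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```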

Also, the survival curve should have CIs. We know nothing about whether those curves are separate without CIs. I reproduced survival curves with a different SNP (see below) – the CIs are large.


I’m not going to talk about the Hardy-Weinberg Equilibrium (HWE, inbreeding) analysis – it’s still not an area I’m familiar with, and I don’t really think it adds much to the analysis. There are loads of reasons why a SNP might be out of HWE – dying early is certainly one of them, but it feels like this would just be a confirmation of something you’d know from a Cox regression.

Replication Analyses

I have access to UK Biobank data for my own work, so I didn’t think it would be too complex to replicate the analyses to see if I came up with the same answer. I don’t have access to rs62625034, the SNP the paper says is a great proxy of the delta-32 deletion, for reasons that I’ll go into later. However, I did have access to rs113010081, which the paper said gave the same results. I also used rs113341849, which is another SNP in the same region that has extremely high correlation with the deletion (both SNPs have R2 values above 0.93 with rs333, which is the rs ID for the delta-32 deletion). Ideally, all three SNPs would give the same answer.

First, I created the analysis dataset:

  1. Grabbed age, sex, centre, principal components, date of registration and date of death from the UK Biobank phenotypic data
  2. Grabbed the genetic dosages of rs113010081 and rs113341849 from the UK Biobank genetic data
  3. Grabbed the list of related participants in UK Biobank, and our usual list of exclusions (including withdrawals)
  4. Merged everything together, estimated the follow-up time for everyone, and created a dummy variable for death (1 for those who died, 0 for everyone else) and another for relatedness (0 for unrelated people, 1 for those I would typically remove because of relatedness)
  5. Dropped the standard exclusions, because there aren’t many and they really shouldn’t be here
  6. Created dummy variables for the SNPs, with 1 for participants with two effect alleles (corresponding to a proxy for having two copies of the delta-32 deletion) and 0 for everyone else (a compressed code sketch of steps 4 to 6 follows this list)
  7. Also checked what happened if I left the dosage as 0, 1 or 2, but since there was no evidence that 1 was any different from 0 in terms of mortality, I only report the 2 versus 0/1 results
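For concreteness, here is what steps 4 to 6 might look like in Python/pandas, with every file and column name hypothetical:

```python
import pandas as pd

pheno = pd.read_csv("phenotypes.csv")     # age, sex, centre, PCs, registration/death dates
geno = pd.read_csv("dosages.csv")         # imputed dosages for rs113010081 and rs113341849
related = pd.read_csv("related_ids.csv")  # participants flagged as related

df = pheno.merge(geno, on="id")
for col in ("date_of_registration", "date_of_death"):
    df[col] = pd.to_datetime(df[col])

# Death indicator and follow-up time (those still alive are censored at end of follow-up).
df["died"] = df["date_of_death"].notna().astype(int)
censor_date = pd.Timestamp("2018-02-14")  # hypothetical end of follow-up
end = df["date_of_death"].fillna(censor_date)
df["follow_up_years"] = (end - df["date_of_registration"]).dt.days / 365.25

# Relatedness flag: 1 for participants I would usually drop.
df["related"] = df["id"].isin(related["id"]).astype(int)

# Recessive coding: 1 for (a proxy of) two copies of the deletion, 0 for everyone else.
df["rs113010081_tri"] = (df["rs113010081_dosage"] > 1.5).astype(int)
```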

I conducted 12 analyses in total (6 for each SNP), but they were all pretty similar:

  1. Original analysis: time = study time (so x-axis went from 0 to 10 years, survival from baseline to end of follow-up), with related people included, and using age, sex, principal components and centre as covariables
  2. Original analysis, without relateds: as above, but excluding related people
  3. Analysis 2: time = age of participant (so x-axis went from 40 to 80 years, survival up to each year of life, which matches the paper), with related people included, and using sex, principal components and centre as covariables
  4. Analysis 2, without relateds: as above, but excluding related people
  5. Analysis 3: as analysis 2, but without covariables
  6. Analysis 3, without relateds: as above, but excluding related people

With this suite of analyses, I was hoping to find out whether:

  • either SNP was associated with mortality
  • including covariables changed the results
  • the time variable changed the results
  • including relateds changed the results

Results

[Table: hazard ratios from the 12 replication analyses]

I found… Nothing. There was very little evidence that the SNPs were associated with mortality (the hazard ratios, HRs, were barely different from 1, and the confidence intervals were very wide). There was also little evidence that including relateds, adding covariables, or changing the time variable altered the results.

Here’s just one example of the many survival curves I made, looking at delta-32/delta-32 (1) versus both other genotypes in unrelated people only (not adjusted, as Stata doesn’t want to give me a survival curve with CIs that is also adjusted) – this corresponds to the analysis in row 6.

[Figure: survival curves with confidence intervals for delta-32/delta-32 versus other genotypes]

You’ll notice that the CIs overlap. A lot. You can also see that both events and participants are rare in the late 70s (the long horizontal and vertical stretches) – I think that’s because there are relatively few people who were that old at the end of their follow-up. Average follow-up time was 7 years, so to estimate mortality up to 76 years, I imagine you’d want quite a few people to be 69 years or older, so they’d be 76 at the end of follow-up (if they didn’t die). Only 3.8% of UK Biobank participants were 69 years or older.

In my original tweet thread, I only did the analysis in row 2, but I think all the results are fairly conclusive for not showing much.

In a reply to me, Rasmus stated:

[Tweet from Rasmus Nielsen]

This is the claim that turned out to be incorrect:

[Screenshot: the claim that the other SNPs gave the same results (data not shown)]

Never trust data that isn’t shown – apart from anything else, when repeating analyses and changing things each time, it’s easy to forget to redo an extra analysis if the manuscript doesn’t contain the results anywhere.

This also means I couldn’t directly replicate the paper’s analysis, as I don’t have access to rs62625034. Why not? I’m not sure, but the likely explanation is that it didn’t pass the quality control process (either ours or UK Biobank’s).

SNPs

I’ve concluded that the only possible reason for a difference between my analysis and the paper’s analysis is that the SNPs are different. Much more different than would be expected, given the high correlation between my two SNPs and the deletion, which the paper claims rs62625034 is measuring directly.

One possible reason for this is the imputation of SNP data. As far as I can tell, neither of my SNPs was measured directly; both were imputed. This isn’t uncommon for any particular SNP, as imputation of SNP data is generally very good. As I understand it, genetic code is transmitted in blocks, and the blocks are fairly steady between people of the same population, so if you measure one or two SNPs in a block, you can deduce the remaining SNPs in the same block.

In any case there is a lot of genetic data to start with – each genotyping chip measures hundreds of thousands of SNPs. Also, we can measure the likely success rate of the imputation, and SNPs that are poorly imputed (for a given value of “poorly”) are removed before anyone sees them.

The two SNPs I used had good “info scores” (around 0.95, I think – for reference, we dropped all SNPs with an info score below 0.3 at similar minor allele frequencies), so we can be pretty confident in their imputation. On the other hand, rs62625034 was not imputed in the paper; it was measured directly. That doesn’t mean everyone had a measurement – I understand the missing rate of the SNP was around 3.4% in UK Biobank (this is from direct communication with the authors, not from the paper).

But. And this is a weird but that I don’t have the expertise to explain, the imputation of the SNPs I used looks… well… weird. When you impute SNP data, you impute values between 0 and 2. They don’t have to be integer values, so dosages of 0.07 or 1.5 are valid. Ideally, the imputation would only give integer values, so you’d be confident this person had 2 mutant alleles, and this person 1, and that person none. In many cases, that’s mostly what happens.

Non-integer dosages don’t seem like a big problem to me. If I’m using polygenic risk scores, I don’t even bother making them integers, I just leave them as decimals. Across a population, it shouldn’t matter, the variance of my final estimate will just be a bit smaller than it should be. But for this work, I had to make the non-integer dosages integers, so anything less than 0.5 I made 0, anything 0.5 to 1.5 was 1, and anything above 1.5 was 2. I’m pretty sure this is fine.
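In code, that rounding rule is a single thresholding step (a sketch with made-up dosage values):

```python
import numpy as np

dosage = np.array([0.07, 0.49, 0.50, 1.20, 1.50, 1.80])

# Less than 0.5 -> 0 copies; 0.5 to 1.5 -> 1 copy; above 1.5 -> 2 copies.
genotype = np.where(dosage < 0.5, 0, np.where(dosage <= 1.5, 1, 2))
print(genotype)  # [0 0 1 1 1 2]
```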

Unless there are more non-integer dosages in one dosage group than another.

rs113010081 has non-integer dosages for almost 14% of white British participants in UK Biobank (excluding relateds). But the non-integer dosages are not distributed evenly across dosages. No. The twos had way more non-integer dosages than the ones, which had way more non-integer dosages than the zeros.

In the tables below, the non-integers are represented as missing (a full stop) in the rs113010081_x_tri variable, whereas the rs113010081_tri variable is the one I used in the analysis. You can see that of the 4,736 participants I thought had twos, 3,490 (73.69%) actually had non-integer dosages somewhere between 1.5 and 2.

[Tables: dosage distributions for rs113010081, showing non-integer dosages by genotype]

What does this mean?

I’ve no idea.

I think it might mean the imputation for this region of the genome might be a bit weird. rs113341849 has the same pattern, so it isn’t just this one SNP.

But I don’t know why it’s happened, or even whether it’s particularly relevant. I admit ignorance – this is something I’ve never looked for, let alone seen, and I don’t know enough to say what’s typical.

I looked at a few hundred other SNPs to see whether this was just a function of the minor allele frequency, with the imputation naturally less certain because there was less information. But while there is an association between minor allele frequency and non-integer dosages across dosages, it doesn’t explain all the variance. There were very few SNPs with patterns as pronounced as in rs113010081 and rs113341849, even for SNPs with far smaller minor allele frequencies.

Does this undermine my analysis, and make the paper’s more believable?

I don’t know.

I tried to look at this with a couple more analyses. In the “x” analyses, I only included participants with integer values of dose, and in the “y” analyses, I only included participants with dosages < 0.05 from an integer. You can see in the results table that only using integers removed any effect of either SNP. This could be evidence that the imputation is having an effect, or it could be chance. Who knows.
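Concretely, the two restrictions can be written as masks on the dosage values (again a sketch with made-up numbers):

```python
import numpy as np

dosage = np.array([0.00, 0.07, 1.00, 1.46, 1.50, 2.00])

integer_only = dosage == np.round(dosage)                # the "x" analyses
near_integer = np.abs(dosage - np.round(dosage)) < 0.05  # the "y" analyses
```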

[Table: hazard ratios from the replication analyses, including the “x” (integer-only) and “y” (near-integer) restrictions]

rs62625034

rs62625034 was directly measured, but not imputed, in the paper. Why?

It’s possibly because the SNP isn’t measuring what the probe was meant to measure. It clearly has a very different minor allele frequency in UK Biobank (0.1159) than in the GO-ESP population (~0.03). The paper states this means it’s likely measuring the delta-32 deletion, since the UK Biobank frequency is similar to the deletion’s and rs62625034 sits in the deletion region. This mismatch may have made it fail quality control.

But this raises a couple of issues. The first is whether the missingness in rs62625034 is a problem – is the data missing completely at random, or not missing at random? If the former, great. If the latter, not great.

The second issue is that rs62625034 should be measuring a SNP, not a deletion. In people without the deletion, the probe could well be picking up people with the SNP, so the rs62625034 measurement in UK Biobank should be a mixture of the deletion and the SNP. The R2 between rs62625034 and the deletion is not 1 (although it is higher than for my SNPs – again, this was mentioned in an email to me from the authors, not in the paper), which could happen if the probe is picking up more than the deletion.

The third issue, one I’ve realised only just now, is that previous research has shown that rs62625034 is not associated with lifespan in UK Biobank (and other datasets). This means that maybe it doesn’t matter that rs62625034 is likely picking up more than just the deletion.

Peter Joshi, an author of that research, helpfully posted these tweets:

[Tweets from Peter Joshi, including a plot of SNP associations with mortality]

If I read this right, Peter used UK Biobank (and other data) to produce the above plot showing lots of SNPs and their associations with mortality (the higher a SNP sits on the plot, the stronger its association with mortality).

Not only does rs62625034 not show any association with mortality, but how did Peter find a minor allele frequency of 0.035 for rs62625034 when the paper found 0.1159? This is crazy. A minor allele frequency of 0.035 is about the same as in the GO-ESP population, so it seems perfectly fine, whereas 0.1159 does not.

I didn’t clock this when I first saw it (sorry Peter), but using the same datasets and getting different minor allele frequencies is weird. Properly weird. Like counting the number of men and women in a dataset and getting wildly different answers. Maybe I’m misunderstanding, it wouldn’t be the first time – maybe the minor allele frequencies are different because of something else. But they both used UK Biobank, so I have no idea how.

I have no answer for this. I also feel like I’ve buried the lead in this post now. But let’s pretend it was all building up to this.

Conclusion

This paper has been enormously successful, at least in terms of publicity. I also like to think that my “post-publication peer review” and Rasmus’s reply represents a nice collaborative exchange that wouldn’t have been possible without Twitter. I suppose I could have sent an email, but that doesn’t feel as useful somehow.

However, there are many flaws with the paper that should have been addressed in peer review. I’d love to ask the reviewers why they didn’t insist on the following:

  • The sample should be well defined, i.e. “white British ancestry” not “British ancestry”
  • Standard exclusions should be made for sex mismatches, sex chromosome aneuploidy, participants with outliers in heterozygosity and missing rates, and withdrawals from the study (this is important to mention in all papers, right?)
  • Relatedness should either be accounted for in the analysis (e.g. BOLT-LMM) or related participants should be removed
  • Population stratification should be addressed both in the analysis (all principal components and centre) and in the limitations
  • All effect estimates should have confidence intervals (I mean, come on)
  • All survival curves should have confidence intervals (ditto)
  • If it’s a survival analysis, surely Cox regression is better than ratios of survival rates? Also, somewhere it would be useful to note how many people died, separately for each dosage group
  • One-tailed P values need a huge prior belief to be used in preference to two-tailed P values
  • Over-reliance on P values in interpretation of results is also to be avoided
  • Choice of SNP, if you’re only using one SNP, is super important. If your SNP has a very different minor allele frequency from a published paper using a very similar dataset, maybe reference it and state why that might be. Also note if there is any missing data, and why that might be ok
  • When there is an online supplement to a published paper, I see no legitimate reason why “data not shown” should ever appear
  • Putting code online is wonderful. Indeed, the paper has a good amount of transparency, with code on GitHub and lab notes also online. I really like this.

So, do I believe “CCR5-∆32 is deleterious in the homozygous state in humans”?

No, I don’t believe there is enough evidence to say that the delta-32 deletion in CCR5 affects mortality in people of white British ancestry, let alone people of other ancestries.

I know that this post has likely come out far too late to dam the flood of news articles that have already come out. But I kind of hope that what I’ve done will be useful to someone.

How might fathers influence the health of their offspring?

Dr Gemma Sharp, Senior Lecturer in Molecular Epidemiology

Follow Gemma on Twitter

Follow the EPoCH study on Twitter

A novel thing about the Exploring Prenatal influences On Childhood Health (EPoCH) study is that we’re not just focusing on maternal influences on offspring health, we’re looking at paternal influences as well.

One of the reasons that most other studies have focused on maternal factors is that it’s perhaps easier to see how mothers might have an effect on their child’s health. After all, the fetus develops inside the mother’s body for nine months and often continues to be supported by her breastmilk throughout infancy. However, in a new paper from me and Debbie Lawlor, published in the journal Diabetologia, we explain that there are lots of ways that fathers might affect their child’s health as well, and appreciating this could have really important implications. The paper focuses on obesity and type 2 diabetes, but the points we make are relevant to other health traits and diseases as well.

The EPoCH study will look at how much paternal factors actually causally affect children’s health. Image by StockSnap from Pixabay

How could fathers influence the health of their children?

These are the main mechanisms we discuss in the paper:

  • Through paternal DNA. A father contributes around half of their child’s DNA, so it’s easy to see how a father’s genetic risk of disease can be transmitted across generations. Furthermore, a father’s environment and behaviour (e.g. smoking) could damage sperm and cause genetic mutations in sperm DNA, which could be passed on to his child.
  • Through “epigenetic” effects in sperm. The term “epigenetics” refers to molecular changes that affect how the body interprets DNA, without any changes occurring to the DNA sequence itself. Some evidence suggests that a father’s environment and lifestyle can cause epigenetic changes in his sperm, that could then be passed on to his child. These epigenetic changes might influence the child’s health and risk of disease.
  • Through a paternal influence on the child after birth. There are lots of ways a father can influence their child’s environment, which can in turn affect the child’s health. This includes things like how often the father looks after the child, his parenting style, his activity levels, what he feeds the child, etc.
  • Through a father’s influence on the child’s mother. During pregnancy, a father can influence a mother’s environment and physiology through things like causing her stress or giving her emotional support. This might have an effect on the fetus developing in her womb. After the birth of the child, a father might influence the type and level of child care a mother is able to provide, which could have a knock-on effect on child health.
There are lots of ways in which fathers might influence the health of their offspring. This figure was originally published in our paper in Diabetologia (rdcu.be/bPCBa).

What does this mean for public health, clinical practice and society?

Appreciating the role of fathers means that fathers could be given advice and support to help improve offspring health, and their own. Currently hardly any advice is offered to fathers-to-be, so this would be an important step forward. Understanding the role of fathers would also help challenge assumptions that mothers are the most important causal factor shaping their children’s health. This could help lessen the blame sometimes placed on mothers for the ill health of the next generation.

What’s the current evidence like?

In the paper, we reviewed all the current literature we could find on paternal effects on offspring risk of obesity and type 2 diabetes. We found that, although there have been about 116 studies, this is far fewer than the number of studies looking at maternal effects. Also, a lot of these studies just show correlations between paternal factors and offspring health (and correlation does not equal causation!).

What is needed now is a concerted effort to find out how much paternal factors actually causally affect offspring health. This is exactly what EPoCH is trying to do, so watch this space!

This content was reposted with permission from the EPoCH blog.

Teens who hit puberty later could face bone health issues later in life, studies suggest


[Image: Shutterstock]

Ahmed Elhakeem, University of Bristol

Puberty is a time of dramatic development for both boys and girls. Not only are those hormones raging, but there’s all the bodily changes to contend with.

Puberty is driven by the activity of sex hormones and its onset is announced by the appearance of pubic hair, beards and breasts. Along with the dramatic hormone-driven changes to a child’s body, another defining feature of puberty is the adolescent growth spurt – children become taller and eventually physically mature into adults.

This growth spurt generally happens at different ages for boys and girls, and there can be big differences in when it begins. For girls, rapid growth generally occurs around age 11 and a half, but it can begin as early as eight or as late as 14; for boys it generally happens a year or two later. Children continue to get taller during their growth spurt until the ends of their long bones fuse and stop increasing in length, which happens around the end of puberty.

Children’s bones develop rapidly during puberty. And our new findings published in JAMA Network Open suggest that teens who have their pubertal growth spurt later could have more problems with their bone health in the future. In essence, our research shows that the timing of puberty might influence, or at least signal, a child’s bone strength throughout adolescence and into early adulthood.

Weak bones

Our study is not the first to report a link between the timing of puberty and bone strength. A 2016 study of British people born in 1946 showed that children who had their growth spurt at an older age had lower bone density near the end of their forearm bone when measured decades later in old age, making them more likely to get a broken wrist.

More recently, a study of adolescents and young adults from Philadelphia showed that people who were genetically predisposed to later puberty had lower bone density at the spine and hip, sites which are known to be susceptible to osteoporosis in later life. This is an ageing-related condition where bones lose their strength and become more likely to break.

Our study tracked the development of bone strength in a group of British children through to adulthood and found that teens who hit puberty at an older age tend to have lower bone mineral density, a strong indicator of weaker bones. We found that this continued to be the case for several years into their adult lives.

Measuring puberty

We analysed data from more than 6000 children from the Children of the 90s study. This is a multi-generational study that has tracked the lives of a large group of people born in the early 90s around Bristol in the south west of England.

Children whose genetic makeup triggers a later-than-average start to puberty are at increased risk of having weaker bones as adults.
Shutterstock

We made use of multiple bone density scans taken from each child to assess their bone strength across a 15-year period, between the ages of 10 and 25.

To calculate the age when the children went through puberty, we tracked each child’s height and used this information to estimate the age when each child went through the adolescent growth spurt. We then assumed that children that had their growth spurt at an older age must have started puberty later. As a check, we repeated our analysis in girls using the age they reported getting their first period as a different indicator of when they hit puberty and we came to the same conclusions.
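For readers curious about what using height to estimate the growth spurt means in practice, here is a deliberately crude sketch of the idea with made-up numbers (the study’s actual method, not shown here, is more sophisticated than this simple differencing):

```python
import numpy as np

def age_at_peak_velocity(ages: np.ndarray, heights: np.ndarray) -> float:
    """Estimate the age at peak height velocity from repeated height measures."""
    order = np.argsort(ages)
    ages, heights = ages[order], heights[order]
    velocity = np.diff(heights) / np.diff(ages)  # growth rate (cm/year) between visits
    midpoints = (ages[1:] + ages[:-1]) / 2       # age at the middle of each interval
    return float(midpoints[np.argmax(velocity)])

# A made-up child measured at five visits:
ages = np.array([10.0, 11.5, 13.0, 14.5, 16.0])
heights = np.array([140.0, 146.0, 156.0, 163.0, 166.0])
print(age_at_peak_velocity(ages, heights))  # 12.25 -> growth spurt around age 12
```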

Rebuilding bone density

Our research adds to the growing evidence that children who mature later may be at increased risk of breaking a bone as they grow up, and that they may also have an increased risk of osteoporosis later in life.

Of course, there are things people can do to strengthen their bones. But given our findings, it is clear there is now a need for bigger and more detailed studies into the very complex relationships between puberty, growth and bone development. Continuing to track the lives of the people in our study will be crucial if we are to discover how puberty might impact people’s bones as they go through adult life and eventually move into old age. This will help to further understand the causes of osteoporosis and ultimately help people to maintain healthy bones throughout their lives.

Ahmed Elhakeem, Epidemiologist, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Institutionalising preventive health: what are the key issues for Public Health England?

Professor Paul Cairney, University of Stirling

Dr John Boswell, University of Southampton

Richard Gleave, Deputy Chief Executive and Chief Operating Officer, Public Health England

Dr Kathryn Oliver, London School of Hygiene and Tropical Medicine

The Green Paper on preventing ill health was published earlier this week, and many have criticised its proposals for not going far enough. Our guest blog explores some of the challenges that Public Health England faces in providing evidence-informed advice. Read on for reflections from a recent workshop on using evidence to influence local and national strategy, and their implications for academic engagement with policymakers.

On the 12th June, at the invitation of Richard Gleave, Professor Paul Cairney and Dr John Boswell led a discussion on ‘institutionalising’ preventive health with senior members of Public Health England (PHE).

It followed a similar event in Scotland, held to inform the development of Public Health Scotland, and the PHE event included contributions from key members of NHS Health Scotland.

Cairney and Boswell drew on their published work – co-authored with Dr Kathryn Oliver and Dr Emily St Denny (University of Stirling) – to examine the role of evidence in policy and the lessons from comparable experiences in other public health agencies (in England, New Zealand and Australia).

This post summarises their presentation and reflections from the workshop (gathered using the Chatham House rule).

The Academic Argument

Governments face two major issues when they try to improve population health and reduce health inequalities:

  1. Should they ‘mainstream’ policies – to help prevent ill health and reduce health inequalities – across government and/or maintain a dedicated government agency?
  2. Should an agency ‘speak truth to power’ and seek a high profile to set the policy agenda?

Our research provides three messages to inform policy and practice:

  1. When governments have tried to mainstream ‘preventive’ policies, they have always struggled to explain what prevention means and to reform services to make them more preventive than reactive.
  2. Public health agencies could set a clearer and more ambitious policy agenda. However, successful agencies keep a low profile and make realistic demands for policy change. In the short term, they measure success according to their own survival and their ability to maintain the positive attention of policymakers.
  3. Advocates of policy change often describe ‘evidence based policy’ as the answer. However, a comparison between (a) specific tobacco policy change and (b) very general prevention policy shows that the latter’s ambiguity hinders the use of evidence for policy. Governments use three different models of evidence-informed policy. These models are internally consistent but they draw on assumptions and practices that are difficult to mix and match. Effective evidence use requires clear aims driven by political choice.

Overall, they warn against treating any response – (a) the idiom ‘prevention is better than cure’, (b) setting up a public health agency, or (c) seeking ‘evidence based policy’ – as a magic bullet.

Major public health changes require policymakers to define their aims, and agencies to endure long enough to influence policy and encourage the consistent use of models of evidence-informed policy. To do so, they may need to act like prevention ninjas, operating quietly and out of the public spotlight, rather than seeking confrontation and speaking truth to power.


Image by Takver from Australia [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

The Workshop Discussion

The workshop discussion highlighted an impressive level of agreement between the key messages of the presentation and the feedback from most members of the PHE audience.

One aspect of this agreement was predictable, since Boswell et al’s article describes PHE as a relative success story and bases its analysis of prevention ‘ninjas’ on interviews with PHE staff.

However, this strategy is subject to frequent criticism. PHE has to manage the way it communicates with multiple audiences, which is a challenge in itself. One key audience is a public health profession in which most people see their role as being to provoke public debate, shine a light on corporate practices (contributing to the ‘commercial determinants of health’), and criticise government inaction. In contrast, PHE often seeks to ensure that quick wins are not lost, must engage with a range of affected interests, and uses quiet diplomacy to help maintain productive relationships with senior policymakers. Four descriptions of this difference in outlook and practice stood out:

  1. Walking the line. Many PHE staff gauge how well they are doing in relation to the criticism they receive. Put crudely, they may be doing well politically if they are criticised equally by proponents of public health intervention and vocal opponents of the ‘nanny state’.
  2. Building and maintaining relationships. PHE staff recognise the benefit of following the rules of the game within government, which include not complaining too loudly in public if things do not go your way, expressing appreciation (or at least a recognition of policy progress) if they do, and being a team player with good interpersonal skills rather than simply an uncompromising advocate for a cause. This approach may be taken for granted by interest groups, but it can be tricky for public health researchers who seek a sense of critical detachment from policymakers.
  3. Managing expectations. PHE staff recognise the need to prioritise their requirements from government. Phrases such as ‘health in all policies’ often suggest the need to identify a huge number of crucial, and connected, policy changes. However, a more politically feasible strategy is to identify a small number of discrete priorities on which to focus intensely.
  4. Linking national and local. PHE staff who work closely with local government, the local NHS, and other partners, described how they can find it challenging to link ‘place-based’ and ‘national policy area’ perspectives. Local politics are different from national politics, though equally important in implementation and practice.

There was also high agreement on how to understand the idea of ‘evidence based’ or ‘evidence informed’ policymaking (EBPM). Most aspects of EBPM are not really about ‘the evidence’. Policy studies often suggest that improving evidence use requires advocates to:

  • find out where the action is, and learn the rules and language of debate within key policymaking venues, and
  • engage routinely with policymakers, to help them understand their audience, build up trust based on an image of scientific credibility and personal reliability, and know when to exploit temporary opportunities to propose policy solutions.

To this we can add the importance of organisational morale and a common sense of purpose, to help PHE staff operate effectively while facing unusually high levels of external scrutiny and criticism. PHE staff are in the unusual position of being (a) part of the meetings with ministers and national leaders, and (b) active at the front line with professionals and key publics.

In other words, political science-informed policy studies, and workshop discussions, highlighted the need for evidence advocates to accept that they are political actors seeking to win policy arguments, not objective scientists simply seeking the truth. Scientific evidence matters, but only if its advocates have the political skills to know how to communicate and when to act.

Although there was high agreement, there was also high recognition of the value of internal reflection and external challenge. In that context, one sobering point is that, although PHE may be relatively successful now (it has endured for some time), we know that government agencies are vulnerable to disinvestment and major reform. This vulnerability underpins the need for PHE staff to recognise political reality when they pursue evidence-informed policy change. Put bluntly, they often have to strike a balance between two competing pressures – being politically effective or insisting on occupying the moral high ground – rather than assuming that the latter always helps the former.

This blog post was originally published on the PolicyBristol blog.

Drinking in pregnancy: lasting effects of low-level alcohol use?

Kayleigh Easey, a PhD student and member of the Tobacco and Alcohol Research Group (TARG) at the School of Psychological Science at the University of Bristol, takes a look at a recent systematic review investigating effects of parental alcohol use and offspring mental health.

Follow Kayleigh and TARG on Twitter

It’s generally well known that drinking large amounts of alcohol during pregnancy is linked to Foetal Alcohol Syndrome (FAS) and to negative outcomes such as premature birth and an increased risk of miscarriage. However, less is known about the effects of low to moderate alcohol use during pregnancy on offspring outcomes after birth, and even less about mental health outcomes in the child, particularly internalising disorders such as depression. Despite government guidelines being updated by the Department of Health in January 2016, advising pregnant women that the safest approach is to abstain from drinking alcohol altogether throughout pregnancy, there remains uncertainty amongst the public as to whether a ‘drink or two’ is harmful or not.

Is a ‘drink or two’ harmful during pregnancy?

Researchers within the field mostly agree that abstinence from alcohol during pregnancy is the safest approach, but the evidence to support this is relatively weak, often due to study design and sample limitations. A previous meta-analysis highlighted how relatively few studies have investigated low levels of alcohol use in pregnancy; its analyses mainly focused on pregnancy outcomes such as gestational diabetes and on childhood outcomes linked to FAS, such as behavioural problems. Until now, a comprehensive review had not been undertaken of the effects of light to moderate drinking in pregnancy on offspring mental health.

Our research sought to review and summarise what literature was currently available for drinking alcohol in pregnancy and offspring mental health outcomes. Overall, we found that over half of the analyses included in the review reported an association between drinking in pregnancy and offspring mental health problems, specifically anxiety, depression, total problems and conduct disorder. By excluding FAS samples we were more certain that the findings we were reporting were representative of lower levels of drinking in pregnant women. However, we can’t be certain that many of the included studies are not still capturing higher levels of alcohol use in pregnancy, and potentially children with undiagnosed foetal alcohol spectrum disorders – a known problem in researching prenatal alcohol use.

Our review also highlights the differences across studies in how drinking in pregnancy and offspring mental health were measured: all but four studies used a different measure of drinking alcohol in pregnancy, making comparison difficult. This makes it hard to establish across studies whether there is a ‘cut-off’ level for what is potentially a hazardous level of alcohol exposure during pregnancy.

Image by Jill Wellington from Pixabay

Abstinence seems to be the safest approach

On their own, the associations we found do not provide evidence of a causal effect, which can be difficult to demonstrate. However, it is important for women to understand what the current evidence shows, to allow them to make informed decisions about drinking during pregnancy. Women should be able to use this information to inform their choices, and to avoid potential risks from alcohol use, both during pregnancy and as a precautionary measure when trying to conceive.

As such, the current advice to abstain from alcohol during pregnancy seems sensible, at least until evidence is available to indicate otherwise. We suggest that future work using large cohort studies is needed to investigate whether light to moderate alcohol use in pregnancy may be harmful to different mental health outcomes in children – which is exactly what I am currently doing within my PhD research using the Children of the 90s study.

This blog post was originally posted on the Alcohol Policy UK website.

How can researchers engage with policy?

Dr Alisha Davies

Dr Laura Howe

Prof Debbie Lawlor

Dr Lindsey Pike

Follow Alisha, Laura, Debbie and Lindsey on Twitter

Policy engagement is becoming more of a priority in academic life, as emphasis shifts from focusing purely on academic outputs to creating impact from research. Research impact is defined by UKRI as ‘the demonstrable contribution that excellent research makes to society and the economy’.

On 25 June 2019 the IEU held its first Engagers’ Lunch event, which focused on policy engagement. Dr Alisha Davies, Head of Research at Public Health Wales, joined Dr Laura Howe, Professor Debbie Lawlor and Dr Lindsey Pike from the IEU to facilitate a discussion drawing on their experiences – from both sides of the table – of connecting research and policy. Below we summarise advice from our speakers about engaging with policy.

The benefits of engaging with policy & how to do it

  • As an academic you need to consider what your ‘offer’ is. What expertise do you bring? This may be topic-specific knowledge, or strong academic skills such as critical approaches to complex challenges, novel methods in evaluation, or health economics. Recognise where you add value; the remit of academia is to develop robust evidence in response to complex and challenging questions using reliable methods – a gap that those in practice and/or policy cannot fill alone.
  • Find the right people to engage with – who are the decision makers in your area of research? Listen to what is currently important to inform action/policy. Read through local and national strategies in your topic of expertise to understand the wider landscape, where your work might inform it, and where you might be able to address some of those key gaps. Academics can also submit evidence to policymakers (colleagues from the University of Bristol can access PolicyBristol’s policy scan, which lists current opportunities to engage).
  • Be visible and actively engage. Find out what local events are going on in your area related to your research and go along to meet local public health professionals. It’s a good way to meet people, find commonalities and form collaborations.
  • Condense your new research into a short briefing: identify what it adds to the existing evidence base and how it informs the wider context.
  • As an academic you will have a network of other research colleagues. Policymakers value being able to draw on this network for information. When providing evidence, don’t just cite your own – objectivity is one of the key advantages of working with academics, and policymakers value your intellectual independence. Your knowledge of the broader evidence base is invaluable.
  • Setting up a research steering group or stakeholder panel can be a great way to develop your relationships and ensure your research is speaking to policy, practice or industry priorities. Key to this is getting the right people involved – this blog post from Fast Track Impact has some useful advice.

The challenges of engaging with policy & how to navigate them

  • Academic and policymaking timescales are different. Policymakers need an answer yesterday while academics may not feel comfortable with providing a definitive response without time for reflection. There’s a need for flexibility on both sides.
  • There are also tensions between the perceived need for certainty and ability to be able to provide it. Policymakers may want ‘an answer’, but the evidence base may not be robust enough to give one. It is more useful to outline what we do and do not know, with a ‘balance of probabilities’ recommendation, than to say ‘more research is needed’.
  • Language can also be a barrier. Academic language is complex and, at times, impenetrable; policymaker documents need to be aimed at an intelligent lay audience, without jargon, and focusing on what matters to them (outlining policy options and the evidence base behind them – not lengthy discussions of statistical methods). Look at Public Health Wales’ publications, for example on digital technology, adverse childhood experiences and resilience, or mass unemployment events, or examples from the NIHR Dissemination Centre or PolicyBristol to get a sense of the language to use.
  • Do you think you have time for networking with non-academic stakeholders? The perception of opportunity costs can be another barrier for academics. While time for networking might not be costed into your grant funding, think of it in the same way as writing a grant application; you can’t guarantee the outcome but the potential reward is significant.
  • There are no guarantees in policy engagement work, and a level of realism is required around what findings from one study can achieve. Policymaking is a complex and messy process; the evidence base is just one factor in decision making. Your recommendations may not be taken up because of politics, resource issues, or other concerns taking priority. Sometimes your relationships will reach honourable dead ends, where you realise that interests, capacity or timescales are not as aligned as you thought. Knowing this before you start is important to avoid feeling disillusioned.
Cartoon showing complexity of policymaking process and comparing it to making sausages
Policymaking is a complex and messy process; the evidence base is just one factor in decision making. Image from ‘Sausages, evidence and policymaking: The role of universities in a post-truth world’, Policy Institute at King’s, 2017

In summary, the panel concluded that policymakers are interested in academic research, as long as their priorities are addressed. While outcomes are not guaranteed, our colleagues at PolicyBristol advise a strategy of ‘engineered serendipity’: looking for and capitalising on opportunities, being ready to talk about your research in a clear and policy-orientated way (why does your research matter, and what are the key recommendations?), and aiming to build long-term, trusting relationships with policymakers.

If you’d be interested in attending a future Engagers’ Lunch, please contact Lindsey Pike.

Further information & resources

PolicyBristol aims to enhance the influence and impact of research from across the University of Bristol on policy and practice at the local, national and international level.

Public Health Wales Research and Evaluation work collaboratively across Public Health Wales and with external academic and partner organisations, and are keen to facilitate research links across Public Health Wales with new national and international partners.

Research impact at the UK Parliament: ‘Everything you need to know to engage with UK Parliament as a researcher’

Parliamentary research services across the legislatures include:

  • House of Commons Library: an independent research and information unit. It provides impartial information for Members of Parliament of all parties and their staff.
  • Parliamentary Office of Science and Technology: Parliament’s in-house source of independent, balanced and accessible analysis of public-policy issues related to science and technology.
  • Research and Information Service (RaISe): aims to meet the information needs of the Northern Ireland Assembly Members, their staff and the secretariat in an impartial, objective, timely and non-partisan manner.
  • Scottish Parliament Information Centre (SPICe): the internal parliamentary research service for Members of the Scottish Parliament.
  • Senedd Research: an expert, impartial and confidential research and information service designed to meet the needs of Wales’ National Assembly Members and their staff.

Conference time at the MRC Integrative Epidemiology Unit!

Dr Jack Bowden, Programme Leader

Follow Jack on Twitter

 

Every two years my department puts on a conference on the topic of Mendelian Randomization (MR), a field that has been pioneered by researchers in Bristol over the last two decades. After months of planning, including finding a venue, inviting speakers from around the world and arranging the scientific programme, there’s just a week and a half to go and we’re almost there!

But what is Mendelian Randomization research all about, I hear you ask? Are you sure you want to know? Please read on, but understand there is no going back…

Are you sure you want to know about Mendelian Randomization?

Have you ever had the feeling that something wasn’t quite right, that you are being controlled in some way by a higher force?

Well, it’s true. We are all in The Matrix. Like it or not, each of us has been recruited into an experiment from the moment we were born. Our genes, which are given to us by our parents at the point of conception, influence every aspect of our lives: how much we eat, sleep, drink, weigh, smoke, study, worry and play. The controlling effect is cleverly small: as individuals we generally don’t notice it, and scientists only discovered the pattern by taking measurements across large populations. But the effect is real, very real!

How can we fight back?

We cannot escape The Matrix, but we can fight back by extracting knowledge from this unfortunate experiment we find ourselves in and using it for society’s advantage. For example, if we know that our genes predict 1-2% of the variation in Low-Density Lipoprotein cholesterol (LDL-c – the ‘bad’ cholesterol) in the population, we can see whether the genes known to predict LDL-c also predict later-life health outcomes, such as an increased risk of heart disease, in a group of individuals. If they do, then this provides strong evidence that reducing LDL-c will reduce heart disease risk, and we can take steps to act. This is, in essence, the science of Mendelian randomization. See here for a nice animation of the method by our Unit director, George Davey Smith – our Neo, if you like.

An example of the mathematical framework that leads to our analysis (honest)

Mendelian randomization is very much a team effort, involving scientists with expertise across many disciplines. My role, as a statistician and data scientist, is to provide the mathematical framework to ensure the analysis is performed in a rigorous and reliable manner.

We start by drawing a diagram that makes explicit the assumptions our analysis rests on. The arrows show which factors influence which. In our case we must assume that a set of genes influence LDL-c, and can only influence heart disease risk through LDL-c. We can then translate this diagram into a system of equations that we apply to our data.
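
To make this concrete, here is a minimal sketch of the kind of system such a diagram implies, in illustrative notation of our own (not necessarily the notation used in the actual analyses): G denotes the genes, X the exposure (LDL-c), Y the outcome (heart disease risk) and U the unmeasured factors that influence both.

```latex
% Structural equations implied by the diagram (illustrative notation):
% genes G influence exposure X (LDL-c); X influences outcome Y (heart
% disease risk); U is unmeasured confounding of X and Y.
\begin{align*}
  X &= \beta_{GX}\,G + q_X\,U + \epsilon_X \\
  Y &= \beta\,X + q_Y\,U + \epsilon_Y
\end{align*}
% Because our genes are fixed at conception, G is independent of U, so
% the causal effect \beta can be recovered from the two observable
% gene-level associations (the Wald ratio estimator):
\[
  \hat{\beta} = \frac{\hat{\beta}_{GY}}{\hat{\beta}_{GX}}
\]
```

Because the genes are allocated at conception, they behave much like the random assignment in a clinical trial, which is where the ‘randomization’ in the name comes from.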

The great thing about Mendelian randomization is that, even when many other factors jointly influence LDL-c and heart disease risk, the Mendelian randomization approach should still work.

Recently, the validity of the Mendelian randomization approach has been called into question due to the problem of pleiotropy. In our example this would be when a gene affects heart disease through a separate unmodelled pathway.

 

An illustration of pleiotropy

This can lead to bias in the analysis and therefore misleading results. My research focuses on novel methods that try to overcome the issue of pleiotropy by detecting and adjusting for its presence in the analysis. For further details please see this video.
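
To illustrate how such methods work in general (this is a sketch of two standard summary-data estimators, not necessarily the specific methods in the video), the snippet below contrasts inverse-variance weighted (IVW) regression, which assumes no pleiotropy, with MR-Egger regression, a widely used approach whose intercept detects and adjusts for directional pleiotropy. All numbers are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

# Simulated summary statistics for 10 genetic variants: associations
# with the exposure (e.g. LDL-c), associations with the outcome
# (e.g. heart disease) and the outcome standard errors.
rng = np.random.default_rng(0)
beta_gx = rng.uniform(0.05, 0.3, size=10)           # gene-exposure effects
beta_gy = 0.5 * beta_gx + rng.normal(0, 0.01, 10)   # gene-outcome effects
se_gy = np.full(10, 0.01)                           # outcome standard errors

weights = 1 / se_gy**2  # inverse-variance weights

# IVW: weighted regression of gene-outcome on gene-exposure effects,
# with no intercept. Valid only if no variant affects the outcome
# except through the exposure (i.e. no pleiotropy).
ivw = sm.WLS(beta_gy, beta_gx, weights=weights).fit()

# MR-Egger: the same regression with an intercept. An intercept far
# from zero signals directional pleiotropy; the slope is the causal
# estimate adjusted for it.
egger = sm.WLS(beta_gy, sm.add_constant(beta_gx), weights=weights).fit()

print("IVW causal estimate:   ", ivw.params[0])
print("Egger intercept:       ", egger.params[0])   # pleiotropy check
print("Egger causal estimate: ", egger.params[1])
```

Here the data were simulated without pleiotropy, so the Egger intercept should sit near zero and both slopes near the true value of 0.5; with an unmodelled pathway like the one illustrated above, the intercept would shift away from zero while the IVW estimate became biased.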

The MR Data challenge

At this year’s conference we are organising an MR Data Challenge, to engage conference participants in exploring and developing innovative approaches to Mendelian randomization using a publicly available data set. At a glance, the data comprise information on 150 genes and their associations with:

  • 118 lipid measurements (including LDL cholesterol)
  • 7 health outcomes (including type II diabetes)

Eight research teams have submitted an entry to the competition, describing how they would analyse the data and the conclusions they would draw. The great thing about these data is that information on all 118 lipid traits can be assessed simultaneously, to improve the robustness of the Mendelian randomization analysis.
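
As a hedged sketch of how many traits might be assessed at once (one common approach is multivariable Mendelian randomization; this uses simulated placeholders, with 3 traits standing in for the 118, not the challenge dataset):

```python
import numpy as np
import statsmodels.api as sm

# Simulated placeholders: 150 variants, each with effects on 3 lipid
# traits and on the outcome, plus outcome standard errors.
rng = np.random.default_rng(1)
beta_gx = rng.normal(0, 0.1, size=(150, 3))   # gene-trait effect matrix
true_effects = np.array([0.4, 0.0, -0.2])     # direct causal effects
beta_gy = beta_gx @ true_effects + rng.normal(0, 0.01, 150)
se_gy = np.full(150, 0.01)

# Multivariable IVW: regress gene-outcome effects on all gene-trait
# effects jointly, weighted by outcome precision. Each coefficient
# estimates one trait's direct effect on the outcome, holding the
# other traits fixed.
fit = sm.WLS(beta_gy, beta_gx, weights=1 / se_gy**2).fit()
print(fit.params)  # should recover approximately [0.4, 0.0, -0.2]
```

Jointly modelling correlated traits in this way helps separate which lipid measures actually drive the outcome, which is one reason having all 118 traits in a single resource is so valuable.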

Genetic data can help us understand how to resolve population health issues. Image credit: www.genome.gov

A key aim of the session is to bring together data scientists with experts from the medical world to comment on and debate the results. We will publish all of the computer code online so that anyone can re-run the analyses. In the future, we hope to add further data to this resource and for many new teams to join the party with their own analysis attempt.

Please come and join us at the MR conference in Bristol, 17-19 July – it promises to be epic!

Can lifestyle changes prevent prostate cancer or delay the need for treatment?

This Men’s Health Week, National Institute for Health Research Biomedical Research Centre and Integrative Cancer Epidemiology Programme PhD student Meda Sandu outlines findings from research on how we can best prevent and treat prostate cancer.

Follow Meda on twitter

Prostate cancer (PCA) is the second most common cancer in the adult population in the UK, with over 47,000 new cases diagnosed each year. Over 400,000 people assigned male at birth live with or after a diagnosis of PCA. Localised PCA is cancer that is confined to the prostate gland and has not spread to other organs. Localised PCA often grows slowly, may not cause any symptoms and may not require treatment. However, occasionally this type of cancer is aggressive, can spread fast and will require treatment. Patients who undergo radical treatment, such as surgery and radiotherapy, often report side effects that significantly impact their wellbeing and enjoyment of life, such as urinary and bowel incontinence, low libido, erectile dysfunction, fatigue and mood swings. Our research looks at lifestyle interventions that may help prevent PCA, as well as identifying cases where treatment may be delayed, in an attempt to improve outcomes for people with PCA.

Image by pixel2013 from Pixabay

How does treatment choice affect survival?

In the ProtecT study, people were randomised to various treatments or to active surveillance, where no treatment is given but the patient is regularly followed up. The study found similar chances of surviving localised PCA across the groups, with those in the active surveillance group having only a very small decrease in survival compared to the other groups. This spurred research into identifying modifiable factors which could delay PCA progression and therefore avoid unnecessary treatment.

One of the aims of our research is to look at dietary and lifestyle changes that are acceptable and achievable for PCA patients, and which could reduce PCA risk and progression. These could prevent PCA or, for those who have already been diagnosed with localised disease, delay treatment and therefore avoid the associated side effects.

Dietary and physical activity interventions in people with PCA

In the PrEvENT feasibility randomised controlled trial, patients who underwent prostate removal surgery were randomly assigned to both a dietary and a physical activity intervention. The dietary intervention consisted of one of the following:

  • a lycopene supplement – lycopene is an antioxidant found in tomatoes that has previously been suggested to be protective against PCA
  • eating 5 portions of fruit and vegetables per day and replacing dairy milk with a vegan option (soy, almond, oat, coconut, etc.)
  • continuing as normal.

The physical activity intervention asked participants to do 30 minutes of brisk walking five times a week.

Image by PublicDomainArchive from Pixabay

For each intervention, we looked at the change in metabolites – very small molecules found in blood that reflect metabolic patterns, a useful measure in diseases that involve metabolic changes, such as cancer. We found that eating more fruit and vegetables and decreasing dairy intake changed blood metabolite levels. Of particular interest was the change in pyruvate levels, a metabolite used as fuel in the pathway of cancer cell proliferation. This suggests that our interventions could lead to less energy being available for the proliferation of cancerous cells, which could mean lower PCA risk and slower progression.

Can we predict which PCA cases do not need treatment?

A second aim of my research is to identify blood markers that can help distinguish disease that is unlikely to cause problems from aggressive disease that will advance rapidly and require treatment. We looked at the ProtecT trial and provisionally identified metabolites that could help predict PCA progression. This would allow clinicians to decide more accurately whether a patient should undergo treatment or whether active surveillance would be appropriate.

Patients diagnosed with localised PCA have a 96% chance of surviving 15 years after diagnosis. However, PCA risk factors have yet to be conclusively identified. In addition, diagnostic techniques are invasive and there is uncertainty around which localised cases are likely to advance. More research is therefore needed to establish potential risk factors which could help prevent PCA, and blood-based markers that could predict the aggressiveness of localised PCA cases. We are also looking at the relationship between PCA risk and progression and blood DNA methylation markers, which help cells control the expression of genes. These markers have previously been suggested to respond to both environmental factors and causes of cancer, and could help us better understand the aetiology of PCA.

So what does this mean for people with prostate cancer?

Although more research is required, as our studies were small and did not aim to provide definitive answers, we did find some evidence that certain lifestyle changes – namely increasing fruit and vegetable consumption, replacing dairy milk with non-dairy options, and walking briskly for 30 minutes five times a week – are acceptable to PCA patients, and that these interventions may have promising effects on blood metabolites. Identifying lifestyle factors which may have a protective role could help prevent PCA. Our research also identified metabolites which may help predict the aggressiveness of PCA, which could help patients diagnosed with localised PCA avoid serious side effects from unnecessary treatment.

Further reading and resources

Find out more about prostate cancer

Find out more about the treatments available for prostate cancer

Journal paper: Hamdy et al (2016) 10-year outcomes after monitoring, surgery, or radiotherapy for localised prostate cancer (the ProtecT trial)

Journal paper: Hackshaw-McGeagh et al (2016) Prostate cancer – evidence of exercise and nutrition trial (PrEvENT): study protocol for a randomised controlled feasibility trial.

 

 

Why haven’t e-cigarettes stubbed out cigarettes?

On World No Tobacco Day, PhD researcher Jasmine Khouja outlines the evidence around e-cigarettes.

Follow Jasmine on Twitter

 

There are an estimated 3.2 million e-cigarette users in Great Britain, and the majority of users have switched from smoking to vaping in search of a less harmful alternative to help them quit. In a recent study, people who used e-cigarettes to quit smoking were more likely to be smoke-free after one year compared to people who used more traditional methods such as nicotine patches. So, why are some smokers reluctant to try e-cigarettes, and why have some people been unable to quit smoking using them? The media, researchers, public health officials, and the general public have all played a role in discouraging some smokers from vaping.

E-cigarettes in the media

As a researcher in the field of e-cigarette use, I have often looked at news articles about vaping and felt exasperated. We frequently see e-cigarettes portrayed as a harmful option: according to many news articles, e-cigarettes are dangerous, lead to heart attacks and are as bad for your lungs as cigarettes. The same news outlets often report the opposite finding, saying e-cigarettes are actually better for you. This flip-flopping leaves smokers confused and could discourage them from trying e-cigarettes for fear that vaping is actually more harmful than smoking.

Science in the media

So, why do the media keep switching their stance on e-cigarettes? They’re getting their information from the research community, and this community is divided. Some researchers claim that the unknown health risks of vaping and the popularity of e-cigarettes among children and adolescents outweigh the potential benefits of helping smokers to quit; others claim the opposite.

As researchers, we should be impartial and only provide the public with information which we can back up with evidence from our research, but, as we are still human, our opinions tend to seep through into how we report our findings and even what we choose to research. This lack of agreement in the research community is fuelling the media’s flip-flopping, leading to public confusion and reluctance among smokers to try e-cigarettes to help them quit.

Public attitudes to vaping

With all of this contrasting information, it’s no wonder the general public’s opinion of vaping seems to be split too. Negative public opinion can have an impact on whether a smoker wants to try an e-cigarette. Quitting smoking isn’t easy; the last thing smokers want is to feel judged when they are trying to quit.

Negative public attitudes to vaping could put smokers off trying vaping, but they also affect where people can vape. Many businesses include e-cigarettes in their smoke-free policies, so vapers have to stand outside with smokers. When trying to quit, it’s not ideal to be surrounded by the very thing you’re trying to wean yourself off. It’s like being on a diet and spending every meal at an all-you-can-eat buffet when all you can eat is a salad; it’s tempting to slip into old habits. So, despite there being no indoor vaping ban (as there is for cigarettes), vapers are forced outside into a situation where they are more likely to start smoking again.

 

Unintended consequences of policy

It’s not just organisational policies attempting to control e-cigarette use; in 2016, new legislation, the Tobacco Products Directive (TPD), added a section on e-cigarettes in an attempt to regulate the devices. There were a number of unpopular changes to e-cigarette products as a result: changes to the amount of nicotine allowed in products and restrictions on the innovation of new products may have had unintended consequences.

With the introduction of the TPD, a limit was set on how much nicotine a vaping product could contain. Nicotine is the key ingredient in cigarettes that keeps people smoking, and although it is highly addictive, nicotine is not the cigarette ingredient that is likely to kill smokers. E-cigarettes help people to quit smoking because they can deliver nicotine, satisfying smokers’ cravings while exposing them to fewer toxins than smoking would. Limiting the amount of nicotine in these products means that heavier smokers don’t receive enough nicotine from an e-cigarette to satisfy their addiction, making them more likely to start smoking again.

The TPD also requires companies to register products in advance of bringing them to market. Where the e-cigarette industry was once creating new, more effective devices at a very fast pace, users now can’t buy these products for a substantial amount of time after they have been developed. This restriction on innovation means that, while consumers are waiting for these better products to become available, they could be trying products that don’t meet their needs. I often hear tales of “I tried one once and it was just like puffing air, so I kept smoking instead”: people have tried one product, it wasn’t good enough, and they assume all other products will be just as bad. By restricting innovation, we limit the number of better-quality devices on the market and increase the likelihood that a smoker looking to quit will come across a poor device and turn back to smoking.

Making it easy to stop smoking

Many smokers want to quit and we, as researchers, media representatives, public health officials and even members of the public, need to make it as easy as possible for them to do so. We need to be clearer in the information we provide, be more accepting of vaping and not limit products which could help the most addicted smokers. I still have hope that smoking will be stubbed out in my generation, and that e-cigarettes could be the disruptive technology needed to help us achieve this.