EDIT, July 2012: Slightly updated version for Open Democracy.
Last week the BBC Trust published their review of impartiality and accuracy in science coverage. This included research undertaken by my group at Imperial. My name’s on the front but I should stress that I only did a small part of the work. It was led by my colleague Felicity Mellor.
This post isn’t in any way an extension of the report, or official comment in any way: it’s just some of my thoughts on a small part of it, a few days after publication. Please also note that I’m talking about our report (pdf) which is Appendix A of the full review. It’s also worth flagging up that it was a content analysis, and these can only tell you so much. A focus on content can lose the context of the making of the news, as well as its reception (something I think came out well in the recent debate between Ben Goldacre and James Randerson). However, studies like these can also flag up some fascinating points for further thought/analysis, and challenge the sorts of assumptions we all make through our own everyday personal (and partial) consumption of the media. It’s all too easy to get annoyed by a bad example and generalise from that; a systematic content analysis can provide a more nuanced picture.
A lot of the news coverage of the review has focused on so-called ‘false balance’: the perceived problem of giving too much weight to fringe views in an (arguably misguided) attempt to avoid pushing one particular point of view. Just as a political piece might ‘balance’ itself with perspectives from left and right wings, a science story might, for example, include someone who is against vaccinations as well as someone who would advocate them. If you are interested in the balance issue with respect to climate change, chapter three of Matt Nisbet’s Climate Shift report makes for interesting reading. I’d also add that Hargreaves and Lewis’ 2003 Towards a Better Map report on science news gives some historical context to the UK’s particular issues with respect to balance, in particular the way BSE led to framings of MMR (which, one might argue, in turn influences reporting of climate).
Our study was based on a slightly more diverse set of questions than simply balance though, as we tried to gain a detailed empirical grip on these slippery issues of impartiality and accuracy. As a result, I think it threw up a rather more complex set of challenges too.
One of the key questions our work asked was who is given a voice in science news? What’s their expertise, where do they come from and – arguably most importantly – what are we told about this sort of context?
For a start, less than 10% of broadcast items in our sample included comment from those presented as having no specialist professional knowledge of relevance to the issue. Moreover, lay voices tended not to be run alongside expert voices. So, that oft-made complaint that news reports rely on personal anecdote? Well, going by our sample, for the BBC this would not appear to be the case. It’s also worth noting that almost two thirds of the broadcast news sample – which included a lot of very short summary items – either relied on a single viewpoint or paraphrased alternative views. In items reporting on new research, this proportion rose to about 70%. So there was little room for alternative views here, to ‘balance’ or otherwise. I also thought it was significant that only about an eighth of broadcast news items about research, and almost two fifths of online news items about research, included comment from independent scientists (i.e. with no connection to the research being reported).
Whether you think any of these various expert voices (when they were included) are the right ones is another matter though. You can’t just say scientists are the appropriate voices and that’s it. Simply being ‘a scientist’ doesn’t necessarily make you an expert on the topic at hand, and there are other areas of relevant expertise, especially when it comes to science policy issues. We had to think a lot about the rather large ‘other professional expertise’ category we considered alongside scientific, clinical, lay, non-science academics and ‘unknown’. Other forms of professional expertise came from politicians, spokespeople for charities and representatives of industry. It’s important these experts are included in debate about science. For example, a scientist might be able to talk about their research, but know little about the policy surrounding it. Equally though, many scientists do have expertise in such areas as part of their research. It depends, which is rather the point about ‘a scientist’ being too blunt a description. We did notice a relative lack of direct comment from the UK government (less than 2% of items) as part of science stories, something I think is potentially worrying in terms of public debate over science policy.
Aside from questions over the appropriateness of expertise being a rather slippery issue, there was very little information given about the expertise of a speaker. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally I think we need to address this issue before we can even get on to matters of whether the experts are the right ones or not. Although expertise may be implied through editing, and TV in particular can flag up institutional association and title, we rarely saw a contributor’s disciplinary background specified. Especially significant, I thought: in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often refer to someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, whether they are speaking on a topic they have studied in depth or if they are simply working from anecdote.
(I do appreciate how hard it is to give this sort of context, but it’s still a problem).
One of the things I was personally excited to find out in the study was the institutional location of the voices of science. We found that they were twice as likely to be affiliated to universities as to any other type of organisation. There are perhaps also questions to be raised about the geographical distribution of these. Cue headlines suggesting Welsh science is ‘frozen out’ by the BBC, but the dominance of researchers based in the South East of England might be due to a range of other factors (e.g. the concentration of ‘Russell Group’ universities there).
When it came to coverage of recently published research, it was also interesting to ask which publications the studies came from. Nature, the Lancet and the British Medical Journal accounted for nearly all journal citations in the broadcast news. As with the location of research institutions, this is perhaps understandable considering their status, but it also lacks diversity. On the reliance on particular journals, Charlie Petit was quick to note how little the American journal Science is mentioned compared to the British equivalent, Nature. We thought this was interesting too, and wondered if this was a UK bias issue, but seeing as we found more coverage from Science when it came to online and non-news samples, it could simply be that Science’s embargo times don’t fit so easily with the BBC’s broadcast news schedule.
If you combine the arguable lack of diversity of news sources with our concerns over the reliance on press releases, it is tempting to think back to Andy Williams’ point, based on his Mapping the Field study, about the ‘low hanging fruit’ of the almost ready-made content the highly professional press teams at these elite publications and research institutions push out. It’s hard to tell the reasons for this sort of coverage from our content analysis though. These sorts of questions, I believe, require studies which consider the processes of news-making as well as the content (and while I’m talking about Cardiff research, I should flag up the great iterative and mixed-methods approach taken by Chimba and Kitzinger in their Bimbo or Boffin work; an inspiring bit of research strategy there).
Anyway, I hope that’s pulled a bit more out of the study, but it’s only scratching the surface. There is, obviously, a lot more in the report itself – do read it for yourself if you’re interested in science journalism. If I get time later in the week, I’ll do a follow up post on what we had to say about the BBC’s blogs (edited to add 29/7 done!). There’s also a great interview with Dr Mellor on the Imperial College site which runs through some of the key findings. Edited to add 27/7: and a sharp editorial in Research Fortnight about it too. Edited to add 18/8: and here I am on the college podcast.
(or, you can try the BBC report of the review, which is rather brief, but does include a picture of Brian Cox blowing bubbles)
Thank you for a fascinating and thoughtful piece on the research behind the report. It’d be great to see/hear from a more diverse range of commentators on science. However (I work at BMJ Group myself) I know that the big journals work hard to serve up what the broadcast journalists in particular are looking for. Some other institutions are less helpful – not sending out the journal article with the press release, for example, or giving contact details for someone who’s just gone on holiday for a fortnight! There is some onus on the institutions to make research and researchers available to journalists on tight deadlines.
Oh yes, very much understood and the sort of thing I mean by the problem of just looking at the content – the processes of newsmaking explain things like this well (though don’t always excuse it…)
Just wondered what you thought about the role of press officers and researchers themselves in the issue of false balance/too much weight to fringe views in the media. As the experts and writers of the press releases on which the media is so dependent, surely they have as much of a duty to provide balance in science as the media do. If you write a press release for a single article that is on the fringe of the accepted scientific view, are you not being slightly irresponsible by emailing it off to the national media? Given the risk that it may turn out to be disproved, or right only after a few corrections, after a few months/years in the scientific world, either of which is then harder to convey to people.
I’m quite new to the whole science communication thing but I don’t really like the whole scientific article press release culture. I realise that it’s necessary to fit in with the way the ‘media’ works but I’m not sure that it does science, scientists or the “publics’” view of science much good.
Good question. I’m maybe not the best person to answer it though – it’d be interesting to see if anyone else chips in.
I suppose I’d say that a press officer is in a very different professional position from a journalist at a public service broadcaster. It’s probably also worth noting that there are a lot of very moral and incredibly clever, creative and wonderful press officers in UK science. I’m all for being critical of churnalism, but don’t write off ‘the whole scientific article press release culture’ out of hand.
I didn’t mean to imply that I’m against press officers; I know lots of them, they’re very good at their jobs and I think they do important work. But I do wonder about their role in publicising fringe science. At the time of publication it’s probably really hard to know what is fringe; it depends on future research as to whether it will be accepted or not, science is like that. I guess it’s something that can only be looked at in hindsight, like with the media, and they probably already do it.
I think there is some really interesting churnalism; I shouldn’t have been so general and out of hand about it, you’re right. The media is an important source of science for lots of people, and I just think that churned-out press releases in so many cases don’t do justice to the subject, or even have a negative impact on it. But that’s something that isn’t restricted to science.
Or maybe it’s just hearing that ‘x cups of tea increases/decreases risk for disease y’ first thing in the morning that really gets to me.
Ah, that point about not realising it is fringe at the time is a very interesting one…
Nice post. Most impressive is that you’re not taking full credit for the research.
Your point about “‘scientist’ being too blunt a description” echoes the historians’ complaint about David Starkey on Newsnight, and the potentially misleading implications of labelling him a historian without saying what kind of historian he is (see http://www.timeshighereducation.co.uk/story.asp?storycode=417236). How should the media refer to researchers/experts who are not, in a given instance, commenting on their particular specialist subject? ‘Public intellectuals’, I half-seriously suggest here: http://www.vitae.ac.uk/researchers/156431-447271/Would-you-own-up-to-being-a-public-intellectual.html