EDIT, July 2012: Slightly updated version for Open Democracy.
Last week the BBC Trust published their review of impartiality and accuracy in science coverage. This included research undertaken by my group at Imperial. My name’s on the front but I should stress that I only did a small part of the work. It was led by my colleague Felicity Mellor.
This post isn’t an extension of the report, or official comment in any way: it’s just some of my thoughts on a small part of it, a few days after publication. Please also note that I’m talking about our report (pdf), which is Appendix A of the full review. It’s also worth flagging up that it was a content analysis, and these can only tell you so much. A focus on content can lose the context of the making of the news, as well as its reception (something I think came out well in the recent debate between Ben Goldacre and James Randerson). However, studies like these can also flag up some fascinating points for further thought and analysis, and challenge the sorts of assumptions we all make through our own everyday personal (and partial) consumption of the media. It is all too easy to get annoyed by a bad example and generalise from that; a systematic content analysis can provide a more nuanced picture.
A lot of the news coverage of the review has focused on so-called ‘false balance’: the perceived problem of giving too much weight to fringe views in an (arguably misguided) attempt to avoid pushing one particular point of view. Just as a political piece might ‘balance’ itself with perspectives from left and right wings, a science story might, for example, include someone who is against vaccinations as well as someone who would advocate them. If you are interested in the balance issue with respect to climate change, chapter three of Matt Nisbet’s Climate Shift report makes for interesting reading. I’d also add that Hargreaves and Lewis’ 2003 Towards a Better Map report on science news gives some historical context to the UK’s particular issues with respect to balance, in particular the way BSE led to framings of MMR (which, one might argue, in turn influences reporting of climate).
Our study was based on a slightly more diverse set of questions than balance alone, though, as we tried to gain a detailed empirical grip on these slippery issues of impartiality and accuracy. As a result, I think it threw up a rather more complex set of challenges too.
One of the key questions our work asked was who is given a voice in science news? What’s their expertise, where do they come from and – arguably most importantly – what are we told about this sort of context?
For a start, less than 10% of broadcast items in our sample included comment from those presented as having no specialist professional knowledge of relevance to the issue. Moreover, lay voices tended not to be run alongside expert voices. So, that oft-made complaint that news reports rely on personal anecdote? Well, going by our sample, for the BBC this would not appear to be the case. It’s also worth noting that almost two thirds of the broadcast news sample – which included a lot of very short summary items – either relied on a single viewpoint or paraphrased alternative views. In items reporting on new research, this proportion rose to about 70%. So there was little room for alternative views here, to ‘balance’ or otherwise. I also thought it was significant that only about an eighth of broadcast news items about research, and almost two fifths of online news items about research, included comment from independent scientists (i.e. with no connection to the research being reported).
Whether you think any of these various expert voices (when they were included) are the right ones is another matter though. You can’t just say scientists are the appropriate voices and that’s it. Simply being ‘a scientist’ doesn’t necessarily make you an expert on the topic at hand, and there are other areas of relevant expertise, especially when it comes to science policy issues. We had to think a lot about the rather large ‘other professional expertise’ category we considered alongside the scientific, clinical, lay, non-science academic and ‘unknown’ categories. Other forms of professional expertise came from politicians, spokespeople for charities and representatives of industry. It’s important these experts are included in debate about science. For example, a scientist might be able to talk about their research, but know little about the policy surrounding it. Equally though, many scientists do have expertise in such areas as part of their research. It depends, which is rather the point about ‘a scientist’ being too blunt a description. We did notice a relative lack of direct comment from the UK government (less than 2% of items) as part of science stories, something I think is potentially worrying in terms of public debate over science policy.
Aside from the appropriateness of expertise being a rather slippery issue, there was very little information given about the expertise of a speaker. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally I think we need to address this issue before we can even get on to matters of whether the experts are the right ones or not. Although expertise may be implied through editing, and TV in particular can flag up institutional association and title, we rarely saw a contributor’s disciplinary background specified. Especially significant, I thought: in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often refer to someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, whether they are speaking on a topic they have studied in depth, or whether they are simply working from anecdote.
(I do appreciate how hard it is to give this sort of context, but it’s still a problem).
One of the things I was personally excited to find out in the study was the institutional location of the voices of science. We found that they were twice as likely to be affiliated to universities as to any other type of organisation. There are perhaps also questions to be raised about the geographical distribution of these. Cue headlines suggesting Welsh science is ‘frozen out’ by the BBC, but the dominance of researchers based in the South East of England might be due to a range of other factors (e.g. the concentration of ‘Russell Group’ universities there).
When it came to coverage of recently published research, it was also interesting to ask which publications the studies came from. Nature, the Lancet and the British Medical Journal accounted for nearly all journal citations in the broadcast news. As with the location of research institutions, this is perhaps understandable considering their status, but it also lacks diversity. On the reliance on particular journals, Charlie Petit was quick to note how little the American journal Science is mentioned compared to its British equivalent, Nature. We thought this was interesting too, and wondered if this was a UK bias issue, but seeing as we found more coverage from Science when it came to the online and non-news samples, it could simply be that Science’s embargo times don’t fit so easily with the BBC’s broadcast news schedule.
If you combine the arguable lack of diversity of news sources with our concerns over the reliance on press releases, it is tempting to think back to Andy Williams’ point, based on his Mapping the Field study, about the ‘low hanging fruit’ of the almost ready-made content that the highly professional press teams at these elite publications and research institutions push out. It’s hard to tell the reasons for this sort of coverage from our content analysis though. These sorts of questions, I believe, require studies which consider the processes of news-making as well as the content (and while I’m talking about Cardiff research, I should flag up the great iterative and mixed-methods approach taken by Chimba and Kitzinger in their Bimbo or Boffin work; an inspiring bit of research strategy there).
Anyway, I hope that’s pulled a bit more out of the study, but it’s only scratching the surface. There is, obviously, a lot more in the report itself – do read it for yourself if you’re interested in science journalism. If I get time later in the week, I’ll do a follow-up post on what we had to say about the BBC’s blogs (edited to add 29/7: done!). There’s also a great interview with Dr Mellor on the Imperial College site which runs through some of the key findings. Edited to add 27/7: and a sharp editorial in Research Fortnight about it too. Edited to add 18/8: and here I am on the college podcast.
(or, you can try the BBC report of the review, which is rather brief, but does include a picture of Brian Cox blowing bubbles)