Tag Archives: publication

Has blogging changed science writing?

Badges made for our housewarming last year. Bonus points if you get the ref.

There is an oft-made joke that the answers to questions posed by news headlines are always, when you take the time to consider them, a simple ‘no’. With that in mind, here’s a question headlining my essay in the latest edition of the Journal of Science Communication: Has blogging changed science writing?

You can download the full paper on the JCom website. Spoiler warning: I think the answer is no. Or at least not much. Drawing on basic tenets of the social studies of technology, I argue there have always been more options than action when it comes to innovation in science writing, most of which we haven’t taken up. It hasn’t changed nearly as much as it could have, and we don’t know yet how much it will change. The future, as ever, is up for debate. We should think carefully about the science media we want, not what we’re given or simply left with.

As I finish the article, though, I don’t claim to know. The thing I personally enjoy most about science blogging is that it seems to have made it slightly more socially acceptable to finish with questions. Of course, this has yet to weave its way through to journal design, so if you do have an answer, you might want to use the comments space here, as there isn’t one on JCom.

Context context context

Context context context. It’s what the mainstream media’s reporting on science always lacks, isn’t it? It’s the oft-repeated line ‘I think you’ll find it’s a bit more complicated than that’ which media critics such as myself can grump about from the cosiness of their ivory tower. Context context context: Easy to say, but hard to provide?

Context context context: Easy to say. For example, our content analysis for the BBC Trust’s review of impartiality and accuracy in science coverage (blogged about earlier this week) highlighted quite hand-waving descriptions of scientists’ roles and work, with a reliance on phrases such as ‘scientists have found’ and ‘experts say’. We also noted little exploration of experimental design, and that the funder of research was very rarely referred to. We worried that many reports relied on a single viewpoint or paraphrased alternative views, and about the lack of explicit reference as to whether or not a contributor was involved in the research being reported (i.e. independence was hard to judge).

Context context context: Hard to provide? A journalist can easily, and quite fairly, reply to calls for more context with the argument that readers do not care. Of course the big wide world is more complex than depicted in the mass media, but a large part of a journalist’s job is to simplify this world, and that inevitably means losing some context. Personally, I think there are still ways journalists might rethink the traditional patterns for telling stories, and I expect professional journalists of the calibre working at the BBC to be imaginative and thoughtful about what parts of stories they choose to provide (and I know the good ones do, and that they are constrained in a lot of their work too).

One of the things we coded for in the study was whether a piece pointed the audience to other information: the chance for people to find out more if they wanted to. This didn’t have to be online links, but often would be. We noticed that the broadcast news items rarely directed viewers explicitly to the BBC website for further information about science items. In the online news, there were automatically generated links to other BBC reports on similar topics, but only 21 items (16%) included links to other BBC reports within the body of the text. Almost 90% of online news items included at least one link to the source of the story, such as the laboratory where the research was carried out or the journal where it was published, but 70 items (54%) included no links to other external sources. So, in over half of online news items, the reader is not offered opportunities to find further information relevant to a science story beyond that provided by the source.
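To make the kind of coding tally described above concrete, here is a minimal sketch in Python of how percentages of coded link types might be computed. The category names and the item data are entirely hypothetical, for illustration only; they are not the study’s actual coding scheme or figures:

```python
# Hypothetical sketch: each coded online news item records which kinds of
# links it contained. These categories and values are illustrative, not the
# BBC Trust study's real coding frame or data.
coded_items = [
    {"bbc_in_text": False, "source": True, "other_external": False},
    {"bbc_in_text": True, "source": True, "other_external": True},
    {"bbc_in_text": False, "source": True, "other_external": False},
    {"bbc_in_text": False, "source": False, "other_external": True},
]

def percentage_with(items, key):
    """Share of items (as a %) whose coding includes the given link type."""
    count = sum(1 for item in items if item[key])
    return 100 * count / len(items)

for key in ("bbc_in_text", "source", "other_external"):
    print(f"{key}: {percentage_with(coded_items, key):.0f}%")
```

The reported figures (16% of items with in-text BBC links, ~90% with a source link, 54% with no other external links) would fall out of exactly this sort of tally over the full coded sample.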

Blogs in particular offer the opportunity of linking to other sources and, by enabling journalists to “show their working”, may help make visible the process of reporting too. Some of the BBC reporters’ blogs we looked at made use of this, particularly those of Jonathan Amos and Richard Black, but only one of Tom Feilden’s blogposts in our sample period contained any in-text links to sites other than the Today programme. Blogs also allow journalists to post longer quotes from sources than the edited versions included in broadcast reports, include links to other sources of information that the journalist has used to build their story, or track unfolding stories (as with the Guardian’s Science Story Tracker). However, we found few examples of this type of usage in the BBC blogs we looked at.

Like much of the content we looked at, blogs were more likely to mention the benefits of scientific research than the risks (eleven of the 27 unique postings cited benefits compared to just two mentioning risks). It seemed to us that, as with a lot of the online science content (and science content overall), the blogs located science as a ‘good-news’ story in which science provides benefits to society and is rarely the source of any risks. As with any of this, you may well be able to dig up an example or three to argue that the BBC blogs are ‘anti-science’ in some way (and these singular examples may well be very important, perhaps even because they are singular), but looking at our sample as a whole, this was not the picture we saw.

We saw a range of ways of using the blogging form amongst the science and technology reporters who blog for the BBC. Some reporters took the chance to contextualise news stories they had reported on (Richard Black), or to offer a more personal take on a story (Fergus Walsh). Others would trail upcoming items (Susan Watts), summarise or repeat a news item from another site, or describe related research (Tom Feilden, Jonathan Amos). Potentially, adopting a personal voice raises issues with respect to the BBC’s impartiality (there are editorial guidelines on this), although we found no evidence in the blogs we looked at that it had actually compromised impartiality in action. If anything, blogs can also offer a space to address questions of impartiality and accuracy when they arise. We found a lovely example of this from Rory Cellan-Jones, where he reflected on a report for the BBC One News at Ten, saying he should have taken a more sceptical tone, and also took the chance to quote at length the scientist’s defence of the research.

You can find more details of this study in the full report (opens as pdf), especially pages 33-38.

As I’ve written before, the placing of a link is a rhetorical and, as such, creative process. Thinking about what you’ll link to, how and when (and when not to) is a challenge I personally adore when I write, and one of the many reasons I find writing online more professionally fulfilling than print. It really doesn’t seem to be used enough though, or thought about as much as it could be either (n.b. this is a general grumble, broader and looser than the BBC Trust study).

So, I guess for now I’ll keep banging on about ‘context, context, context’, knowing it’s hard for journalists to provide it but hoping they continue to try to be as imaginative and proactive as possible in facilitating connections between the information that is out there and those members of their audience who are interested to find out more.

The BBC Trust Report on Science

EDIT, July 2012: Slightly updated version for Open Democracy.

Last week the BBC Trust published their review of impartiality and accuracy in science coverage. This included research undertaken by my group at Imperial. My name’s on the front but I should stress that I only did a small part of the work. It was led by my colleague Felicity Mellor.

This post isn’t in any way an extension of the report, or official comment in any way: it’s just some of my thoughts on a small part of it, a few days after publication. Please also note that I’m talking about our report (pdf) which is Appendix A of the full review. It’s also worth flagging up that it was a content analysis, and these can only tell you so much. A focus on content can lose the context of the making of the news, as well as its reception (something I think came out well in the recent debate between Ben Goldacre and James Randerson). However, studies like these can also flag up some fascinating points for further thought and analysis, and challenge the sorts of assumptions we all make through our own everyday personal (and partial) consumption of the media. It is all too easy to get annoyed by a bad example and generalise from that; a systematic content analysis can provide a more nuanced picture.

A lot of the news coverage of the review has focused on so-called ‘false balance’: the perceived problem of giving too much weight to fringe views in an (arguably misguided) attempt to avoid pushing one particular point of view. Just as a political piece might ‘balance’ itself with perspectives from left and right wings, a science story might, for example, include someone who is against vaccinations as well as someone who would advocate them. If you are interested in the balance issue with respect to climate change, chapter three of Matt Nisbet’s Climate Shift report makes for interesting reading. I’d also add that Hargreaves and Lewis’ 2003 Towards a Better Map report on science news gives some historical context to the UK’s particular issues with respect to balance, in particular the way BSE led to framings of MMR (which one might argue, in turn, influences reporting of climate).

Our study was based on a slightly more diverse set of questions than balance alone, though, as we tried to gain a detailed empirical grip on these slippery issues of impartiality and accuracy. As a result, I think it threw up a rather more complex set of challenges too.

One of the key questions our work asked was who is given a voice in science news? What’s their expertise, where do they come from and – arguably most importantly – what are we told about this sort of context?

For a start, less than 10% of broadcast items in our sample included comment from those presented as having no specialist professional knowledge of relevance to the issue. Moreover, lay voices tended not to be run alongside expert voices. So, that oft-made complaint that news reports rely on personal anecdote? Well, going by our sample, for the BBC this would not appear to be the case. It’s also worth noting that almost two thirds of the broadcast news sample – which included a lot of very short summary items – either relied on a single viewpoint or paraphrased alternative views. In items reporting on new research, this proportion rose to about 70%. So there was little room for alternative views here, to ‘balance’ or otherwise. I also thought it was significant that only about an eighth of broadcast news items about research, and almost two fifths of online news items about research, included comment from independent scientists (i.e. with no connection to the research being reported).

Whether you think any of these various expert voices (when they were included) are the right ones is another matter, though. You can’t just say scientists are the appropriate voices and that’s it. Simply being ‘a scientist’ doesn’t necessarily make you an expert on the topic at hand, and there are other areas of relevant expertise, especially when it comes to science policy issues. We had to think a lot about the rather large ‘other professional expertise’ category we considered alongside scientific, clinical, lay, non-science academic and ‘unknown’. Other forms of professional expertise came from politicians, spokespeople for charities and representatives of industry. It’s important these experts are included in debate about science. For example, a scientist might be able to talk about their research, but know little about the policy surrounding it. Equally, though, many scientists do have expertise in such areas as part of their research. It depends, which is rather the point about ‘a scientist’ being too blunt a description. We did notice a relative lack of direct comment from the UK government (less than 2% of items) as part of science stories, something I think is potentially worrying in terms of public debate over science policy.

Aside from questions over appropriateness of expertise being a rather slippery issue, there is very little information given about the expertise of a speaker. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally I think we need to address this issue before we can even get on to matters of whether experts are the right ones or not. Although expertise may be implied through editing, and TV in particular can flag up institutional association and title, we rarely saw a contributor’s disciplinary background specified. Especially significant, I thought: in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often refer to someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, and whether they are speaking on a topic they have studied in depth or are simply working from anecdote.

(I do appreciate how hard it is to give this sort of context, but it’s still a problem).

One of the things I was personally excited to find out in the study was the institutional location of the voices of science. We found that they were twice as likely to be affiliated to universities as to any other type of organisation. There are perhaps also questions to be raised about the geographical distribution of these. Cue headlines suggesting Welsh science is ‘frozen out’ by the BBC, but the dominance of researchers based in the South East of England might be due to a range of other factors (e.g. the concentration of ‘Russell Group’ universities there).

When it came to coverage of recently published research, it was also interesting to ask which publications the stories came from. Nature, the Lancet and the British Medical Journal accounted for nearly all journal citations in the broadcast news. As with the location of research institutions, this is perhaps understandable considering their status, but also lacks diversity. On the reliance on particular journals, Charlie Petit was quick to note how little the American journal Science is mentioned compared to its British equivalent, Nature. We thought this was interesting too, and wondered if it was a UK bias issue, but seeing as we found more coverage from Science in the online and non-news samples, it could simply be that Science’s embargo times don’t fit so easily with the BBC’s broadcast news schedule.

If you combine the arguable lack of diversity of news sources with our concerns over the reliance on press releases it is tempting to think back to Andy Williams’ point, based on his Mapping the Field study, about the ‘low hanging fruit’ of the almost ready-made content the highly professional press teams at these elite publications and research institutions push out. It’s hard to tell the reasons for this sort of coverage from our content analysis though. These sorts of questions, I believe, require studies which consider the processes of news-making as well as the content (and while I’m talking about Cardiff research, I should flag up the great iterative and mixed methods approach taken by Chimba and Kitzinger in their Bimbo or Boffin work; inspiring bit of research strategy there).

Anyway, I hope that’s pulled a bit more out of the study, but it’s only scratching the surface. There is, obviously, a lot more in the report itself – do read it for yourself if you’re interested in science journalism. If I get time later in the week, I’ll do a follow up post on what we had to say about the BBC’s blogs (edited to add 29/7 done!). There’s also a great interview with Dr Mellor on the Imperial College site which runs through some of the key findings. Edited to add 27/7: and a sharp editorial in Research Fortnight about it too. Edited to add 18/8: and here I am on the college podcast.

(or, you can try the BBC report of the review, which is rather brief, but does include a picture of Brian Cox blowing bubbles)

Funding science communication

science museum sign

This is a picture of a large plaque at the front of London’s Science Museum. It’s thanking their various sponsors. Most museums have them. It’s as normal as a gift shop and a cafe.

I photographed it because I wanted to think of such signs not just as a vote of thanks, or as the design piece this museum seems to want to re-articulate theirs as, but as a sort of declaration of conflict of interest. In many respects, I think that’s what it is. I also think this is why we should be pleased the museum has tried to make theirs into an arresting aesthetic object.

Museum sponsorship has a long and often controversial history. I wrote about it last year with respect to Shell and the Science Museum’s climate science gallery (see also a follow-up post on similar controversies at the Smithsonian). Today on the Guardian’s culture cuts blog, Tony Hill, Director of Manchester’s Museum of Science and Industry, has a post reflecting on the impact on his institution. He notes that retail, catering and conferencing will become ever more important, as will sponsorship.

They also hope to increase the average donation per visitor from the current 3.5p per head to 50p. I’ve noticed that the London Science Museum, as well as giving its wall of thanks a polish, has filled its entrance hall with a load of ‘keep science free’ signs asking for donations. I think it’s interesting that the Science Museum is playing on the rhetoric of keeping science free. Not the Science Museum, or scientific heritage, or scientific education, or buttons that are cool to press.

I agree that the museum’s work is part of science, even if it’s funded from the Department of Culture rather than the science budget. I made a similar point in a piece I wrote for January’s Chemistry World:

You could almost hear the collective sigh of relief from UK science after the government’s autumn 2010 spending review. Indeed, it was a largely grateful audience that met science minister David Willetts when, in the week after the spending review, he joined a panel for a ‘Science question time’ event at the Royal Institution (RI) in London. Sceptical, as scientists are wont to be, but relieved that cuts were not nearly as deep as expected, nor as deep as they will fall elsewhere. Near the end of the evening however, a hand went up from the back of the Faraday Theatre. Writer and astronomer Colin Stuart asked: what about other cuts to other areas, museums for example, how will those affect UK science? Stuart has a crucial point here: we should be careful of applying too narrow a definition of science funding.

Questions about where money might (or might not) come from concern people in lots of different areas involved in the sharing of science with broader society, not just museums: in book publishing and journalism as much as in publicly funded work. Sponsorship is an option for some, but it’s also getting harder to find (it’s not as if print journalism is riding high on advertising revenue right now). Increasingly, academics are asked to do communications work as part of their day-to-day work as researchers. I think there are good reasons for asking researchers to do this, but I also think we need to give academics the time and support to do such work. Time and support that cost money.

I also think that we shouldn’t force all academics to do public communication, and there is a role for professionals here too, but that’s a whole other (and frankly, slightly tedious) discussion, probably best left for a bit of ranting in the pub.

Anti-quackery underpants

Something ticked off the lifetime to-do list: I have managed to get the words “anti-quackery underpants” into a scholarly publication. An encyclopedia. This encyclopedia. It’s page 586 of volume two, if you’re interested, part of the entry on Popular Science Media.

anti-quack underpants

It’s these underpants I’m referring to; the ones sold via badscience.net. I noticed recently that SciCurious has just gone into merchandise too, including underwear. This is just a funny and recent example; my broader point is that the popularisation of science exists across a range of platforms and is something (at least some) people like to buy.

The term “popular science” is a bit weird. We might take it quite strictly as a category of contemporary bookselling (i.e. the sign above Dawkins and Hawking at Waterstone’s), but historically the term means a lot more than that. It has a sometimes uncomfortable relationship with both scholarly and popular media, and in a way, is quite explicitly neither. As such, it can be quite slippery to pin down, but as I attempt to define it in the encyclopedia entry, popular science is:

science to be consumed in our free time, largely for personal rather than professional reasons. It is science for fun: to experience the wonders of nature, learn more about an issue which is important to you, on a friend’s recommendation, or simply because a piece of promotional material caught your eye.

The underpants example helps demonstrate the way in which popular science may exist on a range of media platforms, but also how inter-connected popular science media is. It spins off from one format to another (and has done for centuries): blog to book, magazine to blog, museum to magazine, book to toy, live show to toy, toy to museum, museum to book, documentary to live show, book to documentary, documentary to book, live show to book, book to blog, blog to underpants.

I also wanted to use the underpants to emphasise that popular science is something audiences enthusiastically buy into. People queue round the block for science, they sell out the Royal Albert Hall, they sign petitions because of it. Ok, so we might argue that it’s still a very limited group that does such queuing/buying/signing, but science has its fans. Again, this has been going on for centuries. I think it is important to remember this. Scholars in the field often conceive of popular science as if it exists largely to let science show off; that it only invites non-scientists to play so as to reinforce a boundary between those clever professional scientists and everyone else. Read thus, one would wonder why the audiences of popular science would bother. And they clearly do bother. And they come back, again and again. And they buy branded pants (and calendars – the weirdos). We might argue that popular science does still patronise its audience through its very existence, but audiences seem to feel they are getting something out of it too.

(For the academically minded, I’m referring to the slight difference between Hilgartner’s take on the subject and Fyfe and Lightman’s. Personally I apply each of those approaches with some scepticism, and in addition like to fold in Bourdieu’s approach to cultural consumption.)

My encyclopedia entry is nothing especially profound. It’s a basic primer. If you are interested in the topic, the entry’s list of recommended readings includes:

I also wrote the Communicating Science to Children entry. Obviously, everyone should read that too because <irony> it’s seminal stuff </irony> but I’m aware this encyclopedia is a couple of hundred quid (it’s very much a publication for libraries). I have a paper from 2008 on a similar topic you can download for free (pdf).

The piece on children doesn’t mention underpants, though you can read my blogpost on poo and kids’ books or purchase pro-MMR bibs along with the anti-quack pants from the Bad Science store.

The discipline of science communication

The latest edition of the Journal of Science Communication is up, and I’m in it.

I was asked to discuss the question ‘does science communication deserve special attention as an academic discipline?’ Read my contribution, and you’ll see I don’t really answer the question. Or rather, I answer with a simple negative and then, um… spin that out for a few pages with a long list of references. Susanna Hornig Priest’s contribution is much better. Read that instead.

Something Hornig Priest refers to is the difference between inter-disciplinary and multi-disciplinary work (or, the difference between connecting disciplines and mixing them). I think this sort of awareness of discipline is something shared by everyone involved in science communication. As I argue in my contribution, for me science communication is less a community of researchers, and more a space where we deal with the fact such communities of research exist. It is a consequence of the spaces left between fragmented expertise, a weird by-product of modernity. So, I refused to answer JCom’s question properly not just out of petulance, but because I feel more like someone who works in the spaces between academic disciplines than one desiring of building their own.

This middle ground is not always an easy place to reside, but an awareness of this unease is part of how I think science communication scholars can be useful; as we examine, reflect, debate and help others manage the clashes between communities of knowledge. As such, I think it necessarily involves a fair bit of both multi and inter-disciplinary work, as well as a strong awareness (I’d personally add, involvement) with practitioners and audiences. This does also mean you spend most of your life ignorant of most of what you are looking at. You feel constantly stupid (but in exchange you get to see loads of different amazing things).

It’s not for everyone and nor should it be, but it’s where I work.