Monthly Archives: July 2011

Context context context

Context context context. It’s what the mainstream media’s reporting on science always lacks, isn’t it? It’s the oft-repeated line ‘I think you’ll find it’s a bit more complicated than that’ which media critics such as myself can grump about from the cosiness of their ivory tower. Context context context: Easy to say, but hard to provide?

Context context context: Easy to say. For example, our content analysis for the BBC Trust’s review of impartiality and accuracy in science coverage (blogged about earlier this week) highlighted rather hand-waving descriptions of scientists’ roles and work, with a reliance on phrases such as ‘scientists have found’ and ‘experts say’. We also noted little exploration of experimental design, and that the funder of research was very rarely referred to. We worried that many reports relied on a single viewpoint or paraphrased alternative views, and that there was often no explicit reference to whether or not a contributor was involved in the research being reported (i.e. independence was hard to judge).

Context context context: Hard to provide? A journalist can easily, and quite fairly, reply to calls for more context with the argument that readers do not care. Of course the big wide world is more complex than depicted in the mass media, but a large part of a journalist’s job is to simplify this world, and that inevitably means losing some context. Personally, I think there are still ways journalists might rethink the traditional patterns for telling stories, and I expect professional journalists of the calibre of those working at the BBC to be imaginative and thoughtful about what parts of stories they choose to provide (and I know the good ones are, and that they are constrained in a lot of their work too).

One of the things we coded for in the study was whether a piece pointed the audience to other information: the chance for people to find out more if they wanted to. This didn’t have to be online links, but would often be. We noticed it was rare that the broadcast news items ever explicitly directed viewers to the BBC website for further information about science items. In the online news, there were automatically generated links to other BBC reports on similar topics, but only 21 items (16%) included links to other BBC reports within the body of the text. Almost 90% of online news items included at least one link to the source of the story, such as the laboratory where the research was carried out or the journal where it was published, but 70 items (54%) included no links to other external sources. So, in over half of online news items the reader was not offered opportunities to find further information relevant to a science story other than that provided by the source.

Blogs in particular offer the opportunity of linking to other sources and, by enabling journalists to “show their working”, may help make visible the process of reporting too. Some of the BBC reporters’ blogs we looked at made use of this, particularly those of Jonathan Amos and Richard Black, but only one of Tom Feilden’s blogposts in our sample period contained any in-text links to sites other than the Today programme. Blogs also allow journalists to post longer quotes from sources than the edited versions included in broadcast reports, include links to other sources of information that the journalist has used to build their story, or track unfolding stories (as with the Guardian’s Science Story Tracker). However, we found few examples of this type of usage in the BBC blogs we looked at.

Like much of the content we looked at, blogs were more likely to mention the benefits of scientific research than the risks (eleven of the 27 unique postings cited benefits, compared to just two mentioning risks). It seemed to us that, as with a lot of the online science content (and science content overall), the blogs located science as a ‘good-news’ story where science provides benefits to society and is rarely the source of any risks. As with any of this, you may well be able to dig up an example or three to argue that the BBC blogs are ‘anti-science’ in some way (and these singular examples may well be very important, perhaps even because they are singular) but looking at our sample as a whole, this was not the picture we saw.

We saw a range of ways of using the blogging form amongst the science and technology reporters who blog for the BBC. Some reporters took the chance to contextualise news stories they had reported on (Richard Black), or to offer a more personal take on a story (Fergus Walsh). Others would trail upcoming items (Susan Watts), summarise or repeat a news item from elsewhere on the site, or describe related research (Tom Feilden, Jonathan Amos). Potentially, adopting a personal voice raises issues with respect to the BBC’s impartiality (there are editorial guidelines on this), although we found no evidence in the blogs we looked at that it had actually compromised impartiality in action. If anything, blogs can offer a space to address questions of impartiality and accuracy when they arise. We found a lovely example of this from Rory Cellan-Jones, where he reflected on a report for the BBC One News at Ten, saying he should have taken a more sceptical tone, and also took the chance to quote at length the scientist’s defence of the research.

You can find more details of this study in the full report (opens as pdf) especially pages 33-38.

As I’ve written before, the placing of a link is a rhetorical and, as such, creative process. Thinking about what you’ll link to, how and when (and when not to) is a challenge I personally adore when I write, and one of the many reasons I find writing online more professionally fulfilling than print. It really doesn’t seem to be used enough though, or thought about as much as it could be either (n.b. this is a general grumble, broader and looser than the BBC Trust study).

So, I guess for now I’ll keep banging on about ‘context, context, context’, knowing it’s hard for journalists to provide it but hoping they continue to try to be as imaginative and proactive as possible in facilitating connections between the information that is out there and those members of their audience who are interested to find out more.

The BBC Trust Report on Science

EDIT, July 2012: Slightly updated version for Open Democracy.

Last week the BBC Trust published their review of impartiality and accuracy in science coverage. This included research undertaken by my group at Imperial. My name’s on the front but I should stress that I only did a small part of the work. It was led by my colleague Felicity Mellor.

This post isn’t an extension of the report, or official comment in any way: it’s just some of my thoughts on a small part of it, a few days after publication. Please also note that I’m talking about our report (pdf), which is Appendix A of the full review. It’s also worth flagging up that it was a content analysis, and these can only tell you so much. A focus on content can lose the context of the making of the news, as well as its reception (something I think came out well in the recent debate between Ben Goldacre and James Randerson). However, studies like these can also flag up some fascinating points for further thought/analysis, and challenge the sorts of assumptions we all make through our own everyday personal (and partial) consumption of the media. It is all too easy to get annoyed by a bad example and generalise from it; a systematic content analysis can provide a more nuanced picture.

A lot of the news coverage of the review has focused on so-called ‘false balance’: the perceived problem of giving too much weight to fringe views in an (arguably misguided) attempt to avoid pushing one particular point of view. Just as a political piece might ‘balance’ itself with perspectives from left and right wings, a science story might, for example, include someone who is against vaccinations as well as someone who would advocate them. If you are interested in the balance issue with respect to climate change, chapter three of Matt Nisbet’s Climate Shift report makes for interesting reading. I’d also add that Hargreaves and Lewis’ 2003 Towards a Better Map report on science news gives some historical context to the UK’s particular issues with respect to balance, in particular the way BSE led to framings of MMR (which, one might argue, in turn influences reporting of climate).

Our study was based on a slightly more diverse set of questions than simply balance though, as we tried to gain a detailed empirical grip on these slippery issues of impartiality and accuracy. As a result, I think it threw up a rather more complex set of challenges too.

One of the key questions our work asked was who is given a voice in science news? What’s their expertise, where do they come from and – arguably most importantly – what are we told about this sort of context?

For a start, less than 10% of broadcast items in our sample included comment from those presented as having no specialist professional knowledge of relevance to the issue. Moreover, lay voices tended not to be run alongside expert voices. So, that oft-made complaint that news reports rely on personal anecdote? Well, going by our sample, for the BBC this would not appear to be the case. It’s also worth noting that almost two thirds of the broadcast news sample – which included a lot of very short summary items – either relied on a single viewpoint or paraphrased alternative views. In items reporting on new research, this proportion rose to about 70%. So there was little room for alternative views here, to ‘balance’ or otherwise. I also thought it was significant that only about an eighth of broadcast news items about research, and almost two fifths of online news items about research, included comment from independent scientists (i.e. those with no connection to the research being reported).

Whether you think any of these various expert voices (when they were included) are the right ones is another matter though. You can’t just say scientists are the appropriate voices and that’s it. Simply being ‘a scientist’ doesn’t necessarily make you an expert on the topic at hand, and there are other areas of relevant expertise, especially when it comes to science policy issues. We had to think a lot about the rather large ‘other professional expertise’ category we considered alongside scientific, clinical, lay, non-science academic and ‘unknown’. Other forms of professional expertise came from politicians, spokespeople for charities and representatives of industry. It’s important these experts are included in debate about science. For example, a scientist might be able to talk about their research, but know little about the policy surrounding it. Equally though, many scientists do have expertise in such areas as part of their research. It depends, which is rather the point about ‘a scientist’ being too blunt a description. We did notice a relative lack of direct comment from the UK government (less than 2% of items) as part of science stories, something I think is potentially worrying in terms of public debate over science policy.

Aside from the appropriateness of expertise being a rather slippery issue, there was very little information given about the expertise of a speaker. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally I think we need to address this issue before we can even get on to matters of whether the experts are the right ones or not. Although expertise may be implied through editing, and TV in particular can flag up institutional association and title, we rarely saw a contributor’s disciplinary background specified. Especially significant, I thought: in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often refer to someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, whether they are speaking on a topic they have studied in depth or whether they are simply working from anecdote.

(I do appreciate how hard it is to give this sort of context, but it’s still a problem).

One of the things I was personally excited to find out in the study was the institutional location of the voices of science. We found that they were twice as likely to be affiliated to universities as to any other type of organisation. There are perhaps also questions to be raised about the geographical distribution of these. Cue headlines suggesting Welsh science is ‘frozen out’ by the BBC, but the dominance of researchers based in the South East of England might be due to a range of other factors (e.g. the concentration of ‘Russell Group’ universities there).

When it came to coverage of recently published research, it was also interesting to ask which publications the stories came from. Nature, the Lancet and the British Medical Journal accounted for nearly all journal citations in the broadcast news. As with the location of research institutions, this is perhaps understandable considering their status, but it also lacks diversity. On the reliance on particular journals, Charlie Petit was quick to note how little the American journal Science is mentioned compared to its British equivalent, Nature. We thought this was interesting too, and wondered if it was a UK bias issue, but seeing as we found more coverage from Science when it came to the online and non-news samples, it could simply be that Science‘s embargo times don’t fit so easily with the BBC’s broadcast news schedule.

If you combine the arguable lack of diversity of news sources with our concerns over the reliance on press releases it is tempting to think back to Andy Williams’ point, based on his Mapping the Field study, about the ‘low hanging fruit’ of the almost ready-made content the highly professional press teams at these elite publications and research institutions push out. It’s hard to tell the reasons for this sort of coverage from our content analysis though. These sorts of questions, I believe, require studies which consider the processes of news-making as well as the content (and while I’m talking about Cardiff research, I should flag up the great iterative and mixed methods approach taken by Chimba and Kitzinger in their Bimbo or Boffin work; inspiring bit of research strategy there).

Anyway, I hope that’s pulled a bit more out of the study, but it’s only scratching the surface. There is, obviously, a lot more in the report itself – do read it for yourself if you’re interested in science journalism. If I get time later in the week, I’ll do a follow up post on what we had to say about the BBC’s blogs (edited to add 29/7 done!). There’s also a great interview with Dr Mellor on the Imperial College site which runs through some of the key findings. Edited to add 27/7: and a sharp editorial in Research Fortnight about it too. Edited to add 18/8: and here I am on the college podcast.

(or, you can try the BBC report of the review, which is rather brief, but does include a picture of Brian Cox blowing bubbles)

Fair’s fair

What questions would the public choose to invest scientific time and resources in, if given the chance to shape research policy? This is an old and largely unanswered question. Indeed, it is one that many members of the scientific community go out of their way to avoid testing.

Ben Goldacre touched on it a couple of weeks ago in his Bad Science column, where he repeated an idea that’s been around for a while – that each year, a very small proportion of the research budget should be spent on whatever the public vote for. Goldacre mentioned this idea because he wanted to argue that at least some of the money would go on useful research. Still, he was quick to quip that ‘Most of it would go on MMR and homeopathy, of course’.

But we don’t really know what the public would fund. That’s the beauty of the experiment: we’d give ourselves a chance to find out.

We’d also give publicly funded science a chance to enrich its scope of inspiration, and make itself more clearly accountable to the communities which fund it. Researchers often say they should be left to research what is “interesting” without public, or at least political, interference (see almost any reference to the Haldane Principle…). Ok. But we need to appreciate that any idea of “interesting” is socially constructed. I don’t say that to undermine the point necessarily. We’ve put hundreds of years of effort into constructing a world of science which trains people to have a keen sense of “interesting”. But I see it as an ongoing process, open to development and, potentially, open to input from a broader social network.

I was thinking about this issue while at the Google Science Fair last week, in particular the broad range of sources of inspiration the finalists had drawn upon, and have a post about it on the Guardian Science blog. There, I suggest children sit in a sort of mid-way space between science and ‘the public’, and that this is something we might try to replicate in at least some parts of grown-up science:

It is perhaps best to think of schoolchildren as holding a liminal position with respect to science and the rest of society. They are not quite inside the scientific community or squarely outside it either. They are both science and “the public”, and they are neither of these things [...] what can we do to further this sort of liminality in grownup science? How can we extend the social spheres of our professional scientists, especially those who define the research agenda, so they might draw inspiration more effectively from the diversity of publics that fund them?

When thinking about the question of how the public might shape research policy, I think this sense of liminality is key. To me, this is better than a straight public vote, which just seems a bit blunt. I much prefer a model of co-production which aims towards mutual learning between science and the public so they can build something better than either alone would be able to dream up.

After all, a question that on first glance looks like a call to homeopathy or MMR might well contain the nugget of a more scientifically credible challenge for public health, if only given a bit of discussion to help bring that point out.

Book Review: Free Radicals

With his new book, Free Radicals, Michael Brooks has done something which surprised me: he’s produced a popular science version of Against Method.

Against Method, if you don’t know it, is a philosophy of science book by Paul Feyerabend, published in 1975. It argued against the idea that science progressed through the application of a strict universal method, and caused quite the fuss at the time (it continues to, in places). Brooks is keen to distance himself from the more extreme ends of Feyerabend’s version of this view, but agrees with a central sense that, when it comes to doing “good” science, “anything goes”.

Subtitled “the secret anarchy of science”, Brooks’ book argues that throughout the 20th century, scientists have colluded in a coverup of their own inherent humanity, building a brand of science as logical, responsible, gentlemanly, objective and rational when in reality it’s a much more disorganised, emotional, creative and radical endeavour. This, Brooks argues, is not only inaccurate but dangerous; education and public policy would be much more successful if science were only more open about its inherent humanity.

This picture of the anarchy of science is drawn with affection and a clear strength of belief in science. I’m sure some would be tempted to dub it Against Method Lite, but Against Method, With Love might be more accurate. The message seems to be that scientists are people who do amazing things, even though (and sometimes because) they take drugs, lie, cheat, are reckless, work on stuff other than what they’re supposed to, are horrible to their wives, fudge their results, are motivated by money or are simply a bit of a dick. In places, Brooks also emphasises the religious beliefs of many great scientists, and the way in which religion could sit easily alongside, even inspire, their research.

Personally, I’m unconvinced anarchy is the right word here. Messy and human is perhaps better. Or, as a colleague put it at a conference earlier this month: “just people doing people things, in people ways” (I appreciate this doesn’t make for such a sellable book though). Still, the result is a warm, engaging and neatly plotted trundle through aspects of the history of science which the more cheerleading heroic histories tend to avoid. In some respects, the book’s approach of short historical tale after short historical tale is reminiscent of Bill Bryson’s A Short History of Nearly Everything. There is a key difference though. Bryson’s book, when it came out in 2003, bugged me. Bryson is famous for his travel books, in particular a chatty style which talks about the people he encounters with a fair bit of pisstaking (affectionate, and often respectful pisstaking, but pisstaking nonetheless). But, witty as A Short History was, Bryson seemed to have left his ability to take the piss at the door of the Royal Society. It wasn’t the warts-and-all view of the world-weary observant traveller; more the cleaned-up, polished pictures you save for the tourist brochure. The scientific community welcomed Bryson’s book with open arms. I was left thoroughly bored by its reverence. Brooks, on the other hand, perhaps because he has a scientific background himself, doesn’t seem to be nearly so star-struck (and isn’t, I’d say, nearly so boring).

Again, let me stress Brooks’ approach is not “anti-science” in any way. But that’s not to say such an Against Method, With Love approach is without problems. I suspect many of my colleagues in the social studies of science would worry about this somewhat celebratory twist on the idea of anarchic science. They’d want more critique, more probing (because, I should also stress, they see such critique as a way to better science; they generally do this with love too). I also suspect Brooks’ focus on the big names of science – Nobellists and the like – would jar with those who eschew great men stories in favour of uncovering the less obvious, more detailed and often anonymous networked texture of science. Brooks might have produced an anti-hero popular history of science, but it’s still one with a focus on great men. Indeed, there is a way in which these stories of slightly crazy scientists simply construct a whole new mythical image of the scientist, one that adds new and different forms of barriers between science and society. I’m not convinced science is necessarily a “bad boy” any more than I believe in the mythical branding Brooks aims to puncture.

(An anti-hero history of science isn’t a new one, nor are critiques of it. Rosalind Haynes touches on it in her history of the fictional representation of scientists; work that was neatly reapplied to non-fiction contexts by Elizabeth Leane. There’s a section of my PhD on the rhetoric of an anarchic image of science presented in some kids’ books too)

I’m really not the intended audience for this book though. I’d love to know what a more general reader from outside the scientific community makes of it. I’d also like to know what professional scientists think of the book’s image of their work, and how other scholars in the history, philosophy and sociology of science feel about this refashioning of their ideas. I did enjoy reading it though, and I think the concluding points about the political worth of accepting the human side of science are, at the very least, worth more public debate.

Towards a multigenerational debate about science

Last week, I was supposed to be one of the speakers at the World Conference of Science Journalists, part of a session on reaching younger audiences. For various reasons (some including ambulances…) I didn’t actually get to give my talk. This post is a linked-up version of what I would have said. The images are screengrabs from an old website, Planet Jemma, which is discussed near the end.

One of the rare bits of research on young people and online science media was conducted back in 2004 by some communication researchers in Florida, published as Attracting Teen Surfers to Science Web Sites in the Public Understanding of Science journal. I know it’s old work, but it’s their attitude I’m interested in here, not the primary data. They concluded that attracting teens to science websites can be difficult because when teenagers do go online, they do so for social interaction and entertainment, not to be educated. They seemed to be a little disturbed by this, or at least to see it as a problem to be managed.

I don’t think they should be disturbed though. I think they should be excited.

Let me give some background. In recent years, much of the discussion about the public communication of science and technology has focused on what we might broadly see as a shift from a top-down model to a more distributive approach; models which stress the need for scientists to listen to the public, and the role of public-to-public communication in the construction of ideas about science. Many science communication professionals now see their job as facilitating conversations, not providing ready-made polished stories (see this post for more on that).

It is rare, however, that we see this approach followed through when it comes to work with young people. The idea of ‘discovery learning’ was briefly popular in the late 20th century (put kids in a classroom with a load of science kit and let them discover it for themselves). However, as many educational researchers pointed out, this is rather naive: it only works if we actually believe scientific research comes from such uncomplicated, quick interaction with physical entities. In reality, science teachers accommodated students’ results that did not fit the expected outcome. They were demonstrations, not experiments; activities wrapped up in a rhetoric of discovery. Additionally, when young people are asked to debate science policy issues or ethics in class – as we increasingly see in the English science curriculum – this is seen as a rehearsal for democratic engagement in later life; the kids aren’t going to be listened to as kids.

This shift from providing polished stories to facilitating conversations isn’t unique to science communication. Developments in media technology and the cultures surrounding these have led to changes in the way journalists consider the people formerly known as the audience; changes I do not need to repeat here. There is also a specific debate within children’s media about the history and politics of adult-to-child narration. It should be remembered that so-called ‘children’s media’ is usually given to young people, not produced by them. Even writers aiming at a ‘child-centred’ approach will draw on memories of their own childhood which may well be out of date and framed by adult worries. David Buckingham, riffing off Jacqueline Rose, talks about a form of generational drag; adults acting as if they were children, based on an adult conception of what a child is.

I don’t think there is anything wrong with sharing science across generations. Indeed, we might think of science as a generational activity, and the lengthy time frames of science are something I think we need to acknowledge. But we should also be aware of when exactly younger people are asked to speak rather than being spoken for, how much freedom they have, and how often they are listened to.

I will now briefly introduce a few examples of UK science communication websites aimed at young people, before offering two conclusions.

First up: SciCast. Here, children are invited to make short films about science and share them. There is a competition for the best ones every year, and they have a big Oscars-style awards do (finalists were announced last week). There are some gems on the site: do go and have a look. Let’s not pretend it is unmediated kid-to-kid communication though. Kids are drawing on the ideas of adult scientists, some of whom are long dead too. They are also using adult-made media technology, and I’m sure some videos were led by parents or teachers. It’s also a competition, judged by adults, so kids work to their idea of adult expectations. But I don’t think it pretends to be adult-free either. Indeed, the project invites adult professionals to leave feedback, and gives feedback itself, because they see this as a productive part of the process.

Secondly: I’m A Scientist, Get Me Out of Here. Scientists are put in zones with four others, and each zone is matched to a set of schools. The scientists introduce themselves with a profile, and then the school students ask them questions. It runs for a bit over a week, and adopts the loose structure of a reality TV show; the scientists get voted off daily, so they compete to give good answers. Here the kids do not produce content, but rather lead it with their questions (and the content is sometimes slightly scrappy forum-post answers from scientists, not carefully constructed literary prose). The questions are diverse – about the scientists as people as well as factual – as are the scientists, who are everyday working researchers rather than the super-star presenters you might see on TV, and the project is proud of the way it communicates a sense of how science really works. Another key point to stress about I’m a Scientist is that the questions are not always resolved: a lot of scientists simply reply with ‘I don’t know’ (see this post and comment thread for some discussion, as well as this video made by one of the contestants).

SciCast and I’m a Scientist are unusual though. Most science media for young people is made for them, not by them. Moreover, although some may offer forms of interaction, it is worth questioning whether this is interactivity or, more simply, ‘activity’. So here’s my third example: Energy Ninjas, a science computer game developed for use in a gallery at the Science Museum, which you can also play online. It has a loose narrative, though you have some control over the order. You move around a city, pick a site to enter and watch the Energy Ninjas chastise people for their carbon consumption. Where you choose to click will have some impact on your route through the game, but it won’t impact on the structure of the game itself, or even change the outcome of any loose story it contains. What you as a player choose to click on certainly doesn’t get fed into science, or science policy. It’s reasonably standard as these mini science games go. Again, this isn’t necessarily a bad thing, but we should be aware of the limits of user involvement here.

Finally: Planet Jemma. It’s from 2003 and not online anymore (edit: a demo version is now up), but I think it’s fascinating and worth sharing with you, so I’ve included some screengrabs the developers had archived, and there are some reviews online (this one is interesting, and do see the comment thread, which includes a response from the developer). There’s also a Guardian article about it. It told the story of Jemma, a physics student in her early months at university, through emails sent to you as if you were an old friend from back home. You learn a bit of the science she is learning, but also about her life at university. The emails you get relate to where you’ve clicked on an associated website which includes videos and photo stories. Think of it as database-driven personalised narrative. This is a very good example of adult writers aping kid-to-kid discussion (see the earlier point about ‘generational drag’). However, I should stress this was 2003. I’m sure the developers would have loved to have brought more of the actual teenage audience into making the story rather than just being the recipients of, and characters in, it, something which is simply easier to do now. I’d love to see a project of this level of imagination and narrative complexity run today, but with the various technological and cultural resources we now have available.

Conclusion one: We should be honest about generational issues at play here. Don’t pretend to be providing a child’s voice when it’s an adult’s one, and be aware of how adults are framing, possibly curtailing, children’s interactions with science (and why – they may have reasons for doing so). We should also be honest about the age of scientific content discussed with and by young people. I don’t think there is anything necessarily wrong with young people talking about old ideas, or using old ways to demonstrate them (in some ways, it’s quite exciting that people back in the 18thC did similar tricks to demonstrate science that we do today), but I do think we should be honest about this long history, even aim to explicitly pull it out. Moreover, rather than looking at communication patterns as just top-down or side-to-side, maybe we need to think about co-constructed multi-generational media; both in the construction of content, and its audiences.

Conclusion two: there are a host of projects getting kids to work with scientists, even to be involved in the scientific research. Why not get kids doing science journalism, with science journalists, too? Why not get science journalists doing ‘outreach’? Yes, there is SciCast and some projects to get schoolkids scienceblogging. My mother told me about a science radio project in North London in the ’80s. But why not more of this? Moreover, why not include the more probing critical work of professional journalism? Kids can do more than explainers. I think this would have a number of educational benefits. Moreover, just as scientists doing outreach is sometimes (cynically) seen as serving the scientific community as a form of promotion for their profession, if science journalism is under threat as a profession, maybe doing outreach could help promote yourselves? And, just as scientists often say they learn a lot from working with young people, maybe science journalists could learn something too.

You want to reach young audiences? Stop thinking about them as ‘audiences’, and involve them.

Has Public Engagement become too institutionalised?

I was at a conference recently and a colleague raised an interesting question: today, where do the socially concerned scientists go? In the 1960s and 1970s, there was Pugwash or the Union of Concerned Scientists. What now?

I could think of several such scientists, though they didn’t fit the same model as the 1970s. Yes, I know Pugwash and the UCS still exist, but I’d bet a good chunk of even the odd sub-sect of the world that reads this blog haven’t heard of them. The nature of a socially engaged scientist seems to have changed somewhat since the 1970s. Some of my students made a great video a few years back dramatising this (screengrab above, watch in full here); with scientists from the 1950s, 1970s, 1990s and 2010s all arguing over the ways they feel they should address the public.

A key change has been the rise of this thing called ‘public engagement’. Now if you want to take your work outside the confines of the Ivory Tower, you can sign up to an engagement project. As I’ve written in a piece in the latest edition of Research Fortnight (paywalled, but most UK universities have a campus subscription, try this link), the rise of public engagement is something I largely welcome, but I also think it’s worth noting how institutionalised it has become, and wondering whether this institutionalisation compromises the independence of academics in their ability to embed themselves in society. Public engagement as it’s framed in UK policy discourse can cover a range of different activities; some more ‘impact‘-ful than others. A stall playing with balloons at a science fair is a lot easier than kicking up controversy over GMOs. It may also be more easily accountable.

In many respects, I like that the engagement institutions exist; that the government encourages researchers to do it, including support on how to do it. As I try to stress in Research Fortnight, the move away from top-down approaches to more discursive ones that stress mutual listening and learning between science and society (which many of the engagement institutions advocate) is not only one I personally approve of but, itself, a form of application of academic work from Science and Technology Studies.

One might argue, of course, that as soon as a researcher takes their work into society, they compromise their independence; that a search for objective truth requires a certain degree of intellectual dis-engagement. I think this would be simplistic, even if I do think we should question what the last 10-ish years of ‘engagement’ policy has brought us. So, I don’t agree with the Research Fortnight editorial’s take that ‘the scale and volume of engagement may be reaching the point where it threatens academic independence’. It’s not the size of engagement that’s the problem.

That video by my ex-students ends with the 2010 scientist with her head in her hands; feeling the weight of history and all the various expectations accrued upon her. I sympathise. She doesn’t have any answers and neither do I. The Research Fortnight piece ends with a question. As they don’t have a comments box for answers, I’ll repeat it here: how can we keep the political voice of academics independent, while supporting the idea that such a voice is part of their job, and ensuring that they in turn listen to other voices too?

Can we?