
KABOOM: Exploding ‘impact’

Picture: social researcher number one.

This is a drawing of a social researcher. I don’t mean a researcher who studies social relations. I mean this is a researcher who is social; one who connects to other people, very simply, by citing other researchers.

(Yes, sociology-spotters, it’s ‘inspired’ by Bruno Latour. It’s a poor reinterpretation of an early diagram in Science in Action. I’m currently an ocean away from my desk, and don’t have the specific reference to hand).

A few months ago, I attended the launch of the Royal Society’s survey of the global scientific landscape, a report entitled Knowledge, Networks and Nations. Looking at all the Royal Society’s pretty pictures of international networks, I remember being struck by quite how much of a social enterprise science is, and that in many respects this is its great strength. The idea that science might be socially constructed is often taken as a criticism of science, even an attempt at undermining it. But it doesn’t have to be.

(This isn’t to deny science’s interaction with the natural world. Indeed, I’ve often thought many concerns over social constructivism are down to a confusion between science and nature. But that’s a larger philosophical debate/bunfight, possibly also involving Latourian diagrams scrawled on bits of scrap paper).

I was reminded of this sense of the sociality of science during all the recent blather about ‘impact’. It is jargon, and rather ill-defined at that. As Richard Jones neatly put it, this thing called impact isn’t an actual thing at all, but rather a word that’s been adopted to stand for a number of overlapping imperatives. To put it as plainly as possible, publishing a research paper is only half the job (credit: I stole that line from David Dobbs). The government wants to make sure that the researchers it funds do a full job, and while it is aware that the other half of this work might take a range of forms, it is trying to find ways of measuring a thing called impact. This is hard. We could count citations in academic literature, or patent applications, or measure column inches of mass-media coverage. I suppose we could count mentions of Brian Cox on Twitter too. I don’t think any of these are quite going to cut it, but that doesn’t mean we can’t be cleverer about how we try to discern impact. As Steve Fuller recently argued, there is no reason why metrics have to be stupid. A recent AHRC report (pdf) covered many of these issues too.
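To make the bluntness of those counting approaches concrete, here is a toy sketch in Python of the kind of metric they imply. Everything in it (the categories, the counts and especially the weights) is invented purely for illustration; no funder, to my knowledge, uses these numbers.

```python
# A minimal sketch of a counting-based "impact" metric, of the sort
# discussed above. All categories, counts and weights are hypothetical.

def naive_impact_score(counts: dict[str, int], weights: dict[str, float]) -> float:
    """Combine raw activity counts into a single weighted score."""
    return sum(weights.get(kind, 0.0) * n for kind, n in counts.items())

# Made-up tallies for one imaginary research project.
counts = {"citations": 120, "patents": 1, "press_mentions": 8}

# Arbitrary weights, and this is the crux: who sets them, and why?
weights = {"citations": 1.0, "patents": 10.0, "press_mentions": 2.0}

print(naive_impact_score(counts, weights))  # 146.0 (a number, not an insight)
```

The sum is trivial to compute; all the difficulty Fuller and the AHRC report point to lives in choosing what to count and how to weight it.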

Back to the social researcher thing. Here’s another picture. The first was supposed to show the social relations involved in making academic work. This one shows the social relations involved in sharing it too, and is as influenced by Lewenstein (via Gregory) as it is by Latour.

 Picture: social researcher number two.

We might take the first diagram as a critique of the rhetoric of a scientific paper; a way of showing off expertise to keep others out. Equally though, it is simply representative of the ways in which people with academic training draw on a body of other people’s work and are helped in their thinking (and give credit, with a traceable trail, for that thinking). I’m an academic. I have read a lot and like to reference things. There are 533 sources cited in my PhD (I know because, after I submitted the thing, I ran a ‘guess the weight of my bibliography’ contest on my knitblog). I’ve also learned loads over the years from my students, friends, teachers, colleagues, family, ex-boyfriends, blog commentators, etc. I am a mass of other people’s ideas, even if I choose between them and add my own perceptions, misunderstandings and connections. It’s standing-on-the-shoulders-of-giants stuff, or a matter of science as a team sport.

(The first of those analogies comes via a folk history of Newton; the second I’ve taken from Jack Stilgoe. Just as I’ve already drawn on Latour, Dobbs, Jones, Fuller, Gregory… see what I mean?).

This is really important when it comes to thinking about impact. James Sumner wrote a great post earlier this year where he stressed how much time he spent talking about other people’s research. Sumner meant this in terms of the specific issues of humanities academics doing public engagement, but I think it applies much more broadly. As Jack Stilgoe wrote earlier this week, innovation studies tell us that economic benefit comes from networking, and policy-making is similarly built on networks of trust.

So, when Stilgoe also says we need to rethink impact as ‘people, not papers’, I feel the same unease I have about calls to fund ‘people, not projects’: science is done by groups, not individuals. It’s the tomb of the unknown warrior, to steal another good line, this time from Martin Rees (see second quote here). I guess if we want some tidy alliteration, it’s about keeping our scientists social. Let’s explode the idea of impact, not just to think of it as something more than economic or academic, but as something accrued, done and most successfully achieved through networks. I don’t mean networks in the Machiavellian sense sometimes associated with Latour, but simply in terms of people helping each other out. I want to sit in the Royal Society looking at pretty pictures that show the networked journey of research, not just its networked production (or better, the ways networks of production and dissemination are and can be interlaced).
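For a sense of how the network framing changes what you would measure, here is a minimal sketch, assuming the third-party networkx library; the people and links are entirely made up. Instead of counting one researcher’s outputs, it asks who sits on the paths along which research travels.

```python
# A toy network mixing production links (co-authorship) with dissemination
# links (blogging, briefing). Every node and edge here is hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("researcher_A", "researcher_B"),      # co-authors of a paper
    ("researcher_B", "science_blogger"),   # blogger covers the work
    ("science_blogger", "journalist"),     # journalist picks up the post
    ("researcher_A", "policy_advisor"),    # advisor is briefed directly
    ("policy_advisor", "MP"),              # advisor briefs an MP
])

# Betweenness centrality: who sits on the shortest paths along which
# the work travels from its producers to its eventual audiences?
for person, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

In this made-up graph the blogger and the policy advisor pick up substantial centrality despite authoring nothing, which is rather the point: dissemination is part of the network, not an afterthought.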

As ever, the comment thread is open for your thoughts. Or, if you’re London-based and want to be sociable about the impact debate, do come along to our event at Imperial on the 5th.

The known unknowns

I’m blogging from Cambridge, at the “Challenging Models in the Face of Uncertainty” conference. The focus is unknowns: be they known, unknown, guessed, forecast, imagined or experienced. I’ve heard Donald Rumsfeld quoted rather a lot. There has also been the odd reference to how stupid we all are, the problems of a God’s-eye-view and, lest we forget, black swans.

Picture: at the conference.

This morning started with a talk from Johan Rockström on his Planetary Boundaries framework. He quoted Ban Ki-moon’s line that “Our foot is stuck on the accelerator and we are heading towards an abyss”, adding that we are accelerating as if we were on a clear highway, driving in easy daylight conditions when really, Rockström argued, we’re on a dirt track in the middle of the night. What science can/should do, he suggested, is turn the headlights on en route to this abyss. Rockström’s talk was very clear, with some neat little twists on diagrams and the odd metaphorical flourish.

Steve Rayner, in the audience, picked up on this and asked Rockström to reflect on his own ways of signaling authority when transforming his work into a talk for non-expert audiences. Rockström’s response was largely to list the names of colleagues and more detailed work. In other words, Rockström didn’t answer Rayner’s question: he simply re-articulated the symbols of authority he’d been asked to reflect upon. Rayner wasn’t suggesting Rockström didn’t have an empirical basis to his work (or that his work was wrong), just that when communicating this work outside of science, Rockström inevitably relies upon rhetoric, and it’d be useful for him to reflect on his role as a rhetorician. But he didn’t, and this was just left as a question.

The second talk was from Melissa Leach. She emphasised the multiple narratives surrounding the sorts of issues discussed at this conference, be they connected to climate change, the spread of disease, GMOs, ash-clouds, nanotechnology, or some other emerging issue. She argued that we have a tendency to close down or re-articulate narratives of ignorance, ambiguity, uncertainty and surprise, and instead move to ones of “risk”. The sense of control and order that risk-framed narratives provide is sometimes very helpful, but it can also be illusory, and shut down possible pathways to useful action. Leach argued that we must open up politics to pay due attention to multiple narratives; to question dominance and authority, and to increase the ideas and evidence available to us.

Ok, but how do you do this? For example, we might argue that the internet provides a great opportunity for the presentation of such a multiplicity of narratives and, moreover, an opportunity for such narratives to productively learn from/change each other. At times it does just that. And yet, science blogging can also be deeply tribal, climate blogging especially so (and, I’d argue, considering its history, understandably so).

This evening there was a public lecture from Lord Krebs, on the complex interface between evidence, policy-making and uncertainty. He outlined three key tensions in this interface, each illustrated with examples. 1) Scientists disagree with each other, e.g. over bystander effect and pesticides. 2) Scientists sometimes just don’t know, even when they develop elaborate experiments to find out, e.g. with badger culling. 3) Sometimes it doesn’t matter what the science says; the politicians will be swayed by other issues, e.g. alcohol. I enjoyed Krebs’ talk. He had some really neat examples and there was some good discussion in the Q&A. Still, I left uninspired and unenlightened.

Picture: Lord Krebs talks badgers.

Today, I’ve heard a lot of very old and (to me at least) very familiar talk about issues in science communication. I’ve seen a bit of new data and collected some new jargon, but I’m yet to come across any new ideas. I’ve been told that people believe what they want to believe, that science and the perception of it are culturally embedded, that the so-called experts are often as misled, and as likely to mislead, as the so-called public, that science and politics make uneasy bedfellows and, of course, that it’s all terribly, terribly complicated.

But I knew that. I knew that ten years ago. I want something new.

Maybe I’m being unfair on this conference. I admit I’m tired and in need of a holiday. Also, interdisciplinary events are always very difficult to pull off, and it is only halfway through.

Maybe tomorrow will surprise me.

The Myth of Scientific Literacy

EDIT: I recorded an updated and audio version of this for the BBC (July 2012).

Every now and again, the term “scientific literacy” gets wheeled out and I roll my eyes. This post is an attempt to explain why.

The argument for greater scientific literacy is that to meaningfully participate in, appreciate and even survive our modern lives, we all need certain knowledge and skills relating to science and technology. Ok. But what will this look like exactly? How will you know, in advance, what we all need to know, and how on earth do you expect to get people trained up? These are serious problems.

Back in the early 1990s, Jon Durant very usefully outlined the three main types of scientific literacy. This is probably as good a place to start as any:

  • Knowing some science – For example, having A-level biology, or simply knowing the laws of thermodynamics, the boiling point of water, what surface tension is, that the Earth goes around the Sun, etc.
  • Knowing how science works – This is more a matter of knowing a little of the philosophy of science (e.g. ‘The Scientific Method’, perhaps via studying the work of Popper, Lakatos or Bacon).
  • Knowing how science really works – In many respects this agrees with the previous point (that the public need tools to be able to judge science), but does not accept that science works to a singular method. This approach is often inspired by the social studies of science and stresses that scientists are human. It covers the political and institutional arrangements of science, including topics like peer review (and all its problems), a recent history of policy and ethical debates, and the way funding is structured.

The problem with the first approach is what IB Cohen, writing in the 1950s, called “The fallacy of miscellaneous information”: the assumption that a set of often unrelated nuggets of information pulled from the vast wealth of human knowledge is likely to be useful in everyday life (or that you’ll remember it when it happens to be needed). That’s not to say that these bits of knowledge aren’t useful on occasion. Indeed, I remember my undergraduate science communication tutor telling us how she drowned a spider in the toilet with a bit of basic knowledge of lipids and surface tension. However, it’s unrealistic to list all the things a modern member of society might need to know at some point in their life, get everyone to learn them off in advance and then wash our hands of the whole business. This is especially problematic when it comes to science, as such information has the capacity to change (or at least develop). Instead, we all need access to useful information when it is needed. Note: by “access” I include the tools and cultural inclination to go about finding and making meaning from such information (posting a document online doesn’t count).

The second of Durant’s approaches to scientific literacy might make more sense then, but there are problems here too. Firstly, there is what Cohen dubs “The fallacy of critical thinking”. Science isn’t necessarily a transferable skill. This is easily demonstrated by examining carefully the lives of scientists outside of the laboratory (or, to put it another way: “yeah, cos scientists are all sooo well organised outside of work, living super-rational evidence-based lives, all the time”). It would be lovely if we could provide a formula for well-lived lives, but people just aren’t that consistent.

There is also the matter of whether you believe science works to a singular “scientific method”. In reality, science isn’t “a” way of thinking but many, enacted under quite local conditions (which are influenced by ideas like those of Popper, Bacon et al, but “method” is only part of it). This is largely the thinking behind the third approach to scientific literacy: “how science really works”. I have a problem with this too, one shared by all three approaches: they are too didactic. The third simply replaces the idea that the public have a deficit of scientific information with the idea that they have a deficit of sociology of science. It is just as unrealistic (if not more so).

One of the neatest arguments against calls for scientific literacy is Jon Turney’s 2003 response to Susan Greenfield. It has a particularly good ending:

Work to promote scientific literacy so everyone is up to speed, empowered and ready to contribute to the great debates about science, technology and the future? No. Invite them to participate, and really mean it, and they will find the motivation to become as scientifically literate as you, or rather they, please.

This echoes a key problem many people have with the scientific literacy approach. It is too top-down. You might be able to talk about scientific literacy in an educational context (i.e. for children in compulsory education), but adults will simply feel patronised and so won’t listen.

I’d also argue that a scientific literacy approach tackles the problem the wrong way around. It would be lovely if we could live in a world where “everyone is up to speed, empowered and ready to contribute”, but you can’t prepare for scientific controversies like that. Do we really want to view each science story through the lens of older ones (cough, Simon Jenkins)? Maybe prevention would be better than cure, but I don’t think it is possible in this context; medical metaphors perhaps being as inappropriate here as “literacy”. Rather, let’s provide structures where non-experts can learn about science as and when it becomes important to them. As Turney says, “Invite them”.

Although I like Turney’s piece a lot, I do also understand the frustration people feel when they see what they take to be a lack of scientific training. I was prompted to write this blogpost by recent comments from Julian Huppert, suggesting that MPs be required to take a crash course in basic scientific techniques (see also the Liberal Conspiracy piece in support). Do we really want elected politicians to “become as scientifically literate as they please”? We might argue that MPs, like schoolkids, should just be told to turn up and listen. But as anyone who has worked in a school will tell you, compulsory attendance is only part of the battle.

Mark Henderson tweeted that he agreed with Huppert and the Liberal Conspiracy piece that understanding the methods of science would help politics; that it is the least understood thing about science outside science: most non-science graduates think of it as a body of facts, not as a way of thinking. Fair enough. But you have to believe these ideas, as well as understand them. This is one of the reasons why the UK science communication industry dropped the word “understanding” a while back, and why it is important to avoid confusing “understanding” with “appreciating” (or “knowing” with “liking”, or “trusting” for that matter). Identifying what you think people should know about, and actually getting them to (a) listen, (b) believe you and (c) apply it, are entirely different matters. As Huppert told the Independent, political leaders simply pay “lip service” to the importance of scientific proof. I worry that greater training in scientific literacy could simply provide them with a more extensive rhetoric. You want their hearts, not just their minds (or simply their vocabulary).

I’d love it if there were a simple course we could send our elected officials on which would guarantee that future science policy would be reliably high quality. Being educated in science (or even “about science”) isn’t going to do it. It’s social connections that will. We need to keep our elected officials honest, constantly checking that they are applying the evidence we want them to, in the ways we want them to. And if the scientific community wants to be listened to, it needs to work to build connections. Get political and scientific communities overlapping, embed scientists in policy institutions (and vice versa), get MPs’ constituents onside to help foster the sorts of public pressure you want to see: build trust, so that scientists become people MPs want to be briefed by.

This, for me, is the true message of “understanding how science really works”: that science is not only done by, but also advocated by, networks of human beings. Rather than training people up in the sociology of science (cough, Harry Collins), we should go out and do some “applied sociology”: build those networks through action and debate.

This is just a brief sketch of the basic problems with scientific literacy (yes, this was the brief version). If you are interested in more, I can recommend the following. They are all a bit old. It is an old argument.

  • Bauer, Martin, Nick Allum & Steve Miller (2007) What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda, Public Understanding of Science, vol. 16(1): 79-95.
  • Durant, Jon (1993) What is scientific literacy? in Jon Durant and Jane Gregory (eds) Science and Culture in Europe (London: Science Museum).
  • Einsiedel, Edna (2007) Editorial: Of Publics and Science, Public Understanding of Science, vol. 16(1): 5-6.
  • Gregory, Jane & Steve Miller (1998) Science in Public: Communication, Culture and Credibility (New York & London: Plenum). See p. 16-17 for IB Cohen’s “fallacies”.
  • Millar, Robin (1996) Towards a science curriculum for public understanding, School Science Review, vol.77 no.280: 7-18.

A quick note on jargon

I posted a note on science communication jargon on Posterous last week (mainly a four-page jargon buster I came across…). I still think Posterous is the best place for it, but I’ll link to it here, and also re-post a bit of my commentary.

There are a lot of advantages in the professionalisation of science communication. I like some of its jargon. I use a lot of it myself, several times a day. Some of it reflects the names of institutions we mention so often that an acronym is almost like a nickname (SOB, the Society of Biology); some reflects ideas and historical shifts in the approach the field has decided to take (e.g. the move from PUS to PD*).

Still, it’s also necessary to keep it open, and involve the range of other experts who do active science communication work (e.g. professional scientists who also do a fair bit of public communication). Sophia Collins has already made this point very clearly, though; go read her post on the need for such a mix. So, we might joke that a field such as science communication relies on so much jargon, but the more serious point is that science communicators need to be careful, because the field contains far more than professionals.

Moreover, I suspect that if we forced ourselves to say what we mean, rather than rely on buzzwords we think other members of our gang will understand, we’d communicate within the profession more effectively too. Just think of “engagement”: an incredibly broad collection of different understandings (including, I’d argue, misunderstandings). Some call this an “umbrella term”; others might say “woolly”, or even “meaningless in its multitude of meanings”.

Sometimes jargon can get in the way of precision as much as it allows it.

* Brief translation: the shift from talking down to a public perceived as ignorant (a need for PUS = Public Understanding of Science) and towards more interactive, dialogue-based models of communication which listen as well as educate (PD = Public Dialogue).