Friday 27 November 2009

More (un)certainty, or do I mean something else?

Following on from my last blog, the event on poetry and astronomy at the Royal Observatory in Greenwich went very well. Jocelyn Bell Burnell gave a great talk and invited members of the audience to read aloud the poems on astronomy she had selected. These readings were fantastic, and I think they helped the discussion with the audience to flow more freely.

At one point during this discussion I said that perhaps scientists aim to be more ‘certain’ in their writings than poets, and aim to get a more definite response in their readers. A lady in the audience rightly reminded me that since 1927 physicists have been wrestling with uncertainty, and in particular the uncertainty principle and its implications for reality.

One of the problems with trying to understand the uncertainty principle is the word ‘uncertainty’. It’s a translation of the German that Heisenberg used when he formulated the physics – and he actually used several different German words. It’s a shame that the English equivalent has invariably been ‘uncertainty’, and perhaps we accord that specific word too much weight.

What Heisenberg actually deduced, and what he summarised in this principle, is that not every physical property of a system exists with infinite precision at every point in time. Furthermore, certain pairs of these physical properties are linked, so that the more precisely defined the position of a particle is, the less precisely defined is its momentum.
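
For anyone who wants the relation itself, the modern statement of the principle (with the spreads written as standard deviations and ħ the reduced Planck constant) is:

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

The product of the two spreads has a floor: squeeze one below a certain size and the other necessarily grows.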

You often see this explained as a practical problem of measurement: if you measure the position of a particle you do so with photons of light, and they will hit the particle, inevitably changing its momentum. This way of thinking implies that there is an underlying physical certainty, but that we just can’t measure it. It leads to approximately the right mathematical formulation of the principle, but it’s wrong-headed. It’s a metaphor that’s been stretched too far.

The uncertainty principle is telling us something much more fundamental about reality than that. It’s telling us that reality itself has an inexactness; there is nothing beyond what we can measure. There is no underlying, more precise, reality. This can perhaps be better expressed in some of the other words that Heisenberg used, for example, ‘indeterminability’.

So, some of the misunderstandings about this principle derive from the English word ‘uncertainty’, and the fact that this translation of Heisenberg’s original words has become rather too crystallised, perhaps even too ‘certain’. It’s taken on a shorthand meaning of a general vagueness, and even a general limit to scientists’ ambitions about their work.

An additional layer of uncertainty has been added by the requirement to translate Heisenberg’s words from German, and this influences the way we’re able to discuss the subject in English. (It’s obviously more precise to go to the maths, but there was an analogous ‘translation’ problem in the 1920s, when both Heisenberg and Schrödinger came up with very different mathematical formulations for quantum physics. It took some time before they realised that one formulation could be translated into another.)
Is it easier to talk about this principle in the original German? Answers on a postcard, please.

Finally, there will be more actual (as opposed to online) discussions about poetry and science; we’re having a ‘social sessions’ event on 13 January in Edinburgh at the Scottish Poetry Library. Do come!

Thursday 29 October 2009

Dark matter – clear poetry

The Royal Observatory Greenwich is holding a public event on 10 November to discuss poems about astronomy. The astronomer Dame Jocelyn Bell Burnell, the poet Kelley Swaine, and I will be speaking at it.

There is a lot of poetry written about astronomy, and I find this surprising for a couple of reasons.

First, because there’s hardly any literary fiction (I’m only singling out literary fiction because, with its emphasis on language, it’s the nearest thing in prose to poetry) inspired by astronomy. There are the wonderful books by John Banville, ‘Copernicus’ and ‘Kepler’, which are fictional accounts of those astronomers’ lives. There is ‘The Discovery of Heaven’ by Harry Mulisch. And that’s it. (Is it? Tell me I’m wrong.)

Second, poetry likes to restrict itself in terms of space, and numbers of words. It likes to be concise, condensed. Astronomy, by its very nature, attempts to explain the entire universe. How can there be any sort of dialogue between such a constrained art-form and such a sprawling subject? What can poetry meaningfully say about astronomy?

Looking at the anthology of poetry about astronomy, ‘Dark Matter’, what strikes me is how many of the poems use people to investigate the subject. As I’ve said in previous blogs, we can’t seem to escape the human-sized in literature. So is astronomy just being used as a metaphor for human activities and emotions? Or is it being explored as something interesting in its own right?

Rebecca Elson managed to write very concise poems that convey an accurate sense of the science – her poem ‘Explaining Relativity’ describes how, according to the theory of general relativity, space is distorted by matter, and the way matter moves is correspondingly affected by this distortion.
‘What if There Were No Moon?’ describes how the Moon affects the Earth; not only tides and eclipses, but also human constructions such as the calendar.

Perhaps poetry has similarities to mathematics, in that discipline’s desire to explain and describe as concisely as possible. Euler’s identity is much loved by mathematicians, because it links the fundamental arithmetic operations to irrational, transcendental, and imaginary numbers in one very simple equation. e to the i pi plus one equals zero. When you say it out loud it sounds like poetry.
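
Written down rather than spoken, the identity is simply:

    e^{i\pi} + 1 = 0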

Wednesday 30 September 2009

Why we should resist the urge to classify everything

Recently James Kelman has been letting off steam about the fact that genre fiction outsells literary fiction and is far more likely to be reviewed in the so-called literary supplements of the newspapers. But both he and many of the people commenting on his criticisms seem to have a curiously simple-minded approach to the literary vs. genre debate. Literary fiction is deemed to be automatically more virtuous or ‘good’ because the author is not subject to any external demands or limitations in terms of style or content, and genre is the bad guy because it apparently doesn’t question or subvert the readers’ expectations. But how much ‘literary’ fiction actually subverts, or at least stretches, those expectations? And how much of it seems to be in thrall to style, at the expense of content or plot? Many writers sneer at so-called ‘plot-driven’ genre. But just what is wrong with plot? Why are books with more character development than plot lauded over genre?

Writers don’t just write genre fiction because they want to sell. They write it because they have something to say that can’t be fitted into the expectations of literary fiction. Despite what Kelman et al. say, literary fiction regularly excludes from itself books whose quality of writing, or ‘style’, certainly justifies their inclusion. It does so because it’s uncomfortable with the subject matter. For example, Margaret Atwood says her books are not science fiction, because the latter only addresses what has not yet happened. This arbitrary definition gives the lie to literary fiction being less rigid in its expectations than (other) genre.

Caster Semenya’s plight may read like a science fiction novel, but it’s real.
Until her story hit the news, I assumed that females were created by two X chromosomes being present in the foetus, and males by one X and one Y. But this is not so. It seems that a person’s sex is a complex physical phenomenon which arises from an interaction between chromosomes and hormones. In order for the chromosomes to have the expected effect on the development of the organism, they need to be supported by the relevant hormones, in the right quantities at the right times.

For example, a person with XY chromosomes can only develop as a male if their body is receptive to the androgen hormones. If they have a condition known as androgen insensitivity syndrome, they will develop as a female, despite the presence of a Y chromosome in their bodies. So, while most people’s sexes are straightforward to categorise, a minority exist on a continuum in between the two most common ‘extremes’.

It seems to be impossible to decide ‘objectively’ the point on this continuum at which ‘female’ can be differentiated from ‘male’, without asking the person concerned what sex they feel they are. That is why Caster Semenya is undergoing psychological tests, as well as physiological ones.

This sort of psychological test sounds depressingly similar to those foisted on gay people in the past, but at least it may give Caster some input into the decision that the IAAF are going to make about her sex.

Classification of people (and of writing) is all well and good if it gives us genuine insights into the world. But if classifications make our thinking too rigid and become uncoupled from reality, then we should learn to live without them.

Sunday 9 August 2009

Tiny galaxies, enormous atoms, and people at the centre of it all...

Literary fiction usually only portrays human characters. This type of fiction places humans at the centre of what is an inhuman universe. It hasn’t absorbed the lesson of the Copernican revolution.

In contrast, since Copernicus’s models and Galileo’s observations, science thinks it fully understands that the universe was not built for us (leaving aside any discussion of the anthropic principle - I’ll save that for another day).

Science deals with physical and temporal phenomena on all scales. The way we define a second of time uses gaps between energy levels in atoms. Stars and galaxies were created billions of years ago.

But I wonder if scientists still sneakily use humans and human-sized experiences as the ultimate measuring scale.

It’s common in astronomy to use redshift as a proxy for the distances to galaxies. This is because redshift can be measured directly, without any reference to, or reliance on, theoretical models of the universe. It also happens to be a simple number without any units (because it is a ratio), and for all measurements apart from one, it is currently between 0 and 10. The only redshift measurement which is higher than this is for the cosmic microwave background, which is at a redshift of about 1,100.
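
For the record, redshift z is defined from the observed and emitted wavelengths of a spectral line:

    z = \frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{emitted}}}{\lambda_{\mathrm{emitted}}}

Being a ratio of two wavelengths it carries no units, which is part of what makes it such a comfortable number to work with.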

So, when you’re at a telescope, measuring the redshifts of quasars, you don’t think about the fact that they were formed billions of years ago and are billions of light-years away (the word ‘are’ is a bit tricky in this context – you’re seeing them as they were then, not as they are now). They have redshifts of 1, 2, 3, 4 – numbers you learnt in primary school.

The same goes for size. Astronomers look at images of galaxies that are small enough to fit onto their computer screens. They used to use glass plates to take images of large parts of the sky. Just one of these plates, 8 inches square, would show around 100,000 objects – both stars and galaxies. So there was a curious inversion of size: we had to look at these plates with an eyepiece to see the details of the galaxies, the curved spiral arms, the wisps of gas. In every astronomical image there is a huge compression of what is actually out there into what we can cope with.

The list goes on and on… Physicists use a unit called a ‘barn’ to refer to the cross section of an atomic nucleus. When this was first calculated in the 1940s, people said that it was ‘as big as a barn door’.

There is a lovely poem on the Human Genre Project which compares the images of chromosomes to teeth. Again – bringing the not quite human into our world.

Are there drawbacks to this? It means the Copernican revolution is not finished. So when Richard Dawkins tries to explain to us that our behaviour is governed by genes, we still cannot accept that loss of free will. And while science is still struggling, literary fiction hasn’t even begun to come to terms with this.

Friday 31 July 2009

Madame Curie's transformation

My favourite essay in ‘The Faber Book of Science’ (edited by John Carey) is ‘Shedding Life’ by Miroslav Holub, who is a poet and immunologist. In this essay he recollects mopping up the blood of a recently shot muskrat, and he ponders on the exact nature of death. The death of an organism as a whole is generally taken to be defined as brain death or heart failure. But what about all the constituents of that organism? Holub points out that many of these – blood cells, hormones, enzymes and so on – are capable of living on for some time after the apparent death.

When Marie Curie died (in 1934) her belongings, such as her notebooks, were discovered to be so radioactive that they had to be deposited in lead-lined boxes. Even now, anyone who wants to look at them has to wear protective clothing.
She died of cancer, most likely caused by a lifetime of working with radioactive substances without being properly protected from the resulting radiation. Her most famous achievements were the discoveries of new chemical elements, detected by virtue of the fact that they were radioactive. In making these discoveries she helped to shed light on the nature of radioactivity, and to show that there are three different types: alpha, beta and gamma radiation. When an element emits or absorbs alpha or beta particles it changes into another element. This is why the study of radioactivity has been considered to be a sort of alchemy. (You can get gold from radioactive mercury, but it is very difficult.)

Marie Curie worked with a uranium ore called pitchblende, exposing herself to alpha particles and to radon gas. Both of these would have been absorbed by her body, causing damage to its cells. At an atomic level some of the alpha particles may have been absorbed by the atoms in those cells. (Humans are carbon-based, so many of these atoms are carbon atoms. The process of carbon atoms absorbing alpha particles and transmuting into oxygen atoms is what happens in dying stars.)
Radon gas has a half-life of only a few days and decays into polonium, one of the elements Marie Curie is famous for discovering and which she named after the land of her birth: Poland. Polonium is called a ‘daughter product’ of radon as a result of this process.
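
The bookkeeping behind a ‘half-life’ is the standard exponential decay law: for a decay constant λ the number of undecayed nuclei falls as

    N(t) = N_0 \, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}

so after each half-life the surviving amount is halved, and the daughter products (such as polonium) accumulate correspondingly.
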
Her body is interred in the Pantheon in Paris. It is still radioactive; alongside the more usual organic decay processes taking place there are also the atomic ones. She is being transmuted herself.

Sunday 26 July 2009

Intuition about mice and quasars

‘Intuition’ (by Allegra Goodman) takes place in a cancer research lab where one of the junior workers, Cliff, thinks he’s found a virus which can cure cancer in mice. The lab is in financial trouble, so its directors are grateful for the chance to publicise these results and use them to raise funds. However, it proves impossible to replicate the results, and another post-doc, who happens to be Cliff’s ex-girlfriend, suspects him of fraud.

Some reviews of this book have suggested that it’s never made entirely clear whether Cliff has knowingly committed fraud, or if he has unwittingly made a mistake. But there is a clue quite late on, when we get a glimpse of Cliff’s reasoning:

‘He had not chosen to discuss every piece of data, but had run ahead with the smaller set of startling results he’d found. Still, aspects of his data were so compelling that in his mind they outweighed everything else. He had sifted out what was significant and the rest had floated off like chaff.’

From a scientific point of view, the first sentence of this quote is damning. Cliff just doesn’t seem to be a very good scientist. When you run an experiment you cannot pick and choose where your results start and end. If you start with a hundred mice, you must discuss the results of the experiment on all of those mice, not just the few that happen to show a good result. This is because there’s always the possibility that your good result happened by chance. If you spot a good looking result in a subset of your data, it may just be a random fluctuation. (That is also true of the whole dataset, of course, and should be quantified as far as possible.)
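
A quick simulation makes the danger concrete. The sketch below (plain Python with made-up numbers – it is not modelling Cliff’s actual experiment) gives a hundred ‘mice’ outcomes that are pure chance, then cherry-picks the best-looking cage of ten after the event; that cherry-picked group almost always looks far more impressive than the full sample does.

    import random

    random.seed(1)

    # 100 "mice", each either responds (1) or doesn't (0), purely by chance (p = 0.3).
    # There is no real treatment effect here at all.
    mice = [1 if random.random() < 0.3 else 0 for _ in range(100)]
    overall_rate = sum(mice) / len(mice)

    # Split into ten cages of ten and report only the best-looking cage,
    # which is exactly the after-the-event selection described above.
    cages = [mice[i:i + 10] for i in range(0, 100, 10)]
    best_cage_rate = max(sum(cage) / len(cage) for cage in cages)

    print(f"response rate over all 100 mice: {overall_rate:.0%}")
    print(f"response rate in the best cage : {best_cage_rate:.0%}")

Run it with a few different seeds and the ‘best cage’ routinely comes out well above the true rate – a fluctuation, not a discovery.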

It’s bad science to select a subgroup of interesting results after the event, but it’s depressingly common. When I was an astronomer, I worked on quasars. There’s a so-called ‘controversy’ about whether or not these objects really are at the incredibly large distances implied by their redshifts (assuming the Big Bang model is correct). This controversy was a genuine problem when quasars were first discovered in the sixties, but it has now fizzled away. Only a few ‘maverick’ astronomers, such as Halton Arp, still believe quasar redshifts aren’t cosmological. The controversy is an artefact of choosing only the data that fit the hypothesis and ignoring all the other data – in this case, of looking at quasars that appear to be near to much lower redshift galaxies. Statistically this doesn’t happen any more often than you would expect by chance – but superficially it looks ‘interesting’.

One nice aspect of ‘Intuition’ is that the point of view is omniscient and no one character is particularly favoured – rare in modern literature. The reader gets to inhabit the minds of all the major characters and understand their view of the drama unfolding around them. Even so, with all the information we are given, it proves impossible to understand the reasons behind the characters’ actions.

This failure of the omniscient narrator to get to the truth of the matter could be read as a warning – are we deluded in hoping/expecting science to be an impartial tool for understanding the external world? Or am I confusing science with omniscience?

Saturday 27 June 2009

Madame Bovary on a beam of light

I’ve been rereading ‘Madame Bovary’ and am struck, all over again, by how Flaubert lets us draw our own conclusions about Emma Bovary’s actions, without forcing any ‘morals’ down our throats.
Flaubert was one of a set of nineteenth-century writers, along with Zola and others, who were strongly influenced by the rise of science and who saw themselves as human and social experimenters of a sort. Many other writers have also consciously seen themselves in this way. Brecht commented that he conducted experiments on audiences with his dramas and spoke of turning the theatre into a laboratory.

‘Copenhagen’ by Michael Frayn can be seen as an experiment in the way it repeats variations of the real-life conversation between Heisenberg and Bohr in 1941 in Nazi-occupied Copenhagen, in an attempt to understand what really happened at that meeting. But is literature really the same as an experiment? In ‘Copenhagen’ the audience is presented with different versions of the events but we never get to understand what really happened, and why. But that’s not the aim of this particular experiment. What we’re left with is an understanding that it is fundamentally impossible to know what happened that night in Copenhagen, despite all the written records.
That understanding is a satisfactory result of this dramatic experiment. It doesn’t matter that this experiment cannot really happen, that it’s only acted out for us. It’s actually a perfect example of a thought experiment.

Thought experiments have a long pedigree in science. They’re best characterised as hypothetical experiments which we can imagine, but which we’re not able to perform. They allow us to set up a scenario and think through the repercussions. Einstein, influenced by the philosopher and physicist Ernst Mach, was a whiz at developing thought experiments. Some of the best ones (such as imagining a person riding on a beam of light) show how his theories developed from considering everyday objects such as clocks and rulers. They also provide the most accessible illustrations of the ramifications of his theories, without having to wade through all the maths.

As twentieth century physics became more esoteric and abstruse, physicists became more and more reliant on thought experiments to illustrate the implications of their work.
The most famous of these thought experiments, Schrödinger’s cat, was devised as a reductio ad absurdum by Schrödinger to criticise the bizarre, and to him clearly wrong-headed, ‘standard’ interpretation of quantum physics. In this experiment a cat is locked in a box with a vial of poison. After a fixed amount of time, depending on a random process, the cat will either have been killed by the poison, or not. But according to quantum physics, until we open the box, observe the cat, and therefore measure the outcome of the experiment, the cat is in a superposition of quantum states and is both dead and alive.
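
In the standard notation, and assuming the usual fifty-fifty set-up, the cat’s state before the box is opened is written as an equal superposition:

    |\mathrm{cat}\rangle \;=\; \tfrac{1}{\sqrt{2}} \left( |\mathrm{alive}\rangle + |\mathrm{dead}\rangle \right)

and it is the act of observation that forces the state onto one of the two terms.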

This experiment is not about to be performed in any laboratory soon. It doesn’t need to be. Thought experiments are a beautiful use of imagination in science. They allow the scientist to imagine ‘what if?’
This is precisely the question that fiction writers ask. Just because the results aren't scientifically provable doesn't mean they're not true.

Sunday 7 June 2009

A spark of life

I’ve been reading about the process of cloning. Some of this process seems conceptually simple, obvious even, yet is clearly very technically demanding. For example, part of the process requires removing the nucleus from a cell and inserting it into an egg which has had its own nucleus removed (a nucleus is the part of a cell which contains the DNA, and this in turn contains the genes needed to instruct a growing embryo what to do and how to grow).

Then, a small electric shock is applied across the egg and its new contents. This shock apparently has two purposes. One is to help the nucleus ‘fuse’ to the egg and the other is to ‘activate’ the cell division process (which is what happens when an egg is fertilised by a sperm), and help the egg on its way to becoming a new organism.

This process of applying an electric shock also seems simple in one way. But it’s not obvious why it should work, and I can’t find a proper explanation in any popular account of cloning. How can an electric shock mimic the natural process of fertilisation? Why did anyone think this would work?

I find this lack of information unsettling and it seems at odds with the rest of the detailed information about cells, nuclei and so on. The account that we do have brings to mind the scene in ‘Frankenstein’ in which the monster is brought to life by an electric shock. Is that why we’re not given more information about the process, because it’s assumed that we’re familiar with it already, through reading or watching ‘Frankenstein’?

To be fair, the idea that electricity is linked to life has been around for longer than ‘Frankenstein’. When she wrote the book, Mary Shelley was influenced by Galvani’s experiments in the late eighteenth century. Galvani found that when he applied an electric shock to dead frogs’ legs they twitched, leading him to think that electricity was the vis viva, a sort of life force.

This intertwining of science and stories about bringing inanimate objects to life goes back further. The Golem is a creature in Jewish culture made out of clay and dirt who is created to protect Jews. One of the best known versions of this story is set in Prague in the sixteenth century. The Rabbi of Prague makes the Golem in order to protect the Jews in the city from anti-Semitic attacks. Here, the creature is not activated with electricity, but by having the word ‘emet’ (Hebrew for ‘truth’) inscribed on his forehead (in some versions a piece of paper inscribed with this word is inserted into his mouth). He does as he’s told and he protects the Jews, but he gets increasingly violent himself, until the Rabbi is forced to deactivate him, and he does this by rubbing out the first letter of ‘emet’ leaving ‘met’, which means ‘death’.

The fact that words are used to bring the Golem to life, and then to kill him off again, is a nice metaphor for the power of language. It feels more precise than a flash of electricity; perhaps that's because there is more information in a word than in a spark.

Wednesday 13 May 2009

What does illness mean?

Writers love disease. It fulfils all sorts of useful functions; it can be used to investigate fictional characters, and even give them a greater depth and nobility (e.g. ‘The Magic Mountain’ by Mann). It can act as a metaphor for societal problems (think of ‘The Plague’ by Camus). Illness is not allowed to be itself.

It’s not just writers who like this way of thinking. It seems to be ubiquitous. But this attachment of metaphor to illness is very pre-science. TB was thought to favour sensitive, artistic souls, until Koch’s discovery of the relevant bacterium[1]. Cancer doesn’t have a simple one-size-fits-all cause or cure, so people speculate that it can be associated with, or caused by, repressed feelings, and cured by having a positive attitude.

In her famous essay ‘Illness as Metaphor’, Susan Sontag criticised the widespread use of military metaphors in the discussion of cancer. Implicit in many descriptions of life with cancer is the assumption that if you don’t battle against the disease you are surrendering, and must bear the blame if the disease wins. She pointed out that many major illnesses have their own set of distinct metaphors, and what these metaphors really tell us about is our attitude to the illness, not the illness itself.

The metaphors get more complicated when the boundaries of the illness itself get fuzzier. What if we discover we are carriers of an illness that hasn’t actually caused any symptoms – yet? What if an illness runs in our families but no one’s yet found any relevant gene that could be deemed to be a cause? Are we ‘ill’ in either of those circumstances?

Illness has an extra dimension now; the dimension of time. And a new grammar to describe it; the grammar of the possible, and the subjunctive.
The commonly used metaphor for genes’ potential to cause illness is a ‘timebomb’ waiting to go off: something external buried deep within ourselves that is primed to explode at a time unknown to us. This ‘timebomb’ is another use of the military metaphor, and as with cancer, it externalises the cause of the illness in a way that is essentially incorrect. Genes are not external forces (although they may, or may not, act according to external forces). They are part of us, they help define us.

We can behave in ways that may lessen the probability of us developing an illness (note that I don’t use the word ‘protect’ – another metaphor from the battlefield) but we are our illnesses as well as our health. We can’t own one without the other.

We need to develop new metaphors.

[1] Actually, cause and effect get jumbled here. TB was also thought to make the sufferer more pre-disposed to an artistic temperament if they weren’t already so inclined.

Thursday 30 April 2009

Monkeys and Moons

You may have realised by now that this year is both the bicentenary of Darwin’s birth and the 150th anniversary of the first publication of ‘The Origin of Species’.

It’s also the 400th anniversary of Galileo’s use of the telescope. Why is this worth commemorating? In 1609 Galileo turned his telescope to the sky, and soon afterwards (in early 1610) he observed the moons of Jupiter for the first time. He saw that they were orbiting Jupiter, so he realised that not everything in the sky moved around the Earth. This was experimental evidence for Copernicus’s heliocentric model and it relegated us and our Earth away from the centre of our Universe.
This relegation has continued ever since, to the present day. We now know that we’re on a small planet orbiting an average-sized star, in a rather large spiral galaxy, which is only one of many galaxies in the Local Group and one of several million (known) galaxies in the Universe.

Darwin’s work was necessarily more focussed on humanity than Galileo’s, but it helped to emphasise the links between humans and other living organisms at the expense of the uniqueness of humanity. Our place in the universe is not particularly special, and neither are we, compared to other species. (We can’t yet quantify how unique the combination of us on our planet is, compared to other similar Earth-like planets in the universe.)

It’s interesting that the anniversary of Darwin’s work seems to have a much higher profile than Galileo’s. Of course, that’s been helped by the double whammy of Darwin’s anniversaries falling in the same year. And this year is International Year of Astronomy, which was at least partly triggered by the Galileo anniversary. IYA is a big deal; there are lots of astronomical events going on all over the world to celebrate it.

Even so, I can’t see any commemoration of the wider implications of what Galileo found, outside of the scope of the actual science. (Of course, these implications were realised by the Church during Galileo’s lifetime, leading to his infamous trial and subsequent house arrest.)
In contrast, we’re possibly in danger of developing Darwin-fatigue. There is oodles of coverage on TV, radio, newspapers, books… and the impact of evolution on every other aspect of human endeavour hasn’t been missed.

I think this difference in emphasis has two possible causes. Darwin’s work is not finished. Evolution is endlessly being challenged by proponents of so-called Intelligent Design. These challenges are not trivial; they are affecting the way that children are educated. So there is a real reason to defend evolution and to continue to explain how it works. Also, Darwin as a person is accessible. We know how he lived, and we can read what he wrote without it needing to be filtered through intermediaries.

On the other hand, Galileo might have been the first modern scientist (in that he actually observed the world around him rather than relying on Aristotelian arguments to make his case) but he’s a remote figure. It’s difficult to imagine his world. What did he really think of the Church? His writing is direct and engaging (see his ‘Dialogue Concerning the Two Chief World Systems’) but it’s difficult to relate it to the way we do science.

But we lose sight of Galileo at our peril. His trial was a test of what can and cannot be said. The journalist Simon Singh is currently being sued by the British Chiropractic Association over his criticism of the possible dangers of chiropractic therapy. This is totally inappropriate; science shouldn’t be constrained by legal devices. If the BCA think that Mr Singh is wrong, why don’t they produce the relevant scientific research?

Tuesday 21 April 2009

The pro-am tournament

Science (of all descriptions) is usually done by scientists in universities or industries, and they get paid to do it. Science became a predominantly paid profession towards the end of the nineteenth century, with the growth of lectureships and PhD studies at universities. Until then, amateurs had played a key role. Victorian science is full of country parsons with microscopes in their dining rooms, and even Darwin wasn’t a professional scientist.

Until recently, astronomy was probably the only science still carried out by amateurs as well as professionals. Amateur astronomers do useful stuff like monitor the changes in light from variable stars. They also search the sky for supernovae (exploding stars), and other rare phenomena. They’re in a good position to do so, as they tend to have more access to telescopes (albeit much smaller telescopes than the ones professionals use).

But amateur astronomy is different to that done by professionals. Amateur astronomers have more in common with collectors of butterflies or fossils. In any science there’s a need to collect, classify, and categorise. Before you can explain different phenomena you need to have detailed descriptions of them, know how common they are and where they occur. Is a big red star a different type of star than a small white one? Or just the same type at a different stage of its life?

Amateurs document and describe what they find, but they’re less likely to investigate the underlying physics and provide explanations of what they see. Theirs is a more passive activity than professional astronomy. Perhaps it’s also more visually aesthetic because they spend more time actually looking at the sky.

Now, the edges are being blurred between amateur and professional astronomy as amateurs are able to get access to facilities like the 2 metre robotic Liverpool Telescope. Will this change what they do?

And now astronomy is no longer the only science in which amateurs participate. There is a growing trend for DIY biology, done by amateurs in their kitchens. This is seen by its practitioners as challenging the hegemony of ‘big science’ and making it more democratic, so that genetically modified organisms aren’t just created for the purposes of making profit. This sort of amateur science is much more of a challenge to professionals than amateur astronomy. DIY biologists are doing what professionals are doing, just on a smaller scale, and the professional way of ensuring that scientific results meet a commonly agreed standard is through peer review. Will the DIYers bother with that? Perhaps they won’t need to, if they’re not applying for jobs or grants…

Of course, the blurring between amateur and professional activities happens because people get access to technology and information. Writers now publish on the internet. It may not be ‘professional’ and it may not make them any money, but it disseminates their work. But we still crave the kudos that comes from getting our work published in more traditional ways. Do we want this just because it’s so difficult to get?

Monday 30 March 2009

Imagining accuracy

How important is accuracy in fiction? If you’re writing about something ‘real’ based on real information, experiences, or events do you have to stick to the facts?

If I spot a mistake in the use of science in fiction, it can throw me off course. I feel that the universe set up by the writer is flawed. If the writer can make one mistake, then perhaps others have been made too. Should I continue to believe in this universe?
And I’m more likely to be on the hunt for mistakes if I suspect that the author is using science for reasons other than telling a story.
For example, some authors appear to use science to bolster their authority. Ian McEwan does this in ‘Enduring Love’ with his use of quasi-medical papers to give a scientific ‘explanation’ for the way that one of the characters behaves. Others use science to provide pretty-sounding metaphors. Quantum physics and relativity seem to be particularly popular. The first line of ‘Cat’s Eye’ by Margaret Atwood is
‘Time is not a line but a dimension…’
After I read this oxymoron (a line does have a dimension), I very nearly didn’t read on.

And yet. A desire for accuracy can shade into pedantry. The narrator of ‘Cat’s Eye’ is an artist. She’s not likely to understand the finer points of general relativity, and more importantly, she doesn’t need to for the story to work. All she, and therefore the reader, needs to know is that her brother has become a physicist and is removed from the humdrumness of daily life. (This depiction of an egghead scientist seems somewhat clichéd, but that’s another matter.)

Too close a reading of the text in an effort to check its accuracy can stop the reader from appreciating the multiple interpretations that are always possible. When I first read the following lines from the poem ‘Carnal Knowledge’ by Rebecca Elson:
‘Performed the calculus
Of the imaginary i…’
I took the ‘imaginary i’ to refer to the square root of minus 1, which is depicted as i in maths and is the foundation of all so-called imaginary numbers. It took several re-readings of the poem for me to realise that this imaginary i could also be a person, a body. (I don’t know why it took me so long, the whole poem is about bodies…)
My knowledge of maths perhaps led me to assume that there was only one meaning of this phrase, and this actually prevented me from getting a wider appreciation of what the poem could offer. I might also have made this assumption because I knew that Elson herself was an astronomer and much of her writing is about astronomy, and science.

So I think there is a danger of being too proprietorial about knowledge. It shouldn’t be off-limits. If writers make mistakes which the vast majority of their readers won’t spot, then what does it matter? They have at least stretched their language to encompass new ideas.

Monday 23 March 2009

Petals and particles

Popular (and unpopular) science frequently relies on the use of metaphor in explanations. Metaphors have occasionally even been responsible for scientific discovery; in 1865 August Kekulé dreamt of a snake biting its own tail. He said this was the inspiration for his figuring out the structure of benzene.

The description of the expanding universe as a balloon being pumped up is ubiquitous in cosmology. But this ubiquity can be a problem; too often the metaphor ‘becomes’ the thing you are describing, and nothing is ever exactly the same as anything else. Any description of reality is limited in its accuracy by its reliance on words.

In quantum physics, light can either be thought of as particles or as waves, depending on how you observe it (the same is true of sub-atomic particles, i.e. they can equally well be thought of as sub-atomic waves). Thomas Young’s famous experiment at the beginning of the nineteenth century showed that light produces interference patterns when travelling through a pair of parallel slits. Interference is a property of waves. Conversely, Einstein’s early work showed that the photo-electric effect, in which light strikes a metal surface and liberates electrons, can only be explained if you treat light as a particle. So, clearly, our everyday concepts of ‘particles’ or ‘waves’, which are complementary, are inadequate to explain the true nature of light.
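
Einstein’s particle treatment can be written in a single line: each photon carries an energy set by the light’s frequency ν, and an electron escapes the metal with at most

    E_{\mathrm{max}} = h\nu - \phi

where h is Planck’s constant and φ is the energy needed to get out of the metal surface. This is why dim blue light can liberate electrons while arbitrarily bright red light cannot.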

But physicists never let mere paradoxes stop them and this drawback was elevated to ‘the complementarity principle’ by Niels Bohr. He stated that something can be both one thing and its opposite, and that it didn’t matter, because physics can only be concerned with what you observe and not with the true underlying nature of reality. ‘There is nothing outside the experiment.’ (A nice counterpoint to Derrida’s ‘there is nothing outside the text’.) Different experiments show different aspects of reality, but there is no reason to suppose that you can have an experiment which shows all aspects.

The complementarity principle is an interesting riposte to those people who accuse scientists of having one-track minds, unable to see the subtleties inherent in reality. Keats claimed that Newton ‘unweaved the rainbow’ by explaining the physics behind this phenomenon. On the contrary, Newton deepens our perception of the rainbow through his description of light being refracted by water droplets in the atmosphere.

Ezra Pound’s famous poem ‘In a Station of the Metro’ runs (in its entirety):
‘The apparition of these faces in the crowd;
Petals on a wet, black bough.’

The two images in this poem are so finely balanced that they mirror each other and it is never clear which is the metaphor and which is the reality. Dangerous for science, but prescient in its complementarity. Pound wrote this in 1913, when Bohr was developing his model of the atom.

Wednesday 11 March 2009

Which comes first, style or content?

When I write, I worry about both style and content. I want my sentences to be pleasing aesthetically, but also meaningful.

Perhaps I am displaying my scientific roots by always making aesthetics play handmaiden to the characters and the story - the actual 'facts'. And yet – there is a powerful appreciation of aesthetics running through science, as well as maths. There is always the search for ‘an elegant solution’ to the problem in hand. What is meant by elegance here? I think it’s something to do with simplicity and conciseness, and perhaps novelty.

Aesthetically pleasing science makes apparently complex phenomena simple. Why are there so many different species of finches on the Galapagos – all with different habits? Darwin said that they’ve each evolved to fit a precise environmental niche.

It can relate disparate phenomena by revealing the underlying laws. Newton showed that the movements of orbiting planets and falling apples can be explained by a single universal force called gravity. Maxwell’s equations brought together all the different observations of changing electric and magnetic fields to show that a changing magnetic field generates an electric one, and vice versa.

It leads to new areas and gives a big bang for your buck. (It’s no good having a good-looking theory if you can’t do much with it.) Einstein’s concept of light as a particle led to a whole new understanding of the structure of the atom.

It can decide between different ideas. Penzias and Wilson’s discovery that the thermal noise they detected at Bell Labs was the remnant of an early stage in the evolution of the universe (and not pigeon droppings, which was their initial assumption) ruled out the steady state model in favour of the big bang one.

It may even be visually pleasing in some way. Crick and Watson’s analysis of DNA, revealing its double helix structure, has created an image now embedded in our collective consciousness.

Behind many of these aspects of aesthetics lies simplicity. Simplicity is a powerful driver in creating science. Occam’s razor says that we should not ‘multiply entities’ unnecessarily; so if you’re fitting a mathematical model to your data, you choose the one with the fewest parameters that can still account for the data. And a new scientific theory should have as few arbitrary factors as possible. But in judging competing scientific theories, it’s not always obvious which one best obeys Occam’s razor.
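
As an illustration of what ‘fewest parameters’ means in practice, here is a minimal sketch (ordinary Python with numpy, and entirely invented data) that fits a straight line and a fifth-order polynomial to the same noisy points, then scores them with the Bayesian Information Criterion, which explicitly penalises extra parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented data: a straight line plus a little noise.
    x = np.linspace(0, 10, 30)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

    def bic(degree):
        """Fit a polynomial of the given degree and return its BIC score."""
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        n, k = x.size, degree + 1          # number of data points, free parameters
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + k * np.log(n)

    for degree in (1, 5):
        print(f"degree {degree}: BIC = {bic(degree):.1f}")

    # The lower BIC wins; with data this simple the straight line
    # (fewer 'entities') should beat the wigglier fifth-order fit.

The fifth-order fit hugs the points slightly more closely, but the criterion charges it for its extra parameters – Occam’s razor in arithmetic form.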

For example, the Copenhagen interpretation of quantum mechanics says that the wave function describing an event is a superposition of all its possible outcomes. Once a particular outcome is measured, the wave function collapses onto that outcome and the alternatives disappear. Until that happens everything related to the event is in a sort of probability ‘fuzz’ (for example, the cat in the box is both dead and alive, until you open the box and discover its fate). But what exactly is a measurement? By definition, it has to be a non-quantum event, otherwise you just get more fuzz. So the interpretation fundamentally limits what quantum mechanics can describe, by saying there always needs to be something outside its description!

The Many Worlds Interpretation avoids this in-built limitation, but only at the expense of having to start up a new universe each time an event happens. This seems to be multiplying entities to an extreme degree, and quite wastefully.

So which is the ‘simpler’ explanation of quantum reality? They clearly both have their problems, and not ones which can easily be resolved by examining their aesthetics.

Thursday 5 March 2009

The ghosts in the human genome machine

The purpose of the Human Genome Project was to map all the individual bases that make up our DNA. There are four types of base – adenine, thymine, cytosine and guanine – and particular sequences of them make up genes.

Now that the project has achieved its main goal and finished the sequencing, the next step is to identify the actual genes as distinct from the majority of the material, the so-called ‘junk DNA’. (Only a tiny proportion of the overall DNA actually consists of genes.)

Humans share about 99.9% of their DNA sequence, and so the information discovered through the HGP is relevant to all of us. We each carry the same set of roughly 20,000-25,000 genes; the differences between any two of us lie in a tiny fraction of the overall sequence. But whose DNA was actually sequenced? The project used samples from anonymous donors. Neither the scientists nor the donors know whose samples actually ended up being sequenced, but it is clear that more than one person’s were used, i.e. the information we have is an amalgamation from different donors. Because the ‘map’ created by the HGP is linear – a sequence of letters corresponding to the order of the bases – at any one point in the sequence the information we have corresponds to just one donor, but we don’t know who.
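
That ‘amalgamation’ can be pictured as a mosaic: the published sequence is stitched together region by region, each region coming whole from a single donor, though nobody knows which. The toy sketch below (plain Python, with invented sixteen-letter ‘genomes’) is only meant to illustrate that structure, not the real assembly process:

    import random

    random.seed(0)

    # Invented toy "genomes" from three anonymous donors
    # (the real ones are about three billion bases long).
    donors = {
        "donor_A": "ACGTACGTACGTACGT",
        "donor_B": "ACGTTCGTACGAACGT",
        "donor_C": "ACGTACGAACGTACGT",
    }

    region_length = 4
    reference = ""
    sources = []

    # Build the published "reference" region by region, each region taken
    # whole from one randomly chosen (and then forgotten) donor.
    for start in range(0, 16, region_length):
        chosen = random.choice(sorted(donors))
        reference += donors[chosen][start:start + region_length]
        sources.append(chosen)

    print("published reference:", reference)
    print("hidden sources     :", sources)   # in the real project this mapping is unknown

Each stretch of the ‘reference’ is a perfectly real piece of someone’s sequence; the person it belongs to, as a whole, does not exist.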

This deliberate uncertainty interests me. Scientists (whatever their discipline) spend so much time battling uncertainty, trying to quantify or eliminate it from their work. Much of this uncertainty is caused by random fluctuations or systematic biases in what they are trying to measure. Both need to be understood and accounted for, if you’re trying to make sense of your external world. And more fundamental uncertainties exist in quantum physics, which are not simply due to errors or limitations in the way that we measure things.

So it seems counter-intuitive to increase the amount of uncertainty in this major experiment. But clearly it has several purposes. Uncertainty about knowledge of the donors can protect them from the consequences of having their genome sequenced. For example, subsequent identification of genes relating to disease can’t be attributed back to a particular donor. Also, by banishing information on the particular donors, the experiment is able to interest everyone. It encourages us all to feel that the map has a direct relevance to each and every one of us. As a result, the project is allowed to gain a certain amount of authority.

But you could just as easily say that the project is relevant to no one. What does it mean to sequence a genome of a person that doesn’t actually exist?

This partial information is an excellent example of synecdoche, the figure of speech in which a part of something is used to stand in for its entirety. We don’t yet have DNA sequences for all humans, and we don’t yet know which parts of the sequence we do have are shared by all humans. So ‘The Human Genome Project’ is a misnomer.

Perhaps I shouldn’t be too harsh. In practice, it’s actually impossible to communicate without using synecdoche. Fiction writers know that they can’t get across the entirety of their fictional characters. They write about aspects of these characters, and the reader uses these a bit like dried milk, to reconstitute them and make up their own pictures.

Or you can say ‘there is nothing outside the text’, and therefore the human genome is just a sequence of bases, and we are free to give it as much or as little significance as we wish.

Tuesday 24 February 2009

Free will with every packet of cornflakes

Yesterday I spent all day writing. What I produced was rubbish, all 2000 words of it. The words were flat and lifeless with no energy at all. In contrast, last week I was working on a short story and the words zinged across the screen. I didn’t know where those words came from, I hadn’t planned them in advance. They just appeared.

Every fiction writer knows the excitement of setting up a situation and then being surprised at what happens next. Your characters can behave in ways that surprise you. Are they exhibiting free will? Even when you consciously plot out your characters’ stories, you have to be sensitive to what feels right and what doesn’t. If you force your characters to do something for the sake of the overall story, they may turn into puppets. They lose their ‘divine spark’ and the story can feel over-engineered. (Interestingly, this may only be true in literature. In Kleist’s famous essay on marionettes, he pointed out that they can be more graceful and more likely to achieve ‘perfection’ in their movements than humans, precisely because they lack humans’ self-consciousness, which, according to him, inhibits a complete understanding of the universe.)

Nowadays, free will is elusive. It suffered a fatal blow in the seventeenth century from Newton’s discovery of universal physical laws. The corresponding vision of a wholly predictable and mechanistic universe, which only needs to be set going before it simply carries on for ever, doesn’t seem to need free will.

A universe that allows time travel (ours doesn’t appear to rule it out) must also impinge upon free will in some way, if only to prevent logical paradoxes (think of ‘Back to the Future’).

Is there any space for free will in quantum physics? After all, it (re)introduces uncertainty into physical systems, albeit on atomic scales. But this uncertainty (in the ability to know simultaneously an object’s position and momentum) is random. And when we exhibit free will, we don’t think we behave randomly.

Geneticists don’t have much time for free will, either. We’re creatures ruled by genes and all our behaviour can be explained by those genes’ desire to replicate themselves. Even apparent altruism to other people only seems to exist because we share practically all of our genetic material (99.9%).

So perhaps we enjoy reading about characters in novels who appear to exercise choice because that choice is a mirage in our real lives. Do our fictional characters have more free will than we do?

Thursday 12 February 2009

In thee (author) I trust?

I’m currently reading a historical novel, Quicksilver by Neal Stephenson, which tells the story of the machinations of the Royal Society in the late seventeenth century and the row between Newton and Leibniz over who first invented the calculus. The book is a mixture of real and imaginary characters and it’s narrated by the latter. I think it’s these imaginary characters that move this book firmly into the realm of fiction; without them the reader would be more likely to wonder whether what we are being told about Newton, Hooke, Leibniz, Charles II et al. is ‘real’. The addition of the fictional makes us feel secure that to a large extent all the interactions, even those between the real characters, are made up. (To that extent they’re not real characters at all.)

The relationship between the reader and the author needs trust on several levels, overtly in the case of scientific narratives, less so for fiction. The most general trust requirement that the reader has is that what they are about to read is interesting and worth spending their time (and perhaps money) on. In fiction, the reader needs to trust the author to invent a coherent universe, one that operates according to some specific laws (even if the author is the only one who knows those laws). If something happens in the story that seems to violate its fabric, then the reader stops trusting or caring. (See John Gardner’s excellent book ‘The Art of Fiction’ for a more detailed discussion of this point.)

Of course, you don’t need to trust the narrator who may be unreliable and whose vision of events may be completely skewed. For example, Pip in ‘Great Expectations’ assumes that his mysterious benefactor is Miss Havisham and initially there is no reason to doubt this. The interest in the novel partly lies in exploring how this error affects Pip’s development. Barbara in ‘Notes on a Scandal’ tells us she only has Sheba’s best interests at heart. By the end of the novel, this is patently untrue.

Part of the fun in reading novels with unreliable narrators is working out the reasons for this unreliability. And even here, I think there needs to be some initial trust in the narrator. Someone who is clearly lying from the very first page is going to have to be incredibly engaging to keep the reader involved, because it will be so much harder work just to figure out how this universe works.

Novels with unreliable narrators tend to be written in the first person, so are all first person narrators inherently unreliable due to their necessarily restricted and partial outlook?

Trust in scientific papers operates at various levels, some of them more explicit than others. Most obviously, the reader has to trust the authors’ integrity and believe that what is reported in the work actually happened. If this is violated it can have a huge, far-reaching impact on the science. For example, one infamous case of scientific fraud is that of Paul Kammerer and the midwife toads in the twenties. Kammerer said he’d shown that the Lamarckian version of evolution was correct, in that an organism’s environment can directly affect how it develops and the attributes that it passes on to its offspring (in violation of Darwinian natural selection, which holds that genes mutate randomly and those organisms which are better adapted to their environment – by chance – are more likely to survive and mate and produce offspring, thereby ensuring that their genes are passed on).

Male toads, from toad species which mate in water, have nuptial pads on their feet to enable them to hold on to their females. Toad species which mate on land don’t need, and therefore don’t have, these pads. Kammerer claimed that by forcing the landlubbers to mate in water, he’d got them to develop nuptial pads in only a couple of generations. This caused a furore, until it was shown that the nuptial pads on one of his specimens had been faked with injections of ink. Kammerer committed suicide shortly afterwards.

This discredited Lamarckian ideas. And yet, how can a fake result be used to disprove a theory? The fact that Kammerer’s evidence was faked leaves Lamarckism unsupported, but it doesn’t show that the alternative to Darwinism is wrong. Yet this is what happened – bad science has been used to discredit Lamarckism. (Alongside good science which favours Darwin, of course.)

Fraud is (presumably) rarer than inadvertent mistakes. Science is difficult to do; mistakes happen. So how does the reader know whether to trust that the author hasn’t messed up?
Popper argued that science should be falsifiable, i.e. you should be able to refute a theory if you get an experimental result that disagrees with it. In practice, because of the possibility of mistakes, there is more caution than this implies, and scientists are unlikely to chuck away an entire framework on the basis of a single result, simply because that result could be wrong.

More insidiously, scientists can be skewed towards a specific reading of their results based on their prejudices. So, trust is required to assume that the author has been open-minded and considered all options.

Of course another factor that influences the reading of a scientific paper is the authors’ reputations. If you know the authors, and accept their previous work, you are more likely to believe their current work. Is this true in fiction? Are you more likely to read a book based on the author’s reputation? Can the book ever stand alone?

You also have to trust what you are not being told. In fiction, much of the art lies not in writing but in editing, the cutting out of extraneous words to leave only the essentials. You have to trust that what has been left out is inessential. In science this is more problematic. Many experiments go unreported, because they don’t give interesting results. This is a particular problem in medicine with the testing of new drugs. If the results are inconclusive or unfavourable, they are more likely to go unreported than if they appear to support the hypothesis that the drug ‘works’.

Tuesday 3 February 2009

From A to B

Writing fiction presents the writer with a near-infinite list of choices, not only on the subject matter, but also on the style and the process. For example, most stories are written in the past tense, but not all. Sometimes, I find that when I write in the present tense, my writing becomes more immediate and more fluid. I can ‘see’ my characters better as I can understand what they are doing right now. I don’t have to interpret their past actions for my readers.

However, this sort of writing can go badly wrong, and you can tell when it does because the characters in the story get stuck in a sort of realisation of Zeno’s paradox, in which an infinite number of steps are needed to get anywhere. The characters stop being able to do anything, because every single action occupies all of the present. When you’re concentrating on the present and not the future or the past, the present can seem like eternity. The writer chops the present down into finer and finer slices until time stops completely. This is probably a good thing if you’re meditating. It’s not a good thing if you want to write interesting fiction.
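
Mathematically, of course, Zeno’s infinitely many steps add up to a finite journey, because the geometric series converges:

    \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; 1

Prose has no such limit theorem: a narrative that keeps halving its steps really does grind to a halt.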

Of course, the antidote is to know which of the present moments are the important ones and leap from one to the next, leaving out all the others. A story never really flows continuously and smoothly from one moment to the next, and consequently the space-time in the story becomes rather lumpy and quantised. If the writer is good, the reader doesn’t notice these invisible joins.
The use of the past tense in literature is a method of telling the reader that the narrator has seen these events happen in the past and is relaying them to the reader. Using the present tense does away with this artifice, and replaces it with another. The reader is now ‘watching’ the events unfold before his or her eyes; this is the literary equivalent of watching a film. Was this method of writing influenced by the rise of film in the early twentieth century?

In scientific narratives (i.e. papers published in scientific journals), there is also a subtle editing of time and space. In this sort of writing, you never read about the nights spent at the telescope waiting for the cloud to go away, or the cabin fever brought on by spending two weeks alone at 10,000 feet observing the same star night after night. Nor do you usually read about the things that go wrong: the wrong star observed, the wrong chemicals mixed, the endless debugging of the computer program. These things are not supposed to be relevant to the experiment.

The actual work reported in scientific papers is summarised in the past tense, although the results are reported in the present. So the work occupies a specific moment in time, but its outcome should stand for all eternity, even after the invisible narrator is long gone. And the passive voice is almost always used, to suggest that the presence of the author had no effect on the results. In fact, the scientist is the ghost in the machine. Events unfold with a certain inevitability: the stars were observed, the gases were mixed, the temperature was taken, the theory was developed. But who did all this?

Tuesday 27 January 2009

Invention versus discovery

Fiction invents, science discovers – right?
But fiction can make you see the real world differently, by examining how humans react to it. For example, Hamlet is torn between two impossible choices and cannot act because he sees the disadvantages of both. He analyses and rejects the modes of behaviour that his peers would have chosen (as articulated in other Elizabethan revenge tragedies). As a result, we cannot look at how humans think of themselves and their place in the world without thinking of Hamlet. The play extended human self-consciousness by showing the limitations of action and the consequent inevitability of self-analysis.

More practically, a fictional depiction of real-life social issues can profoundly affect the way we react to those issues. When the film ‘Cathy Come Home’ was first shown in the Sixties, it exposed the way homeless people were treated by the authorities and started a national debate. The charity Shelter, launched days after the broadcast, was propelled to prominence as a result, and homeless people now have legal rights to housing.
Fictional depictions are sometimes the only way that we can ‘see’ big problems. Poverty in India can seem vast and overwhelming, until the novels of Rohinton Mistry bring it within our understanding, and our empathy. Similarly, ‘Half of a Yellow Sun’ by Chimamanda Ngozi Adichie probably works as a better introduction to the Biafran war than many factual texts, because it uses interesting and sympathetic characters (including a child soldier who is forced to take part in a mass rape) through whom the reader can see the effects of that war.
A recent academic paper (http://www.bwpi.manchester.ac.uk/resources/Working-Papers/bwpi-wp-2008.pdf) examines the use of fact and fiction in understanding development issues and argues that fiction is a credible way of spreading knowledge. It can reach a wider audience than more factual approaches, and it can provide a richer picture of how people react to their surroundings, because it is at liberty to investigate the inner lives of those people in a way that factual narratives simply cannot.

Conversely, discovery is not just a passive intake of external data. It also requires a framework to put the data into, in order to make sense of it. Constructing this framework necessarily involves invention. When you carry out an experiment and measure some variables, you have to have an idea of what those variables are; you have to conceptualise them. For example, if you decide to measure the temperature of a box of gas, you have to have an idea of what temperature actually is, and how it might be related to the gas’s energy. How is the gas’s confinement to a box going to affect the result? What if you pump in more gas and increase its pressure? Or heat it up? Which of the variables are important and will affect the results? And which can be ignored? Will the colour or shape of the box holding the gas affect the results?
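To make that concrete, here is the standard textbook framework (nothing drawn from this blog itself) that already has to be in your head before the thermometer goes anywhere near the box:

\[
pV = nRT, \qquad \langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2} k_B T \ \text{(per molecule)},
\]

where p is the pressure, V the volume, n the amount of gas and T the temperature; the second relation ties temperature to the average kinetic energy of the molecules. Notice that neither relation contains a term for the colour or the shape of the box – the model has already decided which variables matter.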
Straightaway the process of doing experiments becomes recursive; you carry out an experiment to get a better understanding of the external world, but you already need some sort of narrative of it in order to do the experiment in the first place.
Sometimes this narrative is drawn from places outside science. The importance of Galileo’s observations of four moons orbiting Jupiter stems partly from the religious dogma which stated that everything orbited the Earth.
Modern science likes to think it makes its models exclusively from within science and that nothing else taints them. But models are made from language, and this can give the lie to that belief in purity. For example, in the fifties Francis Crick came up with the ‘central dogma’ of molecular biology to express the idea that the flow of information from DNA to RNA and then on to proteins is one-way. (This excludes Lamarckian models of evolution, in which DNA can be changed by changes to the proteins during the lifetime of the host organism.) Crick confessed afterwards that he hadn’t really known what ‘dogma’ meant, but he wanted a word that stressed the importance of the idea. There’s no doubt that it is important, but calling it a dogma implies that any criticism of it is heretical. Maybe Crick was being tongue-in-cheek when he chose the word, but it’s hardly an incitement to open debate.

Monday 19 January 2009

Time takes a cigarette...

The concept of time in science is problematic. It’s everywhere, yet elusive. The one-way flow of time from the ‘past’ (i.e. what we remember) to the ‘future’ (what we cannot yet recall) is the ghost in the machinery of physics. Newton’s laws of motion, quantum mechanics, general relativity: all these theories are symmetrical with respect to time. They don’t invoke or explain its passage. As is well known, the only law of physics that directly invokes a direction for time is the second law of thermodynamics, which says that isolated systems tend from an ordered state (low entropy) to a disordered state (high entropy). One way of explaining time’s flow from past to future is to say that the universe started off in a state of very low entropy, which has been increasing ever since.
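In symbols, and only as the standard textbook statement rather than anything particular to this post, Boltzmann’s entropy and the second law read

\[
S = k_B \ln W, \qquad \Delta S \ge 0 \ \text{(for an isolated system)},
\]

where W counts the microscopic arrangements consistent with what we can observe. The whole asymmetry of time is carried by that single inequality: entropy is overwhelmingly likely to grow, and effectively never to shrink.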
So, does biology fare any better? DNA can work as a natural ‘clock’ in the way that it accumulates mutations: we can compare DNA from different organisms and estimate when those organisms diverged. I suppose this is similar to using the decay of carbon-14 to date organic matter.
This gives the study of DNA a natural rhythm, perhaps.
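As a very rough sketch of the arithmetic behind that clock (the function and the numbers below are my own, purely for illustration; real molecular-clock estimates need careful calibration against fossils or known events):

def divergence_time(substitutions_per_site, rate_per_site_per_year):
    # If two lineages each accumulate substitutions at a roughly constant
    # rate, the time since they split is the observed difference divided
    # by twice that rate (both lineages have been 'ticking' independently).
    return substitutions_per_site / (2.0 * rate_per_site_per_year)

# Hypothetical figures: a 2% sequence difference and a rate of one
# substitution per site per billion years
print(divergence_time(0.02, 1e-9))  # about ten million years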

Time flows naturally in fiction. I can’t think of any novel that is set in a single ‘present’ moment. There are novels set in the present, but even that present moves on. Perhaps, by their very length, novels preclude the examination of a single moment in a way that short poems or short stories do not. But novels (or epic poetry such as the Iliad) can play with time, make it double back on itself, even run backwards (e.g. Time’s Arrow by Martin Amis), and examine moments in time over and over again from different perspectives. Plays can do this too; Michael Frayn’s play ‘Copenhagen’ consists of conversations between Bohr, his wife Margrethe, and Heisenberg[1]. Within each conversation time flows ordinarily, but the whole play is set in the afterlife and all three characters are dead. As such they are unable to move or develop; they are curiously static in time, and the play itself does not move forward. It succeeds, brilliantly, because it takes a single point in time (the real-life conversation between Bohr and Heisenberg) and unfolds it to expose all the complexities that exist in that moment, like the hidden extra dimensions suggested by string theory.
But what do we mean by the ‘present’? The moment we exist in is always hemmed in by the past and the future. We walk a knife-edge between what we remember and what we anticipate. Is it just a trick of consciousness that we think we ‘are’ at any given point in time?
A novel, poem or short story is a system of words that relies on the relationships between them to function. And yet it is always read sequentially. We are taught to read by moving from one word to the next in an ordered way. We can perhaps take in a short sentence at a glance, but no more than that. So our ‘present’ corresponds to a short block of text, and poets can explicitly take advantage of this in the way they present their work on the page. Is a line of poetry equivalent to a present moment? And how does this relate to breathing? And music? When we listen to music we hear a phrase, the aural equivalent of a line of poetry. The phrase is there in our head, but it is continuously superseded. If our perception of the present were any different, e.g. if it were shorter than it is, would poetry or music ‘work’ in the same way?
The sequential nature of literature and music has a nice analogy in the structure of DNA, with its rows of neatly ordered bases. And yet, like literature or music, there is something in the ‘wholeness’ of DNA that is not present in its parts. Different genes may be responsible for different functions in the parent organism, but they affect each other; they don’t work independently. A novel is like that too. Whilst we can only retain in our short-term memory a small proportion of what we are reading when we read a novel, we also retain an overall structure. The sentences we read accumulate in our memory to create an impression of the overall work.
[1] I’ve referred to this play before; see my third blog post. It’s being performed at the Royal Lyceum in Edinburgh this April.

Monday 12 January 2009

Where do you stay?

I’ve already mentioned in a previous posting the complexities around the links between genes and Jewishness. Sharing an ethnicity does not mean sharing an exclusive gene pool; there can be more genetic variation within an ethnic group than between groups. And the link between genes and nationalities is even more complex, although perhaps less so in a small country like Scotland, which has had relatively little immigration. Perhaps nationalities are (becoming) more memetic than genetic – you can learn to be a different nationality by subconsciously, or consciously, adopting the mores of the people around you.

For example, when I was a child we drank coffee. We hardly ever drank tea. Although my parents were both born in England, one grew up in America and the other was the child of German and Austrian immigrants, and neither had a history of drinking tea. Even as a child I drank coffee for breakfast. Now (perhaps as a result of living in Yorkshire for several years and having a partner from there) I drink tea all the time, and so does my father. Has he learnt it from me?

But sometimes you can get stuck in the middle – for example, I never know what to reply when someone asks me where I ‘stay’. Scottish people use the word as a synonym for ‘live’ or ‘reside’. If I reply ‘I stay at…’ it feels artificial, as if I’m appropriating someone else’s language. But if I reply ‘I live at…’ it implies I’ve thought about using the word ‘stay’, have rejected it as being too Scottish for me, and have retreated to ‘standard’ English. Either alternative makes me feel self-conscious.

Perhaps I should make up my own word.

I’ve learnt that genes are influenced by their environment, too. They get turned on, or ‘expressed’, by the influences surrounding them. This environment stretches all the way from the neighbouring genes, through the rest of the cell and the host organism, out to the wider environment. So genes are not little bullet entities that whistle unperturbed through their host organisms, only randomly mutating. (This concept of ‘random mutations’ sounded so dangerous when I first read about it, and I couldn’t figure out why until I realised that it reminded me of radioactivity.)

This adaptation to your surroundings can be reflected in laws on nationality. Different countries have different attitudes to, and laws on, how nationality is conferred. Some use ‘blood’ (e.g. Germany before 2000), through which you can pass your nationality on to your offspring even when they’re born outwith the country; others use ‘territory’ (e.g. the USA), through which your offspring automatically acquire that nationality by virtue of being born there. Those that use territory – do they place more emphasis on learning behaviours and attitudes, such as the desire to aspire to ‘American’ ideals? Because there are no genetic links to rely on to hold society together?

Notice that I used the Scots word ‘outwith’ in the previous paragraph. Perhaps I’m adapting my language after all.