Thursday 24 February 2011

On Inductivism and Falsificationism: Why do scientists value evidence?

Why do scientists value evidence? The question may appear absurd, and the answer blindingly obvious: because evidence demonstrates the truth of theories. But is this really true?

I'm currently reading James Ladyman's textbook, 'Understanding Philosophy of Science', which outlines a number of debates on the nature of evidence and the practice of science in general. In this post I'll relate some highlights gleaned from the chapters on 'inductivism' and 'falsificationism' (friendly definitions to follow, I promise).

A popular conception of science sees scientists making a large number of observations regarding a particular phenomenon and then proceeding to make generalisations that account for every instance. For example, if time and time again it is observed that metals of every type expand upon heating, it would seem sensible to conclude that the statement 'all metals expand when heated' is true. This extraction of generalisations from evidence is called 'induction'. 'Inductivism' is the view that science may be defined by this method.

There is, however, a long-standing debate within the philosophy of science about the validity of induction and inductivism. The various objections to evidence-based generalisations arise from the following, comparatively simple argument: No amount of positive evidence can ensure against the eventuality that a negative instance may yet be found. For instance, it is conceivable that a metal may yet be discovered that does not expand upon heating. Extending the argument into the realm of metaphysics, the eighteenth-century Scottish philosopher David Hume goes so far as to suggest that all scientific generalisations are based upon the assumption that the future will resemble the past, and that we have no rational reason to believe that it will. So even if we had subjected all metal in the universe to heating and found that every sample conformed to our expectations of expansion, we would still - according to Hume - have no reason to extract any kind of general law. Perhaps the next time we hold a lump of copper over a Bunsen burner it might shrink. This certainly seems very counter-intuitive - but drawing attention to the nature of our intuitions is Hume's intention. We only believe that the future will resemble the past because it has always done so in the past. Though this certainly seems a very good assumption, Hume's point is that we must admit that it is an assumption, one that cannot conceivably be justified by evidence: any appeal to past experience in its defence would simply assume the very regularity it is supposed to establish.

What such objections to induction have in common is the challenge they pose to science. If it is true that a theory may at any time be disproved by a negative instance (whether this involves the regularity of the universe unravelling or simply new evidence coming to light), then the ability of scientists to pronounce with certainty upon any subject appears fatally undermined.

Perhaps the most successful rebuttal to this argument is Karl Popper's 'falsificationism'. Popper, a twentieth-century Austrian-born British philosopher, sought to undermine, rather than solve, the problem of induction by suggesting that science is never about proving theories to be true; quite the reverse. Science, according to Popper, should busy itself with the falsification of theories; with the whittling down of the available possibilities. Truth is only ever to be approached and never claimed absolutely. The best theories are those battle-hardened formulations that have survived whatever tests have thus far been devised. They are not to be considered correct; merely the least-wrong.
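For readers who like to see an argument mechanised, here is a tiny sketch of that asymmetry in Python (the function name and the 'observations' are my own invention, purely for illustration): a single counterexample is enough to refute a universal claim, while any number of confirmations leaves it merely unrefuted.

# A toy illustration of Popper's asymmetry between confirmation and refutation.
def is_falsified(claim, observations):
    """Return True if any recorded observation contradicts the claim."""
    return any(not claim(obs) for obs in observations)

# Hypothetical data: every metal tested so far expanded when heated.
expands_when_heated = lambda obs: obs["expanded"]
observations = [
    {"metal": "copper", "expanded": True},
    {"metal": "iron", "expanded": True},
    {"metal": "zinc", "expanded": True},
]

# The claim survives these tests, but surviving is not the same as being proven:
# the very next, untested metal could still falsify it.
print(is_falsified(expands_when_heated, observations))  # False

One refuting observation would flip the answer to True for good; no quantity of True entries in the list can ever settle the matter the other way.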

An important consequence of this argument is that scientific theories must be potentially falsifiable. If one suggests a theory that could not be proven wrong under any circumstances, then no debate can take place and truth is reduced to a matter of assertion. Popper was especially scathing towards Marxism and psychoanalysis on these grounds. If, for instance, a government made some efforts to look after its nation's poor, then a die-hard Marxist could simply explain away this apparent contradiction of their favoured theory as an attempt by the ruling elite to thwart the oncoming proletarian revolution. Likewise, those expressing criticisms of psychoanalysis may be dismissed by its practitioners as suffering from deep-rooted repression.

Critics of falsificationism have pointed out that scientists do, in some cases, appear to believe things for positive reasons. Many successful theories posit the existence of things that cannot be directly observed: atoms, black holes, DNA and so forth. According to Popper, such entities are merely conceptual devices employed by the least-wrong theories to make predictions. A true adherent of falsificationism cannot assert the literal truth of their existence - and yet many scientists do. Speaking personally, I think this is splitting hairs: it is possible that in conversations, especially with journalists, scientists may simply use the word 'exist' as shorthand for 'is a reasonable inference from our least-wrong theory'.

Another criticism of falsificationism is that some scientific principles are not falsifiable: their apparent violation would send scientists seeking any explanation other than the refutation of the theory. One such example is the principle of the conservation of energy, which states that energy may take different forms but is never created or destroyed. If a system is observed to be creating energy from nothing, scientists would sooner question the accuracy of their observations, or postulate the existence of some non-observable energy source interfering with their measurement devices. (A famous vindication of this kind of move is Wolfgang Pauli's 1930 postulation of the neutrino, a then-undetectable particle invoked to preserve the conservation of energy in beta decay; it was finally detected in 1956.) In such instances, scientists do seem to value the sheer weight of positive confirmations of a theory over one negative instance. The danger here is that such scientists are indistinguishable from self-deluding Marxists. I would argue, however, that it is certainly sensible to make sure that all options have been considered before abandoning a long-standing principle. One may remain open-minded to the prospect of its refutation at the same time as exploring the possibilities for its continued relevance.

Thursday 25 November 2010

On Imaginary Numbers: How 'unreal' concepts may help us understand 'real' phenomena

Something of a place-holder entry this week, I'm afraid. With more questions than answers.

First question: What are imaginary numbers? There's a fascinating introduction to the subject available as part of the BBC's In Our Time archive, but I'll précis the basics for you now:

An imaginary number is one that gives a negative result when multiplied by itself. The square root of minus one - also known as i - is an example. The most astonishing thing about imaginary numbers (though perhaps their name ought to have given us fair warning) is that they don't 'exist' in the real world. One cannot count or measure with them. And yet - when embedded in equations - they have proven extraordinarily helpful in providing verifiably accurate solutions to real-world problems. Imaginary numbers are crucial conceptual tools in contemporary scientific models of electromagnetism, fluid dynamics and quantum mechanics, for example.
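To see how this plays out in practice, here is a tiny demonstration using Python's built-in complex arithmetic (the snippet is my own, purely illustrative): the imaginary unit enters the calculation, and a perfectly real answer comes out the other side.

# Python's built-in complex numbers: the imaginary unit is written 1j.
import cmath

i = complex(0, 1)                # the square root of minus one
print(i * i)                     # (-1+0j): i multiplied by itself is minus one

# Euler's identity, e^(i*pi) = -1: an 'unreal' exponent passes through the
# calculation, yet the result is (up to a rounding error) a real number.
print(cmath.exp(i * cmath.pi))   # approximately (-1+0j)

That last line is a toy version of what physicists and engineers do constantly: route a problem through the complex plane because the sums are easier there, and read off a real-valued answer at the end.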

How can this possibly be? How can something that doesn't exist describe something that does?

A helpful analogy - though not really a solution - can be found in negative numbers. After all, negatives don't really 'exist' either, and yet that doesn't prevent their use in equations that come out with positive results. Imagine a healthy balance sheet. So long as your income (modelled by 'real' positive numbers) outweighs your debts (modelled by 'conceptual' negative numbers), your bottom line will be a 'real' number insofar as you could convert it into tangible purchases if you so wished. It doesn't matter that you've used unreal negative numbers to get there. The only difference between negative numbers and imaginary numbers, then, is that the former may be attached to an intuitively graspable concept: debt.
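In the same spirit as the snippet above, the balance sheet might look like this (figures invented for the sake of illustration):

# A hypothetical balance sheet: the debt is 'unreal', the bottom line is not.
income = 1500          # modelled by ordinary positive numbers
debts = -600           # modelled by 'conceptual' negative numbers
print(income + debts)  # 900: a perfectly tangible, spendable result

The negative number does its work in the middle of the calculation and leaves no trace in the final, cash-in-hand answer - just as i does in the example further up.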

Perhaps a more comprehensive explanation could be found by going one stage further and admitting that even positive numbers are, in a sense, unreal. Mathematical concepts are just like nouns in any spoken language: they divide a continuous universe up into discrete chunks that may be talked about. That some mathematical concepts (such as positive numbers) 'make more sense' to us as human beings is interesting, but this should have no bearing upon whether they (or any other concept) should be considered 'real'. The question is whether they are useful - by which I mean whether they aid our ability to comprehend and/or predict observable phenomena. All concepts are representational, their definitions man-made.

I apologise if I'm bordering on incomprehensibility here. This idea that scientific concepts ought to be judged on their usefulness rather than their essence is one that has been intriguing me for a while now. I'm going to try and write about it with more clarity (and perhaps some nice pictures) before too long.

Tuesday 2 November 2010

On the Two Big Words: What is the difference between science and philosophy?

There is a common misconception - perpetuated, in part, by many professional philosophers - that science and philosophy ought to be considered entirely separate entities. Indeed, whenever I try to talk to new people about my interest in the philosophy of science a typical reaction runs thus: Aren't science and philosophy completely different things? What does it even mean to combine them? Seeing as this is my first blog post proper, it would seem sensible to answer that question - and, in doing so, come to some reliable definitions of these Two Big Words.


Let me set out my stall up front. Philosophy, as far as I'm concerned, is simply the art of thought. It is the posing and careful working through of problems, most notably (but not exclusively) those perennial questions: Who are we? Where do we come from? What is this baffling universe in which we find ourselves? Throughout history there have, of course, been innumerable attempts at answering those questions. We might classify these as religions or sciences or philosophies - but all are philosophical to the extent that they devote themselves to the furtherance of understanding and to the investigation of that which we find mysterious. Indeed, if one looks at academic philosophy today, one quickly realises that 'philosophy' itself is simply an umbrella term under which a profusion of sub-disciplines take shelter; there are philosophies of gender, of language, of metaphysics, of logic, of ethics, of politics, and - of course - of science.


One should not be surprised to find science classified as a branch of philosophy. Historically speaking, it was always thus. As the great twentieth-century British philosopher Bertrand Russell wrote in The Problems of Philosophy, individual sciences are simply those areas of philosophy that have achieved some measure of empirical success and methodological consistency:

It is true that ... as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science. The whole study of the heavens, which now belongs to astronomy, was once included in philosophy; Newton's great work was called 'the mathematical principles of natural philosophy'. Similarly, the study of the human mind, which was a part of philosophy, has now been separated from philosophy and has become the science of psychology.

With the relationship of the disciplines thus established, it ought to be clear that there can be no fundamental dispute between science and philosophy per se - only competition between philosophical approaches, of which science is one.


What are these conflicting approaches, then? In order to answer that question, one first has to understand the Analytic-Continental divide in twentieth-century philosophy. The majority of philosophers opposed to science stem from the Continental tradition. That opposition is, I suspect, largely territorial. Unhappy to cede to scientists the business of discovering reality's secrets, Continental philosophers still believe that language alone (as opposed to material evidence) may allow them access to essential truths about the universe. Among the 'alternative ways of knowing' favoured by Continental philosophers is the practice of 'phenomenology'. First developed at the turn of the twentieth century by the German philosopher Edmund Husserl, phenomenology begins by posing the question common to Western thinkers since Descartes (and inquisitive children since time immemorial), namely: How can we be sure that the world exists independently of our perception of it? Husserl concludes that we cannot, and so rather than attempting to map reality via a close inspection of it (à la science), phenomenology would rather chart the experience of particular phenomena. The purest knowledge of 'jealousy', 'clouds' or 'music', say, could be approached by varying in one's mind the possible forms of each phenomenon until one arrived at its unchangeable properties or immutable essence.


Analytic philosophers, by contrast, reject such eccentricities on the grounds that language and thought are merely representational and cannot be used to 'access' pre-existing truths. Instead, their work is characterised by the refinement of language; by using it in the most precise and logical way possible to express ideas. Rather than competing with science, they have preserved their intellectual niche through collaboration. Scientific theories must do more than be consistent with the evidence - they must be consistent with themselves. Theories may always be compromised by a poor use of language: overlapping definitions, tautologies, circular arguments and so on and so forth. It is through the policing of these pitfalls and the refinement of scientific concepts that a 'philosopher of science' may aid the progress of science itself. Indeed, as Russell might well have predicted, some of the most successful philosophers of science are now being hailed as 'theoretical scientists'. The following quotation is the Harvard biologist E. O. Wilson's take on the matter:

I see philosophy itself as being in a twilight, with the philosophers themselves metamorphosing in their activities and joining disciplines other than what used to be called classical philosophy. When you look at the work of the most active philosophers today, you find that they divide roughly into three classes. Some philosophers - Daniel Dennett and Patricia and Paul Churchland, for example - are theoretical neuroscientists. I don't believe they would be offended by that title. That's what they have become ... A second category comprises the intellectual historians. A great many of the people who call themselves philosophers are actually intellectual historians - and they are very good at it. The third class comprises what you might call critics or public philosophers, which includes ethicists. They take what we know from science and case histories and attempt to arrive at wise judgments about public policy and social behaviour ... Philosophy's principal occupation has always been to wonder about what we don't know and to frame the discourse of inquiry. It's true, of course, that there is a vast amount we don't know, but it's becoming increasingly apparent that the best way to learn about the unknown is by the methods of the natural sciences. So, not surprisingly, some of the more creative minds in philosophy are gravitating toward science itself as the principal mode of intellectual activity.

So, to recap: Philosophy is best understood as the posing of questions and the grappling with the unknown. Science is best understood as a branch of philosophy. The question facing advocates of science is not, therefore, Why is science better than philosophy?, but Why is science better than other philosophical methods? Continental philosophy and phenomenology, for instance, are rejected by Analytic philosophers on the grounds that intuition and armchair speculation alone ought not to be considered the basis for any kind of knowledge. Happy to make the single assumption that the world does have a physical existence beyond perception, they favour instead empirical evidence and logical consistency as better guarantees of truth. To the extent that Analytic philosophy and the philosophy of science may take part in the pursuit of logical consistency within science, they too can be considered scientific endeavours. Philosophy's loss may be science's gain - though this may, paradoxically, represent philosophy's ultimate success.

Wednesday 20 October 2010

On science, world-views, and absurd hubris...

Introductions first. Hello, my name is Joe. That's me on the right there. I live in Cornwall. I'm an English graduate. I play in a ukulele trio. I ride a bicycle. I like poached eggs, woolly jumpers, Kubrick movies and the music of Radiohead. But mainly I like making sense of things. That's why I'm setting up this blog.

At least that's part of the reason. In about a year's time (October 2011) I hope to be starting a Master's degree in the Philosophy of Science. Now, I've never studied either philosophy or science to degree level before, so I've got some catching up to do. But I really believe that the best way to understand something is to attempt to explain it to others, so that's what I'm going to be doing here. I'm going to read as much science (and the philosophy thereof) as possible and - if I'm successful - you're going to understand it.

If that sounds off-putting, don't be off-put! I won't be pasting up reams of formulae, or documenting the particular cellular chemistry of the East Asian Mangrove crab. I'm not bright enough to be a specialist and, besides, I don't want to peer so closely at the brushwork of science's cosmological canvas that I fail to see the painting. I want to develop an appreciation of the whole. I want to know what science looks like in its totality and I want to know what it
means.

I want a world-view, if I'm honest. I'm aware that such absurd hubris has its own dangers, however. It may well be that a total appreciation of science is necessarily a shallow and dilettantish one. I hope not. My aim shall certainly be to strike a balance between close engagement and appreciative distance.

My title, incidentally, is taken from the work of the seventeenth-century English philosopher and clergyman Joseph Glanvill:

'Adam needed no Spectacles. The acuteness of his natural Opticks ... shewed him much of the Coelestial magnificence and bravery without a Galileo's tube ... It may be he saw the motion of the bloud and spirits through the transparent skin, as we do the workings of those industrious Animals through a hive of glasse.'

For Glanvill, the advance of science - in particular Galileo's use of the telescope - was far from blasphemous. Indeed, he thought it ought to have been considered spiritually enriching in so far as it afforded Mankind a recovery of his pre-Fall appreciation of the natural world. I'm not a religious person, but the sentiment - that science should help us foster a broader world-view - seems apt.