Tag Archives: biology

The Virtual Identity of Life

Budimir Zdravkovic
City College of New York


Introduction

Information can be defined as any sequence or arrangement of things that conveys a message. Here I would like to focus on the information coded by biological organisms and how that information is related to their identity. The essence of living things has been difficult to define conceptually: living things have certain properties which are unique to them and absent in inanimate matter. Defining life has been an ongoing problem for scientists and philosophers, but what is more puzzling is that living organisms do not appear to be defined by the conventional rules of identity. To illustrate what is meant by conventional rules, consider the Ship of Theseus paradox, which begins with an old boat made of old parts. As the boat is renovated and its old parts are replaced with new ones, it gradually begins to lose its identity. When all the parts of the ship have eventually been replaced, can we still say this renovated ship is the Ship of Theseus? And if so, what if we reassembled the old ship from the old parts? Would Theseus now possess two ships? In this paradox it is clear that the problem of identifying the ship stems from defining it in terms of its old and/or new components. The conflict of identity arises because old components are replaced with new ones, confusing our common-sense notions of continuity.


The Oxford Philosopher Speaks to… Stephen Boulter

Stephen Boulter is a Senior Lecturer and Field Chair of Philosophy and Ethics at Oxford Brookes University. Having completed his PhD at the University of Glasgow, he is now both a published author (see Metaphysics from a Biological Point of View and The Rediscovery of Common Sense Philosophy) and a respected member of Oxford’s philosophical milieu. Boulter has also been contracted to the Scottish Consultative Council on the Curriculum (SCCC) as a Development Officer and National Trainer of Scotland’s philosophy A-level. His research interests include the philosophy of language, the philosophy of evolutionary biology, perception, metaphysics, virtue ethics, Aristotle, and medieval philosophy. We at The Oxford Philosopher interrupted these interests for a moment to ask Boulter a few questions about his own experience of philosophy as an academic discipline.

What was the first piece of philosophical literature you read from beginning to end, and have you revisited it since?

My first piece of philosophical literature read from beginning to end was Descartes’ Meditations on First Philosophy. It was part of a course that included Spinoza, Leibniz, Locke, Berkeley and Hume. I’ve reread the work many times since. Part of my current research focuses on the continuities between scholasticism and early modern philosophy – the theme of the so-called ‘long middle ages’ – so there is a sense in which I’ve never stopped reading it.


Do Frugal Brains Make Better Minds?

Andy Clark
University of Edinburgh


Might the frugal (but pro-active) use of neural resources be one of the essential keys to understanding how brains make sense of the world? Some recent work in computational and cognitive neuroscience suggests just such a picture. This work sheds light on the way brains like ours make sense of noisy and ambiguous sensory input. It also suggests, intriguingly, that perception, understanding and imagination are functionally co-emergent, arising as simultaneous results of a single underlying strategy known as ‘predictive coding’. This is the same strategy that saves on more mundane kinds of bandwidth, enabling the economical storage and transmission of pictures, sounds and videos using formats such as JPEG and MP3.

In the case of a picture (a black and white photo of Sir Laurence Olivier playing Hamlet, to conjure a concrete image in your mind) predictive coding works by assuming that the value of each pixel is well-predicted by the value of its various neighbors. When that’s true – which is rather often, as grey-scale gradients are pretty smooth for large parts of most images – there is simply no need to transmit the value of that pixel. All that the photo-frugal need transmit are the deviations from what was thus predicted. The simplest prediction would be that neighboring pixels all share the same value (the same grey-scale value, for example), but much more complex predictions are also possible. As long as there is detectable regularity, prediction (and hence this particular form of data compression) is possible.
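To make that concrete, here is a minimal sketch in Python (my own illustration, not anything from Clark’s text or a real JPEG codec) of the simplest rule just described: each pixel along a row is predicted to share the value of its left-hand neighbour, and only the deviations from that prediction are stored. Where the image is smooth the deviations are tiny and cheap to encode; only the edges cost anything.

import numpy as np

def encode(row):
    # Predictive coding along one image row: each pixel is predicted
    # to equal its left-hand neighbour, and only the deviation
    # (the prediction error) is kept.
    row = np.asarray(row, dtype=int)
    residuals = np.empty_like(row)
    residuals[0] = row[0]               # no neighbour yet: keep the raw value
    residuals[1:] = row[1:] - row[:-1]  # deviations from the prediction
    return residuals

def decode(residuals):
    # Reconstruct the row by 'adding back in' the predicted values.
    return np.cumsum(residuals)

row = [120, 121, 121, 122, 200, 201, 201]   # a smooth gradient with one edge
res = encode(row)
print(res.tolist())          # [120, 1, 0, 1, 78, 1, 0]: mostly tiny numbers
print(decode(res).tolist())  # recovers the original row exactly

The point of the exercise is that the receiver’s foreknowledge (the prediction rule) does the heavy lifting, so the sender need only pay for the surprises.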

Such compression by informed prediction (as Bell Telephone Labs first discovered back in the 1950s) can save enormously on bandwidth, allowing quite modest encodings to be reconstructed, by in effect ‘adding back in’ the successfully predicted elements, into rich and florid renditions of the original sights and sounds. The trick is trading intelligence and foreknowledge (expectations, informed predictions) on the part of the receiver against the costs of encoding and transmission on the day. A version of this same trick may be helping animals like us to sense and understand the world by allowing us to use what we already know to predict as much of the current sensory data as possible. When you think you see or hear your beloved cat or dog when the door or wind makes just the right jiggle or rustle, you are probably using well-trained prediction to fill in the gaps, saving on input-dominated bandwidth and (usually) knowing your world better as a result. Neural versions of this ‘predictive coding’ trick benefit, however, from an important added dimension: the use of a stacked hierarchy of processing stages. In biological brains, the prediction-based strategy unfolds within multiple layers, each of which deploys its own specialized knowledge and resources to try to predict the states of the level below it.

This is not easy to imagine, but it rewards the effort. A familiar, but still useful, analogy is with the way problems and issues are passed up the chain of command in rather traditional management hierarchies. Each person in the chain must learn to distil important (hence usually surprising or unpredicted) information from those lower down the chain. And they must do so in a way that is sufficiently sensitive to the needs (hence expectations) of those immediately above them. In this kind of multi-level chain, all that flows upwards is news. What flows forward are just the deviations from each level’s predicted events and unfoldings. This is efficient. Valuable bandwidth is not used sending well-predicted stuff forwards. Why bother? We were expecting all that stuff anyway. What gets marked and passed forward in the brain’s flow of processing are just the divergences from predicted states: divergences that may be used to demand more information at those very specific points, or to guide remedial action.
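For readers who like to see the bookkeeping, here is a deliberately simplified toy in Python (my sketch, not a model from the predictive-coding literature) of such a chain of command. Each level holds an estimate, predicts the level below it, and receives from below only the prediction error, the ‘news’; it then nudges its estimate so as to reduce both the error arriving from below and the error it exposes to the level above. Real schemes (for example those surveyed in the works cited below) use learned weights, precision-weighting and many levels; this toy uses identity weights and just two levels.

def step(sensory_input, estimates, lr=0.1):
    # One update pass of a toy linear predictive-coding hierarchy.
    layers = [sensory_input] + estimates
    # bottom-up prediction errors: what each level failed to predict about the level below
    errors = [layers[i] - layers[i + 1] for i in range(len(estimates))]
    new_estimates = []
    for i, est in enumerate(estimates):
        error_from_below = errors[i]
        error_to_above = errors[i + 1] if i + 1 < len(errors) else 0.0
        # move to explain away the error from below while staying predictable from above
        new_estimates.append(est + lr * (error_from_below - error_to_above))
    return new_estimates, errors

estimates = [0.0, 0.0]   # two levels, starting with no expectations
signal = 2.0             # a constant sensory input
for _ in range(200):
    estimates, errors = step(signal, estimates)
print([round(e, 2) for e in estimates])  # both levels settle on the signal
print([round(e, 2) for e in errors])     # and the 'news' flowing upward dwindles to nearly zero

Once the estimates have settled, almost nothing needs to be passed up the chain: the higher levels already expected what arrived.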

All this, if true, has much more than merely engineering significance. For it suggests that perception may best be seen as what has sometimes been described as a process of ‘controlled hallucination’ (Ramesh Jain), in which we (or rather, various parts of our brains) try to predict what is out there, using the incoming signal more as a means of tuning and nuancing the predictions than as a rich (and bandwidth-costly) encoding of the state of the world. This in turn underlines the surprising extent to which the structure of our expectations (both conscious and non-conscious) may quite literally be determining much of what we see, hear, and feel.

The basic effect hereabouts is neatly illustrated by a simple but striking demonstration (used by the neuroscientist Richard Gregory back in the 70s to make this very point) known as ‘the hollow face illusion.’ This is a well-known illusion in which an ordinary face-mask viewed from the back (which is concave, to fit your face) appears strikingly convex when viewed from a modest distance. That is, it looks (from the back) to be shaped like a real face, with the nose sticking outwards rather than having a concave nose-cavity. Just about any hollow face-mask will produce some version of this powerful illusion, and there are many examples of it on the web. The hollow face illusion illustrates the power of what cognitive psychologists call ‘top-down’ (essentially, knowledge-driven) influences on perception. Our statistically salient experience with endless hordes of convex faces in daily life installs a deep expectation of convexity: an expectation that here trumps the many other visual cues that ought to be telling us that what we are seeing is a concave mask.

You might reasonably suspect that the hollow face illusion, though striking, is really just some kind of psychological oddity. And to be sure, our expectations concerning the convexity of faces seem especially strong and potent. But if the predictive coding approaches I mentioned earlier are on track, this strategy might actually pervade human perception. Brains like ours may be constantly trying to use what they already know so as to predict the current sensory signal, using the incoming signal to constrain those predictions, and sometimes using the expectations to ‘trump’ certain aspects of the incoming sensory signal itself. (Such trumping makes adaptive sense, as the capacity to use what you know to outweigh some of what the incoming signal seems to be saying can be hugely beneficial when the sensory data is noisy, ambiguous, or incomplete – situations that are, in fact, pretty much the norm in daily life.)

This image of the brain (or more accurately, of sensory and motor cortex) as an engine of prediction is a simple and quite elegant one that can be found in various forms in contemporary neuroscience (for useful surveys, see Kveraga et al. (2007) and Bubic et al. (2010); for a rich but challenging incarnation, see Friston (2010)). It has also been shown, at least in restricted domains, to be computationally sound and practically viable. Just suppose (if only for the sake of argument) that it is on track, and that perception is indeed a process in which incoming sensory data is constantly matched with ‘top-down’ predictions based on unconscious expectations of how that sensory data should be. This would have important implications for how we should think about minds like ours.

First, consider the unconscious expectations themselves. Those unconscious expectations derive mostly from the statistical shape of the world as we have experienced it in the past. That means we should probably be very careful about the shape of the worlds to which we expose ourselves, and our children. We see the world by applying the expectations generated by the statistical lens of our own past experience, and not (mostly) by applying the more delicately rose-nuanced lenses of our political and social aspirations. So if the world that tunes those expectations is sexist or racist, that will structure the unconscious expectations that condition humanity’s own future perceptions – a royal recipe for tainted evidence and self-fulfilling negative prophecies.

Second, reflect that perception (at least of this stripe) now looks to be deeply linked to something not unlike imagination. For insofar as a creature can indeed predict its own sensory inputs from the ‘top down’, such a creature is well-positioned to engage in familiar (though perhaps otherwise deeply puzzling) activities like dreaming and some kind of free-floating imagining. These would occur when the constraining sensory input is switched off, by closing down the sensors, leaving the system free to be driven purely from the top down. We should not suppose that all creatures deploying this strategy can engage in the kinds of self-conscious deliberate imagining that we do. Self-conscious deliberate imagining may well require substantial additional innovations, such as the use of language as a means of self-cuing. But where we find perception working in this way, we may expect an interior mental life of a fairly rich stripe, replete with dreams and free-floating episodes of mental imagery.

Finally, perception and understanding would also be revealed as close cousins. For to perceive the world in this way is to deploy knowledge not just about how the sensory signal should be right now, but about how it will probably change and evolve over time. For it is only by means of such longer-term and larger-scale knowledge that we can robustly match the incoming signal, moment to moment, with apt expectations (predictions). To know that (to know how the present sensory signal is likely to change and evolve over time) just is to understand a lot about how the world is, and the kinds of entity and event that populate it. Creatures deploying this strategy, when they see the grass twitch in just that certain way, are already expecting to see the tasty prey emerge, and already expecting to feel the sensations of their own muscles tensing to pounce. But an animal, or machine, that has that kind of grip on its world is already deep into the business of understanding that world.

I find the unity here intriguing. Perhaps we humans, and a great many other organisms too, are deploying a fundamental, frugal, prediction-based strategy that delivers perceiving, understanding, and imagining in a single package? Now there’s a deal!


A version of this material appeared as “Do Thrifty Brains Make Better Minds” on The Stone (the philosophy blog of The New York Times) on January 15, 2012.

[Feature image by 401(K) 2012]

Works Cited

Bubic A, von Cramon DY and Schubotz RI (2010) Prediction, cognition and the brain. Front. Hum. Neurosci. 4:25: 1-15

Friston K. (2010) The free-energy principle: a unified brain theory? Nature Reviews: Neuroscience 11(2):127-38.

Helmholtz, H. (1860/1962). Handbuch der physiologischen Optik (Southall, J. P. C. (Ed.), English trans.), Vol. 3. New York: Dover.

Kveraga, K., Ghuman, A.S., and Bar, M. (2007) Top-down predictions in the cognitive brain. Brain and Cognition, 65, 145-168

The Problem of Identity in Biology

Budimir Zdravkovic
The City College of New York

*

I: A Discourse of Biological Concepts

In the last century great leaps in technology and scientific understanding have allowed us to investigate our bodies and how they function in detail. Our growing knowledge of the chemical and biological sciences has profound philosophical implications concerning our identity and the identity of other living organisms. Concepts in biology tend to segregate phenomena into neat binary categories, much as classical logic does. These include the distinctions between living and non-living, cancerous and non-cancerous, animal and human, male and female—and so on—but such definitions are far from clear. In the most extreme cases, binary definitions within biology that rely on classical logic and the laws of identity could perpetuate chronic illness. Such definitions could also hypothetically lead to violations of animal and human rights. We must understand that the traditional logic that has been the foundation of mainstream biological understanding for the past two thousand years has a number of shortcomings.

Living systems enjoy such a high degree of complexity that it becomes necessary to distinguish them from non-living systems. But the boundary between living and non-living entities quickly becomes vague and hazy when we consider the evolution of life or the origin of the first living organism: if the origin of life is continuous with non-living chemical and physical processes, then at what point does something become living? At what point can we define a substance as a living thing? With the rejection of vitalism (the idea that there is an essence in the constitution of all living things which is fundamentally different from the constituents of non-living things), we have found that all matter is made of the same stuff at the fundamental level of atoms and molecules.

Evidence from the field of evolutionary biology suggests that all living species share a common ancestor, along with the capacity to transition into different species under environmental pressure and natural selection. From this evidence we can also draw the conclusion that it is not only life that is defined vaguely in the biological sciences, but also the concept of a species. Speciation is a gradual process, a process of transition. Such processes are difficult to define using classical logic because they are processes that involve change. At what point does one species transition into another? Later I will illustrate the shortcomings of the biological definition of species by way of a thought experiment. What does this mean for our identity as humans, and as living things in general?

II: The Ship of Theseus and The Problem of The Heap

The problem with our traditional idea of identity is best illustrated by old and common philosophical thought experiments; this is why scientists ought to be speaking with philosophers of science. We can begin with the problem of the grain and the heap. Imagine we are attempting to define a heap of sand and wish to discover when the heap ceases to be one. Of course the heap consists of grains, so if we were to start removing them one by one, at some point the heap would no longer qualify as a heap. We might be so bold as to define a heap by an arbitrary number, say 30,000 grains. The obvious problem with this would be the lack of any functional or phenomenal difference between an adequate heap and a pile of 29,999 grains. We could continue to remove grains, but it would become difficult if not impossible to tell when the heap becomes just a pile: try as we might, we do not seem to be able to place a number or a concrete definition on what constitutes a ‘heap of sand.’ Even with our attempted definition we have but a vague idea of the boundary which separates a mere collection of grains from an adequate heap. A similar problem is encountered when we talk of the Ship of Theseus: as the ship undergoes renovations, its old parts replaced with new ones, the question of the ship’s identity is raised. If the renovated ship is a new ship, at what point is it no longer the old one?

The problematic definitions in biology have an identical form. Phenomena in biology are vaguely defined because they emerge from gradually changing events. We can reason similarly about the emergence of life, or our definition of ‘living things.’ Is random DNA surrounded by a lipid bilayer a living organism? Or does that kind of organization of molecules only become living when there is some sort of metabolism involved? The ‘metabolism first’ hypothesis goes to the extreme of proposing that life began as a collection of metabolic chemical reactions that were catalyzed by simple catalysts (Prud’homme-Généreux & Groenewoud, 2012; Cody, 2004). The hypothesis is bolstered by the observation that certain metabolic chemicals can be synthesized in the presence of simple catalysts like the transition metal sulfides, catalysts which existed on the prebiotic Earth (Cody, 2004).

But even the ‘metabolism first’ hypothesis cannot say when non-living things become living things. If we define something as ‘living’ just because it has a metabolism (a chemical reaction or a collection of chemical reactions that utilize energy according to the principle of steady-state kinetics), then there are many things that we could, rather absurdly, deem living. We could purify a set of fatty acids and reconstitute a metabolic chemical reaction that exhibits steady-state kinetics in a test tube, for instance, but by no means could we call this a living system (Yu et al., 2011). A living system as a whole has a degree of complexity that is hard to reduce to, or define in terms of, its constituents. In the same way that a collection of grains becomes a heap at some point, the metabolic reactions and molecules in the living system must give rise to life at some point in life’s evolution. But just like the heap of sand, living things remain poorly defined if we attempt to understand them by way of classical logic and the laws of identity.

III: The Probabilistic Logic of Cancer

Cancer emerges as a result of genetic evolution. The difference between the evolution of cancer and the genetic evolution that gives rise to speciation, however, is that cancer cells (or cells predisposed to cancer) evolve quickly enough for their evolution to be observed. In cancerous cells, changes in the DNA accumulate until the cell’s mechanism of division is out of control. As a result, cells start dividing rapidly and uncontrollably. The more rapidly a cell divides, the more mutations it accumulates; and the cells that accumulate the greatest number of division-favoring mutations will out-compete the healthy tissues in our bodies, because dividing cells compete for the same nutrients.

There are many kinds of cancer, and not one is reducible to a single error or mutation in the DNA. At what point does a cell become cancerous? Once again this is a lot like the problem of the heap of sand, or that of the Ship of Theseus. One mutation of the DNA is certainly not enough to make a cell ‘cancerous,’ but it could predispose someone to developing cancer. We know that after the cell accumulates a certain number of crucial mutations it becomes (by our definition) cancerous, but there is no clear indication as to when the cell might become cancerous. Anyone could be at risk, yet attention tends to fall on cancer’s medical symptoms rather than on the mechanisms and processes by which a cell might become cancerous. It should concern us that cancers do not become ‘illnesses’ until they are virulent, by which time it is often too late for effective treatment.

It is simple to identify a malignant tumor, but not so simple to identify a tumor that is on its way to becoming malignant. The most effective treatment for a tumor is, as with so many things, prevention. We do not, however, seem to acknowledge tumors until they become malignant, and this is a very dangerous way of thinking about cancer. For the future of medicine it is just as important, if not more so, to understand how predisposed people are to developing malignant tumors as it is to detect whether they actually have them.

Classical reasoning makes a binary distinction between the cancerous and the non-cancerous; probabilistic reasoning, on the other hand, can provide information on the likelihood of a tissue becoming cancerous, or the average age at which a predisposed individual may start developing a tumor. Such information would be essential to the early detection and treatment of all forms of cancer. By abandoning the traditional laws of identity and adopting probabilistic terms in place of binary ones, we may begin to think differently about the nature of well-being and disease. The term denoting the disease must become less important than the relevant statistics that indicate an individual’s predisposition towards it.

Mutations in genes like BRCA1 and BRCA2, for instance, increase the risk of breast and ovarian cancers, as reported by Antoniou et al. (2003). A woman carrying inherited mutations in BRCA1 and BRCA2 would certainly not be classified as a cancer patient, but the probabilistic nature of the phenomenon—the fact that she is at high risk of acquiring the disease—ought not to be ignored. Let us say she has a 65% chance of acquiring breast cancer; how would one define the tissues in her body? Are they cancerous? Are they 65% cancerous? Traditional laws of identity and classical logic are not sufficient to properly define and understand this phenomenon, or the evolution of cancer as a chronic disease. Other forms of logic, such as probabilistic logic, have emerged as useful tools for understanding this sort of issue.
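As a toy illustration of the contrast (my own sketch in Python; the 65% figure is the hypothetical lifetime risk used above, not a clinical estimate), compare how little a binary label says about such a woman with what even a crude probabilistic description preserves:

def binary_label(has_tumor: bool) -> str:
    # classical, either/or identity: one is a cancer patient or one is not
    return "cancer patient" if has_tumor else "healthy"

def probabilistic_description(lifetime_risk: float) -> str:
    # probabilistic identity: a degree of risk rather than a category
    return f"carrier with an estimated {lifetime_risk:.0%} lifetime risk of breast cancer"

print(binary_label(False))               # prints "healthy": the risk information disappears
print(probabilistic_description(0.65))   # prints "carrier with an estimated 65% lifetime risk of breast cancer"

The binary description discards exactly the information that matters for prevention; the probabilistic one keeps it, without pretending the tissues are ‘65% cancerous.’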

IV: Speciation

Like many other biological concepts, speciation is problematic. Organisms tend to change over time due to mutations in their DNA; and, as I discussed previously, this process of gradual change presents a problem for static conceptions of identity. As a given species changes over time it becomes hard to define exactly when speciation occurs. At what point did hominids become humans, for example? And to what extent are our hominid ancestors human? We could say that a hominid is human to the extent that it shares DNA with humans, but how could such a notion survive in light of the similarities of behavior and DNA between humans and chimpanzees? Chimpanzees, after all, share almost all of their DNA with us, and in some regions of the genome they are even more closely related to humans than to bonobos (Chimpanzee Sequencing and Analysis Consortium, 2005; Prüfer et al., 2012).

It is also apparent that humans have brain structures similar to those of other hominids. Though volumetric analysis shows that, overall, humans have bigger brains than other hominids, that is not to say we do not have similar sorts of brains (Semendeferi et al., 1997). Other hominids have frontal lobes, brain areas involved in social behavior, creativity, planning and emotion. According to evolutionary theory these structures in hominid brains are homologous to those found in humans: humans and other hominids share this brain structure because a common ancestor possessed it, and each inherited it from the same antecedent.

It is impossible, through the use of conventional rules of identity, to separate humans cleanly from other animals. As mentioned above, other hominids possess the same brain structures as humans, which implies that to some extent they are like humans in terms of behavior and biological constitution. If we compare our hominid ancestor to the Ship of Theseus, humans are the equivalent of a partially renovated ship, because most of the old hominid structures are still part of the human organism: a human is, so to speak, a slightly upgraded hominid. Biologists attempt to draw species boundaries by focusing on reproduction, but this definition of species is problematic on account of its potential to violate human rights. The term ‘human species’ is synonymous with the term ‘human’ because, according to the biological definition, humans are a species. But how we generally think about humans is very different from how we think about the biological definition of the human as a species:

A ‘species’ is generally defined as a group of organisms able to reproduce by breeding with one another. This simple classification puts us into the category of ‘human’ only so long as we are capable of breeding with another of our sort, i.e. a human. To demonstrate that our definition of ‘human’ as a species differs from our definition of ‘human’ as an individual with moral capabilities and rights, I intend to proffer another thought experiment: imagine there is a woman called Nancy. Much to Nancy’s frustration and confusion, she has been unable to conceive with her husband, Tom. Nancy and Tom have been to several different doctors and Nancy is ostensibly healthy. There is nothing wrong with her hormones or her reproductive organs, and she ovulates regularly. Tom, too, is completely healthy. There is no apparent reason why Nancy and Tom are unable to have children.

Eventually a scientist in Tom and Nancy’s town caught word of Nancy’s unusual situation. The scientist acquired one of Nancy’s eggs, studied it closely, and soon came to the conclusion that Nancy’s eggs are simply incompatible with human sperm. According to the biological definition of species, it seems Nancy has become another species, one diverged from humans. Yet she is human in every other conceivable way. If Nancy is not human in canonical biological terms, should she still be entitled to human privileges and treatment? Does she, in short, have human rights?

This thought experiment demonstrates the ethical issues involved in the biological definition of ‘human.’ Nancy is, by any reasonable standard, a human being, since she retains all the human traits we, as a species, value. The unfortunate circumstance that her eggs are incompatible with human sperm seems rather trivial when set beside everything else about her.

V: Biology and Complexity

Biology as a science began with simple ideas and concepts. The field has become much more complex as our understanding of the biological and biochemical sciences has progressed through the centuries. If there is an emerging theme within biology and biochemistry, it is that the more we know about biological and biochemical phenomena, the more complex they seem to become. It is the nature of this biological complexity, and its ever-changing constituents, that makes classical definitions and identifications of biological phenomena difficult. These phenomena cannot be understood using traditional laws of identity and classical logic without gross oversimplification. Such oversimplifications have consequences for how we think about the distinction between humans and animals and about disease and risk, and might hypothetically lead to violations of human rights.

*

Works Cited

Annie Prud’homme-Généreux and Rosalind Groenewoud. (2012). The Molecular Origins of Life: Replication or Metabolism-First? Introductory Version. National Center for Case Study Teaching in Science.

George D. Cody. (2004). Transition Metal Sulfides and the Origins of Metabolism. Annual Review of Earth and Planetary Sciences. 32.

Xingye Yu et al. (2011). In vitro reconstitution and steady-state analysis of the fatty acid synthase from Escherichia coli. PNAS. 108.

A. Antoniou et al. (2003). Average Risks of Breast and Ovarian Cancer Associated with BRCA1 or BRCA2 Mutations Detected in Case Series Unselected for Family History: A Combined Analysis of 22 Studies. American Journal of Human Genetics. 72.

Chimpanzee Sequencing and Analysis Consortium. (2005). Initial sequence of the chimpanzee genome and comparison with the human genome. Nature. 437.

Kay Prüfer et al. (2012). The bonobo genome compared with the chimpanzee and human genomes. Nature. 486.

Katerina Semendeferi et al. (1997). The evolution of the frontal lobes: a volumetric analysis based on three-dimensional reconstructions of magnetic resonance scans of human and ape brains. Journal of Human Evolution. 32.

[feature image by ‘Biodiversity Heritage Library’]