
Category Archives: Philosophy of Science


Consciousness is an Inevitable State of Matter

Evan Warfel
University of California, Berkeley

Hello, this is your brain, reading about your brain, reading about your brain.

Consider the following question: Why are we conscious?

I get it; pondering consciousness sounds like an activity only enjoyed by nerds, people who are high, those of us who have found a moment of post-yoga stillness, or people who fit in all three categories at once. But notice that we do not tell our heart to beat or our cells to grow, we do not have to think about focusing our eyes, and we do not consciously will our bodies to inject adrenaline into our bloodstream when something scary happens. These things happen more or less automatically, and if such highly complex tasks can happen without our attention or willpower, why should other complex tasks—like choosing what to eat for breakfast—require conscious awareness? How hard is choosing which flavor of yogurt to eat? And do we really need to be conscious to determine that we should peel a banana before biting one?

To get to the question of why we are conscious, let us first cover some background on how it happens. Despite what people may think during post-yoga philosophy sessions, everything you have ever thought — including every time you have felt thirsty or experienced the feeling of being loved, every ambience you have been aware of, every piece of visual, musical, and kinesthetic thought-stuff you have used, every ounce of hard-earned wisdom, and every single bit of fluff passing through your mind — has been, and continues to be, encoded by brain cells that ‘pulse.’ Our conscious awareness is underpinned by pulsating neurons. Contrast this with how computers, a.k.a. non-conscious information processors, deal with information: by manipulating the states of tiny binary on-off switches and electromagnetic storage. In exceedingly rare cases — which, historically, have inexplicably involved the Russians — information has been encoded in the states of three-state ternary switches.

But the neurons in your brain don’t use any particular cell ‘state’ to encode information. Instead, your neurons keep track of information by pulsing: rapidly releasing electro-chemical discharges and then recharging. Like what your heart does, but quicker and nimbler, and on a much smaller scale. More specifically, information in your brain is encoded in the rate (and the change in the rate) of pulsing, rather than in any particular ‘charged’ or ‘discharged’ condition.

I admit that using the rate of a pulse sounds like an overly simplistic mechanism for keeping track of information that ultimately gives rise to conscious experience. If you were to describe a piece of music by the mechanism of foot-tapping, you could have one person tap how quickly the song should go, another person’s tapping rate could indicate the volume, yet another could tap slowly for minor chords and quickly for major chords, and so on. Clearly, you’d need to hire a lot of interns to describe ‘Wonderwall’, and you would need an army of foot-tappers to store the information of a Beethoven symphony. But if you were forced to be clever — perhaps due to constrained resources — you could start to get interesting interactivity with only, say, four neurons. For example, you might have one neuron that only pulses when three other neurons all pulse quickly. Or you could have one neuron that only pulses when three other neurons pulse at the same rate, in agreement. Connect up several of these four-neuron units and you could set up a system of voting and tie-breaking. With such a system you could even get the neurons to fire in the same cascade when given the right input; we might call this a memory. This would be rather impressive for a handful of cells that only know how to change how fast they pulse.
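The four-neuron units described above can be sketched in a few lines of code. This is a purely hypothetical illustration, not a biological model: the function names, rates, and thresholds are all invented. Each input neuron is reduced to a single firing rate (pulses per second), and an output unit ‘fires’ only when its inputs satisfy a condition.

```python
def coincidence_unit(rates, threshold=50.0):
    """Fire (return True) only if every input neuron pulses quickly."""
    return all(r >= threshold for r in rates)

def agreement_unit(rates, tolerance=5.0):
    """Fire only if all input neurons pulse at (nearly) the same rate."""
    return max(rates) - min(rates) <= tolerance

# Three fast inputs trigger the coincidence unit...
print(coincidence_unit([60.0, 72.0, 55.0]))   # True
# ...but a single slow input vetoes it.
print(coincidence_unit([60.0, 72.0, 10.0]))   # False
# Inputs in rough agreement trigger the agreement unit.
print(agreement_unit([40.0, 42.0, 44.0]))     # True
```

Wiring several such units together, with some units taking other units’ outputs as inputs, is what would give you the voting and tie-breaking described above.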

It turns out that each neuron in your brain connects to, on average, about seven thousand other neurons. In total, you are thought to contain roughly eighty-six billion of these pulsing chumps, to say nothing of the connections between them or any of the other types of brain cells you possess. The number of neurons in your body is so large I feel morally, spiritually, and ethically obliged to announce the figure more than once: eighty-six billion. Each one feverishly pulsing and changing its rate over time, like club-goers over the course of an evening. No matter what you are doing, including sleeping, your neurons are always dancing on electricity, tripping the light fantastic with approximately seven thousand brethren.

Within the crazed activity of our brains, so much information is being processed, transformed and manipulated by our pulsing neurons that we are conscious of only a tiny part of it. Yet any effort to understand why we are conscious is fruitless without articulating what our pulsing neurons enable in the first place.

You are a prediction machine.

Many parts of our brains are dedicated to figuring out what will happen next. For example, while I write this sentence, some parts of my brain are laboring to figure out how parts of your brain will think about what is going to happen when you read it. Other parts of the brain attempt to predict the errors in the main prediction systems, all in an effort to improve accuracy. If you can predict, better than chance, where your next meal will come from, you don’t have to worry as much about that aspect of your survival.

However, some things are too complex for brains to predict on their own. It turns out you can’t predict how Mars will move across the night sky by yourself: if you observe it each night, you’ll find that Mars sometimes appears to go backward. Every so often, the red planet seems to reverse its trajectory for a little while and then it gets right back to continuing its original trek across the sky.

If you wanted, you could come up with a formal Martian prediction scheme. To do your scheme justice you’d need research, tools, measurement, theory, other people, and coffee. If you were doing things right, you would need single-origin beans for the hipsters and caffeinated turmeric tea for the hippies. The caffeine would help your researchers deal with the following fundamental difficulty: Mars’s movement contains more variation and unpredictability than what we are used to. We have evolved to learn and then intuitively predict the movement of a ball when it flies at us, but not so much when Mars does something similar. Despite our limitations, we are drawn to predicting the motion of the planets; we want to formally predict lots of things, including how people will respond to different medications, when a heart will stop, when we might find ourselves in the midst of a maple syrup shortage, or—most importantly—precisely when certain individuals (like me) will run out of crunchy peanut butter.

Though we may start off absolutely terrible at formal, explicit predictions in any given domain, we tend to improve. It has taken human beings thousands of years, but we currently have a system of civilization that includes things like predictable sources of food and water, in the form of supermarkets, restaurants, farm supply chains and pipes. To aid our prediction attempts, we have — throughout the entirety of human existence — come up with rules of thumb, mental models, and various tools of thought that make it easier to learn from experience as well as navigate the world we live in. These mental models and concepts serve to translate brain-cell pulsing to real-world phenomena and vice versa. The retrograde motion of Mars, for instance, makes perfect sense as long as the following concepts are part of your worldview: Mars is permanently farther away from the sun than the earth, planets orbit the sun due to gravitational attraction, and when we look at the night sky we see a snapshot of where Mars is in three-dimensional space. Each of these concepts builds on more basic elements, like that an orbit consists of an elliptical or roughly circular shape around a point, the very notion of space, and of physics, gravity, and so on. And each of these concepts is a ‘tool’ we use to think with.
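The retrograde motion just described falls out of a few lines of simulation. This is a rough sketch under simplifying assumptions (circular, coplanar orbits; rounded radii in AU and periods in days): tracking the direction from a moving Earth to a moving Mars shows the apparent longitude of Mars occasionally reversing, with no epicycles required.

```python
import math

EARTH_R, EARTH_T = 1.000, 365.25   # orbital radius (AU), period (days)
MARS_R,  MARS_T  = 1.524, 687.0

def position(radius, period, day):
    """Heliocentric position on a circular orbit."""
    angle = 2 * math.pi * day / period
    return radius * math.cos(angle), radius * math.sin(angle)

def mars_longitude(day):
    """Apparent direction from Earth to Mars (geocentric longitude)."""
    ex, ey = position(EARTH_R, EARTH_T, day)
    mx, my = position(MARS_R, MARS_T, day)
    return math.atan2(my - ey, mx - ex)

# Day-to-day change in apparent longitude, unwrapped into (-pi, pi].
steps = []
for day in range(800):
    d = mars_longitude(day + 1) - mars_longitude(day)
    steps.append((d + math.pi) % (2 * math.pi) - math.pi)

# Mars mostly drifts one way (prograde), but near opposition, when the
# faster Earth overtakes it, the sign flips: apparent retrograde motion.
print(any(s > 0 for s in steps), any(s < 0 for s in steps))  # True True
```

The 800-day window covers roughly one synodic period, so at least one retrograde episode is guaranteed to appear in the output.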

But enough with formal, explicit predictions. Communication is an excellent example of informal, implicit predicting: language consists of words and other tools of thought we use to predict people’s mental states and encourage those people to predict ours. When these predictions are accurate enough, we say that we have successfully communicated. In other words, when we communicate, we trade recipes for predicting (at least some part of) what we are thinking to other people. Most of the time, our brains do this prediction so blazing fast it is as if we have skipped the recipe and instead exchanged actual ‘meaning.’ But the fact remains that when you say ‘chair’ you rely on me to infer what you mean, as it turns out that there are many more types of chairs than anyone can conceive of.

Both formal and informal predictions rely on pulsing neurons as well as on concepts and mental models. Having the right tools of thought is like having a bike that fits your body: different combinations of gears and lever lengths on a bike will beneficially interact with the different lengths and strengths of your limbs. The bike geometry serves to translate your effort into forward motion most effectively for the environment you find yourself in. Similarly, the right set of thought-tools can help translate the stuff you interact with into, literally speaking, a quirky configuration of pulsing neurons that fits within your other unique neuronal firing patterns. Without the germ theory of disease, for instance, people interested in advancing human health can work hard, have creative ideas, and burn the midnight oil; they just would not get too far for their efforts. Moreover, everybody understands the germ theory of disease slightly differently.

It is not clear, however, that one needs to be conscious to be equipped with the right set of concepts or thought-tools to predict what might happen in the world: the field of machine learning is dedicated to getting computers to perform non-conscious prediction. Its basic premise is that computers can form models of the world and, through a lot of trial and error, figure out how to tweak those models to get better results. At the risk of wading too far into a swamp of poorly defined terms, I submit that the model(s) and parameters that a machine learning algorithm uses for prediction are a computer’s ‘tools of thought.’ Without the right models, parameters, and weights, accurately predicting aspects of the world becomes impossible for a machine learning algorithm, similar to humans who lack the right concepts.

One glaring difference between a computer’s predictive models and ours is that we consciously experience utilizing what we know. Not only are we aware of information passing through our minds and bodies, but if you pay attention, you’ll notice that thinking about different subjects or different people corresponds to a different flavor of subjective experience, even during what we would currently call the same emotional state. Yet computers capable of machine learning pose a challenge, which brings us back to our opening question: if reasonably accurate predictions can be made without conscious awareness, why in Darwin’s name are we conscious at all? What advantages does conscious awareness confer?

Consciousness is the quickest route to context.

To understand what advantages consciousness confers for predicting the world, let us further consider the case of unconscious computer prediction. Imagine you had a digital camera strapped to your bike helmet. You could, if you wanted, instruct a computer to predict what kinds of colors it would see next during your commute. If you were to use modern machine learning algorithms, you would avoid giving the computer ‘smart rules’ regarding how to go about making the predictions. ‘Smart rules’ are the kinds of things we humans can observe and articulate when we think we are being clever, like ‘yellow is a rare color in many urban areas, so don’t expect it’ or ‘mauve and teal are only likely to appear during fashion week in New York, or whenever Christo rolls through town.’

The reason you would avoid verbalized smart rules is that they often fail to cover edge cases, not to mention that they are rarely precise enough to be encapsulated by computer code. By the time you come up with all the exceptions, and exceptions to the exceptions, and so on, French verb conjugations would feel like a walk in the park. By the time you pin down precise instructions regarding each possibility — like what constitutes a patch of color in a computer image, or how you might account for the fact that your eyes see color rather differently than computers do (your eyes and brain automatically color-correct for changes in the ambient light), and so on — your code would be much longer than this nearly interminable sentence.

Instead of smart rules, modern machine-learning algorithms take another approach entirely. If we apply the modern machine learning approach to the problem of bike-route color prediction, part of the instructions — or code — you’d develop would define how the computer would know if its predictions were improving. I admit that ‘minimize the error associated with each prediction’ isn’t exactly a James-Bond-worthy mission to charge your computer with, but as long as computers cannot drink martinis, we should be okay with giving them boring things to do. And once you define how the computer should measure its own error, you can leave it to work things out on its own, leaving you free to drink as many martinis as you see fit.

Prediction-error has turned out to be both a very difficult and a very fruitful thing to attempt to minimize. One of the biggest single advances in unconscious computer prediction happened about thirty years ago, when David Rumelhart, Geoffrey Hinton, and Ronald Williams worked out how to convince a computer to minimize its own prognostic errors. Their breakthrough was in figuring out how to get the computer to fairly distribute responsibility for the prediction errors across all of the parts of the prediction process. It sounds simple in retrospect: instead of coming up with smart rules, you can have the computer come up with the rules itself, as long as it knows how to improve and who to blame. Since then, almost all progress in neural networks, a type of machine learning, has consisted of developing fancier ways of utilizing the technique they discovered — one reason for the boom in AI over the past decade is that computer hardware has finally gotten cheap enough that plenty of it can be used for the computationally intensive process Rumelhart et al. derived.
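The ‘minimize your own prediction error’ idea can be made concrete with a toy example. This is not the Rumelhart et al. backpropagation algorithm itself, just plain gradient descent on a two-parameter model: each parameter receives ‘blame’ for the error in proportion to its gradient, and nudging each parameter against its gradient shrinks the error without any smart rules being supplied.

```python
# Target rule the machine must discover on its own: y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
w, b = 0.0, 0.0          # the model's initial, clueless parameters
lr = 0.01                # learning rate: how big a corrective nudge to take

def mean_squared_error(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

before = mean_squared_error(w, b)
for _ in range(2000):
    # The gradients assign responsibility for the error to w and b separately.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b
after = mean_squared_error(w, b)

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
print(after < before)            # True
```

Backpropagation generalizes this blame-assignment to networks with many layers of parameters, which is what made the technique such a breakthrough.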

But what would you do if you were not sure of the problem in advance? What if the only thing you knew was that your information processor would come across completely novel problems? Well, you would probably take the best general prediction-error minimizer you could conjure, and hook it up to something that could define its own problems. This way, no matter what problem your information processor defines for itself, it can learn. Roughly speaking, this describes how (parts of) your mind work. When you learn to throw a baseball, you don’t think about the path of your arm in terms of coordinate positions, or in terms of numerically specified velocity. Instead, you attempt to throw a ball, observe the result, and try again. The specific learning takes place on its own. This general idea applies to other parts of your brain, too:  you consciously choose which problems to solve, and then use a combination of conscious and unconscious processing to solve them.

Choosing what problems are worth pursuing in the first place is much harder than doing the ‘learn by error minimization’ task, especially if you use the same non-conscious paradigm of being blind to the qualities of the information you work with. In a computer, the processor is robbed of context — it ‘knows’ nothing about what the rest of the computer is doing, only that certain operations should be applied to bits. More than that, the main processing chip in your laptop does the same sorts of things regardless of whether you are watching a video of a hydraulic press squishing a toy or working on financial models in a spreadsheet. If you made a computer conscious and kept everything else the same, its internal experience of the information it processes would feel uniform. Or to flip this metaphor around, if we were mostly unconscious, understanding links on Reddit would feel exactly the same as processing the information from the wind on our face. It is this uniformity of information that makes it nearly impossible to define problems worth solving, because all problems worth solving — not to mention almost all of the tasks we do on a daily basis — have lots of aspects to consider.

Why? If a problem has a lot of aspects to consider, it is technically ‘multi-dimensional.’ Having only one way of experiencing data, no matter what you are processing or computing, does not work for quickly understanding and solving multi-dimensional problems. The problems we solve on a daily basis are so multifaceted, in fact, that we use our attention to ignore what we deem irrelevant: there is no way to efficiently process everything. But by consciously experiencing information in fundamentally different ways (emotion, music, sound, touch) we gain access to irreducibly different types of data.

Physiological thirst, for example, corresponds to your body detecting the water content (and osmotic pressure) of select cells. When enough cells are low on water, we experience this information as the visceral feeling of thirst and mild dehydration. Imagine if the information corresponding to being thirsty — and to everything else — used the same mechanism and presented itself as push notifications on your smartphone. It would be much more difficult to distinguish vital signals from ones you could safely ignore. We would likely all develop elaborate systems to help filter the signals we were given. Instead, it is much more efficient to have the feeling of thirst manifest viscerally. By being able to perceive the world in multiple, irreducible ways, we are able to use a larger set of tools to quickly perceive, reflect, decide, and learn — conscious processing of information is more efficient.

To see how, consider one of the more esoteric data visualization techniques known as Chernoff faces. Generating a Chernoff face involves mapping multi-dimensional data to different parameters that govern the face’s appearance — imagine that, when drawing a face, the width of the eyebrows, the diameter of the pupils, and the size of the mouth are all driven by the values in select columns of a single row of data. With training, a data analyst could read Chernoff faces (or Chernoff chairs, tables, rooms, and so on) just as easily as we do scatterplots. But the effectiveness of a Chernoff object is only possible with conscious experience — to a computer, the only difference between a set of high-dimensional Chernoff faces and the same data represented as a 1,000-column spreadsheet would be a set of labels or memory pointers; to us the difference is much more profound than a few bits of information. Our conscious experience allows us to perceive (and quickly learn about) the world as a collection of Chernoff objects. A computer, well, not so much.
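A Chernoff-style mapping is easy to sketch. Everything below is invented for illustration (the column names, value ranges, and choice of facial features are assumptions, not a real dataset or a standard implementation): each column of a data row is rescaled into [0, 1] and assigned to one facial parameter.

```python
def scale(value, lo, hi):
    """Clamp a raw value into [0, 1] for use as a drawing parameter."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def chernoff_params(row):
    # Each column of the row drives a different facial feature.
    return {
        "eyebrow_width": scale(row["revenue"], 0, 1_000_000),
        "pupil_diameter": scale(row["growth_rate"], -0.5, 0.5),
        "mouth_size": scale(row["headcount"], 0, 500),
    }

row = {"revenue": 250_000, "growth_rate": 0.1, "headcount": 125}
print(chernoff_params(row))
```

With the assumed ranges, a revenue of 250,000 maps to an eyebrow width of 0.25; a drawing routine would then render those parameters as a face, and the analyst’s pattern-hungry visual system does the rest.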

In other words, consciousness is the most efficient route to context. It is within the context of how our body is feeling and what we have in the fridge—not to mention how we feel about the things in that fridge—that we make the decision about what to have for breakfast. Call it the argument from multi-dimensionality: it is much easier to make decisions like these by perceiving such contextual pieces of information as different types of sensations rather than have them all be reduced to the same kind of information and processed by a ‘blind’ mechanism.


Four implications stemming from this consideration of consciousness are worth mentioning.

First. The argument from multidimensionality implies that synesthesia should be relatively rare, and that multi-domain synesthesia (where the experience of information from one sense informs the experience of multiple other senses) should be practically non-existent, both across and within species.

Second. After inspecting his own thought process and first-person experience, Descartes famously concluded ‘I think therefore I am.’ While Descartes’ introspective analysis was hugely influential on the course of Western thought, it clearly could have been better. Had Descartes been more skilled at (or perhaps more aware of Buddhistic approaches to) introspective exploration, he might have alighted on the fact that his conscious awareness was more ‘upstream’ than his senses or rational thought-processes. He could have (rightly) concluded that everything he perceived or thought about is ‘downstream’ of a specific non-verbal sense of self, had he ‘looked’ in the right way. Alas, we can only wonder at what the course of Western thought might have been had Descartes been more skilled at the difficult task of introspecting his own mental processes.

Third. The argument from multidimensionality helps solve one of the old zombie problems (in particular, the kind that can’t be solved via video games and practice). Many have conceived of a P-zombie, a realistic and convincing humanoid that can navigate the world but is entirely lacking in conscious experience. Given that such a thing would have to process information to survive, it would presumably rely on (lots of) self-directed machine learning. This means that its ‘brain’ or set of information processors would be massive. Not only would a self-directed machine-learning automaton require lots of hardware (potentially on the order of rooms of servers), it would also require lots of training data — some of the most cutting-edge neural networks (e.g. generative and conditional adversarial networks) require on the order of 100,000 training images to work effectively. Not only would our automaton need considerable bandwidth for receiving such information, it is beyond anyone’s ken how it would gather enough training data merely as a by-product of just trying to get through the day. One of the main reasons for the rapid progress of machine learning and neural networks is that the cost of the necessary hardware has become cheaper, which has enabled the use of more hardware and training data, not less. In other words, self-contained P-zombies are impossible as long as our artificial approaches to multidimensional information processing rely on the relatively inefficient mechanism of unidimensional, non-conscious experience.

Fourth. When I said that the entirety of human existence could be described as a progression towards better predictions and better tools for predicting stuff, I was not kidding. As I alluded to above, evidence for a long-term trend of striving for more predictability can be found in several places, particularly in the scientific domains.

Scientific progress has generally concerned itself with explaining and predicting what would otherwise be random data points and facts that don’t quite make sense on their own. Want to know what happens when you split an atom? Is there a formal way to figure out what happens if you take one thing and mix it with another? What if you took three pounds of Taylor Swift CDs and used them to reflect light onto a small sphere? How much useful energy could you create? Would there be any meaningful difference between using her early and her late recordings? Would Mozart symphonies be any better?

As covered above, predicting these things requires domain-specific concepts, mental models, and formalized rules of thumb that have been developed as part of the scientific method. In certain cases, the scientific progress towards making the world more predictable has not been in terms of explaining data per se; rather, the progress has been in how to go about making predictions more accurate. In the thirteenth century, for instance, a Franciscan friar named Roger Bacon articulated the abstract concept of experimentation. In what is one of the more important breakthroughs of all time, he helped make the rate of true-knowledge acquisition (i.e. scientific progress) much more predictable. My point is that the history of progress can be seen as a long-term trend of striving towards more predictability. While some aspects of the world are more predictable now, the large increase in the planet’s population means that the world is not always becoming uniformly more predictable. Human beings are the most complex and hard-to-predict things we know of, and more than seven billion of them makes for an interesting world. However, an orientation towards more predictability is, on some level, a constant in every person’s life, in every society, in every country, and every city on the planet.

If you think about it, this is not just a human trend: the history of evolution consists of life forms evolving to better predict the environment they find themselves in. In terms of predicting where food will be next, we are a hell of a lot better at it than bacteria are. What is curious is that the laws of physics tell us that it always takes energy to move and do any sort of work, including getting food. This means that any living thing must attempt to conserve energy, and one great way to do so is to not move to the right when the deer (or complex sugars) are to the left. The argument from multi-dimensionality implies that, over the long term, any spark of life will eventually evolve to include life forms that are on some level aware of the data they process. Given enough time and the mechanism of random mutation, in a world of multi-dimensional problems where training data is scarce, the impulse towards living and reproducing will carry life from the bacterium to a creature that is conscious of some of the information passing through it. All it takes is (a) a system of non-homogeneous self-replicating units, (b) the ability of said units to utilize information to alter their behavior, (c) issues of survival that can be described in terms of high information dimensionality, and (d) lots of time.

Whether it is on Mars or on a planet way beyond Alpha Centauri, once there is life, you can extrapolate that at some point in the future the creatures that evolve from it will have conscious experiences. Though the conditions for life to flourish are exceedingly rare, the impulse towards increasing in complexity resulting in conscious awareness is not rare at all.

The title for this piece comes from a phrase I heard when noted neuroscientist and consciousness chaser Christof Koch spoke at Berkeley a few years ago. When he spoke about the concept then, I understood it on some level, but was unable to articulate it as I understand it now: just as the conditions for plasma or ice are baked into the physical laws that govern our world, so too are the conditions for consciousness. In the most non-mystical way possible, conscious experience is an inevitable product of the universe’s structure. This reasoning raises the question: what else might be inevitable? It turns out that not only do rats laugh when tickled, they respond to ambiguous stimuli more optimistically immediately afterwards. If consciousness is inevitable, perhaps we will find that laughter is too. The possibilities are incredibly intriguing. But it is your turn now; I will leave it up to you to wonder on your own.

Works Cited

Bacon, Roger. Opus Majus. Trans. Robert Belle Burke (1928).

Bridges, John Henry. “The Life & Work of Roger Bacon: An Introduction to the Opus Majus.” (1914).

Clark, Andy. “Whatever next? Predictive brains, situated agents, and the future of cognitive science.” Behavioral and Brain Sciences 36.03 (2013): 181-204.

Descartes, Rene. “Discourse on the method of rightly conducting the reason, and seeking truth in the sciences.” (1850). Project Gutenberg. Retrieved from http://www.gutenberg.org/files/59/59-h/59-h.htm.

Hirsh, Jacob B., Raymond A. Mar, and Jordan B. Peterson. “Psychological entropy: a framework for understanding uncertainty-related anxiety.” Psychological review 119.2 (2012): 304.

Ishiyama, S., and M. Brecht. “Neural correlates of ticklishness in the rat somatosensory cortex.” Science 354.6313 (2016): 757-760.

Isola, Phillip, et al. “Image-to-image translation with conditional adversarial networks.” arXiv preprint arXiv:1611.07004 (2016).

Koch, Christof. California Cognitive Science Conference, U.C. Berkeley (2012)

Koch, Christof, et al. “Neural correlates of consciousness: progress and problems.” Nature Reviews Neuroscience 17.5 (2016): 307–321.

McKinley, Michael J., and Alan Kim Johnson. “The physiological regulation of thirst and fluid intake.” Physiology 19.1 (2004): 1–6.

Niiniluoto, Ilkka, “Scientific Progress”, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.)

Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. “Learning representations by back-propagating errors.” Nature 323 (1986): 533–536.

Schwartenbeck, Philipp, Thomas HB FitzGerald, and Ray Dolan. “Neural signals encoding shifts in beliefs.” NeuroImage 125 (2016): 578-586.

Van Gulick, Robert, “Consciousness”, The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.), forthcoming URL = <https://plato.stanford.edu/archives/sum2017/entries/consciousness/>.

Xu, Yang, Terry Regier, and Barbara C. Malt. “Historical semantic chaining and efficient communication: The case of container names.” Cognitive Science (2015).


The Virtual Identity of Life

Budimir Zdravkovic
City College of New York


Information can be defined as any sequence or arrangement of things that can convey a message. Here I would like to focus on information coded by biological organisms and how that information is related to their identity. The essence of living things has been difficult to define conceptually. Living things or biological things have certain properties which are unique to them and which are absent in inanimate matter. Defining life has been an ongoing problem for scientists and philosophers, but what is more puzzling is that living organisms do not appear to be defined by the conventional rules of identity. To illustrate what is meant by conventional rules let us look at the Ship of Theseus paradox, which begins with an old boat made of old parts. As this boat is renovated and the old parts are replaced with new ones, it gradually begins to lose its identity. When all the parts of the ship are eventually replaced, can we still say this new renovated ship is the Ship of Theseus? If so, what if we reassembled the old ship from the old parts? Would Theseus now possess two ships? In this paradox it is clear that the problem of identifying the ship stems from defining it in terms of its old and/or new components. The conflict of identity exists because old components are replaced with new ones, confusing our common-sense notions of continuity.

But now let us turn to biological organisms, which constantly experience this turnover in components and materials. The atoms and molecules in a living cell are constantly replaced; at some point in a particular cell’s life it will have completely replaced the molecules and atoms it originally possessed with new ones. Despite this fact the living organism persists in being the same living organism; it persists in being ‘that particular cell.’ The same is true for complex living organisms such as human beings: human tissue is constantly replaced and replenished, but a human persists in being the same person throughout his or her entire life. Biological organisms are constantly in flux but, unlike the Ship of Theseus, they do not lose their identity over time. Could it be that the identities of living organisms do not really depend on their components, but on something else? In this paper I propose that the identity of living organisms depends on the nature of the information they store and use.

Must We Quine Qualia?

George P. Simmonds
Oxford Brookes University


It is no secret that qualia possess a number of enemies in the philosophy of mind, and that the majority of these enemies advance from a materialist position allied to the methods of scientific reduction. Few of these opponents have done so with as much vigour as Daniel Dennett, however, who in his paper ‘Quining Qualia’ proposes we at long last put our cognitive fantasies to bed. In this paper I intend to analyse Dennett’s claim in the interest of suggesting that his dismissal of qualia exceeds the bounds of moderation.

Part I: Qualia

Qualia are the ‘raw feels’ of conscious experience, viz. what it is like to experience something [1]. A quale might manifest itself as a perceptual event, a bodily sensation, an emotion, a mood, or even – according to the likes of Strawson (1994) – a thought or disposition. They constitute the greenness of green, the saltiness of salt, the hotness of anger, and that thing which ‘give[s] human consciousness the particular character that it has’ (Ramachandran & Hirstein, 1997, p.430). What is it like to gaze upon a setting sun, or a lunar eclipse? What is it like to feel joy? What is music like? These are all questions relevant to the subjective character of experience, a phenomenon which itself sits ‘at the very heart of the mind-body problem’ (Tye, 2013, preface).


The Problem of Identity in Biology

Budimir Zdravkovic
The City College of New York


I: A Discourse of Biological Concepts

In the last century great leaps in technology and scientific understanding have allowed us to thoroughly investigate our bodies and how they function. Our recent knowledge of the chemical and biological sciences has revealed some profound philosophical implications concerning our identity and the identity of other living organisms. Concepts in biology tend to segregate definitions into neat binary terms, much as classical logic does. These include the definitions of living and non-living, cancerous and non-cancerous, animal and human, and male and female, yet such definitions are far from clear. In the most extreme cases, binary definitions within biology that rely on classical logic and the laws of identity could perpetuate chronic illness; they could also, hypothetically, lead to violations of animal and human rights. We must recognize that the traditional logic underlying mainstream biological understanding for the past two thousand years has a number of shortcomings.

Living systems enjoy such a high degree of complexity that it becomes necessary to distinguish them from non-living systems. But the boundary between living and non-living entities quickly becomes vague and hazy when we consider the evolution of life or the origin of the first living organism: if the origin of life is continuous with non-living chemical and physical processes, then at what point does something become living? At what point can we define a substance as a living thing? With the rejection of vitalism, the idea that living things contain an essence fundamentally different from the constituents of non-living things, we have found that all matter is made of the same stuff at the fundamental level of atoms and molecules.

Evidence in the field of evolutionary biology suggests that all living species share a common ancestor, along with the capacity to transition into different species under environmental pressure and natural selection. From this evidence we can also conclude that it is not only life that is vaguely defined in the biological sciences, but also the concept of species. Speciation, too, is a gradual process, a process of transition. Such processes are difficult to define using classical logic because they involve change. At what point does one species transition into another? Later I will illustrate the shortcomings of the biological definition of species by way of a thought experiment. What does this mean for our identity as humans, and as living things in general?

II: The Ship of Theseus and The Problem of The Heap

The problem with our traditional idea of identity is best illustrated by old and common philosophical thought experiments; this is why scientists ought to be speaking with philosophers of science. We can begin with the problem of the grain and the heap. Imagine we are attempting to define a heap of sand and we wish to discover when the heap of sand ceases to be such. The heap consists of grains, so if we were to remove them one by one, it follows that at some point the pile will no longer qualify as a heap. We might be so bold as to define a heap by an arbitrary number, say 30,000 grains. The obvious problem is that there is no functional or phenomenal difference between such a heap and a pile of 29,999 grains. We could continue to remove grains, but it would become difficult if not impossible to tell when the heap becomes a mere pile: for all our efforts we do not seem able to place a number, or any concrete definition, on what constitutes a ‘heap of sand.’ Even with our attempted definition we have but a vague idea of the boundary which separates a collection of grains from an adequate heap. A similar problem is encountered when we talk of the Ship of Theseus: as the ship undergoes renovations, the old parts replaced with new ones, the question of the ship’s identity is raised. If the renovated ship is a new ship, at what point did it cease to be the old one?
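
The sorites argument above can be made concrete in a few lines of code. The sketch below is illustrative only: the 30,000-grain threshold is the arbitrary figure from the text, and the graded membership function (in the spirit of fuzzy logic) is my own assumed alternative, not something the author specifies.

```python
HEAP_THRESHOLD = 30_000  # arbitrary cutoff, as the text itself concedes

def is_heap_binary(grains: int) -> bool:
    """Classical logic: a pile either is or is not a heap."""
    return grains >= HEAP_THRESHOLD

def heap_membership(grains: int, lower: int = 1_000, upper: int = 30_000) -> float:
    """Graded alternative: a degree of 'heapness' between 0.0 and 1.0,
    rising linearly between an assumed lower and upper bound."""
    if grains <= lower:
        return 0.0
    if grains >= upper:
        return 1.0
    return (grains - lower) / (upper - lower)

# The binary definition draws a line no observer could justify:
print(is_heap_binary(30_000))  # True
print(is_heap_binary(29_999))  # False: one grain changes everything
# The graded definition changes smoothly, with no one-grain discontinuity.
print(heap_membership(29_999))
```

The point of the contrast is that the discontinuity lives entirely in the binary predicate, not in the sand: any classical definition must place it somewhere, and wherever it is placed, a one-grain difference flips the verdict.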

The problematic definitions in biology have an identical form. Phenomena in biology are vaguely defined because they emerge from gradually changing events. We can reason similarly about the emergence of life, or about our definition of ‘living things.’ Is random DNA surrounded by a lipid bilayer a living organism? Or does that kind of organization of molecules only become living when some sort of metabolism is involved? The ‘metabolism first’ hypothesis goes so far as to propose that life began as a collection of metabolic chemical reactions catalyzed by simple catalysts (Prud’homme-Généreux & Groenewoud, 2012; Cody, 2004). The hypothesis is bolstered by the observation that certain metabolic chemicals can be synthesized in the presence of simple catalysts such as the transition metal sulfides, catalysts which existed on the prebiotic Earth (ibid.).

But even the ‘metabolism first’ hypothesis cannot say when non-living things become living things. If we define something as ‘living’ merely because it has a metabolism (a chemical reaction or collection of chemical reactions that utilize energy according to the principles of steady-state kinetics), then there are many things that we could, rather absurdly, deem living. We could, for instance, purify the enzymes of fatty acid synthesis and reconstitute a metabolic reaction exhibiting steady-state kinetics in a test tube, but by no means could we call this a living system (Yu et al., 2011). A living system as a whole has a certain degree of irreducible complexity that is hard to define in terms of its constituents. Just as a collection of grains becomes a heap at some point of accumulation, the metabolic reactions and molecules in the living system must give rise to life at some point in life’s evolution. But, like the heap of sand, living things remain poorly defined if we attempt to understand them by way of classical logic and the laws of identity.

III: The Probabilistic Logic of Cancer

Cancer emerges as a result of genetic evolution. The difference between the evolution of cancer and the genetic evolution that gives rise to speciation, however, is that cancer cells (or cells predisposed to cancer) evolve quickly enough for their evolution to be observed. In cancerous cells changes in the DNA accumulate until the cell’s mechanism of division is out of control, and the cells begin to divide rapidly and uncontrollably. The more rapidly a cell divides the more mutations it accumulates; and the cells that accumulate the most division-favoring mutations will out-compete the healthy tissues of our bodies, because dividing cells compete for the same nutrients.

There are many kinds of cancer, and not one is reducible to a single error or mutation in the DNA. At what point, then, does a cell become cancerous? Once again this is a lot like the problem of the heap, or that of the Ship of Theseus. One mutation in the DNA is certainly not enough to make a cell ‘cancerous,’ but it can predispose a person to developing cancer. We know that after a cell accumulates a certain number of crucial mutations it becomes, by our definition, cancerous, but there is no clear indication of when that point is reached. Anyone could be at risk, yet research tends to concentrate on cancer’s medical symptoms rather than on the mechanisms and processes by which a cell becomes cancerous. It should concern us that cancers do not become ‘illnesses’ until they are virulent, by which time it is often too late for effective treatment.

It is simple to identify a malignant tumor, but far harder to identify a tumor that is on its way to becoming malignant. The most effective treatment for a tumor is, as with so many things, prevention. We do not, however, seem to acknowledge tumors until they become malignant, and this is a very dangerous way of thinking about cancer. For the future of medicine it is just as important, if not more so, to understand how predisposed people are to developing malignant tumors as it is to detect whether they actually have them.

Classical reasoning makes a binary distinction between the cancerous and the non-cancerous; probabilistic reasoning, on the other hand, can provide information on the likelihood of a tissue becoming cancerous, or on the average age at which a predisposed individual may start developing a tumor. Such information would be essential to the early detection and treatment of all forms of cancer. By abandoning the traditional laws of identity and adopting probabilistic terms, we may begin to think differently about the nature of well-being and disease. The term denoting the disease then becomes less important than the statistics that indicate an individual’s predisposition towards it.

Mutations in genes like BRCA1 and BRCA2, for instance, increase the risk of breast and ovarian cancers (Antoniou et al., 2003). A woman carrying inherited mutations in BRCA1 and BRCA2 would certainly not be classified as a cancer patient, but the probabilistic nature of the phenomenon, the fact that she is at high risk of acquiring the disease, ought not to be ignored. Let us say she has a 65% chance of acquiring breast cancer; how would one define the tissues in her body? Are they cancerous? Are they 65% cancerous? The traditional laws of identity and classical logic are not sufficient to define and understand this phenomenon, or the evolution of cancer as a chronic disease. Other forms of logic, such as probabilistic logic, have emerged as useful methods for understanding issues of this sort.
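
The contrast between a binary diagnosis and a probabilistic risk profile can be sketched as follows. The 65% lifetime-risk figure is the hypothetical number from the text, not clinical data, and the `Patient` record and its fields are illustrative assumptions of my own.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    label: str
    has_tumor: bool       # the classical, binary fact
    lifetime_risk: float  # probabilistic predisposition, 0.0 to 1.0

def binary_diagnosis(p: Patient) -> str:
    """Classical logic: the patient either is or is not a cancer patient."""
    return "cancerous" if p.has_tumor else "non-cancerous"

def risk_profile(p: Patient) -> str:
    """Probabilistic view: report the predisposition even when no tumor exists."""
    return f"{p.label}: {p.lifetime_risk:.0%} lifetime risk"

carrier = Patient("BRCA1/2 carrier", has_tumor=False, lifetime_risk=0.65)
print(binary_diagnosis(carrier))  # "non-cancerous": the binary label hides the risk
print(risk_profile(carrier))      # the probabilistic description preserves it
```

The binary predicate and the probability do not disagree about any fact; they simply carry different amounts of information, and for prevention it is the probability, not the label, that matters.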

IV: Speciation

Like many other biological concepts, speciation is problematic. Organisms tend to change over time due to mutations in their DNA; and, as I discussed previously, this process of gradual change presents a problem for static conceptions of identity. As a given species changes over time it becomes hard to define exactly when speciation occurs. At what point did hominids become human, for example? And to what extent were our hominid ancestors human? We could say that a hominid is human to the extent that it shares DNA with humans, but how could such a notion survive in light of the similarities of behavior and DNA between humans and chimpanzees? Chimpanzees do, after all, share a great deal of their DNA and behavior with us (Chimpanzee Sequencing and Analysis Consortium, 2005; Prüfer et al., 2012).

It is also apparent that humans have brain structures similar to those of other hominids. Though volumetric analysis shows that humans have, overall, larger brains than other hominids, we nonetheless have brains of a similar kind (Semendeferi et al., 1997). Hominids have frontal lobes, brain areas that govern social behavior, creativity, planning, and emotion. According to evolutionary theory these structures in hominid brains are homologous to those found in humans: humans and other hominids share this brain structure because a common ancestor possessed it, and each inherited its brain structure from the same antecedent.

It is impossible, using the conventional rules of identity, to separate humans cleanly from other animals. As mentioned above, other hominids possess the same brain structures as humans, which implies that hominids are to some extent like humans in behavior and biological constitution. If we compared our hominid ancestor to the Ship of Theseus, humans would be the equivalent of a partially renovated ship, because most of the old hominid structures are still part of the human organism: a human would be the equivalent of a slightly upgraded hominid. Biologists attempt to delineate species boundaries by focusing on reproduction, but this definition of species is problematic on account of its potential to violate human rights. The term ‘human species’ is synonymous with ‘human’ because, on the biological definition, humans are a type of species. But how we generally think about humans is very different from how the biological definition treats the human as a species:

A ‘species’ is generally defined as an organism that is able to reproduce by breeding with another organism of the same sort. This simple classification places us in the category of ‘human’ only so long as we are capable of breeding with another of our sort, i.e. another human. To demonstrate that our definition of ‘human’ as a species differs from our definition of ‘human’ as an individual with moral capacities and rights, I offer another thought experiment: imagine a woman called Nancy. Much to her frustration and confusion, Nancy has been unable to conceive with her husband, Tom. They have seen several doctors, and Nancy is by all accounts healthy: there is nothing wrong with her hormones or her reproductive organs, and she ovulates regularly. Tom, too, is completely healthy. There is no apparent reason why Nancy and Tom cannot have children.

Eventually a scientist in Tom and Nancy’s town caught word of Nancy’s unusual situation. The scientist acquired one of Nancy’s eggs, studied it closely, and soon came to the conclusion that her eggs are simply incompatible with human sperm. According to the biological definition of species, it seems Nancy has become a separate species, diverged from humans. Yet she is human in every other conceivable way. If Nancy is not human in canonical biological terms, should she still be subject to human privileges and treatment? Does she, in short, have human rights?

This thought experiment demonstrates the ethical issues involved in the biological definition of ‘human.’ Nancy is, in any respectable sense, a human being, since she retains all the human traits we, as a species, value. The unfortunate circumstance that her eggs are incompatible with human sperm seems trivial when set beside the rest of her attributes.

V: Biology and Complexity

Biology began as a science of simple ideas and concepts, and the field has grown far more complex as our understanding of the biological and biochemical sciences has progressed through the centuries. If there is an emerging theme within biology and biochemistry, it is that the more we learn about biological and biochemical phenomena, the more complex they seem to become. It is this complexity, together with the constant turnover of biological constituents, that makes classical definition and identification of biological phenomena so difficult. These phenomena cannot be understood through traditional laws of identity and classical logic without gross oversimplification, and such oversimplifications have consequences for how we think about the distinction between humans and animals, how we think about disease and risk, and might hypothetically lead to violations of human rights.


Works Cited

Prud’homme-Généreux, A. & Groenewoud, R. (2012). The Molecular Origins of Life: Replication or Metabolism-First? Introductory Version. National Center for Case Study Teaching in Science.

Cody, G. D. (2004). Transition Metal Sulfides and the Origins of Metabolism. Annual Review of Earth and Planetary Sciences, 32.

Yu, X. et al. (2011). In vitro reconstitution and steady-state analysis of the fatty acid synthase from Escherichia coli. PNAS, 108.

Antoniou, A. et al. (2003). Average Risks of Breast and Ovarian Cancer Associated with BRCA1 or BRCA2 Mutations Detected in Case Series Unselected for Family History: A Combined Analysis of 22 Studies. American Journal of Human Genetics, 72.

Chimpanzee Sequencing and Analysis Consortium. (2005). Initial sequence of the chimpanzee genome and comparison with the human genome. Nature, 437.

Prüfer, K. et al. (2012). The bonobo genome compared with the chimpanzee and human genomes. Nature, 486.

Semendeferi, K. et al. (1997). The evolution of the frontal lobes: a volumetric analysis based on three-dimensional reconstructions of magnetic resonance scans of human and ape brains. Journal of Human Evolution, 32.
