

Consciousness is an Inevitable State of Matter

Evan Warfel
University of California, Berkeley

Hello, this is your brain, reading about your brain, reading about your brain.

Consider the following question: Why are we conscious?

I get it; pondering consciousness sounds like an activity only enjoyed by nerds, people who are high, those of us who have found a moment of post-yoga stillness, or people who fit in all three categories at once. But notice that we do not tell our heart to beat or our cells to grow, we do not have to think about focusing our eyes, and we do not consciously will our bodies to inject adrenaline into our bloodstream when something scary happens. These things happen more or less automatically, and if such highly complex tasks can happen without our attention or willpower, why should other complex tasks—like choosing what to eat for breakfast—require conscious awareness? How hard is choosing which flavor of yogurt to eat? And do we really need to be conscious to determine that we should peel a banana before biting one?

To get to the question of why we are conscious, let us first cover some background on how it happens. Despite what people may think during post-yoga philosophy sessions, everything you have ever thought of — including every time you have felt thirsty or experienced the feeling of being loved, every ambience you have been aware of, every piece of visual, musical, and kinesthetic thought-stuff you have used, and every ounce of hard-earned wisdom, as well as every single bit of fluff passing through your mind — has been, and continues to be, encoded by brain cells that ‘pulse.’ Our conscious awareness is underpinned by pulsating neurons. Contrast this with how computers, a.k.a. non-conscious information processors, deal with information: by working with the states of tiny binary on-off switches and electromagnetic storage. (In exceedingly rare cases — historically, Soviet machines such as the Setun — information has been encoded in the three states of a ternary switch.)

But the neurons in your brain don’t use any particular cell ‘state’ to encode information. Instead, your neurons keep track of information by pulsing: rapidly releasing electro-chemical discharges and then recharging. Like what your heart does, but quicker and nimbler, and on a much smaller scale. More specifically, information in your brain is encoded in the rate (and the change in the rate) of pulsing, rather than in any particular ‘charged’ or ‘discharged’ condition.

I admit that using the rate of a pulse sounds like an overly simplistic mechanism for keeping track of information that ultimately gives rise to conscious experience. If you were to describe a piece of music by the mechanism of foot-tapping, you could have one person tap out how quickly the song should go, another tap to indicate the volume, yet another tap slowly for minor chords and quickly for major chords, and so on. Clearly, you’d need to hire a lot of interns to describe ‘Wonderwall’, and you would need an army of foot-tappers to store the information of a Beethoven symphony. But if you were forced to be clever — perhaps due to constrained resources — you could start to get interesting interactivity with only, say, four neurons. For example, you might have one neuron that pulses only when three other neurons all pulse quickly. Or you could have one neuron that pulses only when three other neurons pulse at the same rate, in agreement. Connect up several of these four-neuron units and you could set up a system of voting and tie-breaking. With such a system you could even get the neurons to fire in the same cascade when given the right input; we might call this a memory. This would be rather impressive for a handful of cells that only know how to change how fast they pulse.
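To make this concrete, the four-neuron unit can be sketched in a few lines of code. This is a toy illustration of rate coding only, not a biological model; the threshold, output rates, and function names are arbitrary choices of mine.

```python
# Toy sketch of the four-neuron units described above. Each input
# neuron is summarized by its firing rate (pulses per second); the
# threshold and output rates are arbitrary, illustrative choices.

FAST = 50.0  # pulses per second counted as "pulsing quickly"

def output_rate(input_rates, threshold=FAST):
    """A neuron that pulses only when all of its inputs pulse quickly."""
    return 100.0 if all(r > threshold for r in input_rates) else 0.0

def agreement_rate(input_rates, tolerance=5.0):
    """A neuron that pulses only when its inputs pulse at (nearly) the same rate."""
    return 100.0 if max(input_rates) - min(input_rates) <= tolerance else 0.0

print(output_rate([60, 70, 55]))     # all fast -> 100.0 (fires)
print(output_rate([60, 70, 10]))     # one slow -> 0.0 (silent)
print(agreement_rate([40, 42, 41]))  # rates agree -> 100.0 (fires)
```

Wiring the output of one such unit into the inputs of another is all that the ‘voting and tie-breaking’ amounts to.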

It turns out that each neuron in your brain connects to, on average, about seven thousand other neurons. In total, you are thought to contain roughly eighty-six billion of these pulsing chumps, to say nothing of the connections between them or any of the other types of brain cells you possess. The number of neurons in your body is so large I feel morally, spiritually, and ethically obliged to announce the figure more than once: eighty-six billion. Each one feverishly pulsing and changing its rate over time, like club-goers over the course of an evening. No matter what you are doing, including sleeping, your neurons are always dancing on electricity, tripping the light fantastic with approximately seven thousand brethren.

Within the crazed activity of our brains, so much information is being processed, transformed and manipulated by our pulsing neurons that we are conscious of only a tiny part of it. Yet any effort to understand why we are conscious is fruitless without articulating what our pulsing neurons enable in the first place.

You are a prediction machine.

Many parts of our brains are dedicated to figuring out what will happen next. For example, while I write this sentence, some parts of my brain are laboring to figure out how parts of your brain will think about what is going to happen when you read it. Other parts of the brain attempt to predict the errors in the main prediction systems, all in the service of accuracy. If you can predict, better than chance, where your next meal will come from, you don’t have to worry as much about that aspect of your survival.

However, some things are too complex for brains to predict on their own. It turns out you can’t intuitively predict how Mars will move across the night sky: if you observe it each night, you’ll find that Mars sometimes appears to go backward. Every so often, the red planet seems to reverse its trajectory for a little while before resuming its original trek across the sky.

If you wanted, you could come up with a formal Martian prediction scheme. To do your scheme justice you’d need research, tools, measurement, theory, other people, and coffee. If you were doing things right, you would need single-origin beans for the hipsters and caffeinated turmeric tea for the hippies. The caffeine would help your researchers deal with the following fundamental difficulty: Mars’s movement contains more variation and unpredictability than what we are used to. We have evolved to learn and then intuitively predict the movement of a ball when it flies at us, but not so much when Mars does something similar. Despite our limitations, we are drawn to predicting the motion of the planets; we want to formally predict lots of things, including how people will respond to different medications, when a heart will stop, when we might find ourselves in the midst of a maple syrup shortage, or—most importantly—precisely when certain individuals (like me) will run out of crunchy peanut butter.

Though we may start off absolutely terrible at formal, explicit predictions in any given domain, we tend to improve. It has taken human beings thousands of years, but we currently have a system of civilization that includes things like predictable sources of food and water, in the form of supermarkets, restaurants, farm supply chains and pipes. To aid our prediction attempts, we have — throughout the entirety of human existence — come up with rules of thumb, mental models, and various tools of thought that make it easier to learn from experience as well as navigate the world we live in. These mental models and concepts serve to translate brain-cell pulsing to real-world phenomena and vice versa. The retrograde motion of Mars, for instance, makes perfect sense as long as the following concepts are part of your worldview: Mars is permanently farther away from the sun than the earth, planets orbit the sun due to gravitational attraction, and when we look at the night sky we see a snapshot of where Mars is in three-dimensional space. Each of these concepts builds on more basic elements, like that an orbit consists of an elliptical or roughly circular shape around a point, the very notion of space, and of physics, gravity, and so on. And each of these concepts is a ‘tool’ we use to think with.

But enough with formal, explicit predictions. Communication is an excellent example of informal, implicit predicting: language consists of words and other tools of thought we use to predict people’s mental states and encourage those people to predict ours. When these predictions are accurate enough, we say that we have successfully communicated. In other words, when we communicate, we trade recipes for predicting (at least some part of) what we are thinking to other people. Most of the time, our brains do this prediction so blazing fast it is as if we have skipped the recipe and instead exchanged actual ‘meaning.’ But the fact remains that when you say ‘chair’ you rely on me to infer what you mean, as it turns out that there are many more types of chairs than anyone can conceive of.

Both formal and informal predictions rely on pulsing neurons as well as on concepts and mental models. Having the right tools of thought is like having a bike that fits your body: different combinations of gears and lever lengths on a bike will beneficially interact with the different lengths and strengths of your limbs. The bike geometry serves to translate your effort into forward motion most effectively for the environment you find yourself in. Similarly, the right set of thought-tools can help translate the stuff you interact with into, quite literally, a quirky configuration of pulsing neurons that fits within your other unique neuronal firing patterns. Without the germ theory of disease, for instance, people interested in advancing human health can work hard, have creative ideas, and burn the midnight oil; they just would not get far for their efforts. Moreover, everybody understands the germ theory of disease slightly differently.

It is not clear, however, that one needs to be conscious to be equipped with the right set of concepts or thought-tools to predict what might happen in the world: the field of machine learning is dedicated to getting computers to perform non-conscious prediction. Its basic premise is that computers can form models of the world and, through a lot of trial and error, figure out how to tweak those models to get better results. At the risk of wading too far into a swamp of poorly defined terms, I submit that the model(s) and parameters that a machine-learning algorithm uses for prediction are a computer’s ‘tools of thought.’ Without the right models, parameters, and weights, accurately predicting aspects of the world becomes impossible for a machine-learning algorithm, similar to humans who lack the right concepts.

One glaring difference between a computer’s predictive models and ours is that we consciously experience utilizing what we know. Not only are we aware of information passing through our minds and bodies, but if you pay attention, you’ll notice that thinking about different subjects or different people corresponds to a different flavor of subjective experience, even during what we would currently call the same emotional state. Yet computers capable of machine learning pose a challenge, which brings us back to our opening question: if reasonably accurate predictions can be made without conscious awareness, why in Darwin’s name are we conscious at all? What advantages does conscious awareness confer?

Consciousness is the quickest route to context.

To understand what advantages consciousness confers for predicting the world, let us further consider the case of unconscious computer prediction. Imagine you had a digital camera strapped to your bike helmet. You could, if you wanted, instruct a computer to predict what kinds of colors it would see next during your commute. If you were to use modern machine learning algorithms, you would avoid giving the computer ‘smart rules’ regarding how to go about making the predictions. ‘Smart rules’ are the kinds of things we humans can observe and articulate when we think we are being clever, like ‘yellow is a rare color in many urban areas, so don’t expect it’ or ‘mauve and teal are only likely to appear during fashion week in New York, or whenever Christo rolls through town.’

The reason you would avoid verbalized smart rules is that they often fail to cover edge cases, not to mention that they are rarely precise enough to be encapsulated by computer code. By the time you come up with all the exceptions, and exceptions to the exceptions, and so on, French verb conjugations would feel like a walk in the park. By the time you pin down precise instructions regarding each possibility — like what constitutes a patch of color in a computer image, or how you might account for the fact that your eyes see color rather differently than computers do (your eyes and brain automatically color-correct for changes in the ambient light), and so on — your code would be much longer than this nearly interminable sentence.

Instead of smart rules, modern machine-learning algorithms take another approach entirely. If we apply the modern machine learning approach to the problem of bike-route color prediction, part of the instructions — or code — you’d develop would define how the computer would know if its predictions were improving. I admit that ‘minimize the error associated with each prediction’ isn’t exactly a James-Bond-worthy mission to charge your computer with, but as long as computers cannot drink martinis, we should be okay with giving them boring things to do. And once you define how the computer should measure its own error, you can leave it to work things out on its own, leaving you free to drink as many martinis as you see fit.

Prediction error has turned out to be both a very difficult and a very fruitful thing to attempt to minimize. One of the biggest single advances in unconscious computer prediction happened about thirty years ago, when David Rumelhart, Geoffrey Hinton, and Ronald Williams worked out how to convince a computer to minimize its own prognostic errors. Their breakthrough was in figuring out how to get the computer to fairly distribute responsibility for the prediction errors across all of the parts of the prediction process. It sounds simple in retrospect: instead of coming up with smart rules, you can have the computer come up with the rules itself, as long as it knows how to improve and whom to blame. Since then, almost all progress in neural networks, a type of machine learning, has consisted of developing fancier ways of utilizing the technique they discovered — one reason for the boom in AI over the past decade is that computer hardware has finally gotten cheap enough that plenty of it can be used for the computationally intensive process Rumelhart et al. derived.
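The principle can be sketched with a toy model. This is not the Rumelhart–Hinton–Williams backpropagation algorithm itself, which distributes blame through multiple layers of a network; it is the same error-driven idea applied to a single linear unit, with a target function, learning rate, and epoch count that are purely illustrative.

```python
# A toy instance of error-driven learning: nudge each parameter in
# proportion to its share of the blame for the prediction error.
# The target function, learning rate, and epoch count are arbitrary
# illustrative choices.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # points on y = 2x + 1
w, b = 0.0, 0.0  # start with an ignorant model
lr = 0.01        # learning rate

for epoch in range(1000):
    for x, y in data:
        pred = w * x + b
        error = pred - y
        # The gradient of the squared error tells each parameter how
        # much it contributed to the mistake, and in which direction.
        w -= lr * error * x
        b -= lr * error

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

The model never receives a ‘smart rule’ about lines or slopes; it only knows how to measure its own error and which parameter to blame for it.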

But what would you do if you were not sure of the problem in advance? What if the only thing you knew was that your information processor would come across completely novel problems? Well, you would probably take the best general prediction-error minimizer you could conjure, and hook it up to something that could define its own problems. This way, no matter what problem your information processor defines for itself, it can learn. Roughly speaking, this describes how (parts of) your mind work. When you learn to throw a baseball, you don’t think about the path of your arm in terms of coordinate positions, or in terms of numerically specified velocity. Instead, you attempt to throw a ball, observe the result, and try again. The specific learning takes place on its own. This general idea applies to other parts of your brain, too: you consciously choose which problems to solve, and then use a combination of conscious and unconscious processing to solve them.

Choosing which problems are worth pursuing in the first place is much harder than the ‘learn by error minimization’ task, especially if you use the same non-conscious paradigm of being blind to the qualities of the information you work with. In a computer, the processor is robbed of context — it ‘knows’ nothing about what the rest of the computer is doing, only that certain operations should be applied to bits. More than that, the main processing chip in your laptop does the same sorts of things regardless of whether you are watching a video of a hydraulic press squishing a toy or working on financial models in a spreadsheet. If you made a computer conscious and kept everything else the same, its internal experience of the information it processes would feel uniform. Or to flip this metaphor around, if we were mostly unconscious, understanding links on Reddit would feel exactly the same as processing the information from the wind on our face. It is this uniformity of information that makes it nearly impossible to define problems worth solving, because all problems worth solving — not to mention almost all of the tasks we do on a daily basis — have lots of aspects to consider.

Why? If a problem has a lot of aspects to consider, it is technically ‘multi-dimensional.’ Having only one kind of experience of data, no matter what you are processing or computing, does not work for quickly understanding and solving multi-dimensional problems. The problems we solve on a daily basis are so multifaceted, in fact, that we use our attention to ignore what we deem irrelevant: there is no way to efficiently process everything. But by consciously experiencing information in fundamentally different ways (emotion, music, sound, touch) we gain access to irreducibly different types of data.

Physiological thirst, for example, corresponds to your body detecting the water content (and osmotic pressure) of select cells. When enough cells are low on water, we experience this information as the visceral feeling of thirst and mild dehydration. Imagine if the information corresponding to thirst, and to everything else, were to use the same mechanism and present itself as push notifications on your smartphone. It would be much more difficult to distinguish vital signals from ones you could safely ignore. We would likely all develop elaborate systems to help filter the signals we were given. Instead, it is much more efficient to have the feeling of thirst manifest viscerally. By being able to perceive the world in multiple, irreducible ways, we are able to use a larger set of tools to quickly perceive, reflect, decide, and learn — conscious processing of information is more efficient.

To see how, consider one of the more esoteric data-visualization techniques: Chernoff faces. Generating a Chernoff face involves mapping multi-dimensional data to different parameters that govern the face’s appearance — imagine that, when drawing a face, the width of the eyebrows, the diameter of the pupils, and the size of the mouth are all driven by the values of select columns within a single row of data. With training, a data analyst could read Chernoff faces (or Chernoff chairs, tables, rooms, and so on) just as easily as we do scatterplots. But the effectiveness of a Chernoff object is only possible with conscious experience: to a computer, the only difference between a set of high-dimensional Chernoff faces and the same data represented as a 1,000-column spreadsheet would be a set of labels or memory pointers; to us the difference is much more profound than a few bits of information. Our conscious experience allows us to perceive (and quickly learn about) the world as a collection of Chernoff objects. A computer, well, not so much.
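The mapping behind a Chernoff face is mechanical and easy to sketch. The feature names, ranges, and example row below are my own illustrative choices, not Chernoff’s original parameterization, which drives many more facial features.

```python
# Sketch of the Chernoff idea: each column of a data row drives one
# facial feature. Feature names and ranges are illustrative choices.

def to_face(row, lo, hi):
    """Map one row of data to face parameters, each normalized to [0, 1]."""
    features = ["eyebrow_width", "pupil_diameter", "mouth_size"]
    return {
        name: (value - low) / (high - low)
        for name, value, low, high in zip(features, row, lo, hi)
    }

# A hypothetical row of (income, age, commute_minutes) with known ranges.
face = to_face((55_000, 34, 25), lo=(0, 18, 0), hi=(100_000, 90, 120))
print(face)  # three numbers in [0, 1], ready to drive a drawing routine
```

The computer sees only these three numbers either way; what the face adds is that a conscious observer perceives them all at once, as an expression.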

In other words, consciousness is the most efficient route to context. It is within the context of how our body is feeling and what we have in the fridge—not to mention how we feel about the things in that fridge—that we make the decision about what to have for breakfast. Call it the argument from multi-dimensionality: it is much easier to make decisions like these by perceiving such contextual pieces of information as different types of sensations rather than have them all be reduced to the same kind of information and processed by a ‘blind’ mechanism.


Four implications stem from this consideration of consciousness that are worth mentioning.

First. The argument from multidimensionality implies that synesthesia should be relatively rare, and that multi-domain synesthesia (where the experience of information from one sense informs the experience of multiple other senses) should be practically non-existent, both across and within species.

Second. After inspecting his own thought process and first-person experience, Descartes famously concluded ‘I think, therefore I am.’ While Descartes’ introspective analysis was hugely influential on the course of Western thought, it clearly could have been better. Had Descartes been more skilled at (or perhaps more aware of Buddhist approaches to) introspective exploration, he might have alighted on the fact that his conscious awareness was more ‘upstream’ than his senses or rational thought-processes. He could have (rightly) concluded that everything he perceived or thought about is ‘downstream’ of a specific non-verbal sense of self, had he ‘looked’ in the right way. Alas, we can only wonder at what the course of Western thought might have been had Descartes been more skilled at the difficult task of introspecting his own mental processes.

Third. The argument from multidimensionality helps solve one of the old zombie problems (in particular, the kind that can’t be solved via video games and practice). Many have conceived of a P-zombie, a realistic and convincing humanoid that can navigate the world but is entirely lacking in conscious experience. Given that such a thing would have to process information to survive, it would presumably rely on (lots of) self-directed machine learning. However, this means that its ‘brain,’ or set of information processors, would be massive. Not only would a self-directed machine-learning automaton require lots of hardware (potentially on the order of rooms of servers), it would also require lots of training data — some of the most cutting-edge neural networks (e.g. generative and conditional adversarial networks) require on the order of 100,000 training images to work effectively. Not only would our automaton need considerable bandwidth for receiving such information, it is beyond anyone’s ken how it would gather enough training data merely as a by-product of just trying to get through the day. One of the main reasons for the rapid progress of machine learning and neural networks is that the cost of the necessary hardware has fallen, which has enabled the use of more hardware and training data, not less. In other words, self-contained P-zombies are impossible as long as our artificial approaches to multidimensional information processing rely on the relatively inefficient mechanism of unidimensional, non-conscious processing.

Fourth. When I said that the entirety of human existence could be described as a progression towards better predictions and better tools for predicting stuff, I was not kidding. As I alluded to above, evidence for a long-term trend of striving for more predictability can be found in several places, particularly in the scientific domains.

Scientific progress has generally concerned itself with explaining and predicting what would otherwise be random data points and facts that don’t quite make sense on their own. Want to know what happens when you split an atom? Is there a formal way to figure out what happens if you take one thing and mix it with another? What if you took three pounds of Taylor Swift CDs and used them to reflect light onto a small sphere? How much useful energy could you create? Would there be any meaningful difference between using her early and her late recordings? Would Mozart symphonies be any better?

As covered above, predicting these things requires domain-specific concepts, mental models, and formalized rules of thumb that have been developed as part of the scientific method. In certain cases, scientific progress towards making the world more predictable has not been in terms of explaining data per se, but in terms of how to go about making predictions more accurate. In the thirteenth century, for instance, a friar named Roger Bacon articulated the abstract concept of experimentation. In what is one of the more important breakthroughs of all time, he helped make the rate of true-knowledge acquisition (i.e. scientific progress) much more predictable. My point is that the history of progress can be seen as a long-term trend of striving towards more predictability. While some aspects of the world are more predictable now, the large increase in the planet’s population means that the world is not becoming uniformly more predictable. Human beings are the most complex and hard-to-predict things we know of, and more than seven billion of them makes for an interesting world. However, an orientation towards more predictability is, on some level, a constant in every person’s life, in every society, in every country, and in every city on the planet.

If you think about it, this is not just a human trend: the history of evolution consists of life forms evolving to better predict the environments they find themselves in. In terms of predicting where food will be next, we are a hell of a lot better at it than bacteria are. What is curious is that the laws of physics tell us that it always takes energy to move and do any sort of work, including getting food. This means that any living thing must attempt to conserve energy, and one great way to do so is to not move to the right when the deer (or complex sugars) are to the left. The argument from multi-dimensionality implies that, over the long term, any spark of life will eventually evolve to include life forms that are on some level aware of the data they process. Given enough time and the mechanism of random mutation, in a world with multi-dimensional problems and without abundant training data, the impulse towards living and reproducing will carry life from the bacterium to a creature that is conscious of some of the information passing through it. All it takes is (a) a system of non-homogeneous self-replicating units, (b) the ability of said units to use information to alter their behavior, (c) issues of survival that can be described in terms of high information dimensionality, and (d) lots of time.

Whether it is on Mars or on a planet way beyond Alpha Centauri, once there is life, you can extrapolate that at some point in the future the creatures that evolve from it will have conscious experiences. Though the conditions for life to flourish are exceedingly rare, the impulse towards increasing in complexity resulting in conscious awareness is not rare at all.

The title for this piece comes from a phrase I heard when noted neuroscientist and consciousness chaser Christof Koch spoke at Berkeley a few years ago. When he spoke about the concept then, I understood it on some level, but was unable to articulate it as I understand it now: just as the conditions for plasma or ice are baked into the physical laws that govern our world, so too are the conditions for consciousness. In the most non-mystical way possible, conscious experience is an inevitable product of the universe’s structure. This reasoning raises the question: what else might be inevitable? It turns out that not only do rats laugh when tickled, they respond to ambiguous stimuli more optimistically immediately afterwards. If consciousness is inevitable, perhaps we will find that laughter is too. The possibilities are incredibly intriguing. But it is your turn now; I will leave it up to you to wonder on your own.

Works Cited

Bacon, Roger. “Opus Majus, trans.” Robert Belle Burke (1928).

Bridges, John Henry. “The Life & Work of Roger Bacon: An Introduction to the Opus Majus.” (1914).

Clark, Andy. “Whatever next? Predictive brains, situated agents, and the future of cognitive science.” Behavioral and Brain Sciences 36.03 (2013): 181-204.

Descartes, Rene. “Discourse on the method of rightly conducting the reason, and seeking truth in the sciences.” (1850). Project Gutenberg. Retrieved from http://www.gutenberg.org/files/59/59-h/59-h.htm.

Hirsh, Jacob B., Raymond A. Mar, and Jordan B. Peterson. “Psychological entropy: a framework for understanding uncertainty-related anxiety.” Psychological review 119.2 (2012): 304.

Ishiyama, S., and M. Brecht. “Neural correlates of ticklishness in the rat somatosensory cortex.” Science 354.6313 (2016): 757-760.

Isola, Phillip, et al. “Image-to-image translation with conditional adversarial networks.” arXiv preprint arXiv:1611.07004 (2016).

Koch, Christof. California Cognitive Science Conference, U.C. Berkeley (2012)

Koch, Christof, et al. “Neural correlates of consciousness: progress and problems.” Nature Reviews Neuroscience 17.5 (2016): 307–321.

McKinley, Michael J., and Alan Kim Johnson. “The physiological regulation of thirst and fluid intake.” Physiology 19.1 (2004): 1–6.

Niiniluoto, Ilkka, “Scientific Progress”, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.)

Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. “Learning representations by back-propagating errors.” Nature 323 (1986): 533-536.

Schwartenbeck, Philipp, Thomas HB FitzGerald, and Ray Dolan. “Neural signals encoding shifts in beliefs.” NeuroImage 125 (2016): 578-586.

Van Gulick, Robert, “Consciousness”, The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.), forthcoming URL = <https://plato.stanford.edu/archives/sum2017/entries/consciousness/>.

Xu, Yang, Terry Regier, and Barbara C. Malt. “Historical semantic chaining and efficient communication: The case of container names.” Cognitive Science (2015).


Is Shame An Emotion?

Dr Kate Kirkpatrick
University of Hertfordshire


One of the most famous passages in Sartre’s Being and Nothingness (1943) is his phenomenological account of shame. But before writing the 650-page piece for which he is best known, he wrote a much briefer—and clearer—work, The Sketch for a Theory of the Emotions (1939). In this earlier book, Sartre describes emotions as a means of escaping the world when it becomes too difficult. Here he calls emotions ‘degradations of consciousness’ (E loc. 688, 700) and ‘magical transformations of the world’ (E loc. 757). In Being and Nothingness, by contrast, shame is presented as a means of ‘realization’, ‘recognition’, and even ‘discovery of an aspect of my being’ (BN, pp.245-6). This paper therefore asks whether Sartre’s phenomenology of shame presents it as an emotion, by his own definition of the term. The answer, it is argued, is no. This is important for the Sartre scholar—because many readers of Being and Nothingness assume that shame is an emotion. And it is important for philosophers of religion and students of atheism—because this conclusion opens up the possibility of reading the early Sartre as a phenomenologist of sin from a graceless position.

This paper is structured thus:

  • What are emotions?
  • What is shame?
  • Is shame an emotion?
  • If not an emotion, then what?

What Are Emotions?

What are emotions? If classical poets are to be believed, they are the effects of psychosis, powerful but fleeting follies: Sappho described love as ‘a kind of madness’ and Horace likened anger to riding a wild horse. Such characterizations of emotion are not confined to ancient sources; contemporary language frequently implies that emotions are passively endured: one falls in love, is heartbroken, paralyzed by fear, or haunted by remorse (cf. Solomon, 2006, p.2).

Sartre wrote on emotions in most of his philosophical works [1]; and for those who are not familiar with his historical context, it is worth noting that when he studied ‘philosophy’ in France in the 1920s and 30s, a different disciplinary paradigm was the norm. Philosophy was then understood to comprise four areas: logic, ethics, metaphysics, and psychology (Elkaïm Sartre, 2004, vii). In the Sketch Sartre describes his project as ‘an experiment in phenomenological psychology’ in which he will ‘study emotion as a pure transcendental phenomenon’. He does not wish to consider particular emotions but rather seeks ‘to attain and elucidate the transcendental essence of emotion as an organized type of consciousness’ (E loc. 104). He criticizes psychological accounts of emotion for failing to recognize the proximity of the investigator to the thing investigated, and for seeing emotions as ‘facts’ that are insignificant (in the sense of not conveying meaning). On Sartre’s view, ‘facts’ will never add up to a satisfying picture of human nature. Phenomenology, he explains, ‘is the study of phenomena—not facts’, and this method is more likely to offer a satisfactory account of emotion (E loc. 122).

But before giving his account Sartre first explains why the psychologists are wrong; his targets are the theories of William James, Pierre Janet, and Sigmund Freud [2]. Sartre took issue with the notion of emotions as things which afflict or, as Freud would put it, ‘invade’ us. The metaphors of passivity described above were ratified in the early twentieth century in a process of what Robert Solomon calls ‘scientific canonization’ (Solomon, 2006, p.2) in the theory of William James [3]. On James’s view, emotions are epiphenomenal: ‘they are the products of bodily changes, but they do not themselves cause action’ (Deigh, 2009, p.20). Deigh gives the following example of fear to illustrate James’s account. According to common sense, if I see a bear charging, this perception will cause me to feel fear and then run. But on James’s account the perception of the charging bear causes the running, and the feeling of this bodily movement is the fear. As Sartre puts it, James thinks states of consciousness (i.e. the emotion fear, in this example) are consciousness of physiological states (i.e. the body running, elevated heart rate, etc.) (ibid.).

In a move that anticipates some of Damasio’s objections in Descartes’ Error [4], Sartre objects to this ‘peripheric’ view of emotions because it treats consciousness as a ‘secondary phenomenon’, and emotion as a disorder or disruption of normal physiological functioning. On the contrary, he suggests, ‘Emotional behaviour is not a disorder at all. It is an organized system of meaning aiming at an end. And this system is called upon to mask, substitute for, and reject behaviour that one cannot or does not want to maintain’ (E loc. 287 ff.).

For Sartre, emotion is a way of being conscious of the world. But he takes issue with the psychologists’ assumption that the consciousness of an emotion is a reflective consciousness (E loc. 457) [5]. For Sartre, emotions are ‘set-back’ behaviours, pre-reflective attempts to defuse otherwise unmanageable situations [6].

The world is difficult. This notion of difficulty is not a reflective notion which would imply a relationship to me. It is there, on the world; it is a quality of the world which is given in the perception […]

He concludes therefore that emotion is:

A transformation of the world. When the paths traced out become too difficult, or when we see no path, we can no longer live in so urgent and difficult a world. All the ways are barred. However, we must act. So we try to change the world, that is, to live as if the connection between things and their potentialities were not ruled by deterministic realities, but by magic (E loc. 537).

To make this clearer, let us reconsider the bear attack scenario. On Sartre’s view, if someone were being chased by a bear and fainted from fear, the fainting would constitute an annihilation of that fear. Emotion is thus an ‘escape’ in which ‘the body, directed by consciousness, changes its relations with the world in order that the world may change its qualities’ (E loc. 552). He calls emotion a magical behaviour which ‘tends by incantation to realize the possession of the desired object as instantaneous totality’ (E loc. 625) [7]. In the bear attack, the pursued person desires to remove themselves from the situation, and fainting enables them to fulfil that desire (though perhaps not in the most reasonable of ways).

Emotion thus constitutes the ‘degradation’ of consciousness (E loc. 681, 754). Sartre writes that emotion is ‘an abrupt drop [chute] of consciousness into the magical’ (E loc. 817). Space prohibits saying much more about Sartre’s theory, but for the purposes of this paper it may be useful to close this section with an example of an emotion affecting interpersonal relations. Among Sartre’s examples are cases reported by Janet, in which psychasthenic patients want to confess something but – before being able to – break out into uncontrollable sobbing or hysteria (E loc. 245). For Sartre, the magical effect of this behaviour is that it conveniently transforms a potential judge into a potential comforter.

What is Shame?

The passage on shame occurs in Part III of Being and Nothingness, in the first subsection of the first chapter on the existence of Others – entitled simply ‘The Problem’. Sartre employs the French word honte here, not pudeur. Sartre says that shame is a ‘mode of consciousness’ which has an identical structure to others he describes, i.e. that it is a ‘non-positional self-consciousness, conscious (of) itself as shame, […] accessible to reflection’ (BN, p.245). Shame’s structure is intentional:

It is a shameful apprehension of something and this something is me. I am ashamed of what I am. Shame therefore realizes an intimate relation of myself to myself. Through shame I have discovered an aspect of my being. Yet although certain complex forms derived from shame can appear on the reflective plane, shame is not originally a phenomenon of reflection. In fact, no matter what results one can obtain in solitude by the religious practice of shame, it is in its primary structure shame before somebody (BN, p.245, italics original).

Shame is, on his definition, ‘shame of oneself before the Other’ (BN, p.246); it concerns how I appear to Others rather than how I ‘exist’ myself. To understand this distinction we must briefly consider Sartre’s tripartite phenomenology of the body. Sartre described ‘the knowledge of the nature of the body’ as being ‘indispensable to any study of the particular relations of my being with that of the Other’ (BN, p.383), and it is therefore indispensable to any account of the experience of shame.

The ontology of the body is comprised of three levels (which are not necessarily separable in experience, though they can be isolated phenomenologically):

  1. The body as being-for-itself (for which he also uses the term ‘facticity’) (BN, pp.330–62).
  2. The body-for-Others (BN, pp.362–75).
  3. What Sartre calls the ‘third ontological dimension of the body’ (BN, pp.375–82).


On the first level, the body is the manner in which I exist pre-reflectively: ‘the body is lived and not known’ (BN, p.348). Sartre writes that ‘my body as it is for me does not appear to me in the midst of the world’ (BN, p.327). It is not a thing but rather ‘a transparent medium for my experience of the world, but also as somehow surpassed toward the world’ (Moran, 2009, p.43). It is a conscious structure of consciousness, but a point of view from which I cannot have a point of view – for though I can see my eye reflected in a mirror I cannot, as Sartre puts it, ‘see the seeing’. The body at this level is not something one can intuit as an object: following Marcel, Sartre is emphatic that I am my body (BN, p.342) [8].

The second level Sartre expounds is the body as seen rather than lived (le corps-vu rather than le corps-existé). This is the domain of the body as utilized and known by Others, studied and idealized by the ‘objective sciences’. I do not know from my own experience that I have a brain or endocrine glands, for example, but I learn that I have them from others. On the first level, the body is the centre of reference, the point of view from which I cannot have a point of view. On the second, however, my body appears as the ‘tool of tools’ in my instrumental engagement with the world. It appears as ‘a thing’ which I am.

The distinction arises because the body of another is not given to me in the same manner as my own: ‘it is presented to me originally with a certain objective coefficient of utility and of adversity’ (BN, p.364). I assess the Other in terms of what help or hindrance he constitutes to my own pursuits. The Other, therefore, is given in a thing-like manner, as an object (BN, p.371, 374). The recognition that bodies are viewed as objects in this manner reveals the third and final ontological level.

Here, embodiment entails that ‘I exist for myself as a body known by the other’. We experience our bodies not only as our own, but as reflected in others’ experience: ‘the Other is revealed to me as the subject for whom I am an object’ (BN, p.375). This is the level on which we experience things like shame and embarrassment; Sartre writes that ‘I cannot be embarrassed by my own body as I exist it. It is my body as it may exist for the other which may embarrass me’ (BN, p.377) [9].

It is this dimension of the body which exposes us to what Sartre describes as the omnipresent ‘look’ or ‘gaze’ of the Other. Though Sartre clearly does not use the term ‘omnipresent’ in an empirical sense, he says that in the experience of being seen we are ‘imprisoned’ by the Other’s gaze, because the Other deprives us of control over how we see our world and – more importantly – ourselves. Just as my own gaze reduces Others to their instrumentality, the gaze of the Other reduces me to the status of mere object. We experience shame, Sartre writes, not because we are this or that object in particular, but because we are an object:

[It is a feeling] of recognizing myself in this degraded, fixed, and dependent being which I am for the Other. Shame is the feeling of an original fall, not because of the fact that I may have committed this or that particular fault but simply that I have ‘fallen’ into the world in the midst of things and that I need the mediation of the Other in order to be what I am (BN, p.312, italics original).

It is in the context of this discussion that Sartre explicitly refers to the Genesis account and introduces theological language into his phenomenology [10]. He writes that the modesty or fear felt at being discovered in a state of nakedness is only:

A symbolic specification of original shame; the body symbolizes here our defenceless state as objects. To put on clothes is to hide one’s object-state; it is to claim the right of seeing without being seen; that is, to be pure subject. This is why the Biblical symbol of the fall after the original sin is the fact that Adam and Eve ‘know they are naked’ (BN, p.312).

Here as elsewhere Sartre seems to prioritize ‘being seen’ by Others over our own ‘seeing’ in the project to define ourselves (Moran, 2009, p.53). This is important because for Sartre the body as Others encounter it – that is, the body in its social, intersubjective context – is a domain of contestation and conflict: ‘Conflict is the original meaning of being-for-others,’ he writes (BN, p.386). Human relationships perpetually oscillate between mastery and slavery. But despite the struggle that existence with Others entails, for Sartre, the Other performs a necessary role: the Other reveals something I cannot learn on my own, which is how I really am. It appears to us that the Other can achieve something ‘of which we are incapable and yet which is incumbent upon us: to see ourselves as we are’ (BN, p.377). As Joseph Catalano (2010) puts it, ‘We are born into the world twice, once from the womb of our mothers and then again from our relation to others’ (p.77).

Shame, for Sartre, plays a revelatory role: it ‘reveals to me the Other’s look and myself at the end of that look. It is the shame or pride which makes me live, not know the situation of being looked at’ (BN, pp.284–5, italics original). He goes on to say that shame involves ‘recognition of the fact that I am indeed that object which the other is looking at and judging’ (BN, p.285) [11].

Is Shame an Emotion?

Now that we have laid this rudimentary groundwork we can return to the question this paper intends to answer: is shame an emotion? We have seen that emotions, for Sartre, are a means of escaping the world’s difficulty. They constitute a ‘degradation’ of consciousness, a ‘fall’ into magical thinking. It is the contention of this paper that ‘shame’ cannot be said to function in this manner. Let us first consider the alternative view: how might shame be said to constitute an escape? It is useful in this respect to recall the example given earlier, Janet’s psychasthenic. On Sartre’s view the psychasthenic’s emotional display transforms a difficult interpersonal situation – between confider and potentially condemning judge – into a more comfortable one of victim and consoler.

If shame similarly constitutes an emotional escape route, it would have to be a kind of attempt at atonement, a recognition or acceptance of and apology for being what the Other sees me to be. Any physiological element – blushing or covering my nakedness, for example – would have to be directed towards some magical recasting of the world. But Sartre’s account of shame is not directed towards restoring my relationship with the Other – which, on his view, is impossible by any means. He writes:

[In shame] in the first place there is a relation of being. I am this being. I do not for an instant think of denying it; my shame is a confession. I shall be able later to use bad faith so as to hide it from myself, but bad faith is also a confession since it is an effort to flee the being which I am. But I am this being […] (BN, p.285).

In Sartre’s account in Being and Nothingness shame has an ontological dimension: it seems to be revelatory of the real rather than a descent into magical thinking. It is not the flight of bad faith; it is the dawning of recognition, unpleasant though that recognition may be.

Given this description, shame cannot rightly be called an emotion on Sartre’s own definition of the term. Before going on to say what it can rightly be called, however, we must briefly consider a potential objection, namely whether Sartre’s definition of the term might have changed between 1939, when the Sketch was published, and 1943, when Being and Nothingness appeared. This objection is easily dismissed. In the three places where Sartre refers explicitly to the Sketch in Being and Nothingness, he introduces his comments with an ‘as we have shown elsewhere’. He does not suggest his earlier view was wanting, but rather reaffirms that:

Emotion is not a physiological tempest; it is a reply adapted to the situation; it is a type of conduct, the meaning and form of which are the object of an intention of consciousness which aims at attaining a particular end by a particular means. […] There is an intention of losing consciousness in order to do away with the formidable world in which consciousness is engaged and which comes into being through consciousness (BN, p.467) [12].

But his phenomenology of shame does not fit this category. It does not result in a loss of consciousness, or an escape from discomfort, but rather plunges me deeper into the uneasy awareness that I am not always what I desire to be in the eyes of the Other.

Conclusion: If Not Emotion, Then What?

If shame is not an emotion, then what is it? Brevity prevents me from offering a fully developed argument for an alternative here, but I will adumbrate the argument I have made elsewhere. In Sartre’s phenomenology of the third ontological level of the body he moves from phenomenology as a descriptive – and therefore purportedly neutral – practice to a phenomenology which Ricoeur (1974) might call hermeneutic. Sartre describes emotion as a ‘fall’ of consciousness. But in his depiction of shame he brings in an imaginary (in a Le Doeuffian sense) which is no longer restricted to discrete lapses [13]: he describes shame as ‘the feeling of an original fall’ (BN, p.312, italics original), invoking the Genesis account to describe the vulnerability of nakedness as ‘a symbolic specification of original shame’.

It is illuminating to note here that Sartre departs not only from descriptive phenomenology but from Damasio’s account; for the latter, shame is a ‘secondary’ emotion which develops through social experience (Damasio, 1994, pp.134-9). Unlike emotion, which is a means of escape, Sartre’s shame is inescapable; it requires no empirical observer to be revelatory of the real, and the reality it reveals is a ‘fall’ from which we cannot extricate ourselves. This is philosophically significant for the methodological reasons already given: it raises the question of whether Sartre’s phenomenology is purely descriptive. But it is also theologically significant because Sartre – whose self-proclaimed project was to ‘draw all the consequences of a consistent atheist position’ – has given an account of shame (and indeed, human consciousness and relations with Others) which bears a striking resemblance to a certain formulation of the doctrine of original sin. On Sartre’s view, we are separated by nothingness from ourselves and Others. There is no God from whom to be separated; but neither is there grace through which to be reconciled to Others or ourselves.


[1] Cf. Hatzimoysis (2009, pp.223-4) for a partial catalogue.

[2] Some argue that Janet is the founder of psychoanalysis rather than Freud (in autobiographical writings Freud felt the need to state that he had not plagiarized Janet [Freud, 1989, p.11]).

[3] It is now called the James-Lange theory, on account of its having been simultaneously developed by James in America and C.G. Lange in Denmark.

[4] Damasio (1994, pp.129-31, 189-90).

[5] He argues that ‘unreflective behaviour is not unconscious behaviour; it is conscious of itself non-thetically and its way of being thetically conscious of itself is to transcend itself and to seize upon the world as a quality of things’ (E loc. 520).

[6] Though clearly one can be reflectively conscious of feeling an emotion, this implies a step back from it.

[7] In calling emotion ‘magic’ he appropriates a familiar label in the anthropology of religion, found in the works of Frazer, Levy-Bruhl and others (cf. Anders [1950, p.554]).

[8] See Marcel’s Metaphysical Journal (1927) for discussions of incarnation and Mui (2009) for a discussion of Sartre’s indebtedness to Marcel.

[9] It is important to distinguish between shame and embarrassment. As Galen Strawson points out, though past embarrassments can supply one with funny stories to tell, past shames and humiliations are not usually a source of amusement (Strawson, 1994).

[10] Ricoeur (1974) might suggest that this is a move into the level of hermeneutic phenomenology or interpretation.

[11] On hearing footsteps see BN p.284.

[12] For the other explicit ‘as we have seen’ references to the Sketch, cf. BN p.413, 467, 596 cf. also BN p.370 on anger.



(Primary Text Abbreviations)

BN | Being and Nothingness, trans. Hazel Barnes, London: Routledge, 2003.

E | Emotions: The Outline of a Theory, trans. Bernard Frechtman, New York: Open, 2012. Kindle Edition. References given in brackets indicate the relevant Kindle location.

(Secondary Texts Cited)

Anders, Günther Stern (1950) ‘Emotion and Reality’, Philosophy and Phenomenological Research, 10/4 (June), 553–62.

Catalano, Joseph (2010) Reading Sartre, Cambridge: CUP.

Damasio, Antonio (1994) Descartes’ Error: Emotion, Reason, and the Human Brain, London: Vintage.

Deigh, John (2009) ‘Concepts of Emotions in Modern Philosophy and Psychology’, in P. Goldie (ed.) The Oxford Handbook of Philosophy of Emotion, Oxford: OUP.

Elkaïm Sartre, Arlette (2004) ‘Historical Introduction’ to Jean-Paul Sartre, The Imaginary, trans. Jonathan Webber. London: Routledge.

Freud, Sigmund (1989) An Autobiographical Study, New York: W.W. Norton.

Hatzimoysis, Anthony (2009) ‘Emotions in Heidegger and Sartre’, in P. Goldie (ed.) The Oxford Handbook of Philosophy of Emotion, Oxford: OUP.

Moran, Dermot (2009) ‘Husserl, Sartre and Merleau-Ponty on Embodiment, Touch and the “Double Sensation”,’ in Katherine J. Morris (ed.), Sartre on the Body, London: Palgrave MacMillan.

——(2011) ‘Sartre’s Treatment of the Body in Being and Nothingness: The “Double-Sensation”’ in Jean-Pierre Boulé and Benedict O’Donohoe (eds), Jean-Paul Sartre: Mind and Body, Word and Deed, Newcastle: Cambridge Scholars Publishing.

Mui, Constance (2009) ‘Sartre and Marcel on Embodiment: Re-evaluating Traditional and Gynocentric Feminisms,’ in Katherine J. Morris (ed.), Sartre on the Body, London: Palgrave MacMillan.

Solomon, Robert C. (2006) Dark Feelings, Grim Thoughts, Oxford: OUP.

Strawson, Galen (1994) ‘Don’t tread on me,’ London Review of Books 16/19, 11–12.

Zahavi, Dan (2010) ‘Shame and the exposed self’, in J. Webber (ed.), Reading Sartre: On Phenomenology and Existentialism, London: Routledge.


Kate Kirkpatrick is Lecturer in Philosophy at the University of Hertfordshire and Lecturer in Theology at St Peter’s College, University of Oxford. She is the author of Sartre and Theology (Bloomsbury, 2017) and Sartre on Sin (Oxford University Press, 2017).

A Swedish Virago: Queen Christina of Sweden on Dualism and the Passions

George P. Simmonds
Oxford Brookes University


Queen Christina of Sweden (1626-1689) is seldom credited for her contribution to seventeenth-century philosophy. Indeed, most historians cast her simply as a bygone monarch, albeit a most idiosyncratic one [1]. To some she is remembered as the mercurial spinster who went about Europe in men’s clothing, unique in her mastery of equestrianism, shooting and military strategy [2]. Most recall her for her involvement in René Descartes’ death, which occurred in the midst of the Swedish winter as he laboured to satisfy her educational demands [3]. The more considered historian might meanwhile describe Christina as the ‘intelligent, independent and artistic sovereign’ (Philippe, 1970, p.695) who possessed the greatest book collection in Europe (Birch, 1907, p.8), who dreamt of making Stockholm ‘the Athens of the north’ (Conley, 2011, §1), and who may well prove to be the closest thing to a Platonic philosopher-queen history has ever seen.

At birth she was proclaimed the male heir to the Swedish empire, and was upon the discovery of her true sex spurned by her mother, who had once again failed to provide King Gustav Adolphus with a son (Stolpe, 1966, p.37). The king proved more tolerant: he would later name Christina his heir and afford her the exacting education suited to a prospective emperor [4]. This unusual series of events came to epitomise her legacy as she assumed the role of the honorary male, adopting the character and temperament of a king in a blatant rejection of womanly life no doubt encouraged by her mother’s refusals. This legacy was a relatively good one, however: under Christina’s rule Sweden saw great relief in the Peace of Westphalia, bringing an end to the Thirty Years War, and later withstood severe political tumult without any major civil conflict (Åkerman, 1987, p.21).

The queen has since been dubbed ‘one of the wittiest and most learned women of her age’ (Stephan, 2006). It is said that she studied ten hours a day, leaving no time to keep up royal appearances. Slovenly dress and tangled, unruly hair quickly became her signature aesthetic (Goldsmith, 1956, p.52) [5]. She was by no means plain in the company she kept, however. Throughout her tenure Christina had a number of distinguished intellectuals at hand, a coterie including Isaac Vossius, Samuel Bochart, Nicholas Heinsius and of course the hapless René Descartes. She valued these men as ‘living libraries,’ as silos of information she admired but ultimately viewed as ‘poor advisers in affairs of the great world’ (M, p.25). This reluctance to heed the counsel of others, together with her scholastic bibliomania, enabled the events of 1654 in which the ‘eccentric scholarly creature’ turned Catholic and scandalously abdicated her father’s throne (Fraser, 1989, p.252). As Birch (1907) explains in the Maxims’ introduction:

Must We Quine Qualia?

George P. Simmonds
Oxford Brookes University


It is no secret that qualia possess a number of enemies in the philosophy of mind, and that the majority of these enemies advance from a materialist position allied to the methods of scientific reduction. Few of these opponents have done so with as much vigour as Daniel Dennett, however, who in his paper ‘Quining Qualia’ proposes we at long last put our cognitive fantasies to bed. In this paper I intend to analyse Dennett’s claim in the interest of suggesting that his dismissal of qualia exceeds the bounds of moderation.

Part I: Qualia

Qualia are the ‘raw feels’ of conscious experience, viz. what it is like to experience something [1]. A quale might manifest itself as a perceptual event, a bodily sensation, an emotion, a mood, or even – according to the likes of Strawson (1994) – a thought or disposition. They constitute the greenness of green, the saltiness of salt, the hotness of anger, and that thing which ‘give[s] human consciousness the particular character that it has’ (Ramachandran & Hirstein, 1997, p.430). What is it like to gaze upon a setting sun, or a lunar eclipse? What is it like to feel joy? What is music like? These are all questions relevant to the subjective character of experience, a phenomenon which itself sits ‘at the very heart of the mind-body problem’ (Tye, 2013, preface).