Philosophy and Ethics

This blog follows up issues and ideas from my website: Philosophy and Ethics.

Thursday, 28 November 2013

Natural morality thanks to oxytocin?

I recently (20th November 2013) attended a lecture in London entitled ‘How the Mind makes Morals’, given by Patricia Churchland as part of the series Mind, Self and Person organised by The Royal Institute of Philosophy. She has been working for many years at the interface between philosophy and neuroscience, and is particularly known for her contributions to the Philosophy of Mind. Here’s what she said in the publicity material for the lecture:

‘One tradition in moral philosophy depicts human moral behavior as unrelated to social behavior in nonhuman animals. Morality, on this view, emerges from a uniquely human capacity to reason. By contrast, recent developments in the neuroscience of social bonding suggests instead an approach to morality that meshes with ethology and evolutionary biology. According to the hypothesis on offer, the basic platform for morality is attachment and bonding, and the caring behaviour motivated by such attachment. Oxytocin, a neurohormone,  is at the nub of attachment behavior in social mammals and probably birds. Not acting alone, oxytocin works with other hormones and neurotransmitters and circuitry adaptations.  Among its many roles, oxytocin decreases the stress response, making possible the trusting and cooperative interactions typical of life in social mammals. Although all social animals learn local conventions, humans are particularly adept social learners and imitators. Learning local social practices depends on the reward system because in social animals approval brings pleasure and disapproval brings pain. Acquiring social skills also involves generalizing from samples, so that learned exemplars can be applied to new circumstances. Problem-solving in the social domain gives rise to ecologically relevant practices for resolving conflicts and restricting within-group competition. Contrary to the conventional wisdom that explicit rules are essential to moral behavior, norms are often implicit and picked up by intuition. This hypothesis connects to a different, but currently unfashionable tradition, beginning with Aristotle’s ideas about social virtue and David Hume’s 18th century ideas concerning ‘the moral sentiment.’

The argument was persuasively presented, with many examples of apparent altruism in animals, stemming from the social bonding necessary for bringing up young, hunting for food and so on.
Notice how utterly different this approach to morality is from that which stems from the application of ethical theories. When we engage in Natural Law, Utilitarianism or Kantian ethics, we are applying reason to data – assessing a rational interpretation of the purpose of an activity, or balancing probable outcomes, or looking at the implications of universalising the maxim that lies behind our action. More often than not, this is a subsequent rationalisation of an action that we choose for personal, intuitive reasons. We do not stop in the midst of a crisis and decide whether, on balance, it is right to intervene. We just go ahead and do something, recognising afterwards the way in which our action fits in with our habitual way of understanding the world – an important point made by Iris Murdoch at the end of her book The Sovereignty of Good. More often than not, we instinctively know what we should do, feel guilty if we go against that intuition, and subsequently rationalise our decision, either to justify it or to admit fault.

With Virtue Ethics, of course, the situation is rather different. Here we are examining those qualities that make for the ‘good life’ or ‘human flourishing’ – both of which then need to be defined in terms of the overall nature and purpose of life. But here the key thing, in my view, is less that we justify what we do because it leads to human flourishing than that we naturally seek human flourishing, and wish to cultivate certain qualities because we recognise that they will help us to do so.

So morality is a reflection upon human behaviour; the application of reason to an evaluation of what we do. And that evaluation itself depends on more general ideas about the nature of life. But the way we behave is shaped long before we start to study ethics. We are born with dispositions, develop a character early on, and very soon learn how to deal with our environment – from yelling until we get what we want (seen in all stages of life, with varying degrees of subtlety, from babyhood to old age), to learning to manipulate those around us. We are social animals and learn the rules very early on in life. Our strategies and attitudes are in place by the time we hit school for the first time, as any infant teacher will tell you; we do not wait to study ethics in order to recognise the significance of behaviour.
It seems to me, therefore, that it is entirely right to examine the origins of our moral behaviour in the context of the behaviour of other animals. Okay, we know that we have a bigger brain, but why should we assume that animals do not have feelings and thoughts? Watch hunters stalk their prey, or parents grieve for lost offspring, or pet animals negotiate for attention, and you know well enough that species share a great deal in terms of patterns of behaviour and associated emotions. It can be argued that the behaviour of animals is instinctive rather than rational – but most of our behaviour is instinctive rather than rational. The idea that, of all the species on Earth, Homo sapiens (arrogantly so named) is rational and the rest are simply automata is clearly a bias originating in our limited ability to communicate with other species. Why should we not share the basis of cooperative behaviour with other species, simply because we have the additional benefit of enhanced brain capacity?

What is more, the functional capacity of the brain develops over the first years of life, as new neural pathways are etched upon the cortex. The relationship between brain and social situation is an iterative process: our mental capacity develops in relation to our situation, and then reflects upon it and shapes it. The development of our mental abilities, along with our morality, is a continuously evolving, interrelated process.

By the time we sit down and discuss ethical issues, we have reached a degree of sophistication that is not available to species with less developed brains – but the type of behaviour to which we give our attention is not utterly different from what is happening lower down the intellectual scale. We discuss morality and (perhaps) allow our newly shaped thoughts to influence subsequent behaviour, but is that so different from the reflection at a lower level when an animal realises that, if its behaviour leads it to be separated from the group, it is not going to survive, and it must therefore moderate what it does?
In any developing process there will be levels of complexity at which more sophisticated patterns supervene on the more basic ones. Human behaviour is more sophisticated than that of other species (as far as we know, of course, and judged on our own criteria) but to think that the origins of our social patterning are thereby changed would be naïve.

However, in discussing this issue, it would be all too easy to fall into the trap of identifying certain types of behaviour as ‘moral’ – as though morality were a pattern of behaviour in which only humans could engage. Morality (or ethics) is simply the term we use for discussing and evaluating elements of human behaviour; it is not the same thing as the behaviour it describes. We should only say that someone is acting morally if we have evidence that their action is informed by moral considerations – in other words, that their reason brings articulated values to bear upon their behaviour. In this sense, morality is unique to humans, in that it is only in the case of other human beings that we can appreciate the intellectualising of actions and outcomes that thinking morally involves. That does not mean that other species cannot, in theory, act morally; simply that, if they did so, we would have no means of knowing it. We can observe an animal in a quandary. I remember seeing an absolutely harrowing film in which a female elephant chose to remain with her dying calf, rather than follow the herd and leave it to die alone. By doing so, her own chances of survival were lessened. Clearly, we cannot say that the elephant weighed all this up and then made her choice, but it is clear that the deeply ingrained need for survival was battling with, and lost to, the need to nurture and remain with her young. We see that as morality, simply because – if intellectualised in human terms – it would indeed be a deeply moral altruistic decision. The elephant doesn’t think in those terms, because it is an elephant and has not taken an examination in ethical theory – but the impulses that lead an individual to act in one way or the other are hardly different. We cannot know what the elephant thinks, but equally we cannot know (and are presumptuous to assume) that the elephant does not think.

So how does this relate to oxytocin and neural pathways?

I think it is important not to fall into materialist traps here. Oxytocin may indeed be the hormone that allows levels of trust and intimacy in social interaction – true for both humans and other species – but which came first, the interaction or the oxytocin? Neural pathways may develop with ever-increasing complexity within the cortex during the first years of life, but is that the result of experience, or the means by which one can have experience? The answer, of course, is that it’s not an either/or. More likely some form of feedback loop is in operation. At the risk of an attack of Darwinitis (that wonderful condition, described by Ray Tallis, whereby natural selection is offered as an explanation for absolutely everything), increasing oxytocin might well be a sign that altruism, or care for the young, is generating better results in terms of survival. But that does not, in itself, suggest that we can abdicate all responsibility for our actions on the basis of a diminished quantity of a hormone; a lack of oxytocin is not a valid excuse for acting selfishly! More likely, oxytocin levels increase as a result of our being in situations where care is given and received.

David Hume argued that we have a natural ‘moral sentiment’, displayed by our natural revulsion at the sight of suffering and our natural altruism.  Reason, slave to the passions, builds moral systems upon that natural intuition.  Why has that approach – so obvious when you reflect upon it – not been more popular?  I sense that it has been set aside because it does not allow the intellectual satisfaction of closure on moral matters. Most other theories give the possibility of arguing in favour of a particular course of action and thereby justifying it intellectually.  Natural Law can battle things out against utilitarianism, but both sides are using reason and principles in order to establish an intellectual structure to evaluate (but not explain) behaviour. The reason it does not explain it is that it leaves out of account the deepest impulses that are the actual originators of what we do – modified by our social conditioning and (later) by our intellectual appreciation of what we hold to be of value in life. 

But such intellectual considerations are not the same as the natural desire to do what others might describe as ‘good.’ Would you want to fall in love with someone who was 100% utilitarian?  Or 100% rational, come to that?  Of course not!  Human nature is richly diverse, only partly tamed by intellect.
Our oxytocin level may indeed play a part in our behaviour, originating our impulse to care. If so, we can be happy that the origins of what we think of as moral behaviour pre-date the appearance of humankind.  We may be the only species to rationalise it, but we are certainly not the only species to discover that altruism can get positive results.
What is more problematic, however, is the fact that nature can sometimes also benefit from the opposite behaviour. Ant colonies, so well organised in themselves, are in a state of perpetual war with one another. Groups of animals help one another, but do so by defining their circles of care and excluding others. Hormones play an ambiguous role in life – altruism at one moment, aggression at the next.  In deciding how we should behave, and in controlling the behaviour of others, reason trumps hormones. Or so we hope – as we see in every sitcom depicting the relation between adults and their adolescent offspring. Hormones may ‘run riot’, generating unacceptable behaviour that reason attempts to restrain.

However, when it comes to ethical theory, it is worth keeping in mind the superficiality of any exclusively intellectual approach to human behaviour. If reason alone could determine behaviour, the world’s problems would long since have been solved. Sadly, or gloriously, human behaviour is more deeply and richly embodied. Pass the oxytocin capsules, someone!

Tuesday, 19 November 2013

Is the Philosophy of Religion a mistake?

If you look at most introductory texts, university courses or A-level syllabuses, the Philosophy of Religion seems a reasonably clear and well-defined subject. Start with the arguments for the existence of God, add the classic dilemmas about miracles (thank you, Hume, for getting us started with such straightforward clarity), add the problem of evil (with a good dash of Irenaeus and Augustine, filtered through John Hick) and then move on to examine the nature of religious language (mainly in terms of what sort of knowledge it can yield) and perhaps a nod at religious experience (we all like a bit of spine-tingling-Otto-shaped-numen). It all adds up to a subject which appears to open up the key elements of Western religious belief to intellectual scrutiny. But what does it actually achieve?

I want to argue that, for all the intellectual fascination it can offer, its overall impact in terms of a sensitive understanding of religion is almost entirely negative. Worse, in offering an implicit definition of religion in terms of a set of propositions to be accepted by the faithful and rejected by atheists, it is a subject designed to fail in its basic purpose.

Unless I am mistaken in my discussions with people over the years, in my reading of endless A-level examination essays, or in responding to questions and comments after talks to students, few people find their fundamental view of life changed by the arguments presented within the Philosophy of Religion – and where there is change, it is almost always in the direction of giving up naïve and musty religious ideas in favour of the clear air of secular reason, a kind of liberation from things half-believed. No doubt there are exceptions to that, and you may claim that the Cosmological Argument changed your life and grounded your religious belief, but – if that is your situation – I would ask whether the argument was your only, or principal, reason to change your view, or whether it built upon an experience or a longing that you were already experiencing, just waiting for some intellectual permission to take the spiritual plunge. In reality, most people who practise a religion (and here I’m mainly thinking about Christianity, since that is the religion of most people with whom I’ve discussed these things) find that the arguments in the Philosophy of Religion are interesting, challenging to their beliefs, but not life-changing. They cling to belief in a God of love in spite of the problem of evil; they understand Hume’s argument about miracles, but nevertheless see God as active in their lives, transforming situations for the better. At the other end of the scale, most atheists will set about picking holes in the arguments, and are highly unlikely to engage with them sensitively.

In some other areas of philosophy – Ethics or Political Philosophy, for example – the idea is to engage with a subject, explore its presuppositions and gain a broader appreciation of its place in the sphere of human experience. Hopefully, the person studying ethics is able to be more sensitive to people’s moral arguments, and is helped in making moral choices by the range of arguments studied. Political Philosophy digs down into the presuppositions of political ideas – it may not tell you how to vote, but it will better inform your judgment in such matters.  In these terms, the Philosophy of Religion may appear to give belief an intellectual grounding, but I’m not convinced that it does so for most people. So why might that be?

I believe the problem lies in two historical periods. The first reflects the rediscovery of Aristotelian philosophy and the growth of European universities in the 13th century. Key here is Aquinas, who sought to express Christian beliefs using Aristotelian terms. His arguments for the existence of God effectively establish ‘God’ as the term for an overall rationality and purpose in the universe. This had the effect of making God ‘objective’ – effectively identifying him with the structure of reality – and so placing him within the sphere of thought that included rational analysis and the physical sciences. It is hardly surprising that, in the 21st century, our knowledge of the physical universe is such that Aquinas’ arguments are at best seen as indicating the overall sense of what God is about, rather than as a proof of his existence – for we do not (and cannot) link a religious concept with a universe of which 99% is beyond our perception. Although beyond this briefest of historical references, it is instructive to compare Aquinas with the earlier Anselm, and his ‘faith seeking understanding’, where his famous ‘ontological argument’ depends upon the nature of human awareness rather than an existence that is separated from the religious discourse. ‘That than which nothing greater can be conceived’ is exactly that – something experienced as objective (i.e. as transcending the self) but at the same time an expression of the human quest for meaning. [There is far more to be explored in terms of the contrast between Anselm and Aquinas here, but that will have to wait for a future post.] The second historical period spans the work of Descartes, Hume and Kant. Here it is the theory of knowledge that provides the background to religious questions, and with it the absolutely essential distinction between subject and object. The question is then whether God exists in any objective sense.

Reviewing the Philosophy of Religion as a subject, it is remarkable how much of what is studied is linked, directly or indirectly, to those two historical periods. But I want to argue that this is exactly where the Philosophy of Religion goes wrong. It attempts to place religious concepts within an overall secular philosophy that is concerned with both epistemology and metaphysics (i.e. both the theory of knowledge and the most general framework of thought that attempts to express reality). In this context, God becomes a possible object of knowledge – an object which increasingly becomes intellectually redundant.  The logical outcome of this framework for the Philosophy of Religion is atheism, for the framework has already distanced the question of God’s existence from its origin in religious experience.

In my own introduction to the Philosophy of Religion, I started with a chapter on religious experience, on the grounds that without such experience there would be no religion, and hence no beliefs about which philosophy could frame questions. Too often, in my view, religious experience is regarded as one topic among others, something that can exist quite independently of arguments for the existence of God. But to me that makes no sense. ‘God’ is a word some people use in order to describe the sense of the absolute, the holy, the rich depth of experience that contrasts with our habitual superficiality. ‘God’ is not an object that exists independently of our experience and is then linked to it as cause to our experienced effect; ‘God’ – if that word is to be used at all – is about that experience; signifying that it is ‘deep’ rather than superficial, absolute rather than partial, universal rather than limited.

If we could start again, I would opt for ‘God’ to be an adjective or an adverb, rather than a noun or (worse) a proper name. In its unquestioning acceptance of the philosophical agenda, the Philosophy of Religion kills off its subject before its arguments even get going.

Time and again, even in the days (now almost 40 years ago) when I was still an ordained priest in the Church of England, I turned from such arguments convinced that atheism was the only honest answer. I could not even start to believe in the sort of God being debated (whether in terms of the traditional arguments for his existence, or miracles, or the problem of evil). What I did not fully appreciate at the time was that my problem was not with the answer but with the question. The Philosophy of Religion ensured that I thought of myself as an atheist, because it too often presented and debated a totally inadequate concept of God.

There is, of course, a fundamental difference between Theology and the Philosophy of Religion. Put bluntly, theology takes place within the context of religious discourse and practice, while philosophy of religion takes place outside it. Theology takes an idea or a story and asks what it means for us. What is its significance? What moral demand does it make? To which of our emotions does it appeal? There is rich symbolism, profoundly moving, in some religious rituals or the images that still struggle to survive beneath the tinsel of a modern Christmas. Can there be any deeper expression of what human community is about than a simple act of sharing bread or wine, or of a sense of being cleansed and forgiven than an adult baptism? If these things touch the bedrock of human experience, then that bedrock may be called ‘God’. Ask if God ‘exists’ and you have already missed the point!

And here’s my complaint. Philosophy of Religion, more often than not, encourages students to miss the point. I diverts attention away from the richness of human experience and towards metaphysical impossibilities. There is often more theology in the study of English than Philosophy of Religion – unpack any great work of literature and you engage with ‘being itself’, with the depths of human experience, for good or ill.

And if we’re into complaints, I have another. The problem I found with organized religion is that it is all too easy (given the background assumptions in the Philosophy of Religion) to take Biblical narratives and church doctrines as though they were literally true. Severed from their often painful roots, laid out in superficial take-it-or-leave-it terms, they become a parody of the struggle to understand life. Crudely put: Tick this box and you’re saved!  The ultimate enemy of religious truth is smugness; knowing that hidden secret that guarantees your superiority over the multitude. And that is possible because literal interpretations of scripture and the superficiality of the Philosophy of Religion conspire to create a caricature of religion that is all too appealing to the spiritually insecure.

So, in dealing with the Philosophy of Religion, I’d want to stand back and ask ‘What experienced reality are we talking about here? What is the context of this argument? What does it mean for individuals or for society?’ Only once it is so rooted in our experience that the very question of whether God exists or not is seen as plain silliness, will the Philosophy of Religion reveal its true value. In that sense, I believe that the Philosophy of Religion, as commonly presented, is guaranteed to fail in its assumed task of examining and understanding religion.

Friday, 8 November 2013

Zen and the art of belief

There is a Zen tradition that, in order truly to know something, one must first deny it, thus avoiding prior conceptions and assumptions.

On that basis, the denial of God is an essential first step in any knowledge of God. Only by having the courage to become atheist is it possible to escape the mental traps that lead to inadequate ideas about what ‘God’ might be about and to the narrow and partisan views that have, for too long, blighted much Western religion. Until that step is taken, the ‘believer’ will always be in danger of seeking to defend and fossilise an inadequate concept of God.

Mystics have always described ‘God’ as being beyond thought, and have therefore been dismissed by those seeking intellectual closure on the matter. But they are a reminder of the limits of what is conceptually possible.

The Tao that can be spoken of is not the true Tao. A 'God' who exists (in any normal sense of the word ‘exists’) cannot be God. 

Mental idolatry – proposing, defending and even killing and dying for a particular idea of God – is, sadly, the norm among the most vocal defenders of religion, but actually such strident belief is the most profound atheism, for in selecting a particular concept as the only ‘God’, it shuts down the quest for, or openness to, the ultimate meaning and significance of life, whatever that might be – a meaning and significance that cannot be defined, or even described, but which can only be known by letting go, being receptive, open to intuition.  

Buddha sought a middle path. Most obviously, in practical terms, it was a path between luxury and asceticism. But it was also a middle path between an eternalism in which the self and the gods exist in a fixed and permanent way and a cynical nihilism which denies any sense of identity or value.

‘God either exists or he doesn’t!’  A logical dead end; the one option claims too much, the other denies too much.

To dismiss God, simply because he does not ‘exist’ in any literal sense, does nothing to address the huge issue of what that idea has done – for good or ill – and continues to do in the affairs of man.  It neatly avoids the prior question of what aspect of life ‘God’ language addresses, and whether there are alternative forms of language and activity to address it. 

To return to the Zen tradition…   
If you meet the Buddha on the road, kill him!  If you understand God, become an atheist!  For any Buddha you meet, or God whose existence you can grasp and argue about, is a dangerous illusion.

Should all believers accept a dose of atheism? Should atheists take ‘God’ more seriously? If they don’t, I believe that both will lose out on the quest for truth.

Tuesday, 5 November 2013

Brain states, physicalism and freedom

I recently received the following question from Joe Reynolds (many thanks for sending in that one, Joe!), which got me thinking again on what I regard as one of the toughest questions in philosophy today...

Joe asks...

My interest is 'mental causation', and what I'm trying to understand (I'm ignoring the qualia issue) is why does the non-reductive physicalist / emergentist need to differentiate brain states and mental states if - 

1) reductionist materialists claim that mental properties are physical properties, the latter are causal, so whatever causal efficacy the physical has the mental also has.

2) mental states are able to influence other mental states because they are really brain states.

3) if the strong causal closure argument is correct, the only way to maintain mental causation is to assert type identity reductive physicalism - that mental properties are neurological properties.
But is there some unwanted catch in this... would it be assumed that it meant we were just puppets of determining brain states... so that non-reductive physicalism is necessary to avoid puppet-dom?

My response...

As you rightly comment, the whole reason for trying to establish some form of non-reductive approach is to avoid the inevitable conclusion that, if mental activity is in fact none other than neural activity, we are totally determined by pre-existing brain states.

This, of course, has huge implications. For a start, it takes away any sense of moral responsibility, since what we experience as free choices are in fact determined. Hence praise and blame become meaningless: we are as we are, and do what we are programmed to do.  It also has implications for political theory, and particularly for the democratic idea that individuals can shape the society within which they find themselves. A statistician may argue that the results of an election may be predicted accurately, once a few results are in, but we still need to believe that we have a choice once we are in the polling booth.
So there are obvious reasons why one might want to argue that our minds are more than the sum of our neural activity. Reductive physicalism makes sense and fits with an overall scientific approach to data, but does not accord with our own experience.  Even if it were true, it would be impossible wholeheartedly to believe it to be true in our own case without the collapse of what we take to be normal human experience and interactions.

I tend to assume that when an answer is unacceptable, it is likely that the question is wrongly posed or that some other factor has been left out of account. So what has created this problem?

My hunch on this – and it is no more than that – is that the problem stems from the temptation to think that physical and non-physical entities can be understood and encountered using the same cognitive tools.  In other words, that we should be able to understand a feeling (e.g. happiness) or a disposition (e.g. kindness) using the same sort of sense data that we could apply to understanding a physical object. The reductive approach sees data from the physical sense organs, combined with logical deductions stemming from that data, as the only valid form of cognition – hence mental states must, in some way, be validated by sense experience.

In The Concept of Mind, Gilbert Ryle made the crucial point that both physical and mental operations were valid in themselves, but that they could not be combined – just as you could not have a right glove, a left glove and a pair of gloves. And he therefore pointed out that it was a ‘category mistake’ to attempt to do so: the pair is in a different category from ‘right’ and ‘left’, just as the university is in a different category from the various colleges and libraries of which it is comprised. He, of course, was taking a linguistic approach – that the meaning of mental terms was very different from that of physical terms and that care should be taken when we try to put them together. But I think that there is indeed a category mistake lurking here, and one that is related to the nature of experience…
Experience is not the same as that which is experienced. The act of looking is not the same as the thing that is seen. The act of thinking is not the same thing as the content of thought.

Sentient creatures adapt and survive by being able, through the data received by their sense organs and processed by their brains, to respond to their environment. At a basic, physical level, I see this as no more than a quantitative development of the turning of the sunflower to catch the light. It appears to be qualitatively different because of the degree of complexity that has come about through brain development. We not only respond to our environment but can reflect on that response, and so on. Consciousness then becomes an emergent property. But what does consciousness mean in terms of the experience of being conscious?

I immediately want to return to Kant and the fundamental distinction he made between noumena and phenomena – things as they are in themselves and things as we perceive them to be.  Emergent properties are phenomena; descriptions of states.  But the act of experiencing is noumenal – it is the reality within which things are experienced; it is activity, not content.

Another hunch – or perhaps, more accurately, a mental note I have for further reading and reflection – is that there are two thinkers who are likely to contribute significantly here: Heidegger and Merleau-Ponty. 

Heidegger (in Being and Time), because his idea of Dasein expresses what it is to be a person with a world – prior to the subjective/objective split. In other words, a human being is thrown into a world and lives within it. Dasein without a world is unthinkable, for Dasein is ‘being there’ (or ‘being here’), and thus necessarily being in relationship to the world.

Merleau-Ponty (in The Phenomenology of Perception) adds to this the idea of the ‘lived body’. Like Heidegger, he sees human consciousness as always being immersed in a world.

So how does this relate to the original question about reductive physicalism, causality and determinism?

I think we need to take an approach similar to that of Samuel Johnson, who is said to have responded to Bishop Berkeley’s argument that everything that appears to be in the world is actually in the mind by kicking a stone and declaring ‘I refute it thus.’ Actually, the 18th century wisdom of Johnson anticipates the 20th century arguments of Heidegger and Merleau-Ponty. Berkeley was separating mind from matter and then failing to see how we could know that matter exists, since all is known through the mind. Johnson, by kicking his stone, makes the point that the relationship between self and world is primary, and that questions about how they might be related are secondary.

We are embedded in the world. Our bodies (including our brains) are part of that world, and – when observed – will therefore come within the framework we have for understanding everything (space, time and causality – as Kant). As an observed phenomenon, mind is inevitably reduced to body, for mind is only known through bodily movement (including the firing of neurons), action, language and so on. There is no ‘thing’ that cannot be observed (in theory at least) and therefore no ‘thing’ that cannot be contained within the network of causality.

The mind is not a ‘thing’, in that it exists prior to the subject/object split and is therefore not a part of the world.  If it were part of the world it would be totally determined – a puppet. In so far as the mind is described, it is exactly that. And no doubt that is essential for the practice of psychiatry or psychotherapy – we need to apply rational causality to the mind.

But that’s not how we experience it.  We live ‘forwards’, shaped by our desires, aware of our place within the causal nexus, avoiding predators and seeking food. We live by the choices we make, and we do not live in any number of possible worlds that would have become real if our choices had been other than they were. That process of choosing is experienced as free. Sometimes such freedom is simply a matter of weighing up conflicting pressures on us, or conflicting desires. But if we remove that sense of our own freedom, we do indeed become the puppets that we are observed to be, but which our sense of freedom will not accept as the last word about ourselves.

Mental states, when observed and described, become (and are observed as always having been) brain states; brain states, when observed and described, are merely themselves. That’s the problem. I can only describe a mental state in terms of physical postures, actions, gestures, facial expressions, words and so on. With the benefit of technology I can add to this description an account of brain activity. But that activity is no more a description of what it is like to be in that mental state than is a straightforward description of body and language. And that’s because we lose the essence of the mental as soon as we try to describe it, because we ignore the basic fact that the human self (dasein) is both self and world intermeshed.

There is so much more to be explored here, partly because this question links to so many others. All I can suggest is that the way forward can perhaps be found in terms of embodiment – of the self in the world, rather than being related to the world. Take the latter option, and you will always be a dummy, at least in the eyes of other people!

And here's a further comment from Joe Reynolds...

I agree with most of what you say here, but, prior to that, what I’m trying to understand is whether the conclusion of your first paragraph is necessarily the case.
To paraphrase my original query: the reductive physicalist would say that mental properties are physical properties, and physical properties are causal, therefore mental properties are also causal? Is that so, and, if it is, what does it mean / what are its implications?
You say “if mental activity is in fact none other than neural activity, we are totally determined by pre-existing brain states”. But can that be so? Wouldn’t such a determinist puppet scenario be an evolutionary route to inflexibility and extinction?

Instead, let’s say the brain is a physical-mental ‘duality’. The brain is dynamic, remodels itself continuously in response to experience / reflects the lives we have led – neuro-plasticity. Genes can’t know what demands, challenges, losses the brain will encounter. Nature endows the brain with the flexibility to adapt to the environment it encounters, the experiences it has, the demands its owner makes of it, etc. And buried in all that would be processes of evolution and emergence in the ‘relation between’ the physical and mental, and the unconscious and conscious.

I agree with Joe about the plasticity of the brain/mind - he presents the situation with admirable clarity. The brain responds to experience and is therefore constantly changing, which is also why we develop as persons. But, for a strict determinist, the ghost of the puppet remains, because - although far beyond what we can know, given its sheer complexity - one could argue that, in theory, every event and encounter is also determined.  One is back with the argument that the appearance of freedom is made possible by the impossibility of appreciating all the interlocking causes that go to make up any situation. And so on, and so on, until... the brain hurts!
But, in my view, Joe is fundamentally right in pointing to the evolutionary implications of this - that is why our brains have developed as they have, and their flexibility (itself a product of their complexity) is key to the emergence of mind. Thanks, Joe!