MAKING OUR OWN LUCK

Article originally published in Ratio 20 (2007), 278-292


 

DAVID HODGSON

Abstract

It has been contended that we can never be truly responsible for anything we do: we do what we do because of the way we are, so we cannot be responsible for what we do unless we are responsible for the way we are; and we cannot be responsible for the way we are when we first make decisions in life, so we can never become responsible for the way we are later in life. This article argues that in our consciously chosen actions we respond rationally to whole ‘gestalt’ experiences in ways that cannot be pre-determined by pre-choice circumstances and laws of nature and/or computational rules; and that this means we are partly responsible for what we do, even if we are not responsible for the way we are.

 

It has been widely contended, notably by Galen Strawson,[1] that we can never be truly responsible for anything we do, on the basis of the following argument: we do what we do because of the way we are, so we cannot be responsible for what we do unless we are responsible for the way we are; and we cannot be responsible for the way we are when we first make decisions in life (that must be all down to genes and environment), so we can never become responsible (through earlier decisions) for the way we are later in life. It’s all a matter of luck.

          This is a persuasive argument, but I believe there is a good answer to it, one that requires consideration of just how human beings may be different from machines such as computers or autopilots. Here it is.

 

Autopilots

Autopilots fly aeroplanes automatically. Generally, they are used for maintaining steady flight in unproblematic circumstances, and generally human supervision is maintained in case circumstances change and human intervention is needed.

          However, as computers and artificial intelligence systems become progressively more powerful, it can be expected that autopilots will become capable of progressively wider use, extending to flying in the most difficult circumstances and communicating with air traffic control systems and other aircraft. The need for human supervision will be reduced, and may eventually be eliminated.

          Autopilots and associated automatic systems could also take over the task of monitoring the condition of aircraft, maintaining them in good condition and causing repairs to be effected when necessary. In addition to carrying out regular maintenance tasks or overseeing other automatic systems that do so, advanced autopilots could detect circumstances calling for action to avoid or prevent damage, and could bring about such action; and could also detect damage calling for repair outside regular maintenance, and could cause it to be rectified.

          For autopilots to be capable of all these things, they would have to be able to receive and act upon information about an aeroplane and its environment, including information communicated by other systems. It may be that these autopilots would learn to perform some of their tasks by training, rather than being wholly programmed in advance.

         

An Analogy

Some people regard persons as being like aeroplanes, and their brains as being like advanced autopilots, directing their bodies through life.

Our brains have not been designed, constructed and programmed by human beings; but they can be regarded as having been constructed and programmed by physical and chemical processes, in accordance with designs encoded in our DNA and produced by millions of years of evolutionary trial and error, and also under the influence of environmental factors obtaining during gestation and childhood. We too have automatic systems for maintenance and repair, and our brains can initiate further steps for maintenance, repair and avoidance of damage when we become aware of injuries and threats of injuries. (In our case, however, such steps are often the result of our being alerted and/or motivated by feelings of hunger or pain or desire to avoid pain; and I will be contending that this could not be the case with autopilots.)

I raise three possible difficulties with this analogy, the first two of which I believe carry little weight except in combination with the third.

First, our brains and bodies are living things, while autopilots and aeroplanes are not. However, science treats living things as physical systems, operating in accordance with the same physical laws as non-living systems. Parts of our bodies can be replaced successfully by non-living matter, and the same may be true of our brains. There seems no reason in principle why the functioning of our brains cannot be likened to the functioning of non-living systems, such as computers or autopilots.

Second, the tasks undertaken by our brains are vastly more varied and challenging than those that would be undertaken by even the most advanced autopilots. But this may be considered merely a matter of degree.

Third, unlike autopilots, our brains support some processes that have a subjective experiential aspect as well as an objective physical aspect. That is, some processes of our brains involve our having conscious experiences, including visual and auditory experiences, conscious thoughts, and feelings such as pain and hunger; whereas (HAL in the movie 2001 notwithstanding) it seems unlikely in the extreme that autopilots, however advanced, would have conscious experiences. Scientists do not have the faintest idea how to go about constructing or programming a machine to have conscious experiences, much less to use them; and there is no reason to think that advanced autopilots would (for example) have or need to have feelings of pain to alert them to damage or threats of damage, or to motivate them to avoid damage or have it rectified. I believe this is by far the most important of the obvious differences between our brains and autopilots, and one that may give the other two differences significance they would not otherwise have.

 

The Consensus

Many scientists and philosophers do not see this third difference as being very important. There is a broad consensus among scientists and scientifically-oriented philosophers that conscious experiences have no causal input into our decisions and actions over and above the effect of physical processes of our brains that produce the conscious experiences and cause all our physical movements. And there are seemingly powerful reasons for this view.

First, it is widely accepted that everything that happens, in the physical or material world at least, happens in accordance with physical laws of nature engaging with physical features of the world, being either wholly determined by these laws and features, or else happening randomly within probability parameters determined by them. It is said that the physical world is closed to causal influences that are not themselves physical.

          Second, it is reasonable to believe there are psycho-physical laws that correlate the physical and experiential aspects of brain processes, so that an experience of the type Xe occurs whenever a brain process of the type Xp occurs. Therefore, it would seem, whatever role might be played by the experience in the causal unfolding of events could be played, at least indirectly, by the corresponding physical brain process; so that although an autopilot could not feel pain or know what pain feels like, physical processes in an autopilot could play the same role as pain plays in the functioning of our brains.

          Third, a great deal about the operation of our brains seems understandable in terms of physical processes of our brains, with ordinary physical and chemical activity in billions of neurons carrying out what can be regarded as computational programs for processing information.

          Fourth, there are experimental results, particularly from work by Benjamin Libet,[2] suggesting that, at least in some circumstances, consciousness comes too late to influence action.

Fifth, it can be argued that these views do not involve the implausible position that experiences are ‘epiphenomenal,’ that is, have no causal role whatsoever. The relevant processes have (inseverably) both a physical and an experiential aspect; so that the experiences are no less efficacious than the corresponding brain processes. Further, the psycho-physical laws that correlate experiences with physical processes may do so only indirectly. While any computational programs carried out by brain activity must conform to computational rules of some kind or other, these programs might conceivably be such that they could run on physical systems that operate differently from brains (like a PC program running on an Apple computer); and experiences might comprise or include information that engages with the rules of these programs and not directly with the physical brain processes that support them. To that extent, the causal role of experiences could be partly independent of that of physical processes.

          And sixth, any other view can be branded unscientific superstition.

 

An Alternative

I want to suggest an alternative view.

Like many consensus views, this view treats the processes of our brains as including some processes that have both a physical and an experiential aspect. But unlike consensus views, it holds that the role of these processes in the unfolding of events is not wholly determined by physical laws engaging with their physical aspect. It proposes that the physical aspect of these processes does, in conformity with physical laws, restrict what can happen to a limited spectrum of possibilities; but also that, in response to the experiential aspect of these processes, the brain (or more accurately the person) can control what does happen within this spectrum of possibilities.

This view does not require a self or soul, distinct from the brain, that has some input into what happens. Rather, it proposes that the physical-and-experiential system of the brain (or brain-and-mind) has the capacity to use information carried in experiences in a way that corresponding information carried in physical processes cannot be used, and that is not and cannot be wholly determined by pre-existing laws.

I’ve already suggested there are psycho-physical laws correlating physical and experiential aspects of brain processes, so that there is a sense in which any information carried by the experiential aspect must also be carried by the physical aspect. I’ve also suggested that the physical aspect of brain processes has a causal influence through engagement with physical laws, and that information carried by the brain processes may have a causal influence through engagement with the rules of computational programs of the brain. I now suggest that the information as carried by the experiential aspect is characteristically combined into unified experiences or gestalts, and that although these gestalts cannot, as gestalts, engage with laws or rules of any kind, they can have a causal influence because the system can respond to them. I will elaborate on this shortly.

Thus my proposal is that the physical-and-experiential system constituted by the brain can respond to these gestalts, and that this response can supplement the effect of physical laws engaging with the physical aspect of brain processes, so that the person can exercise control over what happens within the available spectrum of possibilities.

It seems reasonable to suppose that whenever we are acting without paying particular attention, and without concentration, deliberation or effort, the available spectrum of possibilities is narrow: we are pretty much ‘on autopilot,’ and conscious input does not go beyond marginal shaping or fine-tuning of actions, coupled with readiness to do more if something arises that calls for attention, concentration, deliberation and/or effort. Our conscious motivation, so far as it is operating, runs along the same lines as our unconscious motivation.

However, the cursory attention associated with this kind of activity can rapidly (and automatically) become heightened when something significant happens, and this can in turn lead to concentration, deliberation and/or effort. In circumstances of heightened attention, concentration, deliberation and/or effort, the spectrum of possibilities may become wider, and our response to experienced gestalts can have a substantial impact in directing action within that wider spectrum.

On this view, returning to the autopilot analogy, there is no distinct system that takes over from an ‘autopilot,’ as happens when a human pilot takes over from an autopilot flying an aeroplane: rather, the system has leeways within which the system itself can ‘steer’ on the basis of information combined into gestalts to which the system can respond in ways not determined by pre-existing rules. Although the conscious and unconscious motivations still run along the same general lines, on the basis of what is in a sense the same information, the system’s ability to respond to the information as combined into gestalts enables it to direct action within the available spectrum of possibilities.

 

Gestalts

Plainly, my proposal concerning gestalts is important to my argument, and I will elaborate a little.[3]  In particular, I need to explain why I say that gestalts cannot, as gestalts, engage with laws or rules.

          I suggest it is characteristic of laws and rules that they apply generally over a range of circumstances, and must engage with types or classes of things or features that different circumstances have in common, including variable quantities that can engage with mathematical rules; so that while laws and rules can apply to individual unique circumstances, they engage with features of these circumstances only in so far as each of these features is of a type or class, and/or is a variable quantity. Laws and rules link categories (say, X, Y, Z, etc.), where these categories are types or classes of things or features, and/or mathematical variables. In the case of computational rules, X may be a potentially recurring situation in a computational program, and Y may be the consequential operation to be undertaken in that situation.
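By way of illustration only, here is a minimal sketch, in Python, of rules of this kind; the situation names and operations are hypothetical placeholders, not drawn from any actual system. Each rule links a type of recurring situation X to a type of consequential operation Y, and engages with a particular occasion only in so far as that occasion instantiates the type.

```python
# A minimal illustrative sketch: rules link *categories*, not particulars.
# Each entry pairs a type of situation (X) with a type of operation (Y);
# the names below are hypothetical placeholders.
RULES = {
    "altitude_below_minimum": "climb",
    "fuel_below_reserve": "divert_to_alternate",
    "traffic_on_collision_course": "adjust_heading",
}

def apply_rules(situation_type: str) -> str:
    """Return the operation prescribed for a given *type* of situation.

    The rule engages with the category alone; whatever is true of the
    particular occasion but not captured by the category is invisible to it.
    """
    return RULES.get(situation_type, "continue_current_plan")

print(apply_rules("fuel_below_reserve"))      # -> "divert_to_alternate"
print(apply_rules("unclassified_situation"))  # -> "continue_current_plan"
```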

          I accept that, in theory at least, some simple gestalts, such as a visual experience of a basic shape, may be of a type or class such that laws or rules could engage with them; but this could not be true of the feature-rich gestalts we normally experience, such as gestalts comprehending many features of an observed scene, or of a unique melody. And it is these particular gestalts of our ordinary experience on which I am focussing in this discussion. Laws or rules could perhaps engage with these gestalts in so far as they exemplify simple gestalts of a type or class, but they could not otherwise engage with whole feature-rich gestalts of our ordinary experience.

          Although in this article I am not considering laws of a legal system, such laws also, while applying to unique circumstances, generally engage only with types or classes of persons or places or occurrences, and prescribe types or classes of legal consequences. Occasionally, a statute law specifies what is to happen in a particular named place or at a named event or even to a named person; but this is exceptional, and for the most part laws of a legal system do not identify, and thereby engage with and specify a response to, any particular place or event or person.

          Laws of a legal system may, through engaging with each of a number of features that a particular person has, produce a legal result that is unique to that person; and laws of nature and/or computational rules may, through engaging with each of a number of features that are combined into a particular gestalt, produce a result that is unique to that gestalt. Indeed, this must happen whenever a computer program identifies a unique melody. However, I contend that laws and rules cannot engage with any rich combination of features as a whole, and in that sense cannot engage with whole particular gestalts; and by the same token, particular gestalts cannot as wholes engage with laws or rules.

Consider for example George Gershwin’s melody The Man I Love. (I could equally have chosen Summertime or Embraceable You or any of a number of others – each of these melodies, despite its apparent simplicity, is a unique and utterly distinctive whole.)  This melody can be given a description in terms of general and quantitative features it has in common with other melodies, including location and patterns of notes, pitch changes, rhythms, tempos, and so on; and these features, being general and quantitative, can engage with general rules, so that the melody can readily be identified by application of computational rules. No doubt such an appealing melody has constituent features that can push buttons in our emotional make-up that have been established by evolution and environment. But the way this melody sounds, and even the way some 2- and 4-bar chunks of it sound, is unique to this melody; and an experience of such a unique melody or chunk of melody, as a whole, is an example of what I mean by a gestalt that cannot engage with laws or rules.
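To make the point about identification concrete, here is a minimal sketch, in Python, of how a melody might be identified by rules engaging only with general and quantitative features; the stored 'catalogue' and note values are hypothetical placeholders, not a transcription of The Man I Love or any other actual melody. The melody is reduced to a pattern of pitch intervals and relative durations and matched feature by feature; nothing in such a procedure answers to the way the whole melody sounds.

```python
# A minimal illustrative sketch: identifying a melody by quantitative features.
# Notes are given as MIDI numbers, durations as relative values; the stored
# catalogue entries are hypothetical placeholders.
CATALOGUE = {
    "melody_a": ([0, 3, -1, 4, -2], [1, 1, 2, 1, 1, 2]),
    "melody_b": ([2, 2, -4, 5, 0], [2, 1, 1, 1, 2, 1]),
}

def features(notes, durations):
    """Reduce a melody to features general rules can engage with:
    intervals between successive pitches, plus the rhythm."""
    intervals = [b - a for a, b in zip(notes, notes[1:])]
    return (intervals, list(durations))

def identify(notes, durations):
    """Match the input against the catalogue, feature by feature."""
    feats = features(notes, durations)
    for name, stored in CATALOGUE.items():
        if stored == feats:
            return name
    return None

# Because only intervals and rhythm are compared, any transposition of the
# stored melody is identified as the same melody.
print(identify([60, 60, 63, 62, 66, 64], [1, 1, 2, 1, 1, 2]))  # -> "melody_a"
```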

This unique melody did not exist until it was composed, and neither Gershwin nor anyone else could have been primed in advance by evolution and/or environment for a response to its exact form. When Gershwin was composing the melody, possibilities for how it should proceed must have been thrown up by unconscious processes, presumably processes giving effect to computational programs of his exceptional brain, which were themselves a product of his genes and environment (and perhaps earlier choices). But Gershwin must have consciously appraised these possibilities as he composed, in order to decide whether to adopt them or modify them or look for other possibilities; and ultimately he must have consciously appraised the melody itself, in order to decide whether to assent to it as his composition or to refine it further; and what I suggest is that, in appraising the possibilities and the melody, he must have been influenced by gestalts of the possibilities and of the melody and/or chunks of it, which because of their uniqueness could not engage as wholes with pre-existing rules of any kind. And if so, I suggest, neither the final form of the melody, nor Gershwin’s assent to it, could have been wholly pre-determined by pre-existing circumstances and pre-existing laws or rules.

I’ve heard there is a computer program that can compose music in the style of Mozart, and there may be one that can compose melodies in the style of George Gershwin. Such a program could conceivably come up with melodies as appealing as The Man I Love. But what it could never do is to appraise and refine its creations by attending to gestalts of them; because while the rules of a computational program can engage with aesthetic standards with which its creations should comply, and can engage with all manner of features which its creations have in common with other things, they cannot engage with whole particular unprecedented gestalts. The point is particularly strong in the case of ground-breaking creations that defy existing aesthetic standards, such as Wagner’s Tristan und Isolde and Picasso’s Les Demoiselles d’Avignon. When creating those works, I suggest, the authors could not have just been giving effect to computational programs and/or applying existing aesthetic standards, but rather must have been influenced in the course they took in creating and refining these works by their appraisal of gestalts of the works and of substantial parts of them.

          A computer could receive, store and process information concerning each and every physical feature of Picasso’s painting, and information concerning all aesthetic standards that have so far been formulated. It could readily identify the painting, and it could possibly perform as well as or better than human experts in determining its conformity to those standards, and also (for example) in determining whether a painting presented to it was the original or a copy. But it could never experience aggregations of features as whole gestalts unique to that painting, or respond to gestalts of that kind in appraising the painting.

 

Why the Alternative?

I find my alternative proposal more believable than consensus views, for a number of reasons.

First, it accords with how things seem, to me at least. For example, if I am driving a car thinking about other things, and something untoward happens, my conscious attention is quickly engaged, so that my automatic driving reactions are supplemented by my conscious grasp of the whole situation and there can be conscious fine-tuning of my response. (None of the Libet experiments indicate that attentive consciousness of changes of ongoing experience comes too late to influence action: indeed, in extreme cases, it seems that more room is given to conscious control in that things appear to happen in slow motion.) Also, when I am writing something, ideas are thrown up by unconscious processes; but I am continually appraising the sense and sound of substantial chunks of what I am writing so as to decide whether to keep them or to alter them or to try to come up with other ideas. In both situations, my actions seem to flow from complementary contributions from unconscious processes and conscious experiences. And I cannot believe that my response to a melody like The Man I Love or a painting like Les Demoiselles d’Avignon is wholly determined by the engagement of constituent non-unique features with computational rules, and uninfluenced by my experience and grasp of the whole unique particular work.

          Second, the view does not conflict with anything established by science. It may be contended that this view is in fact inconsistent with science, because it supposes indeterminism at a scale beyond that permitted by quantum mechanics, and because in any event the only indeterminism permitted by quantum mechanics is randomness. I believe the former assertion is far from proven;[4] and as regards the latter, quantum mechanics can ascribe probabilities only on the basis of physical features: it cannot in ascribing probabilities take account of non-physical features (such as conscious experiences), or exclude the possibility that non-physical features could impact on these probabilities. And I believe there is no possibility that decisions taken in the course of unique highly complex brain processes could ever be shown to violate the statistical laws of quantum mechanics.

Third, it is supportive of this view that consciously-held reasons for action are characteristically inconclusive, and that there is a corresponding gap between reasons on the one hand and decisions and actions on the other.[5]  Hume said we always act in accordance with the preponderance of our desires, but that falsely assumes that desires, like forces in Newtonian physics, are commensurable, so that there is always a single ‘resultant’ desire that can direct our actions; whereas in truth there is no common scale on which (say) a feeling of hunger can weigh against a feeling of obligation to carry out a promised task. If ‘desires’ such as these conflict, the outcome is not determined by a preponderance of one over the other (because there can be no preponderance of incommensurables), but (I contend) by a choice between them that takes account of their different characters by means of a global assessment to which laws cannot apply.
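The contrast can be put with a simple worked equation (an illustration of the familiar Newtonian point only, nothing added to the argument): forces, being measured on one common scale, always compose into a single resultant that settles the outcome,

```latex
\[
  \vec{F}_{\mathrm{res}} = \sum_{i} \vec{F}_{i}, \qquad
  \vec{a} = \frac{\vec{F}_{\mathrm{res}}}{m},
\]
```

whereas on the view advanced here there is no analogous common unit in which a pang of hunger and a felt sense of obligation could be summed to yield one 'resultant' desire.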

Fourth, consensus views do not account for plausible reasoning.

Consensus views require that the rationality of any process of human reasoning depend completely on the reliability of computational processes of the brain. To the extent that human reasoning is algorithmic, that is, to the extent that it proceeds in accordance with rules of logic and/or mathematics and/or probability, or any other rules that could be incorporated into a computer program, there is no problem with this. Most human reasoning, however, is not overtly of this kind: it is informal plausible reasoning, in which the premises or data do not entail the conclusion by virtue of applicable rules but support it as a matter of reasonable judgment. Arguments of Hume, Popper and others, particularly as developed by Hilary Putnam,[6] strongly suggest that reasoning of this kind cannot be fully explained in terms of rules for good reasoning, whether they be rules of logic or mathematics or probability or whatever. I suggest that plausible reasoning depends in part on experiences, ideas and feelings grasped as gestalt wholes, which enable judgments to be made that have regard to incommensurable reasons (such as pain and feelings as to what is right) and to analogies that do not depend on identity or quantitative assessment of common features, and which also promote understanding of what is being considered.

Plausible reasoning is fallible, but it is indispensable: even the scientific method depends on plausible reasoning as much as on logic, probability theory and refutation, for example in formulation of hypotheses, design of experiments and appraisal of unrefuted hypotheses. We can and should attempt to minimise error by attending to rules of good reasoning, by trying to identify and eliminate fallacies and biases, and by subjecting our reasoning to scrutiny and debate; but we cannot eliminate either the possibility of error or our ultimate dependence on plausible reasoning.

On consensus views, plausible reasoning must, I contend, be explained in terms of computational processes that do not have any validity on the basis of logical rules or other rules for good reasoning, but which work because they have been selected in evolution for their effectiveness in promoting survival and reproduction.

But this introduces a vicious circle into justification of plausible reasoning. If we cannot rely on our plausible reasoning as the conscious non-algorithmic process it seems to be, and on associated feelings of assurance, then any confidence we could have in it would have to depend on the belief that plausible reasoning is supported by computational processes whose reliability is assured by the evolutionary tests they have passed; yet this belief would itself have to depend on extensive plausible reasoning, giving rise to a vicious circle.[7] Disagreements in matters of plausible reasoning could not be addressed rationally: so long as identifiable fallacies were avoided, there would be no basis on which one process of plausible reasoning could be preferred to another. Further, our rationality is well adapted to dealing with problems remote from the tests that faced our evolutionary ancestors, which makes it unlikely that our rationality is no more than a matter of useful algorithmic processes selected through those tests.

Fifth, if choices were in fact determined by evolution-selected computational procedures, which as computational procedures need no help from conscious judgment, there seems no plausible explanation of why evolution selected in favour of brains that, at considerable expense in terms of complexity and energy-use, support conscious processes. In particular, there could in that event be no plausible explanation of why we have feelings like pain to motivate us, when it would be absurd (even if possible) to use pain or any other feelings to motivate a computer or an autopilot to proceed in accordance with a program for avoidance or repair of damage; or of why we are so constituted that our conscious awareness is automatically called into play when we are faced with a novel situation calling for decisive action. On the other hand, my proposal does provide such an explanation, namely the value of being able to take account of gestalts in fine-tuning actions and engaging in plausible reasoning. This argument is further supported by the consideration that, in conscious decision-making, issues and reasons appear to be presented for appraisal in ways that are simple, somewhat like an executive summary prepared for a chief executive officer of a business; raising the question of why this happens, if all real decisions are made by highly complex unconscious information-processing.

Sixth, my proposed view fits better than consensus views with objectivity of values and rationality of debate about values. I firmly believe that there are at least some things that are objectively and undoubtedly wrong, for example, torturing a child for amusement. However, such a belief can only be supported by plausible reasoning based partly on emotional feelings; and on consensus views, such reasoning can have no validity beyond its proven efficacy for survival and reproduction. Such an evolutionary approach can explain why many people have that moral belief, but cannot justify it as being true or even rational, at least unless ‘rational’ is redefined to mean ‘in accordance with brain processes selected by evolution’.[8]

 

Responsibility

On the alternative view I am proposing, we can in circumstances of attention, concentration, deliberation and/or effort make significant choices as to what to do, choices that are not wholly pre-determined by pre-choice circumstances (including pre-choice states and processes of our brains) and laws of nature and/or computational rules, but are in part determined by our responses, as whole physical and experiential beings, to gestalt experiences that cannot engage with any laws or rules.

This is not to propose some incoherent notion of self-creation or self-causation, although it can be considered as proposing a form of self-organisation. The idea that physical-and-experiential systems can make reasonable choices that do not depend wholly on application of or engagement with any kind of rules or laws may seem mysterious, and it does require further investigation and explanation. But it is no more mysterious than consciousness itself; and my proposal does provide an intelligible role for consciousness, a matter on which consensus views fail totally.

          On this approach, our choices are subject to considerable pre-choice constraints. We have no alternatives outside the spectrum of possibilities left open by physical circumstances and physical laws. We have no experiences that can give us consciously-held reasons for choosing within this spectrum apart from experiences that arise from pre-choice circumstances and are correlated with physical brain processes. The way these reasons feel and appeal to us, and the tendencies to act that these and other brain processes produce, also arise from pre-choice circumstances and are correlated with physical brain processes. But subject to these constraints, we have and cannot help having the capacity to make choices that are not pre-determined by pre-choice circumstances, because these choices are made in part in response to gestalts that cannot engage with laws.

          In this way, I contend, we can exercise a degree of free will, particularly in fine-tuning our actions, in making aesthetic and moral judgments, in deciding what to believe when there is conflicting evidence, and in deciding what to do when there are conflicting reasons. In doing so, we are limited and influenced by our formed characters, to the extent that they affect the available alternatives, the reasons and their feel and appeal, and the associated tendencies to act, but I suggest not otherwise. Thus, the sense in which it is true that we do what we do because of the way we are is that (1) the way we are plus our circumstances plus laws of nature provide alternatives, inconclusive reasons, and tendencies, and also the capacity to choose between the alternatives on the basis of the reasons; and (2) what we do is what we choose in exercise of that capacity, the choice not being influenced by any differentiating features of the way we are otherwise than through the alternatives, reasons, and tendencies. And that leaves us with a degree of ultimate responsibility for what we do even if we are not responsible for the way we are.

          And this means in turn that we can become partly responsible for the way we are, as our choices, for which we are partly responsible, come to supplement the effects of genes and environment on the way we are. Life is a handicap event, but most of us have the capacity to modify our handicaps and, within limits, to make our own luck and to shape our own lives.

 


 



[1]. For example, ‘Luck swallows everything’, Times Literary Supplement, 26 June 1998, 8-10, and ‘The bounds of freedom’, in R. Kane (ed.), The Oxford Handbook of Free Will (New York: Oxford University Press, 2002).

 

[2]. For example, B. Libet, C. Gleason, W. Wright and D. Pearl, ‘Time of conscious intention to act in relation to onset of cerebral activities (readiness potential): the unconscious initiation of a freely voluntary act’, Brain 106 (1983), 623-42.

[3]. I give further elaboration of this proposal in articles in Philosophy 76 (2001), 341-70 and Journal of Consciousness Studies 9 (2002), 65-88.

[4]. See for example H. Stapp, ‘Pragmatic approach to consciousness’, in K. H. Pribram (ed.), Brain and Values (Hillsdale, NJ: Erlbaum, 1998).

[5]. Cf. John R. Searle, Rationality in Action (Cambridge, MA: MIT Press, 2001).

[6]. H. Putnam, Reason, Truth and History (Cambridge: Cambridge University Press, 1981), 174-200.

 

[7]. Cf. T. Nagel, The Last Word (New York: Oxford University Press, 1997), Ch. 7; A. Plantinga, Warrant and Proper Function (New York: Oxford University Press, 1993), Ch. 12.

[8]. Cf. A. Gibbard, Wise Choices, Apt Feelings (Oxford: Oxford University Press, 1990).