Evolution and Morality

We can do better


In an ongoing discussion about morality, I said:

It is perfectly plausible to me that an evolutionary framework explains much of the process of getting our current moral intuitions, such as the close-kin bias.

There are a couple of things I want to make plain from the outset.

  1. Evolution is a terrible guide for morality. We don't derive our moral intuitions by looking at the process of evolution and using that to evaluate them. We, and our moral intuitions, are the product of the process of evolution.
  2. We can go beyond evolution in this domain, as we have in others. I hope to make this clear as we go on. Let me explore an analogy which I think is helpful.

Vision as an analogy to morality

Evolution has shaped our eyes, and the cognitive faculties that allow us to use them to build internal models of the world. There is almost certainly an objective reality, although I cannot think of any experiment which could possibly confirm this. It is possible that every person's "reality" is relative to them, but I don't find that perspective either useful or convincing.

Even assuming an objective reality, our eyes do not give us a perfect rendition of it. Perception is as much constructed by the brain as it is received. Just because we can't "get outside" our own perceptions doesn't mean we can't know anything about objective reality. Nor does it imply that our visual system doesn't give us a decent approximation of reality - it does, but like all approximations, there are limits. How do we get around those limits? How do we surpass the limitations imposed by the evolutionary process on our vision to confirm the properties of objective reality? The process of science!

We use a process of verification by others, of repeatability, of open and honest discovery to distinguish what is true from what isn't. We quantify our observations, we structure experiments, we explore our biases and design processes to reduce our sensitivity to them. We recognize that evolution didn't care about perfect vision, only about what was good enough. We recognize that evolution is limited in the solutions it can reach - it is full of jury-rigged partial solutions - which leads to things like the blind spot, and the brain's processing that fills the spot in. We recognize the situations where our vision is particularly limited - two-dimensional environments, environments without clear objects for comparison, odd coincidences in object placement, and so on. In those cases we do not trust our visual intuitions, and lean instead on more objective measurements of the scene.

Drawing out the analogy

Here's a cut-and-paste job. What do you think?

Evolution has shaped our moral intuitions, and the cognitive faculties that allow us to use them to build moral models of the world. There is almost certainly an objective morality, although I cannot think of any experiment which could possibly confirm this. It is possible that every person's "morality" is relative to them, but I don't find that perspective either useful or convincing.

Even assuming an objective morality, our intuitions do not give us a perfect rendition of it. Moral perception is as much constructed by the brain as it is received. Just because we can't "get outside" our own perceptions (which seemed to be the primary criticism in the original Dogma Debate episode which started this whole thing) doesn't mean we can't know anything about objective morality. Nor does it imply that our moral intuition doesn't give us a decent approximation of objective morality - it does, but like all approximations, there are limits. How do we get around those limits? How do we surpass the limitations imposed by the evolutionary process on our moral intuitions to confirm the properties of objective morality? The process of science!

We use a process of verification by others, of repeatability, of open and honest discovery to distinguish what is true from what isn't. We quantify our observations, we structure experiments, we explore our biases and design processes to reduce our sensitivity to them. We recognize that evolution didn't care about perfect moral systems, only about what was good enough. We recognize that evolution is limited in the solutions it can reach - it is full of jury-rigged partial solutions - which leads to things like moral blind spots (e.g. statistical numbing), and the brain's processing that fills those blind spots in. We recognize the situations where our moral intuition is particularly limited - large numbers of people, in-group/out-group distinctions, and so on. In those cases we do not trust our moral intuitions, and lean instead on more objective measures of well-being.