January 22, 2014 Ideas From a Manger

By Ross Douthat

PAUSE for a moment, in the last leg of your holiday shopping, to glance at one of the manger scenes you pass along the way. Cast your eyes across the shepherds and animals, the infant and the kings. Then try to see the scene this way: not just as a pious set-piece, but as a complete world picture — intimate, miniature and comprehensive.

Because that’s what the Christmas story really is — an entire worldview in a compact narrative, a depiction of how human beings relate to the universe and to one another. It’s about the vertical link between God and man — the angels, the star, the creator stooping to enter his creation. But it’s also about the horizontal relationships of society, because it locates transcendence in the ordinary, the commonplace, the low.

It’s easy in our own democratic era to forget how revolutionary the latter idea was. But the biblical narrative, the great critic Erich Auerbach wrote, depicted “something which neither the poets nor the historians of antiquity ever set out to portray: the birth of a spiritual movement in the depths of the common people, from within the everyday occurrences of contemporary life.”

And because that egalitarian idea is so powerful today, one useful — and seasonally appropriate — way to look at our divided culture’s competing worldviews is to see what each one takes from the crèche in Bethlehem.

Many Americans still take everything: They accept the New Testament as factual, believe God came in the flesh, and endorse the creeds that explain how and why that happened. And then alongside traditional Christians, there are observant Jews and Muslims who believe the same God revealed himself directly in some other historical and binding form.

But this biblical world picture is increasingly losing market share to what you might call the spiritual world picture, which keeps the theological outlines suggested by the manger scene — the divine is active in human affairs, every person is precious in God’s sight — but doesn’t sweat the details.

This is the world picture that red-staters get from Joel Osteen, blue-staters from Oprah, and everybody gets from our “God bless America” civic religion. It’s Christian-ish but syncretistic; adaptable, easygoing and egalitarian. It doesn’t care whether the angel really appeared to Mary: the important thing is that a spiritual version of that visitation could happen to anyone — including you.

Then, finally, there’s the secular world picture, relatively rare among the general public but dominant within the intelligentsia. This worldview keeps the horizontal message of the Christmas story but eliminates the vertical entirely. The stars and angels disappear: There is no God, no miracles, no incarnation. But the egalitarian message — the common person as the center of creation’s drama — remains intact, and with it the doctrines of liberty, fraternity and human rights.

As these world pictures jostle and compete, their strengths and weaknesses emerge. The biblical picture has the weight of tradition going for it, the glory of centuries of Western art, the richness of millenniums’ worth of theological speculation. But its specificity creates specific problems: how to remain loyal to biblical ethics in a commercial, sexually liberated society.

The spiritual picture lacks the biblical picture’s resources and rigor, but it makes up for them in flexibility. A doctrine challenged by science can be abandoned; a commandment that clashes with modern attitudes ignored; the problem of evil washed away in a New Age bath.

The secular picture, meanwhile, seems to have the rigor of the scientific method behind it. But it actually suffers from a deeper intellectual incoherence than either of its rivals, because its cosmology does not harmonize at all with its moral picture.

In essence, it proposes a purely physical and purposeless universe, inhabited by evolutionary accidents whose sense of self is probably illusory. And yet it then continues to insist on moral and political absolutes with all the vigor of a 17th-century New England preacher. And the rope bridges flung across this chasm — the scientific-sounding logic of utilitarianism, the Darwinian justifications for altruism — tend to waft, gently, into a logical abyss.

So there are two interesting religious questions that will probably face Americans for many Christmases to come. The first is whether biblical religion can regain some of the ground it has lost, or whether the spiritual worldview will continue to carry all before it.

The second is whether the intelligentsia’s fusion of scientific materialism and liberal egalitarianism — the crèche without the star, the shepherds’ importance without the angels’ blessing — will eventually crack up and give way to something new.

The cracks are visible, in philosophy and science alike. But the alternative is not. One can imagine possibilities: a deist revival or a pantheist turn, a new respect for biblical religion, a rebirth of the 20th century’s utopianism and will-to-power cruelty.

But for now, though a few intellectuals scan the heavens, they have yet to find their star.

— —

Richard Luettgen New Jersey

The reality of human interaction is a very complex set of obligations existing in all directions. An individual has obligations, legal as well as ethical, to spouses, children, friends and acquaintances, to employers, to society at large; and in turn expects that others respect the same obligations owed him, as an individual or as part of a greater community.

But that network of interdependency starts and stops with humans. Any connection to deity is one that is overlaid onto an existing network that has been around, growing ever more complex, since BEFORE we came out of the trees (between 5,700 and 10,000 years ago, according to some). It’s a construct of either faith born of need or simply of our imaginations. It may be true, but … it may not.

When I see a manger scene, despite having grown up in a VERY Christian home, I wonder how off-putting it is to Jews, Muslims and the workaday agnostic or even atheist — Americans all, embedded as they are in their own workaday networks of very human interdependencies.

I spent a lot of time arguing, when young, with Jesuits who professed that the source of all morality was religion — and, not coincidentally, theirs. Always seemed like a self-serving argument. To me, the source of morality isn’t religion, or manger as symbol, but an awareness and acceptance of that network of very human interdependency, as the primary premise justifying civilization.

All that said, a very Merry Christmas to many, and a happy holiday to all.

Iconoclast1956 Columbus, OH

I find this column rather inscrutable. Ross seems to be suggesting that the equality of the human race derives from the Christmas story. That doesn’t sound right to me. Rather, I believe the democratic ethos derives from many thinkers and writers acting over a long period of time. Ross also seems to have overlooked that Jesus advised slaves to obey their masters, per the New Testament. That doesn’t jibe well with an “all people are equal” ethos.

My agnosticism derives mainly from two sources: doubts about the accuracy and credibility of events that supposedly happened two thousand years ago or more, as recorded in the Bible; and the excellent track record of science vis-a-vis religion in explaining the universe. Leonard Mlodinow said, in so many words, that astrophysicists can now explain that it’s possible for the universe to have originated from nothing. Under such circumstances, it’s hard for me to believe there is any meaning at all in the Christmas story.

Carl Sollee Atlanta

A fascinating Christmas meditation. Douthat does a great job describing these spiritual or secular types. I have friends that fit into each. As Pope Francis and the Dalai Lama argue, in their different styles and words, the ideological differences Douthat points to can be at least partially bridged by pursuing the “good” together. Helping the poor and downtrodden, the old and lonely, the young and vulnerable, and practicing courtesy and ethics in our lives — a race to virtues, if you will — is the best way to bridge the differences among these spiritual and philosophic types. Too many of us, whether we are nominally atheist or religious (or spiritual but not religious), are simply too hedonistic. We can do better!

George Kvidera Cudahy, WI

I’d like to make 2 points:

1) The “Darwinian justifications for altruism,” as I recall from what Darwin wrote in one of his letters, probably stem from when early humans looked upon other human beings and realized that they were all pretty much the same. Buddha echoed this sentiment when he said – “If you see yourself in others, then what harm can you do?” Jesus essentially said the same thing and added that to “love your neighbor as yourself” is to love God. Anyway, it’s a simple and natural concept that most everyone believes in.

2) One doesn’t have to view the Christmas story as an historical event to feel its power. There’s a line from “O Little Town of Bethlehem” that always gets me: “The hopes and fears of all the years are met in thee tonight.” I know it’s meant to apply to the birth of Christ, but I can’t help but feel that it’s true for every child born into the world. We fear for them, but there’s always hope that they will make the world a better place than we did.

— —
New Republic 12-26-13
Ross Douthat Is On Another Erroneous Rampage Against Secularism

By Jerry A. Coyne

Are there any conservative columnists who aren’t either wooly-brained, filled with unrighteous anger, or both? Even George Will occasionally got it right, but Ross Douthat? Nope. And he writes for The New York Times, the best newspaper in America. Can’t they do better? I would actually want to read a good conservative columnist, for it’s bad to become complacent and it’s salubrious to have your views challenged. But Douthat isn’t a contender.

Take his column from December 21, “Ideas from a manger.” The theme is Douthat’s musings on the religiosity of America, inspired, of course, by the Christmas season. While gazing at a manger scene, his heart racing as he sees the baby Jesus, Douthat gets an idea: American religious worldviews fall into three categories, one of which is deeply problematic (guess which one!). I’ll list the categories and show what he finds dubious about each (Douthat’s words are indented).

1. Biblical literalism.

The view:

Many Americans still take everything: They accept the New Testament as factual, believe God came in the flesh, and endorse the creeds that explain how and why that happened. And then alongside traditional Christians, there are observant Jews and Muslims who believe the same God revealed himself directly in some other historical and binding form.

The same God? You mean the one that sends Christians and Jews to hell if he’s Allah, and Muslims to hell if he’s the Christian God? How can that be the same God?

Douthat’s problem:

The biblical picture has the weight of tradition going for it, the glory of centuries of Western art, the richness of millenniums’ worth of theological speculation. But its specificity creates specific problems: how to remain loyal to biblical ethics in a commercial, sexually liberated society.

Really? The problem is how to keep being a fundamentalist in a “commercial, sexually liberated society?” Curious that Douthat doesn’t mention that literalism is also insupportably wrong. Curious, too, that Douthat doesn’t mention the disparities between adherents of “the same God” who for some reason find their dogmas in irresolvable conflict.

2. The “spiritual” take.

The view:

But this biblical world picture is increasingly losing market share to what you might call the spiritual world picture, which keeps the theological outlines suggested by the manger scene— the divine is active in human affairs, every person is precious in God’s sight—but doesn’t sweat the details.

This is the world picture that red-staters get from Joel Osteen, blue-staters from Oprah, and everybody gets from our ‘God bless America’ civic religion. It’s Christian-ish but syncretistic; adaptable, easygoing and egalitarian. It doesn’t care whether the angel really appeared to Mary: the important thing is that a spiritual version of that visitation could happen to anyone —including you.

I’m curious what the “spiritual version” of a visitation from an angel really is.

Douthat’s problem:

The spiritual picture lacks the biblical picture’s resources and rigor, but it makes up for them in flexibility. A doctrine challenged by science can be abandoned; a commandment that clashes with modern attitudes ignored; the problem of evil washed away in a New Age bath.

One senses that Douthat doesn’t really like this point of view: the “New Age bath” seems pejorative. If I were to guess, I’d put his own view somewhere between #1 and #2. But what really irks him is #3:

3. The secular view.

The view:

Then, finally, there’s the secular world picture, relatively rare among the general public but dominant within the intelligentsia. This worldview keeps the horizontal message of the Christmas story but eliminates the vertical entirely. The stars and angels disappear: There is no God, no miracles, no incarnation. But the egalitarian message—the common person as the center of creation’s drama—remains intact, and with it the doctrines of liberty, fraternity and human rights.

Well, that doesn’t sound too bad, save for the idea that atheism is dominant within the “intelligentsia” (it’s not, even among scientists), and secularists’ supposed view that “the common person is the center of creation’s drama,” which isn’t true, either. If there is any “drama” in creation, most of it does not involve people at all. There’s the Big Bang, all those other galaxies, black holes, exploding stars, and, on our planet, evolution, on whose branching bush we are but one tiny twig. Nevertheless, Douthat hates secularism:

Douthat’s problem:

The secular picture, meanwhile, seems to have the rigor of the scientific method behind it. But it actually suffers from a deeper intellectual incoherence than either of its rivals, because its cosmology does not harmonize at all with its moral picture.

In essence, it proposes a purely physical and purposeless universe, inhabited by evolutionary accidents whose sense of self is probably illusory. And yet it then continues to insist on moral and political absolutes with all the vigor of a 17th-century New England preacher. And the rope bridges flung across this chasm—the scientific-sounding logic of utilitarianism, the Darwinian justifications for altruism—tend to waft, gently, into a logical abyss.

First, I’m not sure what Douthat means when he says “cosmology does not harmonize at all” with the moral picture of secularism. Cosmology doesn’t give one iota of evidence for a purpose (it could!) or for God. Most of the universe is cold, bleak, airless, and uninhabitable. In fact, such a cosmology harmonizes far better with a secular moral picture than a religious one. Secularists see a universe without apparent purpose and realize that we must forge our own purposes and ethics, not derive them from a God for which there’s no evidence.

Yes, secularism does propose a physical and purposeless universe, and many (but not all) of us accept the notion that our sense of self is a neuronal illusion. But although the universe is purposeless, our lives aren’t. This conflation of a purposeless universe (i.e., one not created by a transcendent being for a specific reason) with purposeless human lives is a trick that the faithful use to make atheism seem dark and nihilistic. But we make our own purposes, and they’re real. Right now my purpose is to write this piece, and then I’ll work on a book I’m writing, and later I’ll have dinner with a friend. Soon I’ll go to Poland to visit more friends. Maybe later I’ll read a nice book and learn something. Soon I’ll be teaching biology to graduate students. Those are real purposes, not the illusory purposes to which Douthat wants us to devote our only life.

Nor do all atheists insist on moral and political absolutes. Most of the savvy ones, at least, approach their politics and ethics, like we approach our science, provisionally. Take ethics. Sam Harris, an atheist, wrote a book proposing a scientific view of ethics that, he said, was objective. Many atheists didn’t agree, and the arguments went back and forth. Is it okay to torture people if there’s a possibility to save lives by doing so? Is it ever ethical to lie? It is atheists who argue most often about such things, for religiously-based ethics is either fixed or malleable only by the hammer of secularism. Secularists like Harris and Peter Singer argue about what’s right and wrong using reason, while Christians like William Lane Craig are the Biblical absolutists.

But the worst part is Douthat’s characterization of the effects of secularism:

… the rope bridges flung across this chasm — the scientific-sounding logic of utilitarianism, the Darwinian justifications for altruism — tend to waft, gently, into a logical abyss.

Talk about rope bridges! What is Christianity but a giant rope bridge flung across the Chasm of Hope? And we see nothing on the other side.

Utilitarianism may not be a perfect ethical system, but what, pray tell, is Douthat’s? If it’s Biblical, does he give away all his possessions and abandon his family to follow Jesus, as the Bible commands? Does he think that those who gather sticks on the Sabbath, curse their parents, or commit adultery should be killed? If not, why not? It’s what the Bible says! If he doesn’t believe in that kind of morality, then he’s adhering to a secular, extra-Biblical view of ethics, which he then must justify.

As for where altruism comes from, who knows? My own suspicions are that it’s partly genetic and partly cultural, but what’s important is that we feel it and can justify it. I can justify it on several grounds, including that altruism makes for a more harmonious society, helps those in need, and, as a selfish motive, that being altruistic gains you more respect. None of this justification has anything to do with God.

I have run on too long, but I want to show Douthat’s penultimate paragraphs, which are even more misleading:

The second [religious question] is whether the intelligentsia’s fusion of scientific materialism and liberal egalitarianism—the crèche without the star, the shepherds’ importance without the angels’ blessing—will eventually crack up and give way to something new.

The cracks are visible, in philosophy and science alike. But the alternative is not. One can imagine possibilities: a deist revival or a pantheist turn, a new respect for biblical religion, a rebirth of the 20th century’s utopianism and will-to-power cruelty.

Check out those two links. The first is to Thomas Nagel’s book Mind and Cosmos, which decries evolution as insufficient to explain life’s diversity and posits, without any evidence, some non-Goddy but teleological force driving the process. It’s a bad book and has been roundly trounced by Nagel’s fellow philosophers (see here, for instance). It’s not a crack, but a crackpot book.

The second link is to a nice article by Steven Weinberg in the New York Review of Books, “Physics: what we do and don’t know.” It’s a succinct summary of the state of the art of both cosmology and particle physics, highlighting the mysteries that beset those fields, including dark matter, dark energy, string theory, how to unify gravity with the other fundamental forces, and whether there might be multiple universes. We don’t know the answers, but what is science without unsolved problems?

And it’s those unsolved problems that Douthat sees as “cracks.” Presumably 200 years ago he would have seen cracks in the unexplained “designlike” features of organisms, in the origin of the universe, and in the unknown constituents of matter. These “cracks” have now been filled. In the unanswered questions that remain, Douthat sees gaps that, he thinks, can be filled only with God. But it’s always been a losing strategy to argue that scientific puzzles presage the death of naturalism and the arrival of Jesus.

Douthat is wrong. The cracks are not in the edifice of secularism, but in the temples of faith. As he should know if he reads his own newspaper, secularism is not cracking up but growing in the U.S. He and his fellow religionists are on the way out, and his columns are his swan song. It may take years, but one fine day our grandchildren will look back on people like Douthat, shake their heads, and wonder why some people couldn’t put away their childish things.

Jerry A. Coyne is a Professor of Ecology and Evolution at The University of Chicago and author of Why Evolution is True, as well as the eponymous website. A version of this post first appeared on WhyEvolutionIsTrue.

——
January 6, 2014, NYTimes Website

The Confidence of Jerry Coyne

By Ross Douthat

One of the problems with belonging to a faction that’s convinced it’s on the winning side of intellectual history is that it becomes easy to persuade oneself that one’s own worldview has no weak points whatsoever, no internal contradictions or ragged edges, no cracks through which a critic’s wedge could end up driven. This kind of overconfidence has been displayed, at various points in the human story, by everyone from millenarians to Marxists, inquisitors to eugenicists. But right now its vices are often found in a certain type of atheistic polemicist, and in a style of anti-religious argument that’s characterized by a peculiar, almost-willed ignorance of why reasonable people might doubt the scientific-materialist worldview.

A case in point: The University of Chicago professor Jerry Coyne’s response, republished by The New Republic, to my Christmas column on the various modern American world-pictures and what each one owes to the scene in Bethlehem. That column took a concluding dig at secular naturalism, for which Coyne is a prominent evangelist, suggesting that its view of the cosmos — a purposeless, purely physical universe, in which human life is accidental, human history directionless, and human consciousness probably an illusion — is at odds with its general political and moral posture (liberal, egalitarian, rights-based, progressive) in ways that make the entire world-picture ripe for reassessment or renovation. So it’s entirely fair that Coyne took the opportunity to deliver some body blows to theism and Christianity in return.

What’s striking about his response, though, is the extent to which its own account of the secular, materialist world-picture actually illustrates precisely the problems and tensions that I was talking about, in ways that even a casual reader should find obvious but which Coyne apparently did not. He can see the weak points in a religious argument, but the weaknesses of his own side of the debate are sufficiently invisible to him that his rebuttal flirts with self-caricature.

Let me offer two examples. First, to the idea that the materialist’s purposeless cosmos poses some problems for the liberal view (or any view) of moral and political purpose in human affairs, Coyne responds:

I’m not sure what Douthat means when he says “cosmology does not harmonize at all” with the moral picture of secularism. Cosmology doesn’t give one iota of evidence for a purpose (it could!) or for God. Most of the universe is cold, bleak, airless, and uninhabitable. In fact, such a cosmology harmonizes far better with a secular moral picture than a religious one.

Secularists see a universe without apparent purpose and realize that we must forge our own purposes and ethics, not derive them from a God for which there’s no evidence.

Yes, secularism does propose a physical and purposeless universe, and many (but not all) of us accept the notion that our sense of self is a neuronal illusion. But although the universe is purposeless, our lives aren’t. This conflation of a purposeless universe (i.e., one not created by a transcendent being for a specific reason) with purposeless human lives is a trick that the faithful use to make atheism seem dark and nihilistic. But we make our own purposes, and they’re real. Right now my purpose is to write this piece, and then I’ll work on a book I’m writing, and later I’ll have dinner with a friend. Soon I’ll go to Poland to visit more friends. Maybe later I’ll read a nice book and learn something. Soon I’ll be teaching biology to graduate students. Those are real purposes, not the illusory purposes to which Douthat wants us to devote our only life.

So Coyne’s vision for humanity here is heroic, promethean, quasi-existentialist: Precisely because the cosmos has no architect or plan or underlying purpose, we are free to “forge” our own purposes, to “make” meaning for ourselves, to create an ethics worthy of a free species, to seize responsibility for our own lives and codes and goals rather than punting the issue to some imaginary skygod. (Ayn Rand could not have put it better.) And these self-created purposes have the great advantage of being really, truly real, whereas the purposes suggested by religion are by definition “illusory.”

Well and good. But then halfway through this peroration, we have as an aside the confession that yes, okay, it’s quite possible given materialist premises that “our sense of self is a neuronal illusion.” At which point the entire edifice suddenly looks terribly wobbly — because who, exactly, is doing all of this forging and shaping and purpose-creating if Jerry Coyne, as I understand him (and I assume he understands himself) quite possibly does not actually exist at all? The theme of his argument is the crucial importance of human agency under eliminative materialism, but if under materialist premises the actual agent is quite possibly a fiction, then who exactly is this I who “reads” and “learns” and “teaches,” and why in the universe’s name should my illusory self believe Coyne’s bold proclamation that his illusory self’s purposes are somehow “real” and worthy of devotion and pursuit? (Let alone that they’re morally significant: But more on that below.) Prometheus cannot be at once unbound and unreal; the human will cannot be simultaneously triumphant and imaginary.

It’s true that even if the conscious self is an illusion, human beings would still have purposes in the sense that any organism has purposes, and our movements — all that travel and reading and dining, in Coyne’s case — wouldn’t just be random or indeterminate. But just as nobody would describe a tree growing toward the sun or a bee returning to the hive as “forging their own purposes” in life, so too Coyne’s promethean language about human agency implies a much higher conception of what a human being IS — both in terms of the reality of consciousness and the freedom afforded to it — than his world-picture will allow.

Obviously the foregoing is not the end of the argument: There are many talented philosophers who have spent their careers trying to iron out this particular kink in the eliminative-materialist fabric, or explaining why it’s not actually a major kink at all, and there’s no reason why you should take a newspaper columnist’s side against their formidable qualifications. But the point is that if you’re going to argue about this, with a newspaper columnist or anyone, you have to actually make the argument; you can’t just blithely assert what looks like contradiction and claim to be defending science and reason against the obscurantism of religion. Or rather, you can – but you won’t make your side look particularly good.

Then further down, here’s Coyne on the morals of a materialist:

As for where altruism comes from, who knows? My own suspicions are that it’s partly genetic and partly cultural, but what’s important is that we feel it and can justify it. I can justify it on several grounds, including that altruism makes for a more harmonious society, helps those in need, and, as a selfish motive, that being altruistic gains you more respect. None of this justification has anything to do with God.

Again, if this is the scientific-materialist’s justification for morality, then the worldview has even more problems than I suggested. Coyne proposes three arguments in favor of a cosmopolitan altruism, two of which are circular: Making a “harmonious society” and helping “those in need” are reasons for altruism that presuppose a certain view of the moral law, in which charity and harmony are considered worthwhile and important goals. (If my question is, “what’s the justification for your rights-based egalitarianism?” saying “because it’s egalitarian!” is not much of an answer.)

The third at least seems to have some kind of Darwinian-ish, quasi-scientific logic, but among other difficulties it’s an argument that only holds so long as the altruistic choice comes at a relatively low cost: If you’re a white Southerner debating whether to speak out against a lynching party or a Dutch family contemplating whether to hide your Jewish neighbors from the SS, the respect factor isn’t really in play — as, indeed, it rarely is in any moral dilemma worthy of the name. (And of course, depending on your ideas about harmony and stability, Coyne’s “harmonious society” argument might also seem like a case against opposing Jim Crow or anti-Semitism — because why rock the boat on behalf of a persecuted minority when stability and order are the greater goods?)

The point that critics make against eliminative-materialism, which Coyne seems not to grasp, is that it makes a kind of hard-and-fast moral realism logically impossible — because if the only real thing is matter in motion, and the only legitimate method of discernment the scientific method, you’ll never get to an absolute “thou shalt not murder” (or “thou shalt risk your life on behalf of your Jewish neighbor”) no matter how cleverly you think and argue. This is not necessarily a theistic objection — it’s one of the issues raised in Thomas Nagel’s controversy-generating book, which explicitly keeps religious ideas at arm’s length — and for that matter there are forms of theism that need not imply moral realism, and Euthyphro-style objections to the union of the two. But I don’t think those of us who still embrace the traditional Western idea of God are crazy to suggest that our cosmology has at least a surface compatibility with moral realism that the materialist conception of the universe’s (nonexistent) purposes seems to lack.

So if you’re going to defend both materialism and modern rights-based liberalism, you have to actually address this point head-on. Make a case for a more limited, non-metaphysical form of moral realism, make a more thoroughgoing attempt to discern some sort of moral teleology in the Darwinian story (though of course Coyne has denounced efforts along these lines as “creationism for liberals”), go full relativist and make a purely aesthetic case for cosmopolitanism, I don’t care what — but give me something that doesn’t either beg the question (“we should help people because it helps people!”) or pretend that there are actually solid selfish reasons for the most costly, heroic, and plainly self-sacrificial forms of non-self-interested behavior.

Finally, I enjoyed Coyne’s parting sally:

Douthat is wrong. The cracks are not in the edifice of secularism, but in the temples of faith. As he should know if he reads his own newspaper, secularism is not cracking up but growing in the U.S. He and his fellow religionists are on the way out, and his columns are his swan song. It may take years, but one fine day our grandchildren will look back on people like Douthat, shake their heads, and wonder why some people couldn’t put away their childish things.

For a man who believes in “a physical and purposeless universe” with no room for teleology, Coyne seems remarkably confident about what direction human history is going in, and where it will end up. For my part, I don’t make any pretense to know what ideas will be au courant a hundred years from now, and as I said in the column, I think there are all kinds of worldviews that could gain ground — at the expense of my own Catholicism and secular materialism alike. (Right now, the territory around pantheism and panpsychism seems ripe for further population, but that’s just a guess.) But I suppose it’s a testament to my own childish faith in the “neuronal illusion” that is the human intellect that I can’t imagine a permanent intellectual victory for a worldview as ill-served by its popularizers as atheism is by Jerry Coyne.

— —

gemli Boston

I have blind spots that will forever prevent me from understanding things at which others excel, but I do have a natural aptitude for the sciences, and evolution is one of the things that I understand. While I will never write with the breadth and depth of a Ross Douthat, I’m convinced that he will never be able to understand what evolution is, how it created the diversity of life from inanimate materials, and how a bunch of neurons could create our sense of self.

I can imagine a group of people who have been isolated from all technology suddenly encountering a tape recorder, and hearing the recorded voice of their dead leader. Should we entertain their likely argument that the spirit of the person is inside the machine? No matter how smart and resourceful they are, they would simply be wrong. This is why it’s impossible to construct any secular argument that would convince Mr. Douthat. It’s not a question of intelligence, but of a blind spot that he’s not aware he has.

Coyne makes perfect sense to me. Morality evolved because those behaviors favored reproduction of our species. Our brains are large neural networks that cause us to think and feel. Even very simple neural networks that you can build with parts from Radio Shack can behave in ways that defy understanding. Only someone unfamiliar with the science could claim that a supernatural explanation was required. There is none so blind as those who will not see.

John Hartford

“This kind of overconfidence has been displayed, at various points in the human story, by everyone from millenarians to Marxists, inquisitors to eugenicists.”

Not to mention conservatives and members of the Republican party? Although not a believer in the supernatural, as apparently Douthat is, I have no problem co-existing with, or even admiring and enjoying aspects of, such beliefs. Singing some of those wonderful Episcopalian hymns is not very different from enjoying the odd pagan survivals like kissing under the mistletoe. Hence I’m not particularly supportive of aggressive atheism any more than I’m supportive of aggressive Catholicism or Islam. That said, it’s hard to argue that philosophically Coyne doesn’t have the better case. Many if not most participants in organized religion do so for tribalistic reasons, and religion remains what it has always been: a source of strife, from unrest in the Middle East to genocide in the Balkans. And in this country we’re seeing militant religious fundamentalists of one denomination or another attempting to impose their views on others or even infiltrate the political system. The founding fathers of this country, who were almost all atheists or agnostics (whatever the social climate at the time compelled them to say), recognized that religion was fundamentally divisive. They thus ensured it was excluded from the political system. It needs to stay that way.

James Wilson Colorado

Find a parent whose child is in serious trouble and you will see the driving force. God did not teach that parent to strive with every ounce of their being to help their child. That does not come from any commandment nor can it be learned or taught. The parent can not predict their behavior until they experience the situation. It is a trivial example of the life force that we do not control and did not invent. Darwinism does not pretend to explain the origin of that life force, but it does a decent job of describing its impact on the organisms and ecosystems of our world. Nobody we know created this universe. We will die without knowing its extent, but we will do everything we can to see that our offspring thrive.

Many enthusiasts for ultimate explanations cheerfully kill and destroy everything that stands opposed to their Church or Party. Better to be uncertain about the meaning of it all than to savage the planet and its inhabitants in the name of a religion or an ideology. The last century teaches us that ideology is wrong, not that the wrong ideas must be killed.

Keeping their fingers off the button should be our first agenda item. Getting rid of the button should be our second. We are unimaginably fortunate to exist at all. And given the failings of our species, there is no reason to believe we will survive the ideologues and religionists who control nuclear weapons and carbon emissions. But we can do small fixes and bungle through, one day at a time.

Richard Bozeman

Douthat, for all his intelligence and writing acumen, tends to make political and emotional arguments. Douthat gleefully leaps on Coyne’s acknowledgment that perhaps “our sense of self is a neuronal illusion,” declaring “the entire edifice suddenly looks terribly wobbly — because … Jerry Coyne, as I understand him … quite possibly does not actually exist at all.” Here is the rub. Even if our sense of self is an illusion (and I believe this), it does NOT imply that the entity “Jerry Coyne” is an illusion! Nor does it imply that this debate is an illusion. Coyne is honest enough to even doubt his own beliefs. Douthat has a profound emotional need for a supernatural and impossible cosmos.

Matt S NYC

I’m sorry, but I have to laugh at Douthat’s arguments that empathy and altruism come from a magic god rather than from naturally selected processes which advance life and species. Look at the history of living organisms for such evidence.

The first cell could “eat,” could grow, could replicate, had little use for other cells. Indeed, other cells were competition for resources. However, cells that did work together found more efficient and effective ways of using resources, or surviving when conditions changed or resources grew scarce. They formed colonies, and in time these colonies led to multicellular life forms. Nature had selected cooperation.

And it continued to do so. Living things that worked in concert tended to survive and thrive, while those that didn’t often fell below on the evolutionary scale, and at times died out.

Look at the Neanderthals. They had more powerful bodies and even larger brains (though these may not have translated to greater brainpower). And yet we survived because of our communal sense. Humans made advancements and taught those advancements to other humans, spreading that advantage quickly. Neanderthals, like our other primate cousins, the evidence suggests, did not.

Eduardo Los Angeles

This is an area of human endeavor that becomes so overcomplicated that the obvious is simply missed. We are ill-equipped to recognize and comprehend how long life has had to develop and evolve, and how that very process self-generates purpose — but not purpose in the goal-setting mentality of human thought and emotion.

Survival and reproduction are the most fundamental assets necessary to have the long-term prospect of greater complexity. But complexity is not a purpose or a goal. There is no designer, no greater power, no omniscient deity. Mice have the same will to live (survive) that humans do, but not the means to contemplate what that means or why it exists.

Humans assume that having this ability means there are answers, but that’s just an assumption. Superstition and mysticism, and then religion, exist only to answer questions that have no “answers.” The only answer is that life is about living, and to do this, survival is essential, which includes reproduction. Too simple and obvious, perhaps.

Religion is less about functional answers than about social control and the human penchant for power and wealth. Having “answers” includes rules and obedience that foster and sustain that penchant.

So, really, despite all the efforts to deduce the purpose of life and its persistence, they are all human fabrications. Thus, it’s easily just as plausible that the meaning of life truly is 42.

Eclectic Pragmatist — http://eclectic-pragmatist.tumblr.com/

John F. McBride Seattle

Why is this argument always presented in emotionally laden language chosen by two sides who don’t appear to even understand that fact, let alone escape it?

What, or who, is “God,” Ross or Jerry? Unless you know of some way of getting out of your heads that “being” or “state of existence” without traditional language, to which you attribute the name ‘god’ and then proceed to vest belief in, or denial of, ‘it,’ this discussion is immediately pointless.

I’ve been reading a collection of biblical scholars’ essays regarding the Dead Sea Scrolls. Among the impressions I’ve formed is how “formless” ‘God’ is in this. Individuals use that name as if it means what it meant to individuals in the time that the Mishna was put down on scrolls, with no way of knowing that.

All that can intellectually honestly be asserted, regardless of the amount of reading and consideration done, is that there may be function outside self and society, and that individually or collectively there’s no success in quite describing it in language, or science.

Why Christianity? Why not Buddhism or Hinduism or Judaism?

Mr. Coyne is disingenuous, too, since he apparently is aware of the puzzle that Uncertainty poses with its spooky science and yet asserts absolutely “there is no ‘god.’”

The arguments here are intellectually pretty but neither strikes me as seeking to achieve what amounts to “a proof” let alone a paradigm shift. I suggest you get together, share a beer and start with a clean sheet of paper.

gemli Boston @John F. McBride,

Thanks for the reply. There may indeed be something going on behind these processes. After all, our whole conception of the universe changed in the 20th century. Mysteries still abound, but everything we know today was once a mystery. The scientific method that looks closely and systematically at the world did a far better job than religions did at finding not only the answers to many of those mysteries, but in revealing new mysteries that we were unaware of. Asking the right questions is as important as finding the right answers.

It’s not that science has all the answers, but that religions make many claims about the world that are either not true, or are impossible to verify. If god both is and is not, what are we to do with that? Coyne says that we can get a reasonable explanation of the universe and our place within it without invoking a god: life can arise from non-living things, evolution explains how complexity emerges from simplicity, and complex neural nets can think and feel.

Douthat says there’s something more. Well of course there’s something more. But that something is very unlikely to be superstition and magic that only appears in ancient texts, that can’t be verified, and that is wrong whenever it makes a definite claim. If language limits what we say, imagine how much more limiting it is when we’re speaking in tongues!

serban Miller Place

The question that Douthat should think about is whether God is necessary to justify moral precepts. Or to be more precise, do humans need a God to justify their actions? I fail to see what difference the existence or non-existence of a real, as opposed to an imagined, God makes to our everyday existence and to our behavior. Most people, whether religious or not, do not even know what it is that they mean by God; any attempt at definition inevitably ends up with some vague, nebulous entity. Furthermore, why would such an entity care whether we believe or not in its existence? To me the most preposterous believers are those who survive some catastrophe that killed thousands and thank God for sparing them.

Matt S NYC

I don’t find Coyne’s view of the direction of history so silly. Take for example the fact that we once asserted (and some gullibly believed) that God selected all monarchs, and thus their rule was always just thanks to God. We certainly don’t govern that way now. Would Douthat assert that Obama was appointed by god, that god guided the vote in his infinite wisdom?

What of trial by combat? Would Douthat have us try George Zimmerman by pitting him against an African American marksman? Why the trouble and expense of evidence-based trials when we could let God choose the victor and dispense justice, as was done in the past?

Icon Chicago

Douthat clearly demonstrates a lack of understanding of the scientific evidence. This piece is full of obtuse misinterpretation. He desperately stumbles over the neuroscience, setting up a straw man of his own misconceptions, fails to grasp basic regression and gleefully jumps on perceived claims that he feels he can refute with logically flawed argument. I am sad that I read this piece and that the Times printed it. Not understanding reality is not proof for the existence of pink unicorns, but Douthat claims on his own faith that it is, and there is no working around such nonsense.

Koyote The Great Plains

I’m not really interested in these wordy dissections. Got enough of that in grad school.

There is scant physical evidence to support theism, and overwhelming evidence to support the scientific (biology, chemistry, astronomy) view of the world and cosmos. Given that most religious believers think all other religions (other than their own) are wrong, and given that all religions require belief in essentially the same supernatural phenomena, theists are inconsistent – religion boils down to a choice to believe in one crazy narrative over other crazy narratives.

John F. McBride Seattle

I admire the way you’ve put this, Gemli. Your argument makes sense to me, including for me by extrapolation the extension of quantum behaviors to nearly a physical level.

But intellectually I can’t completely rule out a “something” going on in all of these processes, even if I’m completely able to conclude as mistaken what ancient humanity, for instance Judaism and then Christianity, described in what is traditionally Western religious language as a human-like supreme being.

Coyne and Douthat are both very accomplished writers and have and own the theater in which to stage this drama. But neither of them seems capable to me of stepping back and accepting that language itself limits how and what we say, and certainly biases in the mind of the “hearer” the content.

I’ll retreat to my usual ancient Taoist, Hindu and Greek position of god both is and is not, neither is nor is not, because ‘god’ is simply a word bantered around as the accepted name for what is yet not known.

whim New York, NY

There is no need at all for an atheist, or a materialist, to be an ‘eliminativist’, relegating all talk of mentality and values to unreality.

If Ross Douthat exists, and his attempts at reasoning exist, and his passions exist, and everything that exists is matter in motion, then Ross Douthat and his attempts at reasoning and his passions are matter in motion, or properties thereof. That does not imply that Ross and his thinkings and his wantings are unreal.

That an adequate description or explanation of everything can be given in terms from physics is obviously false – we need not reach the mental or the normative to see this, the biological will do nicely. Adequate explanations in biology rely on the notion of function, which is not a term from physics. ‘Neuronal network’ is not a term from physics. Nor is a neural network composed of anything non-physical.

Human beings are required by their circumstances to choose what to go for, what to believe, what to do. Justification is an ineliminable feature of our lives. Douthat and Coyne share understandings of how arguments are to be evaluated. Rationality, rather than divinity, accounts for this. And rationality is a far cry, as giving warrant to our choosings and valuings, from any arbitrary existential choice ex nihilo.

Coyne’s speaking of an “illusory” self is charitably understood as the claim that we are not what many of us take ourselves to be, not that we do not exist–an incoherent claim for anyone to make.

Scott Butler Newport News, VA

Over-confidence would seem to apply to advocates of all systems of thought, including Mr. Douthat’s Catholicism. Mr. Douthat more or less presents Mr. Coyne as a representative of all “secularists,” although he calls him “a certain type of atheistic polemicist.” This stereotyping of secularists as arrogant and irrational is a way of not fully engaging the obvious objection to a religious perspective that Mr. Coyne makes: no evidence. If the universe is a manifestation of divinity, that divinity doesn’t appear to be the loving, just, and merciful God of traditional Judaism, Christianity, and Islam, at least not as revealed in the indifferent interplay of natural forces. Cosmology clearly does not have a “surface compatibility with moral realism.” Where does morality come from, then? Us. Does that cut it off from ultimate meaning? Not from ultimate human meaning, I think. We are our own work in progress, whether we admit it or not. Are we “neuronal illusions,” as Mr. Coyne proposes? I don’t know, but the universe is a profound mystery that religion tends to explain too glibly and to trivialize.

EB AZ
Douthat says that for secular naturalists, human consciousness is “probably an illusion.”

Is that the claim, though? I am aware of the claim that the self is an illusion, but that’s not the same thing.

If I stare at a green spot and then turn my gaze to a white surface, I may see something red there. The red spot is an illusion. My eyes are giving me an erroneous message. But my eyesight is not an illusion.

The same thing with the self and consciousness. The message that my consciousness gives me, that I have a self, is arguably erroneous. But that is not to say that my consciousness itself is an illusion.

mmwhite San Diego, CA

“For a man who believes in “a physical and purposeless universe” with no room for teleology, Coyne seems remarkably confident about what direction human history is going in, and where it will end up. ”

He probably did what scientists do – looked at the data he had available (the record of human history to date) and projected a trend. While history has been erratic, there has been a general trend towards increasing respect for and tolerance of people who are “other” – other races, other beliefs, other genders, etc. This shows up most clearly in how we treat others (so we no longer torture and kill those who believe differently, we have given the same rights to women as to men, to the descendants of slaves as to the descendants of the wealthy, to those with physical and mental handicaps as to the robustly healthy). And in treating these “others” as worthy of decency, we have learned there are other ways of viewing the world, which may be at least as correct as our own (I notice Mr. Douthat does not consider any of the myriad of non-Abrahamic religions – don’t they count?)

There has also been a trend to using information supplied by careful observation of the world and logical thought about it to determine what things mean and how they should be done. I don’t think it’s too difficult to put these together to project a trend to a secular, science- (or at least reason-) based society.

“Purposeless” doesn’t mean “utterly random with no connection to what has gone on before”.

Luke Grand Rapids MI

How ironic that Douthat criticizes naturalism for having relativistic morality in contrast to … western theism? Is he joking? Show me how Yahweh demonstrated a consistent understanding of “thou shalt not kill.” Does that include the Amalekites? The babies in Jericho? Slaves? Job’s family? Isaac? Materialism, in contrast, renders morality objective in many instances. I can defend “thou shalt not kill” on the principle of non-contradiction: it’s wrong to commit a course of action that you would not want done to yourself. There you go: objective.

On a purely empirical level, Douthat also has problems. Why are atheists moral, then, if they shouldn’t have any objective basis? He mentions the hiding of Jews from Nazis. According to Oliner and Oliner’s study of rescuers, religion was not a predictor. Or rather, it was a curved relationship: those who were either highly religious or completely nonreligious were most likely to rescue, with the middle of religiosity least likely. Again I ask: what is the evidence that religion in this case provided an objective moral basis that materialism did not?

CastleMan Colorado

What difference does your personal world view make, in terms of what’s actually real? Evolution has happened on Earth, and will keep happening, regardless of whether you think it is real or imagined. As Professor Coyne notes, we are, as far as we know, the only intelligent beings in a cold, dark, and brutally cold universe. We have no proof at all that any god, whether the Christian one or any other, exists. The logical conclusion is that we must derive purpose from our existence; no other force that could create one, or did create one, has done it for us. Again, whether you believe in a god, or God, has nothing to do with what we KNOW. Yes, it’s obvious that we don’t know all about the universe or even about the history of life on this planet, but what we do know indicates pretty clearly that religion is a form of mythology and not a reflection of reality.

Arthur UWS, NYC

” At which point the entire edifice suddenly looks terribly wobbly — because who, exactly, is doing all of this forging and shaping and purpose-creating if Jerry Coyne, as I understand him (and I assume he understands himself) quite possibly does not actually exist at all?”-Douthat

I suggest that Douthat kick Jerry Coyne, as Dr. Johnson kicked a rock to refute Bishop Berkeley’s assertion of the non-existence of matter. Alternately, Jerry Coyne thinks, as does Douthat; therefore he exists, as per Blaise Pascal. If one has to posit a religion to accept existence, then the argument becomes an argument pitting one world view against another, with no possibility of resolution.

I read Douthat’s Christmas column and found it unfathomable, in part, because we do not share the same world views. I will grant that the creche glorifies family, an important social institution or construct, but I could not see how the creche supported democracy. In fact, I take the creche as just as much the glorification of a mother goddess, although a peculiar one.

John F. McBride Seattle @gemli Boston

I agree with your assessment of the retreat of religion and the advance of science; when I began the process of withdrawing from religion decades ago a major factor was its unwillingness to surrender positions, for instance in the forced recantation of Galileo, that were factually disproven.

I don’t possess the expectation for science to have all the answers, and in that I include the works of great sociologists and psychologists, such as Ernest Becker (Denial of Death, The Birth and Death of Meaning), as well as the hard sciences.

But their explanations, and attempts to find further, research-underwritten explanations, make more sense to me than a bishop in Rome asserting the sinfulness of birth control in the face of humanity’s problems and basing that “sin” on … what, exactly?

Still, I don’t expect, let alone demand, the unconditional withdrawal and surrender of religion; and without attempting to attribute any personal description to the phenomenon of “existence,” there is a quantum aspect of coming into that existence, and of “measuring into specific behaviors in reaction to experience” (neurophysiologically we select in experience from openness to wide ranges of sounds, etc., into specific language, culture, etc.) that fascinates me and that leaves open in my mind a “phenomenon” (de Chardin) that lies outside experience and measurement, and therefore religion and science.

Therein is a great conversation, and not one exhibited in this column.

Brad Foley Los Angeles, CA

I feel very little sympathy for Coyne as a human being (either in his old role as a serious scientist, or his new role as self-anointed apostle of atheism). But despite his interpersonal failings, he’s more often than not right. And this case seems to be no exception. To respond to Douthat’s confusion about the atheistic foundation of morality – there’s a perfectly acceptable evolutionary logic underlying Coyne’s claims here. It’s possible he didn’t feel the need to belabor it in the particular post at question, because it’s a very well known argument.

First: Our nature has been shaped by evolution (you can interject your favourite blind/cold/nihilistic adjective here).

Second: Humans have evolved in groups, and in groups the most cooperative, “moral”, individuals prospered. (This is not an article of faith, this is an experimentally tractable premise).

Third: Many of our most deep-seated moral instincts and preferences are thus the product of evolution. Not ‘right’ in any cosmic sense, but real nonetheless (like our predilection for sweets, or bacon).

If we innately enjoy being in cooperative groups, and thrive in these groups, it’s perfectly logical to claim this is a foundation for morality. Much the same way we can say, if we like food, farming is a great thing to encourage in our society. Shifting the explanation from “evolution made us this way” to “God made us this way” really gains us nothing in logical power.

JPalkki South Range

Purpose? Morals? We made them, whether they are in the religious texts or in our own minds. We each interpret them according to our own thinking, or we defer to someone else’s purpose or morals.

If Mr. Douthat is incapable of coming to his own version then I would say he has become one of the sheep following someone else’s versions and really should not criticize a person like Mr. Coyne. At least Mr. Coyne invested some time thinking about it.

Lambert McLaurin Pittsboro, NC 25312

Ross appears to have done his job as columnist very well. I am certainly not smart enough to wade into this discussion, but it is one of the better conversations I have read in a long time. His writing seems to have both stimulated people to think and to write clearly. I learned so much from actually reading the postings. I am not sure that any of this information will change my own views, but they have given me

David Appell Salem, OR

Ross doesn’t get it: Some of us think we can’t believe in any “gods” while, at the same time, being intellectually honest with ourselves.

We are scientific materialists because there is no other honest choice.

January 08, 2014 Brain on Metaphors

This Is Your Brain on Metaphors

By ROBERT SAPOLSKY

Despite rumors to the contrary, there are many ways in which the human brain isn’t all that fancy. Let’s compare it to the nervous system of a fruit fly. Both are made up of cells, of course, with neurons playing particularly important roles. Now one might expect that a neuron from a human will differ dramatically from one from a fly. Maybe the human’s will have especially ornate ways of communicating with other neurons, making use of unique “neurotransmitter” messengers. Maybe compared to the lowly fly neuron, human neurons are bigger, more complex, in some way can run faster and jump higher.

But no. Look at neurons from the two species under a microscope and they look the same. They have the same electrical properties, many of the same neurotransmitters, the same protein channels that allow ions to flow in and out, as well as a remarkably high number of genes in common. Neurons are the same basic building blocks in both species.

So where’s the difference? It’s numbers — humans have roughly one million neurons for each one in a fly. And out of a human’s 100 billion neurons emerge some pretty remarkable things. With enough quantity, you generate quality.

Neuroscientists understand the structural bases of some of these qualities. Take language, that uniquely human behavior. Underlying it are structures unique to the human brain — regions like “Broca’s area,” which specializes in language production. Then there’s the brain’s “extrapyramidal system,” which is involved in fine motor control. The complexity of the human version allows us to do something that, say, a polar bear could never accomplish — sufficiently independent movement of digits to play a trill on the piano, for instance. Particularly striking is the human frontal cortex. While occurring in all mammals, the human version is proportionately bigger and denser in its wiring. And what is the frontal cortex good for? Emotional regulation, gratification postponement, executive decision-making, long-term planning. We study hard in high school to get admitted to a top college to get into grad school to get a good job to get into the nursing home of our choice. Gophers don’t do that.

There’s another domain of unique human skills, and neuroscientists are learning a bit about how the brain pulls it off.

Consider the following from J. Ruth Gendler’s wonderful “The Book of Qualities,” a collection of “character sketches” of different qualities, emotions and attributes:

Anxiety is secretive. He does not trust anyone, not even his friends, Worry, Terror, Doubt and Panic … He likes to visit me late at night when I am alone and exhausted. I have never slept with him, but he kissed me on the forehead once, and I had a headache for two years …

Compassion speaks with a slight accent. She was a vulnerable child, miserable in school, cold, shy … In ninth grade she was befriended by Courage. Courage lent Compassion bright sweaters, explained the slang, showed her how to play volleyball.

What is Gendler going on about? We know, and feel pleasure triggered by her unlikely juxtapositions. Despair has stopped listening to music. Anger sharpens kitchen knives at the local supermarket. Beauty wears a gold shawl and sells seven kinds of honey at the flea market. Longing studies archeology.

Symbols, metaphors, analogies, parables, synecdoche, figures of speech: we understand them. We understand that a captain wants more than just hands when he orders all of them on deck. We understand that Kafka’s “Metamorphosis” isn’t really about a cockroach. If we are of a certain theological ilk, we see bread and wine intertwined with body and blood. We grasp that the right piece of cloth can represent a nation and its values, and that setting fire to such a flag is a highly charged act. We can learn that a certain combination of sounds put together by Tchaikovsky represents Napoleon getting his butt kicked just outside Moscow. And that the name “Napoleon,” in this case, represents thousands and thousands of soldiers dying cold and hungry, far from home.

And we even understand that June isn’t literally busting out all over. It would seem that doing all this should be hard work for the brain. So where did this facility with symbolism come from? It strikes me that the human brain has evolved a shortcut for doing so, one with some major implications.

Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.

Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio.

Or consider pain. Somebody pokes your big left toe with a pin. Spinal reflexes cause you to instantly jerk your foot back just as they would in, say, a frog. Evolutionarily ancient regions activate in the brain as well, telling you about things like the intensity of the pain, or whether it’s a sharp localized pain or a diffuse burning one. But then there’s a fancier, more recently evolved brain region in the frontal cortex called the anterior cingulate that’s involved in the subjective, evaluative response to the pain. A piranha has just bitten you? That’s a disaster. The shoes you bought are a size too small? Well, not as much of a disaster.

Now instead, watch your beloved being poked with the pin. And your anterior cingulate will activate, as if it were you in pain. There’s a neurotransmitter called Substance P that is involved in the nuts and bolts circuitry of pain perception. Administer a drug that blocks the actions of Substance P to people who are clinically depressed, and they often feel better, feel less of the world’s agonies. When humans evolved the ability to be wrenched with feeling the pain of others, where was it going to process it? It got crammed into the anterior cingulate. And thus it “does” both physical and psychic pain.

Another truly interesting domain in which the brain confuses the literal and metaphorical is cleanliness. In a remarkable study, Chen-Bo Zhong of the University of Toronto and Katie Liljenquist of Northwestern University demonstrated how the brain has trouble distinguishing between being a dirty scoundrel and being in need of a bath. Volunteers were asked to recall either a moral or immoral act in their past. Afterward, as a token of appreciation, Zhong and Liljenquist offered the volunteers a choice between the gift of a pencil or of a package of antiseptic wipes. And the folks who had just wallowed in their ethical failures were more likely to go for the wipes. In the next study, volunteers were told to recall an immoral act of theirs. Afterward, subjects either did or did not have the opportunity to clean their hands. Those who were able to wash were less likely to respond to a request for help (that the experimenters had set up) that came shortly afterward. Apparently, Lady Macbeth and Pontius Pilate weren’t the only ones to metaphorically absolve their sins by washing their hands.

This potential to manipulate behavior by exploiting the brain’s literal-metaphorical confusions about hygiene and health is also shown in a study by Mark Landau and Daniel Sullivan of the University of Kansas and Jeff Greenberg of the University of Arizona. Subjects either did or didn’t read an article about the health risks of airborne bacteria. All then read a history article that used imagery of a nation as a living organism with statements like, “Following the Civil War, the United States underwent a growth spurt.” Those who read about scary bacteria before thinking about the U.S. as an organism were then more likely to express negative views about immigration.

Another example of how the brain links the literal and the metaphorical comes from a study by Lawrence Williams of the University of Colorado and John Bargh of Yale. Volunteers would meet one of the experimenters, believing that they would be starting the experiment shortly. In reality, the experiment began when the experimenter, seemingly struggling with an armful of folders, asked the volunteer to briefly hold their coffee. As the key experimental manipulation, the coffee was either hot or iced. Subjects then read a description of some individual, and those who had held the warmer cup tended to rate the individual as having a warmer personality, with no change in ratings of other attributes.

Another brilliant study by Bargh and colleagues concerned haptic sensations (I had to look the word up — haptic: related to the sense of touch). Volunteers were asked to evaluate the resumes of supposed job applicants where, as the critical variable, the resume was attached to a clipboard of one of two different weights. Subjects who evaluated the candidate while holding the heavier clipboard tended to judge candidates to be more serious, with the weight of the clipboard having no effect on how congenial the applicant was judged. After all, we say things like “weighty matter” or “gravity of a situation.”

What are we to make of the brain processing literal and metaphorical versions of a concept in the same brain region? Or that our neural circuitry doesn’t cleanly differentiate between the real and the symbolic? What are the consequences of the fact that evolution is a tinkerer and not an inventor, and has duct-taped metaphors and symbols to whichever pre-existing brain areas provided the closest fit?

Jonathan Haidt, of the University of Virginia, has shown how viscera and emotion often drive our decisionmaking, with conscious cognition mopping up afterward, trying to come up with rationalizations for that gut decision. The viscera that can influence moral decisionmaking and the brain’s confusion about the literalness of symbols can have enormous consequences. Part of the emotional contagion of the genocide of Tutsis in Rwanda arose from the fact that when militant Hutu propagandists called for the eradication of the Tutsi, they iconically referred to them as “cockroaches.” Get someone to the point where his insula activates at the mention of an entire people, and he’s primed to join the bloodletting.
But if the brain confusing reality and literalness with metaphor and symbol can have adverse consequences, the opposite can occur as well. At one juncture just before the birth of a free South Africa, Nelson Mandela entered secret negotiations with an Afrikaans general with death squad blood all over his hands, a man critical to the peace process because he led a large, well-armed Afrikaans resistance group. They met in Mandela’s house, the general anticipating tense negotiations across a conference table. Instead, Mandela led him to the warm, homey living room, sat beside him on a comfy couch, and spoke to him in Afrikaans. And the resistance melted away.

This neural confusion about the literal versus the metaphorical gives symbols enormous power, including the power to make peace. The political scientist and game theorist Robert Axelrod of the University of Michigan has emphasized this point in thinking about conflict resolution. For example, in a world of sheer rationality where the brain didn’t confuse reality with symbols, bringing peace to Israel and Palestine would revolve around things like water rights, placement of borders, and the extent of militarization allowed to Palestinian police. Instead, argues Axelrod, “mutual symbolic concessions” of no material benefit will ultimately make all the difference. He quotes a Hamas leader who says that for the process of peace to go forward, Israel must apologize for the forced Palestinian exile in 1948. And he quotes a senior Israeli official saying that for progress to be made, Palestinians need to first acknowledge Israel’s right to exist and to get their anti-Semitic garbage out of their textbooks.

Hope for true peace in the Middle East didn’t come with the news of a trade agreement being signed. It was when President Hosni Mubarak of Egypt and King Hussein of Jordan attended the funeral of the murdered Israeli prime minister Yitzhak Rabin. That same hope came to the Northern Irish, not when ex-Unionist demagogues and ex-I.R.A. gunmen served in a government together, but when those officials publicly commiserated about each other’s family misfortunes, or exchanged anniversary gifts. And famously, for South Africans, it came not with successful negotiations about land reapportionment, but when black South Africa embraced rugby and Afrikaans rugby jocks sang the A.N.C. national anthem.

Nelson Mandela was wrong when he advised, “Don’t talk to their minds; talk to their hearts.” He meant talk to their insulas and cingulate cortices and all those other confused brain regions, because that confusion could help make for a better world.

Robert Sapolsky is John A. and Cynthia Fry Gunn Professor of Biology, Neurology and Neurosurgery at Stanford University, and is a research associate at the Institute of Primate Research, National Museums of Kenya. He writes frequently on issues related to biology and behavior. His books include “Why Zebras Don’t Get Ulcers,” “A Primate’s Memoir,” and “Monkeyluv.”
----------
salgadoce  North America
It’s great that we can have a scientific account of how people reason and make decisions and perceive intention and metaphors, etc. I’m all for cognitive neuroscience elucidating the ‘how’ of the mind.

But, as you mention, the people who come to understand the mind and its insulas and cingulate cortices can then use this knowledge to manipulate the minds of others in both ‘good’ and ‘bad’ ways; you can be a force for good, like Mandela, or a force for evil, like Hitler or Bagosora.

Ultimately, the only way this type of cognitive insight is going to be of any use for the world is if it is put to use by *good* people – people who are noble, benevolent, virtuous, even kind-hearted.

Until science can find a way to talk about what is good, what is moral, what is noble, what is valuable – the questions that, so far, only traditional, ‘super-scientific’ philosophy has been able to address – I believe Nelson Mandela is actually ‘righter’ and wiser than you think when he says that we must speak and listen to our hearts, for speaking to a person’s mind can be effective and create a better world only if the *speaker’s* heart is pure and imbued with love.

pgm3    Cambridge, UK
All language, including mathematics, is metaphor. A “tree” doesn’t know it is a tree, and “running” is just two sounds we use to invoke an image of ongoing movement. I have thus always been a bit puzzled by the concept of the logical fallacy known as “arguing from analogy”. While it is true that the concept of a circle necessarily engages also the concept of, say, Pi, it is also true that no falling object “knows” that it should fall in a parabolic trajectory; this is simply our way of expressing what we observe in the most general terms. Every description of the a posteriori universe is just that, a description, and that involves a degree of approximation. Rational thought is in fact the art of thinking in ratios, in comparisons of one thing in terms of its proportion to another.

Josh Josephs   London
This piece is a very astute analysis, expounding, in modern empirical scientific terminology, Locke’s theory of associations. However, this standpoint, that we are somehow driven by metaphorical associations because we mix them up with reality, does a great disservice to humanity. Our actions become driven merely by emotional confusions, consisting of our conflated desires and repulsions. This somehow diminishes or overlooks the intrinsic value that we place on those objects and aspirations that matter to us because they reflect our deeply held values.

The Afrikaans general who willingly came to an agreement with Nelson Mandela in the homey comfort of a well-designed living room probably did not do so just because his hard heart was so easily melted. Rather, he was able to arrive at a certain intellectual conciliation and meeting of ideas for which the living room served as a setting where this could come to a resolution. To pretend that the setting was somehow the cause rather than the diplomatic lubricant is to overlook the intensity and depth of ideological divisions. A policy derived from this theory would end up trying to bring world peace via the manipulation of symbols, hoping to stir up emotions of universal love.

The deliberate deployment of symbols with the sole purpose of invoking emotions is, of course, manipulation. The whole hope for humanity lies in our not being creatures who can be moved only by emotional manipulation. We must believe that there are values we hold for their own sake, because we have understood them and understood that they are true and worthy of our commitment. The symbols should just be the signs that reflect our better judgment, and not the other way around. This is not to say that there are no people who are easily manipulated through the emotional call of symbols; some of the worst depredations of the last century have clearly demonstrated that this tendency is real. It is only to say that we can have enduring belief and trust in our humanity only if we know that we are able to be guided by reasoned judgment that can assess and reconcile values and ideas, and not just be moved by emotional confusions.

Sal Anthony    Queens, NY
Dear Professor Sapolsky,
A wonderful essay but for the notion that confusion reigns in the realm of mind. As Lao Tzu said, whether a man dispassionately sees to the core of life or passionately sees the surface makes no difference, for the core and the surface are essentially the same, words only making them appear different. Similarly, it was Nietzsche who said that for the mighty oak to scrape the sky with its branches it must sink its roots deep into the bowels of the earth.

All creatures great and small arose from that primordial stew, so why should it seem odd that we confuse the metaphorical with the literal – it was the literal that brought us into being and roused us into consciousness. Whether guided by a creator or not, it was the material world that conferred the infrastructure for our mighty minds, so why expect a differentiation that shouldn’t exist?

Real confusion attends whenever we try to fix things without taking our origins into account, by seeking solutions for rational automatons who don’t exist, instead of recognizing our nature and acting in accordance with it, at least, to the extent our addled, emotionally-driven, mixed-up-metaphor selves can manage.

Patricia   Pasadena, CA
Nice article. Except the Romans did not wash their hands to absolve their sins. And Pilate wouldn’t have thought he was sinning. He was executing a man who had broken the most important rule of the reign of Emperor Tiberius by failing to acknowledge Tiberius as a god. Pilate would not have thought that executing such a man was a sin. To Pilate that would have been the essence of Roman cleanliness and piety. An act of cleansing for the Empire itself.
As for averting warfare in light of our history – good luck with that.

fleep   Los Angeles, California
Among the things ‘duct taped’ to the human brain is the symbolic act of civic governance. We don’t have an archetype called “President of the United States” in our brains, but we do have something called “daddy.” Our obsession with calling the people who started this country the “Founding Fathers,” and the need to call Washington “the father of our country,” is about taking the very new concept of elected government and wiring it to the dominant alpha male perceptor in our heads.

The past two years have been about certain brains in this country reacting viscerally to an alpha male that doesn’t look like their daddy. It’s couched in phrases like ‘losing our freedoms.’ It is unsettling to them in ways that transcend coherent thought or debate. And for that single reason, I have a drop or two of compassion in my heart for those so visibly upset.

MKK  Indianapolis, IN
The personality currently using this to greatest effect is Sarah Palin. I really hate that woman, but her “momma grizzly” pitch is brilliant. Every mother has had a “Momma Grizzly” moment, and it is one of the most visceral reactions you can experience. When she uses those words, that same feeling makes you think you need to start protecting something. It takes a couple of seconds to realize it is just Sarah crying wolf again, trying to stir the pot about something.

casey   Ohio

Susan Sontag’s “Illness as Metaphor” brilliantly illuminates the danger of figurative language. She observed that cancer in the late 19th and early 20th centuries became a metaphor for death, which is not terribly encouraging for those who receive a cancer diagnosis. She also notes the sometimes-ugly consequences of false correlations between certain illnesses and certain personality types. For example, tuberculosis was seen as a disease characteristic of the poetic and sensitive type — John Keats, for example. It thus became associated with beauty. Therefore a 19th-century man might almost desire to have a consumptive, invalid wife. Tuberculosis, of course, is anything but beautiful; a death via TB is typically messy, agonizing — generally horrible. Once the true cause of TB was discovered, its metaphoric status quickly changed, just as cancer’s metaphoric status is slowly changing as our treatments for it become more effective.

Another example of a misuse of metaphor: the “War” on Terror. Terrorism is a tactic; it has always existed, it always will exist. It cannot be defeated in the way an army or a government can be defeated. The use of metaphor in this case, then, is misleading and potentially pernicious. Same with the “War” on drugs. To call it a “war” is to call up romantic associations with patriotism, honor, glory, etc. It assists those in whose interest it is to keep this pointless government activity going, despite its utter failure to erode drug use and the tremendous cost, both monetary and moral. No one involved in a war wants to “surrender,” after all. It’s shameful.

December 11, 2013 The Great Divide

The Great Divide: Rich People Just Care Less

By DANIEL GOLEMAN

Turning a blind eye. Giving someone the cold shoulder. Looking down on people. Seeing right through them.

These metaphors for condescending or dismissive behavior are more than just descriptive. They suggest, to a surprisingly accurate extent, the social distance between those with greater power and those with less — a distance that goes beyond the realm of interpersonal interactions and may exacerbate the soaring inequality in the United States.

A growing body of recent research shows that people with the most social power pay scant attention to those with little such power. This tuning out has been observed, for instance, with strangers in a mere five-minute get-acquainted session, where the more powerful person shows fewer signals of paying attention, like nodding or laughing. Higher-status people are also more likely to express disregard, through facial expressions, and are more likely to take over the conversation and interrupt or look past the other speaker.

Bringing the micropolitics of interpersonal attention to the understanding of social power, researchers are suggesting, has implications for public policy.

Of course, in any society, social power is relative; any of us may be higher or lower in a given interaction, and the research shows the effect still prevails. Though the more powerful pay less attention to us than we do to them, in other situations we are relatively higher on the totem pole of status — and we, too, tend to pay less attention to those a rung or two down.

A prerequisite to empathy is simply paying attention to the person in pain. In 2008, social psychologists from the University of Amsterdam and the University of California, Berkeley, studied pairs of strangers telling one another about difficulties they had been through, like a divorce or death of a loved one. The researchers found that the differential expressed itself in the playing down of suffering. The more powerful were less compassionate toward the hardships described by the less powerful. Dacher Keltner, a professor of psychology at Berkeley, and Michael W. Kraus, an assistant professor of psychology at the University of Illinois, Urbana-Champaign, have done much of the research on social power and the attention deficit.

Mr. Keltner suggests that, in general, we focus the most on those we value most. While the wealthy can hire help, those with few material assets are more likely to value their social assets: like the neighbor who will keep an eye on your child from the time she gets home from school until the time you get home from work. The financial difference ends up creating a behavioral difference. Poor people are better attuned to

November 13, 2013 Peacemaker

How to become a peacemaker

Four broad, ancient guidelines for human behavior at the heart of every religion.
By James Ishmael Ford
10.14.13

(Note:  Jeannette will bring in a summary of the “Declaration Toward a Global Ethic” though you are encouraged to download it from the link below and review it.)

In 1993, on the one hundredth anniversary of the World’s Columbian Exposition’s Parliament of the World’s Religions, a second parliament gathered in Chicago. The highlight for many was an address by the Dalai Lama. For me the most important thing to come out of that gathering was a document, the “Declaration Toward a Global Ethic” (PDF). The principal author was the Rev. Hans Küng, a Roman Catholic priest and scholar. I once heard Küng, a controversial figure within his church, described as the Catholic Church’s finest Lutheran theologian—which is, perhaps, a way of acknowledging that he is one of the ecumenical Christian community’s finest minds.

The document was signed by 200 religious leaders, including the Rev. Dr. Robert Traer of the General Assembly of Unitarian and Free Christian Churches in the United Kingdom. It asserted there are “four broad, ancient guidelines for human behavior which are found in most of the religions of the world,” which it listed as “irrevocable directives” for those who would find peace for the planet.

These irrevocable directives are: 1) a commitment to a culture of nonviolence and respect for life; 2) a commitment to a culture of solidarity and a just economic order; 3) a commitment to a culture of tolerance and a life of truthfulness; and 4) a commitment to a culture of equal rights and partnership between men and women.

I’m deeply moved by this analysis, which I think cuts through the fog of the conservative part of religions, that part which is meant to sustain and transmit a particular culture—defining inside and outside, us and them—and which is so often used as a club to beat people into conformity. Instead, the directives point to the radical heart of pretty much all religions, that part which opens us to the finest of what it means to be human.

The first of the directives, the first grand intuition of our deep humanity, is that in spite of our natural proclivities to violence, there is a better way. The second tells us we are genuinely responsible for each other. The third points to our need for broad tolerance, which is found within our commitment to genuine honesty with ourselves and with each other. And, finally, the fourth—so buried in so many religions, but implicit at their heart—reminds us that women and men need each other, and can only heal from the wounds of life when we see we are all in it together as equals. (I would add that the issues of sexual minorities are bound up with this last assertion, inevitably, inextricably.)

These four directives offer a life of authenticity and truth and a way of healing for hearts and a world torn by strife. For me there’s a next step that takes this document and its four irrevocables from ink on paper (or pixels on a screen) into our actual lives. There is a Japanese saying, gyogaku funi, which means “practice and study are not two.” And it is here I find myself thinking of Bernie Glassman.

Bernie is one of those characters who steps on the scene, and after they pass through, everything is a bit different. He originally was meant to be a rocket scientist, earning a Ph.D. in applied mathematics from UCLA. He did some of that rocket work. While at grad school, however, he met the Japanese Zen missionary Taizan Maezumi. Eventually Bernie became the Zen master’s first American dharma successor. He would go on to have an unconventional Zen career, first as a pretty conventional Zen priest and teacher, but then dropping the priest part, putting on a clown nose, literally going to clown school, and calling on people to lighten up. From there he went on to various social justice-oriented projects, including the Greyston Foundation, a Zen center that has evolved into a social service agency focused on the needs of the homeless and hungry. He is also deeply focused on issues of peace and peacemaking.

In service of that goal Bernie and his then spouse Sandra Jishu Holmes took those four irrevocable directives from the world parliament to heart and created what is now called the Zen Peacemakers. Along the way he reframed the directives as four commitments. And they’ve caught on. I would say most people aware of these four things think they came from Bernie’s fruitful heart. My own Zen community, Boundless Way Zen, which has no connection to Bernie, has incorporated these commitments into the vows we take when we formally undertake a spiritual life.

As the leader says in the ceremony for our Zen community, “The wheel of the dharma turns and turns. Each generation manifests the great way. Today we commit ourselves to the way of awakening, manifesting as peacemakers in a world torn by strife.” It’s time to step up to the plate. It’s time to do things.

Then, in the ceremony, the new initiate on the way responds together with all those who’ve made the commitment before, “I commit myself to a culture of nonviolence and reverence for life; I commit myself to a culture of solidarity and a just economic order; I commit myself to a culture of tolerance and a life based on truthfulness; and I commit myself to a culture of equal rights and partnership between men and women.”

Bernie’s gift to us wasn’t just grabbing these four insights and pitching them to the world, but rather finding a clear, if not easy, path to their realization. Study and practice as one thing. He took something from the standard Japanese-derived Zen initiation ceremony, called the Three Pure Precepts, and reframed them as guidance for people of any and perhaps of no spiritual tradition, inviting us to plunge into the unknown, to bear witness to the pain and joy of the world, and to strive to heal oneself and the world.

The call is to become peacemakers.

Let me repeat the method: plunge into the unknown. Bear witness to the pain and the joy of the world. Strive to heal oneself and the world. I think of the first two as invitations into the very heart of life; the third—healing, for myself and for the world—grows out of them as if through a secret alchemical formula. Plunge into the unknown, bear witness to the world’s pain and joy, that you may bring healing. This is how we become peacemakers.

Adapted from “Plunging Into the Unknown: How to be a Peacemaker in a World Torn by Strife,” a sermon preached to First Unitarian Church of Providence, Rhode Island, September 22, 2013 (Monkey Mind).

October 23, 2013 Rituals

Probing Culture’s Secrets, From Capuchins to Children

Michael Balter

LONDON—Scientists once designated culture as the exclusive province of humans. But that elitist attitude is long gone, as evidenced by a recent meeting* here on how culture, usually defined as the passing on of traditions by learning from others, arises and changes. The 700 attendees, a mixture of researchers and members of the public, heard talks on cultural transmission in fish, meerkats, birds, and monkeys, as well as in extinct and living humans. Researchers probed questions such as what sparks cultural trends and how complex traditions are transmitted, and most agreed that studies of both animals and children will provide important clues. “The field of cultural evolution ranges from fish to humans and includes child development,” says meeting co-organizer Andrew Whiten, a psychologist at the University of St. Andrews in the United Kingdom.
But why do certain cultural trends, such as fashions, begin and catch on? Even science finds it hard to answer that question. At the meeting, anthropologist Susan Perry of the University of California (UC), Los Angeles, described her team’s work observing white-faced capuchin monkeys since the early 1990s at several sites in Costa Rica. The monkeys have adopted a number of local traditions, some directly related to foraging for food, such as either cracking or rubbing woody capsules of Luehea fruits to get out their seeds. But other traditions have no clear survival purpose, such as sniffing each other’s fingers and inserting them into a companion’s nose, or biting off a big chunk of another monkey’s fur and holding it in the mouth while he or she playfully tries to get it back. Although foraging traditions tend to be long-lasting, Perry has found that, perhaps like some human fashions, these more mysterious capuchin trends tend to last only about 10 years or so before fading.
In one group of capuchins, the team’s long-term observations have allowed them to witness a rare event: the emergence of a new tradition. In what Perry calls a “bizarre” and “high-risk” ritual, the monkeys poke each other’s eyeballs. One monkey will insert his or her long, sharp, dirty fingernail deep into the eye socket of another animal, between the eyelid and the eyeball, up to the first knuckle. In videos Perry played for the meeting, the monkeys on the receiving end of the fingernail, typically social allies, could be seen to grimace and bat their eyelids furiously (as did many members of the audience) but did not attempt to remove the finger or otherwise object to the treatment. Indeed, during these eye-poking sessions, which last up to an hour, monkeys insisted on the finger being reinserted if it popped out of the eye socket.
Why would the monkeys do something potentially dangerous? Perry suggests that capuchins, which, like humans, are highly cooperative and live in large groups, use this apparently pain-inflicting behavior to test the strength of their social bonds. Back in the 1970s, evolutionary biologist Amotz Zahavi of Tel Aviv University in Israel suggested that some animals engage in certain behaviors to solidify alliances, and researchers have observed some examples. For example, some male baboons will hold each other’s testicles before teaming up to fight higher-ranking individuals, apparently to establish trust before going into battle.
When it comes to the capuchins, “this is a plausible hypothesis,” Whiten says, especially because more functional explanations do not seem to explain the eye poking. Nevertheless, Whiten adds, “it is difficult to test directly.”
Perry notes that capuchin behaviors such as eye poking and cracking fruit capsules are true traditions, but they don’t ratchet up into the kinds of complex culture prevalent in every human society, from language to literature to sophisticated technology. Animal traditions lack this cumulative cultural evolution.
How do humans wind up the cultural ratchet? At the meeting, Derek Lyons, a developmental psychologist at UC Irvine, presented new data on a phenomenon in young children that he and others think may be key to humans’ faithful transmission of complex culture: “overimitation,” or the tendency to copy the actions of an adult even when they are unnecessary for achieving a goal. No other animal has been shown to copy in this way, Lyons and others say.
Lyons’s work builds on a landmark 2005 study by Whiten and primatologist Victoria Horner, now at Emory University in Atlanta. They demonstrated that when young chimpanzees and children are shown how to retrieve a reward from a box using a series of both relevant and irrelevant steps, the chimps skipped the unnecessary steps, whereas children tended to imitate everything. Recent work by another team suggests that overimitation is universal in human children (http://news.sciencemag.org/sciencenow/2010/05/kids-overimitate-adults-regardle.html). Lyons and his co-workers reported further work in 3- to 5-year-old children in 2007 in the Proceedings of the National Academy of Sciences. For example, children were shown how to retrieve toy turtles from transparent plastic containers using irrelevant steps such as tapping the container with a feather and relevant steps such as opening the container’s door. The children continued to overimitate even when they were led to believe that the experiment was over or when they were explicitly told to avoid “silly” extra steps.
Why do children do this? In London, Lyons played a new series of videotaped experiments with children of the same ages in which he attempted to, as he put it, “snap them out of ” their overimitative tendencies. In one experiment, a puppet orangutan named Felix, stationed at an opening on the other end of the box, competed with the children to see who could get the toy turtle out of the box first. Again, Lyons showed each child how to get the turtle while mixing in irrelevant actions such as tapping the box and pushing unnecessary levers. The children, who could not see what Felix was doing, continued to perform most of Lyons’s irrelevant actions, even when Felix kept winning and getting the turtle.
The only way to avoid overimitation, Lyons found, was to convey that one of his actions was unintentional. When he pretended to get a call from his mother on his cell phone and “accidentally” flipped a useless lever while gesturing during the supposed conversation, the children did not flip that lever.
These findings are inconsistent with earlier hypotheses that children overimitate to please adults, Lyons said. Rather, he concluded, they support something he called “automatic causal encoding” (ACE), in which a child assumes that the adult knows what he or she is doing and that each step in the procedure is necessary. “ACE is an important mechanism kids use to bootstrap their knowledge of complex artifacts,” he says. Archaeologist Dietrich Stout of Emory University, who studies prehistoric tool making, says ACE may have been important for the cultural transmission of stone-tool technologies in early hominins. “Certain things, like the internal workings of the plastic box or the precise force with which to hit a stone core, are not directly available to the observer,” Stout says. He agrees with Lyons that such a strategy is “a logical approach when confronted with a complicated, unfamiliar artifact.”
Uta Frith, a cognitive neuroscientist at University College London, concurs. “This is an example of actions for which we cannot see rhyme or reason but which we believe are important and relevant to us,” Frith says. “I am persuaded that this is the secret of the evolution of human culture.”
----------

Mind and Matter: How Irrational Rituals Bring Us Together

Alison Gopnik

Human beings love rituals. Of course rituals are at the center of religious practice.  But even secularists celebrate the great transitions of life with arbitrary actions, formalized words and peculiar outfits.  To become part of my community of hard-headed, rational, scientific Ph.D.s, I had to put on a weird gown and even weirder hat, walk solemnly down the aisle of a cavernous building, and listen to rhythmically intoned Latin.

Our mundane actions are suffused with arbitrary conventions, too.  Grabbing food with your hands is efficient and effective, but we purposefully slow ourselves down with cutlery rituals.  In fact, if you’re an American, the chances are that you cut your food with your fork in your left hand, then transfer the fork to your right hand to eat the food, and then swap it back again.  You may not even realize that you’re doing it.  That elaborate fork and knife dance makes absolutely no sense.

But that is the central paradox of ritual.  Rituals are intentionally useless, purposefully irrational.  So why are they so important to us?

The cognitive psychologist Cristine Legare at the University of Texas at Austin has been trying to figure out where rituals come from and what functions they serve.  One idea is that rituals declare that you are a member of a particular group.

Everybody eats, but only Americans swap their knives and forks.  (Several spy movies have used this as a plot point).  Sharing your graduation ceremony marks you as part of the community of Ph.D.’s more effectively than the solitary act of finishing your dissertation.

The fact that rituals don’t make practical sense is just what makes them useful for social identification.  If someone just puts tea in a pot and adds hot water then I know only that they are a sensible person who wants tea.  If instead they kneel on a mat and revolve a special whisk a precise number of times, or carefully use silver tongs to drop exactly two lumps into a china cup, I can conclude that they are members of a particular aristocratic tea culture.

It turns out that rituals are deeply rooted and they emerge early.  Surprisingly young children are already sensitive to the difference between purposeful actions and rituals, and they adopt rituals for themselves.

In a new paper forthcoming in the journal Cognition, Dr. Legare and colleagues showed 3- to 6-year-old children a video of people performing a complicated sequence of eight actions with a mallet and a pegboard.  Someone would pick up the mallet, place it on one side, push up a peg with her hand, etc.  Then the experimenters gave the children the mallet and pegboard and said “Now it’s your turn.”

You could interpret this sequence of actions as an intelligent attempt to bring about a particular outcome, pushing up the pegs.  Or you could interpret it as a ritual.

Sometimes the children saw a single person perform the actions twice.  Sometimes they saw two people perform the actions simultaneously.  The identical synchronous actions suggested that the two people were from the same group.

When they saw two people do exactly the same thing at the same time, the children produced exactly the same sequence of actions themselves.  They also explained their actions by saying things like “I had to do it the way that they did.”  They treated the actions as if they were a ritual.

When they saw the single actor, they were much less likely to imitate exactly what the other person did. Instead, they treated it like a purposeful action.  They would vary what they did themselves to make the pegs pop up in a new way.

Dr. Legare thinks that, from the time we are very young children, we have two ways of thinking about people – a “ritual stance” and an “instrumental stance.”  We learn as much from the irrational and arbitrary things that people do as from the intelligent and sensible ones.
----------

Sense and Superstition

By JANE L. RISEN and A. DAVID NUSSBAUM

SUPERSTITIOUS people do all sorts of puzzling things. But it’s not just the superstitious who knock on wood. From time to time, we all rap our knuckles on a nearby table if we happen to let fate-tempting words slip out. “The cancer is in remission, knock on wood,” we might say.

In fact, it’s so common we often don’t think about it. But it’s worth asking: why do people who do not believe that knocking on wood has an effect on the world often do it anyway? Because it works.

No, knocking on wood won’t change what happens. The cancer is no more likely to stay in remission one way or the other. But knocking on wood does affect our beliefs, and that’s almost as important.

Research finds that people, superstitious or not, tend to believe that negative outcomes are more likely after they “jinx” themselves. Boast that you’ve been driving for 20 years without an accident, and your concern about your drive home that evening rises. The superstitious may tell you that your concern is well founded because the universe is bound to punish your hubris. Psychological research has a less magical explanation: boasting about being accident-free makes the thought of getting into an accident jump to mind and, once there, that thought makes you worry.

That makes sense intuitively. What’s less intuitive is how a simple physical act, like knocking on wood, can alleviate that concern.

In one study, to be published in the Journal of Experimental Psychology: General, one of us, Jane L. Risen, and her colleagues Yan Zhang and Christine Hosey induced college students to jinx themselves by asking half of them to say out loud that they would definitely not get into a car accident this winter. Compared with those who did not jinx themselves, these students, when asked about it later, thought it was more likely that they would get into an accident.

After the “jinx,” in the guise of clearing their minds, we invited some of these students to knock on the wooden table in front of them. Those who knocked on the table were no more likely to think that they would get into an accident than students who hadn’t jinxed themselves in the first place. They had reversed the effects of the jinx.

Knocking on wood may not be magical, but superstition proved helpful in understanding why the ritual was effective. Across cultures, superstitions intended to reverse bad luck, like throwing salt or spitting, often share a common ingredient. In one way or another, they involve an avoidant action, one that exerts force away from oneself, as if pushing something away.

This pushing action turns out to be important, because people’s beliefs are often influenced by bodily feelings and movements. For example, other research shows that people tend to agree with the same arguments more when they hear them while they are nodding their head up and down (as if they were saying “yes”) rather than shaking it from side to side (as if they were saying “no”).

Because people generally push bad things away, we suggest that they may have built up an association between pushing actions and avoiding harm or danger. This led us to speculate that when people knock on wood, or throw salt, or spit, the ritual may help calm the mind, because such avoidant actions lead people to simulate the feelings, thoughts and sensations they experience when they avoid something bad.

To test this, in our knocking-on-wood experiment we asked some people to knock down on the table and away from themselves, while we had others knock up on the underside of the table, toward themselves. Those who knocked up engaged in an approach action, not an avoidant one. Despite knocking on wood, people who knocked up failed to reverse the perceived jinx; if anything, their concerns were made worse compared with people who did not knock at all.

Next we tested whether avoidant movements would have the same effect in situations free from the baggage of superstition. Instead of having participants knock down on wood after jinxing themselves, we had them throw a ball (also an avoidant action, but not one associated with a superstition). We conducted two studies, one in Chicago and another in Singapore. We found that the act of throwing a ball also reduces people’s concerns following a jinx, in either culture. Even pretending to throw a ball has the same effect as actually throwing it.

While almost any behavior can be turned into a superstitious ritual, perhaps the ones that are most likely to survive are those that happen to be effective at changing how we feel. We can seek to rid ourselves of superstitions in the name of enlightenment and progress, but we are likely to find that some may be hard to shake because, although they may be superficially irrational, they may not be unreasonable. Superstitious rituals can really work — but it’s not magic, it’s psychology.

Jane L. Risen and A. David Nussbaum are, respectively, an associate professor of behavioral science and an adjunct assistant professor of behavioral science at the Booth School of Business at the University of Chicago.

October 9, 2013 Without God (Rerun from Dec 2009)

Without God

By Steven Weinberg

In his celebrated 1837 Phi Beta Kappa Oration at Harvard, titled “The American Scholar,” Ralph Waldo Emerson predicted that a day would come when America would end what he called “our long apprenticeship to the learning of other lands.” His prediction came true in the twentieth century, and in no area of learning more so than in science. This surely would have pleased Emerson. When he listed his heroes he would generally include Copernicus and Galileo and Newton along with Socrates and Jesus and Swedenborg. But I think that Emerson would have had mixed feelings about one consequence of the advance of science here and abroad—that it has led to a widespread weakening of religious belief.[1]

Emerson was hardly orthodox—according to Herman Melville, he felt “that had he lived in those days when the world was made, he might have offered some valuable suggestions”—but he was for a while a Unitarian minister, and he usually found it possible to speak favorably of the Almighty. Emerson grieved over what he saw in his own time as a weakening of belief, as opposed to mere piety and churchgoing, in America and even more so in England, though I can’t say that he attributed it to the advance of science.

The idea of a conflict between science and religion has a long pedigree. According to Edward Gibbon, it was the view of the Byzantine church that “the study of nature was the surest symptom of an unbelieving mind.” Perhaps the best-known portrayal of this conflict is a book published in 1896 by Cornell’s first president, Andrew Dickson White, with the title A History of the Warfare of Science with Theology in Christendom.

In recent times there has been a reaction against talk of warfare between science and religion. White’s “conflict thesis” was attacked in a 1986 paper by David Lindberg and Ronald Numbers, both well-known historians of science, who pointed out many flaws in White’s scholarship. The Templeton Foundation offers a large prize to those who argue that there is no conflict between science and religion. Some scientists take this line because they want to protect science education from religious fundamentalists. Stephen Jay Gould argued that there could be no conflict between science and religion, because science deals only with facts and religion only with values. This certainly was not the view held in the past by most adherents of religion, and it is a sign of the decay of belief in the supernatural that many today who call themselves religious would agree with Gould.

Let’s grant that science and religion are not incompatible—there are after all some (though not many) excellent scientists, like Charles Townes and Francis Collins, who have strong religious beliefs. Still, I think that between science and religion there is, if not an incompatibility, at least what the philosopher Susan Haack has called a tension, that has been gradually weakening serious religious belief, especially in the West, where science has been most advanced. Here I would like to trace out some of the sources of this tension, and then offer a few remarks about the very difficult question raised by the consequent decline of belief, the question of how it will be possible to live without God.

1.

I do not think that the tension between science and religion is primarily a result of contradictions between scientific discoveries and specific religious doctrines. This is what chiefly concerned White, but I think he was looking in the wrong direction. Galileo remarked in his famous letter to Grand Duchess Christina that “the intention of the Holy Ghost is to teach us how to go to heaven, not how heaven goes,” and this was not just his opinion; he was quoting a prince of the Church, Cardinal Baronius, the Vatican librarian. Contradictions between scripture and scientific knowledge have occurred again and again, and have generally been accommodated by the more enlightened among the religious. For instance, there are verses in both the Old and New Testament that seem to show that the earth is flat, and as noted by Copernicus (quoted by Galileo in the same letter to Christina) these verses led some early Church fathers like Lactantius to reject the Greek understanding that the earth is a sphere, but educated Christians long before the voyages of Columbus and Magellan had come to accept the spherical shape of the earth. Dante found the interior of the spherical earth a convenient place to store sinners.

What was briefly a serious issue in the early Church has today become a parody. The astrophysicist Adrian Melott of the University of Kansas, in a fight with zealots who wanted equal time for creationism in the Kansas public schools, founded an organization called FLAT (Families for Learning Accurate Theories). His society parodied creationists by demanding equal time for flat earth geography, arguing that children should be exposed to both sides of the controversy over the shape of the earth.

But if the direct conflict between scientific knowledge and specific religious beliefs has not been so important in itself, there are at least four sources of tension between science and religion that have been important.

The first source of tension arises from the fact that religion originally gained much of its strength from the observation of mysterious phenomena—thunder, earthquakes, disease—that seemed to require the intervention of some divine being. There was a nymph in every brook, and a dryad in every tree. But as time has passed, more and more of these mysteries have been explained in purely natural ways. Explaining this or that about the natural world does not of course rule out religious belief. But if people believe in God because no other explanation seems possible for a whole host of mysteries, and over the years these mysteries are one by one resolved naturalistically, then a certain weakening of belief can be expected. It is no accident that the advent of widespread atheism and agnosticism among the educated in the eighteenth century followed hard upon the birth of modern science in the previous century.

From the beginning, the explanatory power of science worried those who valued religion. Plato was so horrified at the attempt of Democritus and Leucippus to explain nature in terms of atoms without reference to the gods (even though they did not get very far with this) that in Book Ten of the Laws he urged five years of solitary confinement for those who deny that the gods exist or that they care about humans, with death to follow if the prisoner is not reformed. Isaac Newton, offended by the naturalism of Descartes, also rejected the idea that the world could be explained without God. He argued for instance in a letter to Richard Bentley that no explanation but God could be given for the distinction we observe between bright matter, the sun and stars, and dark matter, like the earth. This is ironic, because of course it was Newton and not Descartes who was right about the laws of motion. No one did more than Newton to make it possible to work out thoroughly nontheistic explanations of what we see in the sky, but Newton himself was not in this sense a Newtonian.

Of course, not everything has been explained, nor will it ever be. The important thing is that we have not observed anything that seems to require supernatural intervention for its explanation. There are some today who cling to the remaining gaps in our understanding (such as our ignorance about the origin of life) as evidence for God. But as time passes and more and more of these gaps are filled in, their position gives an impression of people desperately holding on to outmoded opinions.

The problem for religious belief is not just that science has explained a lot of odds and ends about the world. There is a second source of tension: that these explanations have cast increasing doubt on the special role of man, as an actor created by God to play a starring part in a great cosmic drama of sin and salvation. We have had to accept that our home, the earth, is just another planet circling the sun; our sun is just one of a hundred billion stars in a galaxy that is just one of billions of visible galaxies; and it may be that the whole expanding cloud of galaxies is just a small part of a much larger multiverse, most of whose parts are utterly inhospitable to life. As Richard Feynman has said, “The theory that it’s all arranged as a stage for God to watch man’s struggle for good and evil seems inadequate.”

Most important so far has been the discovery by Charles Darwin and Alfred Russel Wallace that humans arose from earlier animals through natural selection acting on random heritable variations, with no need for a divine plan to explain the advent of humanity. This discovery led some, including Darwin, to lose their faith. It’s not surprising that of all the discoveries of science, this is the one that continues most to disturb religious conservatives. I can imagine how disturbed they will feel in the future, when at last scientists learn how to understand human behavior in terms of the chemistry and physics of the brain, and nothing is left that needs to be explained by our having an immaterial soul.

Note that I refer here to behavior, not consciousness. Something purely subjective, like how we feel when we see the color red or discover a physical theory, seems so different from the objective world described by science that it is difficult to see how they can ever come together. As Colin McGinn has said in these pages:

The problem is how to integrate the conscious mind with the physical brain—how to reveal a unity beneath this apparent diversity. That problem is very hard, and I do not believe anyone has any good ideas about how to solve it.[2]

On the other hand, both brain activity and behavior (including what we say about our feelings) are in the same world of objective phenomena, and I know of no intrinsic obstacle to their being integrated in a scientific theory, though it is clearly not going to be easy. This does not mean that we can or should forget about consciousness, and, like B.F. Skinner with his pigeons, concern ourselves only with behavior. We know, as well as we know anything, that our behavior is partly governed by our consciousness, so understanding behavior will necessarily require working out a detailed correspondence between the objective and subjective. This may not tell us how one arises from the other, but at least it will confirm that there is nothing supernatural about the mind.

Some nonscientists seize on certain developments in modern physics that suggest the unpredictability of natural phenomena, such as the advent of quantum mechanics or chaos theory, as signs of a turn away from determinism, of the sort that would make an opening for divine intervention or an incorporeal soul. These theories have forced us to refine our view of determinism, but not I think in any way that has implications for human life.

A third source of tension between science and religious belief has been more important in Islam than in Christianity. Around 1100, the Sufi philosopher Abu Hamid al-Ghazzali argued against the very idea of laws of nature, on the grounds that any such law would put God’s hands in chains. According to al-Ghazzali, a piece of cotton placed in a flame does not darken and smolder because of the heat of the flame, but because God wants it to darken and smolder. Laws of nature could have been reconciled with Islam, as a summary of what God usually wants to happen, but al-Ghazzali did not take that path.

Al-Ghazzali is often described as the most influential Islamic philosopher. I wish I knew enough to judge how great was the impact on Islam of his rejection of science. At any rate, science in Muslim countries, which had led the world in the ninth and tenth centuries, went into a decline in the century or two after al-Ghazzali. As a portent of this decline, in 1194 the Ulama of Córdoba burned all scientific and medical texts.

Nor has science revived in the Islamic world. There are talented scientists who have come to the West from Islamic countries and do work of great value here, among them the Pakistani physicist Mohammad Abdus Salam, who in 1979 became the first Muslim scientist to be awarded a Nobel Prize, for work he did in England and Italy. But in the past forty years I have not seen any paper in the areas of physics or astronomy that I follow that was written in an Islamic country and was worth reading. Thousands of scientific papers are turned out in these countries, and perhaps I missed something. Still, in 2002 the periodical Nature carried out a survey of science in Islamic countries, and found just three areas in which the Islamic world produced excellent science, all three directed toward applications rather than basic science. They were desalination, falconry, and camel breeding.

Something like al-Ghazzali’s concern for God’s freedom surfaced for a while in Christian Europe, but with very different results. In Paris and Canterbury in the thirteenth century there was a wave of condemnations of those teachings of Aristotle that seemed to limit the freedom of God to do things like create a vacuum or make several worlds or move the heavens in straight lines. The influence of Thomas Aquinas and Albertus Magnus saved the philosophy of Aristotle for Europe, and with it the idea of laws of nature. But although Aristotle was no longer condemned, his authority had been questioned—which was fortunate, since nothing could be built on his physics. Perhaps it was the weakening of Aristotle’s authority by reactionary churchmen that opened the door to the first small steps toward finding the true laws of nature at Paris and Lisieux and Oxford in the fourteenth century.

There is a fourth source of tension between science and religion that may be the most important of all. Traditional religions generally rely on authority, whether the authority is an infallible leader, such as a prophet or a pope or an imam, or a body of sacred writings, a Bible or a Koran. Perhaps Galileo did not get into trouble solely because he was expressing views contrary to scripture, but because he was doing so independently, rather than as a theologian acting within the Church.

Of course, scientists rely on authorities, but of a very different sort. If I want to understand some fine point about the general theory of relativity, I might look up a recent paper by an expert in the field. But I would know that the expert might be wrong. One thing I probably would not do is to look up the original papers of Einstein, because today any good graduate student understands general relativity better than Einstein did. We progress. Indeed, in the form in which Einstein described his theory it is today generally regarded as only what is known in the trade as an effective field theory; that is, it is an approximation, valid for the large scales of distance for which it has been tested, but not under very cramped conditions, as in the early big bang.

We have our heroes in science, like Einstein, who was certainly the greatest physicist of the past century, but for us they are not infallible prophets. For those who in everyday life respect independence of mind and openness to contradiction, traits that Emerson admired—especially when it came to religion—the example of science casts an unfavorable light on the deference to authority of traditional religion. The world can always use heroes, but could do with fewer prophets.

The weakening of religious belief is obvious in Western Europe, but it may seem odd to talk about this happening in America. No one who expressed doubt about the existence of God could possibly be elected president of the United States. Nevertheless, though I don’t have any scientific evidence on this point, on the basis of personal observation it seems to me that while many Americans fervently believe that religion is a good thing, and get quite angry when it is criticized, even those who feel this way often do not have much in the way of clear religious belief. Occasionally I have found myself talking with friends, who identify themselves with some organized religion, about what they think of life after death, or of the nature of God, or of sin. Most often I’ve been told that they do not know, and that the important thing is not what you believe, but how you live. I’ve heard this even from a Catholic priest. I applaud the sentiment, but it’s quite a retreat from religious belief.

Though I can’t prove it, I suspect that when Americans are asked in polls whether they believe in God or angels or heaven or hell they feel that it is a religious duty to say that they do, whatever they actually believe. And of course hardly anyone today in the West seems to have even the slightest interest in the great controversies—Arians vs. Athanasians, monophysites vs. monothelites, justification by faith or by works—that used to be taken so seriously that they set Christians at each other’s throats.

I have been emphasizing religious belief here, the belief in facts about God or the afterlife, though I am well aware that this is only one aspect of the religious life, and for many not the most important part. Perhaps I emphasize belief because as a physicist I am professionally concerned with finding out what is true, not what makes us happy or good. For many people, the important thing about their religion is not a set of beliefs but a host of other things: a set of moral principles; rules about sexual behavior, diet, observance of holy days, and so on; rituals of marriage and mourning; and the comfort of affiliation with fellow believers, which in extreme cases allows the pleasure of killing those who have different religious affiliations.

For some there is also a sort of spirituality that Emerson wrote about, and which I don’t understand, often described as a sense of union with nature or with all humanity, that doesn’t involve any specific beliefs about the supernatural. Spirituality is central to Buddhism, which does not call for belief in God. Even so, Buddhism has historically relied on belief in the supernatural, specifically in reincarnation. It is the desire to escape the wheel of rebirth that drives the search for enlightenment. The heroes of Buddhism are the bodhisattvas, who, having attained enlightenment, nevertheless return to life in order to show the way to a world shrouded in darkness. Perhaps in Buddhism too there has been a decline of belief. A recent book by the Dalai Lama barely mentions reincarnation, and Buddhism is now in decline in Japan, the Asian nation that has made the greatest progress in science.

The various uses of religion may keep it going for a few centuries even after the disappearance of belief in anything supernatural, but I wonder how long religion can last without a core of belief in the supernatural, when it isn’t about anything external to human beings. To compare great things with small, people may go to college football games mostly because they enjoy the cheerleading and marching bands, but I doubt if they would keep going to the stadium on Saturday afternoons if the only things happening there were cheerleading and marching bands, without any actual football, so that the cheerleading and the band music were no longer about anything.

2.

It is not my purpose here to argue that the decline of religious belief is a good thing (although I think it is), or to try to talk anyone out of their religion, as eloquent recent books by Richard Dawkins, Sam Harris, and Christopher Hitchens have. So far in my life, in arguing for spending more money on scientific research and higher education, or against spending on ballistic missile defense or sending people to Mars, I think I have achieved a perfect record of never having changed anyone’s mind. Rather, I want just to offer a few opinions, on the basis of no expertise whatever, for those who have already lost their religious beliefs, or who may be losing them, or fear that they will lose their beliefs, about how it is possible to live without God.

First, a warning: we had better beware of substitutes. It has often been noted that the greatest horrors of the twentieth century were perpetrated by regimes—Hitler’s Germany, Stalin’s Russia, Mao’s China—that while rejecting some or all of the teachings of religion, copied characteristics of religion at its worst: infallible leaders, sacred writings, mass rituals, the execution of apostates, and a sense of community that justified exterminating those outside the community.

When I was an undergraduate I knew a rabbi, Will Herberg, who worried about my lack of religious faith. He warned me that we must worship God, because otherwise we would start worshiping each other. He was right about the danger, but I would suggest a different cure: we should get out of the habit of worshiping anything.

I’m not going to say that it’s easy to live without God, that science is all you need. For a physicist, it is indeed a great joy to learn how we can use beautiful mathematics to understand the real world. We struggle to understand nature, building a great chain of research institutes, from the Museum of Alexandria and the House of Wisdom of Baghdad to today’s CERN and Fermilab. But we know that we will never get to the bottom of things, because whatever theory unifies all observed particles and forces, we will never know why it is that that theory describes the real world and not some other theory.

Worse, the worldview of science is rather chilling. Not only do we not find any point to life laid out for us in nature, no objective basis for our moral principles, no correspondence between what we think is the moral law and the laws of nature, of the sort imagined by philosophers from Anaximander and Plato to Emerson. We even learn that the emotions that we most treasure, our love for our wives and husbands and children, are made possible by chemical processes in our brains that are what they are as a result of natural selection acting on chance mutations over millions of years. And yet we must not sink into nihilism or stifle our emotions. At our best we live on a knife-edge, between wishful thinking on one hand and, on the other, despair.

What, then, can we do? One thing that helps is humor, a quality not abundant in Emerson. Just as we laugh with sympathy but not scorn when we see a one-year-old struggling to stay erect when she takes her first steps, we can feel a sympathetic merriment at ourselves, trying to live balanced on a knife-edge. In some of Shakespeare’s greatest tragedies, just when the action is about to reach an unbearable climax, the tragic heroes are confronted with some “rude mechanical” offering comic observations: a gravedigger, or a doorkeeper, or a pair of gardeners, or a man with a basket of figs. The tragedy is not lessened, but the humor puts it in perspective.

Then there are the ordinary pleasures of life, which have been despised by religious zealots, from Christian anchorites in the Egyptian deserts to today’s Taliban and Mahdi Army. Visiting New England in early June, when the rhododendrons and azaleas are blazing away, reminds one how beautiful spring can be. And let’s not dismiss the pleasures of the flesh. We who are not zealots can rejoice that when bread and wine are no longer sacraments, they will still be bread and wine.

There are also the pleasures brought to us by the high arts. Here I think we are going to lose something with the decline of religious belief. Much great art has arisen in the past from religious inspiration. For instance, I can’t imagine the poetry of George Herbert or Henry Vaughan or Gerard Manley Hopkins being written without sincere religious belief. But nothing prevents those of us who have no religious belief from enjoying religious poetry, any more than not being English prevents Americans from enjoying the patriotic speeches in Richard II or Henry V.

We may be sad that no more great religious poetry will be written in the future. We see already that little English-language poetry written in the past few decades owes anything to belief in God, and in some cases where religion does enter, as with poets like Stevie Smith or Philip Larkin, it is the rejection of religion that provides their inspiration. But of course very great poetry can be written without religion. Shakespeare provides an example; none of his work seems to me to show the slightest hint of serious religious inspiration. Given Ariel and Prospero, we see that poets can do without angels and prophets.

I do not think we have to worry that giving up religion will lead to a moral decline. There are plenty of people without religious faith who live exemplary moral lives (as for example, me), and though religion has sometimes inspired admirable ethical standards, it has also often fostered the most hideous crimes. Anyway, belief in an omnipotent omniscient creator of the world does not in itself have any moral implications—it’s still up to you to decide whether it is right to obey His commands. For instance, even someone who believes in God can feel that Abraham in the Old Testament was wrong to obey God in agreeing to sacrifice Isaac, and that Adam in Paradise Lost was right to disobey God and follow Eve in eating the apple, so that he could stay with her when she was driven from Eden. The young men who flew airplanes into buildings in the US or exploded bombs in crowds in London or Madrid or Tel Aviv were not just stupid in imagining that these were God’s commands; even thinking that these were His commands, they were evil in obeying them.

The more we reflect on the pleasures of life, the more we miss the greatest consolation that used to be provided by religious belief: the promise that our lives will continue after death, and that in the afterlife we will meet the people we have loved. As religious belief weakens, more and more of us know that after death there is nothing. This is the thing that makes cowards of us all.

Cicero offered comfort in De Senectute by arguing that it was silly to fear death. After more than two thousand years his words still have not the slightest power to console us. Philip Larkin was much more convincing about the fear of death:

This is a special way of being afraid
No trick dispels. Religion used to try,
That vast moth-eaten musical brocade
Created to pretend we never die,
And specious stuff that says No rational being
Can fear a thing it will not feel, not seeing
That this is what we fear—no sight, no sound,
No touch or taste or smell, nothing to think with,
Nothing to love or link with,
The anaesthetic from which none come round.

Living without God isn’t easy. But its very difficulty offers one other consolation—that there is a certain honor, or perhaps just a grim satisfaction, in facing up to our condition without despair and without wishful thinking—with good humor, but without God.

The New York Review of Books
Volume 55, Number 14 · September 25, 2008

September 25, 2013 Science and Big Data

Click here for a pdf version.

To Know, but Not Understand: David Weinberger on Science and Big Data

by David Weinberger

Thomas Jefferson and George Washington recorded daily weather observations, but they didn’t record them hourly or by the minute. Not only did they have other things to do; such data didn’t seem useful. Even after the invention of the telegraph enabled the centralization of weather data, the 150 volunteers who received weather instruments from the Smithsonian Institution in 1849 still reported only once a day. Now there is a literally immeasurable, continuous stream of climate data from satellites circling the earth, buoys bobbing in the ocean, and Wi-Fi-enabled sensors in the rain forest. We are measuring temperatures, rainfall, wind speeds, CO2 levels, and pressure pulses of solar wind. All this data and much, much more became worth recording once we could record it, once we could process it with computers, and once we could connect the data streams and the data processors with a network.

This would not be the first time. For example, when Sir Francis Bacon said that knowledge of the world should be grounded in carefully verified facts about the world, he wasn’t just giving us a new method to achieve old-fashioned knowledge. He was redefining knowledge as theories that are grounded in facts. The Age of the Net is bringing about a redefinition at the same scale. Scientific knowledge is taking on properties of its new medium, becoming like the network in which it lives.

In this excerpt from my new book, Too Big To Know, we’ll look at a key property of the networking of knowledge: hugeness.

In 1963, Bernard K. Forscher of the Mayo Clinic complained in a now famous letter printed in the prestigious journal Science that scientists were generating too many facts. Titled Chaos in the Brickyard, the letter warned that the new generation of scientists was too busy churning out bricks — facts — without regard to how they go together. Brickmaking, Forscher feared, had become an end in itself. “And so it happened that the land became flooded with bricks. … It became difficult to find the proper bricks for a task because one had to hunt among so many. … It became difficult to complete a useful edifice because, as soon as the foundations were discernible, they were buried under an avalanche of random bricks.”

If science looked like a chaotic brickyard in 1963, Dr. Forscher would have sat down and wailed if he were shown the Global Biodiversity Information Facility at GBIF.org. Over the past few years, GBIF has collected thousands of collections of fact-bricks about the distribution of life over our planet, from the bacteria collection of the Polish National Institute of Public Health to the Weddell Seal Census of the Vestfold Hills of Antarctica. GBIF.org is designed to be just the sort of brickyard Dr. Forscher deplored — information presented without hypothesis, theory, or edifice — except far larger because the good doctor could not have foreseen the networking of brickyards.

How will we ever make sense of scientific topics that are too big to know? The short answer: by transforming what it means to know something scientifically.

Indeed, networked fact-based brickyards are a growth industry. For example, at ProteomeCommons.org you’ll find information about the proteins specific to various organisms. An independent project by a grad student, Proteome Commons makes available almost 13 million data files, for a total of 12.6 terabytes of information. The data come from scientists from around the world, and are made available to everyone, for free. The Sloan Digital Sky Survey — under the modest tag line Mapping the Universe — has been gathering and releasing maps of the skies gathered from 25 institutions around the world. Its initial survey, completed in 2008 after eight years of work, published information about 230 million celestial objects, including 930,000 galaxies; each galaxy contains millions of stars, so this brickyard may grow to a size where we have trouble naming the number. The best known of the new data brickyards, the Human Genome Project, in 2001 completed mapping the entire genetic blueprint of the human species; it has been surpassed in terms of quantity by the International Nucleotide Sequence Database Collaboration, which as of May 2009 had gathered 250 billion pieces of genetic data.

There are three basic reasons scientific data has increased to the point that the brickyard metaphor now looks 19th century. First, the economics of deletion have changed. We used to throw out most of the photos we took with our pathetic old film cameras because, even though they were far more expensive to create than today’s digital images, photo albums were expensive, took up space, and required us to invest considerable time in deciding which photos would make the cut. Now, it’s often less expensive to store them all on our hard drive (or at some website) than it is to weed through them.

Second, the economics of sharing have changed. The Library of Congress has tens of millions of items in storage because physics makes it hard to display and preserve, much less to share, physical objects. The Internet makes it far easier to share what’s in our digital basements. When the datasets are so large that they become unwieldy even for the Internet, innovators are spurred to invent new forms of sharing. For example, Tranche, the system behind ProteomeCommons, created its own technical protocol for sharing terabytes of data over the Net, so that a single source isn’t responsible for pumping out all the information; the process of sharing is itself shared across the network. And the new Linked Data format makes it easier than ever to package data into small chunks that can be found and reused. The ability to access and share over the Net further enhances the new economics of deletion; data that otherwise would not have been worth storing have new potential value because people can find and share them.

Third, computers have become exponentially smarter. John Wilbanks, vice president for Science at Creative Commons (formerly called Science Commons), notes that “[i]t used to take a year to map a gene. Now you can do thirty thousand on your desktop computer in a day. A $2,000 machine — a microarray — now lets you look at the human genome reacting over time.” Within days of the first human being diagnosed with the H1N1 swine flu virus, the H1 sequence of 1,699 bases had been analyzed and submitted to a global repository. The processing power available even on desktops adds yet more potential value to the data being stored and shared.

The brickyard has grown to galactic size, but the news gets even worse for Dr. Forscher. It’s not simply that there are too many brickfacts and not enough edifice-theories. Rather, the creation of data galaxies has led us to science that sometimes is too rich and complex for reduction into theories. As science has gotten too big to know, we’ve adopted different ideas about what it means to know at all.

For example, the biological system of an organism is complex beyond imagining. Even the simplest element of life, a cell, is itself a system. A new science called systems biology studies the ways in which external stimuli send signals across the cell membrane. Some stimuli provoke relatively simple responses, but others cause cascades of reactions. These signals cannot be understood in isolation from one another. The overall picture of interactions even of a single cell is more than a human being made out of those cells can understand. In 2002, when Hiroaki Kitano wrote a cover story on systems biology for Science magazine — a formal recognition of the growing importance of this young field — he said: “The major reason it is gaining renewed interest today is that progress in molecular biology … enables us to collect comprehensive datasets on system performance and gain information on the underlying molecules.” Of course, the only reason we’re able to collect comprehensive datasets is that computers have gotten so big and powerful. Systems biology simply was not possible in the Age of Books.

The result of having access to all this data is a new science that is able to study not just “the characteristics of isolated parts of a cell or organism” (to quote Kitano) but properties that don’t show up at the parts level. For example, one of the most remarkable characteristics of living organisms is that we’re robust — our bodies bounce back time and time again, until, of course, they don’t. Robustness is a property of a system, not of its individual elements, some of which may be nonrobust and, like ants protecting their queen, may “sacrifice themselves” so that the system overall can survive. In fact, life itself is a property of a system.

The problem — or at least the change — is that we humans cannot understand systems even as complex as that of a simple cell. It’s not that we’re awaiting some elegant theory that will snap all the details into place. The theory is well established already: Cellular systems consist of a set of detailed interactions that can be thought of as signals and responses. But those interactions surpass in quantity and complexity the human brain’s ability to comprehend them. The science of such systems requires computers to store all the details and to see how they interact. Systems biologists build computer models that replicate in software what happens when the millions of pieces interact. It’s a bit like predicting the weather, but with far more dependency on particular events and fewer general principles.
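To make “building a computer model” concrete, here is a deliberately tiny sketch in Python: a hypothetical two-step signaling cascade in which an external stimulus activates a kinase, which in turn activates a response protein. The species, equations, and rate constants are invented for illustration; a real systems-biology model couples thousands of such equations, which is exactly why it has to be run rather than solved by inspection.

```python
# A minimal sketch (not from the excerpt) of the kind of model systems biologists
# build: a hypothetical two-step signaling cascade, where a stimulus S activates
# a kinase K, which in turn activates a response protein R. All names and rate
# constants here are invented for illustration.

def simulate(stimulus=1.0, dt=0.01, t_end=50.0):
    """Integrate the toy cascade with Euler's method and return its trajectory."""
    K, R = 0.0, 0.0                  # active fractions of kinase and response protein
    k_act, k_deact = 0.5, 0.2        # hypothetical activation/deactivation rates
    history = []
    t = 0.0
    while t < t_end:
        # simple mass-action kinetics: activation of the inactive fraction,
        # deactivation of the active fraction
        dK = k_act * stimulus * (1 - K) - k_deact * K
        dR = k_act * K * (1 - R) - k_deact * R
        K += dK * dt
        R += dR * dt
        history.append((t, K, R))
        t += dt
    return history

if __name__ == "__main__":
    final_t, final_K, final_R = simulate()[-1]
    print(f"after {final_t:.1f} time units: K={final_K:.3f}, R={final_R:.3f}")
```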

Models this complex — whether of cellular biology, the weather, the economy, even highway traffic — often fail us, because the world is more complex than our models can capture. But sometimes they can predict accurately how the system will behave. At their most complex these are sciences of emergence and complexity, studying properties of systems that cannot be seen by looking only at the parts, and cannot be well predicted except by looking at what happens.

This marks quite a turn in science’s path. For Sir Francis Bacon 400 years ago, for Darwin 150 years ago, for Bernard Forscher 50 years ago, the aim of science was to construct theories that are both supported by and explain the facts. Facts are about particular things, whereas knowledge (it was thought) should be of universals. Every advance of knowledge of universals brought us closer to fulfilling the destiny our Creator set for us.

This strategy also had a practical side, of course. There are many fewer universals than particulars, and you can often figure out the particulars if you know the universals: If you know the universal theorems that explain the orbits of planets, you can figure out where Mars will be in the sky on any particular day on Earth. Aiming at universals is a simplifying tactic within our broader traditional strategy for dealing with a world that is too big to know by reducing knowledge to what our brains and our technology enable us to deal with.
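To make the Mars example concrete, here is the standard textbook relation involved (Kepler’s third law, which the excerpt does not quote), with illustrative numbers:

```latex
% Kepler's third law, with the period T in years and the semi-major axis a in
% astronomical units (standard result, not taken from the excerpt):
\[
  T^{2} = a^{3}
\]
% Worked example for Mars, whose semi-major axis is about a = 1.524 AU:
\[
  T = a^{3/2} \approx 1.524^{1.5} \approx 1.88 \ \text{years} \approx 687 \ \text{days}
\]
% Combined with the shape of the orbit itself, one universal relation like this
% fixes particular facts, such as where Mars will appear on a given night.
```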

We therefore stared at tables of numbers until their simple patterns became obvious to us. Johannes Kepler examined the star charts carefully constructed by his boss, Tycho Brahe, until he realized in 1605 that if the planets orbit the Sun in ellipses rather than perfect circles, it all makes simple sense. Three hundred fifty years later, James Watson and Francis Crick stared at x-rays of DNA until they realized that if the molecule were a double helix, the data about the distances among its atoms made simple sense. With these discoveries, the data went from being confoundingly random to revealing an order that we understand: Oh, the orbits are elliptical! Oh, the molecule is a double helix!

With the new database-based science, there is often no moment when the complex becomes simple enough for us to understand it. The model does not reduce to an equation that lets us then throw away the model. You have to run the simulation to see what emerges. For example, a computer model of the movement of people within a confined space who are fleeing from a threat–they are in a panic–shows that putting a column about one meter in front of an exit door, slightly to either side, actually increases the flow of people out the door. Why? There may be a theory or it may simply be an emergent property. We can climb the ladder of complexity from party games to humans with the single intent of getting outside of a burning building, to phenomena with many more people with much more diverse and changing motivations, such as markets. We can model these and perhaps know how they work without understanding them. They are so complex that only our artificial brains can manage the amount of data and the number of interactions involved.
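As a purely illustrative sketch of what “running the simulation to see what emerges” means, the toy Python model below puts agents on a grid and lets them crowd toward a single exit, reporting how long evacuation takes. Every detail (grid size, movement rule, parameters) is invented here and is far simpler than the published panic models; reproducing the column-near-the-door effect would require extending it with obstacles and a pushing rule.

```python
import random

# Toy evacuation sketch, invented for illustration: agents on a grid all head for
# a single exit; congestion at the door is an emergent effect you only see by
# running the simulation.

WIDTH, HEIGHT = 20, 10            # room size in grid cells; interior x runs 1..WIDTH
DOOR = (0, HEIGHT // 2)           # the one exit cell, set into the left wall

def evacuate(n_agents=60, max_steps=10_000, seed=0):
    """Return the number of steps until every agent has reached the door."""
    rng = random.Random(seed)
    interior = [(x, y) for x in range(1, WIDTH + 1) for y in range(HEIGHT)]
    agents = set(rng.sample(interior, n_agents))
    for step in range(1, max_steps + 1):
        # agents closest to the door move first; ties broken at random
        for pos in sorted(agents, key=lambda p: (abs(p[0] - DOOR[0]) + abs(p[1] - DOOR[1]), rng.random())):
            x, y = pos
            moves = []
            if (x - 1, y) == DOOR or x - 1 >= 1:
                moves.append((x - 1, y))                            # step toward the door wall
            if y != DOOR[1]:
                moves.append((x, y + (1 if y < DOOR[1] else -1)))   # drift toward the door's row
            rng.shuffle(moves)
            for nxt in moves:
                if nxt == DOOR:                                     # reaching the door = leaving the room
                    agents.discard(pos)
                    break
                if nxt not in agents:                               # move only into an empty cell
                    agents.discard(pos)
                    agents.add(nxt)
                    break
        if not agents:
            return step
    return max_steps

if __name__ == "__main__":
    print("toy evacuation finished in", evacuate(), "steps")
```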

The same holds true for models of purely physical interactions, whether they’re of cells, weather patterns, or dust motes. For example, Hod Lipson and Michael Schmidt at Cornell University designed the Eureqa computer program to find equations that make sense of large quantities of data that have stumped mere humans, including cellular signaling and the effect of cocaine on white blood cells. Eureqa looks for possible equations that explain the relation of some likely pieces of data, and then tweaks and tests those equations to see if the results more accurately fit the data. It keeps iterating until it has an equation that works.
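Eureqa itself is a sophisticated genetic-programming system, but the propose-test-refine loop described above can be sketched in a few lines of Python. In the toy version below, the data, the small set of candidate formula shapes, and the random parameter search are all invented for illustration; the point is only the shape of the loop: propose an equation, score it against the data, keep the best one found so far.

```python
import math
import random

# Hypothetical measurements, secretly generated by y = 3*x^2 + 2.
data = [(x, 3 * x**2 + 2) for x in range(-5, 6)]

# A tiny, invented space of candidate formula shapes; (a, b) are free parameters.
CANDIDATES = {
    "a*x + b":     lambda x, a, b: a * x + b,
    "a*x^2 + b":   lambda x, a, b: a * x**2 + b,
    "a*sin(x) + b": lambda x, a, b: a * math.sin(x) + b,
}

def error(f, a, b):
    """Mean squared error of a candidate formula against the data."""
    return sum((f(x, a, b) - y) ** 2 for x, y in data) / len(data)

def search(trials=20_000, seed=1):
    """Random search over formula shapes and parameters; keep the best fit found."""
    rng = random.Random(seed)
    best = (float("inf"), None, None, None)
    for _ in range(trials):
        name = rng.choice(list(CANDIDATES))
        a, b = rng.uniform(-5, 5), rng.uniform(-5, 5)
        e = error(CANDIDATES[name], a, b)
        if e < best[0]:
            best = (e, name, a, b)
    return best

if __name__ == "__main__":
    e, name, a, b = search()
    print(f"best formula: {name} with a={a:.2f}, b={b:.2f} (mse={e:.3f})")
```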

Dr. Gurol Suel at the University of Texas Southwestern Medical Center used Eureqa to try to figure out what causes fluctuations among all of the thousands of different elements of a single bacterium. After chewing over the brickyard of data that Suel had given it, Eureqa came out with two equations that expressed constants within the cell. Suel had his answer. He just doesn’t understand it and doesn’t think any person could. It’s a bit as if Einstein dreamed e = mc², and we confirmed that it worked, but no one could figure out what the c stands for.

No one says that having an answer that humans cannot understand is very satisfying. We want Eureka and not just Eureqa. In some instances we’ll undoubtedly come to understand the oracular equations our software produces. On the other hand, one of the scientists using Eureqa, biophysicist John Wikswo, told a reporter for Wired: “Biology is complicated beyond belief, too complicated for people to comprehend the solutions to its complexity. And the solution to this problem is the Eureqa project.” The world’s complexity may simply outrun our brains’ capacity to understand it.

Model-based knowing has many well-documented difficulties, especially when we are attempting to predict real-world events subject to the vagaries of history; a Cretaceous-era model of that era’s ecology would not have included the arrival of a giant asteroid in its data, and no one expects a black swan. Nevertheless, models can have the predictive power demanded of scientific hypotheses. We have a new form of knowing.

This new knowledge requires not just giant computers but a network to connect them, to feed them, and to make their work accessible. It exists at the network level, not in the heads of individual human beings.

September 11, 2013 Critical Thought Under Attack

 Click here for a pdf version.

The United States Is Awash in Public Stupidity, and Critical Thought Is Under Assault

Henry A. Giroux July 22, 2013

America has become amnesiac – a country in which forms of historical, political, and moral forgetting are not only willfully practiced but celebrated. The United States has degenerated into a social order that is awash in public stupidity and views critical thought as both a liability and a threat. Not only is this obvious in the presence of a celebrity culture that embraces the banal and idiotic, but also in the prevailing discourses and policies of a range of politicians and anti-public intellectuals who believe that the legacy of the Enlightenment needs to be reversed. Politicians such as Michele Bachmann, Rick Santorum and Newt Gingrich, along with talking heads such as Bill O’Reilly, Glenn Beck and Ann Coulter, are not the problem; they are symptomatic of a much more disturbing assault on critical thought, if not rational thinking itself. Under a neoliberal regime, the language of authority, power and command is divorced from ethics, social responsibility, critical analysis and social costs.

These anti-public intellectuals are part of a disimagination machine that solidifies the power of the rich and the structures of the military-industrial-surveillance-academic complex by presenting the ideologies, institutions and relations of the powerful as commonsense. [1] For instance, the historical legacies of resistance to racism, militarism, privatization and panoptical surveillance have long been forgotten and made invisible in the current assumption that Americans now live in a democratic, post-racial society. The cheerleaders for neoliberalism work hard to normalize dominant institutions and relations of power through a vocabulary and public pedagogy that create market-driven subjects, modes of consciousness, and ways of understanding the world that promote accommodation, quietism and passivity. Social solidarities are torn apart, furthering the retreat into orbits of the private that undermine those spaces that nurture non-commodified knowledge, values, critical exchange and civic literacy. The pedagogy of authoritarianism is alive and well in the United States, and its repression of public memory takes place not only through the screen culture and institutional apparatuses of conformity, but is also reproduced through a culture of fear and a carceral state that imprisons more people than any other country in the world. [2] What many commentators have missed in the ongoing attack on Edward Snowden is not that he uncovered information that made clear how corrupt and intrusive the American government has become – how willing it is to engage in vast crimes against the American public. His real “crime” is that he demonstrated how knowledge can be used to empower people, to get them to think as critically engaged citizens rather than assume that knowledge and education are merely about the learning of skills – a reductive concept that substitutes training for education and reinforces the flight from reason and the goose-stepping reflexes of an authoritarian mindset. [3]

Since the late 1970s, there has been an intensification in the United States, Canada and Europe of neoliberal modes of governance, ideology and policies – a historical period in which the foundations for democratic public spheres have been dismantled. Schools, public radio, the media and other critical cultural apparatuses have been under siege, viewed as dangerous to a market-driven society that considers critical thought, dialogue, and civic engagement a threat to its basic values, ideologies, and structures of power. This was the beginning of an historical era in which the discourse of democracy, public values, and the common good came crashing to the ground. Margaret Thatcher in Britain and soon after Ronald Reagan in the United States – both hard-line advocates of market fundamentalism – announced that there was no such thing as society and that government was the problem, not the solution. Democracy and the political process were all but sacrificed to the power of corporations and the emerging financial service industries, just as hope was appropriated as an advertisement for the whitewashed world, a culture whose capacity to critique oppressive social practices was greatly diminished. Large social movements fragmented into isolated pockets of resistance mostly organized around a form of identity politics that largely ignored a much-needed conversation about the attack on the social and the broader issues affecting society such as the growing inequality in wealth, power and income.

August 28, 2013 Criminal Mind

 Click here for a pdf version.

The Criminal Mind

Adrian Raine

Advances in genetics and neuroscience are revolutionizing our understanding of violent behavior—as well as ideas about how to prevent and punish crime

April 26, 2013 (website); The Wall Street Journal, April 27–28, 2013, pp. C1–C2.

The scientific study of crime got its start on a cold, gray November morning in 1871, on the east coast of Italy. Cesare Lombroso, a psychiatrist and prison doctor at an asylum for the criminally insane, was performing a routine autopsy on an infamous Calabrian brigand named Giuseppe Villella. Lombroso found an unusual indentation at the base of Villella’s skull. From this singular observation, he would go on to become the founding father of modern criminology.

Lombroso’s controversial theory had two key points: that crime originated in large measure from deformities of the brain and that criminals were an evolutionary throwback to more primitive species. Criminals, he believed, could be identified on the basis of physical characteristics, such as a large jaw and a sloping forehead. Based on his measurements of such traits, Lombroso created an evolutionary hierarchy, with Northern Italians and Jews at the top and Southern Italians (like Villella), along with Bolivians and Peruvians, at the bottom.

These beliefs, based partly on pseudoscientific phrenological theories about the shape and size of the human head, flourished throughout Europe in the late 19th and early 20th centuries. Lombroso was Jewish and a celebrated intellectual in his day, but the theory he spawned turned out to be socially and scientifically disastrous, not least by encouraging early-20th-century ideas about which human beings were and were not fit to reproduce—or to live at all.

The racial side of Lombroso’s theory fell into justifiable disrepute after the horrors of World War II, but his emphasis on physiology and brain traits has proved to be prescient. Modern-day scientists have now developed a far more compelling argument for the genetic and neurological components of criminal behavior. They have uncovered, quite literally, the anatomy of violence, at a time when many of us are preoccupied by the persistence of violent outrages in our midst.

The field of neurocriminology—using neuroscience to understand and prevent crime—is revolutionizing our understanding of what drives “bad” behavior. More than 100 studies of twins and adopted children have confirmed that about half of the variance in aggressive and antisocial behavior can be attributed to genetics. Other research has begun to pinpoint which specific genes promote such behavior.

Brain-imaging techniques are identifying physical deformations and functional abnormalities that predispose some individuals to violence. In one recent study, brain scans correctly predicted which inmates in a New Mexico prison were most likely to commit another crime after release. Nor is the story exclusively genetic: A poor environment can change the early brain and make for antisocial behavior later in life.

Most people are still deeply uncomfortable with the implications of neurocriminology. Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. Both sides fear any seeming effort to erode the idea of human agency and free will.

It is growing harder and harder, however, to avoid the mounting evidence. With each passing year, neurocriminology is winning new adherents, researchers and practitioners who understand its potential to transform our approach to both crime prevention and criminal justice.

The genetic basis of criminal behavior is now well established. Numerous studies have found that identical twins, who have all of their genes in common, are much more similar to each other in terms of crime and aggression than are fraternal twins, who share only 50% of their genes.
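The arithmetic behind such twin comparisons can be made explicit with a standard textbook approximation (Falconer’s formula, which the article does not spell out); the numbers below are illustrative only:

```latex
% Falconer's approximation: heritability estimated from twin correlations.
% r_MZ = correlation between identical (monozygotic) twins,
% r_DZ = correlation between fraternal (dizygotic) twins.
\[
  h^{2} \approx 2\,\bigl(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}\bigr)
\]
% Illustrative numbers (not from the article): if identical twins correlate at
% r_MZ = 0.50 on an antisocial-behavior measure and fraternal twins at r_DZ = 0.25,
% then h^2 ~ 2(0.50 - 0.25) = 0.50, i.e. roughly half the variance attributable
% to genetics, consistent with the "about half" figure cited above.
```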

(Donta Page’s brain scan, left, shows the reduced functioning of the ventral prefrontal cortex—the area of the brain that helps regulate emotions and control impulses—compared to a normal brain, right.)

In a landmark 1984 study, my colleague Sarnoff Mednick found that children in Denmark who had been adopted from parents with a criminal record were more likely to become criminals in adulthood than were other adopted kids. The more offenses the biological parents had, the more likely it was that their offspring would be convicted of a crime. For biological parents who had no offenses, 13% of their sons had been convicted; for biological parents with three or more offenses, 25% of their sons had been convicted.

As for environmental factors that affect the young brain, lead is neurotoxic and particularly damages the prefrontal region, which regulates behavior. Measured lead levels in our bodies tend to peak at 21 months—an age when toddlers are apt to put their fingers into their mouths. Children generally pick up lead in soil that has been contaminated by air pollution and dumping.

Rising lead levels in the U.S. from 1950 through the 1970s neatly track increases in violence 20 years later, from the ’70s through the ’90s. (Violence peaks when individuals are in their late teens and early 20s.) As lead in the environment fell in the ’70s and ’80s—thanks in large part to the regulation of gasoline—violence fell correspondingly. No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.

Lead isn’t the only culprit. Other factors linked to higher aggression and violence in adulthood include smoking and drinking by the mother before birth, complications during birth and poor nutrition early in life.

Genetics and environment may work together to encourage violent behavior. One pioneering study in 2002 by Avshalom Caspi and Terrie Moffitt of Duke University genotyped over 1,000 individuals in a community in New Zealand and assessed their levels of antisocial behavior in adulthood. They found that a genotype conferring low levels of the enzyme monoamine oxidase A (MAOA), when combined with early child abuse, predisposed the individual to later antisocial behavior. Low MAOA has been linked to reduced volume in the amygdala—the emotional center of the brain—while physical child abuse can damage the frontal part of the brain, resulting in a double hit.

Brain-imaging studies have also documented impairments in offenders. Murderers, for instance, tend to have poorer functioning in the prefrontal cortex—the “guardian angel” that keeps the brakes on impulsive, disinhibited behavior and volatile emotions.

Of course, not everyone with a particular brain profile is a murderer—and not every offender fits the same mold. Those who plan their homicides, like serial killers, tend to have good prefrontal functioning. That makes sense, since they must be able to regulate their behavior carefully in order to escape detection for a long time.

So what explains coldblooded psychopathic behavior? About 1% of us are psychopaths—fearless antisocials who lack a conscience. In 2009, Yaling Yang, Robert Schug and I conducted structural brain scans on 27 psychopaths whom we had found in temporary-employment agencies in Los Angeles. All got high scores on the Psychopathy Checklist, the “gold standard” in the field, which assesses traits like lack of remorse, callousness and grandiosity. We found that, compared with 32 normal people in a control group, psychopaths had an 18% smaller amygdala, which is critical for emotions like fear and is part of the neural circuitry underlying moral decision-making. In subsequent research, Andrea Glenn and I found this same brain region to be significantly less active in psychopathic individuals when they contemplate moral issues. Psychopaths know at a cognitive level what is right and what is wrong, but they don’t feel it.

What are the practical implications of all this evidence for the physical, genetic and environmental roots of violent behavior? What changes should be made in the criminal-justice system?

Let’s start with two related questions: If early biological and genetic factors beyond the individual’s control make some people more likely to become violent offenders than others, are these individuals fully blameworthy? And if they are not, how should they be punished?

Take the case of Donta Page, who in 1999 robbed a young woman in Denver named Peyton Tuthill, then raped her, slit her throat and killed her by plunging a kitchen knife into her chest. Mr. Page was found guilty of first-degree murder and was a prime candidate for the death penalty.

Working as an expert witness for Mr. Page’s defense counsel, I brought him to a lab to assess his brain functioning. Scans revealed a distinct lack of activation in the ventral prefrontal cortex—the brain region that helps to regulate our emotions and control our impulses.

In testifying, I argued for a deep-rooted biosocial explanation for Mr. Page’s violence. As his files documented, as a child he suffered from poor nutrition, severe parental neglect, sustained physical and sexual abuse, early head injuries, learning disabilities, poor cognitive functioning and lead exposure. He also had a family history of mental illness. By the age of 18, Mr. Page had been referred for psychological treatment 19 times, but he had never once received treatment. A three-judge panel ultimately decided not to have him executed, accepting our argument that a mix of biological and social factors mitigated Mr. Page’s responsibility.

Mr. Page escaped the death penalty partly on the basis of brain pathology—a welcome result for those who believe that risk factors should partially exculpate socially disadvantaged offenders. But the neurocriminologist’s sword is double-edged. Neurocriminology also might have told us that Mr. Page should never have been on the street in the first place. At the time he committed the murder, he had been out of prison for only four months. Sentenced to 20 years for robbery, he was released after serving just four years.

What if I had been asked to assess him just before he was released? I would have said exactly what I said in court when defending him. All the biosocial boxes were checked: He was at heightened risk for committing violence for reasons beyond his control. It wasn’t exactly destiny, but he was much more likely to be impulsively violent than not.

This brings us to the second major change that may be wrought by neurocriminology: incorporating scientific evidence into decisions about which soon-to-be-released offenders are at the greatest risk for reoffending. Such risk assessment is currently based on factors like age, prior arrests and marital status. If we were to add biological and genetic information to the equation—along with recent statistical advances in forecasting—predictions about reoffending would become significantly more accurate.
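As a purely illustrative sketch of what “adding biological and genetic information to the equation” means statistically, the Python script below (synthetic data, invented effect sizes, assuming NumPy and scikit-learn are available) fits a logistic risk model on traditional factors alone and then again with a made-up brain-derived score added, and compares how well each ranks cases by risk (AUC).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Purely illustrative: synthetic data with invented effect sizes, showing the
# mechanics of adding a biological predictor to an actuarial risk model.
# Nothing here reflects real study data.

rng = np.random.default_rng(0)
n = 2_000

age = rng.uniform(18, 60, n)                 # traditional factor
prior_arrests = rng.poisson(2, n)            # traditional factor
brain_marker = rng.normal(0, 1, n)           # hypothetical imaging-derived score

# Synthetic "reoffended" outcome: younger age, more priors, and a lower
# brain_marker all raise the (made-up) risk.
logit = -1.0 - 0.04 * (age - 18) + 0.35 * prior_arrests - 0.8 * brain_marker
p = 1 / (1 + np.exp(-logit))
reoffended = rng.binomial(1, p)

X_trad = np.column_stack([age, prior_arrests])
X_full = np.column_stack([age, prior_arrests, brain_marker])

for label, X in [("traditional factors only", X_trad), ("plus brain marker", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, reoffended, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{label}: AUC = {auc:.3f}")
```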

In a 2013 study, Kent Kiehl of the University of New Mexico, looking at a population of 96 male offenders in the state’s prison system, found that in the four years after their release, those with low activity in the anterior cingulate cortex—a brain area involved in regulating behavior—were twice as likely to commit another offense as those who had high activity in this region. Research soon to be published by Dustin Pardini of the University of Pittsburgh shows that men with a smaller amygdala are three times more likely to commit violence three years later.

Of course, if we can assess criminals for their propensity to reoffend, we can in theory assess any individual in society for his or her criminal propensity—making it possible to get ahead of the problem by stopping crime before it starts. Ultimately, we should try to reach a point where it is possible to deal with repeated acts of violence as a clinical disorder.

Randomized, controlled trials have clearly documented the efficacy of a host of medications—including stimulants, antipsychotics, antidepressants and mood stabilizers—in treating aggression in children and adolescents. Parents are understandably reluctant to have their children medicated for bad behavior, but when all else fails, treating children to stabilize their uncontrollable aggressive acts and to make them more amenable to psychological interventions is an attractive option.

Treatment doesn’t have to be invasive. Randomized, controlled trials in England and the Netherlands have shown that a simple fix—omega-3 supplements in the diets of young offenders—reduces serious offending by about 35%. Studies have also found that early environmental enrichment—including better nutrition, physical exercise and cognitive stimulation—enhances later brain functioning in children and reduces adult crime.

Over the course of modern history, increasing scientific knowledge has given us deeper insights into epilepsy, psychosis and substance abuse, and has promoted a more humane perspective. Just as mental disorders were once viewed as a product of evil forces, the “evil” you see in violent offenders today may someday be reformulated as a symptom of a physiological disorder.

There is no question that neurocriminology puts us on difficult terrain, and some wish it didn’t exist at all. How do we know that the bad old days of eugenics are truly over? Isn’t research on the anatomy of violence a step toward a world where our fundamental human rights are lost?

We can avoid such dire outcomes. A more profound understanding of the early biological causes of violence can help us take a more empathetic, understanding and merciful approach toward both the victims of violence and the prisoners themselves. It would be a step forward in a process that should express the highest values of our civilization.

—Dr. Raine is the Richard Perry University Professor of Criminology, Psychiatry and Psychology at the University of Pennsylvania and author of “The Anatomy of Violence: The Biological Roots of Crime,” to be published on April 30 by Pantheon, a division of Random House.

A version of this article appeared April 27, 2013, on page C1 in the U.S. edition of The Wall Street Journal, with the headline: The Criminal Mind.

—–

Timothy Brown  << Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. >>
—–
If there were a Pulitzer Prize for sweeping generalizations, I think this passage might deserve the honor.

I, for one, am a conservative, and my worry is that liberals will make brain scans and genetic screening mandatory prerequisites to getting a permit to own a firearm.
Which would NOT prevent the criminally insane from actually owning or using guns.
But it would create a huge barrier for those who are neither sociopathic nor criminal: “So, you want to own a gun, do you? I’m from the government, and I think you might be crazy. Therefore, you need to have your head examined.”

Harold Richard  Hey Man, Ima Demo and I think the same as you on this. I also own a gun now and want to buy another firearm. 
Why don’t we just dream up some Pre-Cog computer software and install it on one of those Superduper Supercomputers. Then we can just lock up people at birth based on their genetics and brain shape. Oh wait, maybe we already have and haven’t been told about it yet.
Minority Report here we come

John Mcrae  Concepts of the impulsive violent psychotic, the impulsive violent mentally ill, and the impulsive violent evil person (possessed) have interested many but remained largely elusive for the past several centuries. In 1843, the M’Naghten Rule separated the insane from the criminal at common law, and it is still largely in vogue ... although probably not valid.
Responsibility after a violent act, whether the impulse be psychotic, criminal, or evil, requires protective custody for the safety of society. The social fetish of needing to assign criminal responsibility in accord with a rigorous protocol, while ignoring the dangerous mentally ill or confusing criminality with evil, needs updating ... as evidence mounts of multiple genetic, developmental, and imposed conditions triggering the impulse.
As the death penalty becomes increasingly challenged, so too should the effect of the Miranda warning and the petition for habeas corpus. Responsibility can be judged and protective custody imposed ... but then a prudent delay in sentencing, even of a few years, to ferret out and weigh all factors of uncertainty where the criminal code calls for prolonged incarceration or execution should be routine. A few offenders will soon be identified who can be treated ... at the discretion of the supervising court. The cost of such would be no greater than the current costs of exhaustive appeals of capital crimes, nor of lifetime imprisonment.
The difficulty is that no one can demonstrate the efficacy, the risk, nor the benefit of such until it is tried ... this being human behavior and unperfected, hypothetical science. In the 1950s, a small, almost anecdotal experience with early stereotactic ablative ‘psychosurgery’ showed promising results with far less recidivism. Then came the ‘liberated’ ’60s, doing away with two centuries of increasingly humane protective custody of the mentally ill as well as the career criminal. Today, sixty years later, technology has greatly intensified the power of such inquiry ... and the social consequences of the libertine ignoring of the problem are demanding answers. The facts are that there are psychotics, criminals, and evil persons.

check out http://healthland.time.com/2013/04/23/qa-criminologist-adrian-raine-on-the-marathon-bombs-the-biology-of-violence/

Jay Martin  “No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.”
I believe the authors of “Freakonomics” laid out a much more believable theory regarding the drop in crime starting in the ’90s: Roe v. Wade (1973).

Kevin Fisher   Assuming that all behavior is based on brain physics, it’s not a total surprise that technology has developed to the point that behavioral aberrations and variations can be “seen” with modern scanning methods.
That doesn’t free any of us from responsibility for what our brain is doing. Those with a criminally aberrant brain must be dealt with as criminals if (and only if) they commit a crime. 
However, it may suggest a certain amount of compassion toward criminals whose brain configuration was caused by factors outside of their control – whatever that means, with lots of interesting questions regarding free will. In the end, that doesn’t imply leniency in sentencing or other measures needed to protect the public.

Kevin Kilty   This was interesting to read, but when the author says that this knowledge puts us on difficult terrain, he is a master of understatement. In addition to the worry about eugenics, and the potential to justify permanent incarceration using narrow and not very robust measures, there is also the worry of its being used to deal with inconvenient differences of opinion. One does not have to search very hard to find Prof. Raine’s colleagues in academia referring to “conservative” or libertarian thinking as a pathology.

Kevin Kilty   “Randomized, controlled trials in England and the Netherlands have shown that a simple fix—omega-3 supplements in the diets of young offenders—reduces serious offending by about 35% ...” How many people were involved in the trials, have they been replicated, when were they done, and is the measure a risk measure or an odds measure? How am I to view such statements if they are so vague?

Paul S. Boyer  Well, removing revenge from the hands of the aggrieved is one of the functions of punishment. It serves to prevent continuous vendetta.
Revenge is one of the legitimate functions of punishment. It is also possible in some societies for the victim (or the family of the victim) to forgive the perpetrator, which they sometimes do (usually for a price). This turns simple revenge into forgiveness, or a fine. The point is that the case is then closed, and a vendetta avoided.
It used to be considered an advance in civilization to consider crime not just as a personal wrong to the victim, but an offense to “society.” Thus the criminal would pay a “debt to society.” This, again, would take revenge away from the aggrieved, and put it into “official” hands which are supposed to be more dispassionate, and to apply more uniform, accepted standards.
It is foolish Utopianism to imagine that the revenge motive can simply be dreamed away. Not even brain research can eliminate it, for it is in human nature. Indeed, it is in the nature of our closest primate relatives, as well as in many other, less closely related animals. There is a clear reason in evolution for revenge behavior: it acts to remove a proven threat.

Lisa Partridge   “No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.”
Weren’t the Grateful Dead active from the late 60s until the early to mid 90s?
Look, mom! I’m a neurocriminologist!

Ezra Blumenthal   Violent behavior is NOT criminal; in fact, the real criminals of the world are seldom violent. For example, when the Catholic Pope encouraged the forced conversion of Native Americans to Catholicism, the Pope (whoever it was) could hardly be considered ‘violent,’ although his decision resulted in much pain and death among Native Americans. Similarly, when the Americans dropped atomic bombs and killed millions of civilians in Japan, the American general was probably calm and calculating, as were the generals giving the order to drop napalm on Vietnamese people who had done nothing wrong against America.
If we consider financial crooks, like the Swiss bankers (UBS, for example) who help Americans and Germans cheat on taxes, we can really see the difference between ‘criminal’ and ‘violent.’ The average Swiss is probably well dressed and pretty civilized, although the Swiss are the biggest criminals in the world as they go around helping thugs and dictators hide their loot :)
Catching a genetically violent person may be useful, but it would be far more useful if we could catch white-collar criminals ... or scan the brain of a politician or a banker or a lawyer to tell us when he is lying! Well, some would say they are always lying ... but still, maybe we could tell when they’re really, really lying 🙂
