Saturday, June 24, 2017

Doubt by Jennifer Michael Hecht

A book I enjoyed a lot less than I thought I would. It’s called “a history” by its author, as in “a history of doubt,” and indeed it is, as it chronicles the principles and personalities of great doubters (i.e., those who doubt the existence of God or gods) through 2,600 years of history. An ambitious and worthy subject, but at times it felt like I was reading an encyclopedia.

Here are the random bits that seemed to jump out at me. Given the subject matter of the book, you can be sure that they take an unapologetic freethinking bent.

Christian Saints = Pagan Nature Gods

By the sixth century, Christians in the West had won over the cities, but the countryside was still a place of almost endless supernatural energies, and even city dwellers saw the natural world in this spirited way. The great God of the Christians was too far away for farmers, and the Son may have been human but he was not available for watering the fields or fending off locusts. In parts of Spain the practice of leaving little piles of votive candles near springs, in trees, and on hilltops and crossroads was still so rampant that as late as the 690s dramatic Church ceremonies were staged to transfer the candles to the local churches and announce that idolatry was finally dead. What actually worked was not sermons against the enchanted natural world, but rather the reenchantment of the world in Christian terms. Gregory of Tours (538-594) was most responsible for the reinterpretation of the Christian saints as capable of helping average people in their relationship with the natural world; through them springs and crossroads once again became sanctioned places for worship. The saints brought healing, mercy, and fertility to the small places of field and hearth, and brought safety on byroads and high seas. In myriad ways, water was holy again, and trees might spring up on the graves of saints.

Never thought of saints this way before, but it makes total sense. Similar to the way the early Christian church adopted pagan holidays as its own.

Begging the Question

After my long post on The Mind and the Brain, where I accuse the author of constantly begging the question in a similar way, this one really resonated with me.

[John] Locke did not agree with [Rene] Descartes, because Locke noticed that “I think, therefore I am” is a bit of a leap (as the Buddha might have happily pointed out); that “I think, therefore thinking happens” is pretty much all you can get.


More Christian Tormentors Than Christian Martyrs

There were not that many martyrs anyway, wrote [Edward] Gibbon, announcing “a melancholy truth which obtrudes itself on the reluctant mind,” that “even admitting” all the Christian martyrdom history has recorded, “or devotion has feigned … it must still be acknowledged that the Christians, in the course of their intestine dissensions, have inflicted far greater severities on each other than they had experienced from the zeal of infidels.” The number of Protestants “executed in a single province and a single reign far exceeded that of the primitive martyrs in the space of three centuries and of the Roman empire.”

An excellent point.

Jesus: Jefferson’s Philosopher, not Savior

Thomas Jefferson, author of the Declaration of Independence and third President of the United States, in a letter to his friend, William Short:

“That Jesus did not mean to impose himself on mankind as the son of God, physically speaking, I have been convinced by the writings of men more learned than myself in that lore. But that he might conscientiously believe himself inspired from above, is very possible. The whole religion of the Jews, inculcated on him from his infancy, was founded in the belief of divine inspiration … he might readily mistake the coruscations of his own fine genius for inspirations of an higher order. This belief carried, therefore, no more personal imputation, than the belief of Socrates, that himself was under the care and admonitions of a guardian Daemon. And how many of our wisest men still believe in the reality of these inspirations, while perfectly sane on all other subjects.”

So fixated, it seemed, was Jefferson on separating the philosopher Jesus from the mythical one that he famously edited his own Bible, taking out the supernatural mumbo-jumbo that seems to permeate the Gospels. He also penned this delightful quote:

“But the greatest of all the reformers of the depraved religion of his own country was Jesus of Nazareth. Abstracting what is really his from the rubbish in which it is buried, easily distinguished by its luster from the dross of his biographers, and as separable from that as the diamond from the dunghill.”

Schopenhauer: Jefferson’s Disciple

In the following passage, philosopher Arthur Schopenhauer seems to be taking a page out of Jefferson’s notebook.

He wrote that believers convince themselves their religion’s myths are somehow connected to its ethical code and thus “regard every attack on the myth as an attack on right and virtue.”

Here, Schopenhauer’s myth is Jefferson’s dunghill, and Schopenhauer’s ethical code is Jefferson’s diamond. But the German takes the idea one step further.

Almost comically, “this reaches such lengths that, in monotheistic nations, atheism or godlessness has become the synonym for absence of all morality.”

To those who equate myth with morality, the rejection of one must therefore entail the rejection of the other.

A Fundamental Misunderstanding


[Charles] Bradlaugh wrote that “the Atheist does not say ‘There is no God,’” but says: “‘I know not what you mean by God; I am without idea of God; the word God is to me a sound conveying no clear or distinct affirmation. I do not deny God, because I cannot deny that of which I have no conception’ especially when even those who believe in the things cannot even define it.”

This seems to me one of the fundamental misunderstandings that exist between believers and non-believers. One cannot deny that which one does not understand.

+ + +

This post first appeared on the blog of Eric Lanke, an association executive and author. You can follow him on Twitter @ericlanke or contact him at

Monday, June 19, 2017

It's Okay to Ignore People

My phone doesn't ring as much as it used to. Fifteen years ago, it seemed, my phone rang all the time. Sometimes it was a member of my association looking for help, sometimes it was an unsolicited salesperson, and sometimes it was someone looking for a piece of information that only I or my association could provide.

And, as someone interested in maintaining a professional reputation, I tried to respond to as many of these calls as I could. The members, of course, would get my prompt attention. The unsolicited salespeople would be politely asked to stop calling if I wasn't in the market or otherwise interested in their services. And I would do whatever I could, within the policies and procedures of my association, to help the people looking for information.

Today, as I said, my phone doesn't ring as much as it used to. The phone is not as popular as it used to be, and I'm in a different position. It's probably harder for outsiders to get my number and get to me. But the calls that do get through still fall into the same three categories.

And today, the only people who get my attention are the members. Both the unsolicited salespeople and the strangers looking for information are ignored.

And that's okay.

Frankly, it took me some time to come to that conclusion. The first to get the cold shoulder were the unsolicited salespeople, and I actually felt guilty about that for a few years. They've got a tough job after all -- calling strangers on the telephone and asking them to buy something they probably don't want. But they created so many interruptions for me -- needless interruptions -- that I eventually found peace with the decision to ignore them.

And shortly thereafter, I realized that the strangers seeking information were creating exactly the same kind of interruptions for me.

Hi, you don't know me, but I'm doing a study on the industry your association represents, and I was wondering if I could get a few minutes of your time?

Hello, I work for a venture capital firm and we're thinking about buying one of the companies in your industry, and I need whatever information you have on the size of product market this company plays in.

Good afternoon, I'm an engineer and I've invented a new product that's going to revolutionize the industry your association represents, and I want you to put me in touch with the companies most likely to license this technology.

One day, after getting one of these calls, I had a kind of epiphany. Nine times out of ten, the kind of information I was being asked to provide was tightly connected to the value proposition that we had created for our members, and for which they paid substantial amounts of money in the form of membership dues. In other words, I worked for a trade association, not a public help line. The information I had access to was not only valuable, it belonged to my members, not any stranger who had found our phone number on our website.

So I started ignoring the people making these calls as well. And, unlike the case of the unsolicited salespeople, I didn't feel guilty at all.

Inspired by this.

+ + +



Monday, June 12, 2017

Don't Be Misled By the Concentric Circles of Diversity

A few weeks ago I mentioned that Spark Consulting was out with another white paper -- this one on the sometimes challenging topic of diversity and inclusion -- and that it was another thought-provoking read for association CEOs. If you're interested, you can download "Include Is A Verb: Moving From Talk to Action on Diversity and Inclusion" here. It's free and you don't even have to register for it.

I also said that, for me, there were several key concepts. Here's another one.

Look at the picture accompanying this blog post. It's from the white paper, and it leads off the section in which the authors provide some helpful advice on starting your own diversity and inclusion initiative at your association. They, quite correctly, I think, advise that you start first and foremost with yourself.

The first step is to undertake the work individuals must do on themselves.

Start in the center of the target with yourself and then, as implied by the picture, begin working your way out in concentric rings, working next to reform your workplace, then your volunteer leaders, then your membership, and finally, if you have the courage, the very profession or industry your association represents.

To be fair to the authors, they admit in the text of the white paper that things are not really this linear. That, for example, diversity in the industry your association represents is obviously a prerequisite for diversity in your association's membership, and that diversity in your association's membership is just as obviously a prerequisite for diversity in your association's leadership. In this regard, diversity in the outer three concentric rings shown in the picture moves from the outside in, not the inside out.

In my previous post I shared some of the leadership discussions and diversity initiatives that I participated in when I was Board chair of the Wisconsin Society of Association Executives. Well, it was this realization about the white paper's outer three concentric rings--and the recognition of how difficult changing the diversity of the profession we represented would be--that was one of the primary factors that led us down the "dimensions of diversity" path I described. Rather than determining what the diversity of the association management profession in Wisconsin should be, we decided instead to better understand what the diversity of that profession was, and then work proactively to ensure that that diversity was reflected in our association's membership and leadership.

That's one problem I have with the picture. Once you're told you're supposed to start in the center, you assume you have to keep moving outward through the rings. You don't.

Here's the other problem I have with it. The diversity of your association workplace and the diversity of your association leadership have little or no connection at all.

Unless you work for one of the few associations of association professionals, or for an even rarer association entirely staffed by the same people who work in the industry or profession the association represents, then, by definition, the profession of the people who work for the association and the profession of the people who belong to the association are two different professions. And two different professions likely have two different dimensions of diversity. What's important in one may not be important in the other, and therefore, fixing one may have no impact on fixing the other.

It might actually be better for the white paper to show two targets instead of one. The first with yourself in the middle, working outward to change the diversity of your workplace, and the second with your association's industry or profession in the middle, again working outward to change the diversity of your association's membership, and then its leadership. That way, not only do you start from the right premise, you've correctly split the task before you into its two basic strategies.

+ + +



Saturday, June 10, 2017

The Mind and the Brain by Jeffrey M. Schwartz, M.D., and Sharon Begley

Some time ago I read a book called The Brain That Changes Itself, written by Norman Doidge. Its central thesis is that the brain exhibits something called “plasticity”—that it can be rewired and retrained throughout life, a notion that runs contrary to a hundred years of brain theory, but which is gaining more and more acceptance. Each chapter in Doidge’s book presents a case study of someone who consciously or unconsciously used the plasticity of their brain to change fundamental behaviors or regain functionality medical science predicted was impossible.

My own reaction to Doidge’s book was that its subject was fascinating stuff. Doidge makes a compelling argument that the brain is not as we once believed it to be. But a larger—and far more fascinating—question seemed to loom unanswered in all of Doidge’s case studies. What, specifically, is doing the rewiring? Is there an entity, separate from the brain, such as the “mind” or the “soul,” that can exert top-down control over this process? Or is plasticity an inherent property of the brain the way wetness is an inherent property of water? Is there, or isn’t there, a “ghost in the machine?”

Well, Jeffrey Schwartz in The Mind and the Brain tackles that question head-on, and comes down decidedly on the side that the mind does exist as something distinct from the brain. And not only that: the mind can change the brain through “its” conscious will, and it manifests itself through quantum phenomena in our heads.

I’m not sure I buy that, but let’s unpack it.

1. The Mind Does Exist As Something Distinct from the Brain

From Schwartz’s introduction, page 11:

Through the centuries, the idea of mindfulness has appeared, under various names, in other branches of philosophy. Adam Smith, one of the leading philosophers of the eighteenth-century Scottish Enlightenment, developed the idea of the “impartial and well-informed spectator.” This is “the man within,” Smith wrote in 1759 in The Theory of Moral Sentiments, an observing power we all have access to, which allows us to observe our internal feelings as if from without. This distancing allows us to witness our actions, thoughts, and emotions not as an involved participant but as a disinterested observer. In Smith’s words:

“When I endeavor to examine my own conduct … I divide myself as it were into two persons; and that I, the examiner and judge, represent a different character from the other I, the person whose conduct is examined into and judged of. The first is the spectator. … The second is the agent, the person whom I properly call myself, and of whose conduct, under the character of a spectator, I was endeavoring to form some opinion.”

It was in this way, Smith concluded, that “we suppose ourselves the spectators of our own behaviour.” The change in perspective accomplished by the impartial spectator is far from easy, however: Smith clearly recognized the “fatiguing exertions” it required.

Before reading any further, I scribbled in the margin, “Will he make the argument that this ‘spectator’ is the mind, and what it observes is the brain?” And Schwartz goes on to do exactly that farther down the same page and through the rest of the book. He is approaching the question as a clinician, trying to find a solution to the obsessive-compulsive disorders (OCDs) of his patients, and in this idea of the observing mind and the acting brain, he thinks he has found -- and later demonstrates -- an effective therapy.

The obsessions that besiege the patient seemed quite clearly to be caused by pathological, mechanical brain processes -- mechanical in the sense that we can, with reasonable confidence, trace their origins and the brain pathways involved in their transmission. OCD’s clear and discrete presentation of symptoms, and reasonably well-understood pathophysiology, suggested that the brain side of the equation could, with enough effort, be nailed down.

As for the mind side, although the cardinal symptom of obsessive-compulsive disorder is the persistent, exhausting intrusion of an unwanted thought and an unwanted urge to act on that thought, the disease is also marked by something else: what is known as an ego-dystonic character.

I’ll stop there simply to note the word choice. Scarcely a paragraph after introducing Smith’s idea of “the impartial and well-informed spectator,” Schwartz has simply adopted, as an assumption, the linguistic paradigm of “the brain side” and “the mind side” of his equation. Stating it, evidently, makes it so.

Now, I haven’t read Adam Smith, and I’m certainly not a clinical psychiatrist, but the problem I detect is that Schwartz is begging the question. The effectiveness of the therapy he builds on this premise -- and it does prove to be effective -- is a red herring.

When someone with the disease experiences a typical OCD thought, some part of his mind knows quite clearly that his hands are not really dirty, for instance, or that the door is not really unlocked (especially since he has gone back and checked it four times already). Some part of his mind (even if, in serious cases, it is only a small part) is standing outside and apart from the OCD symptoms, observing and reflecting insightfully on their sheer bizarreness.

Just because there are two “parts” of the brain, each chemically encoded with a pair of discordant thoughts, and just because you name one of those parts the “mind” and the other part the “brain,” that does not then mean that the “brain” and the “mind” exist as separate phenomena, with the dividing line of physicality (or quantum superposition) running between them. You have stated that the mind and the brain are different, but you have not proven it.

A. The Language Defines, or Dismisses, the Argument

I’ve complained about this before, but Schwartz’s book often took my frustration to new heights. We frankly need a new vocabulary to talk substantively about these issues. Until that day arrives, unfortunately, even bona fide experts in the field, like Schwartz, will be hobbled by inaccurate and misleading turns of phrase like the dozens that appear in his text.

Some simply demonstrate just how confusing the subject is, with terms that should have clear and mutually exclusive definitions being used interchangeably. The following three sentences appear on the same spread of pages.

The discovery that neuroplasticity can be induced in people who have suffered a stroke demonstrated, more than any other finding, the clinical power of a brain that can rewire itself.

Oops. Watch your language there. Aren’t you the one arguing that it is the “mind” that changes the “brain,” not the “brain” that changes itself?

Stapp’s youthful pursuit of the foundations of quantum physics evolved, in his later years, into an exploration of the mind’s physical power to shape the brain.

That’s better. That’s what you mean to say, isn’t it?

Individuals choose what they will attend to, ignoring all other stimuli in order to focus on one conversation, one string of printed characters, or, in Buddhist mindfulness meditation, one breath in and one breath out.

Wait. I thought we were talking about minds and brains. Now you’re calling something an “individual?” Is that the “mind,” or the “brain,” or some third thing you haven’t yet defined?

These are all more or less innocent slips, not intended to frame the argument in one direction or another. But, all too frequently, I found Schwartz to be using language that is outright dismissive of any other paradigm than the one he favors. To see what I mean by that, let’s dive a little deeper into a very deep subject.

B. Moral Philosophy 101

I’m tired of reading paragraphs like this.

One important answer is that the materialist-determinist model of the brain has profound implications for notions like moral responsibility and personal freedom. The interpretation of mind that dominates neuroscience is inimical to both. For if we truly believe, when the day is done, that our mind and all that term entails -- the choices we make, the reactions we have, the emotions we feel -- are nothing but the expression of a machine governed by the rules of classical physics and chemistry, and that our behavior follows ineluctably from the workings of our neurons, then we’re forced to conclude that the subjective sense of freedom is a “user illusion.” Our sense that we are free to make moral decisions is a cruel joke, and society’s insistence that individuals (with exceptions for the very young and the mentally ill) be held responsible for their actions is no more firmly rooted in reason than a sand castle is rooted in the beach.

There are so many things wrong with this line of thinking that I marvel that grown, educated adults actually subscribe to it.

First, as I began to describe above, you’re begging the question (again) with phrases like “nothing but.” For if we truly believe … that our mind and all that term entails … are nothing but the expression of a machine governed by the rules of classical physics and chemistry. Well, what else besides physics and chemistry would they be? Elven magic? “Nothing but” are words by which you smuggle your conclusion into the premise, and which therefore bias the very phraseology of your discussion in its favor. Drop them, and words like them. For if we truly believe … that our mind and all that term entails … are the expression of a machine governed by the rules of classical physics and chemistry … then we’re forced to conclude … What? What are we forced to conclude? Not sure, but at least we’re no longer presuming something not in evidence.

Second, you’re using the wrong grammar. If the mind is an expression of a machine governed by physical laws, then there is no “we” to be further speaking of, at least not the kind of “we” encoded in the syntax you persist in using. In the universe you have postulated, “we” haven’t lost anything, because “we” don’t exist. Choices are made, reactions are had, and emotions are felt, but “we” aren’t making, having, or feeling them. The thinking is flawed if for no other reason than that the predicate does not match the subject.

Third, looking ahead to concepts that will be argued later in the book, why should we believe that matter governed by the rules of classical physics is devoid of moral content, yet matter governed by those of quantum physics is somehow imbued with it? When the “particle” goes through the slit and we see the resulting wave pattern on the screen, we may not understand how it is possible, but is the particle in question not following the same kind of universal laws as the planet classically orbiting the sun? Physical laws, regardless of how counter-intuitive they may seem, are still physical laws. And quantum brains could be just as determined as classical ones.

And fourth, even in the world you’re describing, moral action and personal responsibility still exist. Shouldn’t a malfunctioning machine be fixed? Even if it is only harming other machines? To use an emerging example, imagine a world in which we have driverless trucks transporting goods all over the country for us. If one of those driverless trucks had a flaw in its programming that forced it to run other driverless trucks off the road, who would argue that something shouldn’t be done about it? That driverless truck should be reprogrammed, or decommissioned if reprogramming proved ineffective. Now, I get that some may still argue that the only reason to fix the driverless truck is because by derailing the shipments of goods, it is by extension harming the moral agents (i.e., people) for whom those goods are intended. To that I say maybe. But it seems to me that even in a world where those people have no agency -- where, in other words, choices are made, reactions are had, and emotions are felt, not by an observing mind, but by an acting brain -- then there remains a kind of utility in fixing broken machines that creates a moral imperative.

2. The Mind Can Change the Brain Through “Its” Conscious Will

I understand why we all struggle with these ideas, and why some, like Schwartz, want to find a way out of what most will perceive as a “chicken and egg” dilemma. If the actions of the brain come from the physics of its electrochemical activity, then it seems wrong to also say that the electrochemical activity of the brain comes from the actions of the brain. It feels like it has to be one or the other, not both. Either the brain (or the “mind” or the “individual” or the “soul”) directs its electrochemical activity to create specific actions within itself and its body -- a condition authors like Schwartz seem desperate to prove -- or the naturally occurring electrochemical activity of the brain creates the sensation of consciousness that accompanies so many of the brain’s specific actions -- a condition that seems to be anathema to the same authors.

But there is actually a fair amount of reasonable conjecture about that second possibility. In evolution circles, it is called epiphenomenalism.

A. Epiphenomenalism

Epiphenomenalism acknowledges that mind is a real phenomenon but holds that it cannot have any effect on the physical world. This school acknowledges that mind and matter are two separate beasts, as are physical events and mental events, but only in the sense that qualia and consciousness are not strictly reducible to neuronal events, any more than the properties of water are reducible to the chemical characteristics of oxygen and hydrogen. From this perspective, consciousness is an epiphenomenon of neuronal processes.

This is from a fairly helpful section of Schwartz’s book where he briefly describes six different “philosophies of mind and matter,” listing them from what he terms the most to the least materialistic. They are:

Emergent Materialism
Agnostic Physicalism
Process Philosophy
Dualistic Interactionism

Never mind what they all mean; I only showed them to illustrate how far to the materialistic side my sympathies lie. Except, when it comes to epiphenomenalism, I think Schwartz gets it slightly wrong -- or at least his description doesn’t quite capture what I think is going on.

Epiphenomenalism views the brain as the cause of all aspects of the mind, but because it holds that the physical world is causally closed -- that is, that physical events can only have physical causes -- it holds that the mind itself doesn’t actually cause anything to happen that the brain hasn’t already taken care of. It thus leaves us with a rather withered sort of mind, one in which consciousness is, at least in scientific terms, reduced to an impotent shadow of its former self.

He keeps begging the question, doesn’t he? Why on earth would anyone believe in something that “withers” the mind, that “reduces” it to an “impotent shadow” of its former self?

As a nonphysical phenomenon, it cannot act on the physical world. It cannot make stuff happen. It cannot, say, make an arm move. Epiphenomenalism holds that the brain is the cause of all the mental events in the mind but that the mind itself is not the cause of anything. Because it maintains that the causal arrow points in only one direction, from material to mental, this school denies the causal efficacy of mental states.

This causal arrow idea is key to Schwartz’s entire thesis, but instead of questioning that, let’s take a look at what Schwartz says about evolutionary biologists and their definition of epiphenomenalism.

The basic principles of evolutionary biology would seem to dictate that any natural phenomenon as prominent in our lives as our experience of consciousness must necessarily have some discernible and quantifiable effect in order for it to exist, and to persist, in nature at all. It must, in other words, confer some selective advantage.

Sigh. More begging the question. If philosophical epiphenomenalism is true, then there is no external observer than can determine that the “experience of consciousness” is “prominent in our lives.” By choosing that phraseology, you are biasing the argument against the possibility that epiphenomenalism is true. Again, the better phraseology comes from just deleting the complicating words: The basic principles of evolutionary biology would seem to dictate that any natural phenomenon must necessarily have some discernible and quantifiable effect in order for it to exist, and to persist, in nature at all.

True enough. But as Schwartz well knows, there is a problem. Not every natural phenomenon is specifically selected for. Some are along for the ride with others that are.

True, evolutionary biologists can trot out many examples of traits that have been carried along on the river of evolution although not specifically selected for (the evolutionary biologists Stephen Jay Gould and Richard Lewontin called such traits spandrels, the architectural term for the elements between the exterior curve of an arch and the right angle of the walls around it, which were not intentionally built but were instead formed by two architectural traits that were “selected for”). But consciousness seems like an awfully prominent trait not to have been the target of some selection pressure.

There we go again. Consciousness “seems like an awfully prominent trait.” It does? To whom? To the observing mind you haven’t yet proven isn’t something other than the product of the acting brain? There is a deep reluctance on the part of many to consider -- or, as I hope I have shown, to even frame a fair argument for -- the possibility that consciousness is epiphenomenal. Part of this reluctance comes, I suppose, from the very nature of consciousness itself. The ghost in the machine would have a hard time, after all, admitting that it was, in fact, a ghost. If our consciousness were epiphenomenal, then it’s likely that “we” wouldn’t be able to tell that it was. Almost by definition. As Schwartz quotes one of his colleagues:

Epiphenomenalism is a possible thesis, but it is absolutely incredible, and if we seriously accepted it, it would make a change in our world view more radical than any previous change, including the Copernican Revolution, Einsteinian relativity theory and quantum mechanics.

To which I say, yes, exactly.

B. Conscious Thoughts Without Conscious Intent

Because experiments have shown repeatedly that the brain activity responsible for conscious movements begins before conscious awareness. The most famous may have been performed by neurologist Benjamin Libet in the 1980s. Libet will also show up later in our discussion, but we needn’t even go there to demonstrate the point. Evidence that conscious brain activity occurs through actions other than the free exercise of consciousness can be pulled from Schwartz’s own work on the study of obsessive-compulsive disorder.

Someone with obsessive-compulsive disorder derives no joy from the actions she takes. This puts OCD in marked contrast to, for instance, compulsive gambling or compulsive shopping. Although both compulsive shoppers and compulsive gamblers lack the impulse control to resist another trip to the mall or another game of video poker, at least they find the irresistible activity, well, kind of fun. An OCD patient, in contrast, dreads the arrival of the obsessive thought and is ashamed and embarrassed by the compulsive behavior. She carries out behaviors whose grip she is desperate to escape, either because she hopes that doing so will prevent some imagined horror, or because resisting the impulse leaves her mind unbearably ridden with anxiety and tortured by insistent, intrusive urges. Since the obsessions cannot be silenced, the compulsions cannot be resisted. The sufferer feels like a marionette at the end of a string, manipulated and jerked around by a cruel puppeteer -- her own brain.

Wow. Seems like OCD is itself proof that conscious thoughts can occur without conscious intent. What makes it a disorder is perhaps not the fact that the “conscious” mind cannot control what appear to be the determined actions of the brain, but the fact that the epiphenomenal consciousness finds itself out of sync with those same determined actions. Normally, after all, the consciousness believes it has willed the determined actions of the brain to occur.

And Schwartz’s short description of selective serotonin reuptake inhibitors (SSRIs) as a potential pharmacological therapy for OCD leaves, to my way of thinking, a gapingly open question that needs further exploration. These SSRIs (Prozac, Paxil, Zoloft, Luvox, and Celexa) all block “the molecular pump that moves serotonin back into the neurons from which it was released, thus allowing more of the chemical to remain in the synapse.” It makes me wonder if serotonin isn’t a kind of “consciousness chemical,” since the more of it you have in your synapses, the more aligned the actions of your determined brain seem to be with your conscious will. Too little serotonin in your synapses and the resulting OCD behaviors reveal how disconnected those two phenomena can really be.

C. Telling People They Are More Than Their Gray Matter Doesn’t Make It So

But that’s not how Schwartz approached the problem and not the kind of therapy that he developed as a result. He goes into some detail about the anatomy and biochemistry of the brain that appear responsible for OCD, and I have no reason to question any of it. He describes it as a kind of overactive “worry circuit” in the brain, something that fires to help alert the organism that something is amiss in its environment. In the case of OCD, the circuit fires even when things are not amiss, causing the sufferer to check and double-check to make sure the things not out of order are, in fact, in order.

That’s all good. What troubled me was that his therapeutic approach depended on the belief that the patient existed as something separate from the functioning of her brain.

I began showing patients in the treatment group their PET scans, to drive home the point that an imbalance in their brains was causing their obsessive thoughts and compulsive behaviors. Initially, some were dismayed that their brain was abnormal. But generally it dawned on them, especially with therapy, that they are more than their gray matter. When one patient … exclaimed, “It’s not me; it’s my OCD!” a light went off in my head: what if I could convince patients that the way they responded to the thoughts of OCD could actually change their brains?

This makes absolutely no sense to me. Saying “It’s not me; it’s my OCD!” is akin to saying “It’s not me; it’s the way my brain is functioning!” They are, in essence, the same thing, as troubling as that thought may be to someone who believes that they exist in some way separate from their unique brain function.

Which, of course, many people do, and which, paradoxically, allows Schwartz’s cognitive therapy for OCD to be efficacious. Among several other techniques, Schwartz developed practices he calls Relabeling and Reattributing, where the OCD patient makes conscious the separation between herself and her malfunctioning brain.

Accentuating Relabeling by Reattributing the condition to a rogue neurological circuit deepens patients’ cognitive insight into the true nature of their symptoms, which in turn strengthens their belief that the thoughts and urges of OCD are separate from their will and their self. By Reattributing their symptoms to a brain glitch, the patients recognize that an obsessive thought is, in a sense, not “real” but, rather, mental noise, a barrage of false signals. This improves patients’ ability not to take the OCD thoughts at face value. Reattributing is particularly effective at directing the patient’s attention away from demoralizing and stressful attempts to squash the bothersome OCD feeling by engaging in compulsive behaviors. Realizing that brain biochemistry is responsible for the intensity and intrusiveness of the symptoms helps patients realize that their habitual frantic attempts to wash (or count or check) away the symptoms are futile.

I fear I would make a troublesome patient for Dr. Schwartz, for, even though I could potentially benefit from his new therapeutic ideas, I wouldn’t be able to stop from asking troublesome questions. What goes on in the brain, for example, that brain biochemistry isn’t responsible for? If it is brain biochemistry that is responsible for the intensity and intrusiveness of my OCD symptoms, then isn’t it also brain biochemistry that is responsible for my conscious attempts to relabel and reattribute them? What separates one from the other?

Relabeling and Reattributing reinforce each other. Together, they put the difficult experience of an OCD symptom into a workable context: Relabeling clarifies what is happening, and Reattributing affirms why it’s happening, with the result that patients more accurately assess their pathological thoughts and urges. The accentuation of Relabeling by Reattributing also tends to amplify mindfulness. Through mindfulness, the patient distances himself (that is, his locus of conscious awareness) from his OCD (an intrusive experience entirely determined by material forces). This puts mental space between his will and the unwanted urges that would otherwise overpower the will.

Putting mental space between the will and the unwanted urges is something I can understand and get behind. But the other separation Schwartz is talking about still seems to me asserted but not proven. The parenthetical phrases alone in the above excerpt contain enough fuzziness to keep the two of us going around in circles for some time. In the first, he seems to be saying that the patient can be equated with his locus of conscious awareness. We are, mysteriously it would seem, the act of paying attention to something. In the second, he again begs the question with that slippery word entirely. “An intrusive experience entirely determined by material forces” is a very different idea than the one expressed by “an intrusive experience determined by material forces.” The former implies that its counterpart, in this case the will, is not determined by material forces. But who says it isn’t?

Despite the fact that Schwartz’s cognitive therapy works, it does not necessarily follow that his patients are able to marshal their conscious will from a source other than the material forces on which the brain is built and functions. How can the observing mind cause something to happen in the acting brain through “its” conscious will when that very consciousness may very well be dependent on the brain and its causal activities in order to manifest itself? Schwartz’s OCD therapy may have shown that there is a way out of that maze, but I don’t think it shows what that way is.

3. The Mind Manifests Itself Through Quantum Phenomena in Our Heads

I don’t understand quantum mechanics. And I’ll wager that Schwartz doesn’t either. As physicist Richard Feynman once reportedly said, if you think you understand quantum mechanics, then you absolutely do not understand quantum mechanics. It is, seemingly by its very nature, counter-intuitive and resistant to human understanding, full of undecipherable math equations and observable phenomena that apparently defy the physical logic that governs the classical world of bodies in motion.

And yet, Schwartz hangs his entire argument for the mechanism by which the mind changes the brain on one of these seeming tricks of quantum mechanics -- the apparent reality that the act of observation changes, or perhaps defines, what is being observed.

I’ll try to tease apart what Schwartz reports to understand about quantum mechanics from what I think I know about the subject, but we’re likely to get twisted into even more challenging knots than the philosophical ones I’ve created so far.

Let’s begin with the math.

A. Mere Mathematical Devices

It has been a century since the German physicist Max Planck fired the opening shot in what would become the quantum revolution. On October 19, 1900, he submitted to the Berlin Physical Society a proposal that electromagnetic radiation (visible light, infrared radiation, ultraviolet radiation, and the rest of the electromagnetic spectrum) exists as tiny, indivisible packets of energy rather than as a continuous stream. He later christened these packets quanta. … Planck viewed his quanta as mere mathematical devices, something he invoked in “an act of desperation” to explain why heated, glowing objects emit the frequencies of energy that they do (an exasperating puzzle known as the black-body radiation problem). He did not seriously entertain the possibility that they corresponded to physical entities. It was just that if you treated light and other electromagnetic energy as traveling in quanta, the equations all came out right.

So here we have one of the fundamental realities of science. Not that light travels in “tiny, indivisible packets” called quanta, but that math is the tool that science uses to describe, not define, natural phenomena. In one of my most accessible examples, the mathematics of the epicycles used in pre-Keplerian astronomy did an admirable job in describing (and predicting) the observable motions of the planets in the solar system, but the math used did not bring into existence the great celestial wheels upon wheels that the formulae described. Like all sciences, quantum mechanics uses mathematics first to describe what is observed and then to extrapolate from those descriptive formulae new phenomena and understandings of reality. In some cases, those extrapolations are confirmed by observation, and those instances are held up as proof of the predictive power of the mathematical theory. But one always has to be careful to remember that the theory is one constructed of epicycles -- structures that have no hard existence in reality.

Now, as I said earlier, one of the fundamental observed phenomena of quantum mechanics is the apparent reality that the act of observation changes, or perhaps defines, what is being observed. Let’s delve deep into what I only mentioned earlier, the famous two-slit experiment.

B. The Famous Two-Slit Experiment

In 1801, the English polymath Thomas Young rigged up the test that has been known forever after as the two-slit experiment. At the time, physicists were locked in debate over whether light consisted of particles (minuscule corpuscles of energy) or waves (regular undulations of a medium, like water waves in a pond). In an attempt to settle the question, Young made two closely spaced vertical slits in a black curtain. He allowed monochromatic light to strike the curtain, passing through the slits and hitting a screen on the opposite wall. Now, if we were to do a comparable experiment with something we know to be corpuscular rather than wavelike -- marbles, say -- there is no doubt about the outcome.

Pay close attention to Schwartz’s use of terms here. Planck’s “quanta” have now become “particles,” a transposition not fully explained, and “marbles” have been equated with those particles of light (soon to be called “photons”), an extrapolation indefensible by any structure of logic I’m familiar with. In what way, exactly, should we expect marbles to act similarly to “minuscule corpuscles of energy”?

Most marbles fired at, for instance, a fence missing two slats would hit the fence and drop on this side. But a few marbles would pass through the gaps and, if we had coated the marbles with fresh white paint, leave two bright blotches on the wall beyond, corresponding to the positions of the two openings.

This is not what Young observed.

Again, why should it be? Comparing photons to marbles is like comparing electrons to elephants.

Instead, the light created, on a screen beyond the slitted curtain, a pattern of zebra stripes, alternating dark and light vertical bands. It’s called an interference pattern. Its genesis was clear: where crests of light waves from one slit met crests of waves from the other, the waves reinforced each other, producing the bright bands. Where the crest of a wave from one slit met the trough of a wave from the other, they canceled each other, producing the dark bands. Since the crests and troughs of a light wave are not visible to the naked eye, this is easier to visualize with water waves.

Okay, but be careful. We’re about to make another “photons to marbles” comparison. Unlike water waves, light waves do not need a medium in order to propagate. A few minutes on Google helped me verify that double-slit experiments done in a vacuum produce the same results: an interference pattern despite the lack of any medium for the waves to be traveling through. Light waves, therefore, are about as analogous to water waves as photons are to marbles (or electrons are to elephants).

Place a barrier with two openings in a pool of water. Drop a heavy object into the pool -- watermelons work -- and observe the waves on the other side of the barrier. As they radiate out from the watermelon splash, the ripples form nice concentric circles. When any ripple reaches the barrier, it passes through both openings and, on the other side, resumes radiating, now as concentric half-circles. Where a crest of ripples from the left opening meets a crest of ripples from the right, you get a double-height wave. But where crest meets trough, you get a zone of calm. Hence Young’s interpretation of his double-slit experiment: if light produces the same interference patterns as water waves, which we know to be waves, then light must be a wave, too. For if light were particulate, it would produce not the zebra stripes he saw but, rather, the sum of the patterns emerging from the two slits when they are opened separately -- two splotches of light, perhaps, like the marbles thrown at our broken fence.

It’s not my broken fence. I think what troubles me most about this line of reasoning is how falsely dichotomous it is. It assumes that light is either a wave or a particle, although we already know that light bends (if not breaks) the very definitions of both of those words. Let me boldly state the obvious. Light is neither a wave nor a particle. It is something else. You planted your own seeds of disappointment when you decided that light had to act predictably like either a wave or a particle. Don’t blame the observation when you yourself biased your thinking against it.
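For what it’s worth, the zebra stripes Young saw fall out of a few lines of arithmetic. Here’s a minimal sketch of the standard far-field result -- the intensity at a screen position goes as the cosine-squared of the phase difference between the two paths. The specific numbers (wavelength, slit separation, screen distance) are illustrative assumptions of mine, not values from the book:

```python
import math

# Illustrative (assumed) parameters -- not from the text.
wavelength = 500e-9    # 500 nm light, in meters
slit_sep = 50e-6       # distance between the two slits, in meters
screen_dist = 1.0      # curtain-to-screen distance, in meters

def intensity(x):
    """Relative two-slit intensity at screen position x (far-field approximation).

    The path difference between the two slits is roughly slit_sep * x / screen_dist,
    so the phase difference is 2*pi times that over the wavelength; the two waves
    sum, and the intensity goes as cos^2 of half the phase difference.
    """
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

# Bright band at the center; dark band where the path difference is
# half a wavelength, i.e. at x = wavelength * screen_dist / (2 * slit_sep).
center = intensity(0.0)
first_dark = intensity(wavelength * screen_dist / (2 * slit_sep))
print(center, first_dark)  # full brightness vs. effectively zero
```

Plot that function across the screen and you get exactly the alternating light and dark vertical bands Schwartz describes.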

But what does any of this have to do with the mind or the brain? We’re getting there.

So far, so understandable. But, for the next trick, turn the light source way, way down so that it emits but a single photon, or light particle, at a time. (Today’s photodetectors can literally count photons.) Put a photographic plate on the other side, beyond the slits. Now we have a situation more analogous, it would seem, to the marbles going through the fence: zip goes one photon, perhaps making it through a slit. Zip goes the next, doing the same. Surely the pattern produced would be the sum of the patterns produced by opening each slit separately -- again, perhaps two intermingled splotches of light, one centered behind the left slit and the other behind the right.

Why would you think that? You’re still not shooting marbles. You’re emitting the smallest possible quantities of light. Marbles go through one missing fence slat or the other. Light, whatever its quantity, will go through both curtain slits.

But no.

As hundreds and then thousands of photons make the journey (this experiment was conducted by physicists in Paris in the mid-1980s), the pattern they create is a wonder to behold. Instead of the two broad patches of light, after enough photons have made the trip you see the zebra stripes. The interference pattern has struck again. But what interfered with what? This time the photons were clearly particles -- the scientists counted each one as it left the gate -- and our apparatus allowed only a single photon to make the journey at a time. Even if you run out for coffee between photons, the result is eventually the same interference pattern. Is it possible that the photon departed the light source as a particle and arrived on the photographic plate as a particle (for we can see each arrive, making a white dot on the plate as it lands) -- but in between it became a wave, able to go through both slits at once and interfere with itself just as a water wave from our watermelon drop goes through the two openings in the barrier? Even weirder, each photon -- and remember, we can release them at any interval -- manages to land at precisely the right spot on the plate to contribute its part to the interference pattern.

Do you see how Schwartz is tying himself into knots, trying to interpret the behavior of light through the false dichotomy of wave vs. particle? It has to be one or the other, so let’s call it a particle when it acts like a particle and a wave when it acts like a wave. Except I don’t see why such knots are necessary since the “particles” are acting like particles, in the sense that they hit the photographic plate one at a time, just as one would expect them to. It’s only when “hundreds and thousands” of particles make the journey that the interference pattern emerges.
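The “one photon at a time” result is easy to mimic numerically, which makes the point vivid: draw each arrival position independently from the interference distribution, and no single dot shows any stripes -- they emerge only in the aggregate. This is a toy sketch of mine (the cos-squared density and the bin layout are assumptions for illustration, not anything from the book), using simple rejection sampling:

```python
import math
import random

random.seed(1)  # reproducible toy run

def sample_arrival():
    """Draw one photon's landing position x in [-1, 1] (arbitrary units)
    from a cos^2 interference density, via rejection sampling."""
    while True:
        x = random.uniform(-1.0, 1.0)
        # density proportional to cos^2(2*pi*x): bright fringes at x = 0, +/-0.5, +/-1
        if random.random() < math.cos(2 * math.pi * x) ** 2:
            return x

# Each sampled arrival is a single dot on the "plate"; histogram them.
hits = [sample_arrival() for _ in range(20000)]
bins = [0] * 20
for x in hits:
    bins[min(int((x + 1.0) / 2.0 * 20), 19)] += 1

print(bins)  # alternating high/low counts: the zebra stripes, in aggregate
```

Note that each photon here “knows” nothing about the others -- the stripes are just the shape of the probability distribution each one is drawn from, which is exactly why you can run out for coffee between photons and still get the pattern.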

And, as Schwartz goes on to explain, this is not just some “weird property of light.” The same experiments, with the same results, have been done with electrons and “larger particles.”

Electrons -- a form of matter -- can behave as waves. A single electron can take two different paths from source to detector and interfere with itself: during its travels it can be in two places at once. The same experiments have been performed with larger particles, such as ions, with identical results. And ions … are the currency of the brain, the particles whose movements are the basis for the action potential by which neurons communicate. They are also, in the case of calcium ions, the key to triggering neurotransmitter release. This is a crucial point: ions are subject to all of the counterintuitive rules of quantum physics.

Maybe now you see where Schwartz is going with all of this. Forget the fact that calling electrons “a form of matter” is misleading in the extreme. And forget the fact that using the same term, “particle,” to describe both electrons (whatever they are) and calcium ions (atomic masses with 20 protons, 20 neutrons, and 18 electrons) is also misleading in the extreme. Schwartz has apparently taken us down this quantum journey so he can arrive at this destination -- the ions that are the basis for neurological function exhibit quantum properties.

Because, for the purpose of his conjecture, Schwartz has only seemingly been tying himself into knots. He knows why photons, electrons, and calcium ions exhibit these strange “double-slit” behaviors, and it isn’t because they are sometimes particles and sometimes waves. He’s been leading us down the garden path.

C. Collapsing Wave Functions

A key to understanding the whole bizarre situation is that we actually measure the photon or electron at only two points in the experiment: when we release it (in which case a photodetector counts it) and when we note its arrival at the end. The conventional explanation is that the act of measurement makes a spread-out, fuzzy wave (at the slits) collapse into a discrete, definite particle (on the scintillation plate or other detector). According to quantum theory, what in fact passes through the slits is a wave of probability. In fact, quantum physics describes the behavior of a particle by something called the Schrödinger wave equation (after Erwin Schrödinger, who conceived it in 1926). Just as Newton’s second law describes the behavior of particles, so Schrödinger’s wave equation specifies the continuous and smooth evolution of the wave function at all times when it is not being observed. The wave function encodes the entire range of possibilities for that particle’s behavior -- where the particle is, when. It contains all the information needed to compute the probabilities of finding the particle in any particular place, any time. These many possibilities are called superpositions. The element of chance is key, for rather than specifying the location, or the energy, or any other trait of a particle, the equation modestly settles for describing the probability that those traits will have particular values. (In precise terms, the square of the amplitude of the wave function at any given position gives the probability that the particle will be found in some region near that position.) In this sense the Schrödinger wave can be considered a probability wave.

Oh my god. What does all of that mean? Let me simplify it for you. It’s math. Neither the math invented by Newton to describe the observed behavior of particles nor the math invented by Huygens to describe the observed behavior of waves suffices when it comes to the observed “two-slit” behavior of photons, electrons, and calcium ions. So a very smart person named Schrödinger invented a new set of math equations that do describe those observed behaviors. Schrödinger’s math deals with probabilities, not with concrete factors, and, as such, when one extrapolates his equations to make accurate predictions about the behavior of the phenomena represented by his “wave functions,” some very unusual things happen.

When a quantum particle or collection of particles is left alone to go its merry way unobserved, its properties evolve in time and space according to the deterministic wave equation. At this point (that is, before the electron or photon is observed), the quantum particle has no definite location. It exists instead as a fog of probabilities: there are certain odds (pretty good ones) that, in the appropriate experiment, it will be in the bright bands on the plate, other odds (lower) that it will land in the dark bands, and other odds (lower still, but nonzero) that it will be in the Starbucks across the street.

Do you see what Schwartz did there? He equated the inability of the math to determine a precise location for the quantum particle with the supposed reality that the quantum particle has no precise location. Do you see what else he did? He equated the fact that the Schrödinger wave equation has solutions that are nonzero for the quantum particle being in the Starbucks across the street with the supposed reality that the quantum particle could, in fact, be in the Starbucks across the street. The math tells us that both are possible -- in a “nonzero” kind of way -- so I guess we’d better consider them.
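To see just how much weight that “nonzero” is carrying, it helps to put numbers on it. Here’s a toy illustration of the Born rule Schwartz paraphrases -- probability density as the square of the wave function’s amplitude -- using an assumed Gaussian wave packet in natural units (my choice of model, not the book’s):

```python
import math

def born_probability(x, width=1.0):
    """Probability density |psi(x)|^2 for a Gaussian wave packet centered
    at 0 (illustrative toy model, natural units).

    psi(x) is proportional to exp(-x^2 / (2 * width^2)); the Born rule
    squares the amplitude, and the prefactor normalizes the density to 1.
    """
    norm = 1.0 / (width * math.sqrt(math.pi))
    return norm * math.exp(-(x / width) ** 2)

# Density at the packet's center vs. "across the street" (10 widths away):
near = born_probability(0.0)
far = born_probability(10.0)
print(near)  # roughly 0.56
print(far)   # roughly 2e-44: nonzero, but never going to be observed
```

Nonzero, yes -- but smaller than one part in ten-to-the-forty-third. The math obliges us to write the probability down; it does not oblige us to take the Starbucks scenario seriously.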

But as soon as an observer performs a measurement -- detecting an electron landing on a plate, say -- the wave function seems to undergo an abrupt change: the location of the particle it describes is now almost definite. The particle is no longer the old amalgam of probabilities spread over a large region. Instead, if the observer sees the electron in this tiny region, then only that part of the wave function representing the small region where observation has found it survives. Every other probability for the electron’s position has vanished. Before the observation, the system had a range of possibilities; afterward, it has a single actuality. This is the infamous collapse of the wave function.

Schwartz will go on to describe the multiple theories that various physicists have offered to explain the “abrupt change” or “collapse” of the wave functions that describe the observed behaviors of photons, electrons, and calcium ions. But Schwartz settles on one in service of the “mind changes the brain” theory he is developing.

[Niels] Bohr insisted that quantum theory is about our knowledge of a system and about predictions based on that knowledge; it is not about reality “out there.” That is, it does not address what had, since before Aristotle, been the primary subject of physicists’ curiosity -- namely, the “real” world. The physicists [who agreed with Bohr] threw in their lot with this view, agreeing that the quantum state represents our knowledge of a physical system.

Before the act of observation, it is impossible to know which of the many probabilities inherent in the Schrödinger wave function will become actualized. Who, or what, chooses which of the probabilities to make real? Who, or what, chooses how the wave function “collapses?” Is the choice made by nature, or by the observer? According to [Bohr’s interpretation], it is the observer who both decides which aspect of nature is to be probed and reads the answer nature gives. The mind of the observer helps choose which of an uncountable number of possible realities come into being in the form of observations. A specific question (Is the electron here or there?) has been asked, and an observation has been performed (Aha! The electron is there!), corralling an unruly wave of probability into a well-behaved quantum of certainty. Bohr was silent on how observation performs this magic. It seems, though, as if registering the observation in the mind of the observer somehow turns the trick: the mental event collapses the wave function. Bohr, squirming under the implications of his own work, resisted the idea that an observer, through observation, is actually influencing the course of physical events outside his body. Others had no such qualms. As the late physicist Heinz Pagels wrote in his wonderful 1982 book The Cosmic Code, “There is no meaning to the objective existence of an electron at some point in space … independent of any actual observation. The electron seems to spring into existence as a real object only when we observe it!”

“Seems” being the operative word in that last sentence. But like Heinz Pagels, Schwartz has no qualms accepting that which made Niels Bohr squirm.

D. Quantum Brains

This maxim that “reality only exists when it is observed,” if true, remains one of the deepest and most misunderstood puzzles of quantum mechanics, but it is finally the peg that Schwartz will hang his hat on. Because, according to Schwartz’s theory, which he spends the rest of his book developing, the observer in the “quantum brain,” the entity that forces the wave function of the brain’s electrochemical activity to collapse into a concrete reality is, you guessed it, the mind.

And, although you more than likely have thought this long before getting to this paragraph, this idea that it is the “observing mind” that is creating biochemical reality by collapsing Schrödinger wave functions in the “acting brain,” is where Schwartz really starts driving us off the rails.

Applying quantum theory to the brain means recognizing that the behaviors of atoms and subatomic particles that constitute the brain, in particular the behavior of ions whose movements create electrical signals along axons and neurotransmitters that are released into synapses, are all described by Schrödinger wave equations. Thanks to superpositions of possibilities, calcium ions might or might not diffuse to sites that trigger the emptying of synaptic vesicles, and thus a drop of neurotransmitter might or might not be released. The result is a whole slew of quantum superpositions of possible brain events.

Okay, I’m with you so far, but I notice you’re no longer talking about the nonzero solutions to the Schrödinger wave equations that would put the calcium ions responsible for neurotransmitter release in the Starbucks across the street.

When such superpositions describe whether a radioactive atom has disintegrated [a reference to an earlier description of the thought experiment involving Schrödinger’s famous cat], we say that those superpositions of possibilities collapse into a single actuality at the moment we observe the state of that previously ambiguous atom.

Technically, the possibilities collapse into a single actuality when they are observed, not when “we” observe them. The electrons hit the photographic plate whether we’re in the room or not, just as Schrödinger’s cat is alive or dead in the box before we open it. Plenty of famous physicists accept this interpretation, but Schwartz carefully avoids it in order to build his case. He continues ...

The resulting increment in the observer’s knowledge of the quantum system (the newly acquired knowledge that the atom has decayed or not) entails a collapse of the wave functions describing his brain.

This is a reference not just to the radioactive atom threatening Schrödinger’s cat, but to an interpretation of quantum mechanics that classifies it as an information system within a consciousness, not as a mechanical system within a classical universe. Quantum mechanics, in this view, is not a science of particle/wave things interacting with their environment, but the extent and ability of brains to understand and interpret the world around them. That’s why Schwartz talks about the “newly acquired knowledge” as a part of the quantum system. It’s an unjustified leap -- in my opinion -- but let’s allow Schwartz to continue …

This point is key: once the brains of observers are included in the quantum system, the wave function describing the state of the brain of any observer collapses to the form corresponding to his new knowledge. The quantum state of the brain must collapse when an observer experiences the outcome of a measurement. The collapse occurs in conjunction with the conscious act of experiencing the outcome of the observation. And it occurs in the brain of the observer -- the observer who has learned something about the system.

Do you see what he did? We’ve now moved away from the wave functions of calcium ions -- probabilistic math equations describing all the possible energies and locations of “physical” objects -- to wave functions of … what? Knowledge “moments” of brains? Probabilistic math equations describing all the possible energies and locations of all the elementary particles in a brain that correspond to a particular and distinct “set” of knowledge? Do such equations even exist? Is there a physics professor who has written such an equation down? How many lecture hall blackboards did it take? And how many times did the wave function of his brain change in the time it took him to write it? Let’s give Schwartz a chance to explain …

What do we mean by collapsing the quantum state of the brain? Like an atom threatening Schrödinger’s cat, the entire brain of an observer can be described by a quantum state that represents all the various possibilities of all of its material constituents.

“Can be described” as in “can be expressed mathematically,” or “can be described” as in “can be conjectured”? Are we dealing with an actual experiment or a thought experiment here? Back to Schwartz …

That brain state evolves deterministically until a conscious observation occurs. Just before an observation, both the observed quantum system (let’s stick with the radioactive atom) and the brain that observes it exist as a profusion of possible states. Think of each possible state as a branch on a tree. Each branch corresponds to some possible state of knowledge, or course of action. But when the observation registers in the mind of the observer, the branches are brutally ...


… pruned: only the branches compatible with the observer’s experience remain. If, say, the observation is that the sun is shining, then the associated physical event is the updating of the brain’s representation of the weather. Branches corresponding to “the sky is overcast” are chopped off. An increase in knowledge is accompanied by an associated reduction of the quantum state of the brain. And with that, the quantum brain changes, too.

Okay. That’s as far as I’m going to go with this. Schwartz has so hopelessly confused his argument that I don’t think we can conceivably be talking about reality anymore.

This business about branches being pruned is a reference to the “many worlds” theory -- one of the explanations for the “collapse” of the wave function that some physicists have offered, and which Schwartz himself seemed to dismiss earlier in his text. It says that all possible solutions to the wave function in fact do exist: the one observed here in this universe, and all the others … um, somewhere else. Enough said there.

But more practically, if Schwartz is going to claim that the brain exists in a quantum state that changes (or collapses?) with each new piece of knowledge it receives, he frankly has a lot more explaining to do.

First, what counts as a “piece” (a quantum?) of knowledge? The example Schwartz continues to use is the decay of the radioactive atom threatening Schrödinger’s cat. He treats that in the binary sense. Either the atom has decayed or it hasn’t, and he therefore treats the resulting quantum brain states as equally binary. It looks like this if it knows the atom has decayed, and it looks like that if it knows it hasn’t. But I’m not sure I buy that we’re only talking about two brain states (or two solutions to the brain’s Schrödinger wave function) here. To borrow a phrase, there “seems” to be a lot going on in our brains moment to moment that would probably contaminate any pure sample of quantum brain states that we’re trying to measure. Forget all the random bits of conscious trivia that are constantly flowing along in our awareness (sorry, can’t resist: Did I turn off the iron? Did I lock the house?). What about all the subconscious monitoring and control our brains are responsible for? Heart activity, respiration, digestion, tactile awareness, spatial orientation -- our brains are always working on these processes whether we are aware of them or not. What if the duodenum needs to contract, or the pancreas needs to secrete, just as the bit of “important” quantum information about decaying atoms comes into our awareness? Can we really be sure that we’re collapsing the right wave function at the right time?

Second, why stop at the brain? As I read through Schwartz’s text I was constantly confused as to which wave function he was talking about as he teased his way through the various pieces of his theory. Are we talking about the wave function associated with a single electron on a single calcium ion, a single calcium ion in a single synapse, all the calcium ions in a single synapse, all the calcium ions in all the synapses, all the molecules and floating ions/atoms that constitute a single neuron, all the neurons associated with the quantum knowledge gain in question, all the neurons in the entire brain, and/or the whole brain itself? And if we accept that wave functions can usefully describe macroscopic objects like brains, then why stop there? What about the wave functions associated with the head, the body, the room, the building, the city, the earth, the solar system, the galaxy, and the universe? Why aren’t we talking about the addition of quantum information to any of those collapsing wave functions?

And third, who, exactly, is the observer?

The fact that the collapse of the wave function so elegantly allows an active role for consciousness -- which is required for an intuitively meaningful understanding of the effects of effort on brain function -- is itself strong support for using a collapse-based interpretation in any scientific analysis of mental influences on brain action.

Evidently, “we” are. We’re back to the unproved claim that consciousness exists as something separate from the actions of the brain, invoked to provide a mechanism that explains brain action -- this time on a quantum rather than a spiritual level.

E. Free Will Vetoes Determined Brain Activity

Let me try to conclude with this. After presenting his hypothesis that the “observing mind” is the observer in the quantum phenomena that collapse into brain activity, Schwartz spends most of the rest of his book making the case that this “quantum observer” indeed represents our “efficacious will,” that is, the non-physical ability to choose among a multitude of quantum states and direct our brains toward certain actions and not others. Quoting Benjamin Libet, he of the famous experiments showing that the brain activity associated with conscious action occurs before consciousness becomes aware of it, Schwartz says:

But in later years [Libet] embraced the notion that free will serves as the gatekeeper for thoughts bubbling up from the brain and did not duck the moral implications of that. “Our experimental work in voluntary action led to inferences about responsibility and free will,” he explained in late 2000. “Since the volitional process is initiated in the brain unconsciously, one cannot be held to feel guilty or sinful for simply having an urge or wish to do something asocial.

Jesus in Matthew 5:28 might disagree with that.

But conscious control over the possible act is available, making people responsible for their actions. The unconscious initiation of a voluntary act provides direct evidence for the brain’s role in unconscious mental processes. I, as an experimental scientist, am led to suggest that true free will is a [more accurate scientific description] than determinism.”

Forgive me, I couldn’t resist the biblical reference, because, frankly, that reads more like theology than science to me. You will be tempted by forces you can’t control, but you have the strength to resist them and choose the righteous path.

But where does this volitional activity come from? What does science tell us about the source of this ability to veto the asocial thoughts that come “bubbling up from the brain”?

Study after study has indeed found a primary role for the prefrontal cortex in freely performed volitional activity. “That aspect of free will which is concerned with the voluntary selection of one action rather than another critically depends upon the normal functioning of the dorsolateral prefrontal cortex and associated brain regions,” Sean Spence and Chris Frith concluded in “The Volitional Brain.” Damage to this region, which lies just behind the forehead and temples and is the most evolutionarily advanced brain area, seems to diminish one’s ability to initiate spontaneous activity and to remain focused on one task rather than be distracted by something else. These symptoms are what one would predict in someone unable to choose a particular course of action. Large lesions of this region turn people into virtual automatons whose actions are reflexive responses to environmental cues: such patients typically don spectacles simply because they are laid before them, or eat food presented to them, mindlessly and automatically. (This is what those who have had prefrontal lobotomy do.) And studies in the 1990s found that when subjects are told they are free to make a particular movement at the time of their own choosing -- in an experimental protocol much like Libet’s -- the decision to act is accompanied by activity in the dorsolateral prefrontal cortex. Without inflating the philosophical implications of this and similar findings, it seems safe to conclude that the prefrontal cortex plays a central role in the seemingly free selection of behaviors, choosing from a number of possible actions by inhibiting all but one and focusing attention on the chosen one. It makes sense, then, that when this region is damaged patients become unable to stifle inappropriate responses to their environment: a slew of possible responses bubbles up, as it does in all of us, but brain damage robs patients of the cerebral equipment required to choose the appropriate one.

So, evidently, a brain with a damaged dorsolateral prefrontal cortex -- or without one at all -- does not have the “cerebral equipment” needed to exercise its “efficacious will” over the unbidden actions that deterministically occur in the other portions of the brain. Here I seem to have an answer to the question I asked earlier about the “size” of the necessary wave function. Without a dorsolateral prefrontal cortex, there is evidently no observer to collapse the brain’s wave function -- and yet, quantum phenomena undoubtedly occur as calcium ions make their deterministic passages in and out of neurons elsewhere in the brain. Which observer is collapsing those?

+ + +

This post first appeared on the blog of Eric Lanke, an association executive and author. You can follow him on Twitter @ericlanke or contact him at

Monday, June 5, 2017

Go Visit Your Members

I've written about this before, but I recently had another experience that again underscored the value of going out and visiting your members where they live and work.

I was sitting next to this member at dinner at an earlier function we both attended. "Why don't you come down and visit us sometime?" he said. "I'd love to show you our operation."

It was all the invitation I needed. A few months later I had to make a trip to a nearby city, and I added an extra day to my itinerary to pop over and spend some time with him.

When we were together he expressed how happy he was that I had taken the time to come see him. "It was no trouble at all," I told him truthfully. Every time I visit a member like this I learn something important about them, their business, or our industry -- and usually all three.

And it is these learnings that make the trips worthwhile. These are not just social calls -- although they are social calls, too, in an important and increasingly necessary way. I get real, on-the-ground intelligence from these trips that helps me do my job better.

If you are an association CEO and you don't regularly visit your members, you really need to start asking yourself why.

+ + +



Monday, May 29, 2017

Get Specific About Your Workforce Needs

Like a lot of manufacturing-based trade associations, our organization is working hard to tackle the workforce issue our members consistently cite as the number one challenge facing their companies. Given the multi-faceted challenge this represents, we have been deliberate in our strategy conversations about which piece of the problem we will try to fix.

Our members have a wide spectrum of workforce needs, and, given the availability of other workforce development programs in the market, we need to acknowledge that our efforts can focus effectively on only one portion of that spectrum. In other words, if you need a welder, we can point you to the nearest tech school with a welding program, but we're not going to spend our time and resources on developing a welding program specific to our industry.

Our attention is more appropriately placed on the development of skill sets that would otherwise be ignored by the marketplace, those that are unique and specific to our industry.

Truth be told, it took us a fair amount of time to reach that conclusion. For too long, our strategic discussions were hampered by a lack of clarity and consensus around this core issue.

Everyone was talking about workforce development, but some were talking about welders, some were talking about maintenance techs, and some were talking about degreed engineers. While everyone was talking about fruit, it wasn't always apparent that some were talking about apples and others were talking about oranges.

We've seen that getting specific is key to having any chance at success. Picking a category and describing the desired skill sets is absolutely crucial. Only then can you apply your resources in a way that maximizes your chances of delivering what your industry decides it needs.

+ + +



Saturday, May 27, 2017

The Rise and Fall of the American Whig Party by Michael F. Holt

This is a massive work of serious history. More than a thousand pages, based on a detailed and studied analysis of, among other things, state-by-state voting patterns in every election, Congressional and Presidential, between 1828 and 1856. It is a book much better suited for consumption by academics and their students than by enthusiastic amateurs like me.

So I can’t honestly say that I enjoyed reading it. But I can honestly say that I’m glad I read it. It taught or reinforced a few things.

The More Things Change…

Here are three paragraphs from the first ten pages.

But there was more to Jackson’s appeal than martial glory. Though himself a wealthy slaveholding member of Tennessee’s plantation gentry, Jackson was a perfect standard bearer for angry voters bent on venting resentments. Westerners and Southerners embraced the Tennessean as a foe of the haughty East. His ownership of slaves and his renown as an Indian fighter only increased his appeal to such men. More important, Jackson was clearly a political newcomer compared to Adams, Clay, and Crawford. All who wanted to throw the establishment out of Washington, or at least out of the White House, could cleave to him.

Andrew Jackson: the first Donald Trump?

As astute Jacksonian managers recognized much more quickly than the Adams party, dealing with a mass electorate required different strategies than could be used with a relatively small one. Voters had to be mobilized directly; alliances of local elites loyal to one political leader or another could no longer win. Issues now had to be framed in terms that were understandable and compelling to relatively less educated and less interested voters. At times this necessity meant presenting specific policies in broad ideological or symbolic terms; at times it meant developing campaign issues that resonated with voters’ emotions, values, and prejudices but that had no specific programmatic focus.

Andrew Jackson: the first Barry Goldwater?

Surprisingly optimistic about their ability to topple the new regime, National Republicans initially decided to wait quietly for the Jacksonian coalition to disintegrate. Refusing to acknowledge the 1828 election as a repudiation of economic nationalism and of leadership by the traditional political elite they represented, they regarded the outcomes instead simply as a triumph by the magnetic Jackson over the aloof and colorless Adams. Hoopla, demagoguery, and Jackson’s refusal to take a stand on matters of national policy, they thought, had temporarily dazzled voters, while sheer opportunism had engaged politicians with divergent policy goals in the Jackson cause. Once Jackson clarified his position on matters such as the tariff and internal improvements, they believed, people would regain their senses and desert the Jackson movement as quickly as they had joined it.

Henry Clay, leader of what was then called the National Republicans: the first David Cameron on the eve of Brexit?

Time and again, as I read history, I discover that the more things change, the more they stay the same. History, especially political history, is like a great pendulum, swinging between populism and pluralism, repeating predictable cycles of rhetoric and action with every movement back and forth.

Whigs Took a Stand Against Executive Power

The early focus on Andrew Jackson is an appropriate place for Holt to start his book, because the American Whig Party came into being as a direct result of his election to the presidency.

At the end of December, [Henry] Clay defined the opposition’s platform in a ringing three-day speech. “We are in the midst of a revolution, hitherto bloodless, but rapidly tending towards a total change of the pure republican character of the Government, and to the concentration of power in the hands of one man,” he warned the Senate. He demanded passage of two resolutions. One rejected [Treasury] Secretary [Roger B.] Taney’s report to the Senate justifying removal [of government deposits from the Bank of the United States]. The other denounced Jackson for trampling on the laws and the Constitution. With these resolutions, the Whig party at its birth focused on its everlasting basic principle: opposition to executive usurpation in general and to Andrew Jackson in particular.

But it wasn’t just Jackson that they opposed. More fundamentally, what they opposed was unchecked executive power. As the most adept wielder of that power, Jackson was the catalyst for their creation and organization, but their opposition to executive power would keep them together for years after Jackson vacated the White House.

Indeed, for twenty or so years in the middle of the nineteenth century, the Whigs and the Democrats were the two major political parties. They each stood for certain principles, but, at least in the case of the Whigs, their primary principle was simply one of opposition. They opposed all kinds of Democrats, Jacksonian and otherwise, and it was only in that opposition that they seemed to come together as a political force. Finding something consistent that they could all be in support of -- especially as the nation began to align itself into Northern and Southern factions -- proved much more difficult, and would eventually bring about their own demise as smaller parties, each single-mindedly focused on support for one particular position, splintered off the shaky Whig tree.

The Issues Are Different; The Angry Rhetoric Is the Same

After more than a thousand pages of deep and scholarly analysis, I know that I still don’t understand the politics that shaped the history that Holt is describing in his text. Some of the issues -- especially slavery -- may just be beyond my ability to fully understand from a nineteenth century perspective.

But one thing is interesting. Whatever the issues were that the Whigs and the Democrats of the 1840s and 1850s fought over, the angry rhetoric that they used to demonize each other is eerily familiar.

These contrasting partisan perspectives on governmental activism also engendered conflicts over social legislation. To a far greater degree than Democrats, Whigs backed state intervention to regulate social behavior: temperance legislation, Sunday blue laws, and the creation of state-run public school systems. Democrats denounced such legislation as intolerable infringements on individual freedom, and although they did not oppose education, they feared that state-supported schools would compel increased state taxation and threaten local supervision of schools.

There’s a twist. Democrats standing up for individual freedom and local supervision of schools, and Whigs (in some ways, precursors to our modern-day Republicans), in favor of state-run public school systems. It’s almost like this Connecticut Yankee has gone back in time and found himself in bizarro land. But here’s where things start sounding familiar again.

Increasingly, Democrats portrayed Whigs as bigoted and self-righteous religious fanatics intent on imposing their ethical values on others. Whigs retorted that Democrats were immoral deadbeats or dangerous radicals bent on destroying the very fabric of society -- property, morality, education, and the rule of law.

Now there’s a political paradigm I have some familiarity with -- although the strange juxtaposition of opposite issues and identical rhetoric makes me speculate on how true the accusations of any age can be. If you can swap out the cake of political principles and keep the same frosting of demonizing rhetoric for your opponents, you have to suspect that the two things are really not all that connected in the first place.

No, seriously. Look how the Whigs reacted when they lost the presidential election of 1844 to Democrat James Polk.

Those Whigs who remained convinced of the superiority of their candidate and their issues could only attribute the Democratic surge to “the utter mendacity frauds & villainies of Locofocoism [a term referencing the radical wing of the Democratic party of the time].” The Democrats, Whigs repeatedly inveighed, relied on “appeals to every bad passion, the hostile instinct of the poor against the rich, lies and calumnies etc etc” to “bamboozle” the masses. Worse still, Whigs charged, Democrats illegally naturalized immigrants and marched them to the polls, openly bought votes or paid the taxes of those who could not meet taxpaying requirements to vote, employed double and triple voting, and stuffed ballot boxes to steal the election from Whigs in Louisiana, Georgia, New York, Pennsylvania, and elsewhere. “You have lost this state by the most unprecedented frauds and rascality,” a New Orleans Whig consoled [losing Whig presidential candidate Henry] Clay. “Parishes giving more votes or as many as there are white inhabitants of all sexes & ages being in them. Steamboats chartered to convey voters in the same day at different Polls, and every other species of fraud that could be imagined.”

Millions of illegal votes, indeed. If only the Whigs had access to Twitter. Have these things ever been true?

History Is Made by Political Compromises Only Understood in Their Time

This is another one of those truisms that becomes more apparent to me every time I read history. Not apparent in the sense that I can effectively explain or remember the political nuances and compromises of another age, but apparent in the sense that I encounter examples of this dynamic over and over again.

For example, a lot of the political and legislative victories and defeats chronicled in Holt’s book derive from a vibrant and acrimonious debate over banks and tariffs.

To Whigs, banks and tariffs were integrally linked as the keys to prosperity, for the oil that lubricated the engine of economic growth was credit. Individuals’ ability to borrow beyond their existing resources and to use those loans to transport products, start businesses, pay workers’ weekly wages, buy land to farm, and earn the profits from which to repay loans generated expansion and opened opportunity for upward mobility. Banks and businesses provided the necessary credit, and since the specie resources of the United States were limited, it came primarily in the form of paper bank notes, bills of exchange secured by goods in transit, and promissory notes.

Got that? Whigs want people to be able to borrow money from banks so they can launch business ventures they couldn’t otherwise afford. And because the nation’s supply of credit at that time was ultimately limited by the precious metals it held in reserve, Whigs supported protective tariffs.

The credibility of those paper devices ultimately depended on assurance that they could, if necessary, be redeemed in specie. Thus the supply of credit and interest rates for it ultimately depended on the nation’s specie reserves. That is why Whigs regarded the tariff as so crucial. To them the biggest threat to the nation’s specie reserves and thus to the availability of credit was an unfavorable balance of foreign trade. If the value of imports exceeded the value of exports, Whigs believed, specie would be drained abroad, and credit, the economy’s lubricant, would dry up. Hence protective tariffs did more than shelter American manufacturers, mine operators, and workers from foreign competition. By limiting imports, they also slowed the exodus of specie and preserved the credit supply that freed men to pursue their economic ambitions beyond the limits of their restricted individual financial capacities.

But, of course, not everyone agreed with the Whigs on this point.

Most Democrats, of course, had always castigated this program as baneful and unnecessary. They viewed credit from its dark flip side, as debt, as a trap rather than a release. They denounced its public form -- bonds -- as a burden on taxpayers and its private forms as threats to individual autonomy, as insidious inducements to self-enslavement. They attacked banks and other corporations as privileged monsters that violated the principle of equal rights before the law. They vilified paper money as a cheat and a fraud. They dismissed protective tariffs as pandering to manufacturers, who would inevitably raise prices to unjust and unjustifiable levels if shielded from foreign competition. What is more, they denied that active government intervention into the private economic sector was necessary to achieve growth or enhance public welfare. “There is, perhaps, no more dangerous heresy taught in our land than that the prosperity of the country is to be created by its legislation,” intoned Pennsylvania’s Democratic Governor William Bigler in his inaugural message of 1852. “The people should rely on their own individual efforts, rather than the mere measures of government for success.”

What resulted, apart from the cognitive dissonance in my 21st century brain trying to wrap itself around 19th century Democrats taking a stand against government involvement in the economy, was years of back and forth, from one political administration to the next, first in favor of protective tariffs, then against them. As that dance occurred, countless other pieces of legislation got dressed up and spun around on the dance floor, the political parties cutting deals for and against things not because they were for or against them, but because their position would help them get closer to the position they sought on the tariff.

That was evident enough, even if I had trouble following all the ins and outs of every discussion chronicled. But if nineteenth century vibrant and acrimonious debate is what you’re looking for, nothing compares to slavery and how it would be understood and practiced as new states continued to be admitted to the Union. Presidential candidates were chosen or rejected, and presidential elections were won or lost on how the divisive and tangled issue was proposed to be resolved. And many, although fully recognizing its polarizing power, did not even see the disagreement over slavery’s expansion as one of direct substance.

Most regarded the whole sectional dispute over slavery extension as far more symbolic than substantive. To them, protecting southern equality and “Southern honor” by escaping the stigma of enslavement to northern dictation that congressional prohibition of slavery entailed, rather than actually extending the institution of slavery westward, was the heart of the territorial issue. Even [Whig Georgia Senator John M.] Berrien, who argued that slavery could flourish in California, saw the territorial dispute primarily in symbolic terms. He admitted to his kinsman [Charles J.] Jenkins that Northerners in Congress had no intention of abolishing slavery and that slavery could prosper into the unforeseeable future even if its extension were prohibited. Nonetheless, he protested, if the Northern majority could exclude slavery from the Cession, they would gain complete control of the national government. “Slavery will then exist in a double aspect. The African, and his owner, will both be slaves. The former, will as now, be the slave of his owner -- but that owner, in all matters within the sphere of federal jurisdiction, will be the doomed thrall of those, with whom he associated on the basis of equal rights.” For Berrien, other Southern Whigs, and many Southern Democrats, in sum, what was at stake in the territorial question was neither the end nor the weakening of African-American slavery. Rather, it was that dictatorial Northerners intended to treat white Southerners themselves as slaves.

I find this one of the most fascinating aspects of American history -- essential to any accurate understanding of these times and the civil war that followed. That Americans of the same history and lineage could have such divergent views on the same subject. Putting black men in chains was slavery, but so evidently, was trying to prevent white men from doing so. Yikes.

Because, frankly, I don’t see why things had to be interpreted that way. Take a different issue. Tobacco smoking, let’s say. If majorities in the majority of States in the Union wanted to ban the practice of smoking tobacco in their States, and in any new States that entered the Union, would those minorities in those same States and the majorities in other States that wanted tobacco smoking to be legal everywhere and in newly-admitted States really describe themselves as the “doomed thralls” of those with whom they associated on the basis of equal rights?

And if something morally ambivalent like tobacco smoking doesn’t make my point, how about something with obvious moral implications? Like child molestation? Those who wish to outlaw child molestation in newly-admitted States are enslaving those of us who support child molestation and will “gain complete control of the national government.” Why is the idiocy of that line of thinking obvious to us today, but far from obvious to those wrangling over the morally loaded issue of slavery in the 19th century?

Maybe that’s why, whether it was seen as an issue of symbol or substance, the disputes over slavery were the thing that eventually obliterated the existing party lines, transforming those political institutions, temporarily at least, from houses divided by political principle to houses divided by geography. Whigs and Democrats in the South came together in the existing Democratic Party, and Whigs and Democrats in the North similarly, if more painfully, coalesced in the new Republican Party.

States Mattered a Lot More Before the Civil War

Perhaps that’s an obvious statement, but it really hit home while reading the majority of Holt’s 1,000+ pages.

… in April [1851], Whigs stumbled across a new issue that united their party and redivided the Democrats. It bore no relation whatsoever to [Whig presidential candidate Millard] Fillmore, the Compromise [of 1850], or slavery. It illustrated a fundamental fact about the federal structure of American government in the nineteenth century: state policies often mattered more to politicians and the public than the actions of Congress or presidents. The issue that saved the New York Whig party from almost certain disaster was enlargement of the state’s Erie Canal system.

For this reason, Holt spends a lot of time describing state-level issues and politics and, despite ample warning in his preface that this deep-dive analysis is one of his key objectives in this work, I have to admit that this enthusiastic amateur frequently got lost in all the details. Here’s a randomly selected sample of what I’m talking about.

As even Berrien recognized, however, Stephens and Toombs insisted on nominating military men in 1847 and 1848 primarily because they feared that the party could never carry the state legislature or governorship again without attracting Democratic support. Even in the congressional elections of 1846, when Whigs had joyously pilloried the record of Polk and the Democratic Congress, they had won less than 47 percent of the statewide vote. And the chief source of their weakness was clear to all -- the seemingly unshakable grip Democrats had on the growing nonslaveholder vote in the Cherokee District of northwestern Georgia. Running only military heroes appeared the easiest way to cut into that vote, and hence Whigs from northwestern Georgia clamored more vociferously than anyone else for Clinch’s nomination in 1847. After his narrow defeat in October 1847, those same Whigs insisted that Taylor be the Whig nominee. Taylor was a more famous military hero than Clinch. His image as a No Party or People’s candidate who repeatedly spurned a regular Whig nomination made him potentially far more attractive to Democrats than Clinch, who had served in Congress as a Whig. “Very many Whigs from the counties North & West say that we are down unless we hoist the Taylor flag,” wrote one Georgia Whig. “Nothing can … save us but Genl. Taylor -- nothing can destroy the Democracy but Genl. Taylor.” Gaining control of a party that could not control the Georgia state government had little appeal for Stephens and Toombs. Thus they insisted on, and energetically worked for, Taylor’s nomination in December 1847, not only to isolate Berrien but also to win crucial Democratic votes.

And that’s just the view from the Cherokee District of northwestern Georgia. After a while I found myself looking forward to the chapters where he came back up to the national stage, or at least talked about what was going on in my home state during all these years.

There Was Initially More Than One Republican Party

There’s precious little about Wisconsin in the book, but I was primed to look for one linchpin that was clearly going to force Holt to spend some time on the Badger State -- the birth of the Republican Party in Ripon in 1854. But that, at least according to the way Holt tells things, is not necessarily how it happened.

The new organizations emerging in 1854 and 1855 co-opted Whigs’ mission to defend republicanism by portraying themselves as better able to do so. They insisted that powerful new threats to America’s experiment in republican self-government had emerged that made executive tyranny and the other antirepublican bogeys against which Whigs had campaigned seem tame by comparison. They explicitly and repeatedly invoked the key code phrases of the familiar republican idiom -- power, tyranny, corruption, conspiracy, and enslavement versus liberty, freedom, self-government, majority rule, and republicanism itself. And they summoned voters to join a crusade in defense of republican principles and institutions that, they argued, far exceeded in importance stale partisan quarrels fought between now irrelevant parties. They initially portrayed themselves, in short, not as officeseeking political parties, but as patriotic Minute Men springing to freedom’s defense. Anti-Nebraska coalitions and the Know Nothings, however, saw different dangers to republicanism that approached from different directions. In effect, they wanted to wage the battle to rescue public liberty on different fronts.

There were many “anti-Nebraska coalitions,” the name being a reference to their opposition to the Kansas-Nebraska Act, passed by Congress in May 1854, which repealed the Missouri Compromise of 1820 (and its prohibition of slavery north of latitude 36°30´) and allowed people in the territories of Kansas and Nebraska to decide for themselves whether or not to allow slavery within their borders. Which one, if any, gave direct rise to the Republican Party that nominated Abraham Lincoln for president in 1860 is never made clear in Holt’s text. It rather leaves one with the impression that, like much of actual history, the process was much more organic than linear.

But however the Republican Party came about, the organic progression that resulted in its creation, described earlier in this post, would be a much more interesting subject of deeper inquiry. Not just the death of the Whigs and the birth of the Republicans, but the national transition from two parties representing ideological differences to two parties representing geographic ones.

Speculation about a political realignment in which parties that exclusively represented the North or South displaced the nationwide, bisectional competition between Whigs and Democrats began in early February [1854] when events seemed to presage such a reshuffling. Caucuses of pro-Nebraska southern Whig and Democratic congressmen in Washington portended a bipartisan fusion in Dixie. Simultaneously, in community after community across the North, meetings that combined Whigs, Democrats, Free Soilers, and the politically unaffiliated gathered to protest the Nebraska bill as an outrageous southern aggression against the rights, interests, expectations, and moral convictions of Northerners. Along with the acrimonious debate in Congress and angry recriminations traded by northern and southern editors, these cross-party sectional gatherings in 1854 seemed harbingers of intrasectional unity and permanent intersectional conflict.

Let’s add it to that imaginary list of future PhD dissertations I plan to write.

+ + +

This post first appeared on the blog of Eric Lanke, an association executive and author. You can follow him on Twitter @ericlanke or contact him at