The agony of Kanazawa

Satoshi Kanazawa

In my peripheral web vision, I’ve been watching the unfolding drama of Satoshi Kanazawa, an evolutionary psychologist at the London School of Economics of whom you’ve probably never heard until now. He writes a daredevil blog, on which he practically asks for trouble. And recently he got a bit more trouble than even he expected. Now I find myself contemplating deeper questions, as I will explain in a moment.

1) Background:

I first mentioned Kanazawa here, more than a year ago, by way of … endorsing him! Or rather, endorsing not him but his philosophy as I understood it, which claims to distinguish between

  • ought and
  • is.

Kanazawa, if you ask him, will say that he forges ahead valiantly in search of the is (truth) even when it conflicts with the ought (what is good).

I like that. In this context, I even compared that attitude to Friedrich Nietzsche’s, as expressed in his letter to his sister. I might also have compared it (the attitude, not the man) to that other gadfly, Socrates. I might even have drawn a line from Kanazawa all the way back to the first recorded conversation (Callicles v Socrates) about the tension between ought and is.

As it happens, I find myself sympathizing with specific aspects of these men — Kanazawa, Nietzsche, Callicles, etc. Each is part thinker but also part court jester, boat-rocker, pot-stirrer — whatever metaphor you want to choose. They live for the piquant headline. They run toward controversy, not away from it. They dare you to bring it on. They’re a tiny bit mad, possibly megalomaniacal, occasionally profound, and — this is the crucial bit — necessary.

2) The controversy:

The last post that Kanazawa wrote on his blog — now deleted, although it lives on in my RSS reader and is being preserved here — was titled:

Why Are Black Women Rated Less Physically Attractive Than Other Women, But Black Men Are Rated Better Looking Than Other Men?

You see the problem already.

In the post, Kanazawa did what he always does: dig up some interesting data, good or not, then grind those data for nuggets of insight, or for hypotheses to be tested. In this post he did “factor analysis”, which, with its pompous ring, seems to be the term that set everybody off.

And then, the tornado. Protests at the LSE, an “investigation” by the LSE, jihad in the blogosphere, and so forth.

Psychology Today, which publishes his blog, deleted the post and apologized.

Everybody agreed that Kanazawa’s “racist nonsense should not be tolerated.”

Case closed. Society saved.

3) The meta-issue

There were some reactions, such as this, that also attempted to answer Kanazawa’s post the traditional scientific way: By reexamining his data, his methodology, and his logic. And it does seem that Kanazawa was:

  • sloppy, and indeed
  • wrong.

Usually, this is how science (which is just Latin for knowledge) progresses:

Research → Falsifiable hypothesis → Replication and scrutiny → Corroboration, refutation or refinement → More research and hypotheses …

Thus, scientists with integrity are as proud of hypotheses that are corroborated as of those that are refuted: Both push humanity, in tiny steps, to higher levels of ignorance. In free societies, people are free to ask any question and form any hypothesis they like, and knowledge advances faster. In unfree societies, we censor the questions and hypotheses people are allowed to formulate, and knowledge stagnates.

Thus a few questions:

  1. Why was Kanazawa’s post deleted (as opposed to updated, refuted etc)?
  2. Where is the evidence that Kanazawa is racist (as opposed to wrong)?
  3. Why has he not posted since then? (It’s been over a month, and he usually posts weekly.)
  4. Has he been shut up? Fired? Lynched? Censored?
  5. Or is he on boycott, hunger strike?

Speak up, Satoshi. If ever there was a time to hear from you, it’s now. A lot is at stake.

Murphy’s Law of radioactivity measurement

If you’re like me, you’ve been following with great concern the latest radioactivity measurements in various places, from Japan to the US West Coast. What an utterly hopeless task:

  • sieverts
  • grays
  • rads
  • rems
  • Roentgens
  • becquerels
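
For what it’s worth, the alphabet soup reduces to a few fixed relationships: becquerels count decays per second (activity, not dose); grays (each worth 100 rads) measure absorbed energy per kilogram of tissue; sieverts (each worth 100 rems) weight that absorbed dose for biological harm. A minimal Python sketch, with function names of my own invention:

```python
# Fixed conversions between the old and SI dose units:
#   1 gray (Gy)    = 100 rad   (absorbed dose)
#   1 sievert (Sv) = 100 rem   (equivalent dose)
#   becquerel (Bq) = decays per second (activity, not a dose at all)

def rad_to_gray(rad: float) -> float:
    return rad / 100.0

def rem_to_sievert(rem: float) -> float:
    return rem / 100.0

def absorbed_to_equivalent(gray: float, radiation_weighting: float = 1.0) -> float:
    """Sv = Gy x w_R, where w_R is 1 for gamma and beta radiation,
    and higher (e.g. 20) for alpha particles."""
    return gray * radiation_weighting

# A headline figure of "500 millirem" is thus 5 millisieverts:
print(rem_to_sievert(500e-3))  # 0.005 Sv
```

So when one report speaks in millirems and another in microsieverts, a factor of 100 (plus the metric prefixes) is all that separates them. Whether that brings comfort is another matter.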

Is this a joke? How are you supposed to understand anything at all from this gibberish?

Well, yes it is a joke, of course, in the same way the entire universe is a joke (and a rather sick one!), as the apocryphal sage Murphy first observed:

Anything that can go wrong, will go wrong.

I once saw a booklet of addenda to Murphy’s Law. This week, I suddenly remembered one that seems germane:

Measurements will always be given in the least useful unit: Thus speed will be given as furlongs per fortnight.

Fortunately we have Mr Crotchety, who sent me this chart which, if correct, puts it all in some perspective.

What Mendel tells us about thinking

Find quietude. Observe whatever is around you. If it seems banal, discover it to be fascinating and mysterious. Ignore distractions, otherwise known as ‘everybody else’. Ask simple questions that puzzle you. Be patient in pondering them.

That is how I imagine Gregor Mendel might answer us today if we asked him: How — I mean how! — did you achieve your stunning intellectual breakthroughs, on which we today base our understanding of biology?

Put differently: Let’s pretend that Gregor Mendel were alive today instead of in the 19th century, and that he were not an Augustinian monk in the former Austrian Empire but a wired and connected, über-productive modern man with an iPhone, a Twitter account, cable television, a job with bosses who email him on the weekend, etc etc.

Would this modern Mendel be able to achieve his own breakthrough in those circumstances?

So far in my rather long-running thread about the greatest thinkers in history, I’ve featured mostly philosophers and historians, with the odd scientist and even one yogi. But it occurred to me that Mendel belongs in that pantheon — not only for his thought but also for his thinking. I think he offers us a timely lifestyle lesson, an insight that fits the Zeitgeist of our hectic age.

So: First, a brief recap of his breakthrough. Then my interpretation of how his lifestyle and thought process made that breakthrough possible (and why ours might make such breakthroughs harder).

1) Mendelian genetics

Mendel was an Augustinian monk in what used to be the Austrian Empire (and what is now the Czech Republic). He had an open and inquisitive mind and, as a monk, wasn’t all that busy, so he had plenty of spare time. He liked to breed bees. Then he began breeding peas. That’s right. Peas.

Peas intrigued him. (Would they intrigue you? What else does not intrigue you?) He found peas interesting because they had flowers that were either white or purple and never anything else. (Would you find that interesting?)

Mendel contemplated what peas could therefore teach him about how parents pass traits on to their offspring, ie, what we would call genetics.

At the time, conventional wisdom held that the traits of parents are somehow mixed in their children. If parents were paint buckets, say, then a yellow dad and a blue mom would make a green baby bucket, and so on. (It’s interesting that nobody spotted how implausible this was: After several generations every bucket, ie every living thing, would have to end up mud-brown. Every creature would look the same. Instead, nature is constantly getting more colorful, more diverse, with more and stranger new species.)

So Mendel, in the late 1850s and early 1860s, started playing with his peas. Pea plants fertilize themselves, so Mendel cut off the stamens of some so that they could no longer do that. Then he used a little brush and fertilized the castrated pea plant with pollen from some other pea plant. He thereby had total control over who was dad and who was mom.

He was now able to cross-breed the peas with purple flowers and the peas with white flowers. So he did. Then he waited.

Surprise #1:

In the very next generation, Mendel could rule out the prevailing “paint-bucket-mixing” theory. No baby pea plants had lighter purple (or striped or dotted) flowers. Instead they all had purple flowers.

So he took those new purple-flowered pea plants and cross-bred them again. And again, he waited.

Surprise #2:

In the next generation, most pea plants again had purple flowers. But some now had white flowers. Wow! How did that happen?

Moreover, the ratio in this generation between purple and white flowers was almost exactly 3:1. Hmm.

Mendel kept doing these experiments, and kept thinking, and then inferred the simple but shocking conclusion:

  1. Each parent had to be contributing its version of a given trait (white vs purple, say) to the offspring.
  2. Each baby thus had to have both versions of every trait, but showed in its own appearance only one version, which had to be dominant.
  3. The other (“recessive”) version, however, did not go away, and when these pea plants had sex again, they shuffled the two versions and randomly passed one on to their offspring (with the other coming from the other parent), so that their baby again had two versions.

It works as follows:

In the second generation, every pea plant has a purple version and a white version, one from each parent, but since purple is dominant, every flower looks purple.

In the next generation,

  • one fourth will have a purple from dad and a purple from mom (and look purple),
  • one fourth will have a purple from dad and a white from mom (and still look purple),
  • one fourth will have a white from dad and a purple from mom (and still look purple), and
  • one fourth will have a white from dad and a white from mom (and look white).

The rest, you might say, is history. With all our amazing breakthroughs in biology in the 20th century, we merely elaborated on his insights, in the process explaining the mechanism of evolution. (Darwin, who came up with his own idea at exactly the same time, had no knowledge of Mendel’s breakthrough.)

In today’s language, Mendel

  • showed the difference between genotype and phenotype (your genotype might be white/purple, for example, but your phenotype would be purple),
  • understood the basic idea of meiosis (the division of a cell into two haploid gametes — a sperm cell or egg with half of the mother cell’s chromosomes, randomly chosen),
  • described how two gametes then merge sexually to form a diploid zygote (ie, a cell with all chromosomes paired up again, one member of each pair coming from each parent), and
  • explained how some versions of the gene pairs, called alleles (such as purple or white), are expressed and some are not, even as those not expressed can re-emerge in the phenotype in the next generation.

DNA, RNA, ribosomes and all that were merely detail.

2) How was it possible?

Let’s make ourselves aware, first, of what it must have been like for Mendel during these years (this is purely conjecture):

  • He got up.
  • He prayed.
  • Had breakfast.
  • Went into the garden.
  • Looked at the pea flowers for a long time.
  • Watered them.
  • Took a break.
  • Watched the peas some more.
  • Thought about them.
  • Dozed off for a nap.
  • Woke up and had an idea, still inchoate in his mind.
  • Went to bed.
  • Thought about it some more….

You get the idea. Not exactly stressful. Few interruptions. Lots of waiting (how long is one generation of peas anyway?).

He was, we would say, switched off. He was not multi-tasking; he did not have adrenaline coursing through his veins as he answered a text message while watching a video stream while writing a PowerPoint …

Compare his time with his pea plants to Einstein’s time at the Bern patent office, where he was utterly underemployed and could easily have been bored, but instead did thought experiments and had his “miracle year”.

Or compare it to Isaac Newton’s time after he had to leave the action of Cambridge (because plague broke out) and returned to the isolation of his family farm, with nothing to do except watch apples drop from trees….

Or compare it to the time when Gautama Siddhartha (aka the Buddha) withdrew from all action and sat, just sat, under a tree, with the birds pooping on his head until there was a pile of guano on his hair, with his flesh melting from his bones because he was too deep in concentration to eat…..

Lesson #1:

Good stuff can happen during downtime (even if you didn’t volunteer for it).

Corollary: Can good stuff happen during uptime? You may have to take time out to be creative.

Lesson #2:

Be amazed.

Corollary: Don’t assume the things and people in your daily life are boring.

Lesson #3:

Turn the devices off.

Corollary: Distraction not only kills people, it also kills thought.

Lesson #4:

Be patient.

Corollary: You can’t breed peas in internet time. Nor novels, scripts, songs, paintings…

Lesson #5:

Look for the simple.

Corollary: The more bewildering the complexity observed, the simpler the solution.

(See also: Gordian knot.)

Lesson #6:

It doesn’t have to be complete to be original.

Corollary: It took us a century to explain the process Mendel grasped; an idea is good even if it “merely” starts something.

(See also: Incompleteness theorem. Mr Crotchety’s favorite — need I say more?)

Lesson #7:

Don’t expect the world to get it right away.

Corollary: If it took us a century to understand Mendel’s breakthrough, we might take a while even for yours. 😉

Patanjali in a lab coat

That modern science is somehow “catching up” with Eastern philosophy (logos uniting with mythos, as it were) is an old idea.

At least 25 years old, if you date it to Fritjof Capra’s The Tao of Physics, a good book then, which could be even better if written now.

In my mind, this convergence redounds to the credit of, rather than detracts from, both science and Eastern philosophy. (It does, however, make the “Western”, ie monotheistic, religions look ever more outdated.)

I will state the premise thus:

The millennia-old traditions of India and China express in metaphorical language concepts that we are today corroborating in scientific language.


  • By “Indian” traditions I mean Vedantic philosophy and all its offshoots, from Yoga and Ayurveda to Buddhism.
  • By “Chinese” tradition, I mean Taoism and Chinese medicine.

(Zen, for example, is thus included, for it is basically the Japanese form of the Chinese version of the Indian tradition of Buddhism.)

This premise yields a rich genre of research and inquiry. Here are three examples:

  1. one from within our bodies,
  2. one from the workings of our minds, and
  3. one from the entire cosmos.

1) In search of qi

A dear friend of mine is a successful Western doctor who is now also certified in Chinese medicine. In our conversations, we spend lots of our time “translating” Eastern concepts such as qi (prana in Sanskrit) into “Western” medical vocabulary.

Usually the medical vocabulary is less beautiful and less elegant but also less threatening to people in the Western mainstream, and hence useful. Qi, for example, is simply the (measurable) bioelectric energy in our bodies.

Once translated, seemingly occult claims by Eastern medicine offer themselves much more readily to scientific experimentation. The needles in acupuncture, for instance, are nothing but tiny antennas, which can receive, re-transmit and amplify electromagnetic vibrations — in other words, qi. We should be able to measure this.

Ditto for the chakras. I’ve written before about how the chakras correspond to Western psychological concepts such as those of Abe Maslow. But in essence, they are simply the swirls of bioelectric energy you get in the ganglia along the spine where many nerves (ie, many little antennas) converge. Again, we should be able to measure and observe them.

2) The monkey mind of misery

You might recall that I awarded the prize of “greatest thinker” in world history to Patanjali, a contemporary of the Buddha in India and the author of the Yoga Sutras. His insight was that happiness, balance and unity (= yoga, loosely) are products of only one thing:

A still mind.

The rest of the Yoga Sutras are, in effect, an analysis of how things go wrong when our minds wander, and a manual of how to return the mind to stillness. (That’s all Yoga is, really.)

Buddhism and Zen aim to do the exact same thing. Our slightly modish concept of “flow” is also the exact same thing. Total absorption into any one thing = stillness of mind.

The opposite of a still mind is often depicted as a monkey mind in Eastern tradition. It makes us miserable.

Now two boffins at Harvard — Matthew Killingsworth and Daniel Gilbert — have developed an ingenious experiment using (what else?) an iPhone app.

(Thank you to Mr Crotchety for forwarding their article in Science Magazine.)

The app, at random moments, asks people questions such as:

  • How are you feeling right now?
  • What are you doing right now?
  • Are you thinking about something other than what you’re currently doing?
  • If yes, something pleasant, neutral, or unpleasant?

The huge sample of data shows, as Killingsworth and Gilbert put it, that

A human mind is a wandering mind, and a wandering mind is an unhappy mind.

Specifically, our minds (ie, the minds being sampled) wandered about half the time (46.9%). And it did not matter what people were doing at the time! If they were doing pleasant things, their minds wandered just as much, and not necessarily to pleasant thoughts.

Furthermore, people were less happy whenever their minds wandered, even when they were thinking pleasant thoughts. (Obviously, unpleasant thoughts made them even more miserable than pleasant thoughts, but the point is that any mind-wandering discomforted them.)

And Patanjali said all that in the second sentence. 😉

(However, there is a fascinating twist — a benefit of mind-wandering — that touches on a subject dear to my heart: creativity. I’ll save that for a separate post.)

3) The cosmic parade of ants

In Indian tradition, there was not just one Big Bang. There have been infinitely many. That’s because the universe is born, expands, collapses and is reborn in an eternal cycle.

In metaphorical language,

  • each creation (or Big Bang) is the work of Brahma,
  • each expansion that of Vishnu, and
  • each collapse that of Shiva.

But these three are all part of the same underlying reality (Brahman). Metaphorically, Brahman is inhaling and exhaling, and each breath is its own spacetime, as Einstein might put it.

Because this is hard to grasp, even gods need reminding of it. Hence, for instance, the story of Indra and the Parade of Ants.


Indra was haughty and summoned a great architect to build a splendid palace. He kept adding requirements so that the architect was never done. Brahma (ie, also Vishnu and Shiva) decided to teach Indra a little lesson and appeared to him as a boy.

Boy: Will you ever complete this palace? After all no Indra has ever completed it before.

Indra: What do you mean, “no Indra”? There were other Indras?

Boy: Oh yes. When twenty-eight Indras have come and gone, only one day and night of Brahma has passed.

And just then, an endless parade of ants filed in and through the palace. Each one, said the boy, was once an Indra.

Our science currently tells us that our universe started (in earth time) 14 billion years ago. But now I read that Roger Penrose, a famous British mathematician, and V. G. Gurzadyan, a physicist, have found patterns in the microwave radiation generated by the Big Bang which suggest that

our universe may “be but one aeon in a (perhaps unending) succession of such aeons.” What we think of as our “universe” may simply be one link in a chain of universes, each beginning with a big bang and ending in a way that sends detectable gravitational waves into the next universe.

Is or Ought, true or good

Satoshi Kanazawa

I’ve recently discovered the blog of Satoshi Kanazawa, an evolutionary psychologist at the London School of Economics (LSE), which happens to be one of my alma maters (I got my Master’s there).

It is called The Scientific Fundamentalist, and for good reason. As he says here,

From my purist position, everything scientists say, qua scientists, can only be true or false or somewhere in between. No other criteria besides the truth should matter or be applied in evaluating scientific theories or conclusions. They cannot be “racist” or “sexist” or “reactionary” or “offensive” or any other adjective. Even if they are labeled as such, it doesn’t matter. Calling scientific theories “offensive” is like calling them “obese”; it just doesn’t make sense. Many of my own scientific theories and conclusions are deeply offensive to me, but I suspect they are at least partially true. Once scientists begin to worry about anything other than the truth and ask themselves “Might this conclusion or finding be potentially offensive to someone?”, then self-censorship sets in, and they become tempted to shade the truth. What if a scientific conclusion is both offensive and true? What is a scientist to do then? I believe that many scientific truths are highly offensive to most of us, but I also believe that scientists must pursue them at any cost.

Well, in this post, The Hannibal Blog would simply like to endorse and celebrate Kanazawa — both his approach and philosophy and his research and style.

Subscribe to his blog! It will do what I secretly hope The Hannibal Blog occasionally does for you:

  • intrigue you,
  • offend you,
  • delight you,
  • enrage you,
  • enthrall you.

How? Because it does not — as so much of the politically correct piffle out there does — try to achieve one half of the above effects without the other half. It has writerly courage. More specifics to come.


How humans are (not) unique

Robert Sapolsky

Beat me, said the masochist.

No, said the sadist.

We, Homo sapiens sapiens, are the only species that can understand the humor (ie, the meaning) of this conversation. It involves advanced versions of simpler concepts such as Theory of Mind and tit-for-tat. But the simple versions of those and other concepts are not unique to humans. So the definition of human really rests on marginal complexity.

Take 37 minutes of your time to watch Robert Sapolsky, a brilliant and hilarious neuroscientist at Stanford, as he analyzes what makes humans “uniquiest”. It is a prime example of making science accessible through storytelling.

The short of it: Almost all of the things that we used to think made us humans unique in the wild kingdom can in fact be observed in other species. Such as:

  • Intra-species aggression (including genocide)
  • Theory of Mind
  • The Golden Rule
  • Empathy
  • Pleasure in anticipation & gratification-postponement
  • Culture

However, we humans exhibit these faculties with a twist — with an added layer of complexity.

(By the way, he refers to the same baboon study that I mentioned in this post, but could not locate. Does anybody have a lead?)


Great, if not greatest, thinker: Galileo


Four hundred years ago exactly, Galileo Galilei pointed his telescope at the moon and began, with his wonderfully open mind, writing down what he saw. Other people had done this before him. So why include Galileo in my pantheon of the greatest thinkers ever?

Two reasons:

  1. He made us understand that our universe is much bigger than we could imagine.
  2. He, in his human and fallible way, stood up for truth against superstition, ignorance and fear, otherwise known as… but I get ahead of myself.

I) The universe is bigger than we can imagine

It’s one of those many cases in science, and in all thought (think: Socrates, Plato, Aristotle), when a great contribution came from several people building on the work of one another. This is wonderful. We place far too much emphasis on the solitary genius.

In Galileo’s case, he built on the prior work of, among others,

  1. Copernicus,
  2. Tycho Brahe, and
  3. Johannes Kepler,

in the process proving wrong the view, held by Aristotle and almost everybody after him, that the sun (and everything else) moved around the earth.



Copernicus was the first to realize that the earth in fact moved around the sun, which must count as one of the most revolutionary (pun intended) advances in our understanding of ourselves and our world. But Copernicus assumed (and why not?) that the orbit was a circle.

Tycho Brahe took things an important step further not so much by thinking as by measuring: the motion of Mars, in particular. He created data, in other words.



Kepler, who was Brahe’s assistant, then looked at those data and realized that our orbit, and those of the other planets, could not be circular but had to be elliptical. (A colleague of mine wrote a good and quick summary of all this.)

And Galileo? He filled in a lot of the blanks with his telescope.

  • He saw the moons of Jupiter, realizing that they were orbiting another body besides the earth and the sun, which was a shocker.
  • He saw that Venus was, like earth, orbiting the sun.
  • He saw that the sun was not a perfect orb.
  • He saw that the Milky Way contained uncountable stars just like our own sun.

For Homo sapiens, who was still coming to terms with the fact that the earth was round, all this was almost too much to bear. Our universe was vastly, unimaginably, bigger than the Bible had told us. How would we react to that news?

II) Those who seek and are open to truth will have enemies

This brings us to the church, or shall we say “religion” generally. The church hated Galileo and everything he said and stood for. He questioned what they thought they “knew”, which unsettled them, scared them, threatened them. But they had power. With Nietzschean ressentiment, they attacked him.

You can make anybody recant, and Galileo did. Sort of. In any case, he was declared a heretic and sentenced to house arrest for the rest of his life.

In one of my all-time favorite ironies, the Catholic Church, having condemned him, decided, 359 years later, in 1992 (two years before I sent my first email!), that Galileo was in fact right. How? A committee had discovered this. Good job, guys.

And so, Galileo is still with us, inspiring many. As he discovered that our universe was incomprehensibly big, we are discovering, as another colleague of mine, Geoff Carr, puts it, that

the object that people call the universe, vast though it is, may be just one of an indefinite number of similar structures … that inhabit what is referred to, for want of a better term, as the multiverse.

And as Galileo had to confront the mobs of ignorance, fear and superstition, so do we today. Remind yourself with this casual comment by an Arizona state senator (!), Sylvia Allen, Republican, that the earth is 6,000 years old.

Oh, and what about Aristotle? He was the one proved wrong, you recall. That’s OK, as I have argued. You can be wrong sometimes and still be a great thinker, provided you were genuinely looking for the truth.


The leopard and the baby baboon


I have been puzzling over, and moved by, a scene from Eye of the Leopard, a film by Dereck and Beverly Joubert, a handsome couple who are quite the up-and-coming wildlife-documentary makers.

It is the second clip in this video, called “Unlikely Surrogate”.

The “plot”, as provided by Mother Nature (and as narrated by Jeremy Irons):

A leopard hunts a baboon mother, kills her and begins to drag her up a tree for the feast. Suddenly, something wriggles: it is the one-day-old baboon baby that was clinging to its mother and now falls out.

The leopard pauses. … It does not know how to react. It watches the baby for hours. Then it gently picks the little primate up with its fangs and carries it further up the tree, to safety from other predators. The leopard licks and comforts the baboon baby whose mother the cat has just killed. The baboon baby recognizes the kindness and snuggles into the leopard’s chin. They cuddle together for hours against the cold. Then the leopard moves back down to eat the baby’s mother.

You can study biology, Darwin, evolution. You can hypothesize why this trait is passed on and not that trait. You can throw around fancy terms, such as cross-species altruism. And just when you’re feeling reassuringly scientific, nature reminds you of her eternal, sublime, moving mystery.


Bad writing about white oral sex

A while ago, using George Orwell’s classic essay on language, I opined that:

Good writing = clear thinking + courage

with the implication that

Bad writing = confused thinking

or, more interestingly,

Bad writing = clear thinking + cowardice

Well, I was thinking about this today when reading a phenomenally badly written article in the Science section of the New York Times. It is a case study not only in writerly cowardice but also in its pettier form: squeamishness.

The article starts meekly enough with the headline that

Findings May Explain Gap in Cancer Survival

The background is a genuine conundrum, which is that

  1. cancers of the throat and neck have been increasing and
  2. whites survive more often than blacks.

The obvious question is: Why the difference? It could be late diagnosis for blacks, lack of access to health care by blacks, different treatment for blacks or something else.

Well, it’s something else! And this ought to be the big, screaming headline of the article, except that the article never says it! Since the article does not, I will write the simple, plain-English sentence that is missing:

Whites have more oral sex than blacks, and therefore get infected with a virus that causes more of them to have cancer, but of a less lethal sort.

There you have it: The two most explosive subjects in America, sex and race, both in the same sentence. Naturally, any editor of the New York Times will seek cover. I say: Cowardice! Squeamishness!

The result is some cryptic and off-putting verbiage that buries the central insight underneath impenetrable code. It is exactly the sort of intentionally opaque language that George Orwell mocked.

Look at how the hints are buried in the text:

The virus can also be spread through oral sex, causing cancer of the throat and tonsils, or oropharyngeal cancer.


The new research builds on earlier work suggesting that throat cancer tumors caused by the virus behave very differently from other throat cancers, and actually respond better to treatment. And the new research suggests that whites are more likely than blacks to have tumors linked to the virus, which may explain the poor outcomes of African-Americans with HPV-negative tumors.

The research does actually establish the crucial link, but you would hardly know it from sentences such as this:

The results were striking: the TAX 324 patients whose tumors were caused by the virus responded much better to treatment with chemotherapy and radiation. And they were also overwhelmingly white. … While about one-half of the white patients’ throat tumors were HPV-positive, only one of the black patients had a tumor caused by the virus, Dr. Cullen said.

Towards the end, the writer dares venture the following hypothesis:

This suggests that the racial gap in survival for this particular cancer may trace back to social and cultural differences between blacks and whites, including different sexual practices, experts said.

Excuse me. “Social and cultural differences … including different sexual practices”?!

This would not happen at The Economist. If I wrote such claptrap, I would get laughed out of the room.


If you don’t know what it is, give it a name

  • What is sleep?
  • What is an electron/photon?
  • What is money?

I find it forever fascinating how utterly clueless we (Homo sapiens) are, about almost anything. A different sort of person marvels at how much we know, but I marvel at how little we know.

Which sort of person you are, I find, depends on how curious you are–ie, how easily satisfied you are that you know enough about something, anything. To oversimplify for the sake of some easy labels, the first sort might be called intellectual, the second practical. Every joke you’ve ever heard about intellectuals applies to me.

The most boring branch of college philosophy, as I recall hazily, is epistemology, the logos of episteme, ie knowledge. You read and write endless stupid essays on whether we really know that the chair we’re sitting on is a chair, whether we can be sure that we are not brains in a vat, and so forth. Able-bodied twenty-year-olds tune out and go to the keg party, as I did.

But there are infinitely more interesting questions to ask, and they get more fascinating with age. Today I want to give you a sample of just three. They have two things in common: 1) the practical types are likely to roll their eyes because, you see, the answer is too obvious to merit the question, and 2) nobody who does ask the question, least of all the experts, has the foggiest notion of what the answer might be.

1) What is sleep?

The practical person says ‘Make sure you get enough of it.’ Thank you, and I do. I’m really good at it, or I was until I had children.

But what is it we’re getting ‘enough’ of? With food, it’s easy to tell. Chemical energy goes in, changes shape into bodily functions and waste. But with sleep, it’s a mystery.

Some animals do it standing up, others lying down, some for minutes a day, others for months on end. All of us go through different phases in our sleep and we should probably have different names for each phase. We can measure some brain waves and chart them. We can follow people who don’t sleep enough and observe their immune systems and reaction times and such. We can, in short, describe what sleep does to us.

But can we say what it is? I’ve been asking some neurologists lately, and the answer is No. You can answer with semantic layers (“rest”, eg), but each layer leaves you more frustrated. We just don’t know. If we find out, that might be one of the greatest breakthroughs in human consciousness ever.

2) What is an electron/photon?

The practical person says ‘If this light switch works, you see the electrons and photons in action, okay?’ Indeed, he might whip out all sorts of measuring devices for both. But we didn’t ask what electrons and photons can do. We asked what they are.

I love this example because it illustrates how we soothe our ignorance with labels. First we called “them” (we were/are not sure whether they are separate things or aspects of the same thing) waves. Waves, of course, are something we think we understand because we’ve skipped stones in ponds and all that. And somebody discovered that if you shoot electrons/photons through two slits, this happens:

[Image: the double-slit interference pattern]

A wave pattern, in other words. Aha.
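The wave story can even be computed. Here is a minimal sketch of my own (not from any source cited here) of the textbook far-field two-slit intensity formula, ignoring the single-slit envelope:

```python
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative brightness at position x on the screen (far-field
    approximation, single-slit envelope ignored)."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

# Green light through slits 0.1 mm apart, screen 1 m away.
wl, d, L = 500e-9, 1e-4, 1.0
fringe = wl * L / d  # spacing between bright fringes: 5 mm
print(two_slit_intensity(0, wl, d, L))                    # 1.0: bright central fringe
print(round(two_slit_intensity(fringe / 2, wl, d, L), 6)) # 0.0: dark fringe between
```

Halfway between bright fringes the two paths cancel exactly–which is the part no picture of little billiard-ball particles can explain.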

Then somebody else discovered that when you shine light (photons) on a metal plate, electrons are knocked out, like this:

[Image: the photoelectric effect]

Particles, in other words. Aha.

And so we have the answer: “wave-particle duality”. It is Orwellian in its beauty. Rather than admit that we don’t know what it is (a “bundle” of energy? A “quantum”?), we take two things we know and mix them together with a hyphen.

This example goes far beyond electrons and photons, by the way. We follow this approach with all subatomic particles–ie, we bash them together, see another flying off, and instantly … name it. Bosons, muons, leptons. My favorites are the quarks, which can be (and I kid you not) up, down, top, bottom, charmed or strange. Those guys in the hadron colliders have a great sense of humor.

3) What is money?

I actually found myself in the amusing situation once of giving a lecture (to a class of journalism students) on this very question. What you do, in case it ever happens to you, is say that you don’t know, but at a high intellectual level, for two hours.

Again, the practical person says ‘I know it when it’s in my bank account’, or describes things that money does.

It does three things, by the way: It acts as a

  1. medium of exchange (so we don’t have to barter)
  2. unit of account (so we can keep track of value)
  3. store of value (so we can save value over time, lest it rot as bananas do)

Great. We can describe other aspects of it. It has velocity. It has a multiplier effect. And so on.
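The multiplier, at least, is concrete enough to sketch. A toy illustration of my own, under the textbook fractional-reserve assumption that every loan gets redeposited in full:

```python
def total_money(initial_deposit, reserve_ratio, rounds=1000):
    """Textbook money-multiplier toy: each bank keeps reserve_ratio
    of a deposit as reserves and lends out the rest, which is
    then redeposited at the next bank, and so on."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)
    return total

# 100 units deposited under a 10% reserve requirement:
print(round(total_money(100, 0.10), 2))  # 1000.0, ie 100 / 0.10
```

The geometric series converges to initial_deposit / reserve_ratio: a 10% reserve requirement conjures ten units of money out of every one deposited. Which rather anticipates the question below.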

But what is it? It is not cowry shells, although it once was. It is not gold or silver, although it once was (and still is in many names for money, such as Geld or argent). But even though the Queen promises to pay me x pounds sterling, she would not actually give me any metal if I showed up at Buckingham Palace. Other times money is cigarettes (post-war Germany) or sex (ditto). Often it is just paper. But almost all of the time, nowadays, it is just debits and credits on a computer screen. (!)

The key moment for me occurred when I was talking to an economist about this, and finally he said:

you have to understand that all this money isn’t actually … there.

He meant it can go pouff if people don’t believe it’s there (see: etymology of credit). It can reappear when people believe it might be there.

And that may be the appropriate note to leave this post on, in the second year of our Great Recession. Everything you lost was … faith-based to begin with.
