False perception, false memory

The biggest social event of the year 1878 in Palo Alto, California, took place on a horse-breeding farm. Leland Stanford, former governor and co-founder of the all-powerful Southern Pacific Railroad, had retired and, here at the site where he would soon found Stanford University, was indulging his passion: anything equestrian.

Stanford was, at a general level, an alpha male who trusted his own opinions. More specifically, when it came to horses, he considered himself “an expert”. So it was utterly clear to him that he, the expert, knew how horses galloped.

After all, all you had to do was look! And Stanford had looked, as had artists throughout all of human history. It was obvious that horses briefly “flew” by splaying their four legs in the air before alighting for the next leap. Like this:

So Stanford, as this account tells the tale, made contact with Eadweard Muybridge, an eccentric Briton who had mastered the cutting-edge technology of the day, photography, and was able to take photos in rapid succession. Muybridge brought his kit to Palo Alto.

At Stanford’s invitation, large crowds turned out for the occasion. Muybridge was to document a galloping horse and thus prove common sense right.

Eadweard Muybridge

Muybridge’s photos did nothing of the sort. Instead, they were shocking. For they disproved mankind’s common sense, thereby contradicting the direct observation of many generations.

You can see this disproof above, in the (deservedly famous) animation derived from the images. If you want to be sure, you can look at the stills in one of the other sequences:

During the only instant in the cycle when the horse is entirely in the air, its legs are actually tucked together, not splayed.

After Muybridge’s breakthrough, mankind thus had some adjusting to do, not least its painters:

Artists of the day were both thrilled and vexed, because the pictures “laid bare all the mistakes that sculptors and painters had made in their renderings of the various postures of the horse,” as French critic and poet Paul Valéry wrote decades later… Once Muybridge’s photos appeared, painters like Edgar Degas and Thomas Eakins began consulting them to make their work truer to life. Other artists took umbrage. Auguste Rodin thundered, “It is the artist who is truthful and it is photography which lies, for in reality time does not stop.”

(Does Rodin’s reaction remind you of anything today?)

The general insight

The big point here is really that we should be less confident in (= more skeptical about — however you want to put it) our own opinions and grasp of reality. That’s because:

  • we tend to “see” what we want or expect to see (as Stanford did with his horses),
  • what we notice is determined by what we pay attention to (which is why distracted driving is so dangerous), and
  • we can only make sense of the world by interpreting it through stories we tell, and storytelling can be problematic.

In that sense, this post is a follow-up to an earlier one.

This topic seems to strike a chord with writers and journalists in particular. The other day, for instance, I was discussing it with Rob Guth, a friend of mine at the Wall Street Journal. Rob recently wrote great stuff about the surprising recollections of Microsoft co-founder Paul Allen (strikingly negative about Bill Gates, in particular). As Rob got deeper and deeper into his research — meaning: as he “fact-checked” his sources’ memories of Microsoft’s early years — the “truth” became ever more elusive. Was so-and-so in the room all those years ago when such-and-such happened? A says Yes, he was. B says No. Suddenly A begins to doubt himself (re-narrating the story in his mind). And so on.

Journalists, of course, are not the only ones relying on the recollections or observations of others. Judges, lawyers and jurors do as well, to name just one particularly germane area.

Can you trust eyewitnesses?

In this article, Barbara Tversky, a psychology professor, and George Fisher, a law professor, suggest that eyewitnesses cannot always be trusted. (Since witnesses are at the heart of the adversarial legal system, this undermines our entire tradition of justice.)

As Tversky and Fisher say,

Several studies have been conducted on human memory and on subjects’ propensity to remember erroneously events and details that did not occur. …

In particular,

Courts, lawyers and police officers are now aware of the ability of third parties to introduce false memories to witnesses…

But even without such tricks,

The process of interpretation occurs at the very formation of memory—thus introducing distortion from the beginning. … [W]itnesses can distort their own memories without the help of examiners, police officers or lawyers. Rarely do we tell a story or recount events without a purpose. Every act of telling and retelling is tailored to a particular listener; we would not expect someone to listen to every detail of our morning commute, so we edit out extraneous material.

In fact, these studies show what Rob discovered during his interviews of sources for the Paul Allen story:

Once witnesses state facts in a particular way or identify a particular person as the perpetrator, they are unwilling or even unable—due to the reconstruction of their memory—to reconsider their initial understanding.

Tversky and Fisher conclude:

Memory is affected by retelling, and we rarely tell a story in a neutral fashion. By tailoring our stories to our listeners, our bias distorts the very formation of memory—even without the introduction of misinformation by a third party…. Eyewitness testimony, then, is innately suspect.

And:

It is not necessary for a witness to lie or be coaxed by prosecutorial error to inaccurately state the facts—the mere fault of being human results in distorted memory and inaccurate testimony.

If you don’t know what it is, give it a name

  • What is sleep?
  • What is an electron/photon?
  • What is money?

I find it forever fascinating how utterly clueless we (Homo sapiens) are about almost everything. A different sort of person marvels at how much we know, but I marvel at how little we know.

Which sort of person you are, I find, depends on how curious you are–ie, how easily satisfied you are that you know enough about something, anything. To oversimplify for the sake of some easy labels, my sort might be called intellectual, the other practical. Every joke you’ve ever heard about intellectuals applies to me.

The most boring branch of college philosophy, as I recall hazily, is epistemology, the logos of episteme, ie knowledge. You read and write endless stupid essays on whether we really know that the chair we’re sitting on is a chair, whether we can be sure that we are not brains in a vat, and so forth. Able-bodied twenty-year-olds tune out and go to the keg party, as I did.

But there are infinitely more interesting questions to ask, and they get more fascinating with age. Today I want to give you a sample of just three. They have two things in common: 1) the practical types are likely to roll their eyes because, you see, the answer is too obvious to merit the question, and 2) nobody who does ask the question, least of all the experts, has the foggiest notion of what the answer might be.

1) What is sleep?

The practical person says ‘Make sure you get enough of it.’ Thank you, and I do. I’m really good at it, or I was until I had children.

But what is it we’re getting ‘enough’ of? With food, it’s easy to tell. Chemical energy goes in, changes shape into bodily functions and waste. But with sleep, it’s a mystery.

Some animals do it standing up, others lying down, some for minutes a day, others for months on end. All of us go through different phases in our sleep and we should probably have different names for each phase. We can measure some brain waves and chart them. We can follow people who don’t sleep enough and observe their immune systems and reaction times and such. We can, in short, describe what sleep does to us.

But can we say what it is? I’ve been asking some neurologists lately, and the answer is No. You can answer with semantic layers (“rest”, eg), but each layer leaves you more frustrated. We just don’t know. If we find out, that might be one of the greatest breakthroughs in human consciousness ever.

2) What is an electron/photon?

The practical person says ‘If this light switch works, you see the electrons and photons in action, okay?’ Indeed, he might whip out all sorts of measuring devices for both. But we didn’t ask what electrons and photons can do. We asked what they are.

I love this example because it illustrates how we soothe our ignorance with labels. First we called “them” (we were/are not sure whether they are separate things or aspects of the same thing) waves. Waves, of course, are something we think we understand because we’ve skipped stones in ponds and all that. And somebody discovered that if you shoot electrons/photons through two slits, this happens:

Young’s double-slit interference pattern

A wave pattern, in other words. Aha.
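A textbook aside, for what it’s worth: the stripes count as a “wave pattern” because bright fringes appear exactly where the paths from the two slits differ by a whole number of wavelengths,

\[
d \sin\theta = m\lambda, \qquad m = 0, 1, 2, \dots
\]

where \(d\) is the slit spacing, \(\lambda\) the wavelength and \(\theta\) the angle to a given fringe. Something spread out in space is interfering with itself, which is what “wave” is shorthand for here.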

Then somebody else discovered that when you shine light (a stream of photons, as we now say) on a metal plate, electrons are knocked out, like this:

The photoelectric effect

Particles, in other words. Aha.
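The particle reading boils down to one formula, too (again a textbook aside, for what it’s worth): each photon carries a fixed lump of energy, and an electron only comes loose if that lump exceeds the metal’s “work function”,

\[
E_{k,\max} = h\nu - \phi,
\]

where \(h\nu\) is the energy of one photon (Planck’s constant times the light’s frequency) and \(\phi\) is the energy needed to pry an electron out of that particular metal. Dim light of high enough frequency still ejects electrons; arbitrarily bright light of too low a frequency never does. Waves shouldn’t behave that way, which is what forces the lumpy, particle-like picture.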

And so we have the answer: “wave-particle duality”. It is Orwellian in its beauty. Rather than admit that we don’t know what it is (a “bundle” of energy? A “quantum”?), we take two things we know and mix them together with a hyphen.

This example goes far beyond electrons and photons, by the way. We follow this approach with all subatomic particles–ie, we bash them together, see another flying off, and instantly … name it. Bosons, muons, leptons. My favorites are the quarks which can be (and I kid you not) up, down, top, bottom, charmed or strange. Those guys in the hadron colliders have a great sense of humor.

Banknotes

3) What is money?

I once found myself in the amusing situation of giving a lecture on this question to a class of journalism students. What you do, in case it ever happens to you, is say that you don’t know, but at a high intellectual level, for two hours.

Again, the practical person says ‘I know it when it’s in my bank account’, or describes things that money does.

It does three things, by the way: It acts as a

  1. medium of exchange (so we don’t have to barter)
  2. unit of account (so we can keep track of value)
  3. store of value (so we can save value over time, lest it rot as bananas do)

Great. We can describe other aspects of it. It has velocity. It has a multiplier effect. And so on.
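Those two terms even come with standard textbook formulas, for what it’s worth (the symbols below are the usual ones from the economics textbooks, not anything specific to this post):

\[
MV = PQ, \qquad \text{simple money multiplier} \approx \frac{1}{\text{reserve ratio}},
\]

where \(M\) is the money supply, \(V\) how often each unit changes hands in a period, \(P\) the price level and \(Q\) real output. With a 10% reserve requirement, the old textbook story says every unit of base money can support roughly ten units of deposits.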

But what is it? It is not cowry shells, although it once was. It is not gold or silver, although it once was (and still is in many names for money, such as Geld or argent). But even though the queen promises to pay me x pounds of sterling, she would not actually give me any metal if I showed up at Buckingham Palace. Other times money is cigarettes (post-war Germany) or sex (ditto). Often it is just paper (above). But almost all of the time, nowadays, it is just debits and credits on a computer screen. (!)

The key moment for me occurred when I was talking to an economist about this, and finally he said:

you have to understand that all this money isn’t actually … there.

He meant it can go poof if people don’t believe it’s there (see: etymology of credit). It can reappear when people believe it might be there.
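If you want to watch that vanishing act in miniature, here is a toy sketch of my own, using the crudest textbook simplifications (a single bank, a fixed 10% reserve ratio, borrowers who redeposit every loan in full) rather than anything the economist actually said:

```python
# Toy illustration of textbook deposit creation: a deliberately crude sketch,
# not a model of any real banking system.
def deposits_on_the_books(base_money: float, reserve_ratio: float, rounds: int = 60) -> float:
    """Total deposits recorded when excess reserves are lent out and redeposited, over and over."""
    total, fresh = 0.0, base_money
    for _ in range(rounds):
        total += fresh                  # each new deposit counts as money on the ledger
        fresh *= (1.0 - reserve_ratio)  # only the excess reserves get lent on again
    return total

base, ratio = 100.0, 0.10
print(f"Base money (notes and coins): {base:,.0f}")
print(f"Deposits on the books:        {deposits_on_the_books(base, ratio):,.0f}")  # approaches base/ratio
print(f"Actually held in reserve:     {base:,.0f}")
# Everything above the reserve line exists only as ledger entries (belief made
# visible), and it can shrink again if loans go bad or everyone wants cash at once.
```

Run it and the books show close to a thousand, while the vault still holds the original hundred; the gap is exactly the part that was never … there.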

And that may be the appropriate note to leave this post on, in the second year of our Great Recession. Everything you lost was … faith-based to begin with.
