Jeff Duntemann's Contrapositive Diary


The Impersistence of Memory, Part 2

Human memory is peculiarly unreliable–but verifiably unreliable. The science is there, and it’s pretty good science, too. In his excellent book, On Being Certain, neurologist Robert A. Burton describes the Challenger study: Within a day of the Challenger disaster, a psychologist asked 106 of his students to write down precisely where they were when the explosion occurred, how they heard about it, and how they felt at that moment. Two and a half years later (hardly a lifetime, though significant for the young) the students were interviewed and asked to recount the details of what they had written down and given to their professor. Fewer than ten percent of the students recalled all of the details as they had written them. A quarter of the students’ memories were significantly different, and over half differed in at least some major respect from what they had recorded at the time.

Thirty months–and an event that stands in many people’s memories (including my own) as one of the most striking events of their lifetime. Intriguingly, even when confronted with their original notes written the day after the event, many students with conflicting memories insisted that their current memories were correct. As one said, “That’s my handwriting, but that’s not what happened.”

Egad.

I’ve been struck in recent years with an increasing number of things that happened that I don’t remember, things I remember incorrectly, and (disturbingly) things that I remember vividly that simply didn’t happen at all. I introduced this topic with a simple example: A friend of mine found a college-era manuscript of a short story I wrote that I just don’t remember writing. Getting old, I guess. The bitchy part is that it’s a pretty good story, and it was completely outside my usual aliens-and-starships turf. Somehow I would have thought it would make a more vivid impression on me.

But we forget things. Odder are things we remember vividly that we in fact remember wrong. Forty-three years ago, when I was in eighth grade, I remember talking to a girl in my class and stumbling on the fact that her father had died. Forty years later, I ran into her again at our grade-school reunion, and it came out that it was her mother and not her father who had died. The original conversation was painful, and I remember painful things very well–you’d think I would have remembered it more accurately. In a different conversation with the same girl, I asked her what high school she would be attending that fall. I remember her indicating one Catholic girls’ school, but in fact (again, verified forty years later) she had attended another. She had never even considered the school that I remember her saying, because it was a fair ways off and the other school was within walking distance.

But I remember both conversations to this day, with the sort of clarity one would expect of a bright if nerdy kid attempting to make conversation with a girl he was a little sweet on. It took considerable courage to talk to her at all, and those are the things of which solid memories are made.

Except when they’re not, I guess.

It was that particular incident that started me looking critically at my own memories, especially those that could be verified somehow. I found a lot of little things that didn’t add up, including a few “flashbulb” memories (as psychologists call them) that one would expect would be vivid and indelible forever. The most recent one is something I chased down just the other day: I vividly remember the first time I kissed Carol–who wouldn’t?–and I remember that it was after we started school in the fall, which would be at least five or six weeks after we met at the end of July. Well, on the back of her 3 X 5 card in my teen-years telephone index box (which still exists among piles of oddments I’m amazed that I still have) is the note “kissed 8/16/69.” That was only two weeks after our most fateful meeting, and school was still another two weeks off. (Remember when school started after Labor Day?)

If I don’t remember that accurately, well, what hope for the rest of it? What kind of life did I actually live?

Stand by: The weirdest part is yet to come.

The Impersistence of Memory, Part 1

The other day, I had dinner with my high school locker partner and college friend Tom Barounis. He handed me something that he had found among his own things: a college-era non-SF story manuscript of mine, a typewritten original and not a Xerox copy, complete with comments by an unknown third party who sounded like a college prof. On the back of the last sheet, in my own distinctive block printing, was the date: 4/30/72.

There were two things wrong with this: 1) I don’t remember having my Selectric typewriter in the spring of 1972, and 2) I don’t remember writing the story itself.

Point 1 is checkable. I used to date typewritten manuscripts, and I have two moving boxes full of them back home, so as soon as we get back to Colorado I can haul out my writer’s trunk and see when exactly I made the transition from Smith-Corona to IBM. I recall it being a year later, as I was ramping up for the Clarion SF workshop, which I attended in the summer of 1973.

Point 2 is more peculiar. I vaguely remember writing a story with that title, but the story I remember writing was nothing like the story I read last night, for probably the first time in 37 years. I know what probably happened with the manuscript: After getting it back from the prof I wrote it for, I passed it on to Tom to read, and it remained with him since the spring of 1972.

But why do I remember the story being about something else entirely?

I remember the story being a failed experiment, about two (male) friends who experience a physical attraction between them and don’t know how to deal with it. Instead, it was about two male friends stressing about the draft lottery, and how one of them runs to Canada when he pulls number 5. Furthermore, it was not a failure but a pretty decent story, considering that I only wrote “mainstream” (non-SF) fiction with a gun to my head in those days. (I’d even consider sending it out for publication, except that I don’t think anybody remembers what the draft lotteries were about anymore.)

It’s a headscratcher. It’s also the latest in a series of headscratchers that have turned up here and there as I’ve grown older, and have realized that a growing number of things that I remember happening did not happen anything like the way I remember them. Some did not, in fact, happen at all. I’ve begun to wonder what other memory holes are waiting for me to discover, and how much the life that I remember living resembles the life that I actually lived.

More in coming days.

So What’s a “Contrarian”?

Ever since I declared myself a Contrarian Optimist and renamed my VDM Diary to ContraPositive Diary in 2000, people have been asking me what a “contrarian” is. Everybody seems to connect it with buying stock right after a market crash (not always a bad thing) or buying gold because, after all, the end of the world is coming Real Soon. It means neither. Nor does it mean stubborn, although it may mean “stubborn about refusing to agree with you.” It’s a fair question, and warrants some explanation.

First of all and most fundamentally, a contrarian is a sharp stick in the eye of conventional wisdom. There are certain things that “everybody knows” even though this “everybody” is often the union of the sets of the captious, the lazy, and a tribe of opinion-makers with an agenda. The troubling part is that conventional wisdom is sometimes true, and sometimes in opposition to other tenets of conventional wisdom. The world is never as simple as we think, and conventional wisdom is often an oversimplification of a difficult truth, offered up to the ignorant to keep them from having to work too hard, and sometimes serving to sugarcoat an agenda in the process. Contrarians understand that conventional wisdom is the cross-product of lazy thinking and hidden agendas, and go digging for the truth. Where to dig is important, and generally not obvious. Contrarians pay attention to those who doth protest too much, and look for clues in the sound and the fury. Much can be learned by listening to fools and discerning their agendas; fools are less adept at concealing agendas than the people who originated the agendas.

Agendas are key. Contrarians do not swear fealty to tribes or tribal ideologies. (Tribalism is a special danger to civilization, and I’ll expand on this issue here on Contra as time allows.) Tribes are groups who define their own specific conventional wisdom–a collection of ideologies that I call “received opinions”–and then enforce it within the tribe as mercilessly as they must. Deep psychologies are at work here. There seems to be a peculiar and powerful desire in some personality types to offer fealty to a tribe, in a very deep and preverbal way that precludes any meaningful opposition to the tribe’s ideologies. We are looking at things we inherited from our primate ancestors, things we’ve had since before we had language. Such people are pretty much owned by the tribe, and serve the needs of tribal leaders while feeling that the fate of the world depends on their loyalty to the tribe and their vilification of The Enemy–basically, competing tribes.

Contrarians may hold positions that they develop over time, but they do not swear fealty to anything, and reserve the right to change their minds, and recognize their occasional responsibility to do so.

Changing one’s mind is good exercise (way better than leg lifts) and contrarians do it as often as necessary. Key here is that contrarians are not certain. Contrarians doubt everything, in the older and higher sense of “doubt,” meaning to recognize the incompleteness of a particular understanding of something. Doubts do not preclude faith. Doubts, in fact, are how faith happens. Certainty is how faith dies. Faith may well be defined as “conditional acceptance of something for which we have incomplete corroboration.” I have personal doubts about whether God exists, but I also have faith that He does. This is not schizophrenic; this is how it works. When you become certain of something, the game is over, the doors are locked, and the lights inside go out. Further insight is impossible, and further movement toward wisdom does not happen. (Worse, people prone to certainty are easy pickin’s for tribal leaders who need foot soldiers.) Certainty, after all, is the conviction that there is nothing more to be learned. After that, what’s left but watching hockey?

Issues of God and religion may be bad examples; I’m just odd that way. I believe in the laws of physics, but I also know that somebody with Major Doubts got the “law” of parity conservation repealed in the 1950s. That’s how science works, and in these days of belligerent certainty, a true scientific mindset is a contrarian attribute. I will not be surprised when String Theory gets shot in the head by some doubter somewhere (who may not even be born yet) and I won’t get annoyed when it happens. Less cosmic and closer to home, I am pretty sure that eating carbs makes you fat and eating fat makes you thin. I won’t say that I know for certain, but the more I look, the more evidence I’ve found to balance my doubts. My doubts remain. This doesn’t bother me. Nor would being proven wrong. I enjoy changing my mind when the evidence suggests that it’s necessary. The process is painful, but so is a twenty-mile hike. The pain will pass.

Doubts are a manifestation of humility. We always know less than we think we do, and the best way to learn more is to assume that you know less going in. No matter what you think you know, you are wrong. And so am I. A contrarian, however, is willing to admit it, and keep on diggin’.

It’s not all drudgework, this contrarian business. A contrarian enjoys the perversity inherent in being a contrarian. A touch of perversity keeps your crap detector sharp, and prevents you from falling into predictable ruts that all too often lead directly to tribal enslavement.

The more I read wine snobs dumping on sweet wine, the more I enjoy sweet wine. The more some people froth about Global Warming, the more intrigued I am by the possibility of Global Cooling–and the research that I’ve pursued there has been a lot of fun. I enjoy tweaking cultural snobs of all types, and my practice in being a contrarian has allowed me to work both sides of most of these streets: I’ll waltz but damn, I’ll polka! I read Chaucer in Middle English, but I like country music and I have a cowboy hat made by Ronald Reagan’s hatmaker. I like a good souffle, and I like Egg McMuffins. I write my reserved words in uppercase. (My language allows that. Sorry about yours.) My mix CDs jump between the Chicago Symphony and The Peppermint Trolley Company. Bach, sure. Barry Manilow, no problem.

Ruts, after all, are horizons pulled in too close. Shove ’em back whenever you can. I know dopers and scientists and crackpots and 4-star generals, and I have enjoyed the company of all of them. Life is full of irony and little weirdnesses, and as Art Linkletter hugely profited in learning, people are funny. Contrarians strive not to take anything too seriously. (We fail sometimes, but we try.) Even, or especially, ourselves.

Finally, a contrarian is free. This shouldn’t be necessary to say, but so much of modern life consists of surrendering your intellectual freedom to tribes of various kinds for dubious rewards. Tell the weasels to –ck off. (Tsk. Really, now. The hidden word is “back.”)

So I begin 2009 and a rebooted Contra, with a promise to revisit some of the points here in more detail as time permits. Happy New Year. Keep an open mind. And stay tuned.

Is Everybody Happy?

I just ordered two books: Gross National Happiness by Arthur Brooks, and The Big Sort, by Bill Bishop. The books are part of my long-term research into why we think and act the way we do. I’ll report further next year when I summarize my thoughts so far, but sniffing around online for reactions to Brooks’ book has raised an interesting question: Can we in fact measure happiness?

I don’t always agree with Arthur Brooks, but I admire his willingness to bring up issues that seem calculated to infuriate liberal opinion-makers—and back his opinions up with reasonable research. One of his controversial positions in Gross National Happiness is that happiness appears to correlate with intensity of religious feelings. Cato research fellow Will Wilkinson challenges that thesis in his blog, and whereas it’s a reasonable counterpoint, one of the comments below Wilkinson’s essay hit the whole problem between the eyes: People belonging to deeply conservative religious organizations are pressured, sometimes intensely, to say that they’re happy. (The commenter claims to be a lapsed Evangelical.) This maps with my own experience dealing with the conservative Catholic fringe, and yet the truth is that a lot of these people seem to me to be not only deeply unhappy, but on the thin edge of panic.

Why this should be is a subject I hate to broach at all and can’t even attempt right now, but set it aside for the moment. The real flaw in Brooks’ research may be that asking a person if he or she is happy is not a useful way to measure happiness. I see research summarized online indicating that the people in Nigeria are the happiest people in the world, though more recent research tags the Danes. The summaries understate the obvious: Happiness does not mean the same thing to all people. Worse, there are cultural pressures in a lot of places to fit in and not make a fuss (Japan comes to mind) and heavy pressure in religious and other tribal organizations to claim that the tribe provides everything they need to be happy—leading their adherents to make the statements that are simply expected of them. It’s like the ritual answer to the seminal rhetorical question, “How ya doin’?” People who answer something other than “Great!” don’t really understand the ritual.

It might be more useful to measure happiness by way of things like public civility, rate and tenure of marriage, incidence of alcohol and drug abuse, and so on. If research must be based on questionnaires, it may be possible to approach the matter from the other side, by asking more oblique questions about feelings like satisfaction, pain, sadness, or enthusiasm, or at least things that are not obviously a part of cultural or religious scripts. The truth may be that the whole question is meaningless; after all, what is the objective experience of the color red, or the taste of dry wine? We all experience the world differently, and we interpret that experience for ourselves through the lens of our culture and the social structures that are the most important to us. If we badly want to be part of a sophisticated social culture, we may choke down a crappy bitter Cabernet and praise it to the ceiling even if (to us) it’s (red) swill, because that’s what the cultural leaders and our “initiated” peers expect. This is a very deep well of inquiry, and I will be writing more about it in months to come.

We’ll see what Brooks has to say when the book arrives, but I’m suspicious of the premise, even though I would be happy (as it were) to be proven wrong.

An Outrageous Proposition

I just got home to Colorado Springs from a week's trip to Chicago, and whereas a week sounds like a long time, well, it may be when you're 12. I am not 12. Poof! The week was there and gone.

But I had an idea yesterday that I'm going to pursue in this space. It's a challenge, to myself and to all of you, to engage in an outrageous experiment here in Contra. This will require the comments feature here on LiveJournal (alas, I'm not quite ready to move Contra over to WordPress yet) but that isn't the tough part. The aim of the experiment is to see if the larger “we” (again, myself and all of you) can engage in online political discussion completely devoid of anger.

I do not mean that you can't be angry; that's unreasonable and may be impossible. What I want you to do is write without anger. That takes some effort but it can be done, and it's a useful skill to have. I've found that forcing myself to write without expressing anger allows me to think more clearly. In some weird way, it decouples my anger from my rational mind and leaves it on a side track for awhile where it won't get in the way of the points I'm trying to put across.

Note that this is a challenge, but (for a limited time only! As not seen on TV!) it is also the rules. I have a rule for Contra that I don't invoke very often: You can be either angry or anonymous on my blog but you cannot be both. I delete ten or twelve comments a year from anonymous flamers who come out of nowhere and flame either me or someone in the comments. I sometimes give them a chance to identify themselves, but this rarely happens. Mostly I get another flame, and then the thread goes where all flames eventually go: Out. But until I finish up this series on politics, a new rule applies: No anger. It applies from today's entry until I call the whole thing done, which will almost certainly be when I go get my mouth worked on next week. Until then, angry comments will be deleted.

However, there's one final wrinkle: If and when I discern anger in a comment, I'm going to point it out in a nonjudgmental fashion and ask my readers if they agree that the message contains anger. I reserve the right to override the vote, but I promise to consider it seriously. A thumbs-up or -down is sufficient, but explaining why you agree or disagree with me regarding the presence of anger in the comment (not with the comment's factual content, which should be done separately) could be interesting.

I will be watching for the very human tendency to see anger more clearly in people you disagree with. I may or may not say anything, but I will be watching.

Let's see what happens.

_ . . . _

Some of the most reliable political theater (though generally not the best) proceeds from promised tax cuts. If I were to flip the Magic 8-Ball this second, it would predict that neither party will even attempt a tax cut in the next two years, irrespective of which wins. All the promises we've heard will be quietly forgotten, and probably explained by the obvious truth: We cannot afford to cut taxes at this time. The Bush tax cuts will quietly expire, and among the ill and elderly wealthy there will be more assisted suicides (both willing and unwilling) in 2010 than a civilized nation should tolerate. The Magic 8-Ball says no more than that, other than its standard mantra when answering political questions: “You are all behind me now.”

What I want to talk about tonight is another oft-heard mantra: “The rich aren't paying their fair share!” What never seems to come up in discussion is what the “fair share” would actually be. I want some hard numbers here. I remember reading of a psych experiment years ago in which people were asked a question something like this: “One man makes $10,000 a year. Another man makes ten times that amount. In a truly fair income tax system, how much more should the second man pay in income taxes than the first man?” The several choices ran from “The same” through an ascending scale of multipliers, like 2X, 5X, 10X, 50X, 100X, and 1000X. Overwhelmingly, people answered “10X” and seemed to think (as gleaned from subsequent discussions with the experimenters) that this was a progressive tax. It's not. It's a flat tax. The experiment was (if I recall) about leading questions, and this was only one question among many. But it suggests to me that we as a nation don't even remotely understand the tax system that we have, which is unsurprising, given that most Americans probably couldn't even lift the tax code. This makes the discussion difficult and complex.
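To make the distinction concrete, here's a little sketch in Pascal (reserved words in uppercase, as is my habit), with rates and brackets that are entirely made up for illustration. Under a flat tax, ten times the income means exactly ten times the tax; under even a crude two-bracket progressive schedule, the multiplier outruns the income ratio:

    PROGRAM TaxRatio;

    FUNCTION FlatTax(income: Real): Real;
    BEGIN
      FlatTax := income * 0.20;   { one hypothetical rate for everyone }
    END;

    FUNCTION ProgressiveTax(income: Real): Real;
    BEGIN
      { hypothetical brackets: 10% up to $20,000, 30% above }
      IF income <= 20000 THEN
        ProgressiveTax := income * 0.10
      ELSE
        ProgressiveTax := 20000 * 0.10 + (income - 20000) * 0.30;
    END;

    BEGIN
      WriteLn('Flat tax:        ', FlatTax(100000) / FlatTax(10000) : 0 : 1, 'X');
      WriteLn('Progressive tax: ', ProgressiveTax(100000) / ProgressiveTax(10000) : 0 : 1, 'X');
    END.

The flat scheme prints 10.0X; the toy progressive scheme prints 26.0X. The people who answered “10X” were describing a flat tax while believing they had described a progressive one.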

We do have some hard numbers on the state of things as they now exist: 26% of Federal tax receipts come from the wealthiest 1%, which comprise 1.1 million individuals. The wealthiest 6% of taxpayers (5.6 million individuals) contribute 42% of all Federal receipts. The poorest 40% of Americans pay no Federal taxes at all beyond the Social Security payroll tax. And that's looking at Federal taxes generally; if you look at income taxes alone the picture is even more striking: For tax year 2005, IRS numbers tell us that the wealthiest 1% paid 39% of all income tax revenues. The top 10% paid 70%. This is a pretty progressive system. The question we need to ask ourselves as a nation is whether it's progressive enough, and we need to be brave enough to talk about real numbers.

There are two complications that need to be part of that discussion. First of all, the very rich have a great deal of control over how much their income is and when they get it. This is why tax receipts often go down when tax rates are raised: The rich simply cut back on generating new income and draw on their cash reserves until they call their tax guys and figure out which loopholes they can switch to in order to reduce their tax liability. This is in large part why the very rich have not been champions of the flat tax or other radical tax simplification schemes: Any such scheme would increase their liability hugely because such systems offer little flexibility and few loopholes.

The second complication is related to the first: It's not a good idea for the Federal government to depend on so few taxpayers for so much of its tax revenue, because the fewer people are paying, the “wigglier” and less predictable the numbers get. Even short-term planning becomes fluky, because a change in tax laws, or even an innovative new investment mechanism, can sweep across the finance business in less than a year, making previous tax revenue projections obsolete. The very rich share a common culture, and their money is “shaped” by a relatively few large banks and financial services firms. Small changes in the way money is handled are thus hugely leveraged.

I haven't even touched on the argument that everybody should pay something in income taxes simply to have a stake in the economy and the government. I only want to point out that Federal revenues would be a lot more stable and predictable if hundreds of millions of people were each paying a little (and those at the top paying a lot) than if only the people at the top were paying at all.
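The arithmetic behind that stability claim is easy to sketch. Assume (purely hypothetically) that each payer's bill wobbles independently by about 30% from year to year, and that the payers carry roughly equal weight; the wobble in the total then shrinks with the square root of the number of payers:

    PROGRAM Wiggle;
    BEGIN
      { relative wobble of the total = individual wobble / Sqrt(payers),
        assuming independent payers of roughly equal weight }
      WriteLn('100 big payers:         total wobbles by ',
              30.0 / Sqrt(100.0) : 0 : 2, '%');
      WriteLn('1,000,000 small payers: total wobbles by ',
              30.0 / Sqrt(1000000.0) : 0 : 2, '%');
    END.

A hundred big payers leave the total wiggling by 3%; a million small ones damp it to 0.03%. Real taxpayers are neither independent nor equal in weight, so the real numbers are wigglier than this toy suggests, but the direction of the effect is the point.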

And on that note, I've got dogs to walk. More tomorrow. Remember: Keep your cool! (We may all learn something if you do!)

Scarcity Leaves Its Mark

Whether or not an unexamined life is worth living, examining what goes on inside your head is a lot of fun. I’ve become interested in psychology late in life (after treating it with contempt when I was a cocksure young rationalist) and identifying my biases and tracing them back down to their sources has become a minor hobby here.

My recent study of CSS reminded me of one of those biases: I hate windowing. I just hate it, and hate it so deeply I don’t even notice the hatred anymore. If you were to look over my shoulder as I work, you’d notice that I don’t use it. Whatever app I’m working in gets the whole screen, and when you can see the desktop at all, it means I’m in neutral and nothing useful is going on. I came to this insight after practicing fluid layouts in CSS. BTW, if you’re interested in learning how to do fluid layouts, I haven’t found anything better than Nate Koechley’s Web article “Intricate Fluid Layouts in Three Easy Steps.” Nate created the Yahoo UI Grids CSS system, which I may begin using once I learn enough CSS by building things from scratch. I like YUI because it supports fixed widths. Fluid layouts are not mandatory.

This is good, as I find fluid layouts peculiarly repellent. Things like this suggest a live frog nailed to a tree, squirming in agony. (Drag the corner of the window around and you may start to see what I mean.) Part of it is my long history with fixed page layouts in magazine and book work, and part of it is a desire to focus and not be distracted by things going on in other windows. The bulk of the bias, I think, proceeds from the same reason that the Greatest Generation were tireless savers and hated to waste anything: They grew up in conditions of scarcity. I ducked the Great Depression and WWII, but I followed personal computing from its rank beginnings, when displays were 16 X 64 character text screens or worse. I learned computers starving for screen real estate.

The IBM PC gave us 24 X 80 displays, but that was never enough. Text windowing systems like TopView seemed insane to me, and back in April 1989, when I was doing the “Structured Programming” column in DDJ, I wrote and published an “anti-windowing system” that treated the crippled 24 X 80 display as a scrollable window into a much larger character grid. Full-page text displays eventually arrived: The MDS Genius 80-character X 66-line monochrome portrait-mode text display sat on my desk from 1985 through 1992, when Windows 3.1 finally made text screens irrelevant. (Lack of Windows drivers for the display soon forced MDS into liquidation.) It wasn’t until I bought a 21″ Samsung 213T display in 2005 and started running at 1600 X 1200 that I first recall thinking, “Maybe this is big enough.”

And only just barely. People who were born with a 1024 X 768 raster in their mouths may not be able to figure it, and I guess there’s really no way I can explain. It’s just me. Starve a man for screen space for thirty years, and he is unlikely to want to share what he has with more than one app at a time. Scarcity leaves its mark.

Fetishes

The original Star Trek premiered 42 years ago today. Feeling old, I went for a walk and tried to identify another pair of three-syllable homonyms and got nowhere. Viritrilbia, we need ya down here for a bit—and bring McPhee if you’ve got him.

Also on the word front, I got a note last night from a reader asking me how I define “fetish”, as my use of the word in yesterday’s entry puzzled him. I think he’s young, and maybe he’s thinking latex or bicycle seats, but not so: A fetish is a morally-neutral opinion held with peculiar force. The words “bias” and “prejudice” are now generally considered pejorative, so I had to think of something else. “Fetish” seemed to fit. We all have them, and as we get older and more willing to consider the possibility that we are not all-wise, we often begin to admit it.

My best-known fetish is the contrarian reaction to the well-known (and pretty silly) tech culture aversion to upper-case characters. Talk about a fetish: EVERYBODY KNOWS THAT UPPER-CASE CHARACTERS MEAN THAT YOU’RE SHOUTING, SO NO ONE ANYWHERE IN THE UNIVERSE SHOULD EVER EVER EVER EVER EVER EVER EVER USE THEM FOR ANYTHING EVER AGAIN!!!!!! well guys in just spring when the little lame goat-footed balloon man begins coding far and wee (in pretty-how towns like palo alto) even e. e. cummings cant figger out wtf hes trying to do especially if he does it in c {heh}

My fetish is this: Upper-case characters should be used for the framing members of program code and content markup. In Pascal, things like BEGIN, END, WHILE, REPEAT, UNTIL, IF, THEN, and so on give the program its shape. They should stand out against the general landscape of functions and variables like klieg lights. Ditto content: Markup tags should be in upper case. They need to stand out. Statistically, ordinary content text is lower case, with a sprinkling of upper-case characters so thin as to barely be there. Not being able to spot a tag in the thick of your text can make errors so hard to see that you start flip<p>ing out, whether you’re in Palo Alto or Pa<hr>ump. The whole idea is to make the structure of your work easier to see at a glance, especially when there are pages and pages of it to go through and keep correct and up to date.
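Here’s the difference at a glance, in a deliberately trivial Pascal fragment; the program computes nothing interesting and exists only to show the shape:

    PROGRAM Framing;
    VAR
      i, total: Integer;
    BEGIN
      total := 0;
      FOR i := 1 TO 10 DO
      BEGIN
        total := total + i;
        IF total > 25 THEN
          WriteLn('Crossed 25 at i = ', i)
      END;
      WriteLn('Total: ', total)
    END.

Lowercase the reserved words and the skeleton sinks back into the landscape; uppercase them and the structure is visible from across the room.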

I know I’ve lost the war, but I and others with the same fetish may have fought it well enough that the lower-case fetishists had to build the prohibition into what amount to the physical laws of content markup: XHTML absolutely will not allow upper-case characters in tags. God help us all if somebody somewhere perceived our HTML tags as SHOUTING!

And we give these people Ph.D.s, mon dieu.

(The only rational argument I’ve ever seen about this involves HTML compression, which gains you a mind-boggling 3-4% in markup file size. OMG, PONEZ!)

My other major fetish is about visual development. As our tools get better, hand-coding is increasingly a waste of time and an exercise of pure hubris. I know it’s fun, but how much will you bet that you can write better assembly code than gcc? I’m sure that I can’t, and I may know maybe a little bit about the subject. This goes triple for CSS/XHTML, which compared to modern x86 machine code are almost trivial. The field is newer than native code generation, and the tools are less mature, but the day will come when you draw the screen you want, and correct, optimized markup and styles come out the back end. We may be closer than you think, and hallelujah for that!

It’s downhill from there on the fetish side. My off-dry wine fetish is well known. I’m increasingly sure that high-fructose corn syrup lies behind most of our obesity problem. I worry that the Pope will become a serious danger to the Catholic Church, if he hasn’t already. Etc. The point is that we all have our obsessions. We may have reasons for them—or think that we do—but certain ideas put down roots in us, and after awhile it’s difficult to set them aside. The wise person watches his/her own fetishes closely, lest they become damaging in some way. Shoot for moderation in all things, especially your obsessions!

Josef Fritzl, Evil, and Dumb Luck

Most of you have probably heard by now of Josef Fritzl, an Austrian psychopath who created a custom dungeon under his home, kept his daughter a prisoner there for 24 years, and sired seven children by her. I haven't been this disturbed by a crime since the boggling case of John Wayne Gacy, right here in NW metro Chicago, who tortured, murdered and then buried 33 young men under his house back in the 1970s.

Like it or not, crimes like this prompt one to ask an ugly question: If evil like this is possible, why are we still here? Why are we not already extinct? (Those who have studied the history of the 20th Century might say it was a very near thing.) I have a theory, though I admit it's a little thin to hang the future of humanity on: It's difficult to be brilliant and evil at the same time. Evil as we define it generally comes with limitations, primarily the limitation of not being able to see yourself and your own situation very clearly.

Right Men (as described by A. E. van Vogt and Colin Wilson) are the best example: They just cannot conceive of the possibility that they are wrong. A huge number of Right Men thus never get very far in life. We see through them easily, recognize them as egomaniacal psychopaths, and do our best to avoid them. They have a bad habit of getting injured or killed in conflicts with others. Even when they somehow succeed in society to a degree, they are almost invariably humbled at some point, which is unbearable to them and often causes them to die young.

This is a good thing for us, as truly brilliant evil is extremely dangerous. What most “ordinary” evil people (like Fritzl) have that isn't often remarked upon is simple, dumb, statistical luck. Most criminals get caught eventually, and the worse their crimes are, the more likely they are to get caught. Some vanishingly rare few end up skating past justice for years and years (like Fritzl), and we only see a couple per century who are so lucky that they end up in command of armies. (Think Hitler, Mao, and Stalin.)

There are a lot of Fritzls out there. Most try evil things and get caught very quickly; you see them on the news all the time with their coats over their heads. Some get by for awhile, through a combination of luck and unusual intelligence. Only a handful are lucky enough to get away with the sort of depravity that John Wayne Gacy or Josef Fritzl got away with. Choosing an easily concealable form of evil is part of that luck, and sometimes there is a lot of cunning hard work involved. (Like creating a custom dungeon with flush toilets in the basement, or burying 33 bodies under your house without stinking up the neighborhood. I still don't entirely understand how Gacy managed that.)

As for where individual evil itself comes from, I think (against all political correctness) that it's primarily genetic. We're born along a bell curve, with Mother Teresa on one end and Stalin on the other. The optimist in me would like to think that the curve is biased toward the good. But whether or not we're evenly distributed across that bell curve, good and evil as success strategies are not symmetrical. Good is outward-looking, cooperates with others, and is generally supported by society as a whole. Evil handicaps itself in various ways. (Read Colin Wilson's A Criminal History of Mankind for hundreds of pages of examples.) Evil overestimates its chances, isolates itself, picks fights, and operates within a seriously distorted view of reality. This is fortunate, otherwise we'd long be extinct. But every now and then an evil individual gets catastrophically lucky, and we witness crimes that make us gasp. Given the huge number of moving parts in our seriously overstuffed world, this is inevitable, and the real astonishment, perhaps, lies in the fact that evil remains as rare as it is.

Clay Shirky’s Cognitive Surplus

Someone sent me a pointer to an article by Clay Shirky about the “cognitive surplus”—excess brainpower not needed for making a living. Thought-provoking in the extreme; read it if you haven't already. Shirky's thesis is that the free time created by postwar productivity gains has been sopped up by watching TV, and if we watched only a little less TV and deployed that wasted brain-time in collaborative projects, wow! The things we could do! I'm short on time today, so I'll post a few quick observations on the article and the idea here, in the hope of sparking more general conversation:

  • The consequences of the invention of gin are very well covered by Colin Wilson in his book A Criminal History of Mankind. The problem was not solved for 200 years—the urban poor drinking themselves to death was the primary reason for Prohibition—and it's still not entirely clear to me how it was solved. My guess: Universal education and technology (think TV) allowed the poor to entertain themselves in less destructive ways.
  • Do we really watch two billion hours of TV every year, or are TVs simply on for two billion hours? In many households, TVs provide a sort of background noise to prevent too much disturbing silence, but how much attention is paid to the box is a lot more difficult to measure, and I suspect Big Media would prefer that no one try.
  • TV is often watched for the many but very small time slices that it takes to catch the news or weather. Carol and I watch very little TV but The Weather Channel, and we watch it in small chunks of a few minutes a few times a day. It adds up, but converting those dispersed minutes into productive brain-time may not be possible. The problem here is not a shortage of time but a shortage of focus, which is a separate and worthy discussion.
  • Almost incredibly, Shirky does not say much about the immense cognitive energy already spent on collaborative noncommercial cognitive projects. Think Linux and all open-source software, tens of thousands of free ebooks scanned and formatted from out-of-copyright sources (and a few in-copyright sources, like some of mine, sigh) not to mention the work of all those Pixel-Stained Technopeasants who had their day (April 23) last week. This is not new news, though it's good for it to be pointed out now and then.
  • Nor is much said about the general rise in volunteering in more traditional social service pursuits, which by and large are not cognitive in nature. As problematic as I consider some of his conclusions, much good data on volunteering is cited in Arthur Brooks' Who Really Cares? Not all time and energy released from TV watching now goes into, or would go into, cognitive pursuits.
  • And the ugly truth that no one seems willing to recognize: A huge percentage of people simply don't have the ability to contribute meaningfully to cognitive projects. This may be by genetics or by adverse circumstances, but it's true nonetheless. I keep thinking that a lot of them, if there were no TV, would go back to gin or to worse things that didn't exist when gin swept the cities around 1740.

My early conclusion: There's less cognitive surplus than the article suggests, and less upside to be gained by turning away from TV. People inclined to be creative already cluster toward the bottom of the TV time curve, and a lot of the people I consider brilliant don't watch TV at all. There is almost certainly an irreducible minimum number of people who need TV as an anaesthetic, and this number may be higher than we care to admit. Loneliness, clinical depression, and other psychiatric problems dissipate and render useless an immense amount of human energy, and we don't seem any closer to solving those problems than we were in 1740. Those problems may not in fact be solvable, though saying so won't make me any friends.

Still, anything that nudges people away from the box toward creative or collaborative pursuits of any nature is a good thing. My problem at this point in my life is not TV but sleep, since I need ten hours in bed to yield eight hours of useful sleep cycles. The time I used to spend commuting I now spend wondering why I'm not asleep, and if I could lick that problem I would get a great deal more done.

Tabletop Fluoroscopy for Boys, Circa 1913

It took a few minutes of flipping through some books in my workshop, but I eventually found what I remembered: That one of my “boys” books contained a description of a tabletop X-ray setup. The book in question is The Boy Electrician, the first volume of many from Alfred Morgan, who later wrote The Boys' First Book of Radio and Electronics and its three sequels, all of which loomed large in my tinkersome youth. The Boy Electrician was originally published in 1913 and is now in the public domain. The 1913 edition has been reprinted by Lindsay Books and I consider it worth having. There was a significant revision in 1943 that added chapters on radio and a few other things, and as best I can tell, the copyright on that edition was not renewed and it too is now in the public domain. A 40 MB PDF of the 1943 edition is here.

The Boy Electrician explains that “it is possible to obtain small X-ray tubes that will operate satisfactorily on an inch and one half spark coil.” This does not refer to the coil's dimensions; it means a coil capable of generating a spark an inch and a half long. He goes on to say that X-ray tubes cost about four and a half dollars each (albeit 1913 dollars) and may be obtained from laboratory supply houses. Hookup is fairly simple, with the spark coil driven by four of those wonderfully gutsy #6 dry cells with the huge carbon rod running down the middle. The book includes a drawing of the whole setup.

Morgan explains that you can either view images directly with a fluoroscope or expose ordinary photographic plates by placing an object to be X-rayed between the tube and the plate and leaving it there for fifteen minutes. This includes things like purses, mice, or…your hand. If you have the money, he also explains that a hand-held fluoroscope may be constructed by simply coating a sheet of white paper with crystals of platinum barium cyanide. It looks like the fluoroscope screen is used by basically staring at the X-ray tube with the object to be X-rayed between the tube and the paper screen.

It would be interesting to know just how many boys bought the tube and tried to make it work, though given that $4.50 in 1913 would be about $100 today, I doubt it was many. Nor do I know how toxic platinum barium cyanide is, but I'm guessing a little more than iron filings. (On the other hand, my 1962 chemistry set contained a little bottle of sodium ferrocyanide, which sounds much worse than it actually is.)

I remember taking The Boy Electrician out of the Chicago Public Library when I was 12 or so and pondering the X-ray project. What stopped me wasn't any fear of X-rays themselves, but concern that the whomping big spark coil would wipe out TV reception for a quarter mile in every direction and get me in trouble with the FCC. My friend Art had an old Model T ignition coil, and we could hear it sizzling on Art's transistor radio for half a block. The project had to be safe; I mean, the book was in the juvenile section of the library…

We knew less about a lot of things in 1913; X-rays were in some respects the least of it. But the hazard is significant, if not as bloodcurdling as luddites specializing in radiation insist. People used to self-treat insomnia by inhaling chloroform; well-known Victorian British scientist Edmund Gurney died by falling asleep with a chloroform-soaked cloth next to his nose. We know more now, and understand the precautions a great deal better, which has led to an escalation of concern that (untempered by any grasp of statistics or risk evaluation) quickly descends to rank superstition. One has to wonder how much knowledge isn't obtained these days simply because people are afraid of small but nonzero hazards. Panic over traces of phthalates—then heedlessly drive fifty miles to a football game with a car full of kids. It's the modern way of life.