Jeff Duntemann’s Contrapositive Diary

Odd Lots

J and C - 5-27-2023

  • Our longtime friend David Stafford stopped in for an evening on 5/27, and we took him to Tutti Santi restaurant at 64th & Greenway. It’s one of our favorite eateries here, high-end Italian, and we ate on the patio. David took some photos, which turned out pretty well, as you can see above.
  • Could the ancient Greeks see the color blue? This is evidently a massive, long-term fistfight in certain circles, as ridiculous as it sounds. Mostly, the best guess is that the Greeks didn’t have a word specifically for blue tints. Matt Yglesias posted the best discussion I’ve found. It’s apparently more about linguistics than color vision.
  • I’ve posted some of my weird experiences dealing with modern AI here. AI images often have the wrong number of fingers or toes, and sometimes bizarre body proportions. Now an AI has created an entirely fictional governor of South Dakota, whose term in office was 1949-1951…in its imagination, or whateverthehell creates AI weirdness like this.
  • Carol and I have three Intel NUC computers, which are both small and quiet and yet still manage to do pretty much anything we do in terms of computing. (We are not gamers.) I’m not entirely sure why, but you can now buy a lid for a NUC machine that is a Lego base plate. I gave what Lego I had to our nieces years ago, or I’d be sorely tempted.
  • I’m a sucker for robots, so an article stack-ranking the top 100 movie robots was a must-read, even though my all-time favorite film robot, Kronos, only made it to #57. (I do agree with the very high ranking of #1, which may be my second favorite movie robot.) Some of the robots are very old and/or very obscure; I think there were fifteen or so that I’d never heard of and another four or five that I’d simply forgotten.
  • A study published in the Lancet shows that natural immunity to COVID-19 is equal to and often greater than what the supposed vaccines offer. The paper is a real slog if you’re not a researcher, hence the link to City Journal’s overview.
  • And another City Journal piece I enjoyed, about Rod Serling and some of his struggles during the rise of television as the premier form of American entertainment.
  • A cow got loose in Carol’s thoroughly suburban hometown of Niles, Illinois (just north of Chicago) and CBS News described the results as “Udder chaos.” Points for that one, guys.
  • Some lunatic stole two million dimes from the US Mint in Philadelphia. That’s not as much money as it sounds like (do the math) but the bigger problem is how to spend it. Unless you’re getting a burger and fries at McDonald’s, paying for things by the pound (of coins) will attract a great deal of unwanted attention.
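If you don’t feel like doing the math yourself, here’s a quick back-of-the-envelope sketch in Python. (The figures are mine, not from the news story; the 2.268 grams per dime is the Mint’s standard weight.)

```python
# Back-of-the-envelope on the stolen dimes. 2.268 grams is the US Mint's
# standard weight for a dime; everything else is plain multiplication.
dimes = 2_000_000
value = dimes * 0.10                  # $200,000 -- real money, but no fortune
weight_lbs = dimes * 2.268 / 453.6    # about 10,000 pounds of coins

print(f"${value:,.0f} in dimes, weighing roughly {weight_lbs:,.0f} pounds")
```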

STORMY Vs. the AI Doom Kvetchers

I follow the AI discussion to some extent (as time permits, which it hasn’t lately) and from initial amusement it’s pivoted to apprehension and doom-kvetching, as if we didn’t get a bellyful of doom-kvetching as COVID passed through. The AIs I’ve played with have had peculiar failure modes, among them claiming that 128-bit registers are larger than 512-bit registers. Numbers aren’t their forte, even down at the level of counting on their fingers, since AI image generators don’t have any clear idea how many fingers a hand is supposed to have. (More on that topic here.)

I get the impression that our current generation of AIs have their own way of proposing solutions to problems. What seems obvious to them isn’t always obvious to us. The danger, if there is any, lies in giving them more responsibility than something that can’t count fingers or toes should rightfully have.

Which brings us to an SF flash story that I wrote in 1990 and published in my magazine PC Techniques about that time. Most of my regular readers are familiar with “STORMY Vs. the Tornadoes.” It was designed as a humor piece, and a satire on the concept of AI as it was imagined thirty years ago. A few days ago I realized that I had, in a sense, predicted the future: That AIs will do ridiculous things because those ridiculous things make sense to the AIs. STORMY, a National Weather Service AI, was asked how we might reduce American tornado fatalities.

And STORMY took the question very seriously.

Here’s the whole story, for those who haven’t seen it, or haven’t read it in a long time.


STORMY Vs. the Tornadoes

By Jeff Duntemann

“Mr. Petter, in the last six months, that computer program of yours cut Federal government purchase orders for 18,000 ‘uninhabitable manufactured housing units,’ to a total of 21 million dollars.” Senator Orenby Ruesome (R., Oklahoma) sent the traitor Xerox copies skittering over the Formica tabletop.

U.S. Weather Service Programmer Grade 12 Anthony Petter winced. “Umm…you gave us the money, Senator.”

“But not for rotted-out house trailers!”

Petter sucked in his breath. “You gave us 25 million dollars to create a system capable of cutting annual US tornado fatalities in half. We spent a year teaching STORMY everything we knew about tornadoes. Every statistic, every news item, every paper ever published on the subject we fed him, and we gave him the power to set up his own PERT charts and plan his own project. Umm…I preauthorized him to cut purchase orders for items under $2000.”

“Which he did. 18,000 times. For beat-up, rotted-out, abandoned house trailers. Which he then delivered to an abandoned military base in west Nebraska a zillion miles from nowhere. And why, pray tell?”

Petter keyed in the question on the wireless terminal he had brought from his office. STORMY’s answer was immediate:

TO KEEP TORNADOES FROM KILLING PEOPLE.

Petter turned the portable terminal around so that the Senator could see it. A long pause ensued.

Ruesome puffed out his red cheeks. “Mr. Petter, be at my office at 8:00 sharp tomorrow. We’re going to Nebraska.”

The two men climbed out of the Jeep onto scrubby grass. It was July-muggy, and it smelled like rain. Petter gripped his palmheld cellular remote terminal in one hand, and that hand was shaking.

Before them on the plain lay an enormous squat pyramid nine layers high, built entirely of discolored white and pastel boxes made out of corrugated aluminum and stick pine, some with wheels, most without. The four-paned windows looked disturbingly like crossed-out cartoon eyes. Petter counted trailers around the rim of the pyramid, and a quick mental estimate indicated that they were all in there, all 18,000 of them.

A cold wind was blowing in from the southeast.

“Well, here’s the trailers. Ask your software expert what made him think stacking old trailers in the butt end of nowhere would save lives.”

Lightning flashed in the north. The sky was darkening; a storm was definitely coming in. Petter propped the wireless terminal on the Jeep’s fender and dutifully keyed in the question. The cellular link to STORMY in Washington was marginal, but it held:

TORNADOES ALWAYS SEEM TO STRIKE PLACES WHERE THERE ARE LOTS OF MOBILE HOMES.

Petter read the answer for the Senator. Ruesome groaned and kicked the Jeep hard with his pointed alligator boot. “Goldurn it, son, you call this ‘artificial intelligence?’ That silly damfool program bought up all the cheap trailers it could find and stacked them in Nebraska to get them away from tornadoes in the midwest. Makes sense, right? To a program, right? Save people who don’t live in empty trailers, right?”

The force of the wind abruptly doubled. Lightning flashed all around them, and huge thunderheads were rolling in from all points of the compass. Petter could hear the wind howling through cavities between the trailers.

“I’m sorry, Senator!” Petter shouted over the wind.

But Ruesome wasn’t listening. He was looking to the west, where a steel-grey tentacle had descended from the sky, twisting and twitching until it touched the ground. Petter looked south—and saw two more funnel clouds appear like twins to stab at the earth.

The programmer spun around. On every side, tornadoes were appearing amidst the roiling clouds, first five, then a dozen, and suddenly too many to count, all heading in defiance of the wind right toward them. The noise was deafening—and Petter could now feel through the soles of his feet that unmistakable freight-train rumble of the killer twisters.

Petter had felt all along that he had never quite asked STORMY the right question. Now, suddenly, the question was plain, and he hammered it into the terminal with shaking fingers:

STORMY: FOR WHAT PURPOSE DID YOU BUY ALL THESE TRAILERS?

The answer came back as a single word:

BAIT.

The winds were blowing him to the ground. Petter dropped the terminal and grabbed the Senator by the arm, pulling him toward a nearby culvert where the road crossed a dry creekbed. He shoved the obese man into one three-foot drainpipe, then threw himself into the other.

A moment later, the tornadoes converged on the trailers, all at once. The sound was terrifying. Petter fainted.

Both men lived. Local legend holds that it rained corrugated aluminum in Nebraska for several weeks.

And it was years before another tornado was seen anywhere in the USA.

RTL-SDR Software Defined Radio

I’ve been meaning to try software-defined radio (SDR) for a good long while. I had a suspicion that it would require some considerable research, and I was right. However, it wasn’t especially difficult or expensive to give it a shot. Amazon offers a kit that consists of an SDR USB dongle, plus some whip antennas and connecting cables. Price? $42.95. I also bought a book by the same outfit that offered the kit: The Hobbyist’s Guide to the RTL-SDR. Given that it’s 275 pages of small print at 8 1/2 x 11, I’ll be plowing through it for a while.

Of course, my first impulse is always to just run the damned thing, and do the research later. Very fortunately, the firm has a “quick start” page online, and by following its instructions (carefully) I got the product running in half an hour. The UI is reasonably well-designed:

RTL-SDR-UI

It has the waterfall display and amplitude display that you would expect, plus the ability to detect AM, NBFM, WBFM, CW, USB, LSB, DSB, and RAW. There’s a squelch and several ways of selecting the tuner frequency. There are other things that I haven’t figured out yet, but that’s also to be expected.

The software is a free download (see the Quick Start Guide) with a slightly fussy installation mechanism that runs from a batch file. The dongle has an SMA connector on its end for an antenna. The kit includes a little tabletop photo tripod carrying an adjustable whip dipole, which I set up and eyeballed to about 100 MHz. Without further ado, my favorite FM classical station, KBAQ on 89.5 MHz, was roaring out of my headphones.
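As an aside, none of this strictly requires the Windows app. Here’s a minimal sketch, assuming the pyrtlsdr Python package and the usual RTL-SDR USB drivers are installed: it tunes the dongle to KBAQ at 89.5 MHz, grabs a block of raw IQ samples, and computes a crude power spectrum, which amounts to roughly one row of the waterfall display.

```python
# A sketch only -- assumes the RTL-SDR USB drivers are installed and that
# the pyrtlsdr package (pip install pyrtlsdr) can see the dongle.
import numpy as np
from rtlsdr import RtlSdr

fs = 2.048e6          # sample rate, a common choice for these dongles
fc = 89.5e6           # KBAQ, broadcast FM

sdr = RtlSdr()
sdr.sample_rate = fs
sdr.center_freq = fc
sdr.gain = 'auto'
iq = sdr.read_samples(256 * 1024)   # raw complex IQ samples
sdr.close()

# Crude power spectrum -- roughly one row of the waterfall display
spectrum = np.fft.fftshift(np.fft.fft(iq))
power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
freqs = fc + np.fft.fftshift(np.fft.fftfreq(len(iq), d=1 / fs))
print(f"Strongest signal near {freqs[np.argmax(power_db)] / 1e6:.3f} MHz")
```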

Although the dongle can technically tune from 500 kHz to 1.7 GHz, I found that there’s a low-frequency cutoff at 24 MHz. I saw some mumbling in the book about an upconverter, but haven’t explored it yet. The implication is that it’s part of the dongle but you have to select it as an option somewhere. I’ll get to that eventually.

The software installs on Win7 and up. I have a Win10 Intel NUC box that isn’t doing anything right now, and the plan is to put it in my workshop, where I can feed the SDR with the discone I have on a mast above the garage. It’s currently down in the garage for repairs—one of the cone elements fell off. All the more reason to put it back together and get it up on the mast again.

This isn’t supposed to be a review. I need to dig into the doc a lot deeper than I have so far before I can say with any confidence how good it is. It receives broadcast FM just fine. However, like most recent Arizona construction, this is a stucco-over-chickenwire house, which means (roughly) that I’m running the SDR in a so-so Faraday cage.

I see some fun in my near future. I’ll keep you all posted on what I can make it do and how well it performs. So far, so good.

Feet Have No Excuse

(If you haven’t read my entry for April 23 yet, please do so—this entry is a follow-on, now that I’ve had a chance to do a little more research.)


AI image generators can’t draw hands worth a rat’s heiny. That’s the lesson I took away from my efforts some days ago, trying to see if any of the AI imagers could create an ebook cover image for my latest novelette, “Volare!” It wasn’t just me, and it wasn’t just the two image generators I tried. If you duckduck around the Web you’ll find a great many essays asking “Why can’t AIs draw hands and feet?” and then failing to answer the question.

The standard answer (and it’s one I can certainly accept, with reservations) is that human hands are very complicated machines with a lot of moving parts and a great many possible positions. I would argue that an infinite variety of positions is what hands are for—and are in fact the reason that we created a high-tech civilization. Even artists have trouble drawing hands, and to a lesser extent, feet. This is a good long-form tutorial on how to draw hands and feet. Not an easy business, even for us.

In photographs and drawn/painted art, hands are almost always doing things, not just resting in someone’s lap. And in doing things, they express all those countless positions that they take in ordinary and imaginary life. So if AIs are trained by showing them pictures of people and their hands, some of those pictures will show parts of hands occluded by things like beer steins and umbrella handles, or—this must be a gnarly challenge—someone else’s hands. In some pictures, it may look like hands have four fingers, or perhaps three. Fingers can be splayed, held together, or clenched against the palm. AIs are pattern matchers, and with hands and especially fingers, there are a huge number of patterns.

So faced with too many patterns, the AI “guesses,” and draws something that violates one or more traits of all hands.

The most serious flaw in this reasoning comes from elsewhere in the body: feet. In the fifty-odd images the AIs created of a barefoot woman sitting in a basket, deformed feet were almost as common as deformed hands. This is a lot harder to figure, for this reason: feet have nowhere near the number of possible positions that hands have. About the most extreme position a foot can have is curled toes. Most of the time, feet are flat on the floor, and that’s all the expressive power they have. This suggests that AIs should have no particular trouble with feet.

But they do.

I’ll grant that in most photos and art, feet are in shoes, while hands generally go naked except in bad weather or messy/hazardous work. So there are fewer images of feet to train an AI. I had an AI gin up some images this morning from the following description: “A woman sitting in a wicker basket in a nightgown, wearing ballet slippers.” I did five or six, and the best one is below:

Woman In Basket in Ballet Slippers

Her left leg seems smaller than her right, which is a different but related problem with AI images. And her hands this time, remarkably, are less grotesque than her arms. But add some ballet slippers, and the foot problem goes away. The explanation should be obvious: In a ballet slipper, all feet look more or less alike. The same is likely the case for feet in Doc Martens boots or high-top sneakers. (I may or may not ask an AI for an image of a woman in sandals, because I think I already know what I’d get.)

There were other issues with the images I got back from the two AIs I messed with, especially in faces. Even in the relatively good image above, her face seems a little off. This may be because we humans are very good at analyzing faces. Hands and feet, not so much. Defects there have to be more serious to be obvious.

Anyway. The real problem with AI image generators is that they are piecing together bits of images that they’ve digested as part of their training. They are not creating a wire-frame outline of a human body in a given position and then fleshing it out. At best they’re averaging thousands or millions of images of hands (or whatever) and smushing them together into an image that broadly resembles a human being.

Not knowing the nature of the algorithms that AI image generators use, I can’t say whether this is a solvable problem or not. My guess is that it’s not, not the way the software works today. And this is how we can spot deepfakes: Count fingers. The hands don’t lie.

AI Image Generators, Mon Dieu

I finished a 10,700-word novelette the other day, the first short fiction I’ve finished since 2008, when I wrote “Sympathy on the Loss of One of Your Legs,” now available in my collection, Souls in Silicon. I’ve mostly written novels and short novels since then. (I’ll have more to say about “Volare” in a future entry here.)

To be published, it needs a cover. I have no objection to paying artists for covers, which apart from an experiment or two (see “Whale Meat”) I’ve always done in the past. Given all the yabbjabber about AI content creation recently, I thought, “Hey, here’s a chance to see if it’s all BS.”

The spoiler: It’s not all BS, but parts of it are BS-ier than others.

Ok. I’ve tested two AI image generators: OpenAI’s DALL-E 2, and Microsoft’s Bing Image Generator. I found them through a solid article on ZDNet by Sabrina Ortiz. As it happens, Bing Image Generator outsources the process to DALL-E. I wanted to try Midjourney, and may eventually, but you have to have a paid subscription (about $8/month) to use it.

I’m not going to summarize the story here. One image I wanted to try as a cover would be the female lead sitting with her behind in a wicker basket, floating through the air at dawn a thousand feet or so over Baltimore. In both generators (which are basically the same generator) you feed the AI a detailed text description and turn it loose. I started simple: “A woman flying through the air in a wicker basket.” Edy Gagliano does precisely that in the story. What DALL-E gave me was this:

DALL·E 2023-04-23 14.46.55 - a woman flying through the air in a wicker basket - 500 Wide

Well, the woman is flying through the air, but we have a preposition problem here. She is over, not in, the basket. Good first shot, though. I tried various extensions of that basic description, to the tune of 48 images on DALL-E. I won’t post them all here for space reasons, but they ran the gamut: A woman flying through the air holding a basket, a woman flying through the air in a basket the size and shape of a bathtub, and on and on.
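If you’d rather script a batch of prompt variations than type them in one at a time, something like the sketch below would do it through OpenAI’s Python library. This assumes the pre-1.0 openai package (its Image.create call) and an API key in the OPENAI_API_KEY environment variable; it’s an illustration only, not how the images in this entry were made.

```python
# Hypothetical batch run of prompt variations against DALL-E through the
# pre-1.0 openai package. Each call returns a temporary URL to the image.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompts = [
    "A woman flying through the air in a wicker basket",
    "A barefoot woman sitting down inside a magical wicker basket "
    "that flies through the air at dawn over Baltimore",
]

for prompt in prompts:
    result = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    print(prompt)
    print("  ->", result["data"][0]["url"])
```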

The next one here is perhaps the best I’ve gotten from DALL-E. It’s a woman in a basket over Baltimore, I guess. Here’s the description: “a barefoot woman sitting down inside a magical wicker basket that flies through the air at dawn over Baltimore.” In one sense, it’s not a bad picture:

DALL·E 2023-04-23 10.05.40 - a barefoot woman sitting down inside a magical wicker basket that flies through the air at dawn over Baltimore 500 wide

That said, it looks out of focus. The basket is not wicker and it’s yuge. And in the story, Edy just puts her butt in the basket and lets her legs hang over the side.

Now let us move over to Bing Image Generator. In a way, it came closer than nearly all of the DALL-E images. But now we confront a well-known weakness of AI image generators: They can’t draw realistic hands or feet or faces. Here’s my first take on the image from Bing:

_77229ce5-3d7c-4c09-964f-b2b784ba3580 - 500 Wide

Look closely. Her hands and feet appear to be drawn by something that doesn’t know what a human hand or foot looks like. The face, furthermore, looks like it has one eye missing. (That’s easier to see in the full-sized image.)

I’ll give Bing credit: The images are less fuzzy and smeary. Because Bing uses DALL-E, I suspect there are DALL-E settings I don’t know about yet. I tried a few more times and got some reasonable images, all of them including some weirdness or other. The one below is a better rendering of a woman who is actually sitting in the basket with her legs hanging over the basket’s edge. But did I order a helicopter? Her face is a little lopsided, and her hands and feet, while not grotesque, aren’t quite right.

_090cd681-df9a-4736-8fcd-cdaafe028ae1 - 500 wide

Bing gave me about 24 images while I messed with it, and some of the images, while not capturing what I intended, were well-rendered and not full of weirdness. The one below is probably closest to Edy as I imagine her, and we get a SpaceX booster burning up in the atmosphere to boot. Is she over Baltimore? I don’t know Baltimore well enough to be sure, but that, at least, doesn’t matter. Stock photos of anonymous cities are everywhere.

_794c2ce1-7cd6-492d-9712-7e75ab646a3c - 500 wide

None of the others are notable enough to show here.

So where does this leave us? AIs can draw pictures. That’s real, and I’m guessing that if you tell one to draw something a little less loopy than a woman with her butt in a flying basket, it might do a better job. I remain puzzled why hands and feet and faces are so hard to do. Don’t AIs need training? And aren’t there plenty of photos of hands and feet and faces for them to generalize from?

I have no idea how these things are supposed to work, and if there were a good overview book on AI image generator internals, I’d buy it like a shot. In the meantime, I may practice some more and look at specific settings. If nothing else, I can produce some concept images to show to a cover artist. And maybe I’ll luck into something usable as-is.

Whatever I discover, you can count on seeing it here.

The End of the Bluecheck Blues

Yesterday was the end of the line for the Twitter bluecheck. You can still get one, but you’ll have to pay for it. And anybody who has a bluecheck but won’t pay for it will lose it, as of (ostensibly) today. If you can pay for it, you can get it. (I believe you only have to prove your identity.) It will cost you eight bucks a month. What’s that? Two lattes at Starbucks? Cheap! But as it happens, paying for it isn’t the point.

Duhhhh.

There were a couple of problems with the pre-Musk bluecheck. It was free, but bestowed only upon those judged worthy, via a process that was completely opaque apart from being politically slanted. This led to a noxious side effect: It created a sort of online aristocracy with a built-in echo chamber that dominated the whole platform.

Elites are always a problem, because they consider themselves above the condition of ordinary people and not bound by ordinary people’s limitations. On Twitter, they’re mostly just stuffed shirts who lucked into Harvard and scored a job in a newspaper somewhere. (Or just happen to be celebrities famous mostly for being famous.)

Much bluecheck dudgeon has been hurled around about having to pay for what was once free, or (way worse!) knowing that any prole with 8 bucks to spare can have the same badge. The connotation of the badge changed, from “I‘m a demigod” to “I support the new Twitter.” OMG! NO WAY! I’M LEAVING!

The big question now, of course, is whether the malcontents will actually leave the platform. We all know how many people threatened to quit Twitter when Musk bought it. The Atlantic has an interesting article on the phenomenon. (Nominally paywalled, but they will give you a couple of free articles.) I’ve posted here about the supposed mass migration from Twitter to Mastodon. Mastodon grew hugely after the beginning of the Musk era, to the annoyance of a lot of long-time Mastodoners. The open question is how thoroughly the emigrants burned their bridges as they went.

This month we may finally find out.

RIP Aero 2007-2023

Aero - Tarry-All 2010 - Best of Winnres - New Champion - 500 Wide

Our little dog Aero has left us, at 16 years 7 months. Last week he was in some sort of discomfort, and by Monday it was pretty clear that his liver and gall bladder were failing. Carol set up an appointment with our mobile vet for Wednesday morning to put him to sleep. But Tuesday noonish, I checked him as I’d been checking him for several weeks, to make sure he was still breathing.

This time, he was not.

I made sure that his heart was no longer beating, straightened his head, and with my hand on his forehead said my Prayer of Returning over him, as is our custom when dogs leave us:

From our Creator we took you;

To our Creator we return you,

That your life with us may glorify our Creator,

And in the hope that we may someday meet again.

Go with God, my good and faithful companion!

Aero on couch 2007 - 500 wide

When we bought Aero from his breeder, the late Jimi Henton, in 2007, he quickly told us his name by his ears, which as a puppy often stuck straight out either side of his head, like a plane. He was on the small side for a Bichon Frise, and a little shy, so Jimi suggested that Carol start showing him. Doing this required show grooming and a multitude of other details, but Carol bore down, mastered whatever skills were necessary, and by the spring of 2010 Aero became an AKC Champion. (See top photo.)

He was a lot of fun out in the yard, chasing cheap Wal-Mart playground balls with the rest of the pack. As soon as the balls lost enough air pressure so that Aero could push his sharp little teeth against the plastic, he went for the kill and the ball popped.

His kennel name was Champion Jimi’s Admiral Nelson. He lived longer than any other dog we have ever had, either as a couple or earlier, as kids. (Chewy came close, at 16 years 4 months.) He was a lot of fun and we will always thank God for sharing such a wonderful creature with us.

Scraps: Where Are Cheezer’s Atoms?

A couple of people have mentioned in DMs that I’ve written about this before, but they can’t find it. I have indeed written about memories gone wrong, but it was a long time ago and if you’re interested, here are the four installments:

The Impersistence of Memory, Part 1

The Impersistence of Memory, Part 2

The Impersistence of Memory, Part 3

The Impersistence of Memory, Part 4

It’s an interesting series, and certainly related, but not exactly what I’m going after with Scraps. Scraps is about earworms, true, but actually a more general category that might be called “mindworms.” It’s things that you’ve long since forgotten that pop into your head for no discernible reason and with peculiar force, as though the action were somehow deliberate. I have a theory, which I’ll get to after a few examples.


At some point during the worst of the COVID craziness, a peculiar question entered my mind: Where is Cheezer now? The weird part isn’t that I remembered a childhood toy. It’s that I remembered wondering where the toy had gotten to, probably when I was in high school fifty-plus years ago.

Cheezer was a small diecast metal car of a sort that was common in the ‘50s and early ‘60s. They still exist, but (like so much else) are much fancier now than they were when I was a first-grader. The car’s name was “Cheezer” because he was the color of American cheese, of which we ate much in that era. My sister tells me that I had several other diecast cars, none of which I can bring to mind at all.

Again, the memory was a peculiar one: Some time probably fifty years ago, I went looking for Cheezer and couldn’t find him. He used to live in one of the two little drawers in a cherry-wood gate-leg table that was in our basement. He shared the drawer with some plastic toy dinosaurs and plastic Army men. I think the Army men were still in the drawer, but Cheezer was nowhere to be found.

I did some searching around the house but did not find poor Cheezer. He had some sentimental value, and I was a little annoyed at being unable to find him. And then a peculiar insight came to me: Cheezer might be lost, but he was somewhere. Or at least the atoms of which he was made were somewhere, because absent an atom-smasher, atoms are forever.

Don’t misread me here. The odd thing isn’t that I remembered a toy I probably hadn’t seen in sixty years. The odd thing is clearly remembering myself wondering where he was, right down to that nerdy insight about his atoms. Nothing triggered that memory. I marked it as silly but it kept coming back to me. Ok, COVID was a weird business that left a lot of people with a certain amount of mental turmoil. Maybe COVID was stirring some stagnant internal pot, and up popped an old and odd state of mind starring a toy car.

Except…the very same thing had happened before, more than once, and long before COVID. More here as time permits.

I’m guessing now that Cheezer was in a box somewhere that my mother threw away once it was clear that I would no longer be playing with diecast cars and plastic Army men. So he’s in a landfill somewhere, atoms and all.

One has to wonder if our brains are like landfills full of ancient states of mind, and every so often one jumps up and says, “Hi!”

Again, stay tuned.