Jeff Duntemann's Contrapositive Diary

April, 2023:

RTL-SDR Software Defined Radio

I’ve been meaning to try software-defined radio (SDR) for a good long while. I had a suspicion that it would require some considerable research, and I was right. However, it wasn’t especially difficult or expensive to give it a shot. Amazon offers a kit that consists of an SDR USB dongle, plus some whip antennas and connecting cables. Price? $42.95. I also bought a book from the same outfit that offered the kit: The Hobbyist’s Guide to the RTL-SDR. Given that it’s 275 pages of small print at 8 1/2 x 11, I’ll be plowing through it for a while.

Of course, my first impulse is always to just run the damned thing, and do the research later. Very fortunately, the firm has a “quick start” page online, and by following its instructions (carefully) I got the product running in half an hour. The UI is reasonably well-designed:

[Screenshot: the RTL-SDR software UI]

It has the waterfall display and amplitude display that you would expect, plus the ability to demodulate AM, NBFM, WBFM, CW, USB, LSB, DSB, and RAW. There’s a squelch and several ways of selecting the tuner frequency. There are other things that I haven’t figured out yet, but that’s also to be expected.

The software is a free download (see the Quick Start Guide) with a slightly fussy installation mechanism that runs from a batch file. The dongle has an SMA connector on its end for an antenna. The kit includes a little tabletop photo tripod that can carry an adjustable whip dipole, which I put on the tripod and eyeballed to roughly the right length for 100 MHz. Without further ado, my favorite FM classical station, KBAQ on 89.5 MHz, was roaring out of my headphones.
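For the programmers in the audience: the dongle can also be driven without the Windows app at all. Below is a minimal sketch of tuning KBAQ and demodulating wideband FM, assuming the third-party pyrtlsdr Python package (not part of the kit's software):

```python
import numpy as np
from rtlsdr import RtlSdr  # pip install pyrtlsdr

sdr = RtlSdr()
sdr.sample_rate = 1.2e6    # complex samples per second
sdr.center_freq = 89.5e6   # KBAQ, broadcast FM
sdr.gain = 'auto'

# Grab a short burst of complex (IQ) samples, then release the dongle.
iq = sdr.read_samples(256 * 1024)
sdr.close()

# Classic quadrature FM demodulator: the phase difference between
# successive samples is (proportional to) the audio waveform.
audio = np.angle(iq[1:] * np.conj(iq[:-1]))
# To actually listen, you'd low-pass filter and decimate this down
# to soundcard rates (e.g., 48 kHz).
```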

Although the dongle can technically tune from 500 kHz to 1.7 GHz, I found that there’s a low-frequency cutoff at 24 MHz. I saw some mumbling in the book about an upconverter, but haven’t explored it yet. The implication is that it’s built into the dongle but you have to select it as an option somewhere. I’ll get to that eventually.
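For the record, the arithmetic behind an upconverter is simple: mix the low-frequency signal up above the dongle's 24 MHz cutoff, then have the software subtract the same offset so the display reads true. A sketch, assuming a 125 MHz local oscillator (the value used by the popular Ham It Up upconverter; whether this kit uses the same value I don't yet know):

```python
LO_MHZ = 125.0       # assumed local-oscillator frequency (Ham It Up value)
target_mhz = 7.2     # a 40-meter ham frequency, well below the 24 MHz cutoff

# The upconverter shifts the signal above the cutoff...
dial_mhz = target_mhz + LO_MHZ   # the dongle actually tunes 132.2 MHz

# ...and the SDR software applies a matching negative shift so the
# display reads the true frequency.
print(f"Tune the dongle to {dial_mhz} MHz with a -{LO_MHZ:.0f} MHz offset set in software")
```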

The software installs on Win7 and up. I have a Win10 Intel NUC box that isn’t doing anything right now, and the plan is to put it in my workshop, where I can feed the SDR with the discone I have on a mast above the garage. The discone is currently down in the garage for repairs—one of the cone elements fell off. All the more reason to put it back together and get it up on the mast again.

This isn’t supposed to be a review. I need to dig into the doc a lot deeper than I have so far before I can say with any confidence how good it is. It receives broadcast FM just fine. However, like most recent Arizona construction, this is a stucco-over-chickenwire house, which means (roughly) that I’m running the SDR in a so-so Faraday cage.

I see some fun in my near future. I’ll keep you all posted on what I can make it do and how well it performs. So far, so good.

Feet Have No Excuse

(If you haven’t read my entry for April 23 yet, please do so—this entry is a follow-on, now that I’ve had a chance to do a little more research.)


AI image generators can’t draw hands worth a rat’s heiny. That’s the lesson I took away from my efforts some days ago, trying to see if any of the AI imagers could create an ebook cover image for my latest novelette, “Volare!” It wasn’t just me, and it wasn’t just the two image generators I tried. If you duckduck around the Web you’ll find a great many essays asking “Why can’t AIs draw hands and feet?” and then failing to answer the question.

The standard answer (and it’s one I can certainly accept, with reservations) is that human hands are very complicated machines with a lot of moving parts and a great many possible positions. I would argue that an infinite variety of positions is what hands are for—and is in fact the reason that we created a high-tech civilization. Even artists have trouble drawing hands, and to a lesser extent, feet. There are good long-form tutorials online on how to draw hands and feet. Not an easy business, even for us.

In photographs and drawn/painted art, hands are almost always doing things, not just resting in someone’s lap. And in doing things, they express all those countless positions that they take in ordinary and imaginary life. So if AIs are trained by showing them pictures of people and their hands, some of those pictures will show parts of hands occluded by things like beer steins and umbrella handles, or—this must be a gnarly challenge—someone else’s hands. In some pictures, it may look like hands have four fingers, or perhaps three. Fingers can be splayed, or held together and clenched against the palm. AIs are pattern matchers, and with hands and especially fingers, there are a huge number of patterns.

So faced with too many patterns, the AI “guesses,” and draws something that violates one or more traits of all hands.

The most serious flaw in this reasoning comes from elsewhere in the body: feet. In the fifty-odd images the AIs created of a barefoot woman sitting in a basket, deformed feet were almost as common as deformed hands. This is a lot harder to figure, for this reason: feet have nowhere near the number of possible positions that hands have. About the most extreme position a foot can have is curled toes. Most of the time, feet are flat on the floor, and that’s all the expressive power they have. This suggests that AIs should have no particular trouble with feet.

But they do.

I’ll grant that in most photos and art, feet are in shoes, while hands generally go naked except in bad weather or messy/hazardous work. So there are fewer images of bare feet to train an AI on. I had an AI gin up some images this morning from the following description: “A woman sitting in a wicker basket in a nightgown, wearing ballet slippers.” I did five or six, and the best one is below:

[Image: woman in a basket wearing ballet slippers]

Her left leg seems smaller than her right, which is a different but related problem with AI images. And her hands this time, remarkably, are less grotesque than her arms. But add some ballet slippers, and the foot problem goes away. The explanation should be obvious: in a ballet slipper, all feet look more or less alike. The same is likely the case for feet in Doc Martens boots or high-top sneakers. (I may or may not ask an AI for an image of a woman in sandals, because I think I already know what I’d get.)

There were other issues with the images I got back from the two AIs I messed with, especially in faces. Even in the relatively good image above, her face seems a little off. This may be because we humans are very good at analyzing faces. Hands and feet, not so much. Defects there have to be more serious to be obvious.

Anyway. The real problem with AI image generators is that they are piecing together bits of images that they’ve digested as part of their training. They are not creating a wire-frame outline of a human body in a given position and then fleshing it out. At best they’re averaging thousands or millions of images of hands (or whatever) and smushing them together into an image that broadly resembles a human being.
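For what it’s worth, OpenAI has said publicly that DALL-E 2 is a diffusion model: it starts with pure noise and iteratively “denoises” it toward something matching the prompt. Even a toy sketch of that loop shows the point (the real denoiser is an enormous trained network; the placeholder below just shows the shape of the process):

```python
import numpy as np

def denoiser(x, step):
    """Stand-in for the trained network. The real one predicts the noise
    present in x at this step using only statistical patterns absorbed
    from training images -- no skeleton, no anatomy model anywhere."""
    return np.zeros_like(x)  # placeholder prediction

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64, 3))  # start from a canvas of pure noise
steps = 50
for step in reversed(range(steps)):
    predicted_noise = denoiser(x, step)
    x = x - predicted_noise / steps   # toy update rule; real samplers are fancier

# Nowhere in this loop is a 3D body being posed and fleshed out. The image
# emerges from learned pixel statistics, which is why a hand can come out
# with six fingers and still "look like" the training data locally.
```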

Not knowing the nature of the algorithms that AI image generators use, I can’t say whether this is a solvable problem or not. My guess is that it’s not, not the way the software works today. And this is how we can spot deepfakes: Count fingers. The hands don’t lie.

AI Image Generators, Mon Dieu

I finished a 10,700-word novelette the other day, the first short fiction I’ve finished since 2008, when I wrote “Sympathy on the Loss of One of Your Legs,” now available in my collection, Souls in Silicon. I’ve mostly written novels and short novels since then. (I’ll have more to say about “Volare!” in a future entry here.)

To be published, it needs a cover. I have no objection to paying artists for covers, which apart from an experiment or two (see “Whale Meat”) I’ve always done in the past. Given all the yabbjabber about AI content creation recently, I thought, “Hey, here’s a chance to see if it’s all BS.”

The spoiler: It’s not all BS, but parts of it are BS-ier than others.

OK. I’ve tested two AI image generators: OpenAI’s DALL-E 2, and Microsoft’s Bing Image Generator. I found them through a solid article on ZDNet by Sabrina Ortiz. As it happens, Bing Image Generator outsources the process to DALL-E. I wanted to try Midjourney, and may eventually, but you have to have a paid subscription (about $8/month) to use it.

I’m not going to summarize the story here. One image I wanted to try as a cover would be the female lead sitting with her behind in a wicker basket, floating through the air at dawn a thousand feet or so over Baltimore. In both generators (which are basically the same generator) you feed the AI a detailed text description and turn it loose. I started simple: “A woman flying through the air in a wicker basket.” Edy Gagliano does precisely that in the story. What DALL-E gave me was this:

[Image: DALL-E output for “a woman flying through the air in a wicker basket”]

Well, the woman is flying through the air, but we have a preposition problem here. She is over, not in, the basket. Good first shot, though. I tried various extensions of that basic description, to the tune of 48 images on DALL-E. I won’t post them all here for space reasons, but they ran the gamut: a woman flying through the air holding a basket, a woman flying through the air in a basket the size and shape of a bathtub, and on and on.
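Incidentally, you don’t need the web UI to crank out images in bulk; there’s an API. A minimal sketch, assuming OpenAI’s openai Python package as it stood in spring 2023 (image count and size here are just illustrative):

```python
import os
import openai  # pip install openai (the pre-1.0, spring-2023 interface)

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a woman flying through the air in a wicker basket",
    n=4,              # several variations per request
    size="512x512",
)

for i, item in enumerate(response["data"]):
    print(i, item["url"])  # each URL points to a generated image
```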

The next one here is perhaps the best I’ve gotten from DALL-E. It’s a woman in a basket over Baltimore, I guess. Here’s the description: “a barefoot woman sitting down inside a magical wicker basket that flies through the air at dawn over Baltimore.” In one sense, it’s not a bad picture:

[Image: DALL-E output for “a barefoot woman sitting down inside a magical wicker basket that flies through the air at dawn over Baltimore”]

That said, it looks out of focus. The basket is not wicker, and it’s yuge. And in the story, Edy just puts her butt in the basket and lets her legs hang over the side.

Now let us move over to Bing Image Generator. In a way, it came closer than nearly all of the DALL-E images. But now we confront a well-known weakness of AI image generators: They can’t draw realistic hands or feet or faces. Here’s my first take on the image from Bing:

[Image: first Bing Image Generator attempt]

Look closely. Her hands and feet appear to be drawn by something that doesn’t know what a human hand or foot looks like. The face, furthermore, looks like it has one eye missing. (That’s easier to see in the full-sized image.)

I’ll give Bing credit: the images are less fuzzy and smeary. Because Bing uses DALL-E, I suspect there are DALL-E settings I don’t know about yet. I tried a few more times and got some reasonable images, all of them including some weirdness or other. The one below is a better rendering of a woman who is actually sitting in the basket with her legs hanging over the basket’s edge. But did I order a helicopter? Her face is a little lopsided, and her hands and feet, while not grotesque, aren’t quite right.

[Image: Bing rendering of the woman seated in the basket, plus an unrequested helicopter]

Bing gave me about 24 images while I messed with it, and some of the images, while not capturing what I intended, were well-rendered and not full of weirdness. The one below is probably closest to Edy as I imagine her, and we get a SpaceX booster burning up in the atmosphere to boot. Is she over Baltimore? I don’t know Baltimore well enough to be sure, but that, at least, doesn’t matter. Stock photos of anonymous cities are everywhere.

[Image: Bing rendering closest to Edy as imagined, with a booster burning up overhead]

None of the others are notable enough to show here.

So where does this leave us? AIs can draw pictures. That’s real, and I’m guessing that if you ask one to draw something a little less loopy than a woman with her butt in a flying basket, it might do a better job. I remain puzzled why hands and feet and faces are so hard to do. Don’t AIs need training? And aren’t there plenty of photos of hands and feet and faces from which to generalize?

I have no idea how these things are supposed to work, and if there were a good overview book on AI image generator internals, I’d buy it like a shot. In the meantime, I may practice some more and look at specific settings. If nothing else, I can produce some concept images to show to a cover artist. And maybe I’ll luck into something usable as-is.

Whatever I discover, you can count on seeing it here.

Odd Lots

The End of the Bluecheck Blues

Yesterday was the end of the line for the free Twitter bluecheck. You can still get one, but now you have to pay for it, and anybody who has a bluecheck but won’t pay will lose it as of (ostensibly) today. Beyond paying, I believe you only have to prove your identity. It will cost you eight bucks a month. What’s that? Two lattes at Starbucks? Cheap! But as it happens, paying for it isn’t the point.

Duhhhh.

There are a couple of problems with the pre-Musk bluecheck. It was free, but bestowed only upon those judged worthy, via a process that was completely opaque apart from being obviously politically slanted. This led to a noxious side effect: it created a sort of online aristocracy with a built-in echo chamber that dominated the whole platform.

Elites are always a problem, because they consider themselves above the condition of ordinary people and not bound by ordinary people’s limitations. On Twitter, they’re mostly just stuffed shirts who lucked into Harvard and scored a job in a newspaper somewhere. (Or just happen to be celebrities famous mostly for being famous.)

Much bluecheck dudgeon has been hurled around about having to pay for what was once free, or (way worse!) knowing that any prole with 8 bucks to spare can have the same badge. The connotation of the badge changed, from “I‘m a demigod” to “I support the new Twitter.” OMG! NO WAY! I’M LEAVING!

The big question now, of course, is whether the malcontents will actually leave the platform. We all know how many people threatened to quit Twitter when Musk bought it. The Atlantic has an interesting article on the phenomenon. (Nominally paywalled, but they will give you a couple of free articles.) I’ve posted here about the supposed mass migration from Twitter to Mastodon. Mastodon grew hugely after the beginning of the Musk era, to the annoyance of a lot of long-time Mastodoners. The open question is how thoroughly the emigrants burned their bridges as they went.

This month we may finally find out.