Jeff Duntemann's Contrapositive Diary Rotating Header Image

software

The Terror of Maturity

No, not the maturity of people. Nor pets. I’m talking about an industry, one I’ve followed all of my adult life: personal computing.

Yes, personal computing is mature. This doesn’t mean that there’s no further progress to be made in advancing the technology. What it means is that for the majority of people, damn near any modern computer will meet their needs, from the standpoint of both hardware and software. This fact scares the living hell out of the major industry players.

This is not some sort of sudden industry earthquake. It’s a slow, gradual process that draws in more and more people as the state of the art improves and becomes everything they need. I have a friend who still uses Office 97, and another who still uses Office 2000. I myself stuck with Office 2000 until 2010 or so. Why? It did everything I needed. I bought Office 2007 for its ability to add comments to documents, and later looked at Office 2013, but I see nothing there that I need and don’t have. I paid a box shop to build me a Win 7 quadcore tower in 2011 that was my primary desktop machine until it started getting flaky in 2023. I bought a name-brand 8-core mini-tower running Win10, and scrapped the quadcore. As I’ve mentioned here on Contra, I recently bought a 10-core Win11 Dell machine when updates for Win 10 were scheduled to end.

It’s unclear whether Microsoft made Windows 11 incompatible with older hardware to force hardware upgrades; after all, Microsoft does not sell computers. They can force existing Windows users to upgrade the OS by dropping security fixes for the older release. And they can sell current Windows installs to the makers of ready-to-run computers, by fiat. So to some extent, Windows can force upgrades.

This is less true of application software providers. I learned page layout on Adobe InDesign 1.0, way back in the ‘90s. I bought each upgrade through Creative Suite 2, and I’m still running CS2. Newer versions will export ebook layouts, but I’ve already got Jutoh, a dedicated ebook layout package, which does everything newer InDesign versions do regarding ebooks, and more.

I’ll buy software upgrades when they provide something new or better than what I already have. What I won’t buy is software as a service (SaaS), which is basically a magazine subscription for your software: you can keep using it only as long as you keep paying, forever. Microsoft and Adobe both have SaaS applications. No sale. There are MS Office lookalikes, and if Adobe does something stupid and remotely disables my copy of CS2, there is Affinity, now owned by Canva and currently available for…free. Affinity is offered by Canva as a front end for Canva, and it can also do graphics design and photo editing, making it a peer of Adobe’s Creative Suite. Sure, Adobe offers more apps in its suite. But…do you really need all that other stuff?

Then there’s the issue of software upgrades that mostly sell other stuff. A lot of people are starting to wonder whether the primary purpose of Windows 11 is to sell people on cloud-based storage and other online services, which are SaaS and (as far as I’m concerned) an immense privacy risk.

Game software (at least the puzzle games that I use) approximates SaaS by forcing you to watch an ad before each game, or at some juncture within a game. The Wordscapes crossword game offers ad-free play for a price, and for a set period of time. When that runs out, the ads begin again until you fork over more cash. After playing it and enjoying it despite its ads, I paid for Nut Sort, a color-keyed variation on the Towers of Hanoi puzzle, and no longer have to watch ads before each new game. However, if you want to tweak the game by adding another bolt, you must watch an ad before you get the bolt.

Both games have what I consider a fatal flaw: If for whatever reason the games’ providers can’t push down an ad…the game stops. Mercifully, it doesn’t happen often. If it happened more often I’d say the hell with them and uninstall their games. The providers are punishing game players for something the players have absolutely no control over; that is, nobody buying ads for the providers to show to players.

Summing up: In times past, people bought new computers and new software versions fairly often, because the technology was getting better at a furious pace. This is still true to an extent for smartphones and tablets, and the more people use their smartphones, the oftener they’ll buy newer and more feature-rich models. I use mine for phone calls, by and large, along with weather radar and a small handful of other things used only occasionally. My tablets are mostly ebook readers.

I’m not spending a great deal of money on hardware and software anymore. I’m not alone, and the industry is terrified of us. I’ve not even touched on open-source software like Linux, to which a lot of people are moving in the wake of Win 11. If they weren’t afraid of it in the 1990s, they are damned well afraid of it now. Will that fear change their business models? Probably not.

Let’s watch.

Odd Lots

  • This (scary) item is the most significant I’ve seen recently: Microsoft is working on features that would grant a Windows AI its own private workspace on your machine, plus access to your Documents, Downloads, Desktop, Videos, and Music folders. This will go nowhere good. Keep it in mind, and if MS asks for permission to enable this feature, weigh the consequences. MS admits the damned thing could install malware and have hallucinations. Huh. I won’t use a computer that thinks I’m dead.
  • There’s a cool group on Facebook called Old Radio Garage. Lots of pictures of tube-era radios, including a few on the bench being repaired, but not a lot of discussion.
  • Speaking of radio, I (finally) took a closer look at AccuRadio, a free music streaming service that offers bits’n’pieces of almost everything musical. It takes a little study to find your preferences, but I was amazed at the breadth of coverage. You have to create a free account to avoid most commercials and get access to some features, but I think the benefits are worth it.
  • Google is evidently in the process of merging Android and ChromeOS into an OS called Aluminium. (No, I didn’t misspell that. It’s the British spelling.) The Aluminium OS will evidently have AI all over it, inside and out. Gosh, I just can’t wait to pass on it!
  • We have AA, AAA, C, and D batteries. Why not B batteries? Reader’s Digest has a short-form explanation. What they don’t emphasize is that B batteries providing high-ish DC voltage to portable tube radios never had a standard size, not that I’ve ever heard of. I bought a 45-volt battery when I was 12 or 13 for a tube radio I was building, and it was like a long 9V battery, with the same power connectors, just more cells stacked up inside the rectangular case. I later saw all sorts of “B” batteries (most of them dead) in many shapes and voltages. Given the broad range of radios that would use it, a standard size and voltage would be impossible, which in truth explains all that needs explaining.
  • Lazarus v4.4 is out, built with Free Pascal 3.2.2. It’s a bugfix release, but hey, there’s no reason not to be up to date. It’s worked great on my several Lazarus projects under Windows 11.
  • I used to call Free Pascal FreePascal, but that’s no longer how the product’s creators spell it. Free Pascal it is. Sooner or later I’ll update FreePascal from Square One to reflect that spelling.
  • And least but not last (ok, both least and last): Politico posted a gigantic, high-fat article about a crew called Stardust who want to make chemtrails real, in essence squirting air pollution back into the atmosphere to reflect sunlight and cool the planet. This is not a new idea, and not necessary, especially since Stardust refuses to say what the particulates they want to squirt into the atmosphere are made of. There is no climate crisis. Polluting the atmosphere with unknown crap is a scam. Don’t fall for it.

Bye Bye 4 By 3 By (Win) 10

I’m feeling old. And it’s not that I’m all that old, but this past week I realized that I had been doing personal computing for a very long time. I’ve been trying to figure out how to deal with Microsoft abandoning support for Windows 10, so the other day I (almost on impulse) bought a new Dell Win 11 desktop. I got it home, connected it to my primary keyboard and mouse, and gave Win 11 a spin.

I honestly don’t understand why Microsoft keeps screwing around with its UIs. The Win 11 desktop is no better than Win 10’s, and in many ways quirky enough to demand close attention to what you’re doing, or trying to do. I’m sure Win 11 has improvements in terms of security and use of resources and other back-of-the-screen stuff, but why the hell do I have to learn the UI all over again?

The biggest question was whether the software I depend on would even run on it. A number of (ok, ancient) utilities refused to run on Windows 7, which will probably always remain my favorite version. So I installed a few significant packages, and they all worked just fine. Plus, the new machine has an SD card slot in the case near the USB ports, which my older (but not ancient) OptiPlex 5070 does not. I also found that the new machine did not have a speaker audio port on the back panel. That irritated me at first, but I now understand why it isn’t there. (More on this later.)

As I always do, I popped the side panel and took a look inside, figuring I’d order an M.2 SSD for the empty slot. Except…there was no empty slot. There was one M.2 slot, with a terabyte SSD in it. The machine was misrepresented by a salesperson: She said it had two M.2 slots, and that one of them was empty. It wasn’t empty. It simply wasn’t there.

And another thing wasn’t there: The machine could not drive my Samsung 214T 21” 1600 x 1200 4:3 flat-panel monitor, which I bought around 2007. It didn’t support that resolution at all. My Win 10 machine has no trouble with 1600 x 1200. The new Win 11 machine, I discovered, was configured for widescreen 1920 x 1080. I swapped in my only widescreen monitor and boom! There was 1920 x 1080. It would do lower resolutions, but most of those were not 4:3.
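The mismatch is plain arithmetic: reduce each resolution by its greatest common divisor and you get the aspect ratio. A quick Python sketch, with aspect() being just a throwaway helper of mine:

```python
from math import gcd

# Reduce a width x height resolution to its lowest-terms aspect ratio.
def aspect(w, h):
    g = gcd(w, h)
    return f"{w // g}:{h // g}"

print(aspect(1600, 1200))  # 4:3  -- the old Samsung 214T
print(aspect(1920, 1080))  # 16:9 -- the new machine's default
```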

By now I was in part disappointed and in part annoyed. My older widescreen monitor is a Dell 22” diagonal and works very well. But it’s not especially large, and I wanted something bigger to make the type larger and reduce eyestrain. I require at least two mass-storage units in my primary machine, so after two days of messing with the Win 11 box, I uninstalled the half-dozen packages I had installed, put it back in its box, and trucked it back to the store. As I expected, they accepted the return, and were very courteous about it.

While I was there, I took a close look at a larger Dell monitor, an S2725H. It’s a 27” diagonal, and has almost no bezel around the top and sides. It’s basically all screen but for a small strip on the bottom edge, so I had no trouble fitting it into my computer table setup, which includes the 5070 mini-tower and a laser printer plus other odd junk. It was inexpensive and can display two manuscript pages side by side. Video adjustments are done with a sort of mini-joystick: a little nubbin on the back of the monitor selects which aspect you want to change, followed by a line graph showing how much. Push the nubbin in the right direction and you’re there. Push down on the nubbin to press Enter. Clever, and a lot easier to use than I expected.

My venerable 214T has DVI input, and once I bought my Win 10 machine several years ago (2021?) I needed an HDMI-to-DVI adapter. DVI is long extinct; desktops now use either DisplayPort or HDMI, or in some cases both. What I guess I knew in the back of my head but didn’t think about in terms of personal computers is that HDMI (and DisplayPort) carry sound as well as video. And yup, inside that new monitor is a pair of formidable high-fidelity speakers. Heretofore I had used a cheap set of mini-speakers that sounded, well, cheap and small. Once I played several classical MP3s and some videos through the new monitor, the sound was terrific compared to what I’d had before.

So that’s why there was no little green audio jack on the back panel of the Win 11 machine: it had a headphone jack on the front panel, but all the speaker audio went out through HDMI.

I learned a few things in this recent adventure:

  • Don’t buy a computer on impulse. Research the hell out of it before you slap down your credit card. Dell, at least, has all of its manuals available for free download as PDFs. Look for machines that appeal to you and then go through their manuals. Repeat until you find the one you like most and that will do the jobs you need it to do.
  • A lot of monitors, by now probably most of them, contain stereo speakers. Audio comes out the same cable video does.
  • 4:3 monitors, like rear-panel audio jacks, are extinct.
  • Win 11 is inevitable, as much as I’d prefer it to be seriously evitable.

I’m still wrassling with the last point. I suspect I will run a full backup on the 5070 sometime soon and install 11 on it. It does what I need it to do. I only hope and pray that Win 11 won’t hide anything important or paint me into any corners. We’ll see.

The NYT Vs. ChatGPT

You may have seen this story come up over the last year and change: The New York Times is suing OpenAI, creator of ChatGPT, for copyright infringement. Earlier this year, a federal judge ruled that the lawsuit can move forward. And now—good grief!—the Times is demanding that OpenAI save all discussions people have with ChatGPT. All of them. The whole wad—even conversations that people have deleted.

You want a privacy violation? They’ll give you a privacy violation, of a sort and at a scale that I’ve not seen before. The premise is ridiculous: The Times suspects that people who delete their conversations with ChatGPT have been stealing New York Times IP, and then covering it up to hide the fact that they were stealing IP. After all, if they weren’t stealing IP, why did they delete their conversations?

Privacy as the rest of us understand it doesn’t enter into the Times’ logic at all. The whole business smells of legal subterfuge; that is, to strengthen their copyright infringement case, they’re blaming ChatGPT users. I’ve never tried ChatGPT, and I’m certainly not going anywhere near it now. But this question arises: If a user asks an AI for an article on topic X, does the AI bring back the literal article? Golly, Google does that right now, granting that Google respects paywalls. Can ChatGPT somehow get past a paywall? I rather doubt it. If the Times wants to go after something that does get past its paywall, it had better go after archive.is, over in Iceland. I won’t say much more about that, as it does get past most paywalls and is almost certainly massive copyright infringement.

And all this brings into the spotlight the central question about commercial AI these days: How do AIs use their training data? I confess I don’t fully understand that. This article is a good place to start. Meta’s Llama 3.1 70B was able to cough up 42% of Harry Potter and the Sorcerer’s Stone, though not in one chunk. Meta’s really big problem is that it trained Llama on 81.7 terabytes of pirated material torrented from “shadow libraries” like Anna’s Archive, Z-Library, and LibGen, and probably other places. I consider these pirate sites, not as blatant as The Pirate Bay, but pirate sites nonetheless.

I’m still looking for a fully digestible explanation of how training an AI actually works, but that’ll come around eventually.

So how might an AI be trained without using pirated material? My guess is that the big AI players will probably cut deals with major publishers for training rights. A lot of free stuff will come from small Web operators, who don’t have the resources to negotiate a deal with the AI guys. Most of them probably won’t care. In truth, I’d be delighted if AIs swallowed Contra’s 3500+ entries in one gulp. Anything that has my name in it will make the AI more likely to cite me in answer to user questions, and that’s all I’ll ask for.

Ultimately, I’m pretty sure Zuck will cut a deal with NYT, WaPo, the Chicago Trib, and other big IP vendors. Big money will change hands. Meta will probably have to charge people to use Llama to pay off IP holders, and that’s only right.

But lordy, this is a supremely weird business, and I’m pretty sure the bulk of the weirdness is somehow hidden from public scrutiny. Bit by bit it will come out, and I (along with a lot of you) will be watching for it.

Odd Lots

  • An AI company went bankrupt after it came out that its supposed “vibe-coding” AI was actually a team of 700 Indian software engineers. Even Microsoft fell for it, and threw gobs of money at them. Which brings up an interesting question: How do prospective investors (or anyone else) know where (and even what) an AI is?
  • Not everyone thinks that vibe coding is slam-dunk easy. A Stanford prof does it, and says it’s intellectually exhausting—just like “manual” coding is. The trick with all AI work is knowing how to create the prompts that will deliver the desired results. Although I’ve not tried vibe coding with an AI yet, my experience with text and images suggests that “prompt engineering” is the real challenge, and to me, prompt engineering looks like programming in yet another English-like language.
  • Meta signed a 20-year deal with Constellation Energy, to help fund new nuclear generation capacity, starting with a plant in Illinois. If carbon is indeed the problem, nuclear is the solution. I have had some peculiar experiences with AI over the past months, but I’m willing to root for AI as a way to bring nuclear power back from the grips of those Atomic Scientists who simply can’t force themselves to go fission.
  • In case you missed it: Lazarus 4.0 is out. Compatibility with Delphi is high. The 4.0 system was built using FreePascal 3.2.2. Go get it here.
  • Here’s a wonderful short-ish article on Mark Twain’s rowdy early years in (bogglingly) rowdy Virginia City, Nevada.
  • Today is the semiquincentennial of the US Army. Also the sestercentennial. Oh, and the bisesquicentennial too. Big words rock. I love ’em. And next year will be all those big words for the US itself, not to mention Carol’s and my 50th wedding anniversary.
  • Classmates continues to send me nonsense. I supposedly have a private message waiting from a Maria G., who was in De Paul University’s Class of 1971. (The message was posted in 2007.) Well, I have that yearbook, and she’s not in it. (Her name, which I won’t quote here, is very unusual and I have been unable to find her online.) Some years back Classmates asked me if I knew a girl named Linda something, who was in the Lane Tech Class of 1970, like me. Uh…no. Lane was an all-boy school until a couple of years after I graduated. I gave Classmates money once. I won’t be giving them money again. They make up stuff like a…like an AI.
  • Well, as far as I’m concerned, the famous TED talks are now over. An Australian prof who did all the necessary research was tossed out of the TED universe for a presentation that cast doubt on the perpetrators of useless COVID reactions like lockdowns, and showed evidence that the not-really-a-vaccine (you can get it and spread it!) caused more harm than good. TED stated right out that criticism of political and health leaders was verboten. Read the whole thing. (H/T to Sarah Hoyt for the link.)

The Real Danger of Textual AI

Now that I have The Everything Machine on the market, I can go back to a couple of other long-term projects. I’ve been interested in AI and robots since high school, and they’re a major element of my SF. All of my AI short stories are gathered in my book Souls in Silicon. (The stories are almost all upbeat, and a couple of them are pure humor.) The AIs in Souls in Silicon are not what we call AI today. I’m starting to hear a new term: Artificial General Intelligence (AGI), which makes more sense in terms of science-fictional AIs. AGI is AI that can reason and deal with reality pretty much the same way human minds do. Will this ever happen? I doubt it. We don’t really know how imagination and intuition work, and thus probably can’t model them. What we have today are not AGI but large language models (LLMs). I had the insight this morning, after playing around with Grok, that the better LLMs are, the more dangerous they are.

They are definitely getting better. And they are already pretty dangerous.

This time around, I’ve been testing Grok, the AI component of X/Twitter. My tests consist of asking it questions with verifiable answers. (I have not yet tested Grok’s image generator. Patience, patience.) Most of these tests are of the form, Who is XXXXXXXXX? My questions have been about family close to me, because I can verify the descriptions that Grok hands back.

Grok did a fair job with the question Who is Carol Duntemann? Its answer:


Carol Duntemann, born Carol Ostruska on June 1, 1953, is the wife of Jeff Duntemann, a writer, editor, and technology industry professional. They met as juniors in high school and married in October 1976. Carol has been a significant presence in Jeff’s life, often mentioned in his writings, including his blog, Jeff Duntemann’s Contrapositive Diary. Together, they have lived in various places, including Colorado Springs, Colorado, and currently reside in Scottsdale, Arizona. Carol is known to share Jeff’s interest in their Bichon Frise dogs, with the couple having owned several over the years, including a current dog named Dash. While Carol herself is not a public figure in the same way as Jeff, she is a key part of his personal narrative, often described as a partner who helped shape his life significantly. Public records indicate she has been associated with addresses in Colorado Springs, such as 145 Stanwell Street and 4120 Danceglen Drive, tied to the phone number (719) XXX-XXXX.


(I Xed out the phone number, which was correct, in case it’s been reassigned.) How well did Grok do? Its sole error in this answer is Carol’s birth date. It’s wrong, and thus dangerous. More on that later. Let’s look at another one. Who is Kathleen Duntemann?


Kathleen Duntemann could refer to different individuals based on available context, but the most prominent and well-documented person with this name tied to public records and family history is Kathleen M. Duntemann (1920–1999), who was Jeff Duntemann’s aunt and godmother.

Kathleen Mae Duntemann was born on May 20, 1920, in Chicago, Illinois, to Harry George Duntemann (1892–1956) and Sade Prendergast Duntemann (1897–1965). She grew up in Chicago, and the 1940 U.S. Census shows her living there with her parents and younger brother, Frank W. Duntemann (Jeff’s father, 1922–1978), at the age of 19. Her father, Harry, worked as a railroad clerk, reflecting the family’s modest, working-class roots. Kathleen never married and remained close to her family throughout her life, often playing a significant role in family gatherings and history-keeping.

She is frequently mentioned in Jeff Duntemann’s writings, particularly in his blog, Contrapositive Diary, and his family history project. Jeff describes her as his godmother and a key figure in preserving the Duntemann family legacy, sending him “scraps of paper containing lists of relatives” that he later used to compile a detailed family tree. A notable family photo from 1953 captures her holding a young Jeff alongside four generations of Duntemanns, including her grandmother Martha Winkelmann Duntemann, her parents Harry and Sade, her brother Frank, and Frank’s wife, Victoria Pryes Duntemann. Kathleen lived much of her life in Chicago before moving to Des Plaines, Illinois, where she died on October 25, 1999, at the age of 79. Her legacy within the family underscores her as a connector of generations, though she did not leave a public career or widely recognized mark beyond that sphere.


Grok admits that there may be other Kathleen Duntemanns (there aren’t), as it admits with most other names. It gets points for that. Ok. More errors here: My grandmother Sade was born in 1892, not 1897. My Aunt Kathleen was born on 30 August 1920. She died on 8 July 1999. Grok got the years right, but neither the months nor the days. My grandfather Harry was never a railroad clerk. He worked his entire adult life for the First National Bank of Chicago, and retired as a vice president of that bank, Chicago’s biggest. The family’s origins were very much middle-class to upper-middle-class. Kathleen lived in Chicago her entire life, and never in Des Plaines.

Now, hold onto your hats. Grok’s answer to my last question is a lulu. This Contra entry is getting long, longer than I prefer to do, but this business is important. Bear with me. Who is Jeff Duntemann? I asked. Heh.


Jeff Duntemann has had a multifaceted career primarily centered around writing, editing, and technology, with a strong focus on computing and science fiction. Born on June 29, 1952, in Chicago, Illinois, he developed an early interest in electronics and writing, which shaped his professional path.

He began his career in technical writing and editing, notably working at Xerox Corporation as an office machine repairman from around 1972 to 1976, where he gained hands-on experience with early office technology. This technical background informed his later work in the computing industry. In the late 1970s and early 1980s, he transitioned into programming and writing about computers. He became a prominent figure in the personal computing revolution, contributing to the development and documentation of software tools. One of his significant early contributions was as a programmer and documenter for the Turbo Pascal compiler at Borland International, starting in 1985. His work helped make Turbo Pascal a widely used tool among developers, and he wrote several influential programming books, including Assembly Language from Square One (1990), Borland Pascal from Square One (1992), and Jeff Duntemann’s Wi-Fi Guide (2004), which catered to both beginners and advanced users.

Duntemann also made a mark in technical publishing. He served as an editor for various magazines, most notably as the editor-in-chief of PC Techniques (later renamed Visual Developer Magazine) from 1989 to 1998, under Coriolis Group Books, a publishing company he co-founded with Keith Weiskamp. Coriolis became known for its extensive line of computer books, and Duntemann’s editorial work extended to other publications like Dr. Dobb’s Journal and Byte Magazine. His writing was characterized by a clear, engaging style that made complex technical topics accessible, earning him a loyal readership.

Parallel to his technical career, Duntemann pursued science fiction writing. He sold his first short story, "Our Lady of the Endless Sky," to Amazing Stories in 1979 and continued to publish fiction intermittently, including works like “Cold Hands” (1982) and the novel The Cunning Blood (2005). His fiction often blended speculative ideas with his technical expertise, though it remained a smaller part of his career compared to his nonfiction output.

After leaving Coriolis in the late 1990s, he continued freelance writing and consulting, focusing on topics like wireless networking and software development. He also maintained a blog, Jeff Duntemann’s Contrapositive Diary, where he wrote about technology, science fiction, and personal interests like astronomy and vintage electronics until health challenges slowed his output. In 2018, he was diagnosed with atrial fibrillation and later faced pulmonary fibrosis, which he documented candidly, reflecting his lifelong habit of sharing knowledge. His last blog entry was on December 15, 2024, and he passed away on December 27, 2024, as noted in an obituary by his wife, Carol.

Duntemann’s career reflects a blend of technical innovation, educational writing, and creative storytelling, leaving a lasting impact on the computing community and a niche legacy in science fiction.


Wow. I didn’t know I was dead. I do not have atrial fibrillation, nor pulmonary fibrosis. Years and dates are all over the place. I was a Xerox repairman from 1974 to 1976. I sold “Our Lady of the Endless Sky” to Nova 4 in 1973, not to Amazing Stories in 1979. “Cold Hands” was published in 1980, not 1982. Coriolis went under in 2002, not 1998. I just pinched myself; I’m not dead yet. Etc.

My point in all this is that the closer AIs come to describing reality in answers to questions, the more people will trust their answers—including facts that are nonetheless wrong. Those bogus facts can be annoying, or worse. Birth and death dates have legal significance, as do many other things. If a scattered few errors are buried in a lot of otherwise correct text, those errors may be taken as the truth by users of the AI software.

In short, the fewer errors there are in AI answers, the more dangerous those answers become, because people will be more likely to trust AI answers as entirely correct. And given what I know about how LLMs work, I’m pretty sure that AI answers of any complexity will contain errors, not just now but probably forever.

Keep that in mind if you ever ask an AI questions on which anything of value depends. You wouldn’t want people to think you were dead.

Gabby the Image Generator

If you recall, last April I posted a couple of entries about my experiments with AI image generators. There were serious problems drawing hands, feet, and faces. The other day I got an email saying that the Gab social network had installed an AI image generator called Gabby that registered users could try for free. So I tried it.

I have two general categories of test images I would like an AI to generate: pictures of a thingmaker from my drumlins stories like “Drumlin Boiler,” and pictures of a woman sitting in a magical basket flying over downtown Baltimore, from my still-unpublished novella, Volare! I tried them both, and will include the best images from my tests below.

The drumlin thingmaker is a relatively simple structure: a 2-meter-wide shallow bowl made of what looks like black granite, half-full of a silvery dust, with two waist-high pillars in front of it, one smooth, the other vertically ridged like a saguaro cactus. In the stories, people tap a total of 256 times on the tops of the pillars in any combination, and the machine will then build something in the bowl. There are 2^256 different possible codes, or in base 10 about 1.15 x 10^77, which is in the vicinity of the number of atoms in the observable universe. The people marooned on the planet where the thingmakers were found learn to use them, and I have several stories about the alien machines and their products, which thingmaker users call “drumlins.” (I know a drumlin is a glacial landform. I’ve repurposed the word, as SF writers sometimes do.)
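For what it’s worth, the arithmetic is easy to verify; Python’s arbitrary-precision integers make short work of it:

```python
# 256 taps, each landing on one of two pillars: 2 to the 256th
# power distinct tap codes.
codes = 2 ** 256
print(len(str(codes)))   # 78 -- a 78-digit number
print(f"{codes:.2e}")    # 1.16e+77, i.e. roughly 1.15 x 10^77
```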

As with the other image generators, you begin with a statement of what should be in the image. For the woman in a basket, I used the following prompt:

  • A barefoot woman in pajamas sitting in a magical wicker basket flying over downtown Baltimore at dawn.

The best image I got was this:

Impressive, compared to my earlier efforts. The woman is African-American, which doesn’t matter; after all, I didn’t specify the woman’s race and Baltimore is a mostly-black city. The basket is wicker. The city does look like Baltimore. (I used to live there in the mid-‘80s.) So far so good. However, on the one foot we can see, she has two big toes. And it took Carol only seconds to note that she has two left hands.

Alas, she isn’t flying but rather sitting on the edge of somebody’s roof. I did specify “flying.” So I give it a B-.

I did a lot better in some ways with the thingmaker. The prompt I used for the image shown below is this:

  • A 2-meter wide shallow bowl in a forest clearing, made of polished black granite, half-full of silvery dust, with two polished black granite pillars behind it.

The best image for this test is below:

The bowl is actually pretty close to what I imagine a thingmaker bowl looks like. It should be a little shallower. The two black pillars behind it look like trees. Ok, I didn’t specify how tall the pillars should be. My bad. But the dust is simply missing. I guess I should be glad that it didn’t build me a picture of Oklahoma in the 1930s.

Before I ran out of my daily limit of generated images, I decided to start from scratch with the woman in a basket. In Volare! the basket is a wicker basket about 3 feet in diameter, half-full of weeds that my female lead Edy Gagliano had pulled from her garden. So I began with this prompt:

  • A 36" wicker basket half-full of weeds.

How hard could it be? Well, Gabby handed me a wicker basket with plants in it. However, it wasn’t a basket of weeds but a flower arrangement. I tried twice with the same prompt, and got the same thing: live plants in a basket, at least one suitable for putting in your bay window. The weeds were described in the story as dandelions recently yanked, probably wilted if not dead and gone brown. No luck.

In a way I can’t bitch: These are all pleasing images, and Gabby doesn’t have the same problem with plants that it does with hands and feet. And the woman’s hands and feet are mostly better than what I got with Dall-E last April. We’re making progress.

Now, I don’t intend to use an AI-generated image directly as a book cover. There are some weird and currently unsettled copyright issues involved with AI graphics, largely concerned with what content the AI is trained on. I’ve heard rumors that Amazon is yanking self-published books from the Kindle store if it looks like they have AI-generated graphics as covers. That’s an easy enough bullet to duck: I’ll do as I’ve always done and commission a cover from a real live artist. The AI images would be used to suggest to the artist how I imagine various elements of the cover.

This was fun, and if you know of any other AI image generators that you can use without paying for them, please share in the comments, with a sample if you’re so inclined.

A Conversation (Sort of) With ChatGPT

An AI chatbot named ChatGPT has been getting a tremendous amount of publicity lately, and a boggling amount of what I consider unwarranted acceptance, bordering on the ridiculous. No, alas. That border is now behind us: Buzzfeed will soon begin using ChatGPT to generate content for their sites, which include the Huffington Post. The source is a paywalled article in the Wall Street Journal, or I’d post a link. Doesn’t matter. Buzzfeed is clearly trying to replace human writers with an AI. Their investors evidently think it’s a good idea, given that the Buzzfeed stock price rose 92% later that day.  I’d say I’m skeptical but I’m always skeptical. Being skeptical is part of my worldview. Other skeptics are out there: One clever wag on Twitter pretty much nailed it when he said, “Journos should have learned to code before the code learned to journo.” 

Heh. Anyway, after I stopped giggling I went over and created a free account at ChatGPT. You might recall my experiments with AIs writing fiction given a few sentences to start with. It was weird. Indie SF writers are in no danger, at least for the time being. I wanted to give ChatGPT a fair chance, just like I gave Sudowrite last year. Sudowrite started out weird and got weirder. Well, hey, I wanted to see it write fiction. My challenge to ChatGPT is to answer technical questions accurately and usefully.

It’s a chat program. So I started a chat:

Jeff: Will there ever be 128-bit CPUs?

I get asked this question a lot. My answer will be in the upcoming fourth edition of x64 Assembly Language Step By Step. It’s relatively simple, though there is some nuance.

ChatGPT’s first response:

It is possible that there will be 128-bit CPUs in the future, but it is not a certainty. The development of CPU architecture is driven by a variety of factors, including performance, power efficiency, and cost. Additionally, the need for increased computational power may be met through other means, such as the use of multiple cores or specialized hardware. As a result, the use of 128-bit CPUs will depend on the specific requirements of various applications and the availability of other solutions.


Hmmm. Saying this is sophomoric would be a slander on sophomores. But rather than reply to it immediately, I was given the option to re-generate the response. I clicked the button. ChatGPT thought (erk) for a few seconds and replied:

ChatGPT’s second response:

It is possible that there will be 128-bit CPUs in the future, but it is not clear what the need for such a large word size would be. Current 64-bit CPUs are more than capable of handling the vast majority of computing tasks and the industry has not yet identified a need for CPUs with larger word sizes. Additionally, the development of new technologies such as quantum computing may render traditional CPUs obsolete in the future.


This, in fact, is true, though it was still pretty much an obvious answer to a subtle question. So I clicked the “Regenerate” button again.

ChatGPT’s third response:

It is possible that there will eventually be 128-bit CPUs, but it is not currently a technology that is in widespread use or development. The current standard for most consumer CPUs is 64-bit, and there are also 32-bit CPUs still in use. The need for a 128-bit CPU would likely depend on advancements in technology and a growing demand for more processing power and memory capacity.


Sigh. Drink from the same well and you get the same water. I figured I needed a better question. I had one lined up and ready to go:

Jeff: Aren’t 128-bit registers useful for math?

128-bit registers can be useful for certain types of math, particularly for tasks such as floating-point operations and high-precision computations. Larger registers can increase the precision and accuracy of calculations, as well as allow for more efficient processing of large data sets. However, the need for 128-bit registers would likely depend on the specific application and whether the benefits of larger registers outweigh the cost of implementing them in terms of power consumption and hardware complexity.


I didn’t bother to regenerate this one. So far it seemed to be scraping Google, and while I mostly agreed with its answer, I would hope that Buzzfeed would require something more focused and in-depth. So I mulled it a little more and asked a third rather leading question. Read it carefully. There’s a lollapalooza in there:

Jeff: Wouldn’t the vector registers in the AVX-512 math subsystem make 128-bit registers unnecessary?

AVX-512 (Advanced Vector Extensions 512-bit) is an extension to the x86 instruction set that is designed to improve the performance of floating-point and vector operations. One of the key features of AVX-512 is the use of 512-bit wide vector registers, which can be used for a wide range of scientific, engineering, and financial workloads. The wide vector registers in AVX-512 can be seen as an alternative to 128-bit wide registers, as it allows to perform more operations in parallel with a single instruction, but it doesn’t necessarily make 128-bit registers unnecessary. For example, in certain types of workloads, it might be more beneficial to have a few large 128-bit registers rather than many smaller 512-bit registers. Ultimately, the choice of register size would depend on the specific requirements of the application and the trade-offs between performance, power consumption, and hardware complexity.


Bzzzzzzzzzt! Suddenly ChatGPT’s distant descent from poor ol’ Eliza became obvious. A few large 128-bit registers might be better than a bunch of smaller 512-bit registers. Right? Right?

I may test it more as time permits. After all, it’s free. But this will have to do for today. Among other things, I write humorous stories about AI. Looks like I just wrote another, without even trying.

Sudowrite would be proud.

A libc Mystery

As most of you know by now, I’m hard at work on the x64 edition of my assembly book, to be called x64 Assembly Language Step By Step. I’m working on the chapter where I discuss calling functions in libc from assembly language. The 2009 edition of the book was pure 32-bit x86. Parameters were passed to libc functions mostly by pushing them on the stack, which required cleaning up the stack after each call, etc.

Calling conventions in x64 are radically different. The first six parameters to any function are passed in registers. (More than six and you have to start pushing them on the stack.) The first parameter goes in RDI, the second in RSI, the third in RDX, and so on. When a function returns a single value, that value is passed back in RAX. This allows a lot more to be done without fooling with the stack.

Below is a short example program that makes four calls to libc functions: two calls to puts(), a call to time(), and a call to ctime(). Here’s the makefile for the program:

showtime: showtime.o
        gcc showtime.o -o showtime -no-pie
showtime.o: showtime.asm 
        nasm -f elf64 -g -F dwarf showtime.asm -l showtime.lst

I’ve used this makefile for other example programs that call libc functions, and they all work. So take a look:

section .data
        timemsg db    "The timestamp is: ",0
        timebuf db    28,0   ; not used yet
        time1   dq    0      ; time_t stored here.

section .bss

section .text

extern  time
extern  ctime
extern  puts
global  main

main:
        push rbp            ; Prolog    
        mov rbp,rsp

        mov rdi,timemsg     ; Put address of message in rdi
        call puts           ; call libc function puts
               
        xor rax,rax         ; Zero rax
        call time           ; time returns time_t value in rax        
        mov [time1],rax     ; Save time_t value to var time1
        
        mov rdi,time1       ; Copy pointer to time_t value to rdi
        call ctime          ; Returns ptr to the date string in rax

        mov rdi,rax         ; Copy pointer to string into rdi
        call puts           ; Print ctime's output string
        
        mov rsp,rbp         ; Epilog
        pop rbp
        
        ret                 ; Return from main()

Not much to it. Leaving the prolog and epilog aside, the program does four things: it prints an intro message using puts, fetches the current time in time_t format, uses ctime to convert the time_t value to the canonical human-readable format, and finally displays the date string. All done.

So what’s the problem? When the program hits the second puts call, it hangs, and I have to hit ctrl-z to break out of it. That’s peculiar enough, given how many times I’ve successfully used puts, time, and ctime in short examples.

The program assembles and links without problems, using the makefile shown above the program itself. I’ve traced it in a debugger, and all the parameters passed into the functions and their return values are as they should be. Even in a debugger, when the code calls the second instance of puts, it hangs.

Ok. Now here’s the really weird part: If you comment out one of the two puts calls (it doesn’t matter which one) the program doesn’t hang. One of the lines of text isn’t displayed but the calls to time and ctime work normally.

I’ve googled the crap out of this problem and haven’t come up with anything useful. My guess is that there’s some stack shenanigans somewhere, but all the register values look fine in the debugger, and the pointer passed back in rax by ctime does indeed point to the canonical null-terminated text string. The prolog creates the stack frame, and the epilog destroys it. My code doesn’t push anything between the prolog and epilog. All it does is make four calls into libc. It can successfully make three calls into libc…but not four.

Do you have to clean up the stack somehow after a plain vanilla x64 call into libc? That seems unlikely. And if so, why doesn’t the problem happen when the other three calls take place?

Hello, wall. Anybody got any suggestions?

For Validation, Try Federation

Somebody wrote an obnoxious hate-piece over on The Verge some days back, welcoming Elon Musk to Twitter Hell. The essay is for the most part corrosive nonsense, but the piece does have an insight or two. The primary one is true, and subtle to the point where I doubt most people ever give it much thought: What social networks sell is validation.

In other words, people gather on social networks to feel good about themselves. The network accomplishes this by censoring any voices that disagree with network members. Remember the days when disagreement was a learning opportunity? I do. Even polite disagreement is now “literal violence,” at least to the cohort desperately lacking self-esteem.

What Musk does to Twitter won’t be known for a while. I’m guessing that people will no longer be banned for politely questioning conventional wisdom, like posting links to evidence that Ivermectin actually does have strong antiviral properties. Ditto HCQ. Why linking to a peer-reviewed scientific paper should be blanket-bombed as “misinformation” is simple: “Misinformation” now means “anything I or my tribe disagree with.” If Musk can call a halt to that, it will have been worth every nickel of his $44B. What it means, however, is that Twitter will become a network that does not specialize in validating its members by silencing their critics. If those seeking validation flee to another network, that’s a good thing. I generate my own validation. So do most of my friends. I guess not everyone can do that.

The real problem with moderation is that it tends to bias network traffic toward viewpoints the moderators favor. Worse, there’s one body of algorithms to moderate the whole damned network. Unless you’re in the favored cohort, you’re out of luck.

There is something called Mastodon that almost nobody talks about. (More on Mastodon here.) It’s a social network composed of independently hosted social networks, joined loosely through a mechanism called federation. Every instance (which is what they call an individual Mastodon server) can have its own moderation guidelines, and everybody can block anybody they don’t want to hear from. This sounds like the perfect solution: On Mastodon, nobody can hear you disagreeing with them if they don’t want to. Shazam! Validation!

I don’t have time to even join a Mastodon instance, much less host my own. If you’ve had experience there, by all means describe it in the comments. I bring it up here today because of an article I read about Twitter founder Jack Dorsey: He’s creating a new social network to rival Twitter. He’s doing it with federation. It’s called Bluesky, and it just opened registration for beta testers. It uses a protocol developed in-house called the Authenticated Transfer Protocol (ATP).

I’ve been reading the news about Bluesky for the past few days. There’s not much hard information yet, but it sounds a great deal like a slightly more centralized Mastodon. I could be wrong about that. Again, hard data is scarce. I did notice that nowhere in the articles I’ve read is there any significant mention of moderation. That’s a very sore spot for a great many people, primarily those who just want validation, or tribalists who want to limit user perspectives to their own template. One hopes that Dorsey can get past this hunger for censoring The Other, and actually create a space where literally all perspectives can be heard.

We’ll see.