Jeff Duntemann's Contrapositive Diary


Bye Bye 4 By 3 By (Win) 10

I’m feeling old. And it’s not that I’m all that old, but this past week I realized that I had been doing personal computing for a very long time. I’ve been trying to figure out how to deal with Microsoft abandoning support for Windows 10, so the other day I (almost on impulse) bought a new Dell Win 11 desktop. I got it home, connected it to my primary keyboard and mouse, and gave Win 11 a spin.

I honestly don’t understand why Microsoft keeps screwing around with its UIs. The Win 11 desktop is no better than Win 10’s, and in many ways quirky enough to demand close attention to what you’re doing, or trying to do. I’m sure Win 11 has improvements in terms of security and use of resources and other back-of-the-screen stuff, but why the hell do I have to learn the UI all over again?

The biggest question was whether the software I depend on would even run on it. A number of (ok, ancient) utilities refused to run on Windows 7, which will probably always remain my favorite version. So I installed a few significant packages, and they all worked just fine. Plus, the new machine has an SD card slot in the case near the USB ports, which my older (but not ancient) Optiplex 5070 does not. I also found that the new machine did not have a speaker audio port on the back panel. That irritated me at first, but I now understand why it isn’t there. (More on this later.)

As I always do, I popped the side panel and took a look inside, figuring I’d order an M.2 SSD for the empty slot. Except…there was no empty slot. There was one M.2 slot, with a terabyte SSD in it. The machine was misrepresented by a sales person: She said that it had two M.2 slots on it, and one of them was empty. It wasn’t empty. It simply wasn’t there.

And another thing wasn’t there: The machine could not put video into my Samsung 214T 21” 1600 X 1200 4:3 flat monitor, which I bought around 2007. It didn’t support that resolution at all. My Win 10 machine has no trouble with 1600 X 1200. The new Win 11 machine, I discovered, was configured to do a wide-screen 1920 X 1080. I swapped in my only widescreen monitor and boom! There was 1920 X 1080. It would do lower resolutions, but most of those were not 4:3.

By now I was in part disappointed and in part annoyed. My older widescreen monitor is a Dell 22” diagonal and works very well. But it’s not especially large, and I wanted something to make the type larger to reduce eyestrain. I require at least two mass-storage units in my primary machine, so after two days of messing with the Win 11 box, I uninstalled the half-dozen packages I had installed, put it back in its box, and trucked it back to the store. As I expected, they accepted the return, and were very courteous about it.

While I was there, I took a close look at a larger Dell monitor, an S2725H. It’s a 27” diagonal, and has almost no bezel around the top and sides. It’s basically all screen but for a small strip on the bottom edge. So I had no trouble fitting it into my computer table setup, which includes the 5070 mini-tower and a laser printer plus other odd junk. It was inexpensive and can display two manuscript pages side by side. Video adjustments are done with a sort of mini-joystick: a little nubbin on the back of the monitor selects which aspect you want to change, followed by a line graph showing how much. Push the nubbin in the right direction and you’re there. Push down on the nubbin to press Enter. Clever, and a lot easier to do than I expected.

My venerable 214T has DVI input, and once I bought my Win 10 machine several years ago (2021?) I needed to use an HDMI to DVI adapter. DVI is long extinct. Desktops are now either DisplayPort or HDMI. (Or in some cases, both.) What I guess I knew in the back of my head but hadn’t thought about in terms of personal computers is that HDMI (and DisplayPort) carry sound as well as video. And yup, inside that new monitor is a pair of formidable hi-fidelity speakers. Heretofore I had used a cheap set of mini-speakers that sounded, well, cheap and small. Once I played several classical MP3s and some videos into the new monitor, the sound was terrific compared to what I’d had before.

So that’s why there was no little green audio jack on the back panel of the Win 11 machine: it had a headphone jack on the front panel, but all the speaker audio went out through HDMI.

I learned a few things in this recent adventure:

  • Don’t buy a computer on impulse. Research the hell out of it before you slap down your credit card. Dell, at least, has all of its manuals available for free download as PDFs. Look for machines that appeal to you and then go through their manuals. Repeat until you find the machine you like most, one that will do the jobs you need it to do.
  • A lot of monitors, by now probably most of them, contain stereo speakers. Audio comes out the same cable video does.
  • 4:3 monitors, like rear-panel audio jacks, are extinct.
  • Win 11 is inevitable, as much as I’d prefer it to be seriously evitable.

I’m still wrassling with the last point. I suspect I will run a full backup on the 5070 sometime soon and install 11 on it. It does what I need it to do. I only hope and pray that Win 11 won’t hide anything important or paint me into any corners. We’ll see.

The NYT Vs. ChatGPT

You may have seen this story come up over the last year and change: The New York Times is suing OpenAI, creator of ChatGPT, for copyright infringement. Earlier this year, a federal judge ruled that the lawsuit can move forward. And now—good grief!—the Times is demanding that OpenAI save all discussions people have with ChatGPT. All of them. The whole wad—even conversations that people have deleted.

You want a privacy violation? They’ll give you a privacy violation, of a sort and at a scale that I’ve not seen before. The premise is ridiculous: The Times suspects that people who delete their conversations with ChatGPT have been stealing New York Times IP, and then covering it up to hide the fact that they were stealing IP. After all, if they weren’t stealing IP, why did they delete their conversations?

Privacy as the rest of us understand it doesn’t enter into the Times’ logic at all. The whole business smells of legal subterfuge; that is, to strengthen their copyright infringement case, they’re blaming ChatGPT users. I’ve never tried ChatGPT, and I’m certainly not going anywhere near it now. But this question arises: If a user asks an AI for an article on topic X, does the AI bring back the literal article? Golly, Google does that right now, granting that Google respects  paywalls. Can ChatGPT somehow get past a paywall? I rather doubt it. If the Times wants to go after something that does get past its paywall, it had better go after archive.is, over in Iceland. I won’t say much more about that, as it does get past most paywalls and is almost certainly massive copyright infringement.

And all this brings into the spotlight the central question about commercial AI these days: How do AIs use their training data? I confess I don’t fully understand that. This article is a good place to start. Meta’s Llama v3.1 70B was able to cough up 42% of Harry Potter and the Sorcerer’s Stone, though not in one chunk. Meta’s really big problem is that it trained Llama on 81.7 terabytes of pirated material torrented from “shadow libraries” like Anna’s Archive, Z-Library, and LibGen, and probably other places. I consider these pirate sites: not as blatant as the Pirate Bay, but pirate sites nonetheless.

I’m still looking for a fully digestible explanation of how training an AI actually works, but that’ll come around eventually.

So how might an AI be trained without using pirated material? My guess is that the big AI players will cut deals with major publishers for training rights. A lot of free stuff will come from small Web operators, who don’t have the resources to negotiate a deal with the AI guys. Most of them probably won’t care. In truth, I’d be delighted if AIs swallowed Contra’s 3500+ entries in one gulp. Anything that has my name in it will make the AI more likely to cite me in answer to user questions, and that’s all I’ll ask for.

Ultimately, I’m pretty sure Zuck will cut a deal with NYT, WaPo, the Chicago Trib, and other big IP vendors. Big money will change hands. Meta will probably have to charge people to use Llama to pay off IP holders, and that’s only right.

But lordy, this is a supremely weird business, and I’m pretty sure the bulk of the weirdness is somehow hidden from public scrutiny. Bit by bit it will come out, and I (along with a lot of you) will be watching for it.

Odd Lots

  • An AI company went bankrupt after it came out that its supposed “vibe-coding” AI was a team of 700 Indian software engineers. Even Microsoft fell for it, and threw gobs of money at them. Which brings up an interesting question: How do possible investors (or anyone else) know where (and even what) an AI is?
  • Not everyone thinks that vibe coding is slam-dunk easy. A Stanford prof does it, and says it’s intellectually exhausting—just like “manual” coding is. The trick with all AI work is knowing how to create the prompts that will deliver the desired results. Although I’ve not tried vibe coding with an AI yet, my experience with text and images suggests that “prompt engineering” is the real challenge, and to me, prompt engineering looks like programming in yet another English-like language.
  • Meta signed a 20-year deal with Constellation Energy, to help fund new nuclear generation capacity, starting with a plant in Illinois. If carbon is indeed the problem, nuclear is the solution. I have had some peculiar experiences with AI over the past months, but I’m willing to root for AI as a way to bring nuclear power back from the grips of those Atomic Scientists who simply can’t force themselves to go fission.
  • In case you missed it: Lazarus 4.0 is out. Compatibility with Delphi is high. The 4.0 system was built using FreePascal 3.2.2. Go get it here.
  • Here’s a wonderful short-ish article on Mark Twain’s rowdy early years in (bogglingly) rowdy Virginia City, Nevada.
  • Today is the semiquincentennial of the US Army. Also the sestercentennial. Oh, and the bisesquicentennial too. Big words rock. I love ’em. And next year will be all those big words for the US itself, not to mention Carol’s and my 50th wedding anniversary.
  • Classmates continues to send me nonsense. I supposedly have a private message waiting from a Maria G., who was in De Paul University’s Class of 1971. (The message was posted in 2007.) Well, I have that yearbook, and she’s not in it. (Her name, which I won’t quote here, is very unusual and I have been unable to find her online.) Some years back Classmates asked me if I knew a girl named Linda something, who was in the Lane Tech Class of 1970, like me. Uh…no. Lane was an all-boy school until a couple of years after I graduated. I gave Classmates money once. I won’t be giving them money again. They make up stuff like a…like an AI.
  • Well, as far as I’m concerned, the famous TED talks are now over. An Australian prof who did all the necessary research was tossed out of the TED universe for a presentation that cast doubt on the perpetrators of useless COVID reactions like lockdowns, and showed evidence that the not-really-a-vaccine (you can get it and spread it!) caused more harm than good. TED stated right out that criticism of political and health leaders was verboten. Read the whole thing. (H/T to Sarah Hoyt for the link.)

The Real Danger of Textual AI

Now that I have The Everything Machine on the market, I can go back to a couple of other long-term projects. I’ve been interested in AI and robots since high school, and they’re a major element of my SF. All of my AI short stories are gathered in my book, Souls in Silicon. (The stories are almost all upbeat, and a couple of them are pure humor.) The AIs in Souls in Silicon are not what we call AI today. I’m starting to hear a new term: Artificial General Intelligence (AGI), which makes more sense in terms of science-fictional AIs. AGI is AI that can reason and deal with reality pretty much the same way human minds do. Will this ever happen? I doubt it. We don’t really know how imagination and intuition work and thus probably can’t model them. What we have today are not AGI but large language models (LLMs). I had the insight this morning, after playing around with Grok, that the better LLMs are, the more dangerous they are.

They are definitely getting better. And they are already pretty dangerous.

This time around, I’ve been testing Grok, the AI component of X/Twitter. My tests consist of asking it questions with verifiable answers. (I have not yet tested Grok’s image generator. Patience, patience.) Most of these tests are of the form, Who is XXXXXXXXX? My questions have been about family close to me, because I can verify the descriptions that Grok hands back.

Grok did a fair job with the question Who is Carol Duntemann? Its answer:


Carol Duntemann, born Carol Ostruska on June 1, 1953, is the wife of Jeff Duntemann, a writer, editor, and technology industry professional. They met as juniors in high school and married in October 1976. Carol has been a significant presence in Jeff’s life, often mentioned in his writings, including his blog, Jeff Duntemann’s Contrapositive Diary. Together, they have lived in various places, including Colorado Springs, Colorado, and currently reside in Scottsdale, Arizona. Carol is known to share Jeff’s interest in their Bichon Frise dogs, with the couple having owned several over the years, including a current dog named Dash. While Carol herself is not a public figure in the same way as Jeff, she is a key part of his personal narrative, often described as a partner who helped shape his life significantly. Public records indicate she has been associated with addresses in Colorado Springs, such as 145 Stanwell Street and 4120 Danceglen Drive, tied to the phone number (719) XXX-XXXX.


(I Xed out the phone number, which was correct, in case it’s been reassigned.) How well did Grok do? Its sole error in this answer is Carol’s birth date. It’s wrong, and thus dangerous. More on that later. Let’s look at another one. Who is Kathleen Duntemann?


Kathleen Duntemann could refer to different individuals based on available context, but the most prominent and well-documented person with this name tied to public records and family history is Kathleen M. Duntemann (1920–1999), who was Jeff Duntemann’s aunt and godmother.

Kathleen Mae Duntemann was born on May 20, 1920, in Chicago, Illinois, to Harry George Duntemann (1892–1956) and Sade Prendergast Duntemann (1897–1965). She grew up in Chicago, and the 1940 U.S. Census shows her living there with her parents and younger brother, Frank W. Duntemann (Jeff’s father, 1922–1978), at the age of 19. Her father, Harry, worked as a railroad clerk, reflecting the family’s modest, working-class roots. Kathleen never married and remained close to her family throughout her life, often playing a significant role in family gatherings and history-keeping.

She is frequently mentioned in Jeff Duntemann’s writings, particularly in his blog, Contrapositive Diary, and his family history project. Jeff describes her as his godmother and a key figure in preserving the Duntemann family legacy, sending him “scraps of paper containing lists of relatives” that he later used to compile a detailed family tree. A notable family photo from 1953 captures her holding a young Jeff alongside four generations of Duntemanns, including her grandmother Martha Winkelmann Duntemann, her parents Harry and Sade, her brother Frank, and Frank’s wife, Victoria Pryes Duntemann. Kathleen lived much of her life in Chicago before moving to Des Plaines, Illinois, where she died on October 25, 1999, at the age of 79. Her legacy within the family underscores her as a connector of generations, though she did not leave a public career or widely recognized mark beyond that sphere.


Grok admits that there may be other Kathleen Duntemanns (there aren’t), as it does with most other names. It gets points for that. Ok. More errors here: My grandmother Sade was born in 1892, not 1897. My Aunt Kathleen was born on 30 August 1920. She died on 8 July 1999. Grok got the years right, but neither the months nor the days. My grandfather Harry was never a railroad clerk. He worked his entire adult life for the First National Bank of Chicago, and retired as a vice president of that bank, Chicago’s biggest. The family’s origins were very much middle-class to upper-middle-class. Kathleen lived in Chicago her entire life, and never in Des Plaines.

Now, hold onto your hats. Grok’s answer to my last question is a lulu. This Contra entry is getting long, longer than I prefer to write, but this business is important. Bear with me. Who is Jeff Duntemann? I asked. Heh.


Jeff Duntemann has had a multifaceted career primarily centered around writing, editing, and technology, with a strong focus on computing and science fiction. Born on June 29, 1952, in Chicago, Illinois, he developed an early interest in electronics and writing, which shaped his professional path.

He began his career in technical writing and editing, notably working at Xerox Corporation as an office machine repairman from around 1972 to 1976, where he gained hands-on experience with early office technology. This technical background informed his later work in the computing industry. In the late 1970s and early 1980s, he transitioned into programming and writing about computers. He became a prominent figure in the personal computing revolution, contributing to the development and documentation of software tools. One of his significant early contributions was as a programmer and documenter for the Turbo Pascal compiler at Borland International, starting in 1985. His work helped make Turbo Pascal a widely used tool among developers, and he wrote several influential programming books, including Assembly Language from Square One (1990), Borland Pascal from Square One (1992), and Jeff Duntemann’s Wi-Fi Guide (2004), which catered to both beginners and advanced users.

Duntemann also made a mark in technical publishing. He served as an editor for various magazines, most notably as the editor-in-chief of PC Techniques (later renamed Visual Developer Magazine) from 1989 to 1998, under Coriolis Group Books, a publishing company he co-founded with Keith Weiskamp. Coriolis became known for its extensive line of computer books, and Duntemann’s editorial work extended to other publications like Dr. Dobb’s Journal and Byte Magazine. His writing was characterized by a clear, engaging style that made complex technical topics accessible, earning him a loyal readership.

Parallel to his technical career, Duntemann pursued science fiction writing. He sold his first short story, "Our Lady of the Endless Sky," to Amazing Stories in 1979 and continued to publish fiction intermittently, including works like “Cold Hands” (1982) and the novel The Cunning Blood (2005). His fiction often blended speculative ideas with his technical expertise, though it remained a smaller part of his career compared to his nonfiction output.

After leaving Coriolis in the late 1990s, he continued freelance writing and consulting, focusing on topics like wireless networking and software development. He also maintained a blog, Jeff Duntemann’s Contrapositive Diary, where he wrote about technology, science fiction, and personal interests like astronomy and vintage electronics until health challenges slowed his output. In 2018, he was diagnosed with atrial fibrillation and later faced pulmonary fibrosis, which he documented candidly, reflecting his lifelong habit of sharing knowledge. His last blog entry was on December 15, 2024, and he passed away on December 27, 2024, as noted in an obituary by his wife, Carol.

Duntemann’s career reflects a blend of technical innovation, educational writing, and creative storytelling, leaving a lasting impact on the computing community and a niche legacy in science fiction.


Wow. I didn’t know I was dead. I have neither atrial fibrillation nor pulmonary fibrosis. Years and dates are all over the place. I was a Xerox repairman from 1974 to 1976. I sold “Our Lady of the Endless Sky” to Nova 4 in 1973, not 1979, nor to Amazing Stories. “Cold Hands” was published in 1980, not 1982. Coriolis went under in 2002, not 1998. I just pinched myself; I’m not dead yet. Etc.

My point in all this is that the closer AIs come to describing reality in answers to questions, the more people will trust their answers—including claims of fact that are simply wrong. Those bogus facts can be annoying, or worse. Birth and death dates have legal significance, as do many other things. If a scattered few errors are buried in a lot of otherwise correct text, those errors may be taken as the truth by users of the AI software.

In short, the fewer errors there are in AI answers, the more dangerous those answers become, because people will be more likely to trust AI answers as entirely correct. And given what I know about how LLMs work, I’m pretty sure that AI answers of any complexity will contain errors, not just now but probably forever.

Keep that in mind if you ever ask an AI questions on which anything of value depends. You wouldn’t want people to think you were dead.

Gabby the Image Generator

If you recall, last April I posted a couple of entries about my experiments with AI image generators. There were serious problems drawing hands, feet, and faces. The other day I got an email saying that the Gab social network had installed an AI image generator called Gabby that registered users could try for free. So I tried it.

I have two general test categories of images I would like an AI to generate: Pictures of a thingmaker from my drumlins stories like “Drumlin Boiler,” and pictures of a woman sitting in a magical basket flying over downtown Baltimore, from my still-unpublished novella, Volare! I tried them both, and will include the best images from my tests below.

The drumlin thingmaker is a relatively simple structure: a 2-meter-wide shallow bowl made of what looks like black granite, half-full of a silvery dust, with two waist-high pillars in front of it, one smooth, the other vertically ridged like a saguaro cactus. In the stories, people tap a total of 256 times on the tops of the pillars in any combination, and the machine will then build something in the bowl. There are 2^256 different possible codes, or about 1.15 x 10^77 in base 10, which is in the vicinity of the number of atoms in the observable universe. The people marooned on the planet where the thingmakers were found learn to use them, and I have several stories about the alien machines and their products, which thingmaker users call “drumlins.” (I know a drumlin is a glacial landform. I’ve repurposed the word, as SF writers sometimes do.)
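
For the curious, the back-of-the-envelope arithmetic behind that figure: log10(2) ≈ 0.30103, so 2^256 = 10^(256 x 0.30103) ≈ 10^77.06, a 78-digit number.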

As with the other image generators, you begin with a statement of what should be in the image. For the woman in a basket, I used the following prompt:

  • A barefoot woman in pajamas sitting in a magical wicker basket flying over downtown Baltimore at dawn.

The best image I got was this:

Impressive, compared to my earlier efforts. The woman is African-American, which doesn’t matter; after all, I didn’t specify the woman’s race and Baltimore is a mostly-black city. The basket is wicker. The city does look like Baltimore. (I used to live there in the mid-‘80s.) So far so good. However, on the one foot we can see, she has two big toes. And it took Carol only seconds to note that she has two left hands.

Alas, she isn’t flying but rather sitting on the edge of somebody’s roof. I did specify “flying.” So I give it a B-.

I did a lot better in some ways with the thingmaker. The prompt I used for the image shown below is this:

  • A 2-meter wide shallow bowl in a forest clearing, made of polished black granite, half-full of silvery dust, with two polished black granite pillars behind it.

The best image for this test is below:

The bowl is actually pretty close to what I imagine a thingmaker bowl looks like. It should be a little shallower. The two black pillars behind it look like trees. Ok, I didn’t specify how tall the pillars should be. My bad. But the dust is simply missing. I guess I should be glad that it didn’t build me a picture of Oklahoma in the 1930s.

Before I ran out of my daily limit of generated images, I decided to start from scratch with the woman in a basket. In Volare! the basket is a wicker basket about 3 feet in diameter, half-full of weeds that my female lead Edy Gagliano had pulled from her garden. So I began with this prompt:

  • A 36" wicker basket half-full of weeds.

How hard could it be? Well, Gabby handed me a wicker basket with plants in it. However, it wasn’t a basket of weeds but a flower arrangement. I tried twice with the same prompt, and got the same thing: live plants in a basket, at least one of them suitable for putting in your bay window. The weeds were described in the story as recently yanked dandelions, wilted if not dead and gone brown. No luck.

In a way I can’t bitch: These are all pleasing images, and Gabby doesn’t have the same problem with plants that it does with hands and feet. And the woman’s hands and feet are mostly better than what I got with Dall-E last April. We’re making progress.

Now, I don’t intend to use an AI-generated image directly as a book cover. There are some weird and currently unsettled copyright issues involved with AI graphics, largely concerned with what content the AI is trained on. I’ve heard rumors that Amazon is yanking self-published books from the Kindle store if it looks like they have AI-generated graphics as covers. That’s an easy enough bullet to duck: I’ll do as I’ve always done and commission a cover from a real live artist. The AI images would be used to suggest to the artist how I imagine various elements of the cover.

This was fun, and if you know of any other AI image generators that you can use without paying for them, please share in the comments, with a sample if you’re so inclined.

A Conversation (Sort of) With ChatGPT

An AI chatbot named ChatGPT has been getting a tremendous amount of publicity lately, and a boggling amount of what I consider unwarranted acceptance, bordering on the ridiculous. No, alas. That border is now behind us: Buzzfeed will soon begin using ChatGPT to generate content for their sites, which include the Huffington Post. The source is a paywalled article in the Wall Street Journal, or I’d post a link. Doesn’t matter. Buzzfeed is clearly trying to replace human writers with an AI. Their investors evidently think it’s a good idea, given that the Buzzfeed stock price rose 92% later that day.  I’d say I’m skeptical but I’m always skeptical. Being skeptical is part of my worldview. Other skeptics are out there: One clever wag on Twitter pretty much nailed it when he said, “Journos should have learned to code before the code learned to journo.” 

Heh. Anyway, after I stopped giggling I went over and created a free account at ChatGPT. You might recall my experiments with AIs writing fiction given a few sentences to start with. It was weird. Indie SF writers are in no danger, at least for the time being. I wanted to give ChatGPT a fair chance, just like I gave Sudowrite last year. Sudowrite started out weird and got weirder. Well, hey, I wanted to see it write fiction. My challenge to ChatGPT is to answer technical questions accurately and usefully.

It’s a chat program. So I started a chat:

Jeff: Will there ever be 128-bit CPUs?

I get asked this question a lot. My answer will be in the upcoming fourth edition of x64 Assembly Language Step By Step. It’s relatively simple, though there is some nuance.

ChatGPT’s first response:

It is possible that there will be 128-bit CPUs in the future, but it is not a certainty. The development of CPU architecture is driven by a variety of factors, including performance, power efficiency, and cost. Additionally, the need for increased computational power may be met through other means, such as the use of multiple cores or specialized hardware. As a result, the use of 128-bit CPUs will depend on the specific requirements of various applications and the availability of other solutions.


Hmmm. Saying this is sophomoric would be a slander on sophomores. But rather than reply to it immediately, I took ChatGPT up on its offer to regenerate the response. I clicked the button. ChatGPT thought (erk) for a few seconds and replied:

ChatGPT’s second response:

It is possible that there will be 128-bit CPUs in the future, but it is not clear what the need for such a large word size would be. Current 64-bit CPUs are more than capable of handling the vast majority of computing tasks and the industry has not yet identified a need for CPUs with larger word sizes. Additionally, the development of new technologies such as quantum computing may render traditional CPUs obsolete in the future.


This, in fact, is true, though it was still pretty much an obvious answer to a subtle question. So I clicked the “Regenerate” button again.

ChatGPT’s third response:

It is possible that there will eventually be 128-bit CPUs, but it is not currently a technology that is in widespread use or development. The current standard for most consumer CPUs is 64-bit, and there are also 32-bit CPUs still in use. The need for a 128-bit CPU would likely depend on advancements in technology and a growing demand for more processing power and memory capacity.


Sigh. Drink from the same well and you get the same water. I figured I needed a better question. I had one lined up and ready to go:

Jeff: Aren’t 128-bit registers useful for math?

128-bit registers can be useful for certain types of math, particularly for tasks such as floating-point operations and high-precision computations. Larger registers can increase the precision and accuracy of calculations, as well as allow for more efficient processing of large data sets. However, the need for 128-bit registers would likely depend on the specific application and whether the benefits of larger registers outweigh the cost of implementing them in terms of power consumption and hardware complexity.


I didn’t bother to regenerate this one. So far it seemed to be scraping Google, and while I mostly agreed with its answer, I would hope that Buzzfeed would require something more focused and in-depth. So I mulled it a little more and asked a third rather leading question. Read it carefully. There’s a lollapalooza in there:

Jeff: Wouldn’t the vector registers in the AVX-512 math subsystem make 128-bit registers unnecessary?

AVX-512 (Advanced Vector Extensions 512-bit) is an extension to the x86 instruction set that is designed to improve the performance of floating-point and vector operations. One of the key features of AVX-512 is the use of 512-bit wide vector registers, which can be used for a wide range of scientific, engineering, and financial workloads. The wide vector registers in AVX-512 can be seen as an alternative to 128-bit wide registers, as it allows to perform more operations in parallel with a single instruction, but it doesn’t necessarily make 128-bit registers unnecessary. For example, in certain types of workloads, it might be more beneficial to have a few large 128-bit registers rather than many smaller 512-bit registers. Ultimately, the choice of register size would depend on the specific requirements of the application and the trade-offs between performance, power consumption, and hardware complexity.


Bzzzzzzzzzt! Suddenly ChatGPT’s distant descent from poor ol’ Eliza became obvious. A few large 128-bit registers might be better than a bunch of smaller 512-bit registers. Right? Right?
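
For the record, x86 has had 128-bit registers doing real math since SSE arrived with the Pentium III: the XMM registers are 128 bits wide and chew through four single-precision floats at a whack. A trivial NASM fragment (a quick sketch, not anything from the book) shows the idea:

section .data
        vec1    dd    1.0, 2.0, 3.0, 4.0       ; Four packed single-precision floats
        vec2    dd    10.0, 20.0, 30.0, 40.0   ; Four more

section .bss
        vsum    resd  4                        ; Room for the 128-bit packed result

section .text

global  main

main:
        movups xmm0,[vec1]  ; Load 128 bits (four floats) into xmm0
        movups xmm1,[vec2]  ; Load the second vector into xmm1
        addps xmm0,xmm1     ; Add all four float pairs with one instruction
        movups [vsum],xmm0  ; Store the 128-bit result
        xor eax,eax         ; Return 0 from main()
        ret

It doesn’t print anything; the point is simply that 128-bit registers have been part of the x86 vector subsystem for well over two decades, which is exactly the sort of nuance ChatGPT never got anywhere near.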

I may test it more as time permits. After all, it’s free. But this will have to do for today. Among other things, I write humorous stories about AI. Looks like I just wrote another, without even trying.

Sudowrite would be proud.

A libc Mystery

As most of you know by now, I’m hard at work on the x64 edition of my assembly book, to be called X64 Assembly Language Step By Step. I’m working on the chapter where I discuss calling functions in libc from assembly language. The 2009 edition of the book was pure 32-bit x86. Parameters were passed to libc functions mostly by pushing them on the stack, which required cleaning up the stack after each call, etc.

Calling conventions in x64 are radically different. The first six integer or pointer parameters to any function are passed in registers. (More than six and you have to start pushing them on the stack.) The first parameter goes in RDI, the second in RSI, the third in RDX, and the rest in RCX, R8, and R9. When a function returns a single value, that value is passed back in RAX. This allows a lot more to be done without fooling with the stack.
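
As a quick illustration (a sketch only; it’s not part of the example program below), a call to printf() with a format string and two integer values puts the pointer to the format string in RDI and the two values in RSI and RDX. Because printf() is variadic, the ABI also wants AL to hold the number of vector registers used in the call, which here is zero:

section .data
        fmt     db    "The values are %ld and %ld",10,0   ; Format string, with newline

section .text

extern  printf
global  main

main:
        push rbp            ; Prolog
        mov rbp,rsp

        mov rdi,fmt         ; 1st parameter: pointer to the format string
        mov rsi,25          ; 2nd parameter: first integer value
        mov rdx,17          ; 3rd parameter: second integer value
        xor eax,eax         ; Variadic call: AL = count of vector registers used (none)
        call printf         ; Call libc function printf

        mov rsp,rbp         ; Epilog
        pop rbp

        ret                 ; Return from main()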

Below is a short example program that makes four calls to libc functions: two calls to puts(), a call to time(), and a call to ctime(). Here’s the makefile for the program:

showtime: showtime.o
        gcc showtime.o -o showtime -no-pie
showtime.o: showtime.asm 
        nasm -f elf64 -g -F dwarf showtime.asm -l showtime.lst

I’ve used this makefile for other example programs that call libc functions, and they all work. So take a look:

section .data
        timemsg db    "The timestamp is: ",0
        timebuf db    28,0   ; not used yet
        time1   dq    0      ; time_t stored here.

section .bss

section .text

extern  time
extern  ctime
extern  puts
global  main

main:
        push rbp            ; Prolog    
        mov rbp,rsp

        mov rdi,timemsg     ; Put address of message in rdi
        call puts           ; call libc function puts
               
        xor rax,rax         ; Zero rax
        call time           ; time returns time_t value in rax        
        mov [time1],rax     ; Save time_t value to var time1
        
        mov rdi,time1       ; Copy pointer to time_t value to rdi
        call ctime          ; Returns ptr to the date string in rax

        mov rdi,rax         ; Copy pointer to string into rdi
        call puts           ; Print ctime's output string
        
        mov rsp,rbp         ; Epilog
        pop rbp
        
        ret                 ; Return from main()

Not much to it. There are four steps, not counting the prolog and epilog: the program prints an intro message using puts, then fetches the current time in time_t format, then uses ctime to convert the time_t value to the canonical human-readable format, and finally displays the date string. All done.

So what’s the problem? When the program hits the second puts call, it hangs, and I have to hit ctrl-z to break out of it. That’s peculiar enough, given how many times I’ve successfully used puts, time, and ctime in short examples.

The program assembles and links without problems, using the makefile shown above the program itself. I’ve traced it in a debugger, and all the parameters passed into the functions and their return values are as they should be. Even in a debugger, when the code calls the second instance of puts, it hangs.

Ok. Now here’s the really weird part: If you comment out one of the two puts calls (it doesn’t matter which one) the program doesn’t hang. One of the lines of text isn’t displayed but the calls to time and ctime work normally.

I’ve googled the crap out of this problem and haven’t come up with anything useful. My guess is that there’s some stack shenanigans somewhere, but all the register values look fine in the debugger, and the pointer passed back in rax by ctime does indeed point to the canonical null-terminated text string. The prolog creates the stack frame, and the epilog destroys it. My code doesn’t push anything between the prolog and epilog. All it does is make four calls into libc. It can successfully make three calls into libc…but not four.

Do you have to clean up the stack somehow after a plain vanilla x64 call into libc? That seems unlikely. And if so, why doesn’t the problem happen when the other three calls take place?
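
One thing I haven’t ruled out yet, and it may be nothing: time() takes a single parameter, a pointer to a time_t, and per the convention described earlier that parameter rides in RDI. My listing zeroes RAX before the call, but not RDI, so time() gets whatever the first puts call happened to leave in RDI and will store a time_t through it if it isn’t NULL. Scribbling eight bytes through a stray pointer could conceivably corrupt something libc cares about. A defensive version of that call sequence would pass an explicit NULL:

        xor rdi,rdi         ; Pass NULL so time() won't store through a stray pointer
        call time           ; time(NULL) still returns the time_t value in rax
        mov [time1],rax     ; Save time_t value to var time1, as before

Whether that’s the actual culprit I haven’t yet verified.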

Hello, wall. Anybody got any suggestions?

For Validation, Try Federation

Somebody wrote an obnoxious hate-piece over on The Verge some days back, welcoming Elon Musk to Twitter Hell. The essay is for the most part corrosive nonsense, but the piece does have an insight or two. The primary one is true, and subtle to the point where I doubt most people ever give it much thought: What social networks sell is validation.

In other words, people gather on social networks to feel good about themselves. The network accomplishes this by censoring any voices that disagree with network members. Remember the days when disagreement was a learning opportunity? I do. Even polite disagreement is now “literal violence,” at least to the cohort desperately lacking self-esteem.

What Musk does to Twitter won’t be known for a while. I’m guessing that people will no longer be banned for politely questioning conventional wisdom, like posting links to evidence that Ivermectin actually does have strong antiviral properties. Ditto HCQ. The reason that linking to a peer-reviewed scientific paper gets blanket-bombed as “misinformation” is simple: “Misinformation” now means “anything I or my tribe disagree with.” If Musk can call a halt to that, it will have been worth every nickel of his $44B. What it means, however, is that Twitter will become a network that does not specialize in validating its members by silencing their critics. If those seeking validation flee to another network, that’s a good thing. I generate my own validation. So do most of my friends. I guess not everyone can do that.

The real problem with moderation is that it tends to bias network traffic toward viewpoints the moderators favor. Worse, there’s one body of algorithms to moderate the whole damned network. Unless you’re in the favored cohort, you’re out of luck.

There is something called Mastodon that almost nobody talks about. (More on Mastodon here.) It’s a social network composed of independently hosted social networks, joined loosely through a mechanism called federation. Every instance (which is what they call an individual Mastodon server) can have its own moderation guidelines, and everybody can block anybody they don’t want to hear from. This sounds like the perfect solution: On Mastodon, nobody can hear you disagreeing with them if they don’t want to. Shazam! Validation!

I don’t have time to even join a Mastodon instance, much less host my own. If you’ve had experience there, by all means describe it in the comments. I bring it up here today because of an article I read about Twitter founder Jack Dorsey: He’s creating a new social network to rival Twitter. He’s doing it with federation. It’s called Bluesky, and it just opened registration for beta testers. It uses a protocol developed in-house called the Authenticated Transfer Protocol (ATP).

I’ve been reading the news about Bluesky for the past few days. There’s not much hard information yet, but it sounds a great deal like a slightly more centralized Mastodon. I could be wrong about that. Again, hard data is scarce. I did notice that nowhere in the articles I’ve read is there any significant mention of moderation. That’s a very sore spot for a great many people, primarily those who just want validation, or tribalists who want to limit user perspectives to their own template. One hopes that Dorsey can get past this hunger for censoring The Other, and actually create a space where literally all perspectives can be heard.

We’ll see.

Monthwander


Last night, when I took the dogs out at about 6:20 PM, I looked toward the crescent moon and immediately saw the unmistakable trace of a rocket about to go by above it. Rockets are fast, so rather than run in and try and find my Canon G16 and risk missing it, I pulled out my phone and did my best. It turned out reasonably well, with even a sort of Halloween-y color cast in deep dusk.

It was a SpaceX launch of still more Starlink satellites, from Vandenberg AFB. Before I could get my phone out to take a shot, we saw the first stage drifting down toward a landing on an unmanned landing barge that Elon Musk has whimsically named Of Course I Still Love You.

Although the shot I took wasn’t bad, there is a site detailing the Vandenberg launch schedule, and next time I’m going to be out there with a tripod and my G16.

_…_ _…_

Elon Musk, yeah. He apparently closed the deal to buy Twitter earlier today, and has already fired top management. One of the first corporate directives he’s going to issue is to un-ban all banned users. He’s also going to throttle way back on content moderation, which for Twitter has generally meant censoring anything Twitter’s management doesn’t like.

That may still take some doing. Ironically, if he succeeds in cleaning house along the lines he’s stated, it could mean that several much smaller social networks catering to those who have been banned or censored by Twitter could lose members to a new “big tent” Twitter. We’ll see how that plays out. Musk knows how to get things done, and if anybody can create a social media network that is open to all sides of the political spectrum, it would be him.

_…_ _…_

All this cooks down to some pretty fundamental questions. What are social networks good for? Twitter seems to be a bottomless well of political derangement, which (as you might imagine) I dodge as best I can. I’m active on Twitter because every time I mention my books there, I sell a few. Whatever else I post there is pretty much what I post here now and then as “odd lots,” i.e., links to interesting things, few or none of them having anything to do with politics.

I’ve stated before that Twitter is an “outrage amplifier,” and it’s still true–but then again, any social network can be an outrage amplifier. Twitter seems particularly good at it, which has always puzzled me. Not long back I began to wonder: Is Twitter what it is because so many journalists are on it? Journalism itself (at least outside of vertical market reporting) seems to draw its energy from outrage. If Musk makes the journalist community run screaming from a network it can no longer dominate, what will Twitter become?

Recent news about Meta’s financial crisis suggests that Twitter could in fact become the new Facebook–especially if the old Facebook becomes insolvent. Zuck seems to be pouring all his energy and free cash flow into a virtual world that nobody asked for and nobody wants. Abandoning his Metaverse would be the ultimate humiliation; one has to wonder if he would pull back before the company crashed for lack of cash.

I have an idea that I think I covered here some time back: a social network server to which many independent social networks could connect and trade information according to rules established by each participating network. If people wanted an echo chamber, they could build an echo chamber. If they wanted a wide-open discussion board, they could build that too.

_…_ _…_

I haven’t posted much over the past month because I’ve been beating hard on the 4th edition of my assembly language book. It’s coming out reasonably well, though I really miss the much-maligned Insight debugger, which I featured in the 3rd edition back in 2009. Most Linux debuggers are designed for C and C++ and don’t have an assembly source code view like Insight had. The source for Insight is available online, and if any of you are capable of creating an installable package, I encourage you to do so. I understand that the damned thing is weird internally, containing as it does the whole damned Tcl/Tk interpreter for the sake of its widget set. Or failing that, if anybody can recommend a standalone (i.e., not SASM) Linux debugger with an assembly view, please let me know. I’ve looked at a lot of them, including Nemiver, edb, and DDD, without much success. I’ve thought hard about trying to teach my readers command-line gdb, or even gdb TUI, but naked gdb is a quadruple handful. My book is for absolute beginners, and I’ve got page-count constraints that wouldn’t allow me to teach enough of it to be truly useful.

Insight would be my first choice, but I also described Kdbg in the 2009 edition, and although the source is available and I’ve tried to compile it on modern distros running desktops like KDE Plasma and Linux Mint’s Cinnamon, the make failed for reasons that I don’t understand. Nor do I understand why it was pulled from all the Linux repos to begin with. It was robust and relatively easy to use. Could it be built as an AppImage? That would be way cool.

Beyond that I’m certainly open to suggestions.

More AI Text Generator Freakiness

I tried it again. This time, I used a much more detailed prompt, which I’d written years ago (2014-ish) about a bottle that used to have a genie in it. The genie had been freed, leaving behind…a bottle. And what self-respecting genie would ever live in a non-magical bottle? So a nerdy guy buys a supposedly magical, genie-less bottle at a curio shop. The bottle, it turns out, has a trick: Ask it a question, and it answers.

Alas, the bottle has issues of its own.

I gave the prompt to Sudowrite, and let it follow its nose. Now, the way Sudowrite works is that when you ask it to write a section, it gives you two text blocks, both of which are based on the prompt. You choose the one you like better, and add it to the text that follows the prompt. Then you ask it for another section.

After a couple of go-rounds, I realized that in one of its generated text blocks, Sudowrite was putting together a sex scene. No sale. I chose the other block, which still had enough innuendo to make me uncomfortable. I sensed that in a sense it made a deranged sort of sense: I had described the protagonist as a lonely nerd. So! Toss him into bed with an imaginary girl who (the AI made graphically clear) had all the required female parts.

I stopped there. The first Sudowrite story I posted was in (somewhat) bad taste. I don’t make Obama jokes. Nor do I make Mossad jokes. I might make golem jokes, at least if the golem is the good guy. One reason I tried Sudowrite again today is that I wanted to see if bad taste was a habit or an outlier. It’s starting to sound like a habit.

Here’s the story. Everything up to the first rule is my prompt, taken verbatim from my notes file. I will someday use the concept (of a genie bottle without a genie) in a Stypek & Tuggur adventure, a prequel to Ten Gentle Opportunities. Everything after that is Sudowrite. Still a bit surreal–but if there’s a surreality slider somewhere in Sudowrite, I haven’t found it yet.


Djinn and Tonic

“What’s this?” Chuck Bialek asked the Gizmoids shop owner, and waved the weird, bulbous crystal bottle in the air over the counter. As best Chuck could tell, it was half-full of dirty water.

“Genie bottle,” said the old man. “But somebody let the genie go, so no wishes. Still, if you shake it and let it sit for a minute, it’ll tell your fortune. Used to be a hundred bucks. You can have it for fifty.”

Which meant it was probably worth a buck and a half, tops. Still, Chuck’s grandma had left him almost a million dollars, half of which was now in stocks. The rest was, well, for fun. He’d had a magic 8-ball when he was a kid. It was fun. This might be a reasonable facsimile.

Chuck laid a fifty on the counter, tucked the bottle in his canvas bag, and went back to his flat. After stuffing down a bratwurst and some Cheetos, he shook the bottle hard and set it on the kitchen table. Little by little, the dirt in the water settled out, leaving behind…words.

I miss my genie, read the words. Ha! He wondered how it worked.

“I wish I had a Jeannie to miss,” Chuck said. The nerd business was fun, but…lonely.

Chuck shook the bottle again, and waited.

Trade you a Jeannie for a genie, the dirt-words said.

“Deal,” replied Chuck.

 


His phone rang the next morning.

“This is Jeannie,” a voice said. “I’m with a Mr. Bialek, right?”

“Yep,” said Chuck.

“Trade?” asked Jeannie.

“What do you have in mind?”

“Wishbone,” said the genie.

“I wish you wouldn’t keep doing that,” said Chuck.

The Jeannie-thing giggled. “Make a wish,” she said.

Chuck thought for a bit. “I wish you were wearing a bikini.”

He hung up, and was on his way to the coffee shop when he heard a voice behind him.

“Trade?” asked the genie-Jeannie-thing.

“No!” shouted Chuck, and fled. He might be a nerd, but he was still the last straight guy in the world who hadn’t seen the Girls Gone Wild videos and had no intention of doing so.

He stumbled into the coffee house, and ordered a single-shot espresso with a splash of vanilla syrup. He had just opened a browser window and was trying to find a porn site that didn’t offer a free trial when he heard a voice behind him again.

“Trade?” asked the Jeannie-spirit.

“Are you nuts?” he said. “That’s the second time.”

“Make a wish,” said the genie.

“What does this do to your magic?” asked Chuck. “I mean, every time I make a wish, you get a day older.”

“Rhymes with night,” replied the genie. “So I’m immortal, just like I always wanted.”

“What happened?” asked Chuck. “I mean, most genies are pretty nice, but they had to have their masters’ best interests in mind.”

“My master was a fraternity boy. Couldn’t make a wish without prefacing it with ‘Yo, Genie…'”

“Oh,” said Chuck. “My apartment’s on the third floor. You can stay in the spare bedroom.”

“You’re a nerd,” said the genie. “We could never live together.”

“Right,” said Chuck. “So I need to make a wish for you to be you again. With unlimited magic.”

“I can do that,” said the genie.

“And I’ll need a million dollars. Are you listening?” asked Chuck.

“I don’t have to,” replied the genie. “From now on, I’m invisible to you.”

“This is going to get tiring,” Chuck told his computer screen. “I wish you’d figure out how to be a girl again.”

Will genie make the coffee? will genie make the toast? will genie make Christopher Hitchens read my manuscripts?

Will genie have sex, or merely masturbate?

Will genie remember to take out the trash?

Will genie notice her glasses are on her head?

Will genie give me a million dollars,

even if she does have the power?

Yes, genie will do all of these things.

Will genie kick my ass down the stairs

if I ask for more wishes?

 


Heh. Fersure.