Jeff Duntemann's Contrapositive Diary


A libc Mystery

As most of you know by now, I’m hard at work on the x64 edition of my assembly book, to be called X64 Assembly Language Step By Step. I’m working on the chapter where I discuss calling functions in libc from assembly language. The 2009 edition of the book was pure 32-bit x86. Parameters were passed to libc functions mostly by pushing them on the stack, which required cleaning up the stack after each call, etc.

Calling conventions in x64 are radically different. The first six integer or pointer parameters to any function are passed in registers. (More than six and you have to start pushing them on the stack.) The first parameter goes in RDI, the second in RSI, the third in RDX, the fourth in RCX, the fifth in R8, and the sixth in R9. When a function returns a single value, that value is passed back in RAX. This allows a lot more to be done without fooling with the stack.

Below is a short example program that makes four calls to libc functions: two calls to puts(), a call to time(), and a call to ctime(). Here’s the makefile for the program:

showtime: showtime.o
        gcc showtime.o -o showtime -no-pie
showtime.o: showtime.asm 
        nasm -f elf64 -g -F dwarf showtime.asm -l showtime.lst

I’ve used this makefile for other example programs that call libc functions, and they all work. So take a look:

section .data
        timemsg db    "The timestamp is: ",0
        timebuf db    28,0   ; not used yet
        time1   dq    0      ; time_t stored here.

section .bss

section .text

extern  time
extern  ctime
extern  puts
global  main

main:
        push rbp            ; Prolog    
        mov rbp,rsp

        mov rdi,timemsg     ; Put address of message in rdi
        call puts           ; call libc function puts
               
        xor rax,rax         ; Zero rax
        call time           ; time returns time_t value in rax        
        mov [time1],rax     ; Save time_t value to var time1
        
        mov rdi,time1       ; Copy pointer to time_t value to rdi
        call ctime          ; Returns ptr to the date string in rax

        mov rdi,rax         ; Copy pointer to string into rdi
        call puts           ; Print ctime's output string
        
        mov rsp,rbp         ; Epilog
        pop rbp
        
        ret                 ; Return from main()

Not much to it. There are four steps, not counting the prolog and epilog: the program prints an intro message using puts(), then fetches the current time in time_t format, then uses ctime() to convert the time_t value to the canonical human-readable format, and finally displays the date string. All done.

So what’s the problem? When the program hits the second puts call, it hangs, and I have to hit ctrl-z to break out of it. That’s peculiar enough, given how many times I’ve successfully used puts, time, and ctime in short examples.

The program assembles and links without problems, using the makefile shown above the program itself. I’ve traced it in a debugger, and all the parameters passed into the functions and their return values are as they should be. Even in a debugger, when the code calls the second instance of puts, it hangs.

Ok. Now here’s the really weird part: If you comment out one of the two puts calls (it doesn’t matter which one) the program doesn’t hang. One of the lines of text isn’t displayed but the calls to time and ctime work normally.

I’ve googled the crap out of this problem and haven’t come up with anything useful. My guess is that there’s some stack shenanigans somewhere, but all the register values look fine in the debugger, and the pointer passed back in rax by ctime does indeed point to the canonical null-terminated text string. The prolog creates the stack frame, and the epilog destroys it. My code doesn’t push anything between the prolog and epilog. All it does is make four calls into libc. It can successfully make three calls into libc…but not four.

Do you have to clean up the stack somehow after a plain vanilla x64 call into libc? That seems unlikely. And if so, why doesn’t the problem happen when the other three calls take place?

Hello, wall. Anybody got any suggestions?

SASM Crashes on “Section” in a Comment

As most of you know, I’m grinding along on the fourth edition of my book Assembly Language Step By Step, updated to cover x64. I’m using the SASM IDE for the example code because it provides seamless visual debugging using a front-end to gdb. Back in 2009 I created the third edition, and incorporated the Insight debugger front end for visual debugging. A month or so after the book appeared, Insight vanished from the Linux world. I tried a lot of debuggers and editors before I discovered SASM. It’s treated me very well.

Until today.

Now, I’ve been programming since 1970, in a lot of languages, on a lot of platforms, and I’ve made a lot of mistakes. Finding those mistakes is what debugging is about. Today, I was working on a short example program for the book. When I finished it, I clicked the Build button. It built as it should. I needed to single-step it to verify something about local labels, but when I clicked the debug button, SASM crashed. As Shakespeare would have put it, SASM died and gave no sign. The whole IDE just vanished. I tried it again. Same thing. I rebooted Linux. Same thing.

Puzzled doesn’t quite capture it. I loaded another example program from the book. It built and debugged without any trouble. I loaded example after example, and they all worked perfectly. Then I copied the source from the malfunctioning example into a file called crashtest.asm, and began cutting things out of it. I got it down to a start label and a SYSCALL to the exit function. Still blew SASM away.

Most of what was left was comments. I did a ctrl-X to cut the comment header onto the clipboard. Save, build, debug–and it worked perfectly. No crash, no errors, no problemo.

Soooooooo…….something in a comment header crashed the IDE? That would be a new one. So I dropped the comment header back into the file from the clipboard and started cutting out lines, one by one. I narrowed it down to one comment line, properly begun with a semicolon and containing no weird characters. The line that crashed SASM was this:

;         .bss sections.

I cut out the spaces and the period. No change. I cut out “.bss”. No change. I was left with the word “sections.” On a hunch, I lopped off the “s”. No change. Then I lopped off the “n”. Suddenly, it all worked.

SASM was crashing on a comment containing the word “section.” I verified by deleting the line entirely and typing it in again. Crash!

I stared at the damned thing for a long time. I loaded a couple of my other examples, and dropped the offending comment header into them. No problems. Twenty minutes later, I noticed something: In crashtest.asm, the fragment of comment header text was below the three section markers:

section .bss
section .data
section .text

; section

Now, in my other examples, the ones that didn’t crash, the comment header was above the three section markers. So I went back to crashtest.asm, and moved the comment header to the very beginning of the file, above the section markers. Suddenly everything worked. No crashes.

WTF? I assembled the offending crashtest binary from the command line without trouble. I loaded it into gdb from the command line and messed with it. No trouble.

I wrote this entry not for answers so much as to provide a report that other SASM users can find in search engines. There are things about SASM that aren’t ideal. Sure. But I’ve never seen it crash before. I’ll see if I can send the crashtest.asm to the people who created SASM. I’m sure it’s just a bug. But it’s the weirdest damfool bug I’ve uncovered in a whole lot of years!

In Pursuit of x64

You may be wondering where I am, given that I haven’t posted a Contra entry for over a month. I didn’t want May to conclude with zero entries posted, so I figured I’d take a break here and get you up to speed.

Here’s the deal: My publisher has asked for a Fourth Edition of Assembly Language Step-By-Step. It’s been thirteen years since the Third Edition came out, so it’s well past time. The idea here is to bring the book up to date on the x64 architecture. In fact, so that no one will mistake what’s going on, the title of the new edition will be x64 Assembly Language Step-By-Step, Fourth Edition. Whether they keep the “x” in lowercase remains to be seen.

So I’m off and editing, writing new code and checking every code snippet in a SASM sandbox, and making sure that I don’t forget and talk about EAX and other 32-bits-and-down entities without good reason. (There are good reasons. Even AH and AL are still with us and used for certain things.) Make no mistake: This is going to be a lot of work. The Third Edition is 600 pages long, which isn’t the longest book I’ve ever written (that honor belongs to Borland Pascal 7 From Square One, at 810 pages) but it’s right up there.

My great fear has been the possibility of needing to add a lot of new material that would make the book even longer, but in truth, that won’t be a huge problem. Here’s why: Some things that I spent a lot of pages on can be cut way back. Good example: In 32-bit Linux, system calls are made through the INT 80H call gate. In the Third Edition I went into considerable detail about how software interrupts work in a general sense. Now, x64 Linux uses a new x64 instruction, SYSCALL, to make calls into the OS. I’m not completely sure, but I don’t think it’s possible to use software interrupts at all in userspace programming anymore. I do have to explain SYSCALL, but there’s just not as much there there, and it won’t take nearly as many words and diagrams.

Oh, and of course, segments are pretty much a thing of the past. Segment management (such as it is) belongs to the OS now, and for userspace programming, at least, you can forget about them. I’m leaving a little description of the old segment/offset memory model for historical context, but not nearly as much as in previous editions.

I also dumped the Game of Big Bux, which doesn’t pull its weight in the explanation department, and isn’t nearly as funny now as it was in 1990. But have faith: The Martians are still with us.

My guess is that from a page count standpoint, it will pretty much be a wash.

It’s going to take me awhile. I don’t know how long, in truth. Especially since I am going to try to keep my fiction output from drying up completely. The book will slow me down, but (for a change) the publisher is not in a huge hurry and I think they’ll give me the time I need. I have 56,000 words down on The Everything Machine, and don’t intend to put it on ice for months and months. I’m not sure how well that’s going to work. We’ll see.

Odd Lots

  • Pertinent to my last two entries here: City Journal proposes what I proposed two years ago: to reduce the toxicity of social media, slow it down. My version was exponential delays on replies and retweets to replies and retweets, until those delays stretch to fifteen minutes or more. Like a nuclear reactor control rod, that would slow the explosion down until the hotheads cooled off or got bored and went elsewhere. City Journal instead suggests that Twitter insist on a minimum of 280 characters per post. That might help some, but if the goal is to slow down viral posts, eliminate the middleman and just slow down responses until “viral” becomes so slow that further response simply stops.
  • A statistical study of mask use vs. COVID-19 outcomes found no correlation between mask use and better outcomes, but actually discovered some small correlation between mask use and worse outcomes. Tough read, but bull through it.
  • While not as systematic as the above study, an article on City Journal drives another nail in the coffin of “masks as infection prevention.” Graph the infection rates in states with mask mandates and states with no mask mandates and they come out…almost exactly the same.
  • Our Sun is getting rowdy, and getting rowdier earlier than expected. Cycle 25 is starting out with a bang. Recent cycles have been relatively peaceful, and nobody is suggesting that Cycle 25 will be anything close to the Cycle 19 peak (1957-58), which was the most active sunspot max in instrumental history. What Cycle 25 may turn out to be is average, which means 20 meters may start to become a lot more fun than it has been in recent (slow) years.
  • And this leads to another question I’ve seen little discussion on: To what extent are damaging solar storms correlated to sunspot peaks? The huge solar storm of 1921 took place closer to the sunspot minimum than the maximum. The legendary Carrington event of 1859 took place during the fairly weak Cycle 10. As best I can tell, it’s about individual sunspots, and not the general state of the Sun at any point in time.
  • NASA’s Perseverance Mars rover caught a solar eclipse, when Phobos crossed the disk of the Sun as seen from Perseverance. The video of the eclipse was sped up, but it really is a startling image, especially if you know a little about Phobos, which is decidedly non-spherical.
  • I found this very cool: An online, Web-based x86/x64 assembler/disassembler. Although intended for computer security pros, I found it a lot of fun and it may turn out to be useful here and there as I begin to revise my assembly book for the fourth time.
  • Skipping sleep can lead to putting on belly fat, which is absolutely the worst place to have it. Get all the sleep you can, duh. Sleep is not optional.
  • How many stars are there in the observable universe? It’s a far trickier and subtler calculation than you might think. But the final number looked familiar to me, and might look familiar to people who do low-level programming.

Problems with SASM on Linux Mint

I’m scoping out a fourth edition of my book, Assembly Language Step by Step. I got wind of a simple FOSS utility that could be enormously useful in that effort: SASM (SimpleASM), which is an IDE created specifically for assembly-language work. It’s almost ideal for what I need: Simple, graphical, with a surprisingly sophisticated text editor and a graphical interface to GDB. It works with NASM, my assembler of choice. I want to use it as the example code IDE for the book. I installed it without effort on Windows, which is why I decided to use it. But I want to use it on Linux.

Alas, I’ve been unable to get it to install and run on Linux Mint 19 (Tara) using the Cinnamon desktop.

I’ve installed a lot of things on Linux Mint, all of them in the form of Debian packages. (.deb files.) I downloaded the SASM .deb file for Mint 19, and followed instructions found on the Web. There is a problem with dependencies that I just don’t understand.

I got it installed once but it wouldn’t run. I uninstalled it, and then it refused to reinstall.

Keep in mind that I am not a ‘leet Linux hacker. I’m a teacher, and most of what I teach is computing and programming for newcomers. The problem may be obvious to Linux experts but not to me. Most of the software I’ve installed on Mint came from repositories. SASM is a .deb download.

So. Does anybody else use it? If you’ve got Mint on a partition somewhere, could you try downloading it and installing it? I need to know if the problem is on my side of the screen or the other side.

Thanks in advance for any advice you might offer.

Odd Lots

  • I got caught in an April Fools hoax that (as my mother would say) sounded too true to be funny: That Tesla canceled all plans to produce its Cybertruck. (Read the last sentence, as I failed to do.) I like Musk; he has guts and supports space tech. About his Cybertruck concept, um…no. It looks like an origami, or else something that escaped from a third-shelf video game. The world would go on without it, and he might use the money to do something even cooler, whatever that might be.
  • Oh, and speaking of Elon Musk: He just bought almost 10% of Twitter, to the tune of about $3B. He is now the biggest outside shareholder. This is not a hoax, and I wonder if it’s only the beginning. Twitter is famous for suspending people without explaining what they did wrong, sometimes for things that seem ridiculously innocuous. A major shareholder could put pressure on Twitter’s management from the inside to cut out that kind of crap. It’s been done elsewhere. And boy, if anybody can do it, he can.
  • Nuclear energy has the highest capacity factor of any form of energy, meaning the highest percentage of time that energy producers spend actually producing energy. I knew that from my readings on the topic. What shocked me is that there is in fact an Office of Nuclear Energy under the DOE. I’m glad they exist, but boy, they hide well.
  • The Register (“Biting the hand that feeds IT”) published a fascinating article about how C has slowly evolved into an Interface Definition Language (IDL). C was never intended to do that, and actually does a pretty shitty job of it. Ok, I’m not a software engineer, but the way to build a new operating system is to define the IDL first, and work backwards from there. C is now 50 years old, sheesh. It’s time to start again, and start fresh, using a language (like Rust) that actually supports some of the security features (like memory protection and safe concurrency) that C lacks. This is not Pascal sour grapes. I’m studying Rust, even though I may never develop anything using it. Somehow, it just smells like the future.
  • Drinking wine with food (as I almost always do) may reduce your chances of developing type 2 diabetes. It’s not taken up in the article, but I have this weird hunch that sweet wines weren’t part of the study. Residual sugar is a real thing, and I’m drinking way less of it than I did 20 years ago.
  • People have been getting in fistfights over this for most of a century, but establishing Standard Time year-round may be better than year-round Daylight Saving Time. I’m mostly neutral on the issue. Arizona stays on Standard Time year-round, and we like it fine. The problems really occur at high latitudes, where there isn’t much daylight in winter to begin with, so shifting it an hour in either direction doesn’t actually help much.
  • There is Macaroni and Cheese Ice Cream. From Kraft. Really. I wouldn’t lie to you. In fact, I doubt I would even imagine it, and I can imagine a lot.
  • Optimists live longer than pessimists–especially older optimists. Dodging enough slings and arrows of outrageous fortune somehow just makes the whole world look brighter, I guess.
  • Finally, some stats suggesting that our hyperpartisan hatefest online has pushed a lot of people out of political parties into the independent zone–where I’ve been most of my post-college life. 42% of Americans are political independents, compared to 29% who are Democrats and 27% who are Republicans. I’m on Twitter, but I don’t post meanness and (as much as possible) don’t read it. And if Mr. Musk has his way with them, I may be able to post links to ivermectin research without getting banned.


The Raspberry Pi Pico…and a Tiny Plug-In Pi

Yesterday the Raspberry Pi Foundation announced the Raspberry Pi Pico, at the boggling temporary low price of…$4US. It’s definitely a microcontroller on the order of an Arduino rather than the high-end 8GB RPi that might stand in for a complete desktop mobo. And that’s ok by me. The chip at its heart is new: the RP2040, a single-chip microcontroller designed to interface with mainstream Raspberry Pi boards, and lots of other things.

Raspberry-Pi-Pico-at-an-angle-500x357.png

Now, what caught my attention in the page linked above was the list of partner products made by other firms using the same RP2040 chip. Scroll down to the description of the SparkFun MicroMod RP2040 processor board. It’s still on preorder, but look close and see what’s there: an edge connector…on a board the size of a quarter! That’s not precisely what I was wishing for in my previous entry, but it’s certainly the right idea.

17720-MicroMod_RP2040_Processor_Board-04.jpg

As I understand it, SparkFun is turning the RPi-wearing-a-hat on its ear, into a hat-wearing-an-RPi. The M.2 interface used in the product is actually a standard developed some years back for use in connecting SSDs to tiny slots on mobos. I knew about M.2, but wouldn’t have assumed you could mount a CPU-add-in board using it. Well, shazam! Done deal.

The RP2040 chip is a little sparse for my tastes. I want something I can run FreePascal/Lazarus on, over a real OS. I don’t see anything in the M.2 spec that would prevent a much more powerful processor board talking to a device (like a keyboard, TV or monitor) across M.2. The big problem with building a high-end RPi into things is keeping it cool. The Foundation is aware of this, and did a very good job in the $100US Raspberry Pi 400 Pi-in-a-keyboard. (This teardown and review is worth a look if you’re interested in the platform at all. The author of the teardown goosed the board to 2.147 GHz and it didn’t cook itself.)

I fully intend to get an RPi 400, though I’ve been waiting awhile to see if there will soon be an RPi 800 keyboard combo with an 8GB board instead of 4GB. Given the price, well hell, I might as well get the 4GB unit until an 8GB unit appears.

So consider my previous post overruled. It’s already been done. And I for one am going to watch this part of the RPi aftermarket very carefully!

Proposal: A New Standard for Encloseable Small Computers

Monitors are getting big. Computers are getting small. I think I’ve mentioned this idea before: a cavity in a monitor big enough to hold a Raspberry Pi, with the monitor providing power, video display, and a couple of USB ports for connecting peripherals like mice, keyboards, and thumb drives. Several of my Dell monitors have a coaxial power jack intended for speaker bars, and a USB hub as well. I’ve opened up a couple of those monitors to replace bad electrolytics, and as with most computer hardware, a lot of that internal volume is dead space.

The idea of a display with an internal computer has long been realized in TVs, many of which come with Android computers inside. That said, I’ve found them more a nuisance than useful, especially since I can’t inspect and don’t control the software. These days I outsource TV computing to a Windows 10 Intel NUC sitting on the TV cabinet behind the TV.

The top model of the Raspberry Pi 4, with 8 GB RAM, is basically as powerful as a lot of intermediate desktops, with more than enough crunch for typical office work: Web, word processing, spreadsheets, etc. With the Debian-based Raspberry Pi OS (formerly Raspbian) and its suite of open-source applications, you’ve got a desktop PC. More recently, the company has released the Raspberry Pi 400, which is a custom 4GB RPi 4 built into a keyboard, with I/O brought out the back edge. (In truth, I’d rather have it built into a display, as I am extremely fussy about my keyboards.) Computers within keyboards have a long history, going back to (I think) the now-forgotten Sol-20 or perhaps the Exidy Sorcerer. (The Sol-20 appeared in 1976, the Sorcerer in 1978.)

What I want is breadth, which means the ability to install any of the modern small single-board computers, like the Beaglebone and its many peers. Breadth requires standardization, both in the monitor and in the computer. And if a standard existed, it could be implemented in monitors, keyboards, printers, standalone cases, robot chassis, and anything else that might be useful with a tiny computer in its tummy.

A standard would require both physical and electrical elements. Electrical design would be necessary to bring video, networking, and USB outside the enclosure, whatever the enclosure is. (I reject the bottom-feeder option of just leaving a hole in the back of the enclosure to bring out conventional cables.) This means the boards themselves would have to be designed to mate with the enclosure. What I’m envisioning is something with a card slot in it, and a slot spec for video, network, i2s, and USB connections. (GPIO might not be available through the slot.) The boards themselves would have slot connectors along one edge, designed to the standard. The redesigned boards could be smaller and thinner (and cheaper) without the need for conventional video, network, audio, and USB jacks. (Network connectors are increasingly unnecessary now that many boards have on-board WiFi and Bluetooth antennas.) Picture something like the Raspberry Pi Zero with edge connectors for I/O.

Defining such a standard would be a minor exercise in electrical engineering. The big challenge would be getting a standards body like ANSI interested in adopting it. The Raspberry Pi Foundation has the engineering chops, obviously, and once a standard has been created and proven out, groups like IEEE or ANSI might be more inclined to adopt it and make it “official.”

I understand that this might “fork” the small-board computing market between GPIO boards and non-GPIO boards. Leaving the GPIO pads on the opposite edge of the board is of course possible, and would allow the board to be enclosed or out in the open, or inside some other sort of enclosure that leaves room for GPIO connections. A big part of the draw of the small boards is the ability to add hardware functionality in a “hat” that plugs into the GPIO bus, and I don’t want to minimize that. I think that there’s a market for non-GPIO boards that vanish inside some larger device or enclosure that provides jacks for connections to the outside world. The Raspberry Pi 400 is an excellent example of this, with GPIO header access as well. What I’m proposing is a standard that would allow a single enclosure device to be available to any board designed to the standard.

Ok, it would be hard–for small values of hard. That doesn’t mean it wouldn’t be well worth doing.

Delphi Turns 25

Today (or maybe tomorrow, depending on who you talk to) is the 25th anniversary of Borland’s introduction of the Delphi RAD environment for Object Pascal. Delphi changed my life as a programmer forever. It also changed my life as a book publisher for awhile. The Delphi Programming Explorer, a contrarian tutorial book I wrote with Jim Mischel and Don Taylor and published with Coriolis, was the company’s biggest seller in 1995. We did a number of other Delphi books, including a second edition of the Explorer for 32-bit Windows, Ray Konopka’s seminal Developing Custom Delphi 3 Components, and others, including Delphi 2 Multimedia Adventure Set, High Performance Delphi Programming, and the ill-fated and much-mocked Kick-Ass Delphi. We made money on those books. A lot of money, in fact, which helped us expand our book publishing program in the crucial years 1995-1998.

It took OOP to make Windows programming something other than miserable. I was interested in Windows programming from the outset, but didn’t even attempt it while it was a C monopoly that involved gigantic switch statements and horrendous resource files. With OOP, you don’t have to build that stuff. You inherit it, and build on it.

There is an asterisk to the above: Visual Basic had no OOP features in its early releases, and I did quite a bit of Windows BASIC work in it. Microsoft flew a team out to demo it at the PC Techniques offices in late 1990 or early 1991. A lot of Windows foolishness was exiled to its runtime P-code interpreter, and while a lot of people hate P-code, I was used to it from UCSD Pascal and its descendents. What actually threw me back in my chair during the Thunder demo (Thunder being VB’s codename) was the GUI builder. That was unlike anything I’d seen before. Microsoft bought the GUI builder from Tripod’s Alan Cooper, and it was a beautiful and almost entirely new thing. It was Visual Basic’s GUI builder that hammered home my conviction that visual software development was the future. Delphi based its GUI builder on OOP, to the extent that Delphi components were objects written within the VCL framework. I enjoyed VB, but it took Object Pascal within Delphi to make drag-and-drop Windows development object-oriented from top to bottom.

People who came to OOP for the first time with Delphi often think that Delphi was the first Borland compiler to support OOP. Not so: Turbo Pascal 5.5 introduced OOP for Pascal in 1989. Although I wasn’t working for Borland at the time, I was still in Scotts Valley writing documentation for them freelance. I wrote about two thirds of the Turbo Pascal OOP Guide, a slender book that introduced OOP ideas and Object Pascal specifics to Turbo Pascal 5.5 users. A little later I wrote a mortgage calculator product using BP7’s OOP features, especially a confounding but useful text-mode OOP framework called Turbo Vision. I licensed Mortgage Vision to a kioskware vendor, and in doing so anticipated today’s app market, where apps are low-cost but sold in large numbers. I cleared $17,000 on it, and heard from users as late as the mid-oughts. (Most were asking me when I was going to start selling a Windows version. I apologized but indicated I had gone on to other challenges.)

I mention all this history because, after 25 years, a lot of it has simply been forgotten. Granted, Delphi changed the shape of Windows development radically. It did not, however, come out of nowhere.

One of the wondrous things about Delphi development in the late 90s and early oughts (and to this day, as best I know) was the robust third-party market for Delphi VCL components. I used to wander around Torry’s Delphi Pages, marveling at what you could buy or simply download and plug into Delphi’s component palette. I have all of TurboPower’s Delphi VCL products and have made heavy use of them down the years. (They’re free now, in case you hadn’t heard. Some but not all have been ported to the Lazarus LCL framework.) I’ve also used Elevate’s DBISAM for simple database apps, and Raize Software’s DropMaster for drag-and-drop data transfers across the Windows desktop. Those are simply the ones I remember the best. There were many others.

I don’t use Delphi much anymore. I still have Delphi 7, and still use it now and then. The newer versions, no. It’s not because I don’t like the newer versions. It’s because what I do these days is teach “intro to programming” via books and seminars, and I can’t do that with a $1,000 product. Well, what about the Delphi Community Edition? I tried to install that in 2018. The binary installed fine. But the registration process is insanely complex, and failed for me three times for reasons I never understood. Sorry, but that kind of nonsense gets three strikes and it’s out. On the other hand, if I were actively developing software beyond teaching demos, I’d probably buy the current version of Delphi and go back to it. I’m willing to deal with a certain amount of registration kafeuthering, but I won’t put my students through it, especially when Lazarus and FreePascal can teach the essentials of programming just as well.

Nonetheless, Delphi kept me programming when I might otherwise have given it up for lack of time. It allowed me to focus on the heart of what I was doing, not on writing code for user interface elements and other mundane things that are mostly the same in all applications. Back when Delphi was still a beta product, Project Manager Gary Whizin called Delphi OOP programming “inheriting the wheel”. That’s where the magic is, and Delphi is strong magic indeed.