Jeff Duntemann's Contrapositive Diary

The Trouble with Wikis

A week or so ago I bestirred myself and installed MediaWiki on my Web host. I’d been intending to do that for some time, but (as my friend Don put it) my life was ODTAA (One Damn Thing After Another) for a bit. Installing it was a snap. My provider has something called Installatron that did the job, no issues. The software, of course, is free and open-source.

I installed it in part to become more familiar with the MediaWiki system. As usual, when installing something new, I went up to Amazon and checked for books on MediaWiki.

Unless I missed something, there are five.

Plus a few more in French, German, and Japanese. Furthermore, those five books did not all get favorable reviews. The title I was most interested in is now 11 years old and way behind the current release of MediaWiki. (I ordered it anyway, along with O’Reilly’s MediaWiki: Wikipedia and Beyond, which is even older.)

My first question was: Why so few books about software this famous?

The answer came to me slowly: Almost nobody wants to create/maintain/populate their own wiki. MediaWiki is famous for one reason: Wikipedia. I’ve seen a number of other public wikis, including Fandom.com, Conservapedia, Everipedia, WikiHow, Wikispecies, and WikiTree. There is a list on Wikipedia that eyeballs at about 80. Let’s be generous and triple that to account for wikis that Wikipedia didn’t list, and for private wikis. So, say, 250. That’s not much of a market for books. Even 500 installs would not float a print book.

MediaWiki’s online presence has a feature for creating a downloadable PDF version of the MediaWiki documentation, but it’s currently disabled. Sheesh.

Having gone cross-eyed reading about it online, I’ve concluded that MediaWiki is a bit of a hot mess. That said, I should tell you all why I even bothered: I want to create a wiki for my fiction, and especially about the Gaeans Saga, which includes the Metaspace books and the Drumlins books. I’ve done a little wiki editing, and have a couple of decent books on my shelf about creating content on Wikipedia. The trick to creating content on wikis is having a group of content templates and knowing how to use them. If you look at the page source for any Wikipedia article, the problem becomes obvious: The stuff is crawling with templates, and for the most part they’re templates that don’t come with the generic MediaWiki install.

I discovered this by opening an edit window for Wikipedia’s article on the star mu Arae, which in my Metaspace books is the location of Earth’s first colony. I loaded the whole wad onto the clipboard and dropped it into a new page on my MediaWiki instance. A few of the templates were present on MediaWiki. Most were not, and the article incorporated dozens. I went back and lifted the source for 47 Tucanae. Same deal.
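
For the curious: you don’t even need an edit window to do the lift. Any MediaWiki instance will serve raw wikitext if you append action=raw to the page URL. Here’s a minimal FreePascal sketch, assuming FPC 3.2 or later with its stock fphttpclient and OpenSSL socket units; count the {{…}} pairs in what comes back and the template problem stares you in the face:

    {$mode objfpc}{$H+}
    program FetchWikitext;
    uses
      SysUtils, fphttpclient, opensslsockets;  { opensslsockets enables HTTPS }
    var
      Source: string;
    begin
      { action=raw is standard MediaWiki: it returns wikitext, not HTML. }
      Source := TFPHTTPClient.SimpleGet(
        'https://en.wikipedia.org/w/index.php?title=Mu_Arae&action=raw');
      WriteLn('Fetched ', Length(Source), ' characters of wikitext.');
      WriteLn(Copy(Source, 1, 300));  { peek at the template-laden opening }
    end.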

Now, Wikipedia content is available under Creative Commons. Grabbing the articles is easy and legal. After googling around for a while, I found that grabbing the templates, while legal, is not easy. Some templates are actually contained in libraries written in…Lua. I have some sympathy for Lua, which strongly resembles Pascal. It made me wonder, however, why a formatting template needs to make calls into a code library. As best I know, this is something specific to Wikipedia, and is not present in the generic MediaWiki.

I like the overall look of Wikipedia. People are used to it. I’d like to incorporate that design into my own instance of MediaWiki. I wouldn’t need all the templates, though some would be damned useful. That said, I see no reason why some sharp MediaWiki hacker couldn’t gin up an installer for all of Wikipedia’s templates, no matter how many there are. Maybe such a thing already exists, though I think that if it did, I would have found it by now.

There are other projects needing my attention, so I’m going to set this one aside for a while. Obviously, if anybody reading this knows where to find an installable collection of Wikipedia’s templates, give a yell.

The Raspberry Pi Pico…and a Tiny Plug-In Pi

Yesterday the Raspberry Pi Foundation announced the Raspberry Pi Pico, at the boggling temporary low price of…$4US. It’s definitely a microcontroller on the order of an Arduino rather than the high-end 8GB RPi that might stand in for a complete desktop mobo. And that’s ok by me. The chip at its heart is new: the RP2040, a single-chip microcontroller designed to interface with mainstream Raspberry Pi boards, and lots of other things.

[Image: The Raspberry Pi Pico, at an angle]

Now, what caught my attention in the page linked above was the list of partner products made by other firms using the same RP2040 chip. Scroll down to the description of the SparkFun MicroMod RP2040 processor board. It’s still on preorder, but look closely and see what’s there: an edge connector…on a board the size of a quarter! That’s not precisely what I was wishing for in my previous entry, but it’s certainly the right idea.

[Image: The SparkFun MicroMod RP2040 processor board]

As I understand it, SparkFun is turning the RPi-wearing-a-hat on its ear into a hat-wearing-an-RPi. The M.2 interface used in the product is actually a standard developed some years back for connecting SSDs to tiny slots on mobos. I knew about M.2, but wouldn’t have assumed you could mount a CPU add-in board using it. Well, shazam! Done deal.

The RP2040 chip is a little sparse for my tastes. I want something I can run FreePascal/Lazarus on, over a real OS. I don’t see anything in the M.2 spec that would prevent a much more powerful processor board talking to a device (like a keyboard, TV, or monitor) across M.2. The big problem with building a high-end RPi into things is keeping it cool. The Foundation is aware of this, and did a very good job in the $100US Raspberry Pi 400 Pi-in-a-keyboard. (This teardown and review is worth a look if you’re interested in the platform at all. The author of the teardown goosed the board to 2.147 GHz and it didn’t cook itself.)

I fully intend to get an RPi 400, though I’ve been waiting a while to see if there will soon be an RPi 800 keyboard combo with an 8GB board instead of 4GB. Given the price, well hell, I might as well get the 4GB unit until an 8GB unit appears.

So consider my previous post overruled. It’s already been done. And I for one am going to watch this part of the RPi aftermarket very carefully!

The All-Volunteer Federated Encyclopedia of (Really!) Absolutely Everything

My regular readers will recall that I wrote an article in the June/July 1994 issue of PC Techniques, describing a distributed virtual encyclopedia that pretty much predicted Wikipedia’s function, if not the details of its implementation. My discontent with Wikipedia is well-known, and it’s not specific to me: The organization has become political, and editor zealots have various tricks to make their ideological opponents either look bad, or disappear altogether. Key here is their concept of notability, which is Wikipedia’s universal excuse for excluding its ideological opponents from coverage.

In one of the decade’s great hacks, Vox Day created Infogalactic, which is a separate instance of the MediaWiki software underlying Wikipedia and a fair number of other, more specialized encyclopedias. Infogalactic has a lot of its own articles. However, when a user searches for something that is not already in the Infogalactic database, Infogalactic passes the search along to Wikipedia, and then displays the returned results. I don’t know whether or to what extent Infogalactic keeps results from Wikipedia on its own servers. It’s completely legal to do so, and they may have a system that keeps track of frequent searches and maintains frequently searched-for Wikipedia pages in local storage. Or they may just keep them all. We have no way to know.

Infogalactic’s relationship with Wikipedia immediately suggested a form of federation to me, though Infogalactic does not use that term. (Federation means a peer-to-peer network of nodes that are independently hosted and maintained yet query one another.) The Mastodon social network system is the best example of online federation that I could offer. (It’s not shaped like an encyclopedia, so don’t take the comparison too far.) There is something else called the Fediverse, which I have not investigated closely. In a sense, the Fediverse is meta-federation, as it federates already federated platforms like Mastodon. For that matter, Usenet is also a form of federation. It’s been around a long time.

The MediaWiki software is open-source and freely available to anyone. There are a lot of special-interest wikis online. One is about Lego. (Brickipedia, heh.) For that matter, there’s one about Mega Bloks. Hortipedia is about gardening and plants generally. It’s a huge list; give it a scan. You might find something useful.

My suggestion is this: Devise a MediaWiki mod like Infogalactic’s, but take it farther. Have a “federation panel” that allows the creation of lists of MediaWiki instances for searches falling outside the local instance. A list would generally start with the local instance. It might then search instances focusing on related topics. The last item on most lists would be a full general encyclopedia like Wikipedia or Infogalactic.

Here’s a simple example, which could defeat Wikipedia’s notability fetish for biographies and a lot of other things: Begin a search for a given person (or other topic) with Infogalactic, which, remember, searches Wikipedia if its own database doesn’t satisfy the query. So if that search fails, submit the same search to EverybodyWiki, which doesn’t apply notability criteria to biographies. In fact, EverybodyWiki does what I suggested be done a number of years ago: It collects articles marked for deletion on Wikipedia, of which it currently has over 100,000. I’m tempted to post a biography on Wikipedia just to see if, when it’s deleted (and it would be) EverybodyWiki picks it up.
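
To make the plumbing concrete, here’s a FreePascal sketch of the list-walking logic, written against MediaWiki’s real search API (action=query&list=search). The federation list and the bare-bones error handling are my inventions, not anything Infogalactic actually ships, and the first URL below is a hypothetical placeholder for a local instance:

    {$mode objfpc}{$H+}
    program FederationList;
    uses
      SysUtils, fphttpclient, opensslsockets, fpjson, jsonparser;

    { Walk a federation list; return the first wiki that reports a hit.
      Term must already be URL-encoded. }
    function FirstHit(const Wikis: array of string; const Term: string): string;
    var
      i: Integer;
      Reply: TJSONData;
      Hits: TJSONArray;
    begin
      Result := 'No hits anywhere on the list.';
      for i := Low(Wikis) to High(Wikis) do
      begin
        Reply := GetJSON(TFPHTTPClient.SimpleGet(Wikis[i] +
          '?action=query&list=search&format=json&srsearch=' + Term));
        try
          Hits := Reply.FindPath('query.search') as TJSONArray;
          if (Hits <> nil) and (Hits.Count > 0) then
            Exit(Wikis[i] + ' has: ' + TJSONObject(Hits[0]).Get('title', ''));
        finally
          Reply.Free;
        end;
      end;
    end;

    begin
      { Local instance first, then the general encyclopedia of last resort. }
      WriteLn(FirstHit(['https://your-wiki.example.com/api.php',
                        'https://en.wikipedia.org/w/api.php'],
                       'Jeff%20Duntemann'));
    end.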

(As an aside: I just found EverybodyWiki a month or so ago, and I’m surprised I hadn’t heard of it before. It has more than just biographies and is definitely worth a little poking-around time.)

Now, the tough part: How would this be accomplished? I don’t know enough about MediaWiki internals to attempt it myself. There’s an API, and I’ve been surfing through the API doc. There’s even an API sandbox, which is a cool idea all by itself. Alas, there are remarkably few technical books on MediaWiki, and the ones I would be most interested in get terrible reviews. Given how important MediaWiki is, I don’t understand why tech publishers have skated past it. My guess is that few people bother to do more than custom-skin MediaWiki. (There’s a book on that, at least.) If the demand were there, the books would probably happen. If you know enough about MediaWiki modding, I’ll bet you could find a publisher.

I’m thinking about installing MediaWiki on my hosting services, just to poke at and try things on. Hell, I predicted this thing. I should at least know my way around it.

If you’ve done any hacking on MediaWiki, let me know how you learned its internals and what you did, and if there are any instructional websites or videos that I may not have encountered.

Teergrubing Twitter

I’m of two minds about Twitter. Maybe three. Maybe seventeen. I’m on it, and I post regularly, typically two items per day. That’s just how it averages out; I don’t post for the sake of posting, but only when I find something worth linking to.

Why? Two reasons:

  1. More people are on Twitter than cruise blog posts. By posting on Twitter, I make more people aware of me than I do when I post things here on Contra. Every time I tweet a link to one of my books, I sell a few copies.
  2. It’s one of the most gruesomely fascinating phenomena to come out of tech since the Web itself, thirty years ago.

I’ve written about Twitter before. Back in 2019, my proposed solution to Twitter toxicity was to remove the retweet function. That would certainly help, but only to an extent, and not to the extent that I would like. I’ve spent more time on Twitter in the last 18 months since I posted that entry than I did in the 5 years before that. And in doing that…

…I’ve changed my mind.

I generally lurk, but occasionally I join a Twitter rumble to watch how it all unfolds. I never stoop to profanity or unhinged emoting. Here and there I have politely called a few people on their BS. In doing so I made an observation that bears on today’s entry: When I get involved in a ruckus, my follower count goes up. When I just post odd lots, my follower count decays. The reason is simple: Twitter has made itself into a sort of deranged video game. The Analytics panel shows you how many people have looked at your tweets, how many have mentioned you, how many followers you’ve gained or lost, and more. In the same way that Twitter as a whole is an outrage amplifier, the Analytics panel is a vanity amplifier. You have a “score.” The object of the game is to raise your score. And the best way to do that is to create or partake in a ruckus. In fact, the more rucki you launch or dive into, the higher your score will climb.

I’ve thought quite a bit about how Twitter would change simply by removing the Analytics panel, or any other stats on your activity. Even if Twitter would agree to do that (highly unlikely) it would reduce the number of neutrons only modestly. A ruckus feeds the ancient tribal impulse. Tribal Twitter is a game whether or not you have an explicit score.

Very briefly, I wondered how Twitter might change if the platform removed limits on (or at least greatly increased) the allowable size of tweets. Again, it would help a little by changing Twitter’s DNA to be a little more like Facebook or other social networks. Because it takes more time to write longer, more thoughtful entries, people would spend their energy doing that and not trying to destroy one another.

Maybe a little. Or maybe not. Which brings us to the heart of what I’m about to propose: slowing Twitter down. To return to the metaphor of nuclear fission, it would be about inserting a neutron moderator. I don’t mean a control rod (which eats neutrons) but something that merely slows them down and therefore reduces their energy.

I’m reminded of the spam wars before centralized spam suppressors appeared. The idea was to reduce the effectiveness of email spam by slowing the rate at which an email server would accept commands. There are several ways to do this, including sending nonsense packets to the system requesting connections. This was called tar-pitting, which translated directly to the charming German neologism, teergrubing. Spamming works by throwing out a boggling number of emails. Teergrubing fixes that by making the process of throwing out a financially workable number of spam messages too lengthy to bother with.

I don’t know to what extent teergrubing is done today. Doesn’t matter. What I’m suggesting is this: Build a delay into the process of accepting replies to any given tweet. Make this delay increase exponentially as the number of replies increases. First reply, one minute. Second reply, two minutes. Third reply, four minutes, and so on. Once you get up to the eighth reply, the delay is over two hours. By the time a tweet could go viral, the delay would be up in days, not hours. At that point, most of the original outragees would have lost interest and gone elsewhere. Most ordinary Twitter conversations generate only a few replies, and those would arrive in less than an hour, the first several in minutes.
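
For the record, that schedule is just delay = 2^(n-1) minutes for reply n. A throwaway FreePascal loop prints the whole curve; the numbers are mine, obviously, not anything Twitter has ever implemented:

    {$mode objfpc}{$H+}
    program ReplySchedule;
    uses
      SysUtils, Math;
    var
      n: Integer;
      Mins: Double;
    begin
      for n := 1 to 12 do
      begin
        Mins := IntPower(2.0, n - 1);  { 1, 2, 4, 8... minutes }
        WriteLn(Format('Reply %2d waits %6.0f minutes (%.1f hours)',
                       [n, Mins, Mins / 60.0]));
      end;
    end.

By reply 12 the wait is already north of a day, which is the whole point.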

Outrage addicts might try to finesse the system by replying to replies and not the original tweet. This would quickly reduce a ruckus to incoherence, since people trying to read the mess would not be able to tell what tweet a replier was replying to.

Or it might not work at all. Certainly Twitter would not consent to a change like this short of legal action. That legal action may someday arrive. My point is that the heat in any argument dies down when the back-and-forth slows down. Some few diehards may choose to sit out a several days’ delay just to get the last word. But if nobody reads that last word, having the last word loses a lot of its shine.

And Twitter is all about shine.

Delphi Turns 25

Today (or maybe tomorrow, depending on who you talk to) is the 25th anniversary of Borland’s introduction of the Delphi RAD environment for Object Pascal. Delphi changed my life as a programmer forever. It also changed my life as a book publisher for awhile. The Delphi Programming Explorer, a contrarian tutorial book I wrote with Jim Mischel and Don Taylor and published with Coriolis, was the company’s biggest seller in 1995. We did a number of other Delphi books, including a second edition of the Explorer for 32-bit Windows, Ray Konopka’s seminal Developing Custom Delphi 3 Components, and others, including Delphi 2 Multimedia Adventure Set, High Performance Delphi Programming, and the ill-fated and much-mocked Kick-Ass Delphi. We made money on those books. A lot of money, in fact, which helped us expand our book publishing program in the crucial years 1995-1998.

It took OOP to make Windows programming something other than miserable. I was interested in Windows programming from the outset, but didn’t even attempt it while it was a C monopoly that involved gigantic switch statements and horrendous resource files. With OOP, you don’t have to build that stuff. You inherit it, and build on it.

There is an asterisk to the above: Visual Basic had no OOP features in its early releases, and I did quite a bit of Windows BASIC work in it. Microsoft flew a team out to demo it at the PC Techniques offices in late 1990 or early 1991. A lot of Windows foolishness was exiled to its runtime P-code interpreter, and while a lot of people hate P-code, I was used to it from UCSD Pascal and its descendants. What actually threw me back in my chair during the Thunder demo (Thunder being VB’s codename) was the GUI builder. That was unlike anything I’d seen before. Microsoft bought the GUI builder from Tripod’s Alan Cooper, and it was a beautiful and almost entirely new thing. It was Visual Basic’s GUI builder that hammered home my conviction that visual software development was the future. Delphi based its GUI builder on OOP, to the extent that Delphi components were objects written within the VCL framework. I enjoyed VB, but it took Object Pascal within Delphi to make drag-and-drop Windows development object-oriented from top to bottom.

People who came to OOP for the first time with Delphi often think that Delphi was the first Borland compiler to support OOP. Not so: Turbo Pascal 5.5 introduced OOP for Pascal in 1989. Although I wasn’t working for Borland at the time, I was still in Scotts Valley writing documentation for them freelance. I wrote about two thirds of the Turbo Pascal OOP Guide, a slender book that introduced OOP ideas and Object Pascal specifics to Turbo Pascal 5.5 users. A little later I wrote a mortgage calculator product using BP7’s OOP features, especially a confounding but useful text-mode OOP framework called Turbo Vision. I licensed Mortgage Vision to a kioskware vendor, and in doing so anticipated today’s app market, where apps are low-cost but sold in large numbers. I cleared $17,000 on it, and heard from users as late as the mid-oughts. (Most were asking me when I was going to start selling a Windows version. I apologized but indicated I had gone on to other challenges.)

I mention all this history because, after 25 years, a lot of it has simply been forgotten. Granted, Delphi changed the shape of Windows development radically. It did not, however, come out of nowhere.

One of the wondrous things about Delphi development in the late 90s and early oughts (and to this day, as best I know) was the robust third-party market for Delphi VCL components. I used to wander around Torry’s Delphi Pages, marveling at what you could buy or simply download and plug into Delphi’s component palette. I have all of TurboPower’s Delphi VCL products and have made heavy use of them down the years. (They’re free now, in case you hadn’t heard. Some but not all have been ported to the Lazarus LCL framework.) I’ve also used Elevate’s DBISAM for simple database apps, and Raize Software’s DropMaster for drag-and-drop data transfers across the Windows desktop. Those are simply the ones I remember the best. There were many others.

I don’t use Delphi much anymore. I still have Delphi 7, and still use it now and then. The newer versions, no. It’s not because I don’t like the newer versions. It’s because what I do these days is teach “intro to programming” via books and seminars, and I can’t do that with a $1,000 product. Well, what about the Delphi Community Edition? I tried to install that in 2018. The binary installed fine. But the registration process is insanely complex, and failed for me three times for reasons I never understood. Sorry, but that kind of nonsense gets three strikes and it’s out. On the other hand, if I were actively developing software beyond teaching demos, I’d probably buy the current version of Delphi and go back to it. I’m willing to deal with a certain amount of registration kafeuthering, but I won’t put my students through it, especially when Lazarus and FreePascal can teach the essentials of programming just as well.

Nonetheless, Delphi kept me programming when I might otherwise have given it up for lack of time. It allowed me to focus on the heart of what I was doing, not on writing code for user interface elements and other mundane things that are mostly the same in all applications. Back when Delphi was still a beta product, Project Manager Gary Whizin called Delphi OOP programming “inheriting the wheel”. That’s where the magic is, and Delphi is strong magic indeed.
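
For readers who never used Delphi, here’s what inheriting the wheel looks like in practice. This little sketch is written against Lazarus’s LCL (unit StdCtrls), which deliberately mirrors the VCL: a digits-only edit box gets all of TEdit’s window handling, painting, focus, and clipboard behavior for free, and overrides exactly one method:

    unit DigitsOnlyEdit;
    {$mode objfpc}{$H+}

    interface

    uses
      Classes, StdCtrls;

    type
      TDigitsOnlyEdit = class(TEdit)
      protected
        procedure KeyPress(var Key: char); override;
      end;

    implementation

    procedure TDigitsOnlyEdit.KeyPress(var Key: char);
    begin
      if not (Key in ['0'..'9', #8]) then  { allow digits and Backspace }
        Key := #0;                         { swallow everything else }
      inherited KeyPress(Key);
    end;

    end.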

I Wish I Could Pay for Software

Actually, I do pay for software, but not as often as I used to–and the reason is peculiar. This has been especially true since I started using Android on my Samsung Note 4 phone, and more recently, a Galaxy Tab S3.

Now, I still pay for commercial Windows software, like the brand new Affinity Publisher, which might be enough of a competitor to InDesign for me to dump InDesign and be rid of Adobe’s regular copy-protection tantrums. Android apps are a whole ‘nother universe, and in recent years, many of the apps I’ve tried are free–with ads. Used to be, you could choose between having ads displayed, or paying for the app. I’m seeing more and more apps that simply display ads, without any option to pay to remove the ads. I found this puzzling. Why turn down user money?

I’m sure I’m not the first to suggest this, but I have a theory: There’s cash flow in ads. But before I unpack that, some history. Back in the ’90s, software was evolving furiously, often to keep pace with Windows. So we eagerly forked over money every couple of years, sometimes considerable money, for new major releases of Office, WordPerfect, Lotus, and the other bit-behemoths of that era. I’m pretty sure upgrades were a huge part of those firms’ revenues.

Today, not so much. I used Office 2000 from 1999 until 2012. That’s when I bought Office 2007 so I could work on a collaborative book project for which Office 2007 was the minimum requirement. Why did I use Office 2000 for 13 years? It did what I needed it to do, and I was good at it. A friend of mine still uses Office 97, for the same reasons: It does whatever he needs to do (which is nothing exotic) and he knows it inside and out. So Microsoft got his money 22 years ago, and nothing since.

That’s not unethical. Carol and I still use things we got as wedding gifts 43 years ago. The Realistic stereo I bought in 1976 is still our main stereo. On the other hand, firms that used to rely on two- or three-year upgrade cycles are finding that people are using software they’ve had for eight or ten years or more. The big companies’ solution was Software as a Service; i.e., the subscription model. You pay for the software every year, and if you stop paying, they disable it the next time the software phones home to check if you’re a deadbeat or not.

To be charitable: Screw that. My primary objection to SAAS is that the skills I’ve developed on Office (or other packages like InDesign) belong to me. Disable the software I’ve paid for, and you’re basically stealing my skillset. So I’ll have nothing to do with SAAS, and may well use Office 2007 for the rest of my life.

As I expected, pay-once packages like Affinity Publisher are popping up to compete with SAAS products like InDesign. I already have the Atlantis word processor, which actually has features that Word 2007 does not. If I need a more ramcharged spreadsheet, they’re out there. But…why? I like what I have, and currently, what I have is plenty good.

So. Back to Android. Most Android apps are now ad-supported. A few years ago, I bought a few games and some oddments for five-ish bucks each. I’m sure a lot of other Android users did the same thing. But once the vendors get your five bucks, that’s all they ever get. I have some sympathy: They provide updates, which are worth something. I’ve bought InDesign four different times, and Atlantis twice. But even with a user base as large as Android, five bucks doesn’t go very far. Worse, it makes for very unreliable cash flow. The ad business model helps here. What happens is that the vendors of ad-supported software get an ongoing dribble of money from advertisers. The dribble from any single instance of a product is small. Put together fifty or a hundred thousand of those dribbles, though, and you’re talking real money. Better still, pauses in that multitude of dribbles average out into a reasonably predictable cash flow stream.

I dislike ads, especially animated ads, double-especially force-you-to-watch ads, and triple-especially ads with audio. I’ve been suspicious of ads ever since Forbes served up malware through ads on its Web site–after demanding that readers disable their ad blockers. This is still a problem on Android to a great extent, though the mechanisms are complex and far from obvious.

There’s not much to be done about ads on Android apps. The money from selling ads is too good, compared to getting five bucks once and nothing ever again. I avoid malware primarily by installing all updates to the OS and downloading only well-known brand-name apps, and only through the Play store. That’s all anybody can do.

It’s an odd thing to think, but I think it often: Sigh. I miss the days when software actually cost money.

Taming Twitter

I knew Twitter was mostly useless before I ever got an account there. I got the account because the service seemed insanely popular, which I simply could not understand. My account is now four years old, and having mostly lurked in that time I think I finally understand what Twitter is for, and why it’s a problem.

This past week saw another instance of what many call a Twitter lynch mob: Hordes of tribalists, intoxicated with their own outrage, descended upon a group of Catholic high school boys who were waiting for a bus in DC when various kinds of hell broke loose around them. I won’t go over the details here; you can google as much as you like. The incident itself isn’t my point, and I will delete any arguments in the comments over whether they “deserved” the ill-treatment they got. (They did not. If you disagree, disagree in your own space, not mine.)

The point I’m actually making here is that this incident (and countless others like it) would not have happened without Twitter. I’ve been on LiveJournal since 2005, and on Facebook since 2009. I’ve never seen an online lynch mob on either service. I’ve seen plenty of arguments, some of them quite heated, a few of them absolutely insane. None of them “went viral” the way that Twitter lynch mobs go viral.

Part of the underlying problem is a lack of discipline among many journalists. Most of the money has gone out of mainstream news journalism over the past twenty years, and with it went the sort of disciplined, methodical reporting I took for granted before 2005 or so. When you have to get clicks to keep your job and pay your bills, “methodical reporting” means all the other starving journalists will get those clicks before you do.

But bad journalism is mostly an enabling factor. The real mechanism is Twitter’s ability to act as an amplifier of emotion. Until very recently, tweets were limited to 140 characters. That’s room enough to post a link to an Odd Lot (which is most of what I do) and not much else beyond quips, brief questions, quotations, short descriptions of photos and videos, and so on. This means that rational discussion doesn’t take place very often on Twitter. There just isn’t room. Sure, some people make their case using a number of independently posted tweets intended to be read in sequence. Megan McArdle of the Washington Post is very good at this. Alas, the process of creating such a thread sounds mighty tedious to me.

What’s left? Emotion. And what’s the emotion of the day, year, and decade hereabouts? Outrage. And while Twitter can amplify things like humor, cuteness, and gratitude (and occasionally real beauty) what it does best is outrage.

From a height, Twitter is an outrage amplifier. It starts with somebody posting something calculated to outrage a certain demographic. (Innocent posts sometimes trigger Twitter mobs, but that’s uncommon.) Then begins a sort of emotional feedback loop: The outraged immediately retweet the offending post, so that their followers (who would not otherwise have seen it) get to see it. Those followers retweet it to their followers, and so on, until millions of gasping outrage addicts are piling on without knowing anything at all about the original issue that caused the outrage.

The word “amplifier” may not be quite the right metaphor here. Most of us in the nerdiverse have seen videos of a common science demo consisting of a room full of set mousetraps, each with two ping-pong balls carefully placed on the bar. Toss a single ping-pong ball into the room, and it sets off whatever mousetrap it lands on. That mousetrap launches two more balls, which set off two more mousetraps, and a few seconds later there is this chaotic cloud of ping-pong balls flying around the room, until the last mousetrap has been sprung. The demo is a metaphor for a nuclear fission chain reaction, and I think it describes a Twitter mob very well.
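
If you like your metaphors with numbers attached: a retweet cascade is a branching process, and what matters is the product of audience size and retweet probability, the same role the neutron multiplication factor k plays in a reactor. Here’s a toy FreePascal model; every constant in it is invented for illustration:

    {$mode objfpc}{$H+}
    program RetweetCascade;
    uses
      SysUtils;
    const
      F = 200;    { people reached by each retweet (invented) }
      P = 0.006;  { chance that a given viewer retweets (invented) }
    var
      Gen: Integer;
      Audience: Double;
    begin
      { k = P * F. Above 1.0 the ruckus grows every generation;
        below 1.0 it fizzles out on its own. }
      WriteLn(Format('Multiplication factor k = %.2f', [P * F]));
      Audience := F;  { the original tweet's own followers }
      for Gen := 1 to 8 do
      begin
        Audience := Audience * (P * F);
        WriteLn(Format('Generation %d: ~%.0f people exposed', [Gen, Audience]));
      end;
    end.

Kill the retweet button and F collapses to whatever trickle copy/paste produces, which drags k below 1.0. That’s the entire argument, quantified.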

So what do we do about Twitter mobs? We could encourage the victims to lawyer up and start suing the news organizations that tossed the original ping-pong ball, and perhaps Twitter itself. That process is evidently underway with the Covington Kids. But preventing Twitter mobs is simple, if difficult: All it would require is a single change to the Twitter software:

Eliminate retweets.

That’s all it would take. Really. The retweet function is like a neutron emitted by an unstable nucleus. (There are a lot of unstable nuclei in the Twitter system.) Chain reactions are easy to kick off, and difficult to suppress. But without the ability to instantly retweet some expression of outrage, the issue never goes critical. Sure, you can manually copy and paste somebody else’s tweet and tweet it to your own followers. But the sort of people who participate in Twitter mobs are impatient, and lazy. If copy/paste/tweet is work, well, their ADHD sends them on to something else.

Basically, eliminating retweets would turn Twitter from U-235 into U-238. U-238 can’t sustain a chain reaction. Without retweets, neither could Twitter. Problem solved.

Of course, Twitter won’t voluntarily disable retweets. Without retweets, Twitter becomes just another microblogging social network. People would abandon it in droves. However, if a class action against Twitter mounted by victims of Twitter mobs ever got any traction, part of the settlement might include requiring Twitter to disable retweets. If I were the victim of a Twitter mob, that’s what I’d demand. Money wouldn’t hurt. But to fix the problem, retweets would have to go. If that in fact became the end of Twitter, I for one wouldn’t cry too hard.

Twitter is not a common carrier. It attempts to police its own content, though that policing is sparse and rather selectively applied. If it isn’t a common carrier, it can be held responsible for the actions of its members. If its members set out to deliberately destroy private citizens by retweeting slander and doxxing, Twitter should face the consequences. If it were forced to confront the possible consequences, who knows? Twitter might eliminate the Retweet button all by themselves.

Don’t wait up for it. But don’t count it out, either.

Revisiting The All-Volunteer Virtual Encyclopedia of Absolutely Everything

24 years and some months ago, I published an article in PC Techniques, on the END. page, which was where I put humor, crazy ideas, and none-of-the-above. The article was “The All-Volunteer Virtual Encyclopedia of Absolutely Everything!” and as I recall it generated a lot of mail. The idea was this: We should create a way to capture knowledge, even highly eccentric knowledge, in a browsable online encyclopedia. Remember that I had this idea in 1993, when the Web was not so much in its infancy as still in utero, and broadband outside of an office or university was practically unheard of. That’s why I imagined the Encyclopedia as a central index with pointers to encyclopedia articles hosted on machines owned by the authors of the articles, with caching for popular items. You browse the index, you click on an article link, and then retrieve the article text back to your machine as a file via FTP, where it would be rendered in a window in a standard layout. (The now-defunct DMOZ Web directory worked a little like this.) HTTP would work even better, but in 1993 I’d barely heard of it.

I chewed on the idea for several years, and then went on to other things. In 2001, Wikipedia happened, and I felt vindicated, and even though the vision had an utterly different shape, it was still an all-volunteer virtual encyclopedia.

Of absolutely everything, well, not so much.

As good as it is, Wikipedia is still trying to be a paper encyclopedia. You won’t find articles on pickled quail eggs in a paper encyclopedia, because paper costs money, and takes up space. These days, with terabyte disk drives going for fifty bucks new, there’s no reason for an online encyclopedia not to cover everything. Yet Wikipedia still cleaves to its “notability” fetish like superglue; in fact, in reading the discussion pages, I get the impression that they will give up almost anything else but that. My heuristic on the topic is simple and emphatic:

Everything is notable to somebody, and nobody can judge what will be notable to whom.

In other words, if I look for something on Wikipedia and it’s not there, that’s a flaw in Wikipedia. It’s a fixable flaw, too, but I don’t expect them to fix it.

Several people have suggested that my Virtual Encyclopedia concept is in fact the Web + Google. Fair point, but I had envisioned something maybe a little less…chaotic. Others have suggested that I had at least predicted the MediaWiki software, and if Wikipedia won’t cover everything, that’s their choice and not a shortcoming of the machinery behind it.

Bingo.

Some years back I had the notion that somebody should build a special-purpose wiki to hold all the articles that Wikipedia tosses out for lack of notability. I thought about some sort of browser script that would first search Wikipedia for a topic, and if Wikipedia didn’t have it, would then look it up in WikiDebrisdia. I never wrote this up, which is a shame, because something similar to that appeared last year, when Theodore Beale (AKA Vox Day) launched Infogalactic.

It’s a brilliant and audacious hack, fersure: When a user searches Infogalactic (which, like Wikipedia, is MediaWiki-based) for a topic, Infogalactic first searches its own articles, and if the topic isn’t found, then searches Wikipedia. If the topic is available on Wikipedia, Infogalactic brings the article back and serves it to the user, and retains it in a cache for future searches. This is legal and fully in keeping with Wikipedia’s rules, which explicitly allow re-use of its material, though I’m guessing they weren’t imagining it would be used in fleshing out the holes in a competing encyclopedia.

There’s considerably more to Infogalactic than this, but it’s still very new and under active development, and its other features will have to wait for a future entry. (Note that Infogalactic is not concerned with Wikipedia’s deleted articles; that was my concept.) One of the things I find distinctive about it is that it has no notability fetish. Infogalactic states that it is less concerned with a topic’s notability than it is about whether the article is true. That’s pretty much how I feel about the issue: Notability is a holdover from the Age of Paper. It has no value anymore. What matters is whether an article is true in all its assertions, not how important some anonymous busybody thinks it might be.

I’m wondering if the future of the All-Volunteer Virtual Encyclopedia of Absolutely Everything is in fact a network of wikis. There are a number of substantial vertical-market wikis, like WikiVoyage (a travel guide) and WikiSpecies, which is a collection of half a million articles on living things. I haven’t studied the MediaWiki software in depth, so I don’t know how difficult this would be, but…how about a module that sends queries to one or more other wikis, Infogalactic-style? I doubt that Wikipedia has articles on all half a million species of living creature in WikiSpecies, but if a user wanted to know about some obscure gnat that wasn’t notable enough for Wikipedia, Wikipedia could send for the article from WikiSpecies. Infogalactic already does this, but only to Wikipedia. How about a constantly updated list of wikis? You broadcast a query and post a list of all the search hits from all the wikis on the list that received the query.
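
Here’s the broadcast version, sketched in FreePascal against the stock MediaWiki search API. The two endpoints below are real Wikimedia API URLs; the “constantly updated list” is reduced to a hardcoded constant, and a real module would live inside MediaWiki rather than beside it:

    {$mode objfpc}{$H+}
    program BroadcastSearch;
    uses
      SysUtils, fphttpclient, opensslsockets, fpjson, jsonparser;
    const
      { A stand-in for the constantly updated federation list. }
      Wikis: array[0..1] of string = (
        'https://en.wikipedia.org/w/api.php',
        'https://species.wikimedia.org/w/api.php');
    var
      i, j: Integer;
      Reply: TJSONData;
      Hits: TJSONArray;
    begin
      for i := Low(Wikis) to High(Wikis) do
      begin
        Reply := GetJSON(TFPHTTPClient.SimpleGet(Wikis[i] +
          '?action=query&list=search&format=json&srlimit=3&srsearch=gnat'));
        try
          Hits := Reply.FindPath('query.search') as TJSONArray;
          if Hits <> nil then
            for j := 0 to Hits.Count - 1 do
              WriteLn(Wikis[i], ' -> ',
                      TJSONObject(Hits.Items[j]).Get('title', ''));
        finally
          Reply.Free;
        end;
      end;
    end.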

This is the obvious way to go, and it’s how I envisioned the system working even back in 1993. Once again, as I’ve said throughout my career in technical publishing, the action is at the edges. It’s all about how things talk to one another, and how data moves around among them. There’s a distributed Twitter clone called Mastodon with a protocol for communication between servers. That’s the sort of thing I’m talking about.

Bottom line: I admit that “absolutely everything” is a lot. It may be more than any one single encyclopedia can contain. So let a thousand wiki encyclopedias bloom! Let Wikipedia be as much or as little of an encyclopedia as it wants to be. The rest of us can fill in the gaps.


Note well: Theodore Beale has controversial opinions, and those are off-topic and irrelevant to this entry. I mentioned one of his projects, but the man and his beliefs are a separate issue. Don’t bring them up. I will delete your comments if you do.

Contra Turns 20

Egad. Contra turned 20 when I wasn’t looking. Actually, I was looking. What I wasn’t doing was breathing. Enough. At night. I think I have a handle on that problem now, and with any luck at all I’ll be writing more of everything going forward. I’m 50,000 words into my new novel Dreamhealer, and tinkering with the last bits of my free ebook FreePascal From Square One. There’s much to be done, now that my energy is starting to come back.

The anniversary was this past June 5. On June 5, 1998, the very first entry in Jeff Duntemann’s VDM Diary went up on the Coriolis Web server. That first entry was nothing grandiose. I didn’t have permalinks on those early entries, so I’ll quote it here in its entirety:

Spent most of this past week in Chicago at Book Expo America, and saw two remarkable “book on demand” operations of interest to small software developers. Both IBM and Xerox have developed super hi-res, high-speed laser printers that print on continuous roll paper, almost like miniature offset printing presses. Both firms have set up subsidiaries to act as service bureaus, capable of producing high-quality perfect-bound books with glossy four-color covers, quantity one, at a unit price of between $2 and $4, depending on the size of the book. They’re targeting the service at small press, and to keep low-volume books from going out of print entirely. But you and I know the real application here is going to be software documentation for small developers, especially shareware developers whose volumes are smallish and unpredictable. Go take a look: IBM and Ingram’s partnership LightningPrint is at www.lightningprint.com.

Those early entries didn’t have titles, and were not the long-form essays that evolved over time, but instead short, newsy items much like I later came to publish as Odd Lots.

For those who didn’t know me back then, “VDM” was our (carefully chosen) acronym for Visual Developer Magazine, published by The Coriolis Group from 1990-2000. By 2000 most of our energy went into books. The magazine, in competition with increasingly sophisticated (and free) Web pages, ceased to be viable toward the end of 1999. The March/April 2000 issue was the last, and VDM Diary closed down with Visual Developer itself.

By that time, however, I was hooked. On July 25, 2000, I created Contrapositive Diary on my own Web hosting space, where it’s been ever since.

So let’s go back to Contra’s secret origins. Without realizing it (and years before that truly ugly word came to prominence) I had invented blogging. Now, others invented it as well. There is such a thing as independent invention, and in truth the idea seems kind of obvious to me. I’m not sure Slashdot is a blog (I’ve always considered it a news site) but it launched in the fall of 1997, though I don’t remember seeing it until a couple of years later. Justin Hall is almost certainly the first blogger in the sense that we use the word today, having invented the concept back in 1994. Still further back in time, I remember reading a periodic (weekly?) posting on Usenet from Moonwatcher, a chap who posted about the phases of the Moon, eclipses, meteor showers, visible planets, and other things relating to astronomy. This was in 1981 or thereabouts, when I worked at Xerox and had a login to ARPANet. So yeah, it’s an old idea, and an obvious one.

Still, I think of it as the best idea I never had.

Huh? It’s true: Contra was someone else’s idea. My ad sales rep for VDM was Lisa Marie Hafeli, and in the spring of 1998 she approached me with a request: Find a way to publish something short online every day, or close to it. What she wanted was more product mentions, which helped her sell ads to industry firms. I wasn’t entirely sure that such a thing would work as an ad sales tool, but the notion of a daily diary online intrigued me. It took until June to get to the top of my stack. At the time I wasn’t in direct control of our Web presence, so (almost) every day before I went home from work I emailed the text to my webmaster Dave, and he added it to the tail end of the HTML file stored on our Web server.

I didn’t post every day, and not every post was a product mention, but the vehicle proved popular with our readers. I wasn’t surprised over the next couple of years when others did the same thing. As I said, it’s a pretty obvious idea. What did surprise me was the scope of its adoption. By the time the company itself shut down in the spring of 2002, the word “blog” had been coined, and blogs were all over the place.

I edited the HTML files by hand as the sole format until 2005, when I created an account on LiveJournal and used it as a mirror of the manually edited month files. I never really liked LiveJournal as a platform, but it did the job until I installed WordPress on my own hosting space in late 2008, launching on 1/1/2009. I later backported the 2008 month files to WordPress, found it more trouble than it was worth, and stopped there. My LiveJournal account still exists, but I get almost no comments on it and assume the platform is no longer as well-used as it was ten years or so ago.

I don’t post on Contra as often as I used to. I get a lot more traffic and exposure on Twitter and Facebook, and I periodically gather short items originally published on Twitter into Odd Lots. (I invariably add a few bullets that never went to Twitter for various reasons, so you won’t see all my Odd Lots on Twitter.)

That’s the story. I enjoy social networking a lot less than I used to, because so much of what goes around online is flat-out political hatred. Still, it’s one of the few ways to get above the noise and be heard. I’m trying to earn a reputation for not being crazy, but alas, the crazy stuff seems to get the most mileage these days. There are insights in that fact somewhere (a lot of insights, for what it’s worth) but I’m not entirely sure I want to be the one to describe them. I’d prefer a peaceful retirement, whatever it takes. Mostly what it takes is not talking about politics.

That’s been my policy for a long time, with only very occasional lapses. It will be my policy going forward, for as long as I can write at all.

Odd Lots

  • Twitter has gone absolutely off its rocker since Parkland, and now it’s just haters hating anyone who disagrees with them. (No, that’s not new; it’s just never been this bad.) I stumbled across a site called Kialo, which is a kind of digital debate club, in which issues are proposed and then discussed in a sane and (hurray!) non-emotional manner. I myself certainly don’t need another time-sink, but I wanted to bring it to the attention of anyone who enjoys (increasingly rare) reasoned debate.
  • Another interesting approach to political social media is Ricochet, a center-right bloggish system with paid membership required to comment. (You can read it without joining.) No Russian bots, or in fact bots of any kind, and a startling courtesy prevails in the comments. Its Editor in Chief is Jon Gabriel, who used to work for us at Coriolis twenty years ago. Not expensive, and the quality of the posts is remarkable.
  • FreePascal actually has an exponentiation operator: **. That was what FORTRAN (my first language) used, and I’ve never understood why Pascal didn’t have an operator for exponentiation. Better late than never. (A two-line demo follows at the end of this list.)
  • This article doesn’t quite gel in some respects, but it’s as good an attempt as I’ve seen to explain why Xerox never really made much money on the startling computer concepts it originated back in the crazy years of the ’70s. I worked there at the time, and top-down management was responsible for a lot of it, as well as top management that wasn’t computer literate and thought of everything simply as products to be sold.
  • Japanese scientists found that treating the hair follicles of bald mice with dimethylpolysiloxane grew new hair. Dimethylpolysiloxane is used to keep McDonald’s deep fryers from boiling over, and given that Mickey D’s fries are one of my favorite guilty pleasures, I suspect I’ve ingested a fair bit of the unpronounceable stuff. No hair yet, though I keep looking in the mirror.
  • German scientists, lacking a reliable supply of bald mice, have discovered a species of bacteria that not only enjoys living in solutions of heavy metal compounds, but actually poops gold nuggets. How about one that poops ytterbium? I still don’t have any ytterbium.
  • Eat more protein and lift more weights if you’re a guy over 40. Carbs are no food for old men.
  • Evidence continues to accumulate connecting sugar consumption to Alzheimer’s. Keep that blood sugar down, gang. I want to be able to BS with you all well into my 90s. Try cheese as snacks. It’s as addictive as crack(ers.)
  • If in fact you like cheese on crack(ers), definitely look around for St. Agur double-cream blue cheese. 60% butterfat. Yum cubed. A little goes a long way, which is good, because it keeps you from eating too many crack(ers.)
  • And don’t fret the fat. The Lancet has published a study following 135,000 people, and the findings indicate that there is no connection between dietary fat and heart disease. Ancel Keys was a fraud. Ancel Keys was the worst fraud in the history of medical science. How many times do we have to say it?
  • 37,132 words down on Dreamhealer. It’s now my longest unfinished novel since college. (It just passed Old Catholics, which may or may not ever be finished.) Target for completion is 70,000 words by May 1. We’ll see.
  • On March 17th, it will be 60 years since Vanguard 1 made Earth orbit as our 2nd artificial satellite. Probably because it’s so small (a 6″ sphere, not counting antennae) it is now the satellite that’s been in orbit the longest, including those the Russians launched. The early Sputniks & Explorers have all burned up in the atmosphere.
  • I never knew that the parish church of my youth was Mid-Century Modern, but squinting a little I would say, Well, ok. Here’s a nice short visual tour of the church where I was an altar boy and confirmed and learned to sing “Holy God We Praise Thy Name.” It hasn’t aged as well as some churches (note the rusty sign) but some of the art remains startling. I met Carol in the basement of that church in 1969, and will always recall it fondly for that reason alone.
  • Ever hear of Transnistria? Neither had I. It’s a strip of Moldova that would like to be its own country (and has been trying since 1924) but just can’t get the rest of the world to agree. It has its own currency, standing army, and half a million citizens. (I’ll bet it has its own postage stamps, though why I didn’t notice them when I was 11 is unclear.)
  • A guy spent most of a year gluing together a highly flammable model of a musk melon (or a green Death Star, if you will) from wooden matches, and then lit it off. He even drew a computer model, which needed more memory to render than his system had. Despite the bankrupt politics, we live in a wonderful era!
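
As promised in the Odd Lots above, a two-line demo of FreePascal’s ** operator. The float and int64 versions are defined in the Math unit, so it has to appear in your uses clause:

    {$mode objfpc}{$H+}
    program PowerDemo;
    uses
      Math;  { ** is defined here, for float and int64 operands }
    begin
      WriteLn(2 ** 10);            { 1024: the int64 version }
      WriteLn((2.0 ** 0.5):0:4);   { 1.4142: the float version }
    end.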