Jeff Duntemann's Contrapositive Diary

Chrysanth WebStory Is Not Free

Because, as best I can tell, Zoundry Raven is abandonware (it hasn’t been updated in almost four years), I’ve been sniffing around for a client-side blog editor that’s still alive and kicking. I came across something peculiar the other day, and it highlights a trend in small-scale commercial software that I find extremely annoying: hiding your pricing structure and obfuscating your business model.

The product in question is Chrysanth WebStory. I went up to the firm’s Web site to see what it is, what it does, and what it costs. Figuring out what it is was not easy. Figuring out what it does was easier, though I keep getting the creeping impression that I don’t have the whole story. Figuring out what it costs is impossible, apart from the near certainty that it is not free. (More on that shortly.)

When I evaluate commercial software, I do a certain amount of research before I even download the product. I look for a company Web site. I look for buzz, in the form of online discussion and product reviews posted by individuals on their own blogs, and not sites supported by ads. I make sure I understand how the company makes money (one-time cost? subscription?) and how much money is involved. Only then do I download the software and give it a shot.

The first red flag with WebStory is that there is almost no buzz online. The free download is available all over the place, but almost no one has anything to say about it. The site itself is extremely stingy with hard information. I managed to dope out that WebStory is really a blogging service: a free client-side editor app connects to the company’s servers, where blog entries are stored in a database. From that database you can feed one or more blogs hosted elsewhere, or a blog hosted on the firm’s own servers.

There are two license levels for the service, casual and professional. The casual license is limited, and to activate it you must present a certain unstated number of undefined “credits.” Here’s where it gets a little freaky: To find out more about the service’s cost you have to establish an account with WebStory, which involves handing them an email address and creating a password.

Read that again: You have to create an account before you can even find out what the service costs. Nowhere on the public portions of the site do I see any mention of what credits cost, nor what the professional license costs. It’s true that they do specify that credits can be earned by writing reviews of the product, but for people who would just prefer to pay for the service, there’s no clue at all. The service is thus “free” in the sense that you can use it without paying money for it as long as you keep reviewing it and earning credits. (Or something.) In my view, it doesn’t matter if you are required to pay in money or credits. Paying anything at all for the Chrysanth WebStory service means that it is not free.

The almost complete lack of discussion of the product online makes me wonder if more than a dozen people are actually using it. The online forums have 14 posts total, across all forum topics. Discussion of the product in other online venues is virtually absent. Of the handful I found, this one was not reassuring.

I do not object to paying for software or online services. I do it all the time. I have a lot of sympathy for developers who want to explore new business models and ways to make money. I can also understand that linking a piece of client-side software to a server-side system is one way to eliminate software piracy as an issue. None of that bothers me in the slightest. What I object to is the secrecy. Tell me up front and in big type: What does your product/service cost?

And how in any weird dimension of the multiverse can it help sales to keep the price a secret?

Odd Lots

  • The Big Honking Sliding Puzzle Project continues, and today Carol and I are mostly stuffing boxes downstairs. Mover guys coming Tuesday. The carpeting is coming out on Wednesday, and the plastic tarps will go up. Thursday they drill holes in the slab and start pumping gooey stuff underneath to stabilize the soil and raise the slab to where it originally was. We are shopping for new carpeting, and will begin choosing new paint colors tomorrow morning. The lower level will not be back in livable shape until mid-January, but when it is it will be much improved.
  • I do not do walled gardens. I absolutely do not do walled gardens. This gentleman from Harvard Law School has done a good job capturing my unease with vendor-controlled hardware and especially software.
  • Reader Nick DeSmith sends a pointer to a wonderful site on numeric-readout vacuum tubes of various species, from humdrum nixies to one I had never heard of before: A Compactron-based micro-CRT with ten guns. I consider Nixies at least to be steampunk-possible, since there’s no physics involved that wasn’t understood in 1900. Not sure they’ve been used in the steampunk canon so far; if they have, let me know.
  • There were giant beavers during the Pleistocene. There have been talking beavers on TV in the past, though they weren’t all that huge. Now there’s an angry giant beaver. Don’t piss one off unless you’re wearing the right overalls.
  • I’ll meet your giant, jeans-eating beaver and raise you a giant cricket so big it eats carrots! (Thanks to Esther Schindler for the link.)
  • If giant beavers or giant crickets aren’t your passion, how about miniature forests of old-growth moss that may be thousands of years old? Such are found in Antarctica, and by spotting nuclear test fallout debris along the length of their stalks, we can see how slowly they grow. Think slow. (Thanks to Frank Glover for the link.)
  • I keep tools and even a wi-fi bridge node in ammo cans. Why not wine?
  • The many faces of Superman. (Thanks once again to Frank Glover for the link.)
  • This has some steampunk resonance, but (oldster that I am; how old were you in 1966?) I keep hearing an endless loop in the back of my head: “Batfan! Batfan! Batfan, Batfan, Batfan. Na na na na na na na na na na Batfan!”

Minty Failness

I gave it a good shot and I tried, honestly I did. But Canonical’s Unity UI simply doesn’t work for me. It’s obvious that Canonical is trying to create a single UI that will serve end-user computing from top to bottom. It’s just as obvious to me (now that I’ve had six weeks or so to play around with a Droid X2) that there is no single “end-user computing” anymore. Desktops are fundamentally different from smartphones, or anything else (tablets, possibly; we’ll see) that is primarily tap-and-consume. I’m having no trouble working the Android UI on my phone, and Android habits don’t intrude on my desktop synapses. I’m not confused or in any way slowed down by the differences between the two, any more than I’m confused about the differences between a shovel and a rake.

So if Unity is all I get under Ubuntu, Ubuntu has to go. Others seem to agree with me, and at times the discussion gets disturbingly violent. Online I’m seeing that huge numbers of people are fleeing Ubuntu for Linux Mint, which I’d barely heard of a year ago. I have to smile a little bit, because Linux Mint is basically Ubuntu pulled back to a variation on the GNOME 2.32 interface. The upcoming release (Mint 12) will move to GNOME 3, which worries me a little (I like GNOME 2), but I’ve seen word that Mint 12 will allow users to have something very like the old UI–which is precisely what Canonical did not do with Ubuntu and Unity. It was Unity or the highway, and boy, it’s bumper-to-bumper out there.

There’s an enormous issue lurking here: why are we suddenly tossing older and much-loved UIs away with nary a glance over our shoulders, when there’s no compelling reason to adopt one of the new models? Programmers like to create Shiny New Stuff, fersure. I, in turn, don’t like to change the way I interact with the machine I use unless such changes make me a lot more effective. So far, the costs of relearning ordinary tasks far outweigh the fairly paltry benefits for me.

I’ll take up that issue eventually. In the meantime, I’ve hit the highway and installed Linux Mint 11 Katya in its own partition here on the quad core. The OS looks great and works the way I’m used to working. I have some minor quibbles, like the failure of the Software Manager to tell me when it’s done installing something. Ubuntu does this well, but Mint installs and gives no sign. This was critical when I installed WINE, since (because WINE is not an app, strictly speaking) it’s tricky to determine whether WINE was fully and correctly installed. Running Software Manager again and selecting WINE still shows “not installed,” so I suspect something went wrong.
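
This isn’t how Software Manager works internally, and it’s nothing Mint-specific; it’s just a minimal sketch of one way to double-check from outside the GUI, assuming a Debian-style package database and a package actually named “wine” (adjust the name if your repository calls it something else):

    #!/usr/bin/env python3
    # Minimal sketch: ask dpkg whether a package is fully installed.
    # Assumes a Debian/Ubuntu-derived system (like Mint); the package
    # name "wine" is an assumption -- check your repository's naming.
    import subprocess

    def package_status(name):
        """Return dpkg's status string for a package, or 'not found'."""
        result = subprocess.run(
            ["dpkg-query", "-W", "-f=${Status}", name],
            capture_output=True, text=True,
        )
        return result.stdout.strip() if result.returncode == 0 else "not found"

    if __name__ == "__main__":
        # A fully installed package reports "install ok installed".
        print("wine:", package_status("wine"))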

Small stuff. The big deal is that Mint doesn’t work well with the integrated graphics on my EVGA NForce e-7150/630i Core 2 Quad motherboard. The default graphics drivers worked, but looked clunky and didn’t support effects. Installing the recommended proprietary NVIDIA drivers produced weird graphics failures, including windows refusing to render once they grew past a certain size. (Some windows would not render at all, and simply remained blank and white even when first instantiated.) The supposedly experimental NVIDIA 173 drivers worked better, but still fail on certain apps, especially Stellarium, which worked exactly once and has come up with a blank, black window every time since. I’m not willing to give up Stellarium, so at this point Linux Mint is on hold while I wait for Mint 12 Lisa.

Linux Mint has supposedly become the 4th most popular OS on the planet. It’ll be interesting to see if that continues to be the case once they cut in the mandatory GNOME 3 upgrade. I’ll give GNOME 3 the same consideration I gave Unity, but I’m also looking closely at the Xfce UI and Xubuntu. It’s going to be an interesting year in the Linux world. I’m keeping all my old Linux installer .iso files, trust me.

Odd Lots

  • Maybe it’s some of the recent solar storms (the sunspot counts were not spectacularly high), but I heard both Guyana and the Cayman Islands on 17m the other day–the first time I’ve heard any significant life on that band in several years.
  • I have yet to find an Android ebook reader app that will open and render an MS .lit file, of which I have several. No surprise: Having blown an early and promising start in ebook reader software, MS has recently announced that it is withdrawing the app. Reader is actually a nice piece of work, and the first ebook reader program I used regularly. DRMed .lit books are now just noise, and the rest of them will have to be translated by something like Calibre. DRM, especially when it’s abandoned, trains people to locate cracks and become pirates. Way to go, guys.
  • SanDisk just announced a thumb drive about the size of its own USB connector cap. 4, 8, or 16GB. I’ve now broken two thumb drives by leaving them plugged into the rear edge of a laptop and then tipping the laptop back. If that’s a common problem, this is definitely the solution.
  • What do you do with the Moon once you rope it down? (Watering it would be interesting, though Mars needs it more.)
  • This guy thinks like I do. Just ask Carol. (Thanks to Michael Covington for the link.)
  • I recently found a PDF describing the first computer I ever programmed for money. It was a…1 MHz…8080. It cost a boggling number of 1979 dollars, so Xerox ended up using most of the initial production run in-house. The 3200 cast a long shadow: I got so used to sitting in front of it that when I built a computer table later that year for my S100 CP/M system, I made it just high enough that the keyboard was precisely as far off the floor as the 3200’s, a height that I use in computer tables to this day.
  • How long did it take you to figure out what this really was? (Thanks to Pete Albrecht for the link.)
  • Russian President Medvedev has taken a liking to ReactOS, a long-running and mostly ignored attempt to create a driver-compatible, win32-friendly (via WINE) open source Windows clone. He’s suggesting that the Russian government fund it. Now if Medvedev can convince Putin, we could have quite a project on our hands.
  • I’d never thought much about how you recycle a dead refrigerator. Now I know.
  • Begorrah! Zombies are not a new problem. (Thanks to Frank Glover for the link.)
  • And if that machine gun in your hollow leg won’t slow them down, send them into sugar crash.

Goggling Google Goggles

As at least ten people have written to tell me by now (though Eric the Fruit Bat gets credit for being the first), Google has a project targeted at recognizing things in the physical world and looking them up online, just as I wistfully wished for in my September 17, 2011 entry: Google Goggles. I vaguely recall hearing of the product on its first release, which (because it was for Android) was not something I could fool with on Windows.

There’s even a word for the general concept, though it’s not one I would use: augmented reality. I’m not looking for things to augment reality so much as simply document it–but in this age of exaggeration, I guess that’s pretty much the same thing.

Google Goggles is a mobile app currently available for Android and iPhone. You aim your smartphone (assuming it has a camera, as virtually all do) at something and tap a button. The phone takes a photo, and then (I assume) there’s a conversation with the Google mothership to see if the photo resembles anything already in the recognition database. The app is free, at least for Android, and I’ve been having some good fun with it, trying to see what its limits are. Here’s my report:

Google Goggles recognized the following things:

  • A bottle of Coke Zero.
  • A conventional painting of Jesus Christ.
  • A conventional painting of St. Francis of Assisi.
  • Two different contemporary paintings of Ben Franklin.
  • A bottle of Campus Oaks Old Vine Zinfandel.
  • The Colonel Sanders portion of the KFC logo. (Without “KFC”.)
  • The Virginia Cavaliers alternate logo.
  • The iconic Rolling Stones tongue logo.
  • The Insane Clown Posse logo.
  • The Dave Matthews Band logo.
  • The Hieroglyphics band logo.

It did not recognize the following things:

  • Me. No clue about my standard publicity photo, as seen in my blog header, even though it’s logged in Google Images.
  • A headshot of Isaac Asimov, also found on Google Images. I guess I don’t feel so bad.
  • QBit. (It states clearly that animals generally aren’t recognized.) It did say that he resembles a poodle, a kitten, and two bunnies. Goggles isn’t the first entity that thought QBit was a poodle, though I won’t mention the kitten part to him.
  • My Celtic peat cross. It said the cross resembled several tall, skinny women dressed in black. I can almost see that.
  • The Nike swoosh. Failed four times. Now that surprised me.
  • A tape measure.
  • A fork. It thought the fork resembled the Statue of Liberty.
  • A knife. It thought the knife resembled a white bunny.
  • A 430-ohm, 2-watt carbon resistor. It thought it resembled the Canadian flag.
  • A cordless telephone handset.
  • My Weber gas grill.
  • A pair of headphones. It said my headphones resembled a wristwatch.
  • A screwdriver, though it did say my screwdriver resembled photos of other screwdrivers.

I’m reasonably happy with this record, considering that Goggles is more a proof-of-concept than anything close to what I want for documenting (ok, awright already, augmenting) reality. It does seem to prefer things that are enormously popular. My first suspicion was that Goggles would not recognize anything that did not include OCR-able text, but most of the logos tested have no text, nor did the paintings of Jesus, St. Francis, and Franklin. Goggles had an impression that QBit was a small white animal, and there were flickers of recognition of a screwdriver. So far, so good. Cripes, it’s only 2011.

So. Share your success stories, if you have any. I’m modestly impressed.

Annotating Reality

We’ve had evening clouds here for well over a week. Maybe ten days. I’ve lost count, but I may well have to kiss off seeing that supernova in M101. That’s a shame, because I’ve downloaded the Google Sky Map app to my new Android phone, and I want to try it out under the stars.

The app knows what time it is and where you are, and if you hold the phone up against the sky, it will show you what stars/planets/constellations lie in that part of the sky. Move the phone, and the star map moves to reflect the phone’s new position. How the phone knows which way it’s pointed is an interesting technical question that I still need to research, but let it pass: The phone basically annotates your view of the sky, and that’s not only useful, it suggests boggling possibilities. I’m guessing there are now apps that will identify a business if you point your phone at it, and possibly display a menu (the food kind) or a list of daily sales and special deals. With a rich enough database, a phone could display short history writeups of historical buildings, identify landforms for hikers, and things like that.
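
Presumably the pointing question comes down to the phone’s compass and accelerometer supplying an azimuth and an altitude, which then get converted into the coordinates a star catalog uses. What follows is not Sky Map’s code, just a minimal sketch of that coordinate math, assuming you already have the observer’s latitude and the local sidereal time:

    import math

    # Sketch only: convert where a phone is pointing (altitude/azimuth)
    # into right ascension and declination for a star catalog lookup.
    # Azimuth is measured from north through east; lst_hours is local
    # sidereal time. No refraction or other corrections are applied.
    def altaz_to_radec(alt_deg, az_deg, lat_deg, lst_hours):
        alt, az, lat = (math.radians(v) for v in (alt_deg, az_deg, lat_deg))

        # Declination from the standard spherical-triangle relation.
        dec = math.asin(math.sin(alt) * math.sin(lat) +
                        math.cos(alt) * math.cos(lat) * math.cos(az))

        # Hour angle via atan2, then RA = LST - HA (both in hours).
        y = -math.sin(az) * math.cos(alt)
        x = math.sin(alt) * math.cos(lat) - math.cos(alt) * math.sin(lat) * math.cos(az)
        ha_hours = math.degrees(math.atan2(y, x)) / 15.0
        ra_hours = (lst_hours - ha_hours) % 24.0
        return ra_hours, math.degrees(dec)

    # Example: due south, 45 degrees up, from latitude 39 N, LST = 6h.
    # Prints roughly (6.0, -6.0): right on the meridian, declination -6.
    print(altaz_to_radec(45.0, 180.0, 39.0, 6.0))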

That mechanism is not an original insight of mine; Vernor Vinge described almost exactly that (and much more) in his Hugo-winning 2006 novel Rainbows End. Most of my current boggle stems from not expecting so much of it to happen this soon. When I read the book back in 2006 I was thinking 2060. We are well on our way, and may be there by 2040. (Vinge himself said 2025, but me, well, I’m a severe pessimist on such questions. How long have we been waiting thirty years for commercial fusion power?)

In general terms, I call this idea “annotating reality.” In its fully realized form, it would be an app that will tell me in very specific terms (and in as much detail as I request) what I’m looking at. I do a certain amount of this now, but with the limitation that I have to know how to name what I’m looking at, and that’s hit-or-miss. I have an excellent visual vocabulary in certain areas (tools, electronic components, wheeled vehicles, aircraft) and almost none in others (clothes, shoes, sports paraphernalia, exotic animals). I was 25 before I’d ever heard the term “lamé” (metallic-looking cloth) and had no idea what it was when I saw it mentioned in one novel or another. I had indeed seen lamé cloth and lamé women’s shoes, but I didn’t know the word. It’s more than the simple ignorance of youth. As much as Carol and I are involved in the dog show scene, I still see dog breeds here and there that I don’t recognize. (Is that a Bergamasco or a Swedish Vallhund?) Even my core competence has limits: I received a Snap-On A173 radiator hose tool from Uncle Louie’s estate, and if it hadn’t had Snap-On’s part number on it I doubt that I’d know what it was even today, because I don’t work on cars.

I want something that lives in my shirt pocket and works like Google Images in reverse: Show it the image and it gives you the text description, with links to longer descriptions, reviews, and shopping. This is a nasty computational challenge; much worse, I’m guessing, than query-by-humming. (I’ve been experimenting with Android’s SoundHound app recently. Nice work!) Dual-core smartphones won’t hack it, and we’ll need lots more bandwidth than even our best 4G networks can offer.
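
I have no idea what Google actually runs on the back end, but the general shape of the problem is clear enough: boil each image down to a feature vector, then find the nearest stored vectors and return their labels. Here’s a toy sketch, with random vectors standing in for whatever a real feature extractor would produce, and with labels borrowed from my test list purely as examples:

    import numpy as np

    # Toy "database": one feature vector per labeled thing in the world.
    # Real systems compute these from the image itself (edges, colors,
    # learned features); random vectors here are only placeholders.
    rng = np.random.default_rng(42)
    labels = ["Coke Zero bottle", "Rolling Stones tongue", "Nike swoosh", "screwdriver"]
    database = rng.normal(size=(len(labels), 128))

    def lookup(query_vec, database, labels, top_k=3):
        """Return the labels whose feature vectors best match the query
        (cosine similarity): Google Images in reverse, in miniature."""
        db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
        q_norm = query_vec / np.linalg.norm(query_vec)
        scores = db_norm @ q_norm
        best = np.argsort(scores)[::-1][:top_k]
        return [(labels[i], round(float(scores[i]), 3)) for i in best]

    # Pretend we photographed something that looks a lot like entry 0.
    query = database[0] + rng.normal(scale=0.1, size=128)
    print(lookup(query, database, labels))

The hard parts, of course, are a feature extractor that makes similar-looking things land near each other and a database big enough to cover billions of objects; the nearest-neighbor search itself is the easy bit.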

But we’re working on it. Facial recognition may be the worst case, so I have hopes that the same algorithms that can discriminate between almost-identical faces can easily tell a tubax from a soprillo. I can’t imagine that identifying the Insane Clown Posse band logo is all that hard–unless, of course, you don’t follow rap. (I don’t.) Bp. Sam’l Bassett did some clever googling and identified Li’l Orby for me, but as with the Insane Clowns logo, the problem isn’t so much drawing distinctions as building the database. Pace Sagan, there are billions and billions of things right down here in the workaday world. Giving them all names may be the ultimate exercise in crowdsourcing. But hey, if we can do Wikipedia in forward, we can do it in reverse. C’mon, let’s get started–it’s gotta be easier than fusion power!

UPDATE: Well, if I read Bruce Sterling more I’m sure I’d have known this, but Google’s already started, with Google Goggles. I downloaded the app to the Droid X2, and sure as hell, it knew I was drinking a Coke Zero. The app said clearly that it doesn’t work on animals, but when I snapped QBit it returned photos of three white animals as “similar,” including a poodle, a kitten, and two bunnies. Close enough to warrant a cigar, at least in 2011. More as I play with it. (And thanks to the six or seven people who wrote to tell me!)

Amazon’s Print Replica

A few days ago, Mike Ward tipped me off to a new ebook format coming from Amazon: Print Replica. The new format is a lot like PDF, in that it presents a fixed page layout that cannot be reflowed, only panned and zoomed. A lot of people have been scratching their heads over it, but some things were almost immediately obvious to me:

  • Amazon will some time (reasonably) soon release their long-rumored high-res color tablet, capable of displaying fixed-format color page layouts at high quality.
  • Amazon wants a piece of the digital textbook market.
  • The whole point of the format is time-limited DRM.
  • And the whole point of time-limited DRM is to prevent any least possibility of a used ebook textbook market.

I’ve spent a couple of days sniffing around for details, though not much is out there yet. The format in question is .azw4, and you can buy some titles in the new format right now. However, .azw4 ebooks will only render on Kindle for Windows 1.7 and Kindle for Mac 1.7–and only in the US. It’s not only a great deal like PDF; it is PDF, inside a proprietary wrapper. For the moment, it seems that publishers submit a conventional print-image PDF to Amazon, and Amazon places it inside the wrapper.
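
I haven’t pulled the wrapper apart myself, but if that’s right, the PDF’s own magic bytes ought to be findable in the raw file. Here’s a quick sanity check, assuming nothing about the wrapper beyond the PDF payload appearing verbatim somewhere inside it (DRMed copies may well not cooperate):

    # Rough sanity check: does an .azw4 file carry an embedded PDF?
    # Pure assumption that the payload appears verbatim; if the wrapper
    # encrypts or compresses it, this will find nothing.
    import sys

    def find_pdf_payload(path):
        with open(path, "rb") as f:
            data = f.read()
        start = data.find(b"%PDF-")     # PDF header magic
        end = data.rfind(b"%%EOF")      # last end-of-file marker
        if start == -1:
            return None
        return start, (end + len(b"%%EOF")) if end > start else None

    if __name__ == "__main__":
        span = find_pdf_payload(sys.argv[1])
        if span is None:
            print("No PDF header found.")
        else:
            print("PDF payload appears to run from byte", span[0], "to", span[1])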

I’m pretty sure that Print Replica is Amazon’s version of Nook Study, which I mentioned in my April 18, 2011 entry. Nook Study is also a DRM wrapper around a PDF. The DRM is draconian and hated by nearly everyone who’s ever tried it. I’ve never seen evidence that Nook Study is being adopted broadly, but if Amazon is imitating it, that market must have begun to move.

If so, it’s probably the only segment of the publishing market that is moving right now, where “moving” means “better than marginally profitable.” Textbooks are the cash cows of the publishing business, and because college education is a monopoly market for books, students shrug and pay $100 a copy, often considerably more. There’s very little competition and almost no choice. The prof assigns the book and that’s that. The only shopping possible is for cheaper used copies.

The argument made for digital textbooks is that they are less bulky and can be cheaper than printed textbooks, but cheaper here means $80 as opposed to $120. The argument against is that the legal waters are still very murky on used ebook sales. The doctrine of first sale makes it legal to sell used print textbooks, though there are wrinkles involving importation. Current case law for software suggests that license agreements (even ones that can’t be examined before the sale) may prohibit resale of a physical boxed software product, like AutoCAD. It’s pretty clear that if ebooks are eventually considered software, first sale may no longer apply. To be certain, publishers want textbooks to vanish once each term is over, so that they cannot be resold irrespective of future legal decisions. Once most textbooks are ebooks, every sale is a new, cover-price sale, and if time-limited DRM is taken at face value, once the term is over, the book goes poof. (And whaddaya bet that that $80 e-text will be $85 next year, and $95 the year after that?)

I still have about a quarter of my college textbooks and still refer to them occasionally, most recently Listen, by Nadeau and Tesson (1972). It’s hard to imagine not having any of the books I studied back then (granting that I only kept the better ones), but it’s sure starting to look like that’s the future. It’s also hard to think of a redder flag to wave in front of the nascent ebook piracy scene than an $80 price tag. As I’ve said many times, I’m glad I got my degree in the ’70s, when a term cost $600 and you could keep your books forever.