Dissecting Three Classic Automatic Proofreaders

I’ve been thinking about type-in programs again. In particular, I’ve been thinking about one of the features many magazines and books provided for type-in programs that I never actually saw back when I was a youth typing programs in: automatic proofreader programs that would provide verification codes for the program as you typed it in, thus saving you multiple passes through the program trying to figure out why it was giving you wrong answers.

In poking around through the Internet Archive’s collections, I’ve found three of note and in this article I’ll be picking them apart.

SWAT: Strategic Weapon Against Typos

I encountered the SWAT system from publications associated with SoftSide Magazine, which focused on the TRS-80, the Apple, and the Atari home computers. These have generally been a bit before the time I focus on, though I really do owe the Atari home computers a more thorough investigation. The earliest attestation of the system I’ve found is in the June 1982 issue, and it provides implementations for all three of its target systems.

SWAT was a program intended to be appended to the program that it was to check; one would then start running the program from that point instead of running the program proper. It would then compute a simple checksum of every byte in the program by adding them up and then printing them out in groups. You would then check these codes against a separate, shorter listing that provided the codes a correct listing would produce. If they didn’t match, one edited the program until they did.

This is somewhat interesting because this is much closer to how we would organize such a utility in this day and age. The program would be read in, and a SWAT code table would be printed out. The other systems we will see in this article essentially modify the code editor and require checking as one types.

SWAT takes three parameters: the boundaries of the program to check, the maximum number of lines per chunk (default 12), and the target number of bytes per chunk (default 500). It then scans through the program as it exists in memory, producing a running sum of every byte in the program, modulo 676. Once it reaches the end of a line, it checks whether it has hit the maximum number of lines or exceeded the byte target. If it has, it emits a line on the SWAT table indicating the range of lines, the total number of bytes, and the resulting sum. Instead of printing the sum as a number between 0 and 675, it emits it as two letters. (676 is, after all, 26*26.) The first letter is the “tens digit” of the result in base 26, and the second letter is the remainder.
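In modern terms, the whole scheme fits in a few lines. Here is a sketch in Python (the function name is mine, and I am summing a raw byte string where the real SWAT walked the tokenized program in memory):

```python
def swat_code(chunk):
    """Compute a SWAT-style code: sum the bytes modulo 676, then
    render the result as two letters in base 26."""
    total = sum(chunk) % 676
    # First letter is the "tens digit" in base 26; second is the remainder.
    return chr(ord('A') + total // 26) + chr(ord('A') + total % 26)
```

The bytes of HELLO, for instance, sum to 372, which renders as OI.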

One interesting thing about this is that it does not operate on the actual text the user typed. The BASICs for these three systems analyze and reformat the instructions so that they may be executed more efficiently at run time (a process that documentation of the time often called crunching, but which modern writers would call tokenizing), and it is the tokenized form of the program that is summarized. This meshes extremely well with Applesoft BASIC, because its tokenizer also removes all user-supplied formatting, which means that all program lines are converted into a single canonical form. The TRS-80 preserved all user formatting, which meant that the program had to be entered exactly as printed to match the SWAT codes. The Atari systems were particularly unusual—they normalized input lines as the Apple did, but quirks of their tokenization process meant that how lines were tokenized could depend on the order in which they were entered, so skipping around in a program while entering it or editing typos along the way could actually corrupt your SWAT codes. Fortunately, there was a procedure for normalizing a program, and SWAT simply required users to perform it before running any checks.

As a checksum, this mostly did what it needed to, but it wasn’t ideal. Beyond the collisions inherent in any short checksum, a simple sum of bytes will not catch transposition of characters, and for programs with a lot of DATA statements, transposition was the most dangerous and difficult-to-identify problem a user was likely to cause. Summing the collapsed tokens, however, did mean that any misspelling of a word BASIC recognized would be immediately obvious, altering not only the final sum but even the length of the line. For the kinds of programs that SoftSide tended to publish, though, this was entirely adequate. Their programs tended to be pure BASIC and would not have large amounts of machine code or graphical data within them.

That privilege would go to Compute!’s Gazette, which focused on the Commodore line, whose machines also required much more aggressive use of machine code and memory-mapped I/O to function.

Compute!’s Automatic Proofreader (1983-1986)

Compute!’s Gazette started out as a magazine for the VIC-20 and the Commodore 64. In October 1983 they introduced a pair of programs that provided automatic proofreading support for their type-in listings. The tighter focus of the magazine—and the close similarity of the operating systems of the two machines, even at the binary level—allowed the editors to provide tools that hooked much more deeply into the machine.

All the Commodore 8-bit computers provided a uniform interface for basic I/O operations, and also provided a number of points where the user could replace core functionality with custom routines. This low-level interface—which Commodore called the KERNAL—allowed a lot of work to be done at the machine code level and still run acceptably across the entire line.

This program worked by copying itself into a block of memory that was only used for tape I/O and which was commonly used by BASIC programs as scratch space for small machine language programs. A simple BASIC loader copied it into place and then ran a routine that attached the bulk of the program to the KERNAL’s character-input routine. This routine, interestingly, wasn’t called when the user pressed a key; instead, once a line had been entered, the screen-editor logic decided which part of the screen constituted that line and then provided the contents of that line as input, followed by the RETURN key that kicked it all off.

This proofreader would read characters and add their codes to a running 8-bit total, wrapping around as necessary and ignoring spaces. When the RETURN key was detected, it would stash the output state, move the cursor to the upper left corner of the screen, print out the final sum (from 0 to 255), and then set the cursor back the way it was. As a checksumming algorithm, this had the same problem with undetected transpositions that SWAT did, and it was also less reliable about misspelled keywords (since this scan happened before tokenization). On the plus side, a new code was generated for every line of text and you could check your work as you typed, or list an entire block and check it by going to the top of the program block and repeatedly pressing RETURN to evaluate each line.
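As a sketch in Python (with ASCII standing in for the PETSCII codes the screen editor actually delivered, and a function name of my own invention):

```python
def proofreader_v1(line):
    """Compute a 1983-style check code: an 8-bit running sum of the
    character codes, ignoring spaces."""
    total = 0
    for ch in line:
        if ch != ' ':
            total = (total + ord(ch)) & 0xFF  # wrap around at 8 bits
    return total
```

Because addition commutes, swapping two characters leaves the code unchanged: 10 PRINT and 10 PRITN both come out to 238.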

Early versions of the proofreader had two editions, one for the VIC-20 and one for the Commodore 64, but the only actual difference between the versions was that they called a routine in the BASIC ROM to convert the byte into a decimal number, and the BASIC ROM was mapped to a different part of memory in the two machines. The API for the functions was identical, and indeed the BASICs were so similar that this was the same routine, in the end.

Later editions of this proofreader ultimately unified the two versions, using the original value of the “character read” vector that the proofreader hooked as a switch to decide which address to call to print a decimal number. This added a dozen bytes or so to the final program, but even on the extremely cramped VIC-20 this was a cost that could be easily paid.

However, the tighter binding to the operating system produced some unique drawbacks as well. The CHRIN routine the proofreader extended was actually called for all kinds of character input, not just program lines. As a result, running a program with the proofreader active would corrupt the program’s display with handy check codes for every response the user gave to an INPUT statement. Worse, it would do the same for textual data read off of the disk or tape. Of course, tape input would never get that far; once the tape routines started using their temporary storage, they would trash the memory holding the proofreader, and the system would begin trying to execute random temporary data as code and probably crash extremely hard.

Compute!’s Automatic Proofreader (1986-)

Over the next few years, Compute!’s Gazette got more and more sophisticated programs in its lineup—many approaching or exceeding commercial quality—and it also got several more systems it needed to support. In February 1986, they updated their proofreader to use a more sophisticated technique. While they were at it, they also addressed all the shortcomings I listed above.

The most difficult issue to address was where to put the proofreader so that it would not be touched by the running system during normal operation. They fixed this by pushing the start of BASIC’s program memory forward 256 bytes and dedicating the freed space to the proofreader. However, this was a different place in memory on each of the five machines they supported, so they also needed to patch the program after loading so that its addresses pointed to the right place. The necessary information for patching turns out to be largely supplied in a portable way by the KERNAL, so this is not as heinous as it sounds, but it does still require the most sophisticated BASIC loader I have seen.

The other system-specific issues were solved by extending the “tokenize a line of BASIC text” function instead of the “read a character” function. This also lets the proofreader intervene less frequently and guarantees that it processes an entire line of text at once. User input and file I/O aren’t intercepted, and with the program relocated to the old start of BASIC RAM instead of the tape buffer, tape I/O works fine too.

The final—and, for the user, the most important—change was to use a more sophisticated checksum algorithm that can actually reliably flag swapped characters and make it much less likely for typos to cancel each other out:

  1. The checksum is a 16-bit unsigned integer, and its initial value is the line number being processed.
  2. The line is preprocessed by removing all spaces that are not between quotes. So, for instance, 10 PRINT "HELLO WORLD!" becomes 10PRINT"HELLO WORLD!"
  3. Add the byte value of each character to the checksum, but before adding it, multiply it by its position in the line after extraneous blanks are removed. So, for our sample line, the checksum starts at 10, then gets 49*1 and 48*2 added for the line number 10, then 80*3 for the P in PRINT, and so on.
  4. XOR the high and low bytes of the checksum together to produce the final 8-bit checksum.
  5. Express the checksum as a two-letter code. This is basically a two-digit hexadecimal number, but the least significant digit comes first and instead of using the traditional 0123456789ABCDEF digits, it instead uses the letters ABCDEFGHJKMPQRSX.
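The steps above can be sketched in Python (with ASCII standing in for PETSCII, which agrees for unshifted letters, digits, and punctuation; the function names are mine):

```python
DIGITS = "ABCDEFGHJKMPQRSX"  # step 5's replacement for 0123456789ABCDEF

def strip_spaces(line):
    """Step 2: remove all spaces that are not between quotes."""
    out, in_quotes = [], False
    for ch in line:
        if ch == '"':
            in_quotes = not in_quotes
        if ch != ' ' or in_quotes:
            out.append(ch)
    return ''.join(out)

def proofreader_v2(line_number, line):
    total = line_number                            # step 1
    for pos, ch in enumerate(strip_spaces(line), start=1):
        total = (total + ord(ch) * pos) & 0xFFFF   # step 3, 16 bits unsigned
    code = (total >> 8) ^ (total & 0xFF)           # step 4
    return DIGITS[code & 0x0F] + DIGITS[code >> 4] # step 5, low digit first
```

The position weighting is what catches transpositions: with this reconstruction, the sample line 10 PRINT "HELLO WORLD!" checks out as EE, while the transposed 10 PRITN "HELLO WORLD!" produces CE instead.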

This scheme was sufficiently effective that they never modified it afterwards and it continued in use until Compute! stopped publishing type-in programs in the early 1990s. That is a solid pedigree.

After the jump, I will dissect the sophisticated BASIC loader that was used to make the same core program work on five different computer models, and then present my reconstruction of the proofreader itself.



2018 Compilation and Review

2018 has come and gone, so it’s time for me to do a summary post and collection of my work on Bumbershoot over the year.


The various projects I did on this blog in 2018 are now collected for download in a single zip file. 2018 was marked by a series of larger projects rather than a swarm of small programs. I had four sizable projects I worked through line by line and built from first principles:

In addition to those, there were a handful of smaller programs created along the way to test my build systems or my grasp of specific hardware techniques:


2018 was the most active year for the blog by a factor of about two; this is the first year I cracked 5,000 page views, 50 articles, and 100,000 words. The most popular articles this year were largely the same as last year, covering weird CGA modes and file formats and machine code linking on the ZX81. Sneaking into the top 5 was my article about the VSP Glitch on the C64, which seems to have rocketed above my other C64 articles thanks to a Reddit comment linking to it as an explanation for how Mayhem in Monsterland managed its high-speed scrolling.

The three articles I wrote this year that got the most views were the beginning and the end of my Atari 2600 project, and the post on the legacy sound chip on the Sega Genesis. All of them seem, again, to have risen above the other articles due to links in forums. As usual, though, Bumbershoot Software mostly works as a standing reference, and search engines drive more traffic than everything else combined by an order of magnitude.

Other Stuff

Off of my usual topics, 2018 was also interesting in that it saw the release of five games that were very different from one another but also targeted quite narrowly at my current gameplay interests. I can’t really rank them against each other for a top five list, so here they are in alphabetical order:

  • Celeste. I played a lot of Thorson’s early work—most notably the Jumper series—and while those were occasionally a bit rough-hewn I consider them foundational to the “challenge platformer” subgenre, which also includes games like VVVVVV and Super Meat Boy, but which distinguishes itself from “masocore” games like I Wanna Be The Guy by always honestly presenting what the current challenge is. (This measure, which I discussed as part of what “perfect play” means across genres, does mean that Limbo also stands with I Wanna Be The Guy despite having much more forgiving platforming challenges.) Celeste is an extremely well-polished challenge platformer and quite possibly the best example of the subgenre now extant. It achieves this through excellent controls and map design but also through accessibility—while much has been made of Celeste’s Assist Mode, even a player whose training and reflexes are a match for the intended design will find that the most punishing or abstruse stages are hidden behind clearly optional unlocks. I have observed that Super Meat Boy is in part about testing the player to destruction, and that as a result its plot and ending cutscenes and such are all extremely perfunctory. Celeste actually wants to tell a story alongside its challenges and it puts the more generally-inaccessible stages in places where no story is being told. It’s a very effective combination.
  • EXAPUNKS. Zachtronics games are, in effect, a series of programming challenges. I like them, but I often have trouble sticking with them because it’s hard to motivate myself to write assembly language programs for pretend computers when I could instead be writing assembly language programs for real computers. The earlier Opus Magnum avoided this fate for me by not being as obviously a programming exercise even though it was one (you schedule motions of mechanical arms to assemble alchemical compounds), but EXAPUNKS seems to avoid it by having a sufficiently exotic programming model. Commanding cute little spider robots to run rampant through a pretend network seems to be far enough from the retrocoding projects I actually do to keep the challenges from interfering with each other in my motivation.
  • La-Mulana 2. The original La-Mulana was an homage to the MSX generally and Konami’s Knightmare II: Maze of Galious specifically. However, Maze of Galious was in my opinion an unplayable mess, while (with a few exceptions) La-Mulana managed to be crammed full of tricks and secrets and still mostly work. It did this by (apparently unconsciously, since they’re open about their inspirations and didn’t list this one) lifting a lot of design aesthetics from Myst—overwhelm the player with information and have all of it be relevant to something eventually. This is then layered on top of fairly-traditional action-adventure exploration gameplay. When that game was refined and modernized for the Wii, the parts that were problematic in the design were polished away and the combat was rebalanced and generally improved. At that point it stopped being a quirky obscure freeware game and started being an interesting genre-jam game that didn’t get imitated. The sequel is in some sense more of the same, but since the original hasn’t been imitated since its release more of the same was very welcome.
  • Return of the Obra Dinn. This is a first-person adventure game in the Myst mold, but manages to evolve the formula there in meaningful ways. Standard Mystlike games tend to involve using information in the environment to bypass obstacles, which in practice often reduces to replacing “find a key somewhere and use it to unlock a door somewhere else on the map” with “find a combination written on a wall somewhere and use it to open a combination lock somewhere else on the map.” Sometimes you’ll have to build the combination out of a lot of disparate pieces—Riven and the La-Mulana series both excelled at this—but Obra Dinn evolves the formula by requiring more aggressive deductive work on the part of the player to get the answers required. Despite being technically just a new, improved twist on a classic game design, it’s been long enough since Riven that this game was received like a bolt of lightning from a cloudless sky. It’s not that good but it is very, very good—and if you didn’t play the old Myst and La-Mulana games, you may indeed have never seen anything like this before.
  • Yoku’s Island Express. This is a super-cheerful action-adventure game built around pinball controls, starring a dung beetle turned postmaster drafted into a plan to save the island’s local gods. Despite all that it remains relentlessly cheerful all the way through (you have a dedicated button to blow a party horn and possibly throw confetti around—while there are in-game reasons to do this you are free to deploy it at any point) and I found the difficulty to stay well within reasonable bounds. I’m not very good at pinball, but I was able to work my way through the game without too much trouble, and it also neatly avoided what I think of as the biggest problem with high-level pinball play—the path to high scores usually involves finding some technique that’s reasonably high scoring and that one can perform with extreme consistency, and then doing that thing for as long as your endurance and precision can hold out. Here, because you have actual plot objectives to accomplish and a cap on “score”—where a normal pinball game would grant bonus points, you get bonus money instead, and there’s a cap on how much money you can carry at a time—you are always encouraged to attempt sequences of more varied skill shots to progress. It’s an interesting case study in how mixing another genre of gameplay into a game can address shortcomings in the original genre’s gameplay.

Bumbershoot Software in 2019

2018 saw me complete my ambition of doing a software release for every platform I grew up with. As such, I don’t have any big pressing projects bearing down on me as things I really want to attack going into the new year. That said, this was the state I was in for most of 2018 too, and I still wrote more than I ever have and got several releases out to go with it.

As such, I’ll be walking into 2019 with no firm plans for the blog but confident that something interesting will ultimately come up. Onward we go!

Forcing an Aspect Ratio in 3D with OpenGL

OK, enough of Cocoa. Let’s go play with something else for a bit.

I’ve done three posts now on freely-scaled aspect-corrected 2D images.

So let’s do it in 3D today instead, for a change of pace.

An Initial Caveat

For most 3D applications, you really don’t want to do an aspect-corrected scaling system like this. Instead you should let the end user specify an FOV angle and then render to the aspect ratio that your drawing area has; the user will just get to see more or less of your 3D world, as needed. This technique is only really interesting if you’re only rendering a smallish region that can’t be meaningfully expanded, and thus need to maintain a 4:3 or 16:9 aspect ratio irrespective of the actual display size.

Restricting the Display in OpenGL

OpenGL is very convenient about letting us only render to part of the screen—the glViewport function lets us pick any fraction of the window’s rectangle to be the part we render to.

Actually getting the window size is OS-specific, but if you’re using SDL or SDL2 to manage your OpenGL context, it will handle that for you. We can compute the viewport we wish to draw in, in a manner similar to what we did with Cairo:

    int width, viewport_width;
    int height, viewport_height;
    int viewport_x = 0;
    int viewport_y = 0;
    SDL_GL_GetDrawableSize(window, &width, &height);
    viewport_width = width;
    viewport_height = height;
    if (width * 3 > height * 4) {
        viewport_width = height * 4 / 3;
        viewport_x = (width - viewport_width) / 2;
    } else if (width * 3 < height * 4) {
        viewport_height = width * 3 / 4;
        viewport_y = (height - viewport_height) / 2;
    }
    glViewport(viewport_x, viewport_y, viewport_width, viewport_height);

We can then prepare our OpenGL display as if it were any other 4:3 aspect-ratio display, but there is a small problem that may appear:


Despite the fact that we’ve set the viewport to only be part of the window, it turns out that glClear will clear the entire window. Normally we would want our pillarboxes or letterboxes to be a different color.

Clearing Just The Viewport

The solution for this in OpenGL involves taking advantage of the scissor test. This is a different clipping rectangle that can be turned on and off and is independent of the viewport itself. In this case, however, we want it to be the same as the viewport, and we set it by calling glScissor with the same arguments as the ones we computed for glViewport. It turns out that glClear doesn’t respect the viewport, but it does respect the scissor test if you turn it on. Thus, we get our two-color clear (black for the pillarboxes, bright blue for the viewport) with this sequence of GL calls:

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   /* black for the pillarboxes */
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_SCISSOR_TEST);
    glScissor(viewport_x, viewport_y, viewport_width, viewport_height);
    glClearColor(0.0f, 0.5f, 1.0f, 1.0f);   /* bright blue for the viewport */
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_SCISSOR_TEST);

This will give us the display we want.

About the Sample Screenshot

The sample code and screenshots here are from my final project from a graphics class I took back in 2004 or so; at the time it would force the screen resolution to 640×480 and its procedurally-generated terrain did not extend much past the viewable area. When I was experimenting with this pillarboxing technique, I was porting its window management code to SDL2, and that meant altering its fullscreen code so that it would still look right when the actual window was 1920×1080, or whatever other resolution the user’s desktop had.

Unfiltered Cocoa: Fit and Finish

The Mac CCA project is complete, and I’ve published it to the Bumbershoot Github account. We’ve talked through all the actual source code here, but we do still have some work to discuss before we put this project completely to bed.

Building the Binary

I’m using Make to manage my builds of these applications. I’ve talked about how to take best advantage of Make here before, so some of this is just building out a worked example.

We have a set of object files we need to link into the final binary, and all of them end in .o. Since we need to build Objective-C and C files slightly differently, we want to split those object files apart:

COBJS = CCA.o
OBJCOBJS = AppDelegate.o CCAView.o MainView.o main.o

Then we need to actually set our compiler flags. We’ve seen a certain amount of this already, but Apple’s compilers require a lot of special-purpose flags:

CFLAGS = -O2 -Wall -mmacosx-version-min=10.7 -Wunguarded-availability
OBJCFLAGS = $(CFLAGS) -ObjC -fobjc-arc

We covered -mmacosx-version-min and -Wunguarded-availability last time; the other two C flags are standard but we’ve not mentioned them before. -O2 is setting the optimization level to 2, which is about the fastest you can get your code before the compiler gives itself permission to start spending a lot of program size to make the code marginally faster. It’s a pretty standard “release-quality” level of optimization, and it’s aggressive enough that it’s usually quite possible to read out what the generated assembly language code is doing. Lesser levels of optimization generally end up drowning the actual work out with boilerplate that makes it easier for debuggers to identify where everything is. -Wall is turning on “all” the warnings, which is not actually all the warnings, as we saw with the unguarded-availability warning.

When we’re compiling Objective-C code, we need to pass in an extra flag -ObjC to remind it to actually compile it as Objective-C, and we also need to turn on the modern reference-counted runtime library with -fobjc-arc. Everything else gets inherited from our C flags.

That gives us what we need to teach it how to find our source files and turn them into object files and a final binary:

CCA: $(COBJS) $(OBJCOBJS)
	clang $(OBJCFLAGS) -o $@ $(COBJS) $(OBJCOBJS) -framework Cocoa

$(COBJS): %.o: %.c
	clang -c $(CFLAGS) -o $@ $<

$(OBJCOBJS): %.o: %.m
	clang -c $(OBJCFLAGS) -o $@ $<

We covered what’s actually being commanded here back in the original Makefile article. We’ll also want to tell it how to clean up after itself:

clean:
	rm -f CCA $(COBJS) $(OBJCOBJS)

And finally we have to tell it about all the cross-file dependencies. In my old Makefile article I touted the makedepend tool for this, and it seems like my Mac system actually has it installed, but this is such a small project it’s feasible to just write out the dependencies by hand:

CCA.o: CCA.h
AppDelegate.o: AppDelegate.h MainView.h CCA.h
CCAView.o: CCA.h CCAView.h AppDelegate.h
MainView.o: MainView.h CCAView.h CCA.h
main.o: AppDelegate.h

This is the Makefile I’ve been using to produce the “unfiltered” application that I worked through over the past few posts. If we want to build a full app bundle out of it, though, we need a few more files to back us up.

The Structure of a Mac Application

Back in the 20th century, Mac filesystems did not do the Unix or Windows thing where a file is a linear stream of bytes. Instead, each file had a linear stream of bytes in a data fork and kept most of its important data in a carefully formatted structure called the resource fork. When OS X moved the OS’s underpinnings to a Unix base, it inherited some of this. Instead of having an actual resource fork, though—the data/resource split was a source of a vast number of headaches over the preceding decades and it never really played well with things like Internet or even BBS file transfers—it constructs complex objects as directories that keep the old data or resources in well-defined locations. In this way a modern Macintosh application looks a great deal like a RISC OS application bundle, just with different files in different places.

  • The directory name must end in .app. Ours will be named CCA.app
  • That directory has one subdirectory, named Contents.
  • That subdirectory has subdirectories of its own named MacOS, Resources, and (possibly) Frameworks.
  • The actual binary image goes in the MacOS directory and has the same name as the top-level directory, minus the .app extension.
  • An icon file in ICNS format should be in the Resources directory.
  • If an application uses any frameworks that are not shipped with the OS the way Cocoa is, they should be provided in the Frameworks directory. Frameworks are basically app bundles but for libraries.
  • A file named Info.plist should exist in Contents, which provides the metadata for the program and also where to find the icons and executables and the like. It usually also points the application to the UI description file it’s supposed to load on launch.

Let’s take these in order. Directories are easy enough to create, and we’ve already built the application file, and we aren’t using any custom frameworks.

Creating Icons

The ICNS format that is expected for application (and App Store) icons is a custom format, but the iconutil program lets us swap at will between that format and a collection of PNGs in a directory with the .iconset extension. I took a screenshot of the application running, cut a circular hole out of it, and then scaled that to the many sizes Apple wants. The idea here is that a 16×16 icon probably shouldn’t just be a shrunken-down version of a 1024×1024 icon; it should instead be something that remains iconic even while it’s tiny. For us the CCA’s spiral is still just as visible down there, so I simply scaled it. I can then take that directory and produce the ICNS file with this command:

iconutil -c icns CCA.iconset

And this will produce the CCA.icns file to copy into CCA.app/Contents/Resources.

The Info.plist File

This holds all the metadata about the program, like the VERSIONINFO resource in Windows programs that lets you attach author/title information to executables. However, this isn’t optional on the Macintosh, because it actually specifies things like the name of the binary to run, the location of the icon files, and other such things. Because there are a lot of moving parts here, I actually just copied the template that Xcode provided for me in a new project and filled in the blanks. Part of that also meant deleting the NSMainNibFile key and its corresponding MainMenu value; that was the very file the Unfiltered Cocoa project was intended to let us evade.

Putting It All Together

When organizing this project, I put all the actual source code into a src directory and all of the graphical and metadata resources into a res directory. To assemble the application bundle itself, I created a new Makefile at the top level directory and told it how to make CCA.app and clean:

bindist:
	rm -rf CCA.app && \
	make -C src CCA && \
	mkdir CCA.app && \
	mkdir CCA.app/Contents && \
	mkdir CCA.app/Contents/Resources && \
	mkdir CCA.app/Contents/MacOS && \
	cp src/CCA CCA.app/Contents/MacOS && \
	cp res/CCA.icns CCA.app/Contents/Resources && \
	cp res/Info.plist CCA.app/Contents && \
	touch CCA.app

clean:
	rm -rf CCA.app && \
	make -C src clean

Most of this is just ordinary shell commands stuffed into where the invocations to the compiler would go. There are a few unusual things here that I haven’t shown previously, though:

  • Multi-line arguments are using backslashes to mark that the command continues onto the next line.
  • Multiple commands within a sequence are separated with &&, which means “continue on to the next command if the previous command exited successfully”.
  • We’re calling make within our makefile recipes here. The -C src argument asks it to travel down the directory tree and do this make command there. This functionality lets us split our work amongst modules in a reasonable way.
  • The final command in building the application bundle is to touch it. This makes sure that the directory itself is seen by the operating system as at least as recently modified as any of its components, and makes sure that it rescans it and thus updates any edits to icons when you view the application in Finder.
  • We’ve called the top-level build product bindist instead of CCA.app. We don’t want it to decide to do nothing just because the directory already exists, and this is the traditional name for this product (binary distribution) anyway.

What Is Left Undone

This is not a complete guide to creating a fully-formed Macintosh application from the command line. Here are what I’d consider the major remaining gaps:

  • I haven’t covered letting Xcode do all this work for you, scripting it from the command line as part of some larger build process. This is governed by the xcodebuild program, and is reasonably well-documented since it’s the workflow that Apple actually wants you to use in build farms and such.
  • I haven’t covered code-signing application bundles. Xcode normally handles this as part of managing your Apple Developer account, if you have one, and for processing code signatures after the fact or instead of Xcode, the codesign program is your main point of entry.
  • CCA didn’t ship with any libraries or frameworks. There is a lot of work you have to do to make shipped copies of libraries or frameworks continue to work no matter where the end user happens to install your application bundle. The install_name_tool program lets you manipulate the binaries after copying them, but this is a large and extremely painful topic, and the one most likely to bite open-source developers hoping to get Macintosh releases for their notionally platform-independent software. I should probably write a proper article on this at some point, but I’ve just finished wrangling this for the VICE project and if I wrote it now it would just be several thousand words of despairing rage-froth. Maybe later next month.
  • Macintosh applications are traditionally distributed as mountable disk images, and I haven’t covered how to do that. The hdiutil program manages this, and its (terrifyingly extensive) manpage includes some explicit example commands for producing space-optimal disk images from a start directory. The only caveat is that if you’re doing this on macOS 10.13 or later, remember to pass in the -fs HFS+ argument when creating the initial image, or it will create an APFS-based disk image that cannot be read by any version of macOS older than 10.13!
  • Some mountable disk images have custom extra graphics as part of their drive icon or even decorate the “root directory” window when it’s opened in Finder. I haven’t covered that because I’ve never actually done it before; I’d have to go research how it works.

At this point, I think I’ve done what I wanted to do with this project, and I’m going to move on to something else now. But the end result is at least a handsome graphical demonstration, and it’s not noticeably out of place on any system it’s compatible with.

Unfiltered Cocoa: Coding for Compatibility

Over the course of this series, I’ve been doing my development using Xcode 10—the 10.14 Mojave SDK. However, I haven’t really been going out of my way to use any recent innovations in the macOS SDKs. In fact, the one really big innovation here—content view controllers for NSWindow—I went out of my way to avoid. It should be simple enough to just target an earlier version and recompile it. We can change our target to 10.10 Yosemite simply by adding a flag -mmacosx-version-min=10.10 to our CFLAGS. It even builds without warnings! Unfortunately, if we were to attempt to actually run the result on a Yosemite machine, our program would crash instantly, because it turns out that it’s not an error or even a warning by default to use APIs that were introduced after your deployment target.
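Concretely, assuming the compiler flags live in a CFLAGS variable as in the earlier posts, the change is one line in the Makefile:

```make
# Target 10.10 Yosemite; this compiles cleanly, but read on for the catch.
CFLAGS += -mmacosx-version-min=10.10
```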

At first glance, this seems howlingly insane. In fact, it does on second and third glances as well. The only reason they can get away with it is the way Objective-C objects work. Unlike in Java or C++, compiled Objective-C code can attempt to invoke any method on any object, so calling a method that doesn’t exist produces code that makes sense and won’t fail until runtime. When it does fail, you get very detailed information about which class, in fact, did not have which method. The runtime also has support for checking whether a method exists, or even for installing an implementation of it into a class, before making the call—and it is these facilities that explain why they didn’t warn you. Prior to 10.13, the assumption was that if you tried to call a method that didn’t exist yet, you’d done some work elsewhere to make sure the call would succeed, and worked around it otherwise. The problem with this is that the compiler can’t really tell that you’ve done this without understanding not just the syntactic structure of your code, but its semantics at a relatively deep level.
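One such facility is respondsToSelector:. As a hedged sketch (not code from this project), the old idiom for guarding a too-new API looked something like this:

```objc
// Sketch of the old-style runtime guard (not code from this project).
// Sending respondsToSelector: to a class object asks about class methods.
if ([NSTimer respondsToSelector:
        @selector(timerWithTimeInterval:repeats:block:)]) {
    // The newer block-based API exists on this system; safe to call it.
} else {
    // Fall back to an API that has been there all along.
}
```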

The 10.13 SDK added a new set of directives to Objective-C for doing version-checking in a way that the compiler can trivially detect. These are provided by the @available keyword, and because it didn’t exist until 10.13, the default behavior of clang is to only warn about using APIs that were introduced in 10.13 or later. The assumption is that pre-existing code shouldn’t suddenly acquire a ton of new warnings, and that any such warnings would be false positives, because you’d have been doing the checking in ways the compiler couldn’t easily detect or confirm.
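As a sketch of how that looks in practice, here is how the block-based timer call this post deals with below could be guarded (the author ultimately avoids the guard entirely by using the older form everywhere):

```objc
if (@available(macOS 10.12, *)) {
    // Compiler-visible guard: the block-based API is safe in this branch.
    self.timer = [NSTimer timerWithTimeInterval:0.05 repeats:YES
                                          block:^(NSTimer *timer) {
        /* the per-tick work goes here */
    }];
} else {
    // Fallback for older systems, using the target/selector form.
    self.timer = [NSTimer timerWithTimeInterval:0.05
                                         target:self.displayWindow.contentView
                                       selector:@selector(tick:)
                                       userInfo:nil
                                        repeats:YES];
}
```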

We can turn on out-of-deployment-version checking for all versions by adding the -Wunguarded-availability flag to our CFLAGS. In Xcode, this is filed under “Warnings – All languages” under the name “Unguarded Availability”, and in a truly impressive feat of user interface design, you need to change it from “Yes” to “Yes (all versions)” for it to actually behave the way we want. (“No” disables it even when the new calls are into the 10.13 or 10.14 SDKs.)

Anyway, once we’ve turned that warning on, we can set the deployment target to earlier versions and start refining away any APIs we’ve used that are too modern. Actual recommended development practice appears to be that you should support older systems by literally implementing everything multiple times, but I refuse to do this—if they’re not going to emulate the new behavior against the older systems, but will continue to run the older systems’ APIs untouched in later versions, I’m just going to use the older systems’ APIs in the first place, or drop support for that old version.

As I’d mentioned in the first Unfiltered Cocoa post, there are two major breakpoints in macOS where the APIs changed dramatically: 10.7 brought in the memory management system (ARC) and the widget placement logic (Auto Layout) that is still used today, and 10.10 unified a lot of the class logic across iOS and macOS. So while I was writing for 10.14 with no particular thought to backcompat beyond not using content view controllers, we should be able to eventually run on 10.7 without compromising anything. We will use 10.10 as a waypoint, however, to make sure no other too-contemporary idioms crept in.

A Trip to Yosemite

By adding -mmacosx-version-min=10.10 -Wunguarded-availability to our Makefile, we can check to make sure we’re not using anything from 10.11 or 10.12. (10.13 and 10.14 were checked for us by default, after all.) Now, we already knew that we were using something from 10.12—our window style flags were introduced in 10.12, when the older equivalents were deprecated. However, that doesn’t come up in our warnings, because those were different names for exactly the same constants and the 10.14 SDK produces the same code from both. So it doesn’t care about that, and that wasn’t what was causing our crash if we tried to run it on a Yosemite machine. That crash was caused by this warning:

AppDelegate.m:66:27: warning: 'timerWithTimeInterval:repeats:block:' is only
      available on macOS 10.12 or newer [-Wunguarded-availability]
  ...timerWithTimeInterval:0.05 repeats:YES block:^(NSTimer *timer) {
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/System/Library/Frameworks/Foundation.framework/Headers/NSTimer.h:23:1: note: 
      'timerWithTimeInterval:repeats:block:' has been explicitly marked partial
+ (NSTimer *)timerWithTimeInterval:(NSTimeInterval)interval repeats:(BOO...
AppDelegate.m:66:27: note: enclose 'timerWithTimeInterval:repeats:block:' in an
      @available check to silence this warning
  ...timerWithTimeInterval:0.05 repeats:YES block:^(NSTimer *timer) {

Yes, despite the fact that blocks were added to the Objective-C language and runtime in 10.6, the APIs that use them were added in 10.12. We’ll just have to use the older method of explicitly naming a target and method:

    self.timer = [NSTimer timerWithTimeInterval:0.05 target:self.displayWindow.contentView selector:@selector(tick:) userInfo:nil repeats:YES];

We already had a tick method inside our MainView class, but now it needs to take an NSTimer as an argument and do the extra work that was done inside the block in the first place:

- (void)tick:(NSTimer *)timer {
    if (timer.valid) {
        [self.ccaView setNeedsDisplay:YES];
    }
}

Now, Clang does not complain about our use of constants like NSWindowStyleMaskTitled, even though they too were introduced in 10.12. However, now that we’ve set the deployment target below 10.12, it does stop complaining that the pre-Sierra equivalents like NSTitledWindowMask are deprecated. So I’ll just change those back to the old names too:

    NSUInteger windowStyleMask = NSTitledWindowMask | NSResizableWindowMask | NSClosableWindowMask | NSMiniaturizableWindowMask;

The reset button gets some similar changes:

    [resetButton setButtonType:NSMomentaryLightButton];
    [resetButton setBezelStyle:NSRoundedBezelStyle];

With these changes, the result should not only run on Yosemite-based Macs, it should even compile with the 10.10 SDK.

The Lion’s Share

With that out of the way, let’s now turn the clock back to 10.7 Lion. This gets us quite a few new warnings, but they boil down to three fundamental issues:

  • The NSGestureRecognizer API we’re using to detect clicks on the CCA display wasn’t added to Cocoa until 10.10.
  • Likewise, the ability to activate and deactivate layout constraints in bulk or independently of the view hierarchy didn’t exist until 10.10.
  • Our constraint code is also using an Objective-C array literal. Those were not added to Objective-C until 10.8 Mountain Lion, but they place no demands on the runtime, so even if our code can run on Lion, we’ll need Mountain Lion to actually compile it. At this remove, I am no longer concerned by this. We don’t even get a warning when we use a modern SDK, anyway.

We’ll cover these in turn.

Refactoring Away the Click Recognizers

NSGestureRecognizer is pretty much overkill for what we’re working on here. Its iOS equivalent is much more handy, because a lot of custom displays are going to want to react to things like multi-tap, drag, throw, pinch, etc., and these are all built out of a lot of primitive events. In the desktop world, the only gesture we really need to build up out of a state machine is a double-click. And we don’t even need that: we’re reacting to a normal single click here, which means we can simply react directly to the mouse button being released. The NSView class provides a series of methods that we can override to intercept various events as they happen, and so we can replace our click recognizer with three lines in CCAView:

- (void)mouseUp:(NSEvent *)event {
    [self resetModel];
}

A spoilsport who read the previous post more carefully, however, might recall that resetModel was part of the main view for the application and not the display itself. This ended up being the largest change I had to make for this compatibility project, as I refactored a bunch of the model-editing code to live within the lowest-level view. I ultimately decided to make the views visible to the full application so that the menu bar could reset it as well—the alternative would be to have the menu bar have its own code for calling functions that modify the CCA model. (After all, the model itself belongs to the application, and the views only borrow it.) I’ll take overly public APIs over relying on aliasing.

Fixing the Constraint Code

This is pretty simple. Constraints know which two views they constrain, and starting in 10.10 you could just turn them on and off directly. Under the hood, a constraint is still stored in, and solved by, some actual widget container—turning it on or off meant adding or removing it from the closest shared superview of the constrained objects. In 10.7, we have to do that work ourselves.

That’s incredibly easy for our case, because our widget hierarchy is two deep. We have MainView and we have its contents. All our constraints are added to or removed from it:

    for (NSLayoutConstraint *constraint in self.constraints) {
        [self addConstraint:constraint];
    }

Or removeConstraint:, as needed.

Smaller Cats?

We now have a version of the Cyclic Cellular Automaton that runs on any Mac from 10.7 on and the whole executable even fits in 48 kilobytes. That’s a pretty good result, and we got it without sacrificing anything at all of importance. The closest thing to a sacrifice was the gesture recognizers. For an application this simple, we could in principle go even further back, but we’ll have to start making more significant sacrifices to do so:

  • To go back to 10.6, we’ll have to abandon Auto Layout. Our program is small enough that we can probably get away with the older “springs and struts” model or even direct manual layout, but that’s a rewrite of a lot of the code.
  • Going back to 10.6 also drops OS support for Automatic Reference Counting, which we’ve been relying on pretty heavily here. However, 10.7 SDKs and later include an “ARClite” static library that provides most of this for us. The main thing missing is support for weak references, and we got rid of our only weak references when we dropped our dependencies on 10.12.
  • Going back before 10.6, however, will require us to disable ARC and rely entirely on manual memory management with retain and release. That would, however, also let us build with the 10.6 SDK.
  • We only rely on one API call introduced in 10.6, so if we did all the work above we could also support 10.5 simply by deleting that call. The call we’d need to delete is [NSApp setActivationPolicy:NSApplicationActivationPolicyRegular];, which turns out to be necessary if we want to run a properly-behaving Cocoa app as if it were also a Terminal program. It’s completely optional if we insist on shipping as a normal app bundle.
  • We can’t take our binaries back before 10.5, though. 10.4 did not support 64-bit GUI applications, and 10.14 insists on them. While one could imagine building a “fat binary” to cover those cases, that means building two executables and pasting them together. I’m going to say that doesn’t count.
  • Going back before 10.5 SDKs is a fool’s errand. 10.5 introduced Objective-C 2.0, which means a lot of core language constructs like object properties and looping through collections are no longer available. Even verifying that I’d succeeded in doing this would require access to build tools I haven’t had access to in nearly ten years.

So for now, I think, this is the stopping point for compatibility. We’ve taken an application that behaves like it was built with the latest and most up-to-date development tools, and that would look only slightly amiss if you peeked into its app bundle. But that app bundle is completely optional, and the program runs without doing any compatibility checks on any Mac released after the introduction of the iPhone.

By the standards I’ve been applying to my retro projects, that’s a result I’m pretty happy with.

Of course, we do still need to create an app bundle if we want to get a program that’s easy for end users to actually use. We’ll wrap this series up next time with that.

Unfiltered Cocoa: Powering A Custom Widget

Last time we worked through the general principles of divorcing a macOS application from all of the ancillary files and directories that are traditionally part of it. This time I’ll be doing a detailed walkthrough of a complete application that actually does something. It’ll be pretty small, but it will at least use Cocoa in a way that is properly recognizable as A Small Program.

No, not Lights-Out this time. I’ll be revisiting the cyclic cellular automaton we last saw on the Sega Genesis.

In this post and the next I’ll be talking my way through the 400 or so lines of code that produce this application, both to show each bit of it and as a sort of quick tour of the Objective-C programming language in practice. At the end of the day that means this will be about as riveting reading as my last complete program walkthrough, but like that one, how riveting it is will depend on who you are and where your headspace currently is. I’ll be trying to explain all the constructs I use as I go along, so if you don’t have any experience with C or Objective-C it should still be possible to follow along, and it should also serve as a bit of a guide to how a modern program is formed.

Still here? Great. Let’s go. We’ll start with the platform-independent code for the automaton, then build the display code for it for macOS. Next time, we’ll take that custom widget and bind it up into a proper application.
