
GTK3: Aspect-corrected Image Scaling with Cairo

GTK3 has pretensions of multiplatform support at this point, but its primary purpose has always been to be a core widget kit for the GNOME project. (Earlier versions of GTK had an X11 focus, but it started life as an ad-hoc toolkit for a paint program.) Part of losing that original X11 focus, however, involved using more generic mechanisms for painting and drawing. It turns out that a 2D rendering library for X11 called Xr similarly generalized its scope, becoming Cairo. (Xr → Χ ρ → Chi Rho → Cairo. Welcome to programming, puns are mandatory.) GTK started moving to use Cairo for its rendering fairly early in the 2.x development timeframe, but it was always an optional mechanism. In GTK3 it’s mandatory, and it’s the mechanism for painting all widgets.

For VICE, that means Cairo should be able to paint the emulated computer screen as well. Last time we showed how it’s done with OpenGL—how do we do it in Cairo?

How Cairo Thinks About Drawing

Cairo is mostly vector-based. You create a vector-based mask to define a shape, a source pattern to fill that shape with, and then those are transferred to the target surface. Actually setting up the drawing is very similar to the process in other abstract 2D graphics libraries—in particular, I’ve done custom drawing with Java Swing and with the drawing canvases on both Android and iOS, and the basic calls all look quite similar—but Cairo’s insistence that you only touch the canvas by, essentially, spray-painting through a stencil felt a bit alien.

Since everything’s vector-based, though, Cairo lets you use transform matrices to warp displays before you actually start the spray-painting. This is very similar to OpenGL’s transform matrices, but since it’s two-dimensional, the vectors have three components and the matrices are 3×3 instead of OpenGL’s four-component vectors and 4×4 matrices.

And since scaling a rectangle is exactly the problem we’re facing here, that’s just what we need.
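
Here is a minimal sketch of the idea in Cairo’s C API; the function and parameter names are my own for illustration, not anything from VICE:

    #include <math.h>
    #include <cairo.h>

    /* A minimal sketch: scale an image surface so it fills as much of a
     * widget_w-by-widget_h area as possible without distorting its aspect
     * ratio, centered with letterboxing or pillarboxing as needed. */
    static void draw_scaled(cairo_t *cr, cairo_surface_t *src,
                            double widget_w, double widget_h,
                            double image_w, double image_h)
    {
        double scale = fmin(widget_w / image_w, widget_h / image_h);

        cairo_save(cr);
        /* Center the scaled image inside the widget... */
        cairo_translate(cr, (widget_w - image_w * scale) / 2.0,
                            (widget_h - image_h * scale) / 2.0);
        /* ...and warp the coordinate system so the image fits exactly. */
        cairo_scale(cr, scale, scale);
        cairo_set_source_surface(cr, src, 0, 0);
        cairo_paint(cr);
        cairo_restore(cr);
    }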

How GTK3 Thinks About Cairo

If you want to draw custom graphics in a widget, the easiest way to do this is to use the GtkDrawingArea class and attach a handler to the “draw” signal. This will provide you with a pre-configured Cairo rendering context through which you may draw your display.
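
As a concrete sketch of the wiring (the names here are hypothetical, not VICE’s actual code), a GTK3 draw handler looks something like this:

    #include <gtk/gtk.h>

    /* The "draw" signal hands the handler a cairo_t that is already set up
     * and clipped to the widget's visible area. */
    static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
    {
        /* Clear the whole area to a neutral dark grey... */
        cairo_set_source_rgb(cr, 0.2, 0.2, 0.2);
        cairo_paint(cr);
        /* ...then paint the emulated screen, for example with the kind of
         * scaled-surface drawing sketched above. */
        return FALSE;   /* FALSE lets GTK's default processing continue */
    }

    static GtkWidget *make_canvas(void)
    {
        GtkWidget *canvas = gtk_drawing_area_new();
        g_signal_connect(canvas, "draw", G_CALLBACK(on_draw), NULL);
        return canvas;
    }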


GTK3: Aspect-corrected Image Scaling With OpenGL

It’s past time I wrote a bit about the work I’ve been doing for the VICE project in detail.

Most of it is drudgework, really; there are a large number of events that the emulation core can send to the UI, and the UI has to make them visible to the end user. Likewise, the end user will interact with the system, and the UI has to translate those actions into calls that the emulation core can understand. Most of that is finicky but also very ad-hoc and not terribly generalizable.

That said, there are still a few techniques that have come up that are simple enough to fit in a blog post, complex enough that they aren’t totally free, and niche enough that there aren’t a billion writeups of the techniques involved already.

Here’s one of them.

The Problem

Here’s what VICE looks like while it’s running:

[Screenshot: GTK3_NTSC]

We are interested in the primary portion of the display; the large widget in the middle of the application that actually displays the Commodore 64’s “screen”. The actual data is constructed, pixel by pixel, by the emulator core and put into a memory buffer. Our job is to make use of it. The basic requirements are pretty simple:

  • All of the pixel data must actually go to the screen.
  • The pixel data should thus imply a minimum window size, so that everything generated fits.

The rest of the requirements make things a bit more exciting, though:

  • The window should be able to be resized arbitrarily as long as the minimum window size is respected.
  • The emulated display should be as large as it can be while still fitting within the current window size.
  • The emulated display should preserve its aspect ratio within the application window as a whole, letterboxing or pillarboxing as necessary but without constraining the shape of the application window itself. (This is a relatively modern requirement, but is necessary to correctly handle “fullscreen” applications. In earlier years, one went fullscreen by actually altering the display’s resolution. This was easy for applications to code for but produced a lot of havoc on desktop icons, any other applications that might have been running, and any kind of multi-monitor system. The rise of ubiquitous GPUs—and the development of techniques like the one in this article—altered expectations so that a fullscreen application is equivalent to a single window on the desktop that covers the entire desktop.)
  • The displayed image should preserve the fact that the 8-bit computer’s pixels weren’t square.
  • The minimum window size should adjust itself so that it’s just large enough to contain the screen of non-square pixels, despite the fact that the window’s own pixels are still square.

That’s a much longer list, but an awful lot of it boils down to making sure that the machinery of the UI interacts sanely with the rendering context. From the point of view of the rendering context, it only sees three tasks:

  1. We have a rectangle to draw the display into. Start by clearing it to some nice neutral color like black or dark grey.
  2. Draw another rectangle inside of it.
  3. Have that rectangle take its colors from some rectangular array of pixel data.

Everything in our list of requirements boils down to determining the size of that second rectangle.
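
As a sketch of that determination (a hypothetical helper, not VICE’s actual code), the arithmetic is just a couple of divisions and a minimum, with the pixel aspect ratio folded into the image’s effective width:

    #include <math.h>

    /* Find the largest aspect-correct rectangle that fits in a win_w-by-win_h
     * window, given an image that is img_w-by-img_h emulated pixels and a
     * pixel aspect ratio (width/height of one emulated pixel) of pixel_aspect. */
    static void fit_rectangle(double win_w, double win_h,
                              double img_w, double img_h, double pixel_aspect,
                              double *out_w, double *out_h)
    {
        /* The emulated screen is img_w * pixel_aspect "square pixels" wide. */
        double effective_w = img_w * pixel_aspect;
        double scale = fmin(win_w / effective_w, win_h / img_h);

        *out_w = effective_w * scale;   /* pillarboxed if height is the limit */
        *out_h = img_h * scale;         /* letterboxed if width is the limit */
    }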

The Old School

When I was working on The Ur-Quan Masters we faced the same issue, but in 2001 most people’s graphics cards didn’t work the way modern ones do—they were more like specially-designed circuits built to render certain kinds of scenes, and they needed to be tricked into actually doing the work we wanted.

The general technique was to set up the camera and the projection transforms so that they represented a space the size of the screen to display, without any real notion of distance (that is, it was an “isometric” instead of “perspective” transform). One would then disable lighting computations and render two flat matte-white triangles that created that internal rectangle. The older graphics pipelines had a notion that these triangles could be “textured” with a repeating, small pattern that would look like grass or brick or similar. We would load the screen we wanted to display into one enormous texture and then map it to the rectangle such that it does not repeat at all.
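
In old-style OpenGL terms, the whole trick looked something like the following sketch, which is illustrative rather than the actual Ur-Quan Masters code:

    #include <GL/gl.h>

    /* Fixed-function-era rendering of one screen-sized texture: an
     * orthographic "camera" the size of the window, lighting off, and a
     * two-triangle rectangle whose texture coordinates run 0..1 exactly
     * once so the image does not repeat. */
    static void draw_screen_texture(GLuint tex, int win_w, int win_h,
                                    int rect_w, int rect_h)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, win_w, win_h, 0, -1, 1);   /* flat; no perspective */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glDisable(GL_LIGHTING);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_TRIANGLE_STRIP);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)rect_w, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (GLfloat)rect_h);
        glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)rect_w, (GLfloat)rect_h);
        glEnd();
    }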

The Modern World

Even in 2001, though, change was in the wind. Graphics cards were moving from, essentially, ASICs designed to do perspective transforms of textured triangles with hardcoded lighting and fog equations into massively parallel vector processors that graphics developers would program directly. It would take another five years or so before this became ubiquitous, but once it did it became very ubiquitous. Despite being more generic, it also ultimately ended up being more consistent across both low-end and high-end devices.

The best treatment of how modern graphics programming works is Learning Modern 3D Graphics Programming by Jason L. McKesson. It makes a deliberate decision to ignore the history of the field, treating the entire fixed-function rendering pipeline of the late 1990s and early 2000s as a giant misstep. That’s probably the best way to learn it, honestly, but I spend a lot of time on Bumbershoot Software looking at how the past relates to the present, so…

The earlier rendering techniques used two transform matrices to represent the world and the camera, and optionally included additional color and texture information associated with each vertex. All of these operations are collapsed into a single program called the vertex shader, which consumes arbitrary arrays of information and outputs new arrays, one of which is the actual final location of some vertex.

Once the vertices are computed, they’re formed into polygons and filled in with the actual pixels to display. (OpenGL calls them fragments because they technically might not actually be pixels, but in practice people seem to be pretty casual about the distinction.) The results from the vertex shader are thus fed to the fragment shader, which takes the values computed in the previous phase and uses them to determine what color should be output at that point. (There are many pixels for each polygon, of course. Shaders can indicate which values should be treated as constant for a polygon and which will be interpolated across the face of it.) Fragment shaders can be used to directly create a bunch of effects that, in the early 2000s, were usually achieved by precomputing a bunch of textures and relying on texture blending operations to produce the desired result.

This is a major oversimplification of the full capabilities of our modern graphics APIs—in particular, I’m completely ignoring the ability of shaders to generate new geometry on the fly, and the way GPUs may be used to perform arbitrary computation on their own—but these are the bits we need.

Our general strategy is also quite similar to the one we used on the older hardware, but without nearly as much tapdancing:

  • The geometry we submit will be two triangles that form a rectangle, as before, but they will be implied to cover the entire rendering area.
  • The vertex shader will scale this rectangle to the proper shape while leaving it centered.
  • The fragment shader will simply perform the texture lookup and return that as the color for that point.

The hardware’s more powerful, and part of that additional power is that we get to ask it to do less work. I’m completely on board with that.
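
To make that concrete, here is a sketch of what such a vertex/fragment pair might look like: hypothetical GLSL embedded as C string literals, not VICE’s actual shaders.

    /* Vertex shader: the incoming geometry is a full-area rectangle
     * (two triangles covering -1..1 on both axes); a uniform scale
     * shrinks it to the aspect-correct rectangle, leaving it centered. */
    static const char *vertex_src =
        "#version 150\n"
        "in vec2 position;\n"
        "in vec2 tex_coord;\n"
        "uniform vec2 scale;\n"          /* e.g. (0.8, 1.0) to pillarbox */
        "out vec2 uv;\n"
        "void main() {\n"
        "    uv = tex_coord;\n"
        "    gl_Position = vec4(position * scale, 0.0, 1.0);\n"
        "}\n";

    /* Fragment shader: just look up the color of the emulated screen's
     * texture at the interpolated coordinate. */
    static const char *fragment_src =
        "#version 150\n"
        "in vec2 uv;\n"
        "uniform sampler2D emulated_screen;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    color = texture(emulated_screen, uv);\n"
        "}\n";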

Below the fold, we’ll delve into the details.


Compatibility Across ZX Spectrum Variants

The Spectrum is a new platform for me, and that means that as is traditional, I’ve ported Lights-Out to it. The final tape file weighs in at 1,050 bytes, which is the largest 8-bit implementation I’ve done to date, but that extra space is being used to buy something. One of those things is obvious:

[Screenshot: spectrum_lights_out]

This is my first implementation that actually makes use of custom graphics of any kind. The default graphics characters for the Spectrum are less comprehensive than even the ZX81’s, but defining a certain amount of custom graphics is painless, and so that is clearly what developers are expected to use.

The second thing can’t be shown on screenshots, but also accounts for some of the space used. Despite the fact that the two machines are broadly incompatible, this program will run with no modifications and no model-detection code on both the 16KB Spectrum and the Timex Sinclair 2068. (By virtue of running on the 16KB Spectrum it also runs on the rest of the computer line, because within that line things stayed pretty consistent. There are a few ways you can go wrong, but not many.)

I’ve already covered the general design of this program, and the Spectrum/Timex port is really just an expansion and adaptation of the ZX81 port. I’ve uploaded the source code and updated the Lights-Out Collection download to include this version too.

Below the fold I’ll talk about the changes that had to be made to move from ZX81 to Spectrum, and the compatibility restrictions that permitted a 100% machine code program to run on the Spectrum and the TS2068.


SpectraLink: Creating tape files from scratch

Last time, we created a self-loading and auto-running BASIC/ML hybrid program and saved the combination out to tape. We built our program in the emulator using ordinary BASIC commands. That’s the most painless workflow yet for making a machine-code program with period tools—at least with what we’ve explored here.

But it’s 2017. We want to have cross-development workflows that don’t require us to manually fire up an emulator and mess around with memory injection and hand-written BASIC programs. Let’s get this up to speed.


Getting Started With the ZX Spectrum

It’s time to revisit an old friend.

[Screenshot: spectral_sorcery]

Well, that’s a lie. The ZX81 wasn’t really an old friend in the first place, and as we can see above, this isn’t a ZX81—we’ve got not just mixed-case text but working exclamation points!

This is its successor, the ZX Spectrum. This machine never reached American shores, unless you count the spectacularly ill-fated Timex Sinclair 2068, which you shouldn’t. The TS2068 had a completely different RAM layout and ROM system, which in turn meant that basically no software ran on it unless it was 100% BASIC, and maybe not then.

But the Spectrum was very well-beloved elsewhere in the world, and had many clones, and is a much cleaner platform for experimenting with Z80 assembly code than most of my other options.

If you want to work with these yourself, FUSE is the premier Spectrum emulator overall, while EightyOne, the ZX81 emulator I had recommended for Windows, also covers the primary Sinclair line.

In this post, I’ll be outlining what it takes to produce a machine code program for the Spectrum, and how to mix it with BASIC. This is a lot different than the systems we’ve looked at previously, to the point that it almost feels like this is the first system we’ve looked at that actually intended hybrid BASIC/ML code as a common use case.


Odds and Ends

Two things of note, neither of which is portentous enough to make into its own post:

  • This has been making the rounds on the Internet for quite a while now, but in case you haven’t seen it yet, GameHut’s Coding Secrets playlist is full of explanations of demo-like effects that were used on the Sega Genesis/Sega Mega Drive. If you’ve been enjoying reading about the techniques I’ve been finding here, that playlist is extremely relevant to your interests. (As for why now, it appears that one of the developers for Sonic 3D Blast is going in and making a Director’s Cut version of it as a ROM patch, and has been sharing relevant bits as he goes.)
  • WordPress, or at least the theme I’m using on it, appears to have a glitch where escaped HTML entities get unescaped when you edit the post they’re in. This has resulted in a number of older posts that needed minor corrections or edits getting their code snippets corrupted, as comparison operators were taken as wacky HTML tags and edited out. I’m in the process of reviewing my old posts and making sure any damage has been fixed, but as a rule, code in the Github repository is the final source of truth.

Perfect Play Across Genres

(This is another “from the archives” post, republished from an earlier, smaller distribution. I’d built this out of a series of discussions I’d had on IFmud and similar places with people who mostly played in these genres.

I haven’t updated the text of it much, and of course the sample size is restricted to the folks I talked to. The authoritative tone of much of this article should perhaps be taken with a grain of salt. —McM)

In the excellent and insanely detailed article Shmups 101: A Beginner’s Guide to 2D Shooters, we find the following uncontroversial claim:

Theoretical Perfection: Perhaps the single most important quality for any respectable shmup to possess: it should be technically possible for a player to make a “perfect” run through the game, without getting hit even once. Put another way, there should never be spots where eating damage is 100 percent unavoidable—no matter the situation, your raw skills should always be sufficient to get you through if you’re good enough. Of course, only a select few gamers actually are that good, but this ideal MUST be legitimately attainable: failing to tie up this crucial loose end during development is guaranteed to hamstring any shooter, no matter its strengths in other areas.

It seems like this has a lot of resonance in other genres, too, and takes different forms:

Perfect Play Should Be Possible

  • Platformers: Take this condition unmodified. A sufficiently good player should theoretically be able to finish without taking damage. (Health bars are native enough to platformers that this may be better phrased as “without dying”—nonlethal avoidable damage would be more likely to be acceptable in a platformer if the concept exists.)
  • Fighting Games: Deterministic damage model. Randomized mechanics are anathema to serious fighting-game players, who want their competitions to be purely skill-based, not decided in any way by die rolls. This is the predominant reason the Smash Brothers games weren’t taken seriously at fighting game tournaments for many years.
  • Interactive Fiction/Adventure Games: Much like the Fighting Game version, no random elements capable of deciding the game. Random elements may be acceptable, but there must exist some strategy that, properly applied, will win the game 100% of the time. This requirement is automatically met by puzzlebox games, and in a very real sense, in any game that has nothing isomorphic to randomized combat. However, even with randomized combat, it’s still possible to meet this requirement.

I would now want to add some variants to this, because when these are violated some group of even hardcore players gets turned off.

A Priori Perfect Play: “Perfect Information”

This is a known consideration, but does not hold nearly as strongly. The idea here is that the game provides perfect information to the player, so that perception, reflex, and execution are the only things tested. Violating this requirement makes the game—at least on a first playthrough—more of a mind game. These usually end up being specific subgenres.

  • Platformers: If the player is not in danger, they can see a way to safely proceed. No invisible deathtraps, no leaps of faith. Invisible hazards may be acceptable if it is possible to guarantee their nonlethality via some strategy and there is some cue that the strategy should be applied. (Violating this is an entire subgenre: “masocore”, of which Kaizo Mario World and I Wanna Be The Guy are the most famous, and the Karoshi games were the best.)
  • Shmups: Memorization should be effectively unnecessary. This is arguably too strong—familiarity will help in any game with scripted components, which is all of the shmups of note. However, this is pretty close to the platformer requirement. If a path branches, both branches should be ultimately viable, or it should be possible to switch branches after it becomes clear that one is not viable after all. (Violating this is the “memorizer” subgenre, of which R-Type is the best known example.)
  • Fighting Games: This would require that every move have a unique prefix animation, or at least that all moves sharing a prefix animation have identical counters. There isn’t a name for fighters that do this—I’m going to call them “reflex fighters” because they rely entirely on the reflexes and perception of the players. Unlike the other examples here, violating this requirement makes you mainstream—modern fighters deliberately have moves with different counters share the same prefix as a way to simulate feinting, or, more directly, to add a mechanism by which two competitors can psych one another out. While fighting game enthusiasts loudly proclaim their fealty to “pure skill tests”, reflex fighters are considered a bridge too far—they move parts of the game they like out of the design space.
  • Interactive Fiction/Adventure Games: Winning the game should not require exploiting knowledge from “past lives”. (Violating this is an entire subgenre: “Accretive PC” games, of which Varicella may be the most famous example.)

Basic Toolkit Perfect Play

Perfect play should be possible using only the most “basic” of the game’s mechanics. This is not always considered a feature, because it by definition is making some skill tests totally optional, and sometimes that means removing them entirely.

  • Fighting Games: No special move should be counterable only by another special move; no matter what strategy your opponent takes, it ought to be theoretically possible for a sufficiently skilled player to win using only the basic toolset of basic attacks, defenses, dodges, and throws. (This is more or less true for all fighting games, under equal circumstances. Special abilities may be better at it, and if a player has been trapped by his opponent, specials may be his only way to break out of it, but if you’ve gotten into your opponent’s head, there’s no need for anything fancy to win.)
  • Platformers: This is generally automatic. Most platformers—and in particular most challenge platformers—only have basic mechanics, and they are always available. When this is not the case, powerup-free runs should always be possible.
  • Shmups: In addition to “dying should be optional”—the basic perfect play criterion—bombing and powerups should also be optional. Guaranteeing this is usually how you guarantee the perfect play condition.
  • RPGs: No system mastery traps; any “sensible” build should be capable of beating the game. This is the genre where it’s most likely to not be considered a feature, because by definition it means that optimization doesn’t pay off as much as it could. Even so, games that encourage specialization (like Alpha Protocol or Deus Ex) carry with them an implicit promise that any specialization will eventually see you through, ideally with that specialization. Breaking this will still be considered OK as long as the core Perfect Play rule still applies.
  • Interactive Fiction/Adventure Games: The game should be beatable using only standard verbs. More or less automatically achieved by mouse-driven graphical adventures unless they do something amazingly gimmicky—not usually as valued in IF, which prefers the weaker requirement “properly clue all required nonstandard verbs”. What qualifies as a “standard verb” is a community consensus, which doesn’t really help novices. Still, if a required action could plausibly be phrased with a standard verb, that standard verb should at least be accepted as a synonym.

Perfect Play From Any Point

Regardless of the game’s situation, a sufficiently good player can provide a “perfect tail” from any point.

  • Shmups: Intrinsically impossible, since being one frame away from death will usually be too late. Weaker versions of this may apply.
  • Platformers: Intrinsically impossible for the same reason as for shmups; restricting to “possible starting from any point where the PC is not in immediate danger” is usually automatic if perfect play is possible at all.
  • Interactive Fiction/Adventure Games: This is traditionally measured on the Zarfian Cruelty Scale; any game with a rating of “Polite” or lower meets this criterion if it also meets the perfect play criterion. Community consensus is that this is very important, but not strictly mandatory.
  • Fighting Games: This is almost automatic. The basic concept here is that comebacks should always be possible, and since damage is deterministic and there is always a psychological component of some kind, if a player is not in the middle of being defeated by a trap or combo, a comeback will be possible.