Clojure, Docker, Dokku, Digital Ocean and Shell Script: The New Blog, Part 1

I was in a funk. I had been working on products and consulting for a length of time long enough to escape easy recollection. In that time, I had made little time for one of the primary joys of coding: small, exploratory, harmless projects. Experiments whose only customer is myself, and whose success or failure impacts nothing critical in my daily life. Play, in other words.

I had, months prior, completed an Instrumental project that replaced some Ruby components with Scala. In the course of that effort, I had the chance to use the excellent Netty, and thoroughly enjoyed the abstractions it provided over IO consumption. Wanting an excuse to play with Netty in a different environment, I decided that rewriting my blog in Clojure+Netty would be a simple enough exercise to let me learn a Lisp on a trivial project. And, because reasons, I should be able to deploy my blog via git pushes to a Dokku/Docker managed container on an entirely new server. And render GitHub flavored Markdown. And store blog posts in a MySQL db. And also manage the blog content with existing tools like Sublime Text and Marked. And also automatically upload referenced assets in my posts to S3.

Well, it seemed trivial at the beginning.

The Host

I decided that the first and most effective step I could take was to migrate my existing blog (a WordPress site) from Linode to Digital Ocean and the new Dokku setup; Dokku's simple install of

 wget -qO- https://raw.github.com/progrium/dokku/master/bootstrap.sh | sudo bash

was enough to get the software running, though I found that configuring it beyond its very simple defaults required reading the source regularly. Since I was porting over a number of other domains that I also wished to host via Dokku, I had to make the following changes:

  • Ensure every domain hosted via Dokku was 'bare', e.g. no "www". This meant the Dokku remotes looked like: git remote add deploy git@yeti-factory.org:the-domain-name.com.
  • Change the Dokku Nginx config (/etc/nginx/sites-available/default) to hold a line like the following:

    server {
      # Dokku config above here
      location / {
        if ($host ~* www\.(.*)) {
          set $host_without_www $1;
          rewrite ^(.*)$ http://$host_without_www$1 permanent;
        }
      }
    }
    

    The above catches any request for a domain with www as a prefix, strips the prefix from the requested domain, and issues a permanent redirect to the bare domain; the browser's new request is then caught by Dokku's own generated domain configs.

  • Add a custom "static" buildpack to the Dokku progrium/buildstep container, the exact steps of which I omit here because it was late and the process was annoyingly manual. Suffice to say, it involved shelling into the container, updating buildpack indices, committing container changes, and then redeploying an app.
  • Spin up a MySQL DB running outside of Docker to service all the running containers (a sketch of this setup follows this list). I didn't find any super easy way of handling a container with specific contents ( MySQL's data dir, let's say ) that should persist across container changes.
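
A minimal sketch of that MySQL-outside-Docker arrangement (these aren't the commands from the post; the bridge IP, database name and credentials are all hypothetical):

    # Assumes Ubuntu and the default docker0 bridge IP of 172.17.42.1
    apt-get install -y mysql-server

    # Bind MySQL to the docker0 bridge so containers can reach the host
    sed -i 's/^bind-address.*/bind-address = 172.17.42.1/' /etc/mysql/my.cnf
    service mysql restart

    # Create a database for a hypothetical app named "blog"
    mysql -u root -e "CREATE DATABASE blog"
    mysql -u root -e "GRANT ALL ON blog.* TO 'blog'@'172.17.%' IDENTIFIED BY 'secret'"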

Once I had finished these steps, I was in a place to migrate entirely from the expensive Linode ($40/mo) to the cost-efficient Digital Ocean ($5/mo). Next up, I just had to learn Lisp and write a bunch of code!

Getting Therious

Already having some experience with Netty, I was able to bootstrap my knowledge of how to build a simple HTTP server by browsing the excellent [samples directory](https://g...


Half Price Books, Amazon and the Real Value of a Used Book

One month ago I was fed up. The amount of space in my house devoted to things I didn't use, wasn't interested in, and had no plans on using in the immediate future tweaked a serious nerve.

One of the worst offenders when it came to space waste was a lifetime's worth of paperback and hardback novels; books I had owned since my teen years, running the gamut from Bram Stoker's Dracula to Steve Perry's Shadows of the Empire to a pile of marginally useful O'Reilly tech books. ( Clearly I abuse the word gamut. )

With close to a hundred and fifty books that I had no plans on reading again, I had found an easy way to begin removing unnecessary clutter from my house. Armed with a copy of Delicious Monster, I scanned in all the barcodes of the books I owned, and created a spreadsheet that tracked their used prices (automatically fetched thanks to the excellent Delicious Monster).

Now I had a general guide as to how much my books would be worth if I sold them all used on Amazon; in this case, my books would be worth something in the range of $1800. Granted, there was a fair amount of variability in this price; the $1800 presumed that there was some definite demand for each book (doubtful), that the edition Delicious Monster looked up was the same as the copy I actually owned (not true in many cases), and that I had infinite time, such that the likelihood each book would sell would eventually be 100%. So, my belief that these books were actually worth $1800 was a fantasy not so different from the embarrassingly large number of Dragonlance books I was about to sell.

Thankfully, I had another route open to me.

Amazon Trade-In

According to Amazon's website, Amazon Trade-In will let you send certain books, video games and movies directly to Amazon and receive immediate Amazon credit. No waiting for a sale, immediate gratification.

This definitely satisfied my desire to get rid of many (not all) of my books, though I was disappointed to see that Amazon did not offer trade-ins for every book I owned. I would guess that the trade-in value is directly tied to the likelihood that Amazon can sell a book for some % greater than the lowest(?) used price available, and for some books either the supply is simply so great that demand cannot meet it, or demand is nonexistent.

However, in order for me to determine if it was worth my while to even go down this road, it seemed that the best course of action would be to find out the total Amazon Trade-In value of all the books I owned. One Ruby script later, I had calculated the Amazon Trade-In value of my collection: $504.75.
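
For illustration, a sketch of the shape such a script might take, assuming the trade-in prices have already been fetched into a CSV (the file name and columns here are invented; the original script isn't shown in the post):

require "csv"

# trade_in_values.csv holds isbn,title,trade_in_price rows, with
# trade_in_price left blank for books Amazon won't take.
total = 0.0
CSV.foreach("trade_in_values.csv", :headers => true) do |row|
  total += row["trade_in_price"].to_f
end

puts "Total Amazon Trade-In value: $%.2f" % total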

Put another way, Amazon stood to make ( at best ) $1295.25 off my book collection. They had the comparatively infinite time needed to make that profit enter the realm of the possible.

Miffed at the rather large profit margin Amazon stood to make off my books, I abandoned my project, and let myself be distracted by life for a few weeks.

Half Price Books

Flash forward to a few weeks later, and the lucky chance of my in-laws being in town to help watch our daughter while my wife and I did chores allowed me to return to my decluttering project. Desperate to simply get rid of the things I owned, I considered taking all the books to the library ( no luck, the library only accepted books during personally inconvenient time windows ), simply selling everything personally through Twitter ( required too much work without a guarantee that I'd free up the space anytime soon ), or Half Price Books.

Desperate to simply not have this problem anymore, I loaded up my car, and made my way to Half Price Books. An hour later, they made their offer: $67. Or, following the example above, 3.7% of their Amazon Used value, and 13% of their Amazon Trade-In value.

I was surprised, of course. Asking for an itemized list of how they valued the different books, I was told they had none. **And here was wh...


Scaling Instrumental with Scala

Background

Instrumental, the product I've been working on at Fastest Forward for the past few years, is written in Ruby. In general, Ruby's been a great win for us: easy to write readable, testable app logic, lots of tools available to help with deploy and infrastructure automation, and, of course, great for developing ideas quickly.

Ruby does have some well known drawbacks, however: concurrency is not a major area of effort for the community (or the most widely used Ruby implementation, MRI), CPU intensive tasks typically are slower in comparison to other languages, and performance tooling support is pretty lackluster.

About a year ago, we were forced to make a migration from Linode to AWS due to unpredictably variable IO performance for our primary database (along with a host of smaller issues). While AWS had some great tools to help us scale our infrastructure to meet our performance needs exactly (provisioned IOPS, SNS+SQS), we found that the cost to keep the same performance level was about 5x higher.

The Problem

The largest contributor to our high hosting cost was the money we were paying for high CPU performance boxes. During our migration, we made the decision to do a 1:1 machine transition, such that for the N boxes we had hosted on Linode, an equivalent N was hosted on AWS, with roughly the same CPU/RAM allocation.

It should be noted here how cost efficient Linode can be in comparison to AWS, if CPU performance alone is your major consideration. Separate testing of DigitalOcean on our part showed an even greater $/operation savings.

So, we were paying 5x the cost for the same level of performance. One of our largest cost contributors was our collectors: daemons written in Ruby and EventMachine to accept and queue incoming metrics data. These processes represented the most performance sensitive areas of the app, as performance hiccups at their level could cause dropped data from the customer. One Ruby collector would process around 150,000 updates per second on a c1.xlarge; an audit of the Ruby code didn't show many areas where we could make effective performance improvements. The process' behavior was simple enough that there weren't many architectural improvements we could make that would yield any significant gains; we buffer data in memory, and then flush it to the filesystem at regular intervals to be queued by a separate process.
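
For illustration, here's a minimal sketch of that buffer-and-flush shape in Ruby and EventMachine. This is not Instrumental's collector; the port, spool path, and flush interval are all invented:

require "eventmachine"

# Accept incoming metrics data, buffer it in memory, and flush
# the buffer to a spool file on a timer.
module Collector
  BUFFER = []

  def receive_data(data)
    BUFFER << data
  end

  def self.flush
    return if BUFFER.empty?
    File.open("/tmp/collector.spool", "a") { |f| f.write(BUFFER.join) }
    BUFFER.clear
  end
end

EM.run do
  EM.start_server("0.0.0.0", 8000, Collector)
  EM.add_periodic_timer(1) { Collector.flush }
end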

Months earlier, I had written a prototype implementation of the collector in C, both as a return to a language I enjoy programming in, and as an experiment in seeing how fast we could make the front end. Under the testing conditions, we saw a greater than 3x improvement in performance, and expected that there were more gains to be had should we commit to rewriting the collector. We chose not to because at the time, our need for a high performing collector was a theoretical curiosity; now it was a financial necessity.

Choosing Scala

Scala was not our first choice. The aforementioned C prototype seemed an obvious first pick, but simultaneous code rot and architectural changes made a full rewrite likely necessary, which removed the value of having an existing prototype. C++ or Java seemed promising in that there'd be a well tested standard library available for the data structures we got "for free" in Ruby (and would not have in C), but prior experience led us to believe that development speed would suffer for choosing either.

JRuby initially seemed like it might be an obvious win, but we only saw 1.7x improvement over the Ruby collector, and some odd behaviors in gems that we relied on that made us believe we might spend more time than we'd like fixing compatibility issues. Both node.js and [Go](http://nod...


TextTumble: A summary of development for a fledgling iPhone game programmer

This post was originally intended to be published in June. I held off publishing it though, intending to expand upon it. In retrospect, it's pretty damn big, so I'm just going to let it ride, and address TextTumble's approval process and release process in a separate post. -CZ

We made it! 249 days since the first source code commit, TextTumble is finished. At around 11:30 PM, I cracked the champagne and sat back to enjoy the prospect of a job completed. While there are still some additional features that will be fit into the online scores and profiles, I feel very happy labeling the package with "1.0" (a version number whose significance can be daunting to a first timer like myself). As it sits in Apple's review queue now, I'd like to give you an abbreviated history of the game's development. I'm not going to offer much in the way of advice or instruction here; this is just telling a story. :-)

Sometime near mid September 2008, I went out for lunch with now co-owner of Magellan Media, Matt Rogers. Matt and I were co-workers at the Indianapolis Star (Matt recently left the Star to work at mediasauce), where we worked with a team of other artists and programmers to create local community sites.

Matt asked me to work together with him on a new game concept: create a falling-letter-tile word game for the iPhone. At the time, Wurdle was one of the more popular games on the iPhone App Store and seemed to be generating a lot of derivative apps. Matt felt (a point which I believe is still valid) that a new type of word game would capture the public's attention as an alternative to the Wurdle-style game.

I was definitely interested; to say that I have long desired to enter into the game industry would be an understatement. While in college, I had hacked together some simple mods for Unreal Tournament and Rune, but I had never stuck with a project long enough to see it polished and complete. While many of the upper echelon of iPhone games on the App Store seem to be coming from game industry veterans breaking out on their own, the game that Matt and I would make would be our first.

Invigorated from that lunch, I told Matt that I would create a prototype and have it ready for him to check out in a few days. On September 29, 2008, I sent Matt a video of the iPhone simulator that looked something like this:

image omitted due to laziness in blog migration

I think that programming is full of many images like this: screenshots that on their own merits are entirely forgettable, but hold huge significance to the creator. This image represents the very beginnings of not only the first game I intended to follow through to completion, but also the first publicly used application I've written using OpenGL, a long time career goal of mine.

Matt and I, both pleased that a prototype of the game's basic interaction was put together so quickly, set a deadline for ourselves: finish the game by Christmas. It would be some hard, late nights, but I felt it would be possible. And yes, I mean Christmas 2008.

Over the course of October and November, I began working late nights, trying to pull in the necessary components of the game: the word dictionary, the tile texturing, and the interface elements. As I did this, I kept one goal in mind: keep all the code as simple as possible. I knew that attempting my first game would mean that I'd be spending more time rewriting code than writing new code, as I discovered newer and better ways to accomplish things. I also knew that I could easily fall into the trap of writing over-architectured code when I encountered a problem, as ...


"Toll Free Bridging" for your own data

In the course of working on a quick side project, I came across a few interesting items of note for those working on Objective-C projects.

I was looking for a way to recreate the behavior shown by some Core Foundation objects, in that they may be cast, cost-free, to their equivalent type in the Cocoa framework. An example:

CFStringRef my_string = CFSTR("a string");
NSString* also_my_string = (NSString*) my_string;

It turns out that, should you have a need to represent a struct also as a subclass of some NSObject, you can do so by having your struct include a member of type Class (or struct objc_class*) named isa. The isa member is used by Objective-C to access the table of methods and the class hierarchy that your object (or struct, but we're splitting hairs) is associated with. A much more elaborate discussion regarding the isa member can be found at Mike Ash's website, which is damn near invaluable for learning more about Objective-C.

Once you've added the isa member to your struct, all that remains is to reflect your struct's internal structure in the NSObject representation you're creating. While it may seem obvious to some, it bears noting that the order of declaration in your NSObject subclass should mimic the order of declaration in your struct. Here's an example:

typedef struct {
  Class isa;        // filled with [MyDumbObject class] before casting
  int   some_number;
  BOOL  some_flag;
} MyDumbStruct;

@interface MyDumbObject : NSObject {
  // Declaration order must mirror MyDumbStruct;
  // NSObject itself provides the isa slot.
  int some_number;
  BOOL some_flag;
}
-(int) someMethod;
@end

Now that you've done this, you'll be able to cast a properly created MyDumbStruct to a MyDumbObject instance. I say properly created because you will need to populate the isa member of your struct with the pointer to the Class object that correctly represents the class your struct will be associated with. You can do this by accessing the class method of the, ur, class in question. For example:

MyDumbStruct s;
s.isa = [MyDumbObject class];

That's it - and here's what you end up with in terms of capability:

MyDumbStruct s = { .isa = [MyDumbObject class],
                   .some_number = 2,
                   .some_flag = YES };
MyDumbObject *o = (MyDumbObject *)&s;
NSLog(@"My dumb object has a number: %i", [o someMethod]);

In my next post, I'll show what I've been working on that actually uses this technique.

Addendum: This link elaborates on what I described above; it also describes usage of the @defs keyword, which will allow you to only have to declare the member variables for your Object once, and have them inserted in your struct at compile time.
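
For the curious, here's a minimal sketch of that @defs usage (pre-Objective-C 2.0 compilers only; it replaces the hand-written struct members from earlier):

// @defs expands to MyDumbObject's instance variables (isa included),
// so the struct layout can never drift out of sync with the class.
typedef struct {
  @defs(MyDumbObject)
} MyDumbStruct;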


Usable Xcode?!?!?

My usual user interface experience with Apple software is of the pattern:

  • Feel an app has a shortcoming
  • Work with shortcoming
  • Curse shortcoming in bars / Twitter / work
  • Spend 10 minutes googling to see if someone else has experienced the problem, and find out the feature has been there all along, and has some shortcut key assigned to it
  • Facepalm

So, predictably, when I was griping over the lack of a quick file navigation feature like Emacs' C-x b or TextMate's ⌘T, a few minutes of googling solved my complaint.

The Open Quickly option (shortcut: ⇧⌘D) not only gives you quick file navigation, it also (as shown by the screenshot) gives you quick jump-to-symbol navigation. I can only say: sweeeeet. (And also, facepalm.)


Blog Redesign and some undocumented ERB

(Update: Some minor clarification regarding the performance cost of eval())

Visitors to the site might notice that there was a drastic (and long called for) redesign of the site. It is not, I assure you, the sudden manifestation of artistic ability on my part; rather, it was some great work done by Erik Goens, which has made the website look like something a little better than "derelict".

I've got lots to say about Text Tumble, and maybe even a little bit to contribute about Objective C (pending some exploratory work on my own part), but I'm afraid I don't have enough ready yet to go in depth. So, in the short term, allow me to share a little bit of undocumented ERB capability that came to me by way of Eric Hodel's Cached ERB Template class.

Background: ERB is the library used to drive most Rails templates. When you write a view template and use this form:

This is my html file, <%= @some_ruby_variable %>

it will be processed by the ERB library (which is part of your Ruby Standard Library) and turned into Ruby code which may be executed. You can see it for yourself in irb:

require "erb"
doc = ERB.new("this is my template <%= @var >")
puts doc.src.inspect

The "src" attribute of the doc variable is the Ruby code that will be executed when the template is rendered. You can evaluate it yourself, like so:

eval(doc.src)

or use the .result method to obtain the evaluated template result, like so:

doc.result

Each time you do this, you will evaluate the generated Ruby code anew. The cost of evaluating Ruby code programmatically is not trivial - it requires parsing and interpreting the string of Ruby code as if it were a separate file that had been loaded into the specified context.

You can bypass this by using some undocumented functionality in the ERB library. The important method to consider in this case is def_method, which evaluates the generated Ruby code and "compiles" it into a method, allowing you to call that method later on without re-evaluating the string.

For instance:

doc = ERB.new("foo bar <%= @baz %>")
class Test; end
doc.def_method(Test, "call_dynamic_method")
t = Test.new
puts t.call_dynamic_method

def_method will take the generated Ruby source and evaluate it in a manner similar to this:

doc = ERB.new("foo bar <%= @baz %>")
method_name = "call_dynamic_method"
method_source = "def #{dynamic_method}; #{doc.src}; end"
Test.module_eval(method_source)

You can see a more elaborate version of this technique in the Rails source in ActionView::Renderable#compile!. That means that, if you are using Rails, the performance gains from this already apply to you (and have since Rails 1.0). However, if you are using ERB on its own, it's a good technique to have at hand.
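
To see the difference yourself, here's a quick sketch using the standard Benchmark library (the iteration count is arbitrary, and this example isn't from the original post):

require "erb"
require "benchmark"

doc = ERB.new("foo bar <%= @baz %>")
class Test; end
doc.def_method(Test, "call_dynamic_method")
t = Test.new

n = 10_000
Benchmark.bm(12) do |bm|
  # Re-parses and re-interprets doc.src on every iteration:
  bm.report("eval")       { n.times { eval(doc.src) } }
  # Compiled once by def_method; each call is an ordinary method call:
  bm.report("def_method") { n.times { t.call_dynamic_method } }
end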


SQL and the Update Loop

A common technique for a word-based iPhone game is to include an SQLite database containing the game's word list, to be queried when checking a sequence of characters for potential matches. While the ease of offloading data management to SQLite is certainly a boon (as opposed to creating your own data structure to load the word list at run time), it is important to keep several things in mind when doing so:

  • Normalize your data. If you are querying the database for words, for example, ensure that your input data form matches your database data form before you issue your query. Using a dictionary as an example, make sure the words in your dictionary are either all upper-case or all lower-case, and that your input data matches. The difference between SELECT word FROM dictionary WHERE UPPER(word) = UPPER(?) and SELECT word FROM dictionary WHERE word = ? is at least n + 1 calls of UPPER per query. Buy yourself that time with up-front data treatment (see the sketch after this list).

  • Query on change. Chances are, your game state will only need to query when some user input or game event occurs. Should this be the case, your data structures should be such that they only query the database when absolutely necessary. For instance, using a word based game as an example again: if the user has spelled CAPITULATE, and the I in CAPITULATE is removed, your data structure should already recognize that CAP and LATE are still words (based on the previous query that also showed CAPITULATE to be a word), and not query the database again to rediscover this. While it places more of a burden on your game logic, you'll avoid the cost of another scan across your dictionary against each potential word candidate.

  • Indexes. This should be obvious, but your database should be aggressively indexed if it will be queried during the update loop. Create UNIQUE and PRIMARY indexes as relevant, and do so in advance.

  • VACUUM. The VACUUM statement cleans up fragments left behind from successive INSERTs and DELETEs. You should VACUUM before shipping your database to clean up any work you've done in development, and if your database will be modified by the game, it's a small thing to issue the statement during your game's quit-cleanup phase.

  • Cache failures or successes. Consider creating an in-memory cache of failures or successes ( whichever is more likely to happen based on your requests ), and use it often. You can whip together a simple ring buffer in no time, or you can create something more complex that expires stale data based on access count or time of last access.
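
Here's a minimal sketch of that up-front treatment; the table and index names are invented for illustration:

-- Store words pre-uppercased, and index them, so queries never call UPPER()
CREATE TABLE dictionary (word TEXT NOT NULL);
CREATE UNIQUE INDEX idx_dictionary_word ON dictionary (word);
INSERT INTO dictionary (word) VALUES (UPPER('capitulate'));

-- The caller normalizes input before querying:
SELECT word FROM dictionary WHERE word = 'CAPITULATE';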

Most of this advice has been inferred through repeated performance optimization of Text Tumble. If you've decided to do your database access in the update loop, this should help you shave off a few hours from your optimization work (and get closer to your target constant FPS :-) ).


Ruby Bit for the Weekend

I was looking over the code behind the New Relic RPM plugin for Rails, and noticed a small bit of code in the following style:

@obj = SomeClass.new
# Do stuff
def @obj.a_new_method
  # add functionality
  # to obj, specifically
end

Without entering into the mostly futile discussion regarding the numerous goods / evils of this sort of monkey patching, it was a cool piece of Ruby functionality I hadn't yet seen.


JQuery Style Function Chaining

As I just posted, I recently had the chance to add jQuery-style function chaining to my Spatial Query javascript library. By function chaining, I mean the ability to do something like this:

$("div.of_interest").css("border", "solid 1px black").show()

It's a useful technique in situations where you plan on performing multiple operations on a given set of data.

I went straight to the source to determine how to create that sort of functionality - the jQuery source, mirrored on GitHub by JackDanger.

The relevant code was in jquery/src/core.js. The simplified version of the technique could be stated like:

var jQuery = function(selector, context){
  return new jQuery.fn.init(selector, context);
}

jQuery.fn = jQuery.prototype = {
  init : function(selector, context){
    /*
      Acquire the elements in question
    */
    return this;
  },
  // The rest of jQuery's functionality would
  // be defined in this prototype, such as
  // .each, .find, etc.

};

jQuery.fn.init.prototype = jQuery.fn;

The line of particular interest is the line:

jQuery.fn = jQuery.prototype = { // etc.

Here we assign the prototype to a separate variable, jQuery.fn, in order to refer to its members (namely the init function) in the main jQuery function. Then, in order to make sure that new instances of jQuery.fn.init have the methods declared in jQuery.prototype, we assign the jQuery.fn.init.prototype to the jQuery.prototype.

Now at this point, the jQuery function can return instances of "itself". In order to chain from one function to the next, functions of the jQuery.prototype will either return this or jQuery(collection_of_elements) to allow the chain to continue.
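
As a concrete example, here's what a chainable method could look like (hypothetical, not from the jQuery source, and assuming init fills the object array-style with matched elements, as jQuery's does):

// Hide each matched element, then return this so that calls
// like $("div.of_interest").hide().show() can keep chaining.
jQuery.fn.hide = function(){
  for(var i = 0; i < this.length; i++){
    this[i].style.display = "none";
  }
  return this;
};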

There are lots of other cool techniques in the jQuery source - they're obviously having fun pushing Javascript to what limits they can find. :-)


Spatial Query

I recently had the need to build sets of polygons and compute an overall union of them; being the painfully naive person I am, I jumped right into the task thinking that it would be relatively easy. A week later, I had finally arrived at a working polygon clipper modified for boolean operations on polygons (mostly, see the README), but boy was it ugly. Throughout the entire process, I kept thinking "wouldn't it be nice if I had a decent Vector/Matrix/Polygon data structure". I made a few dirty attempts at writing something I liked looking at, but none of them were very satisfactory - really, I wanted something that looked a lot like JQuery. Something that just took whatever data I threw at it, and most of the time did something sane with it.

So, I decided I'd write Spatial Query, a small JS library that lets you do all the grunt-work Vector/Matrix/Polygon operations easily. And some not-so-grunt-work stuff that I just needed anyway - like generating a convex hull, or the union of two polygons.

$p([[0,0], [0, 10], [10, 10], [10, 0]]).convex_hull_2d();
$p([[0,0], [0, 10], [10, 10], [10, 0]]).union_2d([[5,5], [5, 7], [15, 7], [15, 5]]);

Since the project that spawned this library dealt with geographic data, there is also some functionality that will allow you to compute distances between latitude and longitude pairs, as well as conversion between Latitude / Longitude and WGS84 coordinates:

$ll([lat1, lon1]).distance_to([lat2, lon2]);
$ll([lat1, lon1]).vector();

A better explanation and more are available in the README

There are still some kinks in this version that I've got to iron out, so I've not even bothered assigning a number to it. Consider it version zero point ugly.

Thanks to my employer, the Indianapolis Star, for letting me open this to the public.


Text Tumble: Getting Started

It's a tough thing, moving between web development (a world which is dominated by scripting languages) and game development. I've been programming computers since I was 16, and for the majority of that time, I've been /paid/ to develop webpages, and /dreamed/ about developing video games.

Now, here I am at 28, finally doing that. While I've worked on video game mods in the past, those really aren't quite the same thing as doing a from-scratch video game. (Even one that, were it not my own, I would think of as simplistic - embarrassingly so, when compared to other games.) And while I can certainly make my way in a language with more explicit memory management (on a platform where resources may be constrained, though saying the iPhone is memory constrained may draw different reactions from different developers), it's rather like saying I can speak Spanish. I've been trained to do so, and have been able to in limited situations, but my day to day usage is far different (see: Ruby, JavaScript, Python, et al).

So getting started on iPhone development, while not impossible for me, introduced me to a different set of problems than I'm used to solving in web based apps.

  • OpenGL: I've played with OpenGL in trivial apps off and on for years, always swearing I'd do a better job learning it, then never getting around to it. Now that I actually AM using it, I feel like I have a lot to learn when it comes to how colors combine when objects are rendered over each other, or image bit alignment. These aren't things I need to know in the simple case, but I can see how becoming much more familiar with these things in the future will open a lot of graphical options to me.

  • Data structures: Believe it or not, this one actually bothers me quite a bit. I've spent a lot of time thinking about how to represent a list of entities in memory where my primary method of access would be spatial. It may be naive, but I've been constantly thinking about how a "spatial hash" may work, and be implemented. I'm kicking myself now for my silly list style representation, when I could have spent just a day looking through the Algorithms book beforehand and implemented a balanced interval tree instead. This is a common thing for me, working with real time "physical" objects now - learning to reference my books and the internet FIRST, not after the fact.

  • Appreciation for physics: I feel like knowing and learning more about physical simulation will change me from being an average developer to being a /good/ developer. While I don't think I'll ever author a paper on a topic, being more aware of where to turn for an equation that describes some physical phenomenon that accomplishes what I need (and being comfortable in how to translate that to actual code) will be invaluable.

  • State: This is infuriating for me, as I should have known better. Maintaining state, changes to state, and centralizing access to this information is something I should have known right from the beginning of TextTumble. While the code is by no means unmaintainable, I see now the bad habits forming through independent has_done_x state flags littered through the code. Surprisingly (to me), abstracting all this not only would reduce the noise in the source code, it could also lead to efficiencies in rendering by abstracting the state change facility of OpenGL. Only sending state changes to the hardware when they are necessary (obviously, in retrospect) leads to an increase in framerate, which will obviously pay off later on; a sketch of the idea follows below.

    There's more of course, but the sum is: I really couldn't be any happier than I am right now, (re)learning things and finally making a game.
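
Here's a minimal sketch of that state-change caching in C against the OpenGL ES 1.x headers; the function name is invented, and this isn't TextTumble's actual code:

#include <OpenGLES/ES1/gl.h>

// Track the last blend state sent to the GPU, and skip
// redundant glEnable/glDisable calls when nothing changed.
static GLboolean blend_enabled = GL_FALSE;

void set_blend_enabled(GLboolean enabled) {
  if (enabled == blend_enabled) return;  // no-op: state already set
  if (enabled) {
    glEnable(GL_BLEND);
  } else {
    glDisable(GL_BLEND);
  }
  blend_enabled = enabled;
}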
