Joseph Hallenbeck

July 28, 2016

WordPress to Jekyll Conversion

Filed under: Software Development — Joseph @ 6:52 pm


I am currently undergoing a process of slowly converting this and my other blogs from WordPress to Jekyll. One of the first items that I needed to account for was converting all of the posts from WordPress into Markdown for use by Jekyll.

Jekyll itself provides a process for importing, but I was initially displeased with the results. I want my posts exported as Markdown files so I can retain them in a simple plaintext format that can be post-processed into a variety of typesettings, be it online or perhaps a print format. The default importer only outputs HTML.

In all honesty, I’m not sure why I’m using Jekyll. The Ruby dependency ecosystem always seems like such a pain to me. Dependencies not automatically resolving. Things breaking from one system to the next. But, I don’t really know of any other big-name static site generators in other languages. I’d do a Python one in a heartbeat.

So, for my own personal memory, this is the process that I went through to get my posts out of WordPress and into Markdown:

1. Export Content from WordPress

WordPress has an export tool when you are logged in to the admin dashboard. By selecting “All content,” I can get everything from the site in a massive XML file. This gets us a little closer.

2. Ignore Jekyll-Import

Jekyll has a series of importers for popular sources. It even has two for WordPress! I tried both with little satisfaction. They take the exported XML file and spit out HTML copies of our articles. Getting back to Markdown would require additional post-processing.

3. ExitWP

I stumbled upon a Python tool that does the trick much better. ExitWP takes the exported XML file and converts all of our articles into *.markdown files.

Follow the instructions to install the dependencies. Dump the XML file into the wordpress-xml directory and then run the ExitWP script. I found that there were some linting issues in my XML file that caused it to fail. Opening the file in Vim and tracking them down via its XML linting functionality made it pretty simple.
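The whole round trip fits in a few commands. A sketch, assuming the upstream thomasf/exitwp repository and its conventional exitwp.py entry point and build output directory, which may differ by version:

```shell
# Grab ExitWP and install its Python dependencies (assumed repo location).
git clone https://github.com/thomasf/exitwp.git
cd exitwp
pip install --user -r requirements.txt

# Drop the WordPress "All content" export into place and run the converter.
cp ~/Downloads/my-blog.wordpress.xml wordpress-xml/
python exitwp.py

# Converted *.markdown posts land under the build directory,
# ready to copy into a Jekyll _posts folder.
ls build
```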

4. Copy Your Images Directory

Unfortunately, you are still left copying the images directory and manually updating the links to images to get things working. This isn’t a major problem for me as a migration does entail a lot of additional overhead if you want to do it right — 301 redirects, image updates, cleaning up posts.

May 2, 2016

Goodbye Trello, Hello Todo.txt

Filed under: Software Development — Joseph @ 9:00 am


In 2013, I was fresh off my switch from Windows to Linux as my full-time OS. I was reading books like David Allen’s Getting Things Done and looking for a good digital planning system. Enter Gina Trapani’s Todo.txt script.

Todo.txt allowed for command-line todo lists. Everything was stored in a plaintext file, easily editable with any text editor or automated via the command line. I used it for roughly a year, and at the time I both loved and hated it.

On the one hand, it was easily automated. I could set up daily and weekly tasks to be automatically populated to my list in the morning. I could easily bulk edit things in VIM.

But there were still some big pain points. My lists tended to get way too long, scrolling right off the top of my screen. There was no easy way to manage multiple todo files, and there wasn’t much for sorting. The result was that managing my lists and getting an overview of everything became increasingly difficult.

When my employer started using Trello for product management, I saw my solution. Trello does a great job of visualizing where all my tasks belong. Following GTD, I had a backlog column, next actions column, today column, in progress, and done. Moving cards between columns let me visually see the flow of work through the day. A big tickler board kept all my long-term ideas.

Now, in 2016, I find myself re-installing Todo.txt and giving Trello the boot. Why, if Trello was such an excellent system?

Goodbye Trello

There are a number of pain points that Trello simply cannot get over that Todo.txt solves easily:

Vendor Lock-in

A theme for a lot of my projects this first quarter of 2016 has been a move away from vendor lock-in. I got rid of my IDE and switched back to developing in Vim. This got me thinking about how many other products I use that have vendor lock-in: Evernote instead of just keeping plaintext files, Dropbox instead of using rsync, and Trello instead of Todo.txt.

With Trello, my done lists, my massive tickler list of project ideas, and my entire workflow are dependent upon the continued existence of Trello the company and its good graces to continue hosting all of this content for free.

Now, Trello does have an export feature, but the result is a massive JSON blob. It might as well be binary for as much use as I will get out of it. I most certainly will be backing up all my Trello boards. Yet if I ever wanted to make use of this data, I would first need to write some kind of interpreter for it.
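That interpreter need not be huge, to be fair. A minimal Python sketch, assuming the top-level "lists" and "cards" arrays (with "name", "closed", and "idList" fields) that Trello’s board exports have used, could turn the blob into todo.txt-style lines:

```python
def trello_to_todotxt(export):
    """Return one todo.txt-style line per open card, tagged with its list name.

    `export` is a dict parsed from a Trello board export; the field names
    here are assumptions based on Trello's JSON export format.
    """
    # Map list ids to list names so each card can be given an @context.
    lists = {lst["id"]: lst["name"] for lst in export["lists"]}
    lines = []
    for card in export["cards"]:
        if card.get("closed"):
            continue  # archived cards stay out of the todo list
        context = lists.get(card["idList"], "unknown").replace(" ", "")
        lines.append("{0} @{1}".format(card["name"], context))
    return lines
```

Fed with `json.load(open("board-export.json"))`, this would at least rescue the card titles into plaintext.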

Todo.txt, as a plaintext format, is to todo lists what Markdown is to Word documents. It’s open, interchangeable, and can be read on nearly any system. It will follow me for years to come.

Difficult Automation

Switching back to VIM and working on the terminal all the time made me realize just how many computing tasks I have left un-automated.

In planning my daily todo tasks there are a number of recurring todos. A daily stand-up starts my work day. A sprint planning meeting occurs every other week. Duolingo calls me to my daily French learning session. Monthly bills need to be paid.

On Trello, entering these items into my board is a manual exercise. I keep a second board of “recurring” tasks that I copy over at the start of each sprint. It takes me thirty-some minutes just to do this.

Now, Trello does have an API, but I would need to learn it, probably create some kind of developer account, get API keys, and compose some sizable application to make the REST calls. It would probably take me a week’s worth of work to automate that entire process.

With Todo.txt, a little Bash-fu, and a cronjob, this all gets automated away. Every night my daily tasks get added to my todo, every sprint my per-sprint tasks get added, and at the end of the month a note to pay my bills shows up. This gets offloaded so I no longer need to think about it.
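A minimal sketch of what that crontab might look like (the paths and task text are hypothetical; todo.sh add is the stock todo.txt-cli command for appending a task):

```shell
# crontab -e
# Every weekday at 6 am, seed the day's recurring tasks.
0 6 * * 1-5  /usr/bin/todo.sh add "(B) Daily stand-up @work"
0 6 * * 1-5  /usr/bin/todo.sh add "(B) French lesson @duolingo"

# First day of each month: a reminder to pay the bills.
0 6 1 * *    /usr/bin/todo.sh add "(B) Pay monthly bills @home"

# Every-other-week sprint tasks take a touch more Bash-fu, e.g. a
# wrapper script keyed off the ISO week number (% must be escaped in cron):
# [ $(( $(date +\%V) % 2 )) -eq 0 ] && todo.sh add "(B) Sprint planning @work"
```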

Task Creation Friction

GUIs add friction to any task.

Trello is no different in that regard. If I want to add a new task, I need to fire up a browser, navigate to Trello (assuming I even have an internet connection), create the card, name it, and click a bunch of buttons to add a label. Sometimes I just don’t want to do all of this; often I find that I don’t sufficiently break a task down into small enough pieces purely out of resistance to creating more cards. With Todo.txt, being on the command line means I need no internet connection, I can simply start typing to add my task, and there is little overhead in truly breaking any project down into atomic tasks that can be accomplished in a single Pomodoro.


After considering these options, I decided to revert to using Todo.txt. After a week of being back, I find that I love it. I am still working out my system for using it, but it truly is powerful. I’ve already discovered quite a few commands and options that I had no idea even existed before (I never realized there is a means of doing a logical OR for search terms, or of excluding terms via -TERM).

I could easily write an entire second post about how to manage todos: how to install the script, get yourself running, useful aliases, and methods for creating new add-ons and automating things. Once I really get my daily system going, I could probably write a whole post on that as well.

Plaintext Planning

I would highly recommend a read through Michael Descy’s Plaintext Productivity website, as the tips are quite good. The biggest takeaway is priority management: only use three or four priorities, and use them to manage where a task sits in the GTD workflow:

  • (A): Tasks that are in progress. Keep this down to three or four tasks at a time.
  • (B): Tasks that I will get to today.
  • (C): Next actions that can be started now. (Descy uses this for “Next Actions this week.”)
  • (D): On hold. (Descy uses this for “Next Actions next week.”)

Everything else is in the backlog, which for me is items to be done this quarter. Anything further back goes on the tickler to be evaluated some day and added at my leisure.
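In a todo.txt file, that scheme looks something like this (the tasks, +projects, and @contexts are invented for illustration; the (A)–(D) markers are the standard todo.txt priority syntax):

```
(A) Draft migration post +blog @computer
(A) Fix login redirect bug +timekeeper @work
(B) Review sprint tickets +work @work
(C) Research static site generators +blog @computer
(D) Repaint the office @home
Sort family photos +someday @home
```

Everything without a priority sits in the backlog; todo.sh ls sorts prioritized tasks to the top.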

Add-Ons & Set Up

A very brief overview of my current set up.

First, I have the todo.txt-cli script installed in my dotfiles repository, which has its own script for installing all of my related configuration files on any system I touch. The todo lists themselves are in their own separate repository, since I don’t manage todos on every system that I touch.

I follow the instructions for setting up auto completion. I also set up a number of aliases for different todo lists:

  • todo: for my daily, sprint, and quarterly task list
  • todot: for managing my tickler list
  • todos: for managing my shopping list

The aliases use the -a flag, since I prefer not to auto-archive by default.

Each alias has its own todo.cfg file, each of which sources a base.cfg file and only exports configurations that are unique to that command. As a base, I changed my priority colors to the Blue, Green, Brown, and Red solarized values for the A–D priorities, changed the project color to red, and left the context a nice light gray.
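Wired together, the aliases look something like this in a shell config (the file paths are hypothetical; -d and -a are the stock todo.txt-cli flags for choosing a config file and disabling auto-archive):

```shell
# ~/.bash_aliases
# Each alias points todo.sh at its own config; -a turns off auto-archive.
alias todo='todo.sh -d ~/.todo/todo.cfg -a'
alias todot='todo.sh -d ~/.todo/tickler.cfg -a'
alias todos='todo.sh -d ~/.todo/shopping.cfg -a'

# Each per-list config sources the shared settings, then overrides the file:
#   source ~/.todo/base.cfg
#   export TODO_FILE="$TODO_DIR/tickler.txt"
```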

As for the add ons, I added:

  • archive for archiving only selected items
  • edit for quickly opening the todo file in Vim
  • sync and its required commit, pull, and push for quick version control of my todo lists
  • projectview for some pretty formatting of project lists
  • recur for automating recurring tasks (I tried the ice_recur module but simply could not get it to work on my system)
  • xp, another task visualization, this time for done tasks
  • pri and rm (with p soft-linked as a shortcut for pri) for bulk editing priorities and deletions
  • lsgp/lsgc, another project and context visualization

Still Some Rough Spots

There are still some rough spots in Todo.txt land. First, sorting is still not quite perfect. Ideally, if I type todo lsp, I would like to have all my tasks listed by priority, then line number, grouped by project. The best that I can do right now is by priority and then line number; project grouping only occurs if I group the project lines together in Vim.

Secondly, the one big item that Trello had going for it was its phone app. This made adding tasks on the go quite easy, and made looking things up easy as well. Perhaps some of the various todo.txt apps will have the functionality that I need, or perhaps I will need to compose my own app. The joy of the matter is, though, that I’m not locked in. I can easily develop that app if I so choose.

October 13, 2014


Filed under: Software Development — Joseph @ 6:00 pm


This summer, I plunged into the depths of my backup drives and came up with some old projects that were gathering dust. Like most old projects, I find them, get excited, decide to do a major revolutionary revamp, and ultimately just end up touching up a few things and kicking them out the door. The DropFramework is one such project. For a long time, I wanted to make my own micro-framework to compete with the likes of Slim or Silex. In the end, though, I really feel that those two have the space of micro-frameworks well covered. No one needs yet another PHP micro-framework with half-done ideas floating around. Still, I did want to show it off, so I polished it up a little bit and threw it up on GitHub. Below are my thoughts on “Yet Another PHP Microframework:”

Yet Another PHP Microframework

For all intents and purposes, you can consider this an abandoned project, and I would not recommend anyone actually use it in production.

A few years ago, when CodeIgniter was still quite a hot thing and a lot of servers were still running PHP 5.2 (the “dark ages” before we got all the nice things that came along in PHP 5.3), it seemed to be quite the fashion for everyone to try their hand at writing their own framework.

This was my go at it.

You will find a lot of similarities with CodeIgniter (since that is the framework I worked with at the time), and you might also find a lot of classes that look like they came straight out of PHP Objects, Patterns, and Practice, since that book was my bible.

I wanted to do a few things in writing the DropFramework:

  1. I wanted to better understand the MVC pattern, the choices being made, and how CI works.
  2. I wanted a framework that was small enough that I could read and understand every class in it.
  3. I wanted a framework with a very small footprint that worked by transforming HTTP requests into request objects / command objects. This allowed me to fire up multiple instances of the framework per HTTP request, with the master application generating its own request objects that it would feed into its child instances, then constructing a document out of the responses from the children.
  4. I did not like at the time, and still do not like, the major design pattern of a lot of ORM solutions, which tend to treat the database as the authoritative model of the data. I would rather turn this convention upside down: treat the database as just another form of user input. The model can then be constructed from any form of input — the database, an HTTP post, a file. The PHP object is then the authoritative source for how the data structure relates to other data. Any data coming into the model passes through a validation layer that translates it (or rejects it if it is invalid).

Did I succeed at these items? I don’t really know.

Version 0.4.0

The version of the framework that had been sitting on my hard disk for some time was 0.3.0. In deciding to release it, I have done two major things:

  1. I created a simple example of the framework working. The code for this example is also up on github and a live version as well.
  2. I namespaced the entire framework and brought it into PSR-4 compliance, allowing for installation via Composer and the use of the Composer autoloader. This defeats a lot of the purpose of the PHP 5.2 era frameworks, which devoted a lot of their resources to locating and managing the loading of assets. It also, of course, means this is no longer a PHP 5.2 compatible framework, and probably makes a lot of the code look rather silly.

October 1, 2014

Centipede-RS Dev Log #2

Filed under: Journal, Software Development — Joseph @ 5:00 pm


Getting started with Piston can be a little daunting right now. Mostly this is because it’s a project that is still evolving and which has either little documentation or documentation that rapidly becomes wrong. A lot of games that I found made with Piston can no longer be compiled, a lot of example code needs various minor tweaks to compile, etc. That said, the two best resources that I found were the Getting Started tutorial and the Piston-Mov-Square example project.

Getting Started

The first hurdle in getting Piston to work was getting the SDL2 and GLFW dependencies installed. Cargo does a great job of going out and grabbing everything else, but these two items require you to do it yourself. SDL2 was rather easy, and the instructions for it can be found in the Getting Started tutorial (see above). GLFW was a bit more of a pain, and I ended up going through a Stack Overflow question to get it working. If anything, I would just point to the Getting Started tutorial to get the initial empty repository set up with Cargo and all the dependencies in the Cargo.toml.

My Repository at this Point

At this point, my repository looks like this. I began by setting up a new Piston project as detailed in the Getting Started tutorial, and from there I copied the code from the Piston image example. This was just a nice starting point to ensure that everything was working and that the Rust logo would appear in the designated window.

From there, I began working through the Piston-Mov-Square project and the Getting Started tutorials, religiously commenting every line of the code with what it does. This is just something I picked up in college, and it is a good way to puzzle out foreign code. Even if a comment turns out to be wrong (as happened in many cases for me), it is at least a step toward manually engaging with the code.

I played around for a while, and after I felt confident in the code that I had, I began abstracting it into various data objects and getting down to work. Hopefully my puzzling will help someone else understand this faster than I did.

An Explanation of the Code

Loading Crates

#![feature(globs)]  // Enables glob imports like `use piston::graphics::*;`

// Load external crates provided via cargo.
extern crate graphics;
extern crate piston;
extern crate sdl2_game_window;
extern crate opengl_graphics;

// Texture is used for images; Gl for accessing OpenGL
use opengl_graphics::{
    Gl,
    Texture,
};

// For creating a window
use sdl2_game_window::WindowSDL2;

// Stuff from piston.
use piston::{
    EventIterator,                  // Used for the game loop
    EventSettings,                  // Struct used for settings and updates
    WindowSettings,                 // Struct defines window config
    Render,                         // Render event
};

use piston::graphics::*;
use piston::shader_version::opengl;

We begin by loading the various libraries provided to us by the Piston developers, which we will use to get our game window to appear on the screen. I have yet to figure out what the #![feature(globs)] attribute actually does, and if someone knows, I would love to find out, since removing it causes everything to break. The rest of the code just gives us access to various libraries that we will use later on. I have tried to comment those libraries as best I could, since it wasn’t entirely clear what does what.

Config and Main Entry Point

// Config options for the game.
struct GameConfig {
    title: String,
    window_height: u32,
    window_width: u32,
    updates_per_second: u64,
    max_frames_per_second: u64,
    tile_size: uint,
}

// Entry point for our game.
fn main() {

    let config = GameConfig {
            title: "Centipede-RS".to_string(),
            window_height: 480,
            window_width: 800,
            updates_per_second: 120,
            max_frames_per_second: 60,
            tile_size: 32,
    };

    // Create and run new game.
    let mut game = Game::new( config );
    game.run();
}

If there is one thing that I know, it’s to confine magic numbers. Let them sprout wherever you please, and code maintenance becomes a mess. Hence, I have taken the various constants for our game and packaged them up into a GameConfig struct. Right now this struct defines the attributes of our window: title, height, width, frames per second, and tile size. I imagine this structure will probably grow larger as we begin adding actors, players, and assets. We will deal with that when the time comes.

I have also created a Game struct (more on it later). The Game struct simply takes a GameConfig and returns an instance of itself. Calling run fires off our game loop, which loops infinitely or until we kill the process. In essence, the Game struct represents and handles the loop. We could leave this in main, but by turning it into a struct we have the option, further down the line, of moving it out into a module, which would leave our main file consisting only of loading Piston, setting the various config items, and calling run.

The Game Struct

// Represents the Game Loop
struct Game {
    config: GameConfig,
}

impl Game {
    // ...
}

I’ve seen this simply called App, but since we are making a game, I think it should be Game. The Game simply holds the game state and runs the game loop. Inside it, I have added several methods via impl: new, run, window, and render. new and run are our public methods, which we have already seen: one takes a GameConfig and returns a Game; the other starts the game loop. The remaining methods are just there to run the internals of the loop itself. Let’s walk through each method:

// Returns a new game struct
pub fn new( config: GameConfig ) -> Game {

    // Return a new Game
    Game {
        config: config,
    }
}

This one is rather simple. It is a public function (pub fn) named new, which we can access via Game::new(). It takes a GameConfig and returns a Game whose config property is config. I am sure I am mixing a lot of OOP language here, but after years of working in the realm of PHP, that’s just how I end up thinking.

// Run the game loop
pub fn run( &mut self ) {

    let mut window = self.window();        

Run is a little messier; it fires off our game loop. It takes a mutable reference to itself, which allows us to call it on an instance of Game, e.g. game.run(). The first thing it calls is the member function window():

// Returns a window.
fn window( &self ) -> WindowSDL2 {

    // Values for Window Creation
    let window_settings = WindowSettings {
            title: self.config.title.to_string(),
            size: [self.config.window_width, self.config.window_height],
            fullscreen: false,
            exit_on_esc: true,
            samples: 0,
    };

    // Create SDL Window
    WindowSDL2::new( opengl::OpenGL_3_2, window_settings )
}

This is not a public function; thus, when we turn Game into a module, it will not be accessible outside of the module file. We are using it essentially as a hidden or private method on Game. The window function is accessible from inside a Game object via self, e.g. self.window(). We really only need one window, so this method is only called once, at the start of the run method. window returns a WindowSDL2, the back end we loaded way up at the start for managing our windows. It takes a WindowSettings struct whose values we pull out of the GameConfig stored in our Game. Either way, it makes a new WindowSDL2 and passes it back to the run method. Now back to the second line of the run method:

// Get Gl
let ref mut gl = Gl::new( opengl::OpenGL_3_2 );

Now, this took me a while to figure out. The call to Gl::new() must come after the creation of the WindowSDL2. In an earlier version of this, I created Gl before the window. The code will compile fine if you create Gl first and then the window, but when you run it you will get a CreateShader error. I only solved this by stumbling upon an IRC log. Anyway, hold on to that gl variable, since we’ll be passing it around a lot.

// Create Settings for Game loop
let event_settings = EventSettings {
    updates_per_second: self.config.updates_per_second,
    max_frames_per_second: self.config.max_frames_per_second,
};

Rather boring. We need to create an EventSettings object to pass into our game loop.

// For each e in Event Iterator (whose range is 0 to infinity)
// e becomes a new Event by passing our window and event settings
for e in EventIterator::new(&mut window, &event_settings) {

    // If e is Render(args), render; otherwise do nothing.
    match e {
        Render(args) => self.render( gl ),
        _ => {},
    }
}

Here is the magic: the game loop. I really like how this works in Rust. Since iterators can run from 0 to infinity, we take advantage of that. The EventIterator takes the window and event_settings variables we set up earlier and returns an event, which is put into e. We then match on e to see what was returned. Right now there are only two arms: a call to render the screen, and everything else. Looking at some of the example code, I see that we can catch all different kinds of events (user input, calls to update the game state, etc.), but for now we are just concerned with rendering the screen. So when we get a render event (Render(args)), we call our private method render via self.render and pass in our gl variable (I said we would be passing it around a lot).


// Render the game state to the screen.
fn render( &mut self, gl: &mut Gl ) { 
    let width = self.config.window_width;
    let height = self.config.window_height;

    // Get number of columns and rows.
    let tile_size = self.config.tile_size;        
    let num_cols = width as int / tile_size as int;
    let num_rows = height as int / tile_size as int;  

Render simply takes a mutable reference to Gl and paints to our screen. The first two lines just get the window_height and window_width out of our config, since we will be using them a lot in this method. Since this is going to be a tiled game, we need to know how many columns and rows of tiles we will be drawing, so I calculate that here by dividing the window’s width and height by the tile_size.

// Creates viewport at 0,0 with width and height of window.
gl.viewport(0, 0, width as i32, height as i32);

// graphics::context a new drawing context (think html5)
let c = Context::abs(width as f64, height as f64);

The next two lines in our render call do two important things. First, we set our viewport to start at the coordinates 0,0 and to extend to the width and height of our window. Second, we get a Context, which I like to think of as our virtual pen for drawing on our canvas. In fact, the first thing we do is fill the entire canvas with white:

c.rgb(1.0, 1.0, 1.0).draw(gl);

This takes an RGB (red, green, blue) value, sets each channel to 100% (white), and then draws this to our window by calling draw and passing in our old friend gl.

Now let’s have some fun. Just to show that we are indeed drawing on the window, let’s fill it with 32×32 pixel tiles, each row slightly redder than the last. The effect should look like this:

Screenshot of Centipede-RS at Dev-2

We begin by setting our starting red value:

let mut red = 0.01;

This needs to be mutable since we will be adding to it with each iteration of our rows.

Second, we loop through each row and each column drawing a red square the size of our tiles:

// Fill screen with red one 32x32 tile at a time.
for row in range(0i, num_rows ) {

    red = red + 0.02;
    let row_shift: f64 = row as f64 * tile_size as f64;

    for col in range(0i, num_cols ) {

        let col_shift: f64 = col as f64 * tile_size as f64;

        // Draw a tile_size square at (col_shift, row_shift)
        // filled with the current shade of red.
        c.rect(
            col_shift,
            row_shift,
            tile_size as f64,
            tile_size as f64
        ).rgb( red, 0.0, 0.0 ).draw(gl);
    }
}


What does this do? First, we loop through our rows from zero to num_rows (we calculated the number of rows earlier). On each row we increase our redness slightly; this should make each row redder than the last, with the first row being fairly dark. Next we calculate row_shift, which is simply the row we are on multiplied by the size of our tiles. This tells the context to move down 32 pixels when it gets to row 2, down 64 pixels when it gets to row 3, and so forth. The inner loop does the same for our columns: we loop through each column and calculate col_shift, how far to shift right for that column. If I recall correctly, this is the most efficient way to loop, since the screen paints outward from the upper-left corner. Finally, we draw our square. The context (c) knows how to draw squares, so we pass it the coordinates of the upper-left corner of our square (col_shift, row_shift) and the width and height of our square as floats (tile_size), then instruct the context to fill the square by calling rgb( red, 0.0, 0.0 ). Note that we passed in our red variable, so the redness of the tiles adjusts as red does. Last, we draw the square by calling draw, once again passing in gl.

Call cargo build and cargo run and enjoy!

September 26, 2014

Centipede-RS Dev Log #1

Filed under: Journal, Software Development — Joseph @ 7:23 pm


A rather rambling design document for my ideas for a Centipede clone that I’m releasing under the MIT license. Following all my reading in Rust, it seemed like a good idea to have some kind of project to complete. After scrounging about for ideas, I came up with the idea of doing an open-source Centipede clone using Piston. This would be good practice for trying a Rust Ludum Dare next April.

The following is a more or less rambling stream-of-consciousness design doc for what I’m about to do. I’ll probably follow this up with a series of other entries about the steps and a breakdown of the code as I go.


A Centipede clone done in Rust using Piston with perhaps some additional flavor.

The core idea of the game is to have a gridded window of size X and Y with a centipede that begins with one segment and grows as the game progresses. The centipede moves continuously in the last cardinal direction specified by the player. As it moves, it encounters various items randomly populated on the screen; upon contact, some effect occurs, such as adding an additional segment. If the centipede comes into contact with itself (such as looping back around on its own tail), the game ends or some failure condition occurs.

Objects in the Game

The Game

Well of course it’s an object unto itself. The game represents the game loop.

The Board

The board is 800×480 and divided into 32 pixel squares. At start of the game and at a fixed interval actors are randomly assigned squares on the board.


The Centipede

The centipede has the following characteristics:

  • A collection of segments
  • Each segment has a position and a sprite
  • Each segment has a direction (each moves in the direction of the segment before it, except the head segment, which moves in the last direction input by the player)
  • If a segment intercepts another segment, it destroys it; the severed segments then become bombs
  • A count of mushrooms eaten (used as a score)


The Actors

Actors are an indeterminate number of items placed on the board that the centipede interacts with when it comes into contact with them. The actors need to be able to expand to include new actors with new effects. Each actor has:

  • A sprite
  • A board position
  • An effect

Right now we have two actors: mushrooms and bombs. Mushrooms are placed randomly on the board at a fixed interval. Bombs are segments that have separated from the centipede. Each actor has an effect: mushrooms cause a new segment to be added to the centipede after X mushrooms have been consumed; bombs cause the game to immediately end.
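The segment-follows-segment movement rule described above can be sketched in a few lines. This is a hypothetical sketch in modern Rust; the type and field names are my own, not from the actual game:

```rust
// Sketch of the centipede movement rule (hypothetical names).
#[derive(Clone, Copy, PartialEq, Debug)]
enum Direction { Up, Down, Left, Right }

#[derive(Clone, Copy, Debug)]
struct Segment { x: i32, y: i32, dir: Direction }

struct Centipede { segments: Vec<Segment> }

impl Centipede {
    // Advance one tile: each segment inherits the direction of the
    // segment before it; the head takes the player's last input.
    fn step(&mut self, head_dir: Direction) {
        for i in (1..self.segments.len()).rev() {
            self.segments[i].dir = self.segments[i - 1].dir;
        }
        if let Some(head) = self.segments.first_mut() {
            head.dir = head_dir;
        }
        for seg in &mut self.segments {
            match seg.dir {
                Direction::Up => seg.y -= 1,
                Direction::Down => seg.y += 1,
                Direction::Left => seg.x -= 1,
                Direction::Right => seg.x += 1,
            }
        }
    }

    // Failure condition: the head occupies the same tile as its body.
    fn bit_itself(&self) -> bool {
        let head = self.segments[0];
        self.segments[1..].iter().any(|s| s.x == head.x && s.y == head.y)
    }
}

fn main() {
    let mut c = Centipede {
        segments: vec![
            Segment { x: 1, y: 0, dir: Direction::Right },
            Segment { x: 0, y: 0, dir: Direction::Right },
        ],
    };
    c.step(Direction::Down); // head turns down; the tail keeps moving right
    println!("{:?}", c.segments);
}
```

Because each body segment copies its predecessor’s direction before anyone moves, a turn by the player ripples down the body one tile per step, which is exactly the snaking motion described above.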

September 22, 2014

Resources for Learning Rust

Filed under: Journal,Software Development — Joseph @ 11:40 am


I just started delving into Rust last week with the release of the Rust Guide. In web development, I have really moved away from the lower-level languages of my schooling into the flighty realm of scripting languages. For the most part, I’ve been quite satisfied to leave behind the rigors of memory management and obtuse C linking errors for PHP, JavaScript, and Python.

Yet Rust is the first systems language that has really gotten me excited to sit down and try it out. Maybe I’ll even get back into the indie game scene (which I have been saying forever).

This post is going to be updated semi-regularly as just a continuing list of Rust resources worth looking into:

Learning the Language



September 16, 2014

TimeKeeper v0.1.1 Released

Filed under: Journal,Software Development — Joseph @ 11:37 am


TimeKeeper is a little utility that has become both a pet project for testing out new PHP and JavaScript tools and a genuinely useful tool that I use every day to keep track of my billable hours, projects, and tasks completed throughout the day. An example of TimeKeeper in action can be found at

This week, after a year of dormancy, I updated TimeKeeper to v0.1.1 with a major internal refactoring and an improvement in the interface’s responsiveness. Major improvements include:

  • The UI is now 100% responsive thanks to a rewrite of all templates to Bootstrap3
  • Libraries now install via bower and composer
  • Moved database configuration into a separate config.php file; this, along with the bower and composer updates, makes installing TimeKeeper much easier
  • 100% Documentation of all interfaces and files used by TimeKeeper

Future Plans

TimeKeeper’s direction is still rather vague. This is a useful tool for a single user to keep track of their own time. I am not yet sure if I want to keep it focused on being a planning tool for a single user or to expand TimeKeeper into a team-based tool.

The single biggest issue with TimeKeeper is that it does not provide a password-protected user log in, which means that it cannot be public-facing, or at least ends up relying on Apache for user authentication.

v0.2.0 RoadMap

For v0.2.0, which will be released “whenever,” I plan on adding the following features to TimeKeeper:

  • Single-User Password Log In (so the site can be public-facing)
  • A Reports table that generates a variety of charts analyzing the filtered time frame, including: a breakdown of time spent per project or billable hours; a daily breakdown showing hours worked and whether they went over or under 40 hours; and perhaps a monthly report as well
  • 100% Test coverage

May 5, 2014

The Glories of Text Files: On Using Vim for Code and Prose

Filed under: Software Development — Tags: — Joseph @ 9:01 pm

Flattr this!

Where to begin? This post is a kind of smörgåsbord of random thoughts and musings regarding editing and creating documents. It all began when I started contemplating learning LaTeX, which led to a good deal of time spent thinking about what a document is, and from there to extrapolating many web-development best practices into a wider context. Namely, that a web page is merely a marked-up document and that the principle of separating style from content ought to be applied to our document processing in general.

I think that Allin Cottrell says it best: Word Processors are Stupid and Inefficient. This is something that anyone who spends a good deal of time editing text begins to realize. I recall long hours in college editing works cited lists to carefully format them in their specified manner. Even more, I recall hours writing long-form Dungeons and Dragons adventures to submit to Dungeon Magazine and all the pedantic formatting that required.

XKCD 1360

Not surprisingly, early on in my computing I turned to various forms of markup for my writing — HTML, simple text files, anything at all to get away from the mess that was the word processor. It seems that I was on to something, even though I was unaware that the problem of separating content from styling (or more properly: typesetting) had long been a solved one.

If I were to paraphrase Cottrell’s points about the disadvantages of word processors and the advantages of typesetting, they would be:

  • Text editing allows us to focus on the content and leave styling for later (which is often a solved problem if your content is going on a website or being submitted to a publication)
  • With concerns separated, we can use software like Pandoc to export our text file into LaTeX, PDF, Doc files, HTML, or whatever format we want in whatever style pleases us, without needing to go back and edit the content itself.
  • Text is pretty much ubiquitous: it works on nearly every computer and is resilient against file corruption.

Since Cottrell wrote his document we have also seen an upsurge in easy-to-use, reader-friendly markup languages like MarkDown and reStructuredText, which allow us to create files that are readable as plain text and exportable to formats that can be compiled into a beautiful print document via LaTeX. In fact, this entire blog is written in MarkDown, and as of late I’ve turned to writing my articles as separate text files in VIM and uploading them to WordPress after the fact.


Enter VIM, my text editor of choice. Sublime seems to be getting a lot of traction amongst my fellow developers, but as far as I know Sublime still lacks terminal support — so I stick it out with VIM. That said, I really only started to master VIM about a year ago. Before then, my interaction with VIM was limited to random encounters changing configuration files on production servers. At the time, I only really learned the bare minimum to get by — how to open a file, get into insert mode, and save.

A year ago, I decided I really needed to try to master VIM. So I sat down and did the various tutorials. Made cheat sheets. I got decent at it, but not perfect. Right now, I’m refreshing myself, and I have set a goal: set aside NetBeans for my next project and do it all in VIM, as well as officially tossing the word processor and writing my prose in VIM too.

For those who want to follow along, I’ve created a public git repository with my VIM configuration.

VIM for Code


If I plan on developing an entire website with just VIM, then I really need to get VIM tweaked out to do exactly what I want for development. Now, I read a lot of tutorials, but I found Mir Nazim’s “List of VIM Plugins I Use with Mini Tutorials” to be a very good start.

I think the takeaways from Nazim’s article are:

  • Install Pathogen. This is pretty much the go-to package manager for VIM plugins.
  • Put your ~/.vim directory into a git repository. Move your ~/.vimrc into your ~/.vim directory and then create a link to it. Get this set up on all the machines you work on and then you can easily sync any change to your configuration across all of your platforms.
  • Use git submodules to manage all of your VIM plugins.

The Plugins

For myself, I use the following plugins in my VIM install currently:

  • closetag Automatically closes open HTML tags
  • delimitmate Automatically closes quotes, parentheses, and brackets
  • fugitive For Git inside VIM (haven’t made much use of this one so far)
  • nerdtree The go-to for in-VIM directory navigation and opening files
  • pyflakes For Python syntax checking
  • supertab For word completion
  • syntastic For advanced syntax highlighting and error checking of code

I’ll leave you in Nazim’s exellent hands for how to install and configure these.

The .vimrc File

A few notes on my choices of settings in the .vimrc file:

set encoding=utf-8
set number
set ruler
set autoindent

This set of settings sets our internal character encoding to utf-8 and turns on line numbers and the ruler by default. Also, autoindent, because it’s cool.

set tabstop=4
set shiftwidth=4
set expandtab

These three lines define our tabs as being four spaces and set VIM to automatically expand any tab characters into four spaces.

nnoremap <C-t> :tabnew<CR>
map <C-n> :NERDTree<cr>

This combination creates two new key bindings. First, we can now hit Ctrl+t to open a new tab in VIM. The second allows us to hit Ctrl+n to pop open NERDTree so we can navigate around the file system and select files to open. A quick note: in NERDTree pressing Shift+t opens a file in a new tab. An extremely useful shortcut to know.

syntax on
filetype on
filetype plugin indent on
let g:syntastic_check_on_open=1
let g:syntastic_enable_signs=1
let g:syntastic_mode_map = { 'mode': 'active',
  \ 'active_filetypes': ['python', 'php'],
  \ 'passive_filetypes': ['html'] }
let g:syntastic_python_checkers = ['pyflakes']
let g:syntastic_python_flake8_args = '--ignore="E501,E302,E261,E701,E241"'
let g:syntastic_php_checkers=['php','phpcs','phpmd']

Supposedly all of this enables syntax checking and highlighting for Python and PHP. Python seems to work quite well. PHP, unfortunately, requires you to write out the file to see the errors.

Lastly, we want to make word-search a little looser so by default we adjust some of the search parameters:

set ignorecase
set smartcase
set gdefault
set incsearch
set hlsearch

I will save the WordProcessorMode and CodeMode commands for later; for now let’s skip to the last three lines:

if filereadable(".vim.custom")
  so .vim.custom
endif

These three lines set up VIM to look for a .vim.custom file in the directory from which it is running and essentially append it to the end of our .vimrc. This allows us to create custom VIM configurations on a project-by-project basis.
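As a hypothetical example, a project that uses two-space indentation could carry a .vim.custom like this (the settings and mapping are illustrative):

```vim
" Example .vim.custom: project-specific overrides picked up by the
" filereadable() check in our .vimrc
set tabstop=2
set shiftwidth=2
" Handy project shortcut: build with F5
nnoremap <F5> :!make<CR>
```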

VIM for Prose

Vim in Prose Mode

I began this talk with a discussion on why we should use a text editor for editing our prose. VIM works extremely well for writing code. I am not yet entirely sold on it being the editor for prose, although I do think that any prose-text editor had better come with VIM bindings to be worth its salt.

Right now, I am using VIM to write this and will probably be using VIM to work on a lot of long-length prose. This gives us a number of great advantages:

  • Files are small
  • Files avoid corruption. Imagine this, if one byte of this file gets corrupted what happens? I have a misspelled word. If this happened in a binary file who knows if it could be recovered.
  • I can use my programming skills to do such things as incorporate tables via comma-separated-files, images, or break this out into separate files and compile them into a larger document.
  • I can write it using whatever mark up language I want (in this case MarkDown) and then use a converter like Pandoc to export into nearly any mark up language or file format.
  • I can take advantage of all of VIM’s keyboard functions to keep my hands on the keyboard and my mind in the flow of putting words on paper.

So what have I done to get VIM working for prose? I dug through a lot of tutorials and even used Vimroom for a while. At first, I loved Vimroom, but over the course of a week the bugs, the poor user interface, and the abandonware feel of Vimroom led me to abandon it.

There’s a number of bugs that simply annoyed me. For example, the some color scheme throws all kinds of errors when toggling Vimroom, and quiting out of Vimroom without toggling it off first requires repeated closing empty buffers to get back to the terminal. There also appears to break the drop-downs in SuperTab causing them to appear but only to allow you to select the first item in the drop down.

So after a week of Vimroom, I set out to roll my own solution. The solution was to add two commands to Vim — :Code and :Prose. These toggle between the settings I want when writing code and the settings I want for prose.

func! WordProcessorMode()
  set formatoptions=aw2tq
  set laststatus=0
  set foldcolumn=12
  set nonumber
  highlight! link FoldColumn Normal
  setlocal spell spelllang=en_us
  nnoremap \s eas<C-X><C-S>
endfu
com! Prose call WordProcessorMode()

This snippet creates a WordProcessorMode function and then, on the last line, attaches it to a command, :Prose. Let’s take a look at each line in turn.

set formatoptions turns on a number of important features. With a we set our text to automatically wrap when it reaches our textwidth value, in this case 80 characters. Next, w defines our paragraphs as being separated by a blank line. t sets our text to be automatically formatted to textwidth, and q allows us to use the gq command to reformat selected text.
Note: you can use gggqG to reformat the entirety of a document.

The foldcolumn and highlight lines set a 12-column margin on the left side of our text and set the color of that column to match our background.

With spell on, misspelled words appear highlighted; we can jump to the previous and next misspelling via [s and ]s respectively. Once our cursor is on a misspelled word, hitting z= brings up our correction options and zg adds the word to our personal dictionary. One addition makes correcting words much easier:

nnoremap \s eas<C-X><C-S>

This displays the spelling correction options in an in-place drop-down!

Before we forget, we need a function to turn all this back off again if we want to jump back into code mode:

func! CodeMode()
  set formatoptions=cql
  set number
  set ruler
  set laststatus=1
  set foldcolumn=0
  setlocal nospell
endfu
com! Code call CodeMode()
call CodeMode()

This function resets our environment back into code mode, and of course we call the function on start up as well so we always begin VIM in code mode.

Last: if you, like me, plan on using MarkDown as your prose markup language of choice, grab the vim-markdown plugin, which gives excellent highlighting of MarkDown syntax.

Vim Color Scheme: Solarized

There are a bunch of color schemes available in VIM via the colorscheme command, but honestly nothing beats the simple, thought-out beauty of the Solarized color scheme.

The problem is getting it to work in the console. You might notice that my repository does not include the popular Solarized plugin. The reason? In terminal mode the Solarized color scheme breaks horribly.

It took a while for me to discover the solution to this problem: change the terminal. Granted, this solution requires you to have a desire to have the Solarize color scheme throughout your terminal experience.

Sigurd Gartmann has a nice repository on GitHub that, once installed, allows toggling the terminal between the dark and light modes of the Solarized color scheme.

So there you go: a complete walkthrough of using VIM for both development (in this case web development) and prose writing. Enjoy.

April 22, 2014

Wanted: An Unobtrusive Javascript Framework

Filed under: Software Development — Tags: , , , — Joseph @ 9:42 pm

Flattr this!

I decided to spend the last couple of weeks introducing myself to some of the big MVC Javascript frameworks that have gotten so much traction over the last couple of years. Sadly, I have found the field littered with frameworks that happily violate the principles of unobtrusive Javascript, and I am left wondering: is there any solid MVC Javascript framework that is clean and unobtrusive, will I need to keep rolling my own, or am I just a Luddite?

Unobtrusive Javascript

Now first, I must admit that I feel as though I am a technological Luddite when it comes to the rise of Javascript. When I started making websites the standard advice was to keep as much of the document generation on the server-side as possible and to practice what is called “unobtrusive” Javascript.

The idea of unobtrusive Javascript has been a paramount principle of good front-end design. Namely, that you clearly separate your concerns and avoid reliance on client-side scripts. HTML ought to be semantically distinct from style or behavior, and we achieve this by keeping our markup in one file, our style sheets in another, and our Javascript in a third file. We do not inline our styles or our Javascript, and we keep them distinct so that even if the style sheet or Javascript never loads, the unstyled, un-scripted document is still in a usable state.

The first concept, simply keeping things separated, decouples our code from reliance on any one element. We can change the markup, the style, or the behavior of our application without necessarily impacting the other two.

The latter idea is a concept referred to as failing gracefully. Namely, we put fallbacks into our application such that if the Javascript does not work, the user can still make use of the web application. There are a lot of ways to do this:

  • Have an ajax form submit normally if the browser does not support ajax
  • Add form submit buttons that are hidden using Javascript on load.
  • Make sure client-side generated content has some kind of fall-back view that is generated server-side

The list goes on and on, but you begin to get the idea. Vasilis van Gemert has opened a great discussion about arguments against building Javascript based documents and his comments section is ripe with the reasons that unobtrusive Javascript is still very much relevant to the day-to-day development of websites.
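To make the first two list items concrete, here is a minimal sketch (the URL and id are hypothetical): the form submits normally when Javascript is absent, and the script, when it does load, upgrades the submission to ajax while the action attribute remains the fallback.

```html
<!-- Works without Javascript: the action attribute is the fallback. -->
<form id="contact" action="/contact" method="post">
  <input type="text" name="email">
  <button type="submit">Send</button>
</form>
<script>
  // Enhancement layer: only runs when Javascript is available.
  document.getElementById('contact').addEventListener('submit', function (e) {
    e.preventDefault();
    // ...submit via XMLHttpRequest here; if this script never loads,
    // the form still posts normally.
  });
</script>
```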

Obtrusive Javascript is where you get page behaviors and views that are only accessible if the client has Javascript support. Such websites are completely unusable without their supporting Javascript files. We can see this on websites that:

  • Only allow a form to be submitted via a Javascript call
  • Have links whose destinations are dynamically generated with Javascript
  • Create views by generating and appending DOM elements client-side rather than server-side

Now, I grant that unobtrusive Javascript can be hard. Sometimes there just isn’t a suitable fallback. Sometimes you are running late on a project, and the fact that it runs fine on 99% of browsers means it’s time to just shove it out the door and be on your way. However, I do believe it is a good idea to keep the principles of separating concerns and failing gracefully in mind whenever adding client-side behaviors to an application.

State of Affairs for Javascript MVC

I will address in a future article my own solutions for structuring a Javascript application, as well as the challenge of coming up with a solid framework for UX and DOM manipulation that neither turns into spaghetti code nor re-invents the solution with each website. Yet it is typically a good idea to go with a community framework in a team environment, since it offers a familiar structure across projects and programmers. For this reason, I embarked on working my way through some of the more popular Javascript MVC frameworks to see what they offer and to decide which one, if any, offers an unobtrusive solution. My concern is that on a cursory look, both AngularJS and EmberJS seem to scatter Javascript snippets throughout the document, and the latter invents a whole new template language that it injects into a script tag. Oh dear.

The only Javascript framework that I have come upon that makes any attempt at keeping any kind of unobtrusive fallback seems to be Knockout.js. That said, it is not the sexiest of new frameworks out there.




<p>First name: <input data-bind="value: firstName" /></p>
<p>Last name: <input data-bind="value: lastName" /></p>
<p>Full name: <strong data-bind="text: fullName"></strong></p>


// This is a simple *viewmodel*
function AppViewModel() {
  this.firstName = ko.observable("Bert");
  this.lastName = ko.observable("Bertington");
  this.fullName = ko.computed( function() {
    return this.firstName() + " " + this.lastName();
  }, this);
  this.capitalizeLastName = function() {
    var currentVal = this.lastName();
    this.lastName( currentVal.toUpperCase() );
  };
}

// Activates knockout.js
ko.applyBindings(new AppViewModel());

Knockout works by using data attributes to bind to DOM elements. This means that if the Javascript happens to fail, we are still left with a typical document with typical document behaviors. Take the example above: if the data-bind attributes are ignored, we still get a form with a first and last name. Indeed, we could even fill that form in server-side by assigning value="Bert" and value="Bertington" to the first-name and last-name inputs.
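To see why the viewmodel above works, it helps to sketch the dependency-tracking idea behind ko.observable and ko.computed. This is a toy stand-in, not Knockout’s actual implementation (real ko.computed detects its dependencies automatically; here they are passed in explicitly):

```javascript
// Toy stand-in for ko.observable: a function that reads when called with
// no arguments, writes when called with one, and notifies subscribers.
function observable(value) {
  var subscribers = [];
  var accessor = function (newValue) {
    if (arguments.length === 0) return value;  // read
    value = newValue;                          // write
    subscribers.forEach(function (fn) { fn(); });
  };
  accessor.subscribe = function (fn) { subscribers.push(fn); };
  return accessor;
}

// Toy stand-in for ko.computed: recompute whenever a dependency changes.
function computed(fn, deps) {
  var cached = fn();
  deps.forEach(function (dep) {
    dep.subscribe(function () { cached = fn(); });
  });
  return function () { return cached; };
}
```

With these stand-ins, a fullName built from firstName and lastName recomputes whenever either changes, which is exactly the behavior the data-bind attributes rely on.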

On top of this, there is something about Knockout that just makes sense. It isn’t as flashy as Angular or Ember. It doesn’t incorporate any new trendy templating systems or massive API libraries, nor does it require us to create half a dozen separate Javascript files for controllers, models, and parts of views.

February 3, 2014

Fiddling with HTML5’s Canvas

Filed under: Software Development — Tags: , , — Joseph @ 8:33 pm

Flattr this!

I had my first real exposure to the HTML5 Canvas element this week. It was a fairly fun transport back to Intro to Computer Graphics and my school days working in C.

Canvas provides a very simple bitmap surface for drawing, but it does so at the expense of losing out on a lot of the built-in DOM. I suppose there is a good reason for not building an interface that treats drawings created with contexts as interactive objects, but sadly this leaves a lot of that interactivity (has a user clicked on a polygon in the canvas? is the user hovering over a polygon?) up to us to implement in javascript.

So let’s dive in and see what canvas is capable of doing!

This complete tutorial is available as a fiddle on jsFiddle. Check it out.

Getting Started

Let’s begin with the absolute basics. First, we need the element itself which is simply a “canvas” element with a specified id that we’ll later use to interact with it. By putting some textual content inside the canvas element we give some fallback for older browsers that might not offer canvas support.

<canvas id="myCanvas">Your browser does not support canvas.</canvas>

Now we need to interface with the element itself. This is done using javascript:

var canvas = document.getElementById("myCanvas");
var context = canvas.getContext("2d");

We are doing two things here. First, we are getting the canvas element from the DOM, second we are getting a context from that element. In this case that context is the “2d” context which defines a simple drawing API that we can use to draw on our canvas.

Drawing a Polygon

The “2d” context API defines a number of methods for interacting with the canvas element. Let’s look at how we can use this to draw a blue triangle on our canvas:

context.moveTo(25, 25);
context.lineTo(75, 100);
context.lineTo(125, 25);
context.fillStyle = "#0000ff"; // Alternative: "rgba(0,0,255,1)"
context.fill();

Recall that pixels on a computer screen are mapped as though the screen were the fourth quadrant of a plane: x values grow larger as pixels are placed further to the right, and y values grow larger as they move toward the bottom of the screen. This puts 0,0 at the upper-left corner of your screen, with 25,100 located twenty-five pixels to the right and one hundred pixels from the top.

The first three lines of code can be thought of as moving an invisible (or very light) pencil around the canvas. The first moves our pencil to position 25,25, which starts the drawing near the upper-left corner of the canvas. The second draws a line down 75 pixels and over an additional 50 pixels. The third returns to 25 pixels from the top, at 125 pixels from the left-hand side of the canvas.

The fourth and fifth lines define the color to fill our polygon with and actually do the filling. In this case we passed a hex value for blue, but we could alternatively have used an rgba (red, green, blue, alpha) value if we wanted transparency.

Adding Interactivity

One thing you will note about our blue triangle: we cannot tie DOM events to it. The context merely draws on the canvas; the drawings themselves do not exist in the DOM. The closest we can do is capture events on the canvas itself (onClick, hover, etc.). It is then up to us to decide whether those events were just interacting with the canvas or with something drawn on it.

First, we must recognize that each position we move or draw the context to is a vertex. To decide whether a click landed inside the polygon those vertices form, we need a point-in-polygon test.

PNPOLY is our solution and, to be honest, I did not come up with this one but found the answer on Stack Overflow:

function pnpoly( nvert, vertx, verty, testx, testy ) {
  var i, j, c = false;
  for( i = 0, j = nvert-1; i < nvert; j = i++ ) {
    if( ( ( verty[i] > testy ) != ( verty[j] > testy ) ) &&
      ( testx < ( vertx[j] - vertx[i] ) * ( testy - verty[i] ) /
      ( verty[j] - verty[i] ) + vertx[i] ) ) {
      c = !c;
    }
  }
  return c;
}

$('#myCanvas').click( function( e ) {
  var x = e.clientX;
  var y = e.clientY;
  alert( pnpoly( 3, [25,75,125], [25,100,25], x, y ) + ' x:' + x + ' y:' + y );
});

PNPOLY takes five arguments: the number of vertices (corners) on our polygon, an array of the x values, an array of the y values, and the x/y coordinates where the user clicked on the canvas. If we add this to our code and run it, we should see an alert saying either true or false as to whether we clicked inside or outside of our triangle.
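Outside the browser the test is easy to sanity-check. Restating PNPOLY compactly so the snippet runs on its own: the triangle’s centroid (75,50) falls inside, while a point near the corner of the canvas does not.

```javascript
// Point-in-polygon test (PNPOLY), restated so this snippet is standalone.
function pnpoly(nvert, vertx, verty, testx, testy) {
  var i, j, c = false;
  for (i = 0, j = nvert - 1; i < nvert; j = i++) {
    // Toggle c each time a ray from the test point crosses an edge.
    if (((verty[i] > testy) != (verty[j] > testy)) &&
        (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) /
                 (verty[j] - verty[i]) + vertx[i])) {
      c = !c;
    }
  }
  return c;
}

// The triangle from our drawing example: (25,25), (75,100), (125,25).
console.log(pnpoly(3, [25, 75, 125], [25, 100, 25], 75, 50)); // true
console.log(pnpoly(3, [25, 75, 125], [25, 100, 25], 5, 5));   // false
```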

Accounting for Global (Window) and Local Coordinate Systems

It is not easy to see on the jsFiddle website, but we can run into issues mapping between the local and global coordinate systems. e.clientX and e.clientY map to the document coordinate system, not the canvas itself. We may, in some instances, find ourselves needing to map between the local (canvas) coordinate system, which begins with 0,0 at the upper-left corner of the canvas element, and the document coordinate system, which begins with 0,0 at the upper-left-most corner of the page.

This can occur when our canvas is absolutely positioned or positioned inside a fixed element. In these cases we must include the offset of the canvas from the document coordinate system to find where the click is actually occurring:

$('#myCanvas').click( function( e ) {
  var offset = $(this).offset();
  var x = e.clientX - offset.left;
  var y = e.clientY -;
  alert( pnpoly( 3, [25,75,125], [25,100,25], x, y ) + ' x:' + x + ' y:' + y );
});

Note the additions to the first three lines in our function. The first line retrieves the offset of our canvas from its global position. We then subtract that offset from e.clientX and e.clientY to get the coordinates of the click in the canvas’s coordinate system.

We might also need to account for scrolling in our offsets. If we have a canvas inside a fixed-position element, then we must also account for any scrolling that might have occurred. We do this via the scrollTop() and scrollLeft() jQuery functions:

$('#myCanvas').click( function( e ) {
  var offset = $(this).offset();
  var x = e.clientX - offset.left + $(window).scrollLeft();
  var y = e.clientY - + $(window).scrollTop();
  alert( pnpoly( 3, [25,75,125], [25,100,25], x, y ) + ' x:' + x + ' y:' + y );
});

In fact, we can safely include the offset(), scrollLeft(), and scrollTop() calls even when we are using neither absolutely nor fixed-positioned elements, since these values will simply be 0 for a statically positioned canvas.


Copyright 2011 - 2017 Joseph Hallenbeck Powered by WordPress