Archive for November, 2010

Book Review: Land Of Lisp

11/23/2010

Land of Lisp is a book that teaches programming in Common Lisp by having you write a series of games. It attempts to inspire the reader through fun example programs, and it is an overwhelming success.

This is one of the few programming books I have read where I found myself really wanting to finish off each sample so that I could play the resulting game. It does a great job of creating games that are fun and original enough to hold your attention.

Not only that, but the book goes into quite a bit of depth on each of the topics covered. Over the course of building these games the reader writes a web server, a DSL for generating SVG, and an interface for adventure-style games. Through these exercises many of the benefits of Common Lisp (and Lisps in general) are explored.

Does this have me wanting to rush out and use Lisp for my next project? No. But I will certainly consider it, and I had a lot of fun reading through this book and learning the language.

Book Review: Focus

11/19/2010

I recently read through the free release of Focus by Leo Babauta. It's short, about 125 pages, and reads very quickly. I enjoyed the read and definitely got some ideas from it about both why and how to focus better.

The book's main point seems to be that in this busy age of distraction, focusing on a single subject at a time can be very difficult. Despite this, focus is very important to productivity and to enjoying your life. By consciously acknowledging the importance of focus, we can take steps toward improving our own attention to the topic at hand.

Most of the contents of this book didn’t strike me as new information, but seeing it all presented in one place helped me to realize how well everything plays together. Avoiding distractions and interruptions has always been important to me, but for some reason I previously only associated these ideas with work. After reading Focus I realize that these same problems can hurt enjoyment of other aspects of your life. I also found his view on goal setting in the chapter “Letting go of goals” to be an interesting alternative to ideas I’ve heard in the past.

I only read the free version of this book because I was just curious, and the vast amount of material included with the premium version looks overwhelming. But assuming the premium version is of comparable quality to the free one, I think it would be worth the money. I am definitely going to be evaluating whether it is something I want to buy.

Book Review: Start Small, Stay Small

11/15/2010

I just finished reading Start Small, Stay Small: A Developer’s Guide to Launching a Startup by Rob Walling and I thoroughly enjoyed it. It struck me as The Four Hour Work Week with most of the fluff/exaggeration stripped out and presented from a developer’s perspective. I generally rate business books by how much they inspire me to start working, and by that rubric this one’s a winner.

Start Small, Stay Small does an excellent job of explaining what it takes to run a software business to a developer who may think that creating the product is all that needs to be done. The book's short length and its avoidance of unnecessary filler make the information (at least feel) much more actionable than advice I have read in other books.

Business books in the software world seem mostly focused on venture-capital-driven, make-it-huge companies, but this book presents an alternative plan, one that is more realistic for the average developer. It lays out the options of bootstrapping a company and then growing it larger, or of creating a small, self-sustaining business and going on to found another, along with reasons for choosing each. Even if your goal is to build a gigantic company, the information on marketing in this book could prove invaluable.

In short, I found this book to be exactly what it advertises: a blueprint for a developer to get a startup off the ground.

Scripting with Ruby

11/13/2010

Earlier I wrote about using Unix command-line tools to manage text when the job at hand calls for a quick fix instead of a program that you plan to keep around. When the script I'm writing is a longer one I will often reach for Ruby, but Ruby can also be quite useful for quick scripts. Specifically, the ruby executable provides several command-line flags that are helpful when writing these quick scripts.

-e

-e is the first flag we'll need for using ruby as our command-line Swiss Army knife. If you call ruby with -e, it will evaluate the string that follows it with the Ruby interpreter. Example:

ruby -e 'puts "Hello, world!"'

Got it? Good, now let's move on to more interesting options.

-n, -p, and -i

The -n flag causes ruby to loop over each line of the input. For example, if you want to uppercase every line in a file (writing the result to stdout) you could do the following:

ruby -n -e 'puts $_.upcase' < original-file.txt > upcased-file.txt
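Conceptually, -n wraps the code from -e in a loop that reads each line into $_, roughly equivalent to this bit of Ruby:

while gets
  puts $_.upcase
end

gets stores each line it reads in $_, which is why the one-liner above can refer to it directly.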

Printing something out is so common that ruby provides another flag that prints the value of $_ after each iteration. With -p the example becomes:

ruby -p -e '$_.upcase!' < original-file.txt > upcased-file.txt

Notice that we are now using the destructive version of upcase (namely upcase!) so that the string in $_ is modified in place before it is printed out. It turns out that taking a file, performing some operation on each line, printing the changed line, and then putting the result in a new file is so common that ruby gives us yet another flag for the occasion. We can shorten our simple example even further with -i:

ruby -p -i -e '$_.upcase!' file.txt

The -i flag tells ruby to edit the passed file in place. Rather than redirecting the file into ruby and the output back out, ruby opens the file itself and overwrites it with the modified lines. Obviously this isn't quite the same as the earlier examples, in that the original file is no longer kept around. If you don't want to lose the original (or you aren't confident that your script will work as expected) you can pass -i a backup extension and it will keep a copy of the original file.
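For example, using .bak as the backup extension:

ruby -p -i.bak -e '$_.upcase!' file.txt

This leaves the upcased contents in file.txt and a copy of the original in file.txt.bak.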

You'll notice this is similar to the -i flag of sed. I now find myself using ruby with -i whenever I might otherwise reach for sed, because sed's -i flag works differently on Linux than it does with the BSD tools. With ruby I don't have to worry as much about the cross-platform differences.
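For reference, the incompatibility looks roughly like this: GNU sed accepts an optional backup suffix attached directly to -i, while BSD sed expects the suffix as a separate argument (an empty string if you don't want a backup):

sed -i 's/foo/bar/' file.txt       # GNU sed
sed -i '' 's/foo/bar/' file.txt    # BSD sed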

Other Resources

Dave Thomas (of the Pragmatic Programmers) put together a list of handy Ruby one-liners. It is old but still quite useful:

http://www.fepus.net/ruby1line.txt

And, as always, the man page has a lot of great information.

Call For Help

11/12/2010

What is the best way to edit a wordpress.com blog within emacs? I tried to use weblogger from the ELPA but I had some trouble with accidentally overwriting my posts. I heard something about using org-mode to post to WordPress (this sounds good) but I was wondering if there are any other options, and which ones are commonly preferred.

Testing mobile sites with Cucumber

11/04/2010

Recently I was working on a site that had originally been built with Rails 2 and was later migrated to Rails 3. As part of the migration the Cucumber integration tests were switched from using Webrat to drive the UI to using Capybara. Unfortunately this broke a handful of tests for the mobile version of the UI.

Take One

The site decides whether or not to show the user the mobile version of the site based on the user agent. The problem is that Capybara won't let you set custom headers (such as the user agent) the way Webrat will. After a quick search of the internet I came across this site. In short, the blog post details a way to open up the current RackTest driver (the default driver that Cucumber uses) and add arbitrary headers to it.

I added the code from the blog post to my project and it did indeed fix my tests (with a little reorganizing). Still, something about it struck me as wrong. Even the blog post I got the technique from calls the class a "Hack." I wanted to find a cleaner way…

Introducing capybara-iphone

My "cleaner" solution was to write a new Capybara driver that pretends to be an iPhone and to have the mobile-specific tests run with it. It is a very simple project that extends the Capybara::Driver::RackTest code and adds a user agent header identifying the browser as an iPhone. Now all you have to do is set up the capybara-iphone gem in your project and tag the tests you want to run with a mobile browser with '@iphone'.
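For the curious, the usual way tagged scenarios get their own driver in Capybara's Cucumber support is a tagged Before hook along these lines (only a rough sketch; the :iphone driver name is an assumption on my part, and the gem may wire this up for you, so see the Readme for the real setup):

Before('@iphone') do
  Capybara.current_driver = :iphone  # assumed driver name; capybara-iphone's setup may differ
end

After do
  Capybara.use_default_driver
end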

The Future

One could imagine a version of capybara-iphone that adds support for handling iPhone-specific JavaScript calls and lets you do other things to better test your application's iPhone interface. At the moment I don't have plans for any of this, but the project sounds fun enough that with a little push I might start working on it.

More information on using capybara-iphone can be found in the README on GitHub.

Learning Unix

11/01/2010

Programmers work with text. Lots of text. So much text that we need specialized tools to help us manage and navigate this text. There are a handful of relatively simple unix commands that when strung together can greatly increase your efficiency dealing with these massive amounts of text. Let’s take a look at a few of them, shall we?

Note: I'll be looking at the POSIX versions of these tools. If you're running Linux, some of these commands might vary slightly. See the man pages for a more definitive reference.

wc

wc is a tool for counting characters, words, lines, or bytes. Most commonly I use it to count lines with the -l flag. For example, to count the number of lines in a text file:

wc -l some_file.txt

Or to count the number of files in a directory:

ls | wc -l

sort

sort can be used to sort and merge the lines of a text file. In the simplest case it sorts a list of items alphabetically, but it can also sort by a particular column. For example, to sort the files in a directory by size from largest to smallest:

ls -al | sort -k 5 -nr

The -k 5 tells sort to use the fifth column when each line is split on whitespace. The -nr tells it to do a numeric sort in reverse order.
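And the simplest case, sorting the lines of a file alphabetically, needs no flags at all:

sort my-file.txt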

uniq

uniq is used to find and filter duplicate lines in a text file. With the -c flag it will also count how many times each line appears. Mixing this with the sort command above gives us an easy way to find which line in a file appears most frequently:

sort my-file.txt | uniq -c | sort -nr

Note that we sort the file first so that uniq counts the total number of times each line appears, not just the number of times it appears in a row. Alternatively, adding wc lets us easily count the number of unique lines in a file:

sort my-file.txt | uniq | wc -l

cut

We can use cut to pull out only the parts of a file that we care about. The general model is that you specify a delimiter, cut splits each line on that delimiter, and you pick which fields you want from there. For example, to pull out just the first and fourth columns of a CSV you could use:

cut -d',' -f1,4 file.csv

We can also use cut to select a range of characters from each line by position with the -c flag. To take the first two characters of every line and count how many times each combination appears:

cut -c1-2 my-file.txt | sort | uniq -c

The point

The point of all this isn't that any one of these commands is super useful by itself, but that knowing them lets you throw together a quick script to extract data from some file. If I find myself needing a larger script that I will want to maintain, I will almost always reach for Ruby or a similar programming language, but being able to write these quick scripts without thinking about it too much can save a lot of time.
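As a quick illustration of how these pieces combine, here is one way you might count which client addresses show up most often in a space-delimited access log (access.log is just a stand-in for whatever file you are digging through):

cut -d' ' -f1 access.log | sort | uniq -c | sort -nr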
