All Posts in Dev Cave

July 9, 2013 - 1 comment.

FoxySync: How to Synchronize Your Website with FoxyCart

If you've ever done an e-commerce integration, then you know what a pain it can be. Traditionally you'd build a shopping cart, create a checkout workflow, and integrate with a third-party payment gateway. Ultimately you spend a lot of time writing and testing new code for an old task. I've done a few of these integrations, and the last time I did, I tried something new: FoxyCart.

I wanted to try FoxyCart because it would allow me to outsource the shopping cart, checkout, and payment gateway integration. As a result I could clean up my code base, reduce my maintenance costs, and set up for an easy payment gateway switch in the future. Making FoxyCart work with my Ruby on Rails app, however, was not a cinch. There were no Ruby gems to work with, and examples in Ruby were sparse. I knew I'd have to figure out a lot of the integration on my own, so I thought I'd make it easy for the next Rubyist and cut a gem out of the work. That gem is called FoxySync.

FoxySync encapsulates four FoxyCart integrations: cart validation, single sign on, XML data feed, and API communication. Using all four together fully synchronizes and secures all communication between your app and the FoxyCart service. Let's take a look at each.

Cart Validation

Since FoxyCart knows very little about your products, it depends on you to post any metadata—including price—when customers add items to a cart. As a default, the metadata is stored as plain text in the web page where the “Add to cart” button lives. This is risky because, if someone knows what they're doing, they could change the price of your product before it’s sent to FoxyCart. To prevent such tampering, FoxyCart offers HMAC product verification, or what I like to call cart validation. The feature works by validating a hash on each piece of metadata to ensure authenticity. FoxySync makes this easy by providing a helper method to generate the correct HTML form variables.

include FoxySync::CartValidation
cart_input_name 'code', 'mai', 'mai'
# results in <input type="hidden" name="code||5651608dde5a2abeb51fad7099fbd1a026690a7ddbd93a1a3167362e2f611b53" value="mai" />

Single Sign On

FoxyCart keeps an account for each user that checks out on your site, but with a good integration, those customers shouldn’t even know they’re using it. That being the case, it's weird to ask them to reauthenticate on the checkout page if they’re already logged into your site. FoxyCart's single sign on feature prevents this weirdness by asking your application to acknowledge authentication before the checkout page is displayed. FoxyCart makes a request to your site and your application redirects back to FoxyCart. FoxySync helps with this handshake by providing a helper method to generate the redirect URL.

include FoxySync::Sso
redirect_to sso_url(params, user)

XML Datafeed

FoxyCart's transaction datafeed feature ensures that your application is notified of sale details after each successful checkout. When enabled, FoxyCart will post to your application an encrypted XML document and expect a particular response. FoxySync helps with this feature by handling the XML decryption and providing a helper to generate the appropriate response.

include FoxySync::Datafeed
receipt = []
xml = datafeed_unwrap params
receipt << xml.customer_first_name
receipt << xml.customer_last_name
receipt << xml.receipt_url
# etc

API Communication

FoxyCart has a robust API that lets you manipulate and retrieve data about your store, customers, transactions, and subscriptions. FoxySync makes working with the API dead simple, so you can easily access this powerful feature.

api = FoxySync::Api.new # API class name assumed; check the gem's README
reply = api.customer_get :customer_email => ''
reply.customer_id # is the customer's FoxyCart id

FoxyCart is a great service for adding sophisticated e-commerce to your website without having to do a lot of the hard work. However, FoxyCart still needs to be integrated, and for Ruby on Rails apps, FoxySync makes that pretty easy.

July 2, 2013 - No Comments!

Russian Doll Caching in Rails

Russian dolls: not just a knickknack you bring home from Moscow or a short-lived Lifetime reality TV show. Russian doll caching is also a powerful technique that we used to make the new RogerEbert.com fly. It is the default in the new Rails 4, so it's best to start getting used to it.

Traditional page and action caching are powerful and speedy, but the control they offer is often coarse and unwieldy. Say you want to update some text in the shared site header. You now have to invalidate every single page on the site. All those pages will have to be fully rendered the next time someone visits them.

Expiring expire_fragment

If you’re using traditional fragment caching, you’ll find yourself face-to-face with the cache often. Any time you add a new fragment, you had better add an expire_fragment anywhere it might get updated. In a cache store like Redis or FileStore you can use regular expressions to delete a series of keys, but that process is very slow, since behind the scenes the store compares every single key against the regex. Memcached doesn’t even offer that, so you need to delete every single key individually.

Enter the obligatory Phil Karlton quote: “There are only two hard problems in Computer Science: cache invalidation and naming things.” In this new caching paradigm, we build on fragment caching, but flip cache expiration inside out. The beautiful secret to Russian doll caching is you never explicitly invalidate anything. Never again will you need to expire_fragment. Instead, we just use a brand new key.

We’ll get into the details of this soon, but like a proper Matryoshka doll, let’s start from the outside in.

An Example View

Here is an example from the new RogerEbert.com. I’m simplifying the code for the sake of brevity, but all the salient ideas will still be there. For reference, we are using Mongoid as our ORM on top of MongoHQ, and the Dalli memcached client over Amazon’s ElastiCache. If you’re using ActiveRecord, these techniques still apply, although the syntax might be slightly different.

The basic organization of the site is pieces of content organized within channels. A piece of content belongs to a primary channel.1

Here’s a channel listing taken from the site, and below it, simplified view code:

Ebert Channel Page
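The nested cache calls in that view look roughly like this; partial names, instance variables, and the channel's name field are assumptions for the sketch:

```erb
<% cache [:index, :page, @contents.current_page, @channel] do %>
  <h1><%= @channel.name %></h1>
  <% @contents.each do |content| %>
    <% cache [:listing, content] do %>
      <%= render :partial => 'contents/listing', :locals => { :content => content } %>
    <% end %>
  <% end %>
<% end %>
```

The outer block caches the whole channel page, and each inner block caches one listing. That nesting is what makes the doll metaphor work.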

What's the Key?

The cache view helper takes an array as its first argument. It concatenates all the pieces together, calling #cache_key for any objects that respond to that method. We’ll get to cache_key’s implementation soon, but for now, know that each model object has a unique cache key. A naïve implementation might be "#{self.class.model_name}/#{id}"2.

[:index, :page, @contents.current_page, @channel] would create a cache key like views/index/page/1/channels/123 and nested within that cache, [:listing, content] would turn into views/listing/contents/12345. An advantage of this is it allows us to easily share a cached fragment across pages. If the piece of content falls off to the next page and we have to re-render the main index, we can still use the other individual content fragments from cache.

This still leaves us with keys that must be invalidated. How do we account for that? What if #cache_key were smarter than our previous implementation? And what if every model instance had some persistent attribute that changed every time the instance was updated? It’s a good thing we’ve got updated_at handy, and the default implementation of #cache_key already takes advantage of this.

If you look at the source, you’ll see that the key is a combination of the model name, its ID, and its updated_at timestamp. So if you call #update_attributes or #save on a model, that updated_at field will get the new time and voila! #cache_key will be different the next time you ask for it.

So a normal ActiveRecord cache key might look like contents/1234-20130519024351. Mongoid will look a little crazier: contents/5194dd8d4206c510c2000001-20130528101518.
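The mechanics are easy to sketch in plain, framework-free Ruby; this is a simplified stand-in for the real Rails implementation, not a copy of it:

```ruby
# A simplified stand-in for Rails' cache_key: model name, id, and the
# updated_at timestamp rendered as YYYYMMDDHHMMSS.
class Content
  attr_reader :id, :updated_at

  def initialize(id)
    @id = id
    @updated_at = Time.utc(2013, 5, 19, 2, 43, 51)
  end

  def cache_key
    "contents/#{id}-#{updated_at.strftime('%Y%m%d%H%M%S')}"
  end

  def save
    @updated_at = Time.now.utc # touching updated_at yields a brand-new key
  end
end

content = Content.new(1234)
content.cache_key # => "contents/1234-20130519024351"
```

Save the record and the next call to #cache_key returns a different string, so the old fragment is simply never read again.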

Combine that with a context like :listing, and you have a unique cache key for each fragment that will be updated any time you update the object.

And thanks to the touch: true option we have on the belongs_to :primary_channel association, any time you make an update to the piece of content, it will also touch the channel and give it a new updated_at timestamp.
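In model terms, the setup looks something like this; class and field names are assumptions, and the same touch option exists on ActiveRecord's belongs_to:

```ruby
class Channel
  include Mongoid::Document
  field :name, type: String
  has_many :contents, inverse_of: :primary_channel
end

class Content
  include Mongoid::Document
  field :title, type: String
  # touch: true bumps the channel's updated_at whenever the content is
  # saved, which gives the channel a fresh cache_key too
  belongs_to :primary_channel, class_name: 'Channel', touch: true
end
```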

So, imagine that we have views/index/page/1/channels/1234-20130528101518 as a cache key. We’re still having to look up the channel from the database, but we don’t have to hit the database to find the @channel.contents association (it’s still just a Mongoid::Criteria or ActiveRecord::Relation at this point waiting to be lazy-loaded). If we update the title of one of the pieces of content, we get a new cache key for the content and for the channel. The next time someone visits the page, we do have to hit the database to get the contents' updated_at fields, but we only have to render 1 out of 10 of the :listing fragments (there are ten posts per page); all the rest are fetched from the cache.

What about the old cached fragment? Don’t we need to delete it? The short answer is “who cares” and “no.” If you’re using a store like Memcached and it needs more space, it will automatically purge any key/values that haven’t been accessed recently.

There are still things to be aware of, such as what happens when your code changes; how do you invalidate the cache then? That's where cache digests come in, and they're a subject for another post.

In the meantime, start nesting your fragments and speed up your site's performance!

1 A piece of content can belong to additional channels, but I’m leaving that out for simplicity. The primary channel is the important one, since it's what we use for determining the canonical URL of a piece of content.

2 In fact, this is the return value of one of the three cases in the default cache_key implementation.

June 27, 2013 - No Comments!

Selling in a Responsive Design

If there's one thing that's sure to drive you batty in a responsive layout, it's working with third-party ad services. Not everyone can afford their own ad sales team or has the resources to support their own ad network, and because of this, they turn to services such as Google DFP. Those services are easy to implement and enable an instant revenue stream. They also don't work well in fluid width layouts.

We ran into this wall the weekend before the launch of the new Roger Ebert website. We were implementing a new ad network that did not play nicely with our fluid width columns. Not having the time to fully implement and test an entire new layout for the site, we had to think quickly. Here's what we did...


Our ads were all 300 pixels wide and existed in the same column. This column was one of two columns in its parent container. On the parent container, we put a class of fixed-rail. Inside the container we added a class of pad to our main content and place to our ad rail. The HTML looked like this...

<section class="fixed-rail">
    <section class="main pad">
        ... Insert main content here ...
    </section>
    <aside class="place">
        ... Insert ad rail content here ...
    </aside>
</section>


Easy enough, right? The magic is the CSS. We're already using border-box box-sizing, so we set our pad column to width: 100%;. We then give it right padding of 330px (300 pixels for our ad rail plus the 30px universal gutter width set in our Sass files). We then position the place column absolutely in the upper right corner. Instant fixed-width right rail and fluid-width content column.

$gutter-width: 30px;

.fixed-rail {
    position: relative;
}

.pad {
    width: 100%;
    padding-right: 300px + $gutter-width;
}

.place {
    width: 300px;
    position: absolute;
    top: 0;
    right: 0;
}

We Got This...

Looks great, right? One problem. There were some cases where the ad rail was taller than the content column. Since it was positioned absolutely, the layout would collapse. Uh-oh. The solution? We inserted a quick bit of JS to calculate the height of the place column and added that as a minimum height on its sibling pad column.

// Run over each ad rail (the .place selector is assumed from the markup above)
$('.place').each(function () {
    var forceHeight = $(this).height() + 'px';
    var sibling = $(this).siblings('.pad');
    sibling.css({'min-height': forceHeight});
});

Honestly, I don't know how this solution will hold up over time, but it seems to have been doing just fine in practice so far. If you've got some thoughts on this, let's chat.

May 14, 2013 - No Comments!

Rails vs. The Client Side: XI to Eye

Two completely different ways have emerged for using Rails as the back-end to a rich client-side JavaScript application:

  • The 37Signals "Russian Doll" approach, used in Basecamp Next, where the server generally returns HTML to the client. This approach uses aggressive caching and a little bit of JavaScript glue to keep the application fast.
  • The "Rails API" approach, where the server generally returns JSON to the client, and a JavaScript MVC framework handles the actual display.

Which of these will work for you? For this week's XI to Eye, I've posted my RailsConf presentation on the topic.

Also, if you like this talk, my book Master Space and Time With JavaScript has much more detailed information on using JavaScript effectively, including a work in progress on Ember.js.

Watch the video, streamed live by Confreaks.

May 8, 2013 - 3 comments

Tales on Rails: The Three Little Devs

Squeel: 3 Little Pigs

You know how the story goes: Once upon a time, there were three little devs. Smart and determined, the three developers set up a new Rails app.

The first little dev, the youngest and littlest, wrote a handful of ActiveRecord conditions in SQL.
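Something along these lines, with invented model and column names:

```ruby
class Post < ActiveRecord::Base
  # Conditions written as raw SQL strings: quick to write, brittle to combine
  scope :published, -> { where("published_at IS NOT NULL") }
  scope :popular,   -> { where("comments_count >= 10") }
end
```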


All was well, and the queries worked! So it was, and so it continued, for several minutes. But as the littlest dev carried on, innocently combining his scopes, he was interrupted by a big, bad bug. The bug snarled, "Little dev, little dev, just call it quits!"

The littlest dev was having none of this. "Not by the code in my github commits!"

"Then I'll huff, and I'll puff, and I'll raise an exception!" shouted the bug, who proceeded to do just that.

The littlest dev fled in terror to the chat window of his brother, the second little dev, who was a little bit wiser and a little bit bigger (relatively speaking, of course). He nodded solemnly as his younger brother explained what had happened.

Putting on his bravest demeanor, he announced, "Let's refactor, and chase off that big, bad bug once and for all!"

They worked tirelessly for minutes upon minutes, using the squeel gem to replace their old SQL conditions with robust, readable Ruby code.
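With Squeel, the conditions become plain Ruby blocks instead of SQL strings; a sketch with the same invented model and column names:

```ruby
class Post < ActiveRecord::Base
  # Squeel's block DSL: `!=` becomes IS NOT NULL, `>=` stays >=
  scope :published, -> { where{published_at != nil} }
  scope :popular,   -> { where{comments_count >= 10} }
end
```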

Just as they finished adding in a new query, the big, bad bug arrived yet again, displaying an uncanny sense of plot and timing. "Little devs, little devs, just call it quits!"

"Not by the code in our github commits!" came the obvious response from the little devs.

"Then I'll huff, and I'll puff, and I'll raise an exception!" roared the bug. Sure enough, he huffed, and he puffed, and it turned out that the devs had made a typo in their new query, a direct result of too much code duplication.

The two little devs opened a group chat with their eldest brother, the third little dev, who was much wiser and much bigger (relatively speaking, of course). He nodded solemnly as his brothers explained what had happened. Confidence radiating from his monitor-tanned face, he spoke slowly and calmly, "Let's refactor, and chase off that big, bad bug once and for all."

The brothers worked tirelessly for at least a few seconds, using ActiveRecord::Relation's merge method to reduce the duplication of logic in their queries.
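ActiveRecord::Relation#merge folds one relation's conditions into another, so the query logic is written once and reused; models here are invented:

```ruby
class Author < ActiveRecord::Base
  has_many :posts
  # Reuse Post.popular rather than restating its conditions
  scope :with_popular_posts, -> { joins(:posts).merge(Post.popular) }
end
```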

The big, bad bug did not show up again, and the three little devs lived happily, for at least a little while.

I'm sorry—you don't think this sounds familiar? The version you know involves pigs building houses made of straw, sticks, and bricks? And a wolf with improbable lung capacity? This is no time for make-believe; I'm telling the story the way my father told it to me, and his father told it to him.

Like all good stories, this one has a moral: Use squeel and ActiveRecord::Relation#merge to reduce the complexity and duplication of your ActiveRecord queries. Who knows? Maybe you too can live happily, at least for a little while.

April 12, 2013 - 1 comment.

Project Estimation

Estimation. Every project does it. Most projects dread it. In this short video, we'll explain why estimating complexity is easier and more manageable than estimating time, and why points are actually a more concrete measure than hours.

For many web projects, a good estimation process is quick and simple: Time spent estimating is not time spent building. Time spent watching an XI to Eye video, on the other hand, is time well spent, indeed.

If you like this, I estimate that you might enjoy the e-book Master Space and Time with JavaScript.

April 4, 2013 - No Comments!

Don Your Cape: Becoming a Deploy Hero


It's a day like any other. You're sitting at your desk, building a web application in Rails, when suddenly disaster strikes! You find yourself in need of something that Rails simply can't do on its own. Maybe you need to run Resque or Delayed::Job to add some asynchronous behavior to your app. Maybe you need to add a daemon or a cron job of your own for an even more customized process. Whatever the case, you know that you're going to need to write and run some rake tasks.

You're making good progress, but you find yourself edging closer and closer to dangerous, forbidden territory—the Capistrano deploy scripts! You've been warned not to touch them. You've heard the whispered rumors of a young developer, his sanity claimed all too early by the tangled code lurking deep within those recipes. You have a decision to make...

Do you choose Option A, retreating to the safety of your application code? After all, you can always ssh onto the server after each deploy and do whatever cleanup you need to do. It'll only take a few minutes, right? Besides, those recipes are just so...scary.

No! You've come this far already, there's no turning back now! You've taken a few hesitant steps and peeked around a corner or two; it doesn't look that bad in here. So maybe you'll choose Option B, and define your own Capistrano tasks! You move ahead, and end up with something that looks like this:
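A sketch under Capistrano 2 conventions; the task names and the run_rake helper are assumptions:

```ruby
# config/deploy.rb
# A small helper to keep the rake invocations DRY
def run_rake(task)
  run "cd #{current_path} && bundle exec rake #{task} RAILS_ENV=#{rails_env}"
end

namespace :deploy do
  task :restart_workers, :roles => :app do
    run_rake 'resque:restart'
  end

  task :prime_cache, :roles => :app do
    run_rake 'cache:prime'
  end
end

after 'deploy:restart', 'deploy:restart_workers'
after 'deploy:restart', 'deploy:prime_cache'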

It's not...terrible. In fact, it's okay. Sure, you're manually defining a Capistrano task for each Rake task you need to run on deploy, but at least you're using bundler, and you even made your rake calls nice and DRY!

But what if I told you there's an Option C? What if I told you that there's a way you can become a deploy hero, striking down your recipe complexity once and for all? There is.

Introducing: cape! By day, a must-have fashion accessory for superheroes of all ilk; by night, a small gem dedicated to cleaning up repetition in your deploy files. Add cape to your gemfile, and the same deploy script from above is transformed into the following:
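A sketch following the pattern in cape's README:

```ruby
# config/deploy.rb
require 'cape'

Cape do
  # Define a matching Capistrano task for every Rake task in the app,
  # run on the remote servers via `bundle exec`
  mirror_rake_tasks
end
```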

Cape helps keep your deploy scripts nice and clean, by allowing you to write your rake tasks once and run them on your remote servers painlessly via Capistrano. It's the Robin to your Batman, tackling hordes of henchmen so you can focus on bringing down the big baddie. It's the Arthur to your Tick, bringing a voice of reason and pragmatism to temper your cavalier sense of adventure. It's the Snarf to your Lion-O...okay, maybe not so much.

If you ever find yourself needing to run rake tasks remotely, give cape a shot.

Image by Andrew Horner, 2013

March 29, 2013 - 3 comments

Separating from the DOM, a JavaScript Story


Most web applications have to talk to a third-party API that they don't control and didn't write. A good strategy is to route all access to that library through a wrapper object.

When you use a wrapper object, you can define the wrapper object in terms that are semantically relevant to your application. It also becomes easier to test your application, because you can test it as far as the wrapper and then test the wrapper's interaction with the API separately.  It’s also easier to change the library if needed.

Those features are valuable in front-end development, too. However, many front-end applications intertwine with the DOM or with a library like jQuery in a way that can make the code extremely dependent on the specifics of one particular page setup.

In this XI to Eye video tutorial I walk through the process of treating jQuery and the DOM like a third-party library, and creating a wrapper object to manage interaction with the DOM.

March 26, 2013 - 3 comments

How to Add Watermarks to Your Photos

Lately I've been using the Dragonfly gem to work with images on my Rails models. One of my favorite things about Dragonfly is its chainable image-processing language (or DSL) for generating different versions of an image (thumbnail, greyscale, etc.). At the core of Dragonfly's DSL is a thin wrapper around ImageMagick, which is really handy for common transformations. For more complicated transforms, it sure would be nice to extend that language with your own custom processing step, and thankfully Dragonfly makes that really easy too!

For example, recently I needed to generate a low-res watermarked version of an image. The first step—to scale the image down—is easy to do with Dragonfly's existing syntax:
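With Dragonfly's built-in thumb geometry this is a one-liner; the 480-pixel width is an arbitrary choice:

```ruby
# Scale the source image down, preserving aspect ratio
low_res = image.thumb('480x')
```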


The next step is to overlay a watermark. We want to resize our watermark to fit the source image, and then apply the fitted watermark onto the source image with a given level of transparency (or opacity). This is possible to do with ImageMagick; however, it's a bit more complicated, so this is a good opportunity to implement a custom image processor:
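A sketch of such a processor, assuming the pre-1.0 Dragonfly API, in which a processor is a module whose methods receive the temp object plus any arguments passed to #process; the module name and the exact ImageMagick invocation are illustrative:

```ruby
require 'tempfile'

module WatermarkProcessor
  def watermark(temp_object, watermark_path, opts = {})
    opacity = opts.fetch(:opacity, 50)
    output  = Tempfile.new(['watermarked', '.png']).tap(&:close)
    # `identify` reports the source's dimensions, e.g. "480x320"
    size = `identify -format %wx%h #{temp_object.path}`.strip
    # Resize the watermark to fit the source, then dissolve it onto the
    # source at the requested opacity
    system('convert', temp_object.path,
           '(', watermark_path, '-resize', size, ')',
           '-gravity', 'center',
           '-compose', 'dissolve', '-define', "compose:args=#{opacity}",
           '-composite', output.path)
    output
  end
end
```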

This processor cannot work in a vacuum, however; it must be registered so that Dragonfly can use it with its other processing steps:
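The registration step might look like this; the :images app name is an assumption:

```ruby
Dragonfly[:images].configure do |config|
  # Make the module's methods available as processing steps
  config.processor.register(WatermarkProcessor)
end
```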

Now that Dragonfly knows about our Processor, we can put it together with the scaling step to achieve a low-res watermarked version of any image:
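Chaining the built-in scaling step with the custom step; the path, size, and opacity are illustrative:

```ruby
image.thumb('480x').process(:watermark, 'public/images/watermark.png', :opacity => 40)
```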


That's it! This technique can be adapted to insert whatever custom ImageMagick transformation you can come up with into Dragonfly's DSL. Pretty sweet!

December 28, 2012 - No Comments!

Pragmatic Flight of Fancy in Rails Testing

A common TDD concept is that you write tests targeting the best API imaginable, rather than contorting your code around current production realities. It’s possibly the most practical form of flight of fancy anyone has ever considered. Run free in a field with your API before you build retaining walls to thwart mudslides. The resulting code is much better because you work toward the best possible experience, deferring details for as long as possible. It’s amazing how well the process works in all types of contexts.

Let’s consider the Rails test suite. What’s our flight of fancy? Super duper fast tests. Once we set our sights on lightning fast tests, we bring them to life, nevermore accepting pokiness as threadbare reality. Thankfully many smart people have done great work to bring this closer to being, but not without a touch of controversy.

Background on Test Performance

Executing a single (trivial) RSpec test against an empty Rails app takes on the order of several seconds. This performance is not the product of anyone’s fantastical dreams, of course, but a function of reality. The Rails stack takes time to load and there’s no obvious way around it. Except, of course, to go around it. All of a sudden, tests are several orders of magnitude faster. That’s the trick. Only use Rails when necessary and life is beautiful. Which raises the question: when exactly is Rails necessary in a Rails app?

Rails as Detail of Project

DHH chuckles at the notion of Rails-as-detail: "… Web apps don’t wake up in the morning and don an iOS suit." And it's true, no Rails codebase will ever end up running on an iPad mini. That said, how many people would benefit from lopping off functionality into lightweight independent services but have little idea how to begin? Generally speaking, who doesn’t want a better understanding of the entanglements of an application? After all, clear definition of dependencies is a central challenge in programming—it's a core tenet of usable design.

Quick Example of Rails as Detail

Take a module that lives in Rails whose mission in life is to talk to a third party API. I recently set out to extract something like this from Rails. It required one piece of Rails (the from_xml method) to parse the response:

response = Hash.from_xml(http_response)

So I added the single dependency to the api spec (and learned a bit of Rails trivia in the process):

require "active_support/core_ext"

In this simple but common case, the entire Rails stack is obviously not needed to test the wrapper. By decoupling from the framework, testing is quick, and we’ve outlined the specific dependency for the benefit of posterity.
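Concretely, the spec requires just the one slice of ActiveSupport it needs and never boots Rails; the file and class names here are invented for illustration:

```ruby
# spec/api_wrapper_spec.rb -- no spec_helper, no Rails boot
require 'active_support/core_ext'
require_relative '../lib/api_wrapper'

describe ApiWrapper do
  it 'parses the XML response into a hash' do
    xml = '<user><name>Ada</name></user>'
    expect(ApiWrapper.parse(xml)).to eq('user' => { 'name' => 'Ada' })
  end
end
```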

Controversies be Darned

It's true. As programmers we should be happy about and embrace progress in the form of:

1. Orders of magnitude performance improvements. Goes without saying. Orders of magnitude efficiencies are ultimately what we’re concerned with in various forms every day. Does it take 10 people to ship an order or just 1? Does it take 500 ms to execute a test class or 5 minutes?

2. Clearer dependencies. Asking classes to outline their dependencies to Rails and elsewhere leads to clarity. Rails is a big framework. Seeing the forest for the trees drives a better understanding of the framework and its relationship to the domain logic.

tl;dr Actors in Today’s Flight of Fancy:

Rails = Chuck wagon with all manner of useful provisions, but heavy.
Fast tests = Beautiful pastoral field, but far away.
Independence = Awesome horse that gives you a ride and teaches you lessons along the way.

Take the ride, enjoy the field, pick some flowers.