Over at Limited Pressing, we’re using a wonderful background queue called Resque. Resque was built by Chris Wanstrath of GitHub fame and runs on top of Redis. It’s fantastic and I absolutely love it. If you haven’t had a chance to try it, go check it out.
When setting up Resque in production, I ran into a few issues that weren’t immediately obvious to me. I worked through them in the end and thought I’d write something up to help others out in the future.
Monitoring with God
Resque comes bundled with a config file for god, but I had an issue with the way it would start up a worker. My example uses only a single worker, but you can combine my method with the bundled one. Here’s how I did it:
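As a rough sketch of what a single-worker god watch can look like (the paths, worker name, queue list, and log location below are illustrative assumptions, not the original config):

```ruby
# config/resque.god -- sketch of a god watch for one Resque worker.
# Paths, names, and the queue list are placeholders for your app.
rails_root = "/var/www/app/current"

God.watch do |w|
  w.name     = "resque-worker"
  w.group    = "resque"
  w.dir      = rails_root
  w.interval = 30.seconds
  w.env      = { "QUEUE" => "*", "RAILS_ENV" => "production" }
  w.start    = "rake -f #{rails_root}/Rakefile environment resque:work"
  w.log      = "/var/log/resque-worker.log"

  # Start the worker whenever it isn't running; this is also what
  # brings a fresh worker back up after we QUIT the old one.
  w.start_if do |start|
    start.condition(:process_running) do |c|
      c.interval = 5.seconds
      c.running  = false
    end
  end
end
```

The `environment` task in the start command loads the Rails app before `resque:work`, which is what lets workers run your models.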
Restarting Workers on Deploy
The other issue I ran into was that I needed my workers to restart each time I deployed. Because Resque loads your application’s environment, a new deploy with changed models means that your existing Resque workers are running outdated code. The way I handled this was to QUIT my workers and let god start them back up using the updated codebase. (Resque treats QUIT as a graceful shutdown: the worker finishes its current job before exiting.)
At first, I wasn’t thrilled about quitting and restarting my workers this way, but after reviewing how Resque handles shutting down workers, it seemed like an okay approach. If you have any tips on how the environment can be reloaded without waiting for god to start up new workers, I’d like to hear them, so shoot me an email.
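For the deploy side, one way to wire this up is a Capistrano hook that invokes the restart task after the new release is symlinked. This is a sketch assuming Capistrano 2; the hook name and roles are illustrative, though queue:restart_workers is the task discussed below:

```ruby
# config/deploy.rb -- illustrative Capistrano 2 hook (names assumed).
namespace :deploy do
  desc "QUIT the Resque workers so god boots fresh ones on the new code"
  task :restart_workers, :roles => :app do
    # Run the rake task from the newly deployed release.
    run "cd #{current_path} && RAILS_ENV=production rake queue:restart_workers"
  end
end

# Fire after the new release becomes current.
after "deploy:symlink", "deploy:restart_workers"
```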
A special thanks to Nate Daiger of ChunkHost for pointing out a much better way of collecting the worker pids in my queue:restart_workers rake task. Essentially, I was iterating over Resque.workers and calling #worker_pids on each worker. Nate pointed out that #worker_pids actually returns all of the pids anyway, so it was unnecessary to loop. Another thing he suggested was replacing the use of #worker_pids with splitting the output of #to_s. The idea here is that #worker_pids uses ps under the hood and could erroneously pick up similarly named processes, such as resque_web. These improvements have already been applied to the above gists. Thanks, Nate!
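For illustration, the #to_s-based collection boils down to something like this sketch. A Resque worker’s #to_s renders as "hostname:pid:queues", so the pid is the second colon-separated field (the helper name here is mine, not from the gist):

```ruby
# Sketch: collect worker pids from their to_s representation,
# which Resque renders as "hostname:pid:queues".
def resque_worker_pids(workers)
  workers.map { |worker| worker.to_s.split(":")[1] }
end

# Inside the rake task you would then do something like:
#   pids = resque_worker_pids(Resque.workers)
#   system("kill -QUIT #{pids.join(' ')}") if pids.any?
```

Because this never shells out to ps, it can’t accidentally match unrelated processes like resque-web.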