In an effort to minimize my downtime when the funny noises this MacBook Pro is making finally amount to something, I’ve wired up a git repository to OS X’s native launchd service. The git repository holds all of my active projects: development projects with their own repos, research projects, consulting projects. Everything.
Right now, there’s a Mac mini holding the shared repo with a MacBook Pro and a MacBook Air pushing and pulling to it.
- Set up SSH keys on the laptops and server (I like GitHub’s instructions)
- Set up the repo on the server
git init --bare
- Set up a repo on both client Macs (the remote path below is a placeholder for wherever your server’s repo lives)
git init
git remote add origin yourserver:/path/to/active-projects.git
git add .
git commit -m "initial commit"
- Create the active-projects.sh backup script in your ~/Documents directory
#!/bin/sh
cd ~/Documents/Projects
git pull origin master
git add .
git commit -a -m "Active Project Sync - $(date)"
git push origin master
- Make active-projects.sh executable
chmod +x ~/Documents/active-projects.sh
- Make the active-projects-backup.plist file for launchd
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
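For reference, a minimal sketch of what the full plist might look like (the Label, script path, and watched folder below are placeholders for your own setup; WatchPaths tells launchd to run the script whenever that directory changes, following culturedcode’s approach):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.example.active-projects-backup</string>
	<key>ProgramArguments</key>
	<array>
		<string>/Users/yourname/Documents/active-projects.sh</string>
	</array>
	<key>WatchPaths</key>
	<array>
		<string>/Users/yourname/Documents/Projects</string>
	</array>
</dict>
</plist>
```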
- Save the active-projects-backup.plist file to ~/Library/LaunchAgents and load it up
launchctl load ~/Library/LaunchAgents/active-projects-backup.plist
- Now, whenever a change is made in ~/Documents/Projects, it will be automatically committed to the git repo and propagated to every computer connected to that repo. Like magic.
Thanks to culturedcode’s instructions for syncing Things with git & Launchd.
First quarter, things were so crazy I had gone to blocking off specific days for specific projects. I had gotten into a comfortable rhythm with my schedule and was able to move quite a few things forward quickly.
Second quarter, things lightened up a bit and I went back to blocking off multiple projects per day in 2-3 hour chunks.
Then this week, two deadlines have me – unexpectedly – back to dedicating entire days to a project (and it’s only Tuesday!).
I had forgotten the sense of accomplishment and flow that comes with plowing through one thing, uninterrupted for 6 hours.
So, yes, I’ll be instituting 1 project / day on the calendar moving forward.
This is also consistent with both the Pomodoro Technique and the Cult of Done.
I was getting 403 errors after deploying my newest Sinatra app with Passenger.
Turns out Passenger assumes and requires a /public folder.
This app is so tiny and new that it didn’t have one yet, so I was pointing Passenger at the app’s root, resulting in the 403 errors.
Solution: Create an empty /public folder and restart Apache. Ta Da. Like magic.
If you’re still having issues, confirm that your LoadModule passenger_module path is correct.
After updating the Passenger gem to 2.2.7, my LoadModule path was way off, which didn’t help the troubleshooting efforts.
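The original config line didn’t survive into this post, so as a sketch (the gem version and paths below are placeholders; running passenger-install-apache2-module prints the exact lines for your machine), the Passenger section of httpd.conf generally looks like:

```apache
LoadModule passenger_module /Library/Ruby/Gems/1.8/gems/passenger-2.2.7/ext/apache2/mod_passenger.so
PassengerRoot /Library/Ruby/Gems/1.8/gems/passenger-2.2.7
PassengerRuby /usr/bin/ruby
```

Note that the gem version is baked into both paths, which is why updating the gem quietly breaks them.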
I bumped into a very strange bug trying to compile Sphinx on OS X Leopard today.
After running ./configure for Sphinx 0.9.8-rc2, things looked good until:
configure: error: cannot run C compiled programs.
Now, I’m positive that my computer is advanced enough to run C-compiled programs. So I peeked into the resulting config.log and noticed:
...Bad CPU type in executable...
Turns out Sphinx defaults to compiling for 64-bit machines and, well, my MacBook Pro isn’t one.
Changing the configure flags to 32bit mode fixed it:
./configure CFLAGS="-O -arch i386" CXXFLAGS="-O -arch i386" LDFLAGS="-arch i386" --disable-dependency-tracking
Thanks to schmeeve over at d27n for the tip.
After make; sudo make install, remember to create a sphinx.conf file:
cd /usr/local/etc
sudo cp sphinx.conf.dist sphinx.conf
Then run sudo searchd to start Sphinx (ThinkingSphinx has its own rake tasks for this).
In part 1 and part 2, I laid out my initial approaches on caching and performance in Cullect.
While both of them pointed in the right direction, I realized I was caching the wrong stuff in the wrong way.
Since then, I tried replicating the database – one db for writes, one for reads. Unfortunately, they got terribly out of sync, making things worse. I turned off the 2nd database and replaced it with another pack of mongrels (a much more satisfying use of the server anyway).
Cullect has 2 very database intensive processes: grabbing the items within a given reading list (of which calculating ‘importance’ is the most intensive) and parsing the feeds.
Both cause problems for opposite reasons – the former is read- and sort-intensive while the latter is write-intensive. Either can grind the website itself to a halt.
Over the last couple of weeks, I moved all the intensive tasks to a queue processed by Delayed_Job, and I’m caching the reading lists’ items in a database table – rather than memcache.
Yes, this means every request is served from the cache, and an ‘update reading list’ job is put into the queue.
So far, this ‘stale while update’ approach is working far better than the previous approaches.
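As a sketch of that ‘stale while update’ flow (hypothetical classes, not Cullect’s actual code): reads always return whatever is cached, and each read enqueues a refresh job for a background worker to process later.

```ruby
# Minimal sketch of the 'stale while update' pattern.
# Reads always come from the cache; each read enqueues a refresh
# job that a worker rebuilds from later (delayed_job in the post).
class StaleWhileUpdateCache
  def initialize(&rebuild)
    @store = {}        # cached values
    @queue = []        # pending refresh jobs
    @rebuild = rebuild # the expensive query being cached
  end

  # Serve the (possibly stale) cached value; build it only on a cold miss.
  def read(key)
    @queue << key
    @store.fetch(key) { @store[key] = @rebuild.call(key) }
  end

  # Stand-in for the background worker draining the queue.
  def work_off
    @queue.uniq.each { |key| @store[key] = @rebuild.call(key) }
    @queue.clear
  end
end
```

The upside is that readers never wait on the expensive rebuild; the cost is serving slightly stale data until the queued job runs.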
BTW, as this practice confirms, the easiest way to increase the performance of your Ruby-based app is to give your database more resources.
This weekend I made some significant headway on one of my key 2009 projects: Kernest.com.
Right now, Kernest is also the most likely candidate for my 5 minute Ignite Mpls presentation.
After a couple very rough weeks, I’m happy with where Cullect and its caching strategy are. It’s slightly different from what I described last time. I’ve also added a slave DB to the mix since my last write-up. Overall, it feels more solid, and it’s performing as well as or better than before.
In part 1, I laid out my initial approach to caching in Cullect.
It had some obvious deficiencies:
- This approach really only sped up the latest 20 (or so) items. Fine if those items don’t change frequently (i.e. /important vs /latest) or you only want the first 20 items (not the second 20).
- The hardest, most expensive database queries weren’t being cached effectively (um, yes, that’s kinda the purpose).
I just launched a second approach. It dramatically simplified the cache key (6 attributes, down from 10) and, rather than caching the entire items in that key, I just stored pointer objects to them.
Unfortunately, even the collection of pointer objects was too big to store in the cache, so I tried a combination of putting a LIMIT on the database query and storing 20 items at a time in a different cache object.
This second approach had the additional problem of continually presenting hidden/removed items (there were 2 layers of caching that needed to be updated).
Neither was a satisfactory performance improvement.
I’ve just launched a solution I’m pretty happy with and seems to be working (the cache store is updating as I write this).
Each Cullect.com reading list has 4 primary caches – important, latest, recommended, hidden – with variants for filters and keyword searches. Each of these primary caches is a string containing the IDs of all the items in that view. Not the items themselves, nor any derivative objects – both of those take up too much space.
When items are hidden, they’re removed from the appropriate cache strings as well.
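A tiny sketch of the idea (hypothetical class, not the real Cullect code): each view caches only a comma-separated string of item IDs, and hiding an item strips its ID from every view’s string.

```ruby
# Sketch of caching reading-list views as ID strings rather than
# item objects: the strings stay small, and items are fetched by
# ID on demand.
class ReadingListCache
  def initialize
    @caches = Hash.new { |h, k| h[k] = "" }
  end

  # Cache a view as a comma-separated string of IDs, not item objects.
  def store(view, item_ids)
    @caches[view] = item_ids.join(",")
  end

  def ids(view)
    @caches[view].split(",").map(&:to_i)
  end

  # Hiding an item removes its ID from every view's cache string.
  def remove_item(item_id)
    @caches.each_key { |view| store(view, ids(view) - [item_id]) }
  end
end
```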
Murphy willing, there won’t be a part 3. 😉
I started building up a new project today, one of the 2 initial revenue-generating projects on my 2009 list. While it’s a ways from launching, much of the heavy lifting was completed today. Conceptually, I’ve been using a proof-of-concept of this project for a couple years now. Oh, and I spent waaaay too long looking for domain names for it. The code name thus far has been ‘Cashboard’ – but since it’s not available, it needs to be changed.
All this work on command line Ruby apps has got me happily avoiding /views in this new Rails app.
Ironic, considering when I first started writing in Ruby, the lack of presentation was my biggest mental hurdle.