Thursday, 11 February 2010

Importance Persists

I woke up to an inspiring post from Michael Janssen, one of Cullect’s biggest supporters:

“Still, I hope Cullect can come back in some form in the future, as it was hands down the best reader that I had ever used.”

Wow. That means a great deal to me.

Cullect was originally built during a time of stagnation among feed readers. Since it went down, sites like Facebook and Twitter have increased the number of people comfortable with the noisiness of real-time publishing and processing. Simultaneously, the innovation those services once provided has also stagnated. And many of the services Cullect integrated with are down for the count.

Though Cullect.com is down, the Cullect engine is still being actively worked on – it’s powering RealTimeAds.com.

If you’re wondering – no, I haven’t used another feed reader since Cullect.

Saturday, 16 January 2010

What Were You Doing?

“I shut down a whole bunch of experimental Twitter apps. I feel a phase ending. I don’t see Twitter as my platform.” – Dave Winer

In my work to bring Cullect back from hiatus, I’ve been doing a full code review and asking myself what should stay, what should be fixed, and what should go.

A number of the services Cullect originally integrated with no longer exist (Ma.gnol.ia, for example). Cullect had fairly deep Twitter integration (deep for the time), but that seems far less useful or valuable today.

Importance is difficult to discern with a 5-minute half-life.

Adding to that – I’ve got another project with fairly significant Twitter integration – and I’m just not terribly interested in building it out. Nor am I seeing the demand for it.

Monday, 7 September 2009

Culld.us: Branded URL Shortener with Google Analytics, CNAME, and .htaccess

One of the biggest problems with URL shorteners – aside from being needed at all – is that it’s not easy to move from one to another without breaking all the previous links.

Culld.us hopes to change all that.

  • Use Your Own Domain Name
    At Culld.us, you get a subdomain – like grv.culld.us – and just point a CNAME record to it from your domain.
  • Use Your Own Web Analytics
    Keep the statistics on your short URLs in your existing web analytics package, whether that’s Google Analytics or something else: just paste the tracking code into your subdomain’s settings.
  • .htaccess and archival feeds
    If you want to leave Culld.us, you can take your redirects with you. Anytime you want, you can grab the .htaccess file – containing your shortened URLs and the webpages they redirect to – and upload it to your own server (see the sketch after this list).
    You can also grab the RSS or JSON feeds.
  • Fully Customized Stylesheet
    Anything you can change in CSS can be changed in your Culld.us subdomain.
  • Collaborative
    Anyone you authorize can add URLs to your Culld.us subdomain. Everyone gets their own login and API tokens.
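
To give a sense of what that .htaccess export could contain, here’s a minimal sketch in Ruby. The ShortUrl model, its slug and destination attributes, and the helper name are hypothetical stand-ins for illustration, not Culld.us internals:

```ruby
# Build the contents of an Apache .htaccess file that permanently redirects
# each short slug to its destination URL.
# ShortUrl, #slug, and #destination are hypothetical names for illustration.
def htaccess_export(short_urls)
  short_urls.map do |short_url|
    # e.g. "Redirect 301 /abc123 http://example.com/some/long/page"
    "Redirect 301 /#{short_url.slug} #{short_url.destination}"
  end.join("\n")
end
```

Upload a file like that to the web root of your own server and the old short links keep resolving.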

Monday, 31 August 2009

Culld.Us – URL Shortening Reimagined

We don’t shorten URLs just to shorten them.

We shorten them for the same reason big box retailers sell flat-pack furniture – greater confidence during transport.

With that in mind, I’ve completely rebuilt Cullect’s URL shortener¹: http://culld.us

It’s still custom-brandable, in a way I’m much happier with than the previous version: just point your domain at your Culld.us subdomain.

The part I’m very excited about: it flips shortening on its ear.

Sure – you can shorten a URL in Culld.us and share it with a comment in Twitter or email or wherever…or you can just leave it in Culld.us.

Think microblogging + url shortening.

1. This is the first step in a complete rebuild of Cullect as a whole.

Thursday, 25 June 2009

RealTimeAds.com Launches at MinnPost.com

MinnPost RealTimeAds

I’m pleased to announce the launch of RealTimeAds.com – an advertising product now in beta testing at MinnPost.com.

Karl and I have been building and testing the system for a couple of months now, and I’m quite happy with it on three fronts:

  • It feels like it makes advertising approachable to people and organizations that haven’t considered it within reach before – especially extremely small, locally-focused ones.
  • It re-frames publications that already exist (Twitter feeds, blog feeds, etc.) as text advertisements, cuz, you know, that’s what they are anyway.
  • It extends the real-time nature of Twitter outside of the Twitter silo, helping those people and organizations to get more mileage out of their tweets.

“Real-Time Ads runs on RSS. So, you use what you are already using – Twitter, Blogger, Tumblr, proprietary CMS, whatevs!” – Karl Pearson-Cater
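
To illustrate the idea (a sketch only, not the actual RealTimeAds code, and the feed URL is a placeholder), rendering a feed’s newest entry as a text ad takes only a few lines of Ruby:

```ruby
require 'rss'
require 'open-uri'

# Fetch an RSS feed and render its newest entry as a simple text ad.
# The feed URL is a placeholder; a Twitter, Blogger, or Tumblr feed
# would work the same way.
feed = RSS::Parser.parse(URI.open("http://example.com/feed.rss").read)
latest = feed.items.first
puts "#{latest.title} - #{latest.link}"
```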

Interested in trying it out? Give MinnPost a call: 612 455 6953.

Yes, the RealTimeAds.com system uses a version of Cullect’s engine tuned for ad serving (versus feed reading).

For those of you following along, RealTimeAds.com is Secret Project 09Q02A.

UPDATE:
Here’s the official RealTimeAds announcement from MinnPost’s Joel Kramer

“Imagine a restaurant that can post its daily lunch special in the morning and then its dinner special in the afternoon. Or a sports team that can keep you up-to-date on its games and other team news. Or a store that could offer a coupon good only for today. Or a performance venue that can let you know whether tickets are available for tonight. Or a publisher or blogger who gives you his or her latest headline.” – Joel Kramer, MinnPost

UPDATE 2: More from Joel Kramer, this time talking to the Nieman Foundation for Journalism at Harvard.

“We do believe Real-Time Ads will prove more valuable for advertising at a lower entry point.”

Tuesday, 14 April 2009

How To Cache Highly Dynamic Data in Rails with Memcache – Part 3

In part 1 and part 2, I laid out my initial approaches to caching and performance in Cullect.

While both of them pointed in the right direction, I realized I was caching the wrong stuff in the wrong way.

Since then, I tried replicating the database – one DB for writes, one for reads. Unfortunately, they got terribly out of sync, which just made things worse. I turned off the second database and replaced it with another pack of mongrels (a much more satisfying use of the server anyway).

Cullect has two very database-intensive processes: grabbing the items within a given reading list (of which calculating ‘importance’ is the most intensive) and parsing the feeds.

Both cause problems for opposite reasons – the former is read- and sort-intensive while the latter is write-intensive. Either can grind the website itself to a halt.

Over the last couple weeks, I moved all the intensive tasks to a queue processed by Delayed_Job, and I’m caching the reading lists’ items in a database table rather than memcache.

Yes, this means every request is pulled from the cache, and an ‘update reading list’ job is put into the queue.

So far, this ‘stale while update’ approach is working far better than the previous approaches.
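
In outline, the pattern looks something like this. It’s a simplified sketch in modern ActiveRecord syntax, not Cullect’s actual code; CachedItem and ReadingListRefreshJob are hypothetical names:

```ruby
# Serve whatever is already cached in a database table, then queue the
# expensive refresh so the cache catches up in the background.
# CachedItem and ReadingListRefreshJob are hypothetical names.
class ReadingListsController < ApplicationController
  def show
    # Read the (possibly stale) cached items straight from a DB table.
    @items = CachedItem.where(reading_list_id: params[:id])
                       .order(:position)
                       .limit(20)

    # Enqueue the 'update reading list' work for Delayed::Job.
    Delayed::Job.enqueue(ReadingListRefreshJob.new(params[:id]))
  end
end

# A Delayed::Job payload only needs to respond to #perform.
ReadingListRefreshJob = Struct.new(:reading_list_id) do
  def perform
    # Recalculate 'importance', re-sort, and rewrite the cached rows here.
  end
end
```

The request never waits on the heavy query; it only ever reads the table the background job maintains.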

BTW, as this practice confirms, the easiest way to increase the performance of your Ruby-based app is to give your database more resources.

Tuesday, 27 January 2009

After a couple very rough weeks, I’m happy with where Cullect and its caching strategy are. It’s slightly different from what I last talked about. I’ve also added a slave DB to the mix since my last write-up. Overall, it feels more solid and is performing at or better than before.

Sunday, 25 January 2009

How To Cache Highly Dynamic Data in Rails with Memcache – Part 2

In part 1, I laid out my initial approach to caching in Cullect.

It had some obvious deficiencies:

  1. This approach really only sped up the latest 20 (or so) items. That’s fine if those items don’t change frequently (i.e. /important vs. /latest) or if you only want the first 20 items (not the second 20).
  2. The hardest, most expensive database queries weren’t being cached effectively. (Um, yes, that’s kinda the purpose.)

I just launched a second approach. It dramatically simplified the cache key (6 attributes, down from 10), and rather than caching the entire items under that key, I just stored pointer objects to them.

Unfortunately, even the collection of pointer objects was too big to store in the cache, so I tried a combination of putting a LIMIT on the database query and storing 20 items at a time in a different cache object.
This second approach had the additional problem of continually presenting hidden/removed items (there are two layers of caching that need to be updated).

Neither was a satisfactory performance improvement.

I’ve just launched a solution I’m pretty happy with, and it seems to be working (the cache store is updating as I write this).

Each Cullect.com reading list has 4 primary caches – important, latest, recommended, hidden – with variants for filters and keyword searches. Each of these primary caches is a string containing the IDs of all the items in that view. Not the items themselves, nor any derivative objects – both of those take up too much space.

When items are hidden, they’re removed from the appropriate cache strings as well.
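
A stripped-down version of that scheme might look like the following. It’s a sketch only: the key format and helper names are illustrative, and Rails.cache stands in for the memcache client.

```ruby
# Each reading-list view caches one string of item IDs, e.g. "12,98,407".
# Key format and helper names are illustrative, not Cullect's actual code.
def view_cache_key(reading_list_id, view, filter = nil)
  ["reading_list", reading_list_id, view, filter].compact.join(":")
end

def write_view_cache(reading_list_id, view, item_ids)
  Rails.cache.write(view_cache_key(reading_list_id, view), item_ids.join(","))
end

def read_view_items(reading_list_id, view)
  ids = (Rails.cache.read(view_cache_key(reading_list_id, view)) || "").split(",")
  # Hydrate items only when the page is rendered; real code would also
  # preserve the cached ordering.
  Item.where(id: ids)
end

# When an item is hidden, pull its ID out of the affected cache string.
def remove_from_view_cache(reading_list_id, view, item_id)
  key = view_cache_key(reading_list_id, view)
  ids = (Rails.cache.read(key) || "").split(",")
  Rails.cache.write(key, (ids - [item_id.to_s]).join(","))
end
```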

Murphy willing, there won’t be a part 3. 😉

1. Storing the items in the cache as objects

Thursday, 22 January 2009

Cullect and Why I Built It – UPA-MN Feb 12 ’09 6-8pm @ Open Book

(cross posted @ MNteractive.com & blog.cullect.com)

I’ll be talking about how and why I built Cullect.com at the February 12th UPA-MN event at the Open Book, 6-8pm.

(Feels good to be back at the UPA-MN, it’s been too long.)

The agenda:

  • Developing a product you will use and the way you will use it.
    Designing the API as the primary user interface.
  • A tour of some of the innovative UI concepts behind Cullect.
  • An activity in which you will be able to play the role of an information curator.

If you’re a fan or customer of Cullect, I hope you’ll join us. It’ll be more fun if I’m not the only one talking about it.

If you’re not sure what Cullect does, come by as well, as I’d like to get better at articulating it.

$10 members, $30 non-members (only cash and check accepted at the door)
RSVP: By 5:00 p.m. on Monday, February 9, 2009, to rsvp@upamn.org