I’m not a fan of roller coaster rides, so this is almost comforting compared to California, Florida, or Arizona.
Thanks to Mark Perry for the pointer.
About time. And product. And being more deliberate.
While listening to John Gruber & Dan Benjamin’s The Talk Show #24, I was reminded of one of my pet peeves with all the free software – it completely kills innovation1.
John and Dan were talking about email.
I feel the same way about email clients as I do about feed readers – they’re water.
And by water, I mean money.
By money, I mean: my wallet is open for something better than Mail.app.
Elsewhere:
1. I’m calling web browsers the exception that proves the rule. For some reason, there are plenty of desktop options for web browsers, each fairly distinct from the others and interesting in its own right. Throughout the day, I regularly alternate between Safari and Camino, while usually touching Internet Explorer and Firefox every other day.
Bruno Bornsztein and I talk about:
[24 min].
If services like FriendFeed, Twitter, etc., have an innovation, it’s in presenting reading and publishing in the same view. This single view – often described as ‘presence’ or ‘social-ness’ – makes it easy to write a comment or publish a new idea quickly.
The 2…3…4? blogs I maintain are where I feel the most comfortable exploring and archiving ideas1. Yet the one (apparently killer) feature the popular weblog tools lack is this combined view. I’m thinking of the ability to easily initiate a reply on one weblog/service that can be read in its entirety on another weblog/service – without a click.
Separate places distract and dilute.
Now, imagine loading up feeds into a WordPress install and reading them the same way you read things within FriendFeed or Twitter or WhatHaveYou2.0. The writing process would remain the same, and when a post is published – it’s sent to all the other sites/services that are subscribed to the feed.
Weblogs today aren’t far off. The difference is immediacy and a combined read/write presentation. There’s nothing requiring a weblog post to be larger or smaller than some arbitrary number of characters, have comments, have categorization, or any number of other things that separate it from ‘microblogging’ tools.
Perhaps you feel more comfortable publishing through Twitter or YouTube or Utterz than a weblog proper. This difference should be as meaningless as our respective carriers when we’re chatting on the phone.
1. This post originally started as a comment on FriendFeed, but the lack of paragraphing and a few other annoyances sent me here.
Via Twitter, I was asked the above question.
It’s a good question, cutting to the core of my ambivalence over the religious wars between RSS, Atom, etc.
The flavor of XML a feed is published in shouldn’t matter.
Neither to the publisher nor the receiver.
Any parser able to handle multiple flavors should be able to parse all flavors equally fast. Some parsing engines are built for one flavor of XML or another – rather than abstracted to parse XML in general. Then again, it’s trivial to spit out one XML format as another, so, maybe format is a conversation between the user agent and the server.
Eh. (Get a smarter parser, jeeez.)
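To make the ‘smarter parser’ idea concrete, here’s a minimal sketch of flavor-agnostic parsing – illustrative only, not Cullect’s actual code, and with namespace handling omitted for brevity. It treats an RSS `item` and an Atom `entry` as the same thing:

```ruby
require 'rexml/document'

# Hypothetical helper: pull title/link pairs out of a feed without
# caring whether the feed is RSS (<item>) or Atom (<entry>).
def extract_entries(xml)
  doc = REXML::Document.new(xml)
  REXML::XPath.match(doc, '//item | //entry').map do |el|
    title   = el.elements['title']
    link_el = el.elements['link']
    # RSS puts the URL in the element text; Atom uses an href attribute.
    { title: title && title.text,
      link:  link_el && (link_el.attributes['href'] || link_el.text) }
  end
end

rss  = '<rss version="2.0"><channel><item><title>Hi</title><link>http://a.example/</link></item></channel></rss>'
atom = '<feed><entry><title>Hi</title><link href="http://a.example/"/></entry></feed>'
```

Both sample feeds come out as the same hash – which is the whole point: abstract over the tag names and the flavor stops mattering.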
From my study of both RSS and Atom, comparing them is like comparing the UIs of Windows and Macintosh. They do feel different. One puts window buttons over here, one puts them over there. One is this color, one is that color. One prefers the Control key, the other prefers the Command key. Some people prefer this one, others prefer that one. One says ‘potahtoe’. One, ‘potaytoe’.
From my understanding, Atom was developed due to perceived deficiencies and ambiguities in the RSS 2.0 spec. Perhaps RSS 2.0 is guilty of being open to interpretation. I don’t know. I’ve found it to have logical places for everything I want to publish. Same for Atom.
Last I checked, Cullect was parsing somewhere north of 8100 feeds. Cullect doesn’t and shouldn’t care if a feed is Atom or RSS or RDF or filled with crazy namespaces. Cullect has 2 jobs when it comes to feeds: parse XML tags in a smart way, and publish out useful feeds in whatever flavor the user agent requests.
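The second job – publishing in whatever flavor is requested – is mostly string assembly. A hypothetical sketch (not Cullect’s real code; names and markup are illustrative) that serializes the same items as either flavor:

```ruby
# Illustrative only: emit the same entries as RSS or Atom on request.
def to_feed(entries, flavor)
  items = entries.map do |e|
    if flavor == :atom
      %(<entry><title>#{e[:title]}</title><link href="#{e[:link]}"/></entry>)
    else
      %(<item><title>#{e[:title]}</title><link>#{e[:link]}</link></item>)
    end
  end.join
  if flavor == :atom
    %(<feed xmlns="http://www.w3.org/2005/Atom">#{items}</feed>)
  else
    %(<rss version="2.0"><channel>#{items}</channel></rss>)
  end
end
```

Once the items are parsed into a neutral structure, the output flavor really is just a conversation between the user agent and the server.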
The biggest issue I’ve found in parsing thousands of XML feeds is badly published XML. Feeds using the tags in bizarre ways. Feeds just not conforming to any spec. Feeds published in a way that just makes parsing hard.
Both RSS and Atom publishers are equally guilty. My Wacky-Feeds-That-Won’t-Parse list contains just as many RSS feeds as Atom feeds.
A year ago, I wrote up my thoughts on publishing RSS 2.0 for easy parsing.
4:30 minutes of my own back channel. Quicktime
Here’s the recording of the full event:
http://www.ustream.tv/recorded/563535
I needed to copy a database, and the idea of backing it up just to re-import1 seemed like double the work. Here’s a snippet to pipe a mysqldump into a remote database. Keep an eye on the usernames and passwords – you’ll need 3 sets: one for the database you’re copying, one to get into the remote server, and one for the remote, target database.
mysqldump -v -uUSER -pPASSWORD --opt --compress DATABASE_NAME | ssh REMOTE_SERVER_USER@REMOTE_HOSTNAME mysql -uREMOTE-MYSQL-USER -pREMOTE_MYSQL_PASSWORD REMOTE_DATABASE_NAME
1. Backing up is a good thing. Why aren’t you doing it? Here’s a script for that.
mysqldump -h HOSTNAME DATABASE_NAME | gzip -9 > BACKUP_DIR/DATABASE_NAME.sql.gz
On this beautiful Saturday, I missed PublicRadioCamp. Unfortunate for many reasons, including – many of my favorite people in town were there. Hell, some of my favorite people in town organized it.
Instead, I had one of the best days ever with my family.
Our Day
After all that, I’m catching up on what I missed with Bob Collins’ Off to Camp post. Good stuff.
If you have a bunch of text containing relative-path hyperlinks, and you’d like to change them to absolute paths, you might find this snippet helpful.
content = "some text with a <a href='/path/to/relative_link/'>relative link</a>"
link = "http://somedomain.com/"
content.gsub(/=('|")\//, '=\1*/').gsub(/\*\//, link.match(/(http|https):\/\/[\w.]+\//)[0])
The asterisk ‘*’ is a hacky placeholder for the actual link swapped in with the second gsub. If you know of a way to pass in the link without needing the placeholder, awesome – paste it in the comments. Thanks.
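For what it’s worth, the block form of gsub can drop the placeholder entirely. A sketch using the same sample strings (an alternative I’m suggesting, not from the original snippet):

```ruby
content = "some text with a <a href='/path/to/relative_link/'>relative link</a>"
base    = "http://somedomain.com/"

# Block form of gsub: $1 holds the captured quote character, so the base
# URL can be interpolated directly – no asterisk placeholder, one pass.
absolute = content.gsub(/=('|")\//) { "=#{$1}#{base}" }
```

Inside the block, the capture groups from the match are available as `$1`, `$2`, etc., which is what makes the single-pass version work.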
BTW – I found this link on my economics reading list in Cullect which proves (at least to me) that good curation is about both depth and discovery.