A Use Case for Identity XML – Demographic Surveys

Stowe Boyd’s running a reader survey. I’ve followed Stowe from Get Real to /Message and thought I’d check out the survey.

Standard demographic stuff: age, gender, household income, zip code, employment status, profession, internet usage, etc. The common questions that attempt to build an anonymous picture of people without actually getting involved with them.

Reading through the questions, the myth of blogs-as-conversation fell away. Stowe doesn’t know who I am – or you are – at all.

If he did, he wouldn’t need to ask these questions – because we’ve already answered them. All of us. Somewhere – if only at the BackBeat Media survey, or in our My.Yahoo.com profiles.

I don’t mind giving this info. It’s just annoying to answer the same question twice. I’d much rather just point a URL at the survey.

In the same way, I’d prefer to point a URL at my current photo rather than upload it _again_ to another website (43things, Stikipad, Eventful, Amazon, Technorati, etc.).

I’m wondering if there’s an XML specification (or something like it) for the basic identity info all these surveys (read: marketers) want. For example:
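Something like the sketch below, perhaps – every element name here is invented for illustration, built from the same fields the survey asks for:

```xml
<!-- Hypothetical identity file; element names are made up, not a real spec -->
<identity>
  <demographics>
    <age-range>25-34</age-range>
    <gender>male</gender>
    <household-income currency="USD">50000-75000</household-income>
    <zip-code>55401</zip-code>
    <employment-status>self-employed</employment-status>
    <profession>consultant</profession>
    <internet-usage unit="hours-per-week">40+</internet-usage>
  </demographics>
</identity>
```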


I could spin up a file with this info, host it, maintain it, and provide brief glimpses into it for the right price (so could you).

Yes, this is the Customer-as-Silo idea, and it feels like there’s an intention-economy connection as well. No, I didn’t complete the survey.

Measure Map = Google Analytics v2.0?

I sure hope so. The current offering (Urchin assimilated) isn’t that useful.

“Bringing Measure Map to Google is an exciting validation of the user experience work I’ve been doing with my partners at Adaptive Path for years.” – Jeffrey Veen

Congrats to Veen, Congrats to Google. To PeterMe and everyone at Adaptive Path, I’m sorry for your loss.

I suspect Measure Map will first be available by default on the domains Google hosts today (blogger.com), positioning Google as the host-of-choice for tomorrow.

The Problems with Podcast Directories

I had a great lunch with Paul Cantrell today at Sushi Tango. Oh, and if you need an idea for lunch, ask Paul. He listed off a half-dozen other places that sounded just as fantastic.

One of the many things we discussed was the problem of podcast ratings and categorization – i.e. the problem of finding interesting podcasts.

At the bottom of each post here on the Work Better Weblog (and many of the other sites I contribute to) you’ll see a star rating. Click it if something I say resonates with you – don’t if it doesn’t. I offer it as a low-investment feedback mechanism. It’s cheaper than writing a comment and only slightly more expensive than reading the post itself.

Like all feedback mechanisms, those most likely to bother are those at the poles (polls) – why speak up if you’re not in the choir, or are in the wrong church altogether?

The number to pay attention to is the number of votes – not the rating itself. So yes, an overall rating of 2.5 with 10 votes would be a good thing. In the end, our individual rating criteria are very different. Is this rating in comparison to the previous post? Another post on the same topic on a different weblog? How well my writing went with your morning coffee? Is 5 good or is 1 better?

The star is only a single indicator. Top rated posts on this blog will be different than top rated posts that I’ve written, than, well, you get the picture. How does a 4 at podcasts.yahoo.com compare to a 5 at Podcast Pickle to a 0 at Podcast Alley? Given how niche anything in a weblog or podcast is – the qualifiers of what these ratings mean are a mile long.

AmigoFish has promise – its collaborative filter + RSS feed sends new stuff directly to my feedreader, based on what I and others have rated, then provides an easy way to go back and finish the loop. The problem is that (like at all the directories) ratings are applied channel-wide, and there are a lot of open loops.

I’ve got a channel over at GigaDial – Garrick’s Podcast Picks. It’s an on-going list of podcasts that I’ve found exceptional (35 as of this writing). Here’s the 9-step process for an item to get added to the list:

  1. A podcast finds its way into my feedreader
  2. It gets transferred into my iTunes’ Unlisted Podcast smart playlist
  3. It comes up on shuffle
  4. I listen and don’t hit ‘next’
  5. It resonates with me
  6. I remember I liked it the next time I’m at my computer
  7. I click the ‘add to podcast picks’ bookmarklet in my browser
  8. I search for the specific podcast in their directory (not everything is in there)
  9. I find the podcast and add it to the list

I gotta wanna – I’m just saying. So, this means something and I’m only going to do it once. Now, unless I take the extra step of telling the publisher I’ve added them – they’ll never know they connected. Same is true at all the other directories. That sucks. More than Earthlink advertisements in podcasts.

Within the RSS 2.0 spec, there’s an optional category tag, at the channel and item levels. It’s a free-form field – can be anything you’d like. Anything. If it’s a series of characters – it’s a category. And it can be different item to item, podcast to podcast.
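For reference, here’s a minimal sketch of what that looks like in a feed – the titles and category strings below are made up, but the structure follows the RSS 2.0 spec:

```xml
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <link>http://example.com/</link>
    <description>A made-up feed showing free-form categories</description>
    <!-- channel-level categories: any string works -->
    <category>minneapolis coffee conversations</category>
    <category>lunch recommendations</category>
    <item>
      <title>Episode 1</title>
      <!-- item-level categories can differ from item to item -->
      <category>sushi</category>
    </item>
  </channel>
</rss>
```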

Reminds me of a scene in a quiz show sketch from MTV’s 90s comedy show ‘The State’:

“Name a type of car.”
“Yes, blue can be a type of car.”

So, why are all the directories shoehorning podcasters into 15 main, meaningless sections when each podcaster could declare their own unique categories – plural – and stand out?

A single-dimension directory is like trying to make money hosting podcasts or sanitizing telephones – it’s only fulfilling at the most cursory level. This is why Google is still the best podcast directory – it takes very specific queries, ones with multiple qualifiers, then returns fulfilling results.

Which brings me to the podcast directories splogging up the search results. Yes, podcast directories are guilty of the same crime as the other PageRank-loving sploggers – taking an RSS feed and republishing it for higher placement. 6 of the 10 items on the first page of Google results for “first crack podcast” are directories echoing one another. This redundancy makes each result less valuable.

Update 9 Feb 06: If you’d like a more colorful read of the same issues, The Bman at FalconTwin.com delivers.

Why Google Analytics Isn’t Useful

After a couple months of being completely without site analytics, I thought I’d try out Google Analytics.

Things it doesn’t measure: RSS feeds, downloads, downloads from RSS feeds.

Considering 95% of what I’m tracking is accessible via an RSS feed (like podcasts), Google isn’t helping me. The previous iteration of their tool, Urchin, was a server log cruncher, not a pixel bug (a little bit of JavaScript on every page). As a server log cruncher, it could measure RSS subscriptions and related downloads. As a pixel bug – despite how sexy the map overlay view of traffic is – I’m helping Google more than the other way around.
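A server log cruncher really can see that traffic. Here’s a minimal sketch in Python – it assumes Apache’s combined log format, and the feed path and media extensions are placeholders, not my actual setup:

```python
import re
from collections import Counter

# Minimal pattern for Apache "combined" log lines: we only need the
# request path and the status code to count feed hits and downloads.
LOG_PATTERN = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_feed_activity(log_lines, feed_path="/rss.xml", media_ext=(".mp3", ".m4a")):
    """Tally successful RSS feed requests and media downloads from raw log lines."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if not match or match.group("status") != "200":
            continue
        path = match.group("path")
        if path == feed_path:
            counts["feed_requests"] += 1
        elif path.lower().endswith(media_ext):
            counts["downloads"] += 1
    return counts

sample = [
    '1.2.3.4 - - [05/Jan/2006:10:00:00 -0600] "GET /rss.xml HTTP/1.1" 200 5120 "-" "NetNewsWire"',
    '5.6.7.8 - - [05/Jan/2006:10:01:00 -0600] "GET /audio/show42.mp3 HTTP/1.1" 200 9000000 "-" "iTunes"',
    '9.9.9.9 - - [05/Jan/2006:10:02:00 -0600] "GET /index.html HTTP/1.1" 200 2048 "-" "Mozilla"',
]
print(count_feed_activity(sample))  # 1 feed request, 1 download
```

A pixel bug can never produce these numbers, because aggregators and podcatchers fetch the feed and the enclosures without ever executing the JavaScript on an HTML page.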

I’m confident the people I’m writing this for – i.e. you – are reading this via some type of RSS aggregator (yes, My Yahoo and gFeed count). So, ironically, the people I’m most interested in – i.e. you – aren’t counted by Google. Only the people viewing the HTML version of the site are. Like those coming by via Google. Hmmmmm. This doesn’t feel right.

Related: Read/WriteWeb: “Page Views per user: RSS blows HTML away”

Update 5 Jan 2006: Google Analytics doesn’t give full referrer URLs. It’d be more useful if it did.

On Measuring What Matters

I’ve been itching to see Dave Slusher’s reaction to the Audible Wordcast announcement and he didn’t disappoint.

“What matters to me are the number of sensible comments, the other shows that quote me, the number of people that came up to me and talked to me at PME and told me they enjoyed the show. These are not simple numbers, but the simple numbers are flawed and odd and full of fraud.” – Dave Slusher

Earlier this year, I was asked how I’m measuring the success of the First Crack Podcast. With robots and aggregators hitting the feed, people downloading and not listening immediately (or at all), and so many other factors throwing off the simple numbers, I too decided they weren’t good measures.

Instead, I’ve decided on two factors:

  1. Showing up within the top 10 results in searches for the people I talk to.
  2. The number of comments and ratings for the individual conversations.

Both of these factors are driven by people interested in the conversation and have an indefinite time period associated with them. Two things that map very well to podcasting’s inherent characteristics.

UPDATE: Hugh’s got a great comment on metrics

“Metrics don’t really matter. What matters is your network, your readers, the quality of your writing etc etc. It’s an easy thing to forget, once you first start seeing your traffic exploding and the lucrative consulting offers start landing in your inbox.”

Our Memories Are Poor, Measure in Context

“It isn’t as astonishing the number of things I can remember, as the number of things I can remember that aren’t so” – Paine, Mark Twain: A Biography, 1912

This Sunday, the Vikings are playing the Lions. I’m confident they’ll be keeping score throughout the game – yard by yard, play by play – rather than having the refs remember who played better and declare an arbitrary score on Monday morning.

There’s a huge, often unacknowledged difference between what people say they do and what they actually do. This delta widens with time. If you’re looking for accuracy, metrics need to be captured in context. If you’re looking for fiction, then making up what someone did two days ago is as valuable as asking them to remember it (making it up just might be more interesting).

PodcastAlley, Podcasts.Yahoo.com, and a number of the other podcast directories offer ranking and voting for “your favorite podcast”. There are three problems with this:

  1. The metrics at different sites don’t know about each other – diluting the value of each of them. For example, what does it mean to be #8 at PodcastAlley and simultaneously #23 at Podcasts.Yahoo.com?
  2. All these systems rank unrelated podcasts against each other – just like Arbitron or Nielsen ratings. The only people interested in whether, for example, meatloaf ranks higher than lawn mowers are advertisers. This ranking doesn’t help the listeners’ enjoyment or help the producers improve (in fact, it could be detrimental to both).
  3. I still have to remember to vote, at one site or multiple (just like Arbitron). The action of voting and ranking is separate from listening, so I don’t.

I’ll let Mark Ramsey wrap this one up for us:

“The diary methodology is woefully inadequate to meet the challenges of measurement in our industry going forward.”

“If we want the advertising community to place any credence whatsoever in our measurements, then we are obliged to use measurement methodology which inspires credence.”

On being Seth-dotted

I want to thank Seth Godin for linking to the Work Better Weblog yesterday. As expected, I saw it in the server logs.

Work Better saw quadruple the traffic of just 24 hours earlier. It didn’t take down my server (slashdotted), but it did make my afternoon (Seth-dotted?). Plus, I got a great email from Joe Ely over at Learning about Lean – one of the blogs that inspired the Work Better Weblog.

It reminded me just how fast and direct internet communication is. Two other recent, personal, and measurable examples of this: