Is the Salmon protocol tasty enough?

Conversations on the social web mostly take place in comments. But comments are so fragmented! Consider this example:

  • Publisher publishes a blog post
  • A regular reader of the Publisher comments on the blog post
  • Someone else reads the post in Google Reader and shares it
  • Another comments on and reshares the Google Reader item
  • Another decides to share it on Facebook
  • Another comments on the Facebook link
  • Another submits the link to Digg
  • Another comments on the Digg submission
  • Publisher has a FriendFeed account and the post appears in his FF stream
  • Another user comments on the FF stream item
  • etc.

Obviously the post has stirred some interest and generated a conversation. But the conversation is dispersed across many different places. Publisher loses track of many aspects of the conversation around the post. Commenters, likewise, are mostly unaware of what happens outside their own area of interaction with the content: Facebook users ignore the FriendFeed commenters, and so on.

This situation has sparked some intense debates. Many publishers feel it is not in their best interest, as potential traffic to their sites is deflected to an ‘aggregator’. Publishers who have a financial interest in their site traffic, and do not just want their opinions spread, find this particularly unappealing.

To mend this situation, a group at Google is working on a new protocol that will allow comments to ‘return’ to the original publisher’s site. The protocol is called Salmon

Salmon protocol logo

and you can get a basic idea of how it works from this presentation.

Salmon does bring the comments back to the publisher’s site, but it does not solve the publishers’ problem. As you can see from slide 4, once a comment is back on the publisher’s site, it is republished to all its subscribers (including the aggregators). This means that each aggregator ends up with a full picture of the comments around the post, regardless of origin. From the user’s standpoint there is no reason to move to the publisher’s site or to another aggregator, as the full picture will be available on whatever site the user prefers to frequent. The publishers may object, but by what right? Their protests imply that they OWN the comments, which is hardly the case. The user owns his comments.
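To make the fan-out concrete, here is a minimal sketch of the flow described above, in Python. The Publisher and Aggregator classes and the swim_upstream name are my own illustrations of the idea, not the actual Salmon wire protocol:

    # A toy model of the flow above: a comment made on an aggregator 'swims'
    # back to the publisher, which republishes it to every subscriber.
    # Class and method names are illustrative, not part of the Salmon spec.

    class Aggregator:
        def __init__(self, name):
            self.name = name
            self.comments = []

        def receive(self, comment):
            # every subscriber ends up with the full comment set for the post
            self.comments.append(comment)


    class Publisher:
        def __init__(self):
            self.comments = []
            self.subscribers = []  # aggregators, feed readers, etc.

        def subscribe(self, aggregator):
            self.subscribers.append(aggregator)

        def swim_upstream(self, comment):
            # the comment returns to the publisher...
            self.comments.append(comment)
            # ...and is immediately fanned out again, so no single site
            # holds a privileged copy of the conversation
            for subscriber in self.subscribers:
                subscriber.receive(comment)


    blog = Publisher()
    facebook, friendfeed = Aggregator("facebook"), Aggregator("friendfeed")
    blog.subscribe(facebook)
    blog.subscribe(friendfeed)

    blog.swim_upstream({"author": "alice", "text": "Nice post!", "origin": "facebook"})
    print(len(friendfeed.comments))  # 1 -- FriendFeed now holds the Facebook-origin comment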

But let’s leave aside the publishers’ concern for a moment. Is Salmon a good thing for the user? I would argue it is. He can have access to a discussion in its entirety without much hassle, and might therefore be tempted to engage, or to engage more.

But something is still missing: the user does not have easy access to his own comments for ALL the pieces of content he has interacted with. And he has no control over them either. They can disappear along with a site that closes down, or, in the simplest case, they can be deleted by the site moderators. This is the problem that systems like Disqus, Intense Debate and JS-Kit are aiming to solve. But they won’t, because it is very unlikely that any one of them will become ubiquitous.

I think the problem should be approached from another angle. A comment is a piece of content. There is no distinction in form from any other piece of content: both are text (or audio or video in some cases). What subordinates the comment-content to the original post-content is notional and semantic: the post-content preceded the comment-content and was, in fact, what aroused the commenter’s interest in the issue. But the same applies to a post that pingbacks another post. So a comment is a piece of content and should have its independence.

The question is how?

The issue is related to our digital identities: if, in the web-to-come, we can have a unique, independent, central point for our digital identities, this central point could be the originator and host of our comments.

A modification of the Salmon protocol could easily make this happen: whenever a user comments on a publisher site, the site would send the comment back to the user’s digital identity home. Likewise, whenever an aggregator receives a user comment, the aggregator would send the comment back to the user’s home, as well as to the publisher.
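A standalone sketch of that modified flow, again in Python and again purely illustrative: the IdentityHome class and the route_comment function are assumptions of mine, not part of Salmon.

    # Every site that receives a comment forwards a copy to the publisher
    # (as in plain Salmon) and to the commenter's own 'identity home'.

    class IdentityHome:
        """The user's own endpoint, archiving every comment he authors."""
        def __init__(self, owner):
            self.owner = owner
            self.archive = []

        def store(self, comment):
            self.archive.append(comment)


    def route_comment(comment, publisher_inbox, identity_homes):
        # the publisher still gets the comment, exactly as before
        publisher_inbox.append(comment)
        # additionally, the author's identity home receives a copy, so the
        # author keeps his comment even if the publisher or aggregator
        # later deletes it or shuts down
        home = identity_homes.get(comment["author"])
        if home is not None:
            home.store(comment)


    alices_home = IdentityHome("alice")
    publisher_inbox = []
    route_comment(
        {"author": "alice", "text": "Great point!", "origin": "some-aggregator"},
        publisher_inbox,
        {"alice": alices_home},
    )
    print(len(alices_home.archive))  # 1 -- Alice keeps an independent copy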

I do not think this would be difficult to implement, although I can foresee friction over who controls the user’s digital ‘home’. But that’s another issue.

Read also Louis Gray’s post on Salmon.


I have a dream (of social bookmarking)!

By Alex King
Back at the end of 2007, half a year after Google Reader had launched its sharing feature, I had an idea for a new service that would aggregate all the shared items and sort them according to the number of times each post had been shared.

As usually happens with new ideas, somebody else had it too and, most importantly, made it real before I had even started coding (actually, I had, but only a few lines). In a short while a second, similar aggregator appeared, and today we are fortunate to have ReadBurner and RssMeme.
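At its core, the idea amounts to counting how many distinct readers shared each item and ranking the items by that count. A toy sketch in Python, with made-up share data (the real services obviously do much more):

    from collections import defaultdict

    # hypothetical share events: (reader, URL of the shared item)
    shares = [
        ("alice", "http://example.com/post-a"),
        ("bob",   "http://example.com/post-a"),
        ("carol", "http://example.com/post-b"),
        ("bob",   "http://example.com/post-c"),
        ("dave",  "http://example.com/post-a"),
    ]

    # count distinct sharers per item, then rank items by that count
    sharers_by_url = defaultdict(set)
    for reader, url in shares:
        sharers_by_url[url].add(reader)

    ranked = sorted(sharers_by_url.items(), key=lambda item: len(item[1]), reverse=True)
    for url, readers in ranked:
        print(len(readers), url)
    # post-a tops the list because three different readers shared it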

The two services, both dear to me, have a lot in common, with one notable exception: RssMeme employs a kind of spider to find and aggregate shared items, while ReadBurner is an opt-in service.

In due course, other feed readers were added as sources: Bloglines, Netvibes, NewsGator etc., and RssMeme went a bit further, querying known services to find out whether an article had been bookmarked in any way.

The idea that what one shares through his feed reader is actually a vote or a recommendation is pretty solid, and, once a big enough number of sharers is reached, the power of statistics comes into play: the articles that rise to the top are the ones that people truly feel are important. Isn’t this the essence of social bookmarking? And isn’t it also true that this essence is actually gamed on the Digg-like sites by a rather small group of people, despite the huge influx of traffic these sites enjoy?

One short visit to ReadBurner or RssMeme reveals, though, that the articles that rise to the top have been shared by such a small number of people that, with an equal number of diggs, they would never see the light of day on Digg.

Which leads to the conclusion that either the people who share are not that many, or they have not been included in the two aggregators yet.

Speaking of numbers, how many people really use Google Reader? I tried to google the question but came up with no answer. I also tried googling ‘Google Reader market share’, but came up with no recent data either.

Without an idea of how many people use feed readers and share items, it is pretty hard to make any predictions or recommendations. Yet, if we assume that the sharing culture hasn’t spread only because it is too early (less than a year in), and that it eventually will, we can fantasize about one implication:

Some clever engineer will think of incorporating the share-votes into Digg: a little bit of matching (is the sharer a Digg user, has the shared post already been dugg, etc.) and there you go.
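Roughly, the matching step imagined here could look like the following sketch; the data and the mapping of sharers to Digg accounts are entirely hypothetical:

    # fold Reader shares into Digg-style vote counts, but only when the
    # sharer maps to a Digg account and the story has already been submitted
    reader_shares = [
        ("alice", "http://example.com/post-a"),
        ("bob",   "http://example.com/post-a"),
        ("carol", "http://example.com/post-b"),
    ]
    digg_accounts = {"alice": "alice_on_digg", "carol": "carol_on_digg"}
    digg_counts = {"http://example.com/post-a": 12}  # already-submitted stories

    for sharer, url in reader_shares:
        if sharer in digg_accounts and url in digg_counts:
            digg_counts[url] += 1

    print(digg_counts)  # {'http://example.com/post-a': 13} -- only Alice's share counted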

But would that be a good thing?

Yes, it would, because it would instill the democratic element of ReadBurner/RssMeme into Digg. And democracy is a good thing, isn’t it?
