Opacity in Internet Explorer

This is an ancient problem: older versions of IE do not support the opacity CSS property. Instead, IE8 uses -ms-filter and IE prior to 8 uses filter. Also, in IE one has to be careful about the positioning of the element the opacity properties apply to: the element has to be positioned for the filter to work properly. Another trick is to use zoom.
Let’s wrap this up in the CSS snippet below:

#page {
  opacity: 0.5;
}
/* IE7 and less */
#page {
  filter: alpha(opacity=50);
  position: relative;
  zoom: 1;
}
/* IE8 specific */
#page {
  -ms-filter: progid:DXImageTransform.Microsoft.Alpha(opacity=50);
  position: relative;
  zoom: 1;
}

It is a pain but it works.

But what I found is that if you try to set these properties dynamically, through jQuery for instance, they are less obedient.

For filter and -ms-filter to be set through jQuery, the element has to be positioned through CSS and NOT by jQuery.

So one would need something like this:

/* IE less than 9: keep the positioning in CSS */
#page {
  position: relative;
  zoom: 1;
}

// then set the opacity properties from jQuery
if ($.support.opacity) {
  $('#page').css('opacity', 0.5);
} else {
  $('#page').css('filter', 'alpha(opacity=50)');
  $('#page').css('-ms-filter', 'progid:DXImageTransform.Microsoft.Alpha(opacity=50)');
}

This is empirical knowledge though. I don’t know why it is like this.
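
If you need this in more than one place, here is a minimal helper sketch that wraps the same logic; the function name is illustrative, and it assumes the target element is already positioned (and zoomed) in CSS as described above:

// Apply an opacity value between 0 and 1 in a cross-browser way.
// Assumes the element is already positioned via CSS, per the note above.
function setOpacity($el, value) {
  if ($.support.opacity) {
    $el.css('opacity', value);
  } else {
    var percent = Math.round(value * 100);
    $el.css('filter', 'alpha(opacity=' + percent + ')');
    $el.css('-ms-filter', 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + percent + ')');
  }
}

// e.g. setOpacity($('#page'), 0.5);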


Hootsuite can post to a WordPress.com blog

Hootsuite can post to a WordPress.com blog.

I am really impressed by this feature. And the reason I am writing this post is to explore its capabilities.
Does the Twitter character limit apply?
Can I upload pictures and videos?

Update 1: The character limit does not apply, as this very post testifies.

Is Social Search a threat to SEO?

I think it is, and I tweeted so yesterday. And the reason is obvious. What is SEO about? Ultimately, it is about one thing: the ‘website’. It’s about making a website and its pages discoverable, ranked favorably in search results, described appropriately so that searchers are hooked by the description, etc.

But ‘websites’ are not ‘in’. Check the diagrams from Google Trends for websites below.

Website traffic for 5 major IT companies
Website traffic for the 2 major consumer goods companies

While the overall number of people online is increasing, the visits to the web sites keep falling.

At the same time the volume of searches for these brands shows a completely different picture.

Search volume for the 2 major consumer goods companies
Search volume for 5 major IT companies

In the last 12 months the CG companies see a volume increase or remain steady (amidst the crisis), while for IT a longer perspective reveals a mixed picture that has to do with what these companies are and the technologies they offer:

  • Oracle and IBM are gradually decreasing,
  • Apple is increasing,
  • Dell increases too, although less quickly,
  • and HP seems to hold its ground or to be declining slightly.

But there is an equally important movement under way: people are shifting their reliance from search to peers for news, recommendations and answers.

I don’t remember how many times and about how many things I have asked my twitter friends’ advice. And it always comes. And most of the time it’s good too. Not as abundant as search results, but who reads search results past the first page anyway?

Enter social search. Google injects results from our social graphs into search (opt-in). I don’t need to argue for the usefulness of this.

What should we expect? What else than these two inversely related trends accelerating? Less reliance on search, more reliance on peer recommendations.

There are some interesting implications here: SEO consulting and search advertising have profited from our reliance on search. Search won’t go away anytime soon, especially with the social element in it. But what would be the need for SEO? And what would be the need for AdWords advertising, if the important factor in search results turns out to be our peers?

Is Google shooting itself in the foot? So it seems. But I am sure they have figured it out already and they are thinking of alternatives.

Is the Salmon protocol tasty enough?

Conversations on the social web are mostly performed through comments. But comments are so fragmented! Consider this example:

  • Publisher publishes a blogpost
  • A regular reader of the Publisher comments on the blogpost
  • Someone else reads the post in Google Reader and shares it
  • Another comments on and reshares the Google Reader item
  • Another decides to share it on Facebook
  • Another comments on the Facebook link above
  • Another submits the link on Digg
  • Another comments on the Digg link
  • Publisher has a Friendfeed account and the post appears in his FF stream
  • Another user comments on the FF stream item
  • etc

Obviously the post has stirred some interest and generated a conversation. But the conversation is dispersed in many different places. Publisher loses track of many aspects of the conversation around the post. Commenters also mostly ignore what happens outside their area of interaction with the content: Facebook users ignore the FF commenters, etc.

This situation has sparked some intense debates. Many publishers think this situation is not in their best interest, as potential traffic to their sites is deflected to an ‘aggregator’. Especially publishers that have a financial interest in their site traffic, and do not just want their opinions spread, find this particularly unappealing.

To mend this situation, a group in Google is working on a new protocol that will allow comments to ‘return’ to the original publisher’s site. The protocol is called Salmon

Salmon protocol logo

and you can get a basic idea of its workings from this presentation.

Salmon does bring the comments back to the publisher’s site, but it does not solve the publishers’ problem. As you can see from slide 4, once a comment is back on the publisher’s site, it is republished to all its subscribers (including the aggregators). What this would mean is that each aggregator has a full picture of the comments around the post, regardless of origin. From the user’s standpoint there is no need to move to the publisher’s site or to another aggregator for any reason, as the full picture will be available on whatever site the user prefers to frequent. The publishers may object, but by what right? The publishers’ protests imply they OWN the comments, which is hardly the case. The user owns his comments.

But let’s leave aside the publishers’ concern for a moment. Is Salmon a good thing for the user? I would argue it is. He can have access to a discussion in its entirety without much hassle. And therefore he might be tempted to engage, or engage more.

But there is something still missing: the user does not have easy access to his own comments for ALL pieces of content he has interacted with. And he has no control either. They can disappear with a site that closes down. Or, in the simplest case, they can be deleted by the site moderators. This is the problem that systems like Disqus, Intense Debate and JS-Kit are aiming to solve. But they won’t, because it is very unlikely that one of them will become ubiquitous.

I think the problem should be approached from another angle. A comment is a piece of content. There is no distinction in form from any other piece of content. They are both text (or audio or video in some cases). What subordinates a comment-content to the original post-content is notional and semantic: the post-content preceded the comment-content, and the post-content was actually what aroused the commenter’s interest in the issue. But the same applies to a post that pingbacks to another post. So a comment is a piece of content and should have independence.

The question is how?

The issue is related to our digital identities: if in the web-to-come we can have a unique, independent central point for our digital identities, this central point could be the originator and host of our comments.

A modification of the Salmon protocol could easily let this happen: whenever a user comments on a publisher site, the site will send the comment back to the user’s digital identity home. Likewise, whenever an aggregator receives a user comment, the aggregator sends the comment back to the user’s home, as well as to the publisher.
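
In rough terms, the extra step is just one more delivery target for each comment. The sketch below is illustrative only: real Salmon exchanges signed Atom entries (Magic Signatures), and the endpoint URLs and the plain payload here are hypothetical simplifications of the idea.

// Fan a newly received comment out to both the publisher's Salmon endpoint
// and the commenter's 'identity home'. Endpoints and payload are illustrative.
function fanOutComment(comment, publisherEndpoint, userHomeEndpoint) {
  var payload = {
    author: comment.author,      // the commenter's identity URI
    inReplyTo: comment.postUrl,  // the original post being commented on
    content: comment.text
  };
  $.post(publisherEndpoint, payload);  // upstream to the publisher, as Salmon already does
  $.post(userHomeEndpoint, payload);   // and, additionally, to the user's own 'home'
}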

I do not think this is difficult to implement, although I can predict the frictions about who controls the user’s digital ‘home’. But that’s another issue.

Read also Louis Gray’s post on Salmon

Web 2.0 without javascript?

A couple of days ago I came across this terrifying presentation from John Graham-Cumming.

Although the topics covered weren’t entirely new to me, put together in one presentation they had an impact. I came to wonder if and how the major Web 2.0 sites would work if JavaScript were out of the picture.

I decided to make a little test to find out: I disabled JavaScript in my browser and started logging in to such sites to see how they would behave.

Here is the outcome for the three most important ones for me.

a. Twitter

Most of the functionality was in place: the timeline, friends and followers. Of the various buttons on the tweets and the timeline pages, reply did work but the fav button did not.

The direct message and delete buttons did not work either. Same with the drop-down where you select a follower to DM and, finally, the followers and trending topics buttons.
But all these are rather trivial, because most of the tweet buttons replicate user behavior (putting the @ sign in front of another user’s name for a reply, or the letter d for a direct message).
Not being able to fav or, more importantly, to delete is a loss, but not a major one.

b. Facebook
Things are worse in Facebook: while Home, Profile, Friends and Settings are accessible, the inbox and chat are not.
Also, from the bottom bar, the applications menu is inaccessible. Most of the edit links and buttons don’t work either and, finally, status updates, link sharing, photos etc. cannot be submitted.

c. Youtube
Here things are disastrous: without JavaScript you cannot see the videos! On top of that, you cannot access your account settings or your mailbox. There was no point in looking for more.

A small gallery with pics of the failure areas of the above web applications follows.

Facebook tremors today

I thought it was just me, but a search (employing Facebook search) revealed that this is a widespread issue. See the picture.

Picture 7

I haven’t seen any announcement anywhere. Or a post in one of the tech blogs. Anyone?

Facebook Feeds

I added a Lifestreaming plugin to my blog recently and, as I was entering the feed URLs of the various Web 2.0 sites I am participating in, I stumbled upon the Facebook problem.
Since its last change, the old mini-feed has disappeared, so one has to reassemble it from its components.
I was particularly interested in the Notes feed, the Links feed and the Status feed.

Why?

Well, Notes is Facebook’s take on blogging.

Notes
Although I rarely use it, it can occasionally contain some thoughts that are posted nowhere else.


By clicking the Notes tab in your profile (provided you have added the tab to your profile), you get a column on the right side which, at its lower part, has the Notes feed. Like this:

The structure of the URL is as follows:
http://www.facebook.com/feeds/notes.php?id=<yourid>&viewer=<yourid>&key=<yourkey>&format=rss20

Links

The Links feed is essentially the feed of all the sharing activity in Facebook, so it is a must to include in a lifestream. Working as with Notes, we can find it in a similar place.
The structure of the URL is as follows:
http://apps.facebook.com/feeds/share_posts.php?id=<yourid>&viewer=<yourid>&key=<yourkey>&format=rss20

Status
Last, the Status feed is the most important one, especially if no cross-posting is taking place on your Facebook Wall, as it comprises all the original thoughts and situations you share on Facebook.
But where is this feed located?
As much as I searched, I could not find it.

So after discussing this on Twitter, I realized from the responses that the structure of the status feed URL must be the same as that of the other two feeds.

First guess: replace notes.php with status.php and … voila, it works!
http://www.facebook.com/feeds/status.php?id=<yourid>&viewer=<yourid>&key=<yourkey>&format=rss20
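
Since all three feeds share the same structure, a tiny helper can build the URL from the placeholders above; the function name is illustrative, and <yourid>/<yourkey> are still the values you have to dig out of your own profile:

// Build one of the three Facebook feed URLs described above.
// type is 'notes', 'status' or 'share_posts' (the Links feed).
function facebookFeedUrl(type, userId, key) {
  var host = (type === 'share_posts') ? 'apps.facebook.com' : 'www.facebook.com';
  return 'http://' + host + '/feeds/' + type + '.php' +
         '?id=' + userId + '&viewer=' + userId +
         '&key=' + key + '&format=rss20';
}

// e.g. facebookFeedUrl('status', '<yourid>', '<yourkey>')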


Tethering through a Nokia N95 phone

I am ‘locked’ up today in my mother’s house, which, quite unsurprisingly, does not have internet access. One option is to steal my way to the net through a neighbor’s open wifi. Not without some odd problems though: I can open my Gmail in https, send and receive mail as normal, I can browse pages in https (where supported), I can use a twitter desktop client, I can use twitter through (you guessed it) https, but every other simple web page request through http fails!

But problems can make one creative! Having encountered the same situation before, this time I came prepared. I had preset my MacBook Air and N95 for tethering and I could surf the web with no restrictions. Well, almost, as my data plan isn’t for heavy use (it’s just a quarter of a GB per month).

How ?

You’ll need a USB cable.
Connect the phone to the Mac and select PC Suite on the phone.
Make the settings as in the pictures below.
See the full gallery on posterous.

One can achieve the same without a USB cable (through Bluetooth), but I did not try it, as Bluetooth drains the phone battery all too quickly.

The above settings are specific to my provider, but you can get an idea of how it would work with yours. A tip: do not confuse the APN (Access Point Name) with the connection name on your phone. Open your connection (the one you use to connect to the internet) and see the APN name there.
Leave a comment if you have tried this with a different provider.

 


A call for twitter clients interoperability

A basic one at least.

With the advent of the second-gen Twitter clients, which support, among other things, groups, users are confronted with higher barriers to entry and exit: in all clients, the painstakingly prepared groups are hardwired into the client. There is no easy way to get them out. When one desires to switch to a new client, he has to recreate all these groups by handpicking users one by one.

The problem becomes apparent even in the case where one does not necessarily want to change Twitter clients, but simply has to work on two (or more) different computers.

In my case, I had to recreate the Tweetdeck groups for my desktop and laptop computers. And I did it only for two of the four computers I use and for two of the seven operating systems (3 Windows, 1 Mac, and 3 flavors of Linux). I did not even manage to create them as exact copies.

Now, this call might sound like a luxury request, but given the path the Twitter clients have taken (check Nambu or AlertThingy or Seesmic Desktop, to mention a few), you will notice that ‘groups’ is one of their prevalent characteristics. So dealing with this feature effectively is essential for the success of the product.

And it is very simple really. A CSV or XML file would suffice, with the bare minimum of information: the name of the group, the Twitter account it belongs to, and the follower or friend ids that belong to it. A sketch of what that could look like follows.
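
For example, a hypothetical export along these lines (the field names and the serializer are illustrative, not any client’s actual format):

// Serialize an array of groups to the kind of minimal CSV described above.
function groupsToCsv(groups) {
  var rows = ['group_name,twitter_account,member_ids'];
  groups.forEach(function (g) {
    rows.push(g.name + ',' + g.account + ',' + g.memberIds.join(';'));
  });
  return rows.join('\n');
}

// groupsToCsv([{ name: 'devs', account: 'my_account', memberIds: [123, 456] }])
// => "group_name,twitter_account,member_ids\ndevs,my_account,123;456"

Any client that can read such a file back in could rebuild the groups without handpicking users.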

And even if the software vendors do not want to go this way for fear of losing users, there is something else they could do to allow portability of groups for the same client across different computers: store this piece of info in the cloud. Create a simple web app that each client connects to in order to retrieve the group data.

Some people have already hacked their way to group migration for Tweetdeck but this, clearly, is not the way for the masses. Hence this call. But is anybody listening?