The Calacanis effect: 1st AWS Athens meetup

Jason Calacanis in OpenCoffee Greece

Image by nikan_gr via Flickr

Jason Calacanis urged the people gathered to listen to him at the last Greek OpenCoffee to abandon their isolation, form groups of common interest, and advance and thrive on cooperation. Half jokingly, half seriously, he appointed John Nousis, cofounder of the Zuni student social network, as the one responsible for a meetup of people interested in cloud computing and Amazon Web Services.

Well, today we witnessed this ‘joke’ come alive!

Close to 40 people gathered in the cafeteria of Eleftheroudakis’ bookstore to watch John’s presentation, discuss and network.

I got a positive feeling from the meeting for the following reasons:

  • Most people gathered were IT professionals.
  • They seemed to have a genuine interest in the subject, fueled by an unconfessed concern about some project they were running.
  • There were lots of ‘new faces’, which means this meetup drew people we do not usually see at other meetups.
  • The subject itself (cloud computing, scaling, the need for ‘unlimited’ bandwidth etc.) points to projects that do not bear the usual characteristics of Greek startups: local focus, limited reach and so on.

“Now what?” you might ask. I am not sure whether or when there is going to be another meetup, but my suggestion is that there should be one. And it should have a more concrete form: training interested people in the basics. Because startups need not only computing power that scales, but also people to maintain it.


Nothing lives on until you declare it dead: the web 2.0 case.

NASDAQ in Times Square, New York City.

Image via Wikipedia

The comparison of the current financial crisis with the thirties is a commonplace for bloggers and mainstream media alike. Yet extremely few, if any, of those making the comparison were born during, lived through, or anywhere near the time of the big crash of 1929. And as history teaches us that history does not teach people anything, the comparison of the current crisis to the 1929 one is just a matter of impressions. We will not be able to know and understand the implications of the current crisis for years. Which is also a generous premise, since, even today, we do not understand the causes of the 1929 crisis.

Another commonplace comparison is the one with the .com collapse. The parallels here are actually scarce, as very few of the web 2.0 companies made it to the stock market to get exorbitant valuations. So where is the comparison? In the fact that the good times for web 2.0, meaning capital flow, copycat financing, building a new venture in a matter of days and so on, will eventually end. And end they will, there is no doubt about it, since the meltdown of the big financial corporations will dry up the rest of the economy.

Yet, like a forest fire that helps a forest regenerate, this drying up will put the vigor of web 2.0 to the test. And as in the .com era, Darwin will see himself proven right once more, and the fittest will survive.

I can’t help reminding myself of all the articles I read in the late ’90s doubting the prospect of Amazon‘s survival. Amazon is now an undeniable e-commerce king. It has also gone far beyond that, transforming itself into the main web services provider, the behemoth of cloud computing.

A similar thing will happen to web 2.0: the most valuable companies, those adept and well adapted, those with sound business models and practices, will survive to see themselves become the business royalty of tomorrow. The crisis may take away some money capital from them, but it will also provide them with new cheap human capital, fewer competitors and a market that settles around them.

Is web 2.0 declared dead? Good! Now it can start living.


A strategy for the cloud



This post is an open deliberation on the best usage strategy for the new EBS offered by Amazon Web Services.

Last Thursday Amazon launched a much anticipated new service: Elastic Block Store (EBS).
EBS offers persistent storage (something that was missing from EC2) at a price of 10 cents per GB per month. In addition, one has to pay another 10 cents per million I/O requests.
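To put those rates in perspective, here is a back-of-the-envelope calculation in shell. The volume size and I/O figures below are made up for illustration; only the two rates come from Amazon's announced pricing.

```shell
#!/bin/sh
# Hypothetical usage figures -- adjust to your own workload.
size_gb=50            # provisioned EBS volume size
io_millions=10        # I/O requests per month, in millions

# 10 cents per GB-month plus 10 cents per million I/O requests
cost_cents=$(( size_gb * 10 + io_millions * 10 ))

printf 'Estimated monthly EBS cost: $%d.%02d\n' \
    $(( cost_cents / 100 )) $(( cost_cents % 100 ))
```

So a 50 GB volume doing ten million I/O requests a month comes to about six dollars — storage itself is clearly not where the cost risk lies; runaway I/O is.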

I was quick to jump in and test the new offering. As a matter of fact, I had to, because I was about to transfer a customer’s server to EC2 on Friday anyhow. It took me a few hours to accomplish the task, most of them spent moving data from server to server.

The setup and use of a new EBS volume is fairly simple, and with the aid of Elasticfox it is even simpler.
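For those who prefer the command line to Elasticfox, the same steps can be sketched with Amazon's EC2 API tools. The volume ID, instance ID and availability zone below are placeholders; these commands need the API tools installed and your AWS credentials configured, so take this as a sketch rather than a copy-paste recipe.

```shell
# Create a 50 GB volume in the same availability zone as the instance
ec2-create-volume --size 50 --availability-zone us-east-1a

# Attach it to the running instance as a new block device
ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdh

# Then, on the instance itself: put a filesystem on it and mount it
mkfs.ext3 /dev/sdh
mkdir -p /mnt/ebs
mount /dev/sdh /mnt/ebs
```

The one constraint to remember is that a volume can only be attached to an instance in the same availability zone.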

While the technicalities of the migration did not pose a serious obstacle, I realized that all of a sudden there was too much storage available and I needed a strategy for making the best and most cost effective use of it.

Let me explain.

The site I was migrating was a small one, so my choice of EC2 instance was obvious: the small one.
The small instance comes with 160 GB of storage, split in two: 10 GB for the operating system (only a fourth of it actually used) and the rest completely free.

Now, the problem with EC2 instances is that once one is shut down, its storage and data evaporate into thin air. Frequent backing up is, therefore, a must.

Before the introduction of EBS, one could back up the data to S3 (but the process was rather tedious), or to another server.

With EBS, one can decide how many gigabytes of persistent storage one needs, create a volume, attach it to the instance and make the necessary changes so that all data, from then on, is stored on the EBS volume. In case something goes wrong with the instance, all the data stays intact.
For the record, I created a 50 GB volume, so the storage available was now 210 GB.
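The “necessary changes” boil down to relocating the data onto the EBS mount. For a MySQL database, for instance, it can look like the sketch below — the paths are the Debian/Ubuntu defaults and /mnt/ebs is my assumed mount point, so adjust to your distribution.

```shell
# Stop the database first so the files on disk are consistent
/etc/init.d/mysql stop

# Move the data directory onto the EBS volume and symlink it back,
# so MySQL keeps finding its files at the usual path
mv /var/lib/mysql /mnt/ebs/mysql
ln -s /mnt/ebs/mysql /var/lib/mysql

/etc/init.d/mysql start
```

The same symlink trick works for the directory of uploaded files.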

EBS offers a kind of backup to S3 called a snapshot. With Elasticfox it is a piece of cake to take a snapshot of an EBS volume and store it to S3, and while the EBS risk of failure is a tenth of that of a normal hard disk, should it happen, the data can be restored from the latest snapshot on S3.
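With the API tools, taking a snapshot is a one-liner (the volume ID is again a placeholder, and the commands require configured AWS credentials):

```shell
# Snapshot the EBS volume to S3
ec2-create-snapshot vol-12345678

# Check on the progress and status of your snapshots
ec2-describe-snapshots
```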

Now, this is all well, but isn’t there too much redundant storage here? And doesn’t this mean a waste of money for AWS customers?

My first thought was to place the database files on the EBS volume too, to make them safe from failure. The uploaded files had to go there as well, for the same reason.
The application itself, not a really gigantic one, stayed in the 10 GB partition. And since it is not going to change that often, I built an instance image with the application included and stored it to S3.

What else is left to store? Not much. See the problem? The instance now has 150 GB of storage completely unused.

On second thought, I decided to move the database back to the EC2 instance from the EBS volume. The reason? Well, the database is responsible for most of the I/O, and I/O, as we said, is a cost factor.
Having the database on the instance does not consume too much space. A cron script can back it up at regular intervals and save the backup to EBS. In case of instance failure, the data between two consecutive saves is jeopardised, though.
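A minimal sketch of such a cron job, assuming MySQL and the EBS volume mounted at /mnt/ebs — the file name, schedule and paths are all illustrative:

```shell
# /etc/cron.d/db-backup -- dump the database to the EBS volume every 4 hours.
# Note: cron requires literal % signs to be escaped as \% in the command field.
0 */4 * * * root mysqldump --all-databases | gzip > /mnt/ebs/backups/db-$(date +\%F-\%H).sql.gz
```

The shorter the interval, the smaller the window of data at risk between two saves — at the price of more I/O, which, as noted, is itself billed.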

EBS cost issues aside, there are always those 150 GB left unused that puzzle me. Assuming most AWS customers will play it safe and utilize EBS, what are they supposed to do with the storage of their instances?
Either they have to come up with a reasonable use for it, or Amazon needs to change its instance policies. What do you think?

Photo courtesy of Flickr user skimaniac
