
Talk Cloudy to Me! — Some Notes

Whatever I know about cloud computing comes from Wikipedia and the How Stuff Works page, which is actually a rather good overview. The tone of these articles, though, suggests a kind of optimism rarely seen since the advent of the microprocessor, and this Meetup gave the attendees a good, long look at the difficulties involved with putting data online.

I took hasty notes on my iPad (which might have been invented for tasks like these), so here are the takeaways from each talk I attended. Anything that sounds nonsensical is most likely my own misunderstanding creeping in; mistakes are mine, not the speakers’.

Which Cloud? by Nitin Borwanker

This was an interesting introduction to the Cloud that I wish I’d been able to see in full. I wasn’t planning on catching it, but the light rail was remarkably efficient and dropped me off at the doorstep of PayPal.

  • The cloud might seem like a technical, IT-driven solution, but it’s become increasingly important for straight-up science. Things like studying cell lines become more doable if the computation is outsourced to the cloud (that link is to an abstract doc I found online that briefly discusses this; I don’t know anything about its origins, it’s just an example). By the way, one beautiful proof that crowdsourcing is useful is this news about 3D protein folding.
  • There are basically three criteria for figuring out which cloud solution is good for you: size, speed, and ease of adoption. Borwanker thinks (if I remember right) that ease of adoption is an oft-ignored factor that really should be influencing the decision as well.
  • MongoDB actually scores very well on all three factors (see the little sketch just after this list for what that ease of adoption looks like). Should’ve checked it out when I had the chance, alas, but that’s what open source forums are for!
  • The Stockholm syndrome of cloud is catching up with us. When it works, we love it (Facebook!); when it breaks, we hate it (Facebook!) but we essentially end up needing it and wanting it.
  • Basically, take the path of least compromise towards your cloud solution. It’s not going to work out of the box; you’re going to have to modify something in your system or adapt your perfect solution to your needs. It’s going to be a compromise between what the system offers and what you need.
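Since MongoDB came up as the easy-adoption option, here is a minimal sketch of my own (not anything Borwanker showed) of what that looks like in practice. It assumes a local mongod instance and the pymongo driver are installed; the database, collection, and field names are made up purely for illustration.

```python
# A minimal taste of MongoDB's "ease of adoption": no schema to declare,
# no table to create up front -- just connect, insert, and query.
# Assumes a local mongod and the pymongo driver (pip install pymongo).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["meetup_notes"]   # the database is created lazily on first write
talks = db["talks"]           # so is the collection

# Insert a document; no schema migration is needed if fields change later.
talks.insert_one({"speaker": "Nitin Borwanker", "topic": "Which Cloud?", "year": 2011})

# Query it back with a simple dict-based filter.
for doc in talks.find({"year": 2011}):
    print(doc["speaker"], "-", doc["topic"])
```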
History of Cloud by James Watters
  • Watters seemed to concentrate on examining the myths of cloud computing. His angle was that it was “all marketing”. The three myths he examined:
  1. “Easy Provisioning”: this, says Watters, is a big win. Cloud lives up to the hype of making life easier once the system is up and running.
  2. “Commodity pricing”: As far as I could tell, Watters was saying that although DRAM costs were falling, the ones who were really benefiting were the much larger companies, not necessarily the SMEs. That means the per-hour cost for a startup is going to be about the same now as it was maybe three years ago, while costs for large companies like Google keep dropping steadily. I’ve written “Is google big enough to do good costs?” in my notes, which makes little sense now, but I wonder if he was referring to Google cutting costs for people using its cloud services. For that matter, what about Amazon?
     On the other hand, Google App Engine seems to have taken its costs up to ten times the original price, because individual instances of the apps were proliferating and the RAM footprint goes up.
  3. “Scale is more important than software”: Not true, says Watters. For instance, memory compression software is some of the most interesting software around, and that sort of thing still needs to be written.
  • Speaking about Urban Airship, a startup founded solely to handle notifications, Watters pointed out that it’s all about scale. When it began, UA was doing about 1 million notifications; now the number’s closer to 5 billion.
  • So scaling is the most powerful thing about cloud — and at the same time, hardware still matters.
  • He also began to touch on code development in the cloud, saying that we’d rather not have testing scattered across multiple VMs; we should just be able to push code to an automated system. I wonder if this is how CI (continuous integration) comes into the picture (there’s a toy sketch of that idea right after this list).
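That last bullet is the closest the evening came to a how-to, so here is a toy sketch of the “push code to an automated system” idea as I understand it. The pytest invocation and the deploy.sh step are my own placeholders, not anything Watters showed.

```python
# Toy continuous-integration-style gate: run the tests, and only hand the
# code to the rollout system if they pass. Commands here are placeholders.
import subprocess
import sys

def run(cmd):
    """Run a command, echoing it first, and return its exit code."""
    print("$", " ".join(cmd))
    return subprocess.call(cmd)

def main():
    # 1. Run the test suite on the freshly pushed code.
    if run([sys.executable, "-m", "pytest", "-q"]) != 0:
        print("Tests failed; nothing gets deployed.")
        sys.exit(1)

    # 2. Hand the build to whatever automated system does the rollout.
    #    "./deploy.sh" is a made-up stand-in for a real deployment step.
    if run(["./deploy.sh"]) != 0:
        print("Deploy step failed.")
        sys.exit(1)

    print("Code is on its way to the automated rollout system.")

if __name__ == "__main__":
    main()
```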
Cloud Computing Evolution at eBay by JC Martin
  • First, some cool stats: eBay’s internal cloud is about 6000 app servers; the company loses about $2000 every second that the site is down; and the company has about 90 million users, roughly the entire population of the Philippines.
  • Obviously cloud solutions are of interest. eBay runs sites in many countries, and even at that international scale it has to provide the same agility and security to each and every instance of the site.
  • Cloud was developed internally to improve the agility and productivity of eBay’s developers.
  • Martin went on to explain that, on top of that, traffic to eBay is spiky; the servers are only really pushed to capacity about half the time. When they are, though, why not burst to the cloud? eBay won’t want to rent on the cloud all the time, but it could be useful to have the extra capacity around, especially during emergencies and holidays (there’s a toy sketch of that bursting logic after this list).
  • So eBay decided to build a hybrid model of private and public cloud. Right now they would focus on the private cloud, and then in the future migrate to the public cloud.
  • They are also looking at open source software that will let them build their own solutions. At the moment there is a public IP layer that makes private data visible to the public space; in the future they want to extend the internal cloud to the external IP space. Some “DNS magic” has to happen to make sure the two spaces work together, said Martin.
  • One of the troublesome aspects of staying mostly physical, said Martin, was that coupling between applications was high and latency between applications could not be guaranteed. Bids and buys need to be updated immediately, and unknown latencies are a problem for those. To get low (and guaranteed) latency, eBay needs to be thinking about easily scaled databases like MongoDB.
  • Then Martin explained that the old silo’ed app servers were highly inefficient; each server had to be labeled manually and then changed if necessary, and deployment after receiving and configuring servers would take several weeks. With rack-and-roll servers, on the other hand, deployment time dropped from weeks to 45 minutes. Those were some seriously impressive timelines on his slides, if I was understanding him correctly.
  • Martin also mentioned that though open source clouds were widely available for some applications, they weren’t yet available for many others. As open source options became available across the infrastructure layers, from hardware up to the application level, eBay would be thinking about replacing its own internal systems with them.
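The “burst to the cloud” point a few bullets up is easier to see in code, so here is an entirely hypothetical control-loop sketch of my own. The utilization check and the rent/release functions are made-up stand-ins, not eBay’s (or any provider’s) actual API.

```python
# Caricature of hybrid-cloud bursting: serve from the private cloud by default,
# and rent extra public-cloud capacity only while load exceeds a threshold.
# Every function here is a hypothetical stand-in, not a real provider API.
import random
import time

BURST_THRESHOLD = 0.8   # fraction of private capacity in use before bursting
burst_instances = 0     # count of public-cloud instances currently rented

def private_cloud_utilization():
    """Stand-in for a real metrics query; returns current utilization 0.0-1.0."""
    return random.uniform(0.3, 1.0)

def rent_public_instance():
    global burst_instances
    burst_instances += 1
    print(f"Bursting: added public instance #{burst_instances}")

def release_public_instance():
    global burst_instances
    if burst_instances > 0:
        print(f"Load dropped: releasing public instance #{burst_instances}")
        burst_instances -= 1

def control_loop(ticks=10):
    for _ in range(ticks):
        load = private_cloud_utilization()
        print(f"private utilization: {load:.0%}")
        if load > BURST_THRESHOLD:
            rent_public_instance()
        else:
            release_public_instance()
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```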
Such stuff as dreams are made of: Patrick Chanezon
  • I enjoyed this talk for reasons other than Chanezon’s frankly rather attractive accent — he spoke about cloud for developers, for the sorts of practices and philosophies we should be carrying over now into the future if we want cloud to really work for us (I like how I’m including myself in the “developer” category).
  • He thinks that now, after mobile and social apps have taken to the cloud, it’s finally going to become mainstream. Chanezon also thinks the singularity is “bullshit”: not everything will be automated. Software, he says, is still a craft. And cloud can play a role in increasing productivity.
  • In the 60s, computers were mainframes. In the 80s, they were client-side, like Macs and Windows. In the 90s came web-coding, and now the web is like the mainframe all over again.
  • Now, with HTML5 and mobile apps, we can push code to the cloud and it should be able to scale and distribute it for us — code once, deploy millions of times.
  • Chanezon thinks we’re at the peak of the cloud hype: the speeches, the marketing, everything. “Cloudy with a real chance of innovation”, he explained.
  • Some of his observations revolved around the move from vertical scalability to horizontal scalability, with databases managed by a single provider. Storage capacity, he says, is increasing faster than Moore’s Law.
  • “Cloud is a productization of the growing virtualization at private companies”, which sounds impressive, but I believe he’s just restating the nature and importance of cloud here.
  • Chanezon wants the Internet to be viewed as a platform: “platform as a service”, but something that could be an architect’s dream and a developer’s nightmare.
  • He called for a rewrite of the ACID principle: now it should be “Associative, Commutative, Idempotent, and Distributed” (there’s a small sketch of what that buys you at the end of these notes).
  • Here, Chanezon breezed through the “Starbucks doesn’t use two-phase commit” example, which I’m afraid I didn’t fully understand. His point, I think, was that the barista starts making your coffee before your payment has settled; if something goes wrong (a rejected drink, a failed payment), they remake it, refund you, or throw it away rather than locking the register and the espresso machine into a single atomic transaction. I’m not quite sure how the analogy plays out in full, though (my attempt at a sketch is at the end of these notes).
  • Now for some very nice “cultural” changes that he introduced, which I found particularly relevant. Instead of a huge monolithic code base that ships changes on a schedule, to everyone at once, code testing is becoming far more adaptable and agile, Chanezon pointed out. Facebook and Google, for instance, roll out code as soon as they think it’s ready for some beta testing, getting feedback from their users and sometimes taking massive risks (Wave, anyone? Buzz?).
  • But then again, you learn from failure. Chanezon was formerly of Google, and quoted the “fail often, fail quickly and learn” principle. This gives you time to invent and learn along the way instead of trying to be perfect from the get-go and having no energy to be flexible.
  • The API culture is another big deal — when Twitter started out, they built a huge monolithic internal backend which they later farmed out to developers as their API. Now, Chanezon says, it pays to think of an API first and a UI later.
  • What’s the deal with platforms? Chanezon sees them as a service — developers don’t want to install a billion VMs and build on those; they want to be able to test and deploy quickly on the cloud, which will automagically help them with it.
  • Interestingly, Chanezon thinks that another aspect of the culture change will be the craftsmanship model: we’re going to move from learning from large organizations to learning from small bloggers or certain specific experts. This is interesting, because “craftsmanship” gives me the impression of small, inefficiently scaled things that reduce availability for people. But then again maybe I’ve been brainwashed by capitalism or whatever; come to think of it, open source is a bit like the organic food movement all over again, isn’t it? Freeing up code for the masses!
  • One of the biggest takeaways involves some profanity (WARNING): “be your own bitch”, says Chanezon. “Don’t depend on third party platforms; examine the business model and strategy of the provider: where are they headed? Will they eat you up or buy you out like Twitter, for instance? Look at the terms and conditions. Can you monetize? Be safe from your providers.”
  • And finally, some personal advice: forget expertise in a single language. Be agile; UI design is extremely important, and I can see exactly how that would be. Software is becoming mainstream, or rather, whatever’s mainstream is finding its way into software. It’s not the military network anymore, it’s the internet for the average citizen. Learn JavaScript, says Chanezon, and become used to the Babel of languages.
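Chanezon’s “Associative, Commutative, Idempotent, and Distributed” rewrite of ACID only clicked for me when I tried to put it in code. Here is a tiny sketch of my own, not something from the talk: a counter that records unique event IDs instead of incrementing a number, so replicas can merge in any order and grouping, and duplicate updates do no harm.

```python
# A counter whose merge operation is associative, commutative, and idempotent,
# which is what makes it safe to distribute across replicas.
# This is my own toy illustration, not code shown in the talk.

class EventCounter:
    def __init__(self, event_ids=()):
        self.event_ids = set(event_ids)   # each increment carries a unique ID

    def increment(self, event_id):
        self.event_ids.add(event_id)      # re-applying the same event is a no-op

    def merge(self, other):
        return EventCounter(self.event_ids | other.event_ids)  # set union

    @property
    def value(self):
        return len(self.event_ids)

# Two replicas see overlapping events in different orders...
a = EventCounter(["e1", "e2"])
b = EventCounter(["e2", "e3"])
c = EventCounter(["e4"])

# ...and merging is order-insensitive, grouping-insensitive, and duplicate-tolerant:
assert a.merge(b).value == b.merge(a).value == 3                  # commutative
assert a.merge(a).value == a.value                                # idempotent
assert a.merge(b).merge(c).value == a.merge(b.merge(c)).value     # associative
```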
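And my best guess at the “Starbucks doesn’t use two-phase commit” analogy, as a toy: start the work optimistically and compensate when something fails, instead of coordinating one atomic commit between the barista and the register. All of the functions below are made up for illustration.

```python
# Toy version of "no two-phase commit at the coffee shop": begin the work
# before payment settles and compensate on failure instead of coordinating
# an atomic commit between the barista and the register.
import random

def make_coffee(order):
    print(f"Barista starts on the {order} before payment settles.")
    return f"cup of {order}"

def charge_customer(order):
    """Stand-in for a payment step that sometimes fails."""
    return random.random() > 0.2

def discard(drink):
    print(f"Compensation: the {drink} goes in the bin; no global rollback needed.")

def serve(order="latte"):
    drink = make_coffee(order)          # optimistic: work starts immediately
    if charge_customer(order):
        print(f"Payment cleared; hand over the {drink}.")
    else:
        discard(drink)                  # cheap local compensation, then move on

if __name__ == "__main__":
    serve()
```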