Tuesday, March 31, 2009

The line missing from Open Cloud

Ron Tolido made a good point about the link between the new Intel processors (can't wait for the new Mac Pro) and cloud: people can still do internal work pretty quickly, and these processors help. But I think there is a more obvious link that is missing from the Cloud Manifesto.
Open Cloud means x86
It really is that simple. For a cloud to be open and allow portability you are going to need zero proprietary solutions: no z/OS from IBM, no Google App Engine lock-in, no Azure lock-in. The base platform of portability is the x86 machine.

Now you can argue that a Java VM could be portable if you have full JDBC libraries and the like, and I wouldn't argue too much, so yes, you could have an Open Cloud for Java approach. Theoretically you could do the same with .NET. The Platform as a Service play is, however, fundamentally a lock-in play, in the same way that JavaEE vendors give you lots of specific libraries and features that only work on their platform; sure, you can avoid those, but a PaaS provider can make that rather hard.
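To make the Java point concrete, here's a minimal sketch of what portability actually means in practice (the connection URL, table and credentials are illustrative placeholders): if the code only touches standard JDBC, moving between providers is a one-string configuration change; the moment you pull in a provider-specific library, the exit door starts to close.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PortableQuery {
    public static void main(String[] args) throws Exception {
        // The URL is the only environment-specific piece; everything below is
        // standard JDBC, so the same class runs against any provider whose
        // driver is on the classpath. (Placeholder URL and credentials.)
        String url = System.getProperty("db.url", "jdbc:postgresql://localhost/orders");

        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id, status FROM orders WHERE status = ?")) {
            stmt.setString(1, "OPEN");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("status"));
                }
            }
        }
    }
}
```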

Infrastructure as a Service is the most obvious place for Open Clouds to start, and that means agreeing on x86 as the basis. It clearly can't be Java, as MS are unlikely to sign up to that, and IBM are highly unlikely to agree to .NET; therefore the lowest common denominator is the virtualised physical machine, which has to be x86.

It would be good to see an Open Cloud for Java, but can we at least agree that when it's an infrastructure cloud, the cloud must be x86 based?




Monday, March 30, 2009

The importance of common reference points

Recently I've had a couple of occasions where a major issue was created from something quite simple because a perceived common understanding just didn't exist. What does this mean? Well, it means that one team thought the definition of X was one thing and the other team thought it was something else. Because it was so clear to both teams, however, neither checked, and it became an issue.

The point was that the common reference point wasn't what everyone thought it was. This isn't about canonical data models, which can still suffer from issues of definition; it's about the cultural aspects of a programme and the importance of defining a few basic elements that people can agree on. The core of these basic elements is that everyone agrees on a specific definition, because any change to these basic building blocks can have lots of unintended consequences.

Since then I've been looking for an example that lots of people could understand of how important this is. Well, thanks to the wonders of the previous US Administration, I have one.

Daylight Saving Time.

For many, many years the whole world switched to daylight saving, or summer, time at the same point. This helped to some degree with power consumption, but the point was that everyone who was going to switch did switch. Some countries didn't switch at all (e.g. India), but at the top of the Northern Hemisphere everyone switched at the same time.

Then some bright spark in the US Government decided that, while the rest of their energy policy was a complete shambles, they could at least sow more confusion by making the US switch a few weeks earlier, which meant that the poor Canadians had to change to match their crazy neighbour.

So historically there was a set of dates that everyone agreed on, and everyone switched at the same time. This meant that calendaring solutions could work globally in a simple manner and you wouldn't have to upgrade them, as it was all known well in advance. It also meant people could know "East Coast = UK − 5 hours" or "India = UK + 5.5 in the winter and UK + 4.5 in the summer" and then combine the two to get a three-way call going.

Now, however, those basic rules fail for a few weeks of every year because the US moves earlier and drags Canada with it. This means that for a couple of weeks I've had lots of calls that have "missed" due to the timezones, as people just don't expect a fixed reference point to move like that.
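To illustrate how the moving reference point bites, here's a small sketch using Java's standard time-zone database (the meeting itself is made up; the zone rules are the real ones): for a few weeks in March, the mental "East Coast = UK − 5" arithmetic is silently out by an hour, while India stays put.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ThreeWayCall {
    public static void main(String[] args) {
        // A 15:00 London call in mid-March 2009: the US switched to DST on
        // 8 March but the UK doesn't switch until 29 March, so the usual
        // "UK - 5" rule for the East Coast is wrong for three weeks.
        LocalDateTime meeting = LocalDateTime.of(2009, 3, 16, 15, 0);
        ZonedDateTime london = meeting.atZone(ZoneId.of("Europe/London"));
        ZonedDateTime newYork = london.withZoneSameInstant(ZoneId.of("America/New_York"));
        ZonedDateTime mumbai = london.withZoneSameInstant(ZoneId.of("Asia/Kolkata"));

        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("HH:mm z");
        System.out.println("London: " + london.format(fmt));  // 15:00 GMT
        System.out.println("NY:     " + newYork.format(fmt)); // 11:00 EDT, not the expected 10:00
        System.out.println("Mumbai: " + mumbai.format(fmt));  // 20:30 IST, unchanged all year
    }
}
```

The fix isn't cleverer arithmetic; it's the same as on a programme: use a single agreed reference point (here the tz database) rather than letting each party carry their own private rule in their head.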

So the rules for reference points when you are working in distributed teams are simple. Firstly, make sure they REALLY are clear; secondly, make sure they REALLY are fixed. Whether it's something as simple as an XML schema and a data dictionary or as complex as a full business strategy, the important element is that both sides agree on the starting points and on where they interact.

This, for me, is the basic principle behind my SOA method: creating a simple reference point that all people, from both business and IT, can agree on, and defining the boundaries between teams. This makes the reference points explicit and massively helps reduce the communication problems. Best of all, I don't have a US Government making ad hoc decisions to change parts of the model mid-way through the project.




Thursday, March 26, 2009

Trains and QoS

One of the challenges of designing a service is understanding the Quality of Service (QoS), both in terms of what it is and, importantly, how it is measured. The point about QoS is that it should be from the perspective of the consumer rather than the producer. Historically people have ignored pieces like network connectivity and latency, but this is rapidly becoming nonsense as people look at cloud computing and SaaS as standard parts of an IT infrastructure.

To use an analogy: I live about 30 miles from the office in Soho; Mike Morris lives in south London, about 10 miles away. Now, opening the door to get into the office takes about 10 seconds, and historically this is the SLA that would be given: the QoS would be measured from the point of standing outside the door to being inside, a whole metre.

Now, both of us are on train networks within walking distance at both ends: me on the overground system, Mike on the underground. In other words, we both have network connectivity to the office.

Now here is the point: despite the fact that I live significantly farther away than Mike, it takes us about the same time door-to-door, because my train moves miles faster and stops much less than his underground train. From a QoS perspective, our normal QoS is pretty much the same. This is where most people stop, if they bother at all: measuring the normal end-to-end performance. However, to really understand QoS, especially in a more critical area, it's important to look at what happens when things go wrong.

For Mike there are a couple of alternative routes if the main one fails; these take a bit longer (let's say 20-30% longer), and in the worst-case scenario he could walk there in about 2 hours (a 120% increase). From a disaster recovery perspective that's not actually over-worrying from a QoS point of view; it's like switching from a dedicated commercial internet connection to ADSL, or at worst to dial-up: you can still get there but the performance sucks.

For me, however, we are in a different world of hurt. There are no alternative train routes, so it would be either the car (let's say a 150% increase) or, if that was buggered too, for instance in the recent snow, walking, which would take an impressive 11 hours; in other words it wouldn't work at all. Therefore I need an alternative solution, and this comes in the form of home working, where I have a duplication of the office environment (power, light, internet connectivity) and some additional software (a VPN) to ensure I can continue working.

This is the point of QoS when looking at cloud, SaaS and distributed SOA solutions. You've got to think about what happens if the main network routes fail: are there alternatives? Can you put in an ISDN line for emergencies? What if the connection really is down and can't be re-established? Can you have a local cache that will support minimal working? Can you degrade so you don't need to use that service at all?
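To make that concrete, here's a rough sketch of consumer-side degradation (the service interface and names are hypothetical): try the normal route first, and when the network fails, fall back to a possibly stale local cache so minimal working continues rather than everything stopping.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Consumer-side degradation: try the remote service, fall back to a stale
 * local cache when the network route is down. All names are illustrative.
 */
public class DegradingClient {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final RateLookupService remote; // hypothetical remote service interface

    public DegradingClient(RateLookupService remote) {
        this.remote = remote;
    }

    public Optional<String> lookup(String key) {
        try {
            String fresh = remote.fetch(key); // normal route: full QoS
            localCache.put(key, fresh);       // refresh the cache on every success
            return Optional.of(fresh);
        } catch (Exception networkDown) {
            // Degraded route: stale data beats no data for minimal working.
            return Optional.ofNullable(localCache.get(key));
        }
    }

    public interface RateLookupService {
        String fetch(String key) throws Exception;
    }
}
```

It's the train equivalent of having the home-working option set up before the snow arrives.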

The point about QoS is that you need to look at the failure conditions of the whole network, not simply the last inch. Many of these elements might be out of your control, and therefore you need to push the QoS back onto the consumer to make sure they are informed about what is their responsibility and what you will guarantee. This is what the cloud and SaaS providers do; but if your job is to ensure the business can use those solutions, then you need to be looking at that end-to-end element and concentrating on connectivity and caching, rather than assuming "the internet is always up and it's always performant".

SouthEastern trains provide me with a great QoS on a good day and complete rubbish on rather a lot of days; if I was using a SaaS solution where the network issues were similar, then my perception would be that it is the SaaS solution that is rubbish.

Perception is everything when rolling out new technologies and QoS is the rigour required to make sure the perception is positive.



Tuesday, March 24, 2009

Beware designers and others pretending to be professionals

First off, an admission: I'm an HCI guy and spent the first six years of my career working on front-end, thick-client, highly interactive interfaces where the accuracy of the interface was critical.

I then got into the Web world and learnt to distrust designers very quickly, for the simple reason that 95% of them were interested in designing a "pretty" interface rather than a functional interface. The two are very different beasts. Designers are often in the WILI (Well I like it) camp of design rather than the KISS camp. They love putting graphics onto sites and having complex interfaces that "surprise" their users.

Today I read this:
“Google has momentum, and its leadership found a path that works very well. When I joined, I thought there was potential to help the company change course in its design direction. But I learned that Google had set its course long before I arrived.”
Which summed up for me the mentality of most designers I've worked with: namely that no matter how good (and especially how clean) the design is, and how successful it is with users, a designer will want to take you in a different "direction".

At the heart of the issue is training: most designers are good at designing static pages but are not well versed in HCI, human psychology, ergonomics or the other scientific disciplines around how people interact with systems. This makes them amateurs, but unfortunately amateurs who are perceived to have a differentiated skill, mainly because most of the time their designs are presented on paper, a medium in which they are very strong.

This is a pretty common pattern in IT: architects who lay claim to oversight in areas where they don't understand the technology or the business, business folks who know that a website is "just Excel scaled up", and database guys who know how to write code because "it's all about the data".

These are not well meaning amateurs, they are professionals working in the wrong place at the wrong time and in the wrong way.

If you are working with a designer, here is a top tip.

Get them to create a page template, not a full page: one which has the corner pieces, fonts et al. defined. Then turn that template into CSS and use it on the pages that you create. You'll normally get a nice clean design feel and consistency, but the designer won't like it, because it's all too consistent and you didn't implement a Photoshop picture.

But your users will thank you.


Tuesday, March 17, 2009

How IT departments plan to make themselves irrelevant in the downturn

Since the beginning of the year it's been rather clear that the business imperative is cost cutting: from renegotiating licenses and looking at Open Source, to using cloud and SaaS to provide a more "scale down" model for IT (we've always been good at scale up: pay more, use more). Now, what I've observed is two very clear mindsets in IT.

Firstly there is the business-centric view, which says "oh crap, let's start looking at where we can rationalise" and pushes pieces like cloud, server rationalisation, apps rationalisation and the like as a way to drive cost out of the business. These folks also tend to look at the business model they are supporting to understand where the costs are most out of kilter. The mentality here is basically the same as the business's: it's about changing the imperative to face the current climate.

A big part of this mindset is also the drive to use new technologies that support the business model better, especially around that "scale down" problem that most traditional approaches have. The business-IT view is that cloud and SaaS represent a good solution; you just need to be clear about where they work and then overcome the hurdles.

The other mindset, however, is the technology-centric one: the mindset that basically says "it's fine the way it is; I don't want technology that is outside my control". I've described Terry Pratchett architects before, and I'm hearing lots from the latter camp at the moment. It almost sounds like the old phrase about the business:
I don't understand the hardware, I don't understand the software, but I can see the flashing lights
The problem is that with cloud and SaaS they don't get to see the flashing lights, and they don't even get to design what the hardware will be.

This will be the biggest impact of the year on IT: business-focused IT folks who understand the model and can actively suggest new approaches to rationalise cost will do well. Those that put barriers in the way will do very badly, especially if those barriers are placed there to maintain a comfortable status quo.

The key for IT is to understand the business model, understand the business services, and then understand where IT adds real value and where it should simply be a utility, then plan against that utility. That means cloud and SaaS will figure largely in how you build, deploy and manage those business services where differentiation is not important.

The final point is that when an IT person comes up with barriers around security or compliance, those barriers have to be rock solid; 95% of the time someone has tried that in the areas I've dealt with, it has turned out they were wrong. Being cautious is one thing, but in this market erring on the side of caution is also a business issue, not just a technology one.



Tuesday, March 10, 2009

SaaS isn't software, it's service

I've said it before and I'll say it again: SaaS isn't about SOFTWARE as a service, it's about a SERVICE as a service.

Salesforce.com don't sell software, they sell a CRM solution. They also sell an INFRASTRUCTURE or DEVELOPMENT platform as a service, but it still isn't the software that you are buying, it's the whole platform service.

This is the problem that the Middleware as a Service type vendors are falling into. Looking for a generic offer, they appear to be going for the idea that just lobbing their software onto the cloud and putting in some utility billing makes it more attractive. It doesn't.

Platform as a Service plays are (IMO) pretty niche, but they happen to be the poster children at the moment because that is where the billions of investment have gone. These are plays for a large proprietary platform, and the investment levels make it unsustainable for smaller companies to compete; smaller PaaS offers are doomed to fail, as they won't have the scale or the buzz around them.

Within the SaaS space, however, you may well use a BPM engine or an ESB or a rules engine within a cloud infrastructure, and require cloud licensing models. The point is that you are USING the tool to deliver something more, rather than simply offering the tool itself for rent. When building a SaaS offer, having a BPM or rules layer for configuration and integration makes a lot of sense; this doesn't mean that you are delivering PaaS, it means that you are BUILDING SaaS.

The critical bit is that SaaS is the final delivered service you are selling; this service will include an SLA, licensing arrangements and, most importantly, an operational solution onto which users can be provisioned. This isn't software, it's a service.

SaaS is service as a service, which sounds rubbish but is the reality of what a business will buy. Middleware vendors who think the SOFTWARE bit of SaaS is the point are either going to have to spend billions on a major platform play or are doomed to waste their cash.



Sunday, March 08, 2009

Cloud watching - does that one look like Churchill?

Clouds are wonderful things; they come in lots of types, and just like the old game of cloud gazing, where you try to find clouds that look like things, you can do the same with the various different types of computing cloud.

Firstly, though, let's work out the different types and their level of reality.

Infrastructure clouds
The Amazon.com type of thing. These are the easiest type of cloud to get your head around at first, as they are really just your current data centre model made virtual. You get a "box", you can install software on the "box", and you can start and stop the "box". The only difference is that you can also duplicate the box quickly to add more capacity, and throw unwanted boxes away without any issues around wasted investment. Scaling is your problem, but the infrastructure makes it easy.

Infrastructure clouds also have the great advantage that you can get off them at any time and move back to your own kit or to another provider. After all, it's just Linux/Windows, so it really isn't anything special.

Development Platforms
These are what Google, Microsoft and Salesforce offer you. They're not infrastructure solutions in the traditional sense, as they require you to develop specifically for their platform. These providers give you a virtual machine and a set of vendor-specific libraries. You write your application for these virtual machines (and "write" is the operative term) and then deploy it; scaling is taken to be the problem of the platform provider, though they might either limit you or set some sort of budgetary cap.

The key here is that these environments are just your local dev environment shifted to the cloud: it's actually a decision to write code for someone else's environment, and that means accepting that you will be locked in. Believing that you can just "unplug" yourself from these environments is very dangerous unless the provider actually gives you a route out. The area of most lock-in at the moment appears to be data access, with all of the providers putting their own spin on data storage and search. If they start offering something like Java + JDBC, and you can get a Hibernate provider that enables you to switch more simply, then the lock-in reduces. If they are using their own development language then the lock-in is near total.
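As a sketch of what that reduced lock-in looks like (these are standard Hibernate configuration properties; the seam is the point, not any provider's actual offer): confine the vendor-specific pieces to a handful of settings so that switching data stores is a configuration change rather than a rewrite.

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class PortablePersistence {
    public static SessionFactory build(String dialect, String url, String driver) {
        // All vendor variation is confined to three settings; the mapped
        // entities and queries stay the same, so the exit cost is config,
        // not code. A proprietary datastore API offers no such seam.
        return new Configuration()
                .setProperty("hibernate.dialect", dialect)
                .setProperty("hibernate.connection.url", url)
                .setProperty("hibernate.connection.driver_class", driver)
                .buildSessionFactory();
    }
}
```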

Middleware as a Service
These are where companies smaller than Amazon/Google/Microsoft have seen the development platforms and thought "our middleware could do that". The offer here is that instead of having all the servers in your environment, you just rent the infrastructure you need from the provider and everything is then easy.

This, for me, is the cirrus of the cloud world. After all, what is the big problem when you buy a big load of middleware licenses and set up some centralised service? Either no bugger uses it, or you get lots of people using it in millions of different ways. This is a cheap way to do the former and a more expensive way to do the latter. I could be wrong about this one, but on its own I think it's exactly what a cloud is: vapourware.

SaaS
For me this is the poster child of cloud. It's the CapEx-to-OpEx converter, it's the business utility, and it shouts loud and proud "I do not differentiate your business, I am a commodity". This is a very good thing. Whether it's Salesforce.com, Wrike or something else, the whole point is saving money by turning IT into an OpEx business utility.

SaaS means lock-in, but lock-in around a commodity, which means it's really only the data that matters, and most of the SaaS providers give you a decent set of APIs so you could build your own migration to another commodity provider if you wanted to.
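As a rough sketch of that do-it-yourself migration (the endpoint, paging scheme and CSV response here are entirely hypothetical, not any real provider's API): page through an export API and keep your own copy of the data, so the commodity provider can be swapped later.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Sketch of a do-it-yourself exit route: page through a SaaS provider's
 * export API and keep a local copy of the data. The URL, auth token and
 * paging parameters are placeholders, not a specific provider's API.
 */
public class SaasExporter {
    public static void main(String[] args) throws Exception {
        try (PrintWriter out = new PrintWriter("contacts-backup.csv")) {
            for (int page = 1; page <= 10; page++) {
                URL url = new URL("https://api.example-saas.com/contacts?page=" + page);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Authorization", "Bearer <api-token>");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println(line); // assumes the API returns CSV rows directly
                    }
                }
            }
        }
    }
}
```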

Microclimates
OK, these are the real private clouds. How do you know if you have a micro-climate? Well, you cease to consider it as "yours" and instead treat it as an internal, or outsourced, utility. There are no flashing lights, there is no physical ownership; the only thing you care about is capacity and flexibility. Critically, in a micro-climate you are billing the business in the same way as Amazon does: as a utility, based on short-term consumption.

Micro-climates are (IMO) a few years away in most areas, with the likes of governments probably leading the way with micro-climates that meet their heightened security requirements. Micro-climates are only for the VERY biggest companies to do on their own, and more likely should be sector specific. They will also be dominated by applications that have dynamic scalability requirements (forecasting, web sites, intranets on pay day, etc.), and applications will be adjusted to take advantage of that scaling, especially at the business-value end. Crucially, a micro-climate must be seen as a business OpEx.

Your Data centre with new kit
Don't let the hardware and software vendors fool you: buying new hardware and a new set of provisioning tools does not make something a "cloud", it just makes it a better automated data centre.

This is the snake oil of cloud computing: software and hardware vendors claiming some magic dust that you can sprinkle on your current data centre to make it a cloud, so you don't need to work with those "messy" public clouds which aren't for "proper" companies like yours. This is hooey, and just age-old bandwagon jumping.

So there it is, the different worlds of cloud as I see them today.... needs a picture though.
Spot the difference?



Clouds are officially the new T-SOA

One of the questions I raised last year and early this one (with the "SOA is Dead" meme) was what would be the next hype item that the analysts and vendors started flogging. Would it be cloud, would it be something else...

Well, it's definitely cloud.

IDC say it will be $42 billion by 2012, trumpeting the race to the cloud. Now, before I start this rant, let's be clear:
  1. I think cloud computing is important
  2. I think it is different
  3. I have quite a bit of experience in the area
So that said.... what a load of rubbish is being spoken about clouds at the moment.

The kicker in the IDC article is this wonderful paragraph:
To succeed, cloud services providers need to address a mixture of traditional and cloud concerns. According to survey respondents, the two most important things a cloud services provider can offer are competitive pricing and performance level assurances. These are followed by the ability to demonstrate an understanding of the customer's industry and the ability to move cloud services back on-premises if necessary.
So to succeed you need to be cheap but with a strong SLA, understand my industry and give me a back-out route.

As caveats go these are pretty big ones. The first is of course what every client says: "be cheap". Clearly cloud isn't a premium service, so it has to be cheaper than your current data centre approach. The second is again a no-brainer if you want your business-critical apps on the cloud, but it is quite tough, especially if you are getting into secondary liability, i.e. the cloud provider being liable not just for the cost of the service when it goes down but also for the cost to your business.

"Know my business" stacks up, especially for the SaaS providers and the last one basically says that the current Azure and Google App Engine model aren't what people are looking for.

In the last month the number of vendors who HAVEN'T tried to tell me about their SaaS/PaaS/IaaS/Cloud offer has to be a lot smaller than the number who have. So I officially declare 2009 to be the year of cloud hype, and make a bold prediction.

Most of the crap about cloud will turn out to be wrong and over-optimistic. The focus on business reality and business cost will drive IT through 2009, and the use of cloud will primarily be driven by the desire to move towards a more OpEx model for IT.
