Wednesday, January 28, 2009

Getting lost at the EDGE

Speaking with a business & commercial chap today, I got a bit of a moan about architects worrying about edge scenarios. His complaint was that they kept trying to consider all of the edge conditions and then design around them, when in fact he

a) didn't think those things would happen
b) even if they did happen he'd be fine for the system to collapse
c) thought the investment the architects had made in edge conditions was a waste of time

The point here is that sometimes it's okay for a system to fail, and it's also okay to specify what a system won't handle rather than trying to make it handle everything.

Now some architects may leap forward and say "well, what about the future, what about extension?"

But you know what? If the business doesn't want to pay for these edge conditions then why on earth are you bothering? Sure, you should force the point and say "the system won't handle X, Y, Z or pigs learning to type in ancient Greek, is that okay?" rather than "The system doesn't handle X, Y, Z or Greek-typing pigs, so what we should do is redesign the keyboards to enable pigs to work with the system when they learn to type".

The sort of architectural "perfectionism" that underpins this mentality is, IMO, one of the worst traits of architecture, namely the avoidance of getting into the solution. By putting these edge cases in the way, the architecture and design phase takes longer, and the dirty solutioning piece, when the architecture actually has to be proven, can be postponed.

Edge conditions can be non-functional as well: "What if your carbon blade procurement system is turned into a Facebook app and gets 10m hits in an hour?" The answer is of course "That won't happen, and even if it did, we sell carbon blades to the airline industry, so why on earth would we worry about a Facebook app?". Still architects will argue about the "most scalable" way to build the solution that is currently targeted at 5 end-customers with up to 20 concurrent users max.

Sometimes architects claim this is them being "professional" and considering the "future" while taking a "holistic" view. What rubbish; it's about architects focusing on architecture, losing track of the big, and simple, picture and merrily pursuing their own mental exercises to demonstrate their architectural prowess.

Edge conditions can be hard to deal with, but that doesn't mean you HAVE to deal with them. Often, in fact most times, the right approach is to declare them out of scope and make it clear what the system won't do as much as what it will.


Wednesday, January 21, 2009

One Service, many interfaces

I'm doing some departmental re-org stuff at the moment and it's made me really believe in the difference between the service and its interface. Way back in 2000 I worked on a project and developed a framework (who didn't in 2000) that allowed you to have multiple different interfaces on the same back-end service. The reason for this was that different clients needed to see different subsets of capabilities, and rather than showing everyone a great big interface with everything on it, which would be complicated, it seemed simpler to just separate the interface from the service.

Doing the business bit has made me realise very strongly how important it is to keep them apart. Right now I've effectively got four different groups who need to interface to this one business service.

Now the core of the service is the same for everyone, they all want basically the same thing. The difference comes in exactly how they want to interact with it and the specifics of the capabilities that they will use. This means four independent sets of external KPIs (remember to always think of the consumer when designing service descriptions) and four semi-independent sets of capabilities. By semi-independent I mean that they are in fact strongly linked but the mode of interaction is slightly different in each case.

Now if you were building the code for this you might be thinking of four independent services that shared information. The problem is that this means you have disjoint processes, when the reality is that one of the major KPIs for the service is that you are linking properly with all the other groups.

Sometimes you will build the interface as just that, an interface; other times it will be a lightweight façade that might do some process or data manipulation work; on a very few occasions it might be something heavyweight but still using the same underlying base service.
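To make the distinction concrete, here is a minimal sketch, assuming a hypothetical order service (all class and method names here are illustrative, not from any real system): one thin interface that simply narrows what a consumer sees, and one lightweight façade that does a little data manipulation on top of the same base service.

```python
class OrderService:
    """The single underlying business service."""
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def place_order(self, customer, items):
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"customer": customer,
                                  "items": items,
                                  "status": "open"}
        return order_id

    def order_status(self, order_id):
        return self._orders[order_id]["status"]

    def all_orders(self):
        return dict(self._orders)


class CustomerInterface:
    """Thin interface: exposes only the subset end customers need."""
    def __init__(self, service):
        self._service = service

    def place_order(self, customer, items):
        return self._service.place_order(customer, items)

    def track(self, order_id):
        return self._service.order_status(order_id)


class ReportingFacade:
    """Lightweight facade: same base service, but does a little
    data manipulation (aggregation) for the reporting consumer."""
    def __init__(self, service):
        self._service = service

    def open_order_count(self):
        return sum(1 for o in self._service.all_orders().values()
                   if o["status"] == "open")


service = OrderService()
customers = CustomerInterface(service)
reports = ReportingFacade(service)

oid = customers.place_order("ACME", ["blade"])
print(customers.track(oid))        # open
print(reports.open_order_count())  # 1
```

The point of the sketch is that both consumers get an interface shaped for them, but there is only one service behind them, so the state and the processes stay joined up.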

The other technical view would be "sod the users" and just have one interface. The trouble is that this just means it will be rubbish to use for everyone, and it will in fact be harder to manage, because it doesn't represent the way that the world actually operates.

So what I've got is a service description that describes what the service does, and then I'm building four service interfaces (which are to all intents and purposes service descriptions) describing how it will interact with, and be measured by, the four external parties. This means that from a catalogue perspective I have five services, from a consumer perspective they are all getting the service that they want, and from my perspective I'm getting the consistent management and centralised governance that I need.

Everyone ends up happy and the amount of duplicated work is reduced. So keep your interfaces independent of your service.




Thursday, January 15, 2009

REST is a crap name in a web world

Okay, I was looking around for some REST stuff today, specifically around performance tuning for the Mandelbrot stuff. So I thought I'd search for "REST performance tuning", and it's no better on Yahoo. It's like when Microsoft talked about having a language called "COOL", and it's bad enough that they ended up with .NET (imagine searching for "COOL .NET application tuning").

For something that was designed for the Web, indeed which helped design the protocol behind the Web, there really wasn't much thought put into naming it so it works well on the Web.


Thursday, January 08, 2009

REST is dead long live the Web

REST met its demise on January 1, 2009, when it was wiped out by the catastrophic impact of the economic recession. REST is survived by its offspring: mashups, SaaS, Cloud Computing, and all other architectural approaches that depend on the web.

REST had begun to gain some traction in 2008 as the "next big thing" in technology, promoted by vendors, analysts and champions as being the only way forward, often by the same people who had promoted both EAI and Web Services as the only way forward. The economic downturn, however, has led to people looking at REST as nothing more than a new technology-driven fad, disconnected from the daily problems of a profitable business. Proponents would laud Google, Amazon and a small number of new startup companies as being the example that all the old crusty companies should follow.

These old crusty companies, however, have heard it all before, both from the .com boomers who were meant to replace them and from the technology vendors who have shipped them varying degrees of snake-oil over the years. Fortunately all is not doom and gloom for REST, as these old crusty companies are doing exactly what they did with .com and looking at what they can do to drive down costs and increase profitability by using the web. As REST proponents shout about PUT/DELETE/POST and GET and whether anything from a browser can truly be "RESTful" because it doesn't have DELETE, the business users are looking at the Web, and more especially the services delivered via the Web, as an excellent way of managing their IT costs.

Integrating these new Web delivered services into their enterprise often means using exactly the approach that REST proponents advise, but it is not REST that is important it is the Service. REST vainly tried to make itself the thing that people should care about but the sad reality was that its role was simply in helping people connect to the services that they use.

Already REST advocates are leaving the funeral and promoting the "web-centric view" as being the only way of the future, but the crusty old companies continue to operate successfully, often using systems that predate the web, and chuckle at the cute naivety of these technology prophets.

Surviving REST are a series of technologies that at their core are about using the principles of REST, hidden away in their dark hearts like a secret that must not be told. Mashups and SaaS often rely on REST but proclaim instead the business benefits, the productivity gains or the business service that they deliver. The biggest child of REST is the Web: it stands as a colossus across the globe, a shiny beacon of light proclaiming the success of its heritage, but no-one knows or cares about its parentage, only about its usefulness.

So RIP REST the business never really knew you at all.

With deference to Anne


Monday, January 05, 2009

In a recession it's even more about the services

Anne Thomas Manes has obviously taken a decent course in headline writing with her recent post SOA is dead long live services where she says that SOA has been killed by the economic downturn, but that this is a good thing as it means we can concentrate on the services.

Reading between the lines of what Anne writes, I think I'd say that her statement is that vendors are moving away from SOA as they've flogged you enough stuff and now they want to flog you its offspring: mashups, BPM, SaaS, Cloud Computing. Now, in another headline-writing award winner, Andy Mulholland wrote Innovation is Dead, long live cost cutting, which acts as a cautionary tale to those who are hearing vendors claiming that BPM/mashups/SaaS/Cloud will magically reduce costs and solve all ills. The reality is that it is in fact the services that matter more at this stage than at any other.

This doesn't mean that SOA is dead; it means that the marketing fury of T-SOA has moved on, as there just aren't that many more ESBs and Web Service tools that you can be sold. What remains is in fact what SOA was all along: services are the starting point for SOA, not those pretty technologies. I've argued before that Web 2.0 requires SOA, so building on the Anne and Andy posts I'd say, simply put:

If you adopt the new technologies without having a services mentality then you will create a degree of mess that will make the one that consultants and vendors got fat on with EAI look like a trivial problem. Doing spaghetti inside your firewall in big applications is one thing; doing it over the internet and with thousands of small ones is a completely different scale of problem.

So in a recession you need to identify your services, understand the business value that they deliver, understand the cost model to deliver that value, and then decide on the right technology approach.

If that isn't SOA then I don't know what is. So in reality it's the "other" SOA that is dead, not the SOA of today.



Friday, January 02, 2009

Clouds and micro-climates

A shout to Tim Kelly, who before Christmas came up with the term "micro-climate" to describe the clouds that companies and industry verticals will build for themselves when using a generic cloud (à la Amazon, Google, Azure) won't do for security, regulatory, money or paranoia reasons.

Cloud computing won't be just about who is the big generic cloud, it will be about how do you create clouds for the pharma, defence, government and other industries. These are the micro-climates.


SOA resilience and the power of virtual machines

One of the things that continues to amaze me when I look at companies' Disaster Recovery policies is how much they concentrate on the backup and how little they concentrate on the restore. Chatting to a CIO in the manufacturing industry a while back, he gave me a great stat on his business:
For every minute that we are down it costs us 200k euros; my DR plan takes two days. That is why I have three redundant data centres and a full set of passive backups.

Simply put, this chap couldn't let his systems go down, so he invested very heavily in making sure that they didn't.

More often, however, people just tick the "backup" box, send it off to tape and then don't worry about bringing a service back up and what that really takes.

If you've lost a disk then it's a physical job followed by a data restore (you did, of course, test that). If the server is trashed then you have procurement, install and then recovery. If the Data Centre is trashed then you have a much bigger challenge.

This is where companies come in and charge quite a bit of money to have "warm" servers on standby. You pay for them at a certain rate (and more when you actually use them) and then have to do all the rebuild job if disaster strikes.

In a networked world such as SOA this is a big issue, as the failure of one service can have significant knock-on effects. You need to design around this, but there is still the question of the quickest way to get a degraded instance of the service back.

Why degraded? Well, let's say it's a high-demand service: you've gone stateless, you've got 20 Linux boxes running it horizontally, when a muppet manages to dig up the network cable into your data centre. It's going to be 3 days to get it fixed to 100%, but you need something degraded that works now.

This is where Virtual Machines really kick in. Alongside your normal data backup strategy I'd recommend taking a Virtual Machine backup of the server. In future the VM approach will probably be the normal one, but it has a great role right now as part of your DR solution. Take the VM backup at the same time and then, if you need to, just fire it up on some commodity hardware that you have lying around. It's going to perform badly, so think about throttling, or fire up a virtual grid with the hardware in the office. This then gives you the space to do the full recovery and get everything up and performing at 100%. Having the VM backup means you can put your patch in place as quickly as possible.

When you do this however I'd also recommend that you run, at least once a month, a full unit/system test on the VM backup to make sure that it does actually work properly.
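The throttling idea for a degraded instance can be sketched very simply. This is a minimal, illustrative token-bucket limiter (the rate and capacity numbers are made up, and nothing here comes from a real product): requests beyond the budget get shed so the underpowered VM isn't overwhelmed.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to
    `capacity`. Requests beyond that are rejected rather than queued,
    which is usually what you want on a degraded instance."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A degraded instance might gate each incoming request like this:
throttle = TokenBucket(rate=5, capacity=10)  # illustrative numbers
accepted = [throttle.allow() for _ in range(15)]
# Roughly the first `capacity` requests burst through; the rest are shed.
```

Requests that return `False` would get a "service degraded, try later" response instead of being queued up against a box that can't cope.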

Disaster Recovery is about planning for the recovery, not planning for backup.

