Friday, May 23, 2008

I should not speak in airport lounges

An airport lounge on a Friday afternoon is a dreadful place to be. You know there is no work in the morning and you know that you are just travelling for a pointless period of time before you get home. This is why you should never speak to anyone in an airport lounge. I won't say what country I was in and I won't say what lounge it was, but I don't apologise for the conversation I just had... I just wish I hadn't had the rather nice cognac before I had the discussion.

There was a chap on the phone, nice suit, nice conversation skills. On the phone I heard the following

"Why don't we just put some more people on it"
"How hard can it be to learn"
"We've been doing it for 6 months we must understand it enough to explain to someone else"
"You are just making excuses, do it"

When he got off the phone he saw me looking at him and shrugged (he had been a little loud), so I kicked off the conversation by asking what it was about. We had a chat... I browsed Wikipedia and showed him the MMM page, explained to him (clearly much better than the person on the other end) why his plan wouldn't work. He explained in more detail and I explained why his IT department had screwed up right at the start.

I apologise to the poor bugger on the other end of the phone who will get slammed on Monday morning for how he handled the requirements process at the start of the project, but seriously, if you don't argue early then no-one listens to the arguments later on.

Damn fine Cognac though.




Thursday, May 22, 2008

Questions I like to ask at interviews

There are a few questions that I always like to ask in interviews; some are old favourites, others are newer, but all are trying to work out what type of person I'm dealing with. There are technical detail ones as well, but these are the general ones to work out if I'm going to have to kill or simply re-educate.

General Knowledge
This isn't general knowledge in terms of "who won the FA Cup final in 1960" but general knowledge about IT. Simple questions:
  1. Have you read the Mythical Man Month?
    1. Is it still relevant?
    2. What is Brooks' Law?
    3. What are the important lessons you learnt from the book?
  2. Have you read Death March?
    1. Is it still relevant?
    2. What experiences have you had where a project became a Death March?
  3. Have you read Peopleware?
    1. Is it still relevant?
    2. What do you think is important when building and managing teams?
There are a few other books that I throw in sometimes, and of course I ask which book they think is the most relevant (hint of the day: slashdot.org is not the right answer).

Systems Theory
The next bit is always around what they know about the architecture and design of overall systems. The only way to do this is via a practical, so I tend to set a theoretical RFP (Request for Proposal) and give them 15 minutes to read it and then 15 minutes to respond. The test is designed so that in that time period they will almost certainly fail to find a perfect answer, but it's the approach that is important. What I'm looking for is:
  1. What questions do they ask
  2. Where do they start
  3. How clear is the architecture
  4. What principles are they applying
  5. Why aren't they doing certain things
  6. How do they react when a better way is suggested
  7. How do they react when a worse way is suggested
The last question is just as important as the rest as it shows how firm people are: in an interview it takes a certain class of communicator and engineer to correctly explain why a bad suggestion is rubbish. The question before it is there to find out how stuck in their ways they are. If there is a suggestion that is clearly better and they defend their previous idea by explaining why they chose it, that is okay; if they get defensive, that is not okay; if they explain the previous thinking and then switch to the new thinking, that is ideal.

SOA
Asking them to define SOA is always entertaining. People who rush down the technical rat hole of T-SOA (WS-*) then have to draw the "stack" before being asked, from a business perspective, what the difference would be between two identical services, one implemented in "code" and the other in BPEL. People who deal at the abstraction and architecture level tend to do better here. Normal questions that I like at this stage are:
  • What is a service?
  • What is the right granularity for a service?
  • What is the relationship between business process and service?
  • How would you find the right services for a business?
That last point brings me to the final section.

Passion
This is all about understanding the passion someone has for an area and how up to date they are. Personally I just don't understand why anyone gets into IT who doesn't get excited by change and who doesn't track change. Sitting on your arse saying "well it worked in the 90s" is just ridiculous; the only way you can say "X is crap" is to read about it and try it out.

The first test is: do they understand the power of the internet? The way to find out is whether they Googled their interviewer. The number of times it becomes clear that the interviewee hasn't done their research on the company they are interviewing for, and hasn't bothered checking to see if any of their interviewers come up, is just mental. Every client meeting I go into I Google the client to see what they have said publicly, and I certainly do the same to people I'm interviewing. If you haven't done this then I'm always concerned.

Next up is just the social stuff. What do you read? How do you keep up to date? Then we move on to the next stage:
  1. If ten years ago you'd said that the internet would be the default form of B2B and B2C commerce, that mobile phones would revolutionise the developing world and that RF-ID would fail to make it to large scale adoption in the next ten years, then right now you'd be looking like a genius. What currently hyped technologies would you predict now for the next ten years, and why?
  2. Ruby, Scala, Python and other dynamic languages are on the rise; why do you think they will take over from Java?
  3. Who do you think are the most influential technology companies?
There are a few others, but number 1 is a cracker (for Java interviews I also like "we broadly know what will be in Java SE 7, what would you like to see in Java SE 8?", and similar for Java EE; the same can be done with even more entertainment on .NET). Most people either blurt out technologies that are already major or pick things that are so low level (like REST or Web Services) that it shows they don't understand the context.

Selling yourself
After the questions comes the bit where I ask the person to describe themselves in three words and list the positive attributes they will bring to the job. The final question I like to ask is for them to come up with a marketing slogan, one line, which is the reason why they will stick in my head and why they should get the job.


Now if I am about to interview you and you've read this.... well done, it's going to help you. If you are reading this on the way to the interview and haven't read the Mythical Man Month.... find a way to crib.

Otherwise hopefully these might help other people who are looking to interview and are looking at the more general elements of a person's technical ability rather than the technologies themselves. I've worked with far too many people where I've thought "how the hell did they get through an interview", then found out how they were interviewed and not been surprised at the quality that resulted. It's not foolproof, I've hired a few duffers in my time, but I'd say my quality ratio is higher than the average.

Employing people in IT is about much more than the technology; it's about how rounded they are and how they will cope with change. Time and time again I've found that those who can cope with and communicate change are much better at their jobs than those who can simply code. The technical skills are the basics that everyone must have to even have a chance at being good, but it's the overall view that makes them good.



Monday, May 19, 2008

Quit it with the Web Services "bloat"

I'm getting a bit annoyed at people ranting about Web Service "bloat" or inefficiency and then gibbering on about Ruby on Rails or the like. Let's be clear:
  1. Server side Web Service and XML stacks are not optimised for binary executable size
  2. That doesn't mean it isn't possible
Back in 2002 I was presenting at JavaOne on "Mobile Web Services", where we demonstrated a number of device-to-device interactions. These used a server intermediary to bridge the connections between the phones (simulated phones, because the GSM network in Moscone was practically non-existent back then), something that looked a lot like Comet (cheers Gavin) and which was a bugger to make reliable over GSM. As an aside, the old JavaPhone API allowed phone-to-phone comms.

Anyway, back then we talked about how to get Web Services running on mobile phones (including noting that Web Services != WS-*; it was about the service design). Around that time I had a Nokia 6310i, which was pretty much the first MIDP phone I could get my hands on. It had 30kb available for programme (compiled) space, so I set myself a challenge to write something that used Web Services from the phone. If anyone saw the quick code example that Duane did at JavaOne then basically that is what I did. Using a stripped down (very stripped) version of ksoap, the little client allowed you to enter a city name; it then returned a list of airport codes related to that name, you could pick one and it would display the current weather report for that airport. It made two different WS calls and worked perfectly well in 30kb of executable on a mobile phone.
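That original stripped-down ksoap client is long gone, but for flavour here is roughly what those two calls look like against today's ksoap2 library. To be clear this is just a sketch: the endpoint URL, namespace and operation names below are made up for illustration, and the modern ksoap2 API is rather fatter than the 30kb build described above.

import org.ksoap2.SoapEnvelope;
import org.ksoap2.serialization.SoapObject;
import org.ksoap2.serialization.SoapSerializationEnvelope;
import org.ksoap2.transport.HttpTransportSE;

public class AirportWeatherClient {

    // Hypothetical endpoint and namespace, purely for illustration.
    private static final String URL = "http://example.com/weather";
    private static final String NS  = "http://example.com/weather/";

    // One generic SOAP call: build the request, wrap it in an envelope, post it.
    private static Object call(String operation, String paramName, String paramValue) throws Exception {
        SoapObject request = new SoapObject(NS, operation);
        request.addProperty(paramName, paramValue);

        SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        envelope.setOutputSoapObject(request);

        new HttpTransportSE(URL).call(NS + operation, envelope);
        return envelope.getResponse();
    }

    public static void main(String[] args) throws Exception {
        // Call 1: city name -> list of airport codes
        Object airports = call("getAirportCodes", "city", "San Francisco");
        System.out.println("Airports: " + airports);

        // Call 2: airport code -> current weather report
        Object weather = call("getWeather", "airportCode", "SFO");
        System.out.println("Weather: " + weather);
    }
}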

Now this isn't best practice WS for many reasons, mainly because memory and CPU are rarely the biggest issues on modern server infrastructures. The point therefore is: what are you trying to save on a server? The answer is support time, as that represents the biggest cost. Doing my stripped-down WS stack on a server would be mental as it would be practically impossible to support and wouldn't be flexible.

The only bloat that really matters is support bloat and every technology that saves a couple of lines in developer time but adds time to support is a failed technology.

Bloat isn't a code issue on server software, it's a support issue.



Did Borland destroy design?

At JavaOne I tend to focus on the real world case studies rather than the vendor pitches or the latest shiny library. At a few of these, when they talked about lessons learnt, the old "wish we'd done more design/analysis" came up. Wandering around the floor I noticed not only that there were a lot fewer companies than in years gone by (admittedly not a bad thing given the "buy our EJB container" stuff that used to be there), but also that the number of people talking about "design" and "analysis" on the floor was incredibly low.

Now I know that software goes in cycles but this has to be one of the worst eras in software for design support and indeed design championing. So it got me wondering, what killed design? Back in 2003 (only 5 years ago) design was almost at its peak with Oracle, IBM and others all competing to demonstrate their design credentials. Back then however the question was largely about what Borland would do with TogetherJ.

Now there were other things around (and others have come on since) but at the time TogetherJ stood out: it had brilliant round-trip engineering, it had a fantastic quality metrics package and it generally just helped you get your job done. They even listened to feature requests. Peter Coad dropped in to hear us vent (only slightly) about how the network license manager wasn't any good for us because we travelled quite a bit and couldn't use Together on planes. What we'd like, we said, would be the ability to "check out" a license so we could use it on the plane and on client sites while we sorted network access out. The answer? "I haven't heard that use case before but it's a good one, I'll get back to you". The result? The next version allowed you to check out licenses.

Then at the end of 2002 Borland bought TogetherSoft (the company that made Together) and.... well, basically nothing. From that point on Together went from something that almost everyone I knew used to something that was used by practically nobody. Sure, it took a few years for the software to become out of date, but it happened, and as no really new features had been added no-one upgraded.

This left the field to small players like Enterprise Architect from Sparx Systems and MagicDraw, and to the beast that was Rational, which became the even bigger IBM Rational. Back then Rational had one of the all-time dogs of a product in Rational XDE; it was awful in comparison with TogetherJ, practically a one-package project-killing machine.

Now fair play to IBM, since then they've dumped the whole ethos of XDE and built a pretty good new stack; it's probably the strongest one out there today (IMO) if you are building large scale enterprise apps.

Clearly design hasn't gone away and remains as important as ever. It's back to (drum roll please) The Mythical Man Month and the fact that thought is the hard part, not the coding or technology part. Design tools help with the thought part.

Now Borland may not have destroyed design but they certainly removed one of the most innovative companies in the market from play. It's sad to see JavaOne and the Java community in general focusing on code syntax over full professional solutions.

On the plus side this current fad could completely screw up Enterprise IT infrastructures thus meaning that people who actually concentrate on the important things over the technology will have yet another mess to clean up after.



Wednesday, May 14, 2008

REST on Mars - scaling the problem to make a point

One of the objections I've had about REST for a while is that it appears to ignore Deutsch's fallacies of distributed computing:
  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn't change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.
Now REST specifies 8, assumes 1, 2 and 3 and takes 4 to mean HTTP/S with Basic Authentication. Now to be clear I've seen people doing Web Services who believe in pretty much all 8 of these fallacies and they create crap systems. But with things like WS-RM and WS-Security at least there are answers to a few elements.

A common push-back I've received is that 1 is addressed by the idempotent nature of REST, which is a reasonable point even if it puts the effort on the client. The follow-up on 2 and 3, however, has been that in modern networks these just aren't a problem. So just how bad could it be? Well, for local work inside a VM obviously no-one would use either REST or WS-* as the overhead would be massive, so clearly at that very local level there are issues. The next question is what happens if you have a very limited connection, somewhere outside the developed world or in remote parts of modern countries..... or, to really stress the point.... Mars.

This Sunday the Phoenix mission will aim to touch down on Mars. On board it has lots of sophisticated technology and lots of information to send back to base; to dispatch this information, however, it has a comms link with a maximum speed of 128kbps.

So let's say we want to get a new 1MB image every time the rover takes one, and this is being implemented by a bit of a muppet who is considering either WS-* or REST, both of which are the wrong decision. Anyway, Mr Muppet looks at the REST approach and structures the resources as follows:
  1. GET on Rover to determine the other available resources
  2. GET on the Camera URI from the Rover URI to determine available pictures
  3. Work out if a new picture has been added (a new URI is available)
  4. GET on the new URI
  5. Once we've got the image, check that it is okay
  6. DELETE on the URI to remove the image
And then the muppet looks at the WS implementation and decides to use callbacks and WS-Notification to say when there is a new image and then do a Web Service call to get the image.
  1. Register with Rover for the callback
  2. Receive callback when there is a new image, this gives you the ID of the image
  3. Call Rover.getImage(ID)
  4. Check the image
  5. Call Rover.deleteImage(ID)
Now it sort of looks like we have the same number of calls, but of course one interface is polling while the other is pub/sub. Let's say the camera takes an image every 6 seconds; this means a good polling interval will be around 6 seconds. If, however, the images are taken with a wide distribution (from every millisecond up to once an hour) then our polling needs to be much finer grained. To be fair, though, let's say it takes a fairly efficient 1.5 polls for each image received.

REST
Network costs
Okay, now down to the numbers. Per successful image request we have:
1.5x GET on Rover to get the Camera URI
1.5x GET on Camera to get the Image URIs
1 GET on the Image URI
1 DELETE on the Image URI

Now assuming some efficient XML, let's say that the Rover XML is about 200 bytes (limited number of resources) and the Camera response is about 150 bytes (minimal beyond one image). The image is 1MB and the DELETE has no body, so it is just a basic request (let's say 20 bytes and be generous).

So in total we have 300 + 225 + 20 + 1048576 bytes, which is.... 1049121 bytes or 8,392,968 bits. Over our 128kbps network this will take around 64 seconds of network time. Not too bad.

WS-*
Over in Web Services land Mr Muppet does the registration, which is a one-off cost of (let's say it's heavy on XML) 4048 bytes.

Then the ROVER sends the notification (another 4048 bytes)
A WS call to get the Image 1MB + 4048 bytes
A WS call to delete the Image - 4048 bytes

So in this case we have a one off cost of 0.24 seconds

Then for each image we have 1,060,720 bytes, or 8,485,760 bits, and a cost of around 65 seconds, less than a second worse than REST.

Then there was latency
Ahh, but then we have latency. The earth is a MINIMUM of 56,000,000km away, which assuming perfect transmission at the speed of light in a vacuum (not achievable in practice) means a one-way latency of 187 seconds.

Now with REST we have 5 calls and with WS-* we have 3 calls. This means that REST takes a total of 999 seconds while WS-* takes 626 seconds. Even if we allow the REST implementation to cache the camera URI (and we could have the image sent with the notification on WS-* as well), it's actually the latency that matters in this equation as much as, if not more than, the bandwidth.
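Purely as a sanity check on the arithmetic, here is a quick back-of-the-envelope sketch in Java. The byte sizes and the one-way-latency-per-call accounting are the same guesses as above, so treat the output as illustrative rather than definitive.

/** Back-of-envelope transfer-time arithmetic for the Mars example. */
public class MarsLinkMaths {
    static final double LINK_BPS = 128 * 1024;     // 128kbps link
    static final double ONE_WAY_LATENCY_S = 187;   // ~56,000,000km at light speed
    static final long IMAGE_BYTES = 1024 * 1024;   // 1MB image

    static double seconds(long bytes) { return bytes * 8 / LINK_BPS; }

    public static void main(String[] args) {
        // REST: 1.5 GETs on the rover (200 bytes each), 1.5 GETs on the camera (150 bytes each),
        // 1 GET for the image, 1 DELETE (~20 bytes) -- 5 calls per image.
        long restBytes = 300 + 225 + IMAGE_BYTES + 20;
        double restTotal = seconds(restBytes) + 5 * ONE_WAY_LATENCY_S;

        // WS-*: notification, getImage and deleteImage at ~4048 bytes of envelope each,
        // plus the 1MB payload -- 3 calls per image.
        long wsBytes = 3 * 4048 + IMAGE_BYTES;
        double wsTotal = seconds(wsBytes) + 3 * ONE_WAY_LATENCY_S;

        System.out.printf("REST: %.0f s transfer, %.0f s total%n", seconds(restBytes), restTotal);
        System.out.printf("WS-*: %.0f s transfer, %.0f s total%n", seconds(wsBytes), wsTotal);
    }
}

Run as-is it prints roughly 64 seconds of transfer for REST against 65 for WS-*, and 999 against 626 seconds once the per-call latency is added in.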

My point here isn't to talk about WS-* v REST but to point out that when doing distributed code you shouldn't ignore those 8 fallacies and you shouldn't assume that everything will be fine. You might not have to communicate with Mars but you might well want to deal with partners in the rest of the world where network links aren't as great. Even the giants like Google have latencies close to 100ms so a chatty approach will just cause issues in even 1st world networking environments. I've made the point before about creating scalable XML and Web Services and I think it bears repeating

The network isn't zero latency, it isn't infinite bandwidth and assuming those things is not what makes distributed systems scalable. This is not to say that REST or WS cannot be made to work in a distributed way but it does mean that these non-functional elements should feature in your design as much as obeying strict theoretical constructs.


Tuesday, May 13, 2008

Selling SOA - controlling the message

I get to do presentations quite a lot and help people with how they message around SOA and sell it to the business. One of the core parts of this is recognising how important presentation skills are, and that sometimes you have to face the fact that you might have the content but not the presentation skills. The problem is that when you don't recognise this you can find yourself hijacked by someone with better skills, and thus find yourself part of their agenda rather than your own.

Bill Gates summed up this dilemma brilliantly when he did a sit down with Steve Jobs.

You want about 7:15 in, when they talk about the Microsoft input into the Apple II, which was around BASIC. Bill starts talking acronyms and clearly trying to talk up the BASIC that he did (fair enough); Steve then steps in and reduces the Microsoft input to just doing floating point.

The thing to think about here is that your best presenter and negotiator might not be your best architect and techy. So when you think about messaging to the business and explaining what you do, think about who is best able to present that message. Have the detail guy ready to go if needed, but focus on the person who can get the message across. This is an important part of an SOA journey and one skill that is often under-rated in IT.



Do I hear a popping sound?

Over at JavaOne there were three things that shouted over the base signal:
  1. Web 2.0 - participation, Java + You, social networks
  2. SaaS and Cloud Computing
  3. Scripting languages
All the time I heard comments about numbers of users, community, eyeballs, selling advertisements, and "it doesn't matter if enterprises don't do it, it will happen anyway".

And it made me think back to 2000 and my first JavaOne; back then I saw a load of people doing .com stuff saying:

"its about the community"
"Our revenue model is about advertising"
"Eyeballs is what counts"

and of course

"The traditional businesses are dead, it doesn't matter what they think"

Within a few short months there was the unmistakable sound of a great big 'POP' as these companies hit the wall like Danica Patrick hitting the pit crew.

Now in 2008 I got exactly the same feeling I had back then: "umm, really?". What I saw then, and see now, is some potentially useful technologies and a few great business ideas applying those technologies, but lots of crap business ideas based purely on the technology. Web 2.0, SaaS and Cloud Computing are all things that could be useful and there will be some business models that come out of them, but lots of the current ideas are just hype and nonsense.

The question with any technology is about its application to solve business and consumer problems. The technology remains the tool and the enabler, but if the business idea is crap your only hope is hype and sell (which, let's face it, can be successful, but could you look in the mirror?).

The first internet revolution resulted in large scale companies, and a limited number of new additions, changing the way they worked and interacted with customers and partners. I'd suggest that these next generations will be no different. If it doesn't change standard enterprises then it's just fluff. Nice and interesting fluff maybe, but not valuable fluff. The shifting of users from MySpace to Facebook and onwards shows the issue with basing a revenue model on fashion fads and communities: the cost of exit is low (and with things like OpenSocial getting lower) and the fad level is high. This makes it a high volume, short term environment rather than a long term sustainable element.

Some companies will come out of this bubble, but an awful lot will not. The good thing this time around is that the Web 2.0 and SaaS folks that I've met are much more personable than the majority of the "destroy the world" types in the .com era.

I'm not saying that Web 2.0, SaaS and cloud computing aren't decent changes in the way IT works; what I'm saying is that this isn't an overthrow of the old world order, and it is hard to see the numbers living up to the hype.



Sunday, May 11, 2008

Green IT - save the characters

Around JavaOne there was a lot of buzz about Java becoming a bit bloated. Now I've argued for a long time (including in the dreadful JavaSE 6 group) that Java should have a basic core and then architects should be able to decide on the extensions they want for their project. So the issue with Java isn't the bloat, it's the process by which the JCP and (from experience) the Sun JavaSE team want to add more "features" into the language.

But the thing I really don't get is the obsession of the "new" languages with the character-saving, comprehension-limiting syntax of C. Back in the wild and crazy 80s the focus of language design was as much on the syntax of the language as it was on the semantics and function of the language. In the 21st century we seem to have abandoned that strategy in favour of a "let's just use similar syntax" approach.

Take Scala


/* Defines a new method 'sort' for array objects */
object implicits extends Application {
  implicit def arrayWrapper[A](x: Array[A]) =
    new {
      def sort(p: (A, A) => Boolean) = {
        util.Sorting.stableSort(x, p); x
      }
    }
  val x = Array(2, 3, 1, 4)
  println("x = " + x.sort((x: Int, y: Int) => x < y))
}


(stolen from the scala site)

What is this, some sort of Green IT campaign based on the idea that characters are in short supply and should be rationed? And that is the homepage example of the language. Ruby is no better:

# The Greeter class
class Greeter
  def initialize(name)
    @name = name.capitalize
  end

  def salute
    puts "Hello #{@name}!"
  end
end

# Create a new object
g = Greeter.new("world")

# Output "Hello World!"
g.salute


From the Ruby homepage.

Seriously, do these really represent the sort of syntax that will help more people adopt a language and make supporting the programmes developed in that language easier? My money is on no.
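For comparison, here is a rough Java equivalent of the Ruby Greeter above (my own sketch, not from anyone's homepage). More characters to type, certainly, but nothing in it that a maintainer has to stop and decode.

// Roughly the same Greeter in Java.
public class Greeter {
    private final String name;

    public Greeter(String name) {
        // capitalise the first letter, as Ruby's String#capitalize does
        this.name = name.substring(0, 1).toUpperCase() + name.substring(1);
    }

    public void salute() {
        System.out.println("Hello " + name + "!");
    }

    public static void main(String[] args) {
        new Greeter("world").salute();   // prints "Hello World!"
    }
}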

Herein lies the problem, as pointed out by Jim Waldo (via DanC):

there is still the worry that engineers who aren’t producing code are not doing anything useful.

And this is where I think the focus on a limited syntax comes from. It's the certifiably insane idea that the time spent typing in characters is the sort of time that needs to be reduced. The reality is that the time taken to read someone else's code is the primary problem, especially in larger systems. The focus on limited syntax and on saving a couple of characters is nonsensical in a world where the number of developers is increasing and the complexity of systems is increasing at pace. It also isn't the sort of approach that helps in making sure that the code from the majority of developers is of a reasonable quality and can be easily maintained by other majority developers.

Scala won't be Java 3, Ruby won't be the next Java. Java might have problems, but solving them with an even more obfuscated syntax is not the way to make these languages an operational rather than a blogosphere success.

Unfortunately part of the problem is the lack of professionalism and engineering in IT, which means there is a huge body of opinion that rates the quality of a language on how few characters it takes to write the code, not on how many minutes or hours it takes someone else to understand it.




Right tool for the right job

I've always believed that consistency matters, whether it be at the architecture or the technology level. It's just much easier to manage a team with differing abilities if there is a consistent model and implementation.

I'm fortunate, however, in that I've picked technologies that actually work. Pity the two fools I heard on the floor of JavaOne (Moscone South, if I remember correctly); they were arguing, well let's say talking at each other, about the merits of two different Ajax libraries (I have no clue which). The discussion was annoyingly loud and annoyingly stupid. At one stage, however, there was a work of genius.

One chap (let's call him Fred) said

"Yeah, but at least we agree that you can do anything in Ajax"

To which his friend (let's call him Bert) said

"Yeah, there isn't anything you can't do"

A chap wandering past threw in one of those conversation grenades that leave the discussion dead and the participants suffering shell shock

"Try writing a VOIP client"

Personally I nearly exploded trying to hold in the belly laugh that wanted to get out. The point was brilliantly made. Consistency is good, but make sure you are at least in the right technology ball-park for the problem you are trying to solve.

Consistency matters, but don't be consistently stupid.



Friday, May 09, 2008

Ending JavaOne with a crash

Well, that was a dramatic finish to JavaOne. Duane and I were due to do a repeat presentation today at JavaOne on SOA Level Setting around the OASIS SOA Reference Model. Duane emailed me earlier in the day to say that he had been very sick overnight and had just been to see the docs. I told him I'd fly solo, but the Canadian Billy Idol vowed to carry on as only a man ignoring the bloody obvious can.

With Duane looking like crap we kicked off and it was going fine; in fact we made it to the last slide, and then it went a bit like this:

Me: You okay?
Duane: No I need to sit down
Me (to audience): Sorry about this, Duane has a bit of a virus....

I then turned around and saw Duane taking a bit of a kip on the floor; he'd gone from upright to horizontal in one seamless motion, brilliantly meaning that we avoided a Q&A session. After a short trip to the medical centre Duane recovered and was lobbed into a cab to catch a flight back up to the frozen north.

So lessons learnt today include the all-important one.... if you feel like crap, stay in bed; the world will go on and you won't end up as a YouTube highlight (please tell me someone got it!).


JPC - winner most mentally brilliant thing I saw at JavaOne

At JavaOne you always see some crap presentations, and you see some great presentations on things that you will never actually use in the real world. Then occasionally you wander into a presentation where people have done something in Java that is truly mental but actually has a point.

Welcome to JPC, the Java PC emulator. Yup, you can run an x86 PC on top of a JVM, including running Linux. Okay, so it isn't fast, but it does give a great demonstration of how mentally powerful modern machines are. The clever bit is the work they have done on compiling the x86 code into JVM bytecode.

The presenters did a good job of describing a very complex area, compiler design, and the pieces that they built.

Why, though, isn't this just another crazy concept that you will never use? Well, first off it means that you can run x86 on any JVM platform. This is important because, thinking 20 years ahead, will x86 code from 1990 still run on the hardware of the day? Quite probably not, so it is a great addition for the future security of archives. The other area where it wins over a virtualisation solution is that you can run it as a minor slave on a box rather than having to virtualise everything. If you have some ancient DOS programme that just processes and dumps a file, or has a very basic green screen interface, then you don't have to virtualise the entire platform before you can run it; you can just run a JVM and have the application running alongside the main OS. This could be quite a nice way of deploying those old crappy DOS applications on new shiny hardware, and doing so in a way that doesn't require expensive virtualisation of all those new shiny terminals.

It also has a great case (which is why a bunch of physics people built it) around grid computing, as a way to provide a more scalable approach to distributing applications that can utilise downtime without requiring a local install, giving a nice secure environment (the JVM) for that grid code to operate in. This is one of the few (hell, I think it's the ONLY) practical ways I've seen to deploy a multi-purpose grid in a secure sand-boxed environment on any hardware (they demo'ed it on mobile phones, sort of like the iPod supercomputer concept but without the hardware hack).

In the presentation they actually set up a grid with people in the room; that is confidence in quality.

Oh and they are claiming they can get to 50% of the native machine in performance.... Java is so slow that it runs a PC at half speed.... in a browser window.

Mental, very clever and something you could even see a use for.


Thursday, May 08, 2008

SCA and JBI - a match made in enterprise heaven

Some technologies are aimed at developers, some are aimed at fanboy developers; sometimes, however, technologies are aimed at the bigger picture: how to architect, deliver and operate enterprise systems. SCA and JBI are two such technologies, and there seems to be a misunderstanding around them being competing technologies. I've said before that SCA and JBI should work together, so this isn't news, but I think it's worth quickly explaining why the various vendors need to get the politics out of the way and start making SCA and JBI work together.

This is a standard sort of SCA view. So what are the good things about SCA?
  1. SCA helps you think about the business services not the technology
  2. SCA helps you construct services, and their teams, around a service view
  3. SCA gives you a management entity that fits with a service architecture not a technology architecture
  4. SCA destroys the dreadful layer marketectures that vendors push
So, simply put, SCA helps you build better SOA by giving you more of a SOA view of the world. The way to build good SCA is the way to build good architectures:
  1. Think about your services
  2. Organise your teams around those services
  3. Work out the best way to build each service
  4. Deliver
  5. Operate
The last is a brilliant part of SCA: unlike the layer diagrams of BPEL/EJB/Fish/Database etc, it makes clear at the operational level what the service architecture is, and it encourages you to think about the services and your delivery before getting into even the design (let alone the code).
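As a rough flavour of what that service-centric view looks like in code, here is a minimal SCA-style component sketch using the OSOA Java annotations (as implemented by runtimes such as Apache Tuscany). The service and interface names are mine, purely illustrative.

import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// Hypothetical business-level contracts, names invented for illustration.
interface OrderService { String placeOrder(String customerId, String productId); }
interface CreditCheck  { boolean isGood(String customerId); }

// The component implementation: the wiring of the CreditCheck reference lives in
// the composite descriptor, not in the code, so the deployable and manageable
// unit is the business service rather than a technology layer.
@Service(OrderService.class)
public class OrderServiceImpl implements OrderService {

    @Reference
    protected CreditCheck creditCheck;   // injected by the SCA runtime

    public String placeOrder(String customerId, String productId) {
        return creditCheck.isGood(customerId)
                ? "ORDER-" + customerId + "-" + productId
                : "REJECTED";
    }
}

The interesting part isn't the Java; it's that the composite, not the code, owns the wiring and therefore the operational view.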

So that is what SCA is good at. What about JBI? Well, let's first be clear: JBI is not for developers, it's for product companies. JBI is one of the few standards out there (SCA is another) that actually has a business case for its existence. The various scripting JSRs and some of the other fanboy elements out there just have a technical case. This is typical of a lot of IT, which aims to deliver technical flexibility to the detriment of business flexibility.

So how does JBI work with SCA? Well, JBI is about service engines communicating, so it's not about the services, it's about how the engines talk to each other. This means it works underneath the services. Combining SCA and JBI is therefore pretty easy.

In reality the call isn't from one code area to another; it's really between the service engines, i.e. the bit that JBI does. The goal of JBI therefore isn't portability of the code/service, it's portability of the engines.

This solves one of the big issues of SCA, which is that implementations are limited to a single vendor's platform. SCA also doesn't really have a great upgrade story, so you are again tied to what the vendor wants to do: if the BPEL engine you are happy with is upgraded, and the rules engine that you don't like is upgraded, and they all run on the same platform which is upgraded, then you have to move everything at once, which is a bit of a pain.

So the real vision here is for SCA and JBI to work together; unfortunately, at the moment the JBI group is missing Oracle, IBM and SAP, which makes it unlikely that this will happen. This is to the detriment of customers, as it means that SCA will remain a great single-vendor platform but will not have the portability and operational flexibility that JBI could deliver.

Politics appears to be getting in the way of an SCA/JBI match-up, as no-one I've spoken to on either side of the divide thinks it's anything other than a good idea technically.





Thursday, May 01, 2008

VM backup problems

One of the best things about working in a Virtual Machine environment for work is that you can take a full backup of the machine and, if there are issues, roll back to it. This has worked really well when I've been installing software that tended to trash Windows, but I came across a big problem today as I tried to resolve my space problem.

I decided to revert to a saved VM from 12 months ago, then just take the security updates and copy the more recent files I had over the top.

Err, slight problem. My work policy is for a new password every month and I pride myself on having passwords that are tough to break.

So yes, I have the VM. Yes, it starts up... but can I remember the password? Can I bollocks. So it's in to work to connect to the network to get the "right" tokens and password. The point however is clear.

If you are backing up a VM... make sure that you have a local admin account set up as well, otherwise it's just a nice set of files that gives you a pointless login screen.


