Thursday, May 27, 2010

Travel Insurance, Real Options, Cloud and VFM from IT

With the recent focus on cutting IT waste in government, a thought experiment. Before going on vacation you purchase a one-week travel insurance policy for £15. You go for a weekend in Paris and return without incident. Was the money spent on the policy value for money at the point of purchase? Would your answer be different looking back if you had your wallet picked on the Paris metro and ended up claiming £50? Or if you were hospitalised with some exotic virus, also caught on the metro, and had your medical expenses covered? Did you get value for money? Would an annual multi-trip policy costing £100 provide better value for money if you normally make 10 trips abroad? Would your answer change if you ended up making just 4 last year? When you go to buy the policy, how do you assess whether you're getting a 'good deal'? Is it the cheapest policy on the market with the same features? Is it a better deal not to buy insurance at all?
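
To make the ex-ante/ex-post distinction concrete, here is a back-of-the-envelope sketch in Python. The probabilities and claim sizes are entirely made up for illustration; the point is only that the policy's value at the point of purchase is a probability-weighted quantity, not whichever single outcome you happen to experience afterwards.

```python
# Toy ex-ante valuation of the £15 single-trip policy.
# Probabilities and claim sizes are invented purely for illustration.
premium = 15.0

# (probability, payout) pairs for the scenarios in the thought experiment
scenarios = [
    (0.90, 0.0),      # nothing goes wrong
    (0.08, 50.0),     # wallet picked on the metro
    (0.02, 2000.0),   # hospitalised, medical expenses covered
]

expected_payout = sum(p * payout for p, payout in scenarios)
print(f"Expected payout: £{expected_payout:.2f}")   # £44.00 with these made-up numbers
print(f"Premium paid:    £{premium:.2f}")

# Ex post you claim £0, £50 or £2000, but none of those single outcomes tells
# you whether £15 was a good price ex ante - only the probability-weighted
# view (plus your appetite for the £2000 tail) does.
```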

Bizarre as it may sound, almost all large IT capital expenditure decisions exhibit similar characteristics, and there is a similar difficulty in getting non-IT decision makers to understand the real value of significant IT investments. This non-obvious value derives from the risk reduction and future flexibility that an organisation gains when it makes a significant investment in IT capabilities.

Almost every IT investment proposition of reasonable size gets put through a "Value for Money" ( VFM) assessment. This is due to the combination of the increasingly large proportion of IT capital expenditure in a typical organisational budget and a recent legacy of IT silver bullets whose effects were closer to those of the projectile than the precious metal.

One major danger in making these assessments, though, is the over-simplification of the measures used in objective assessment. For example, how do you choose the best option when confronted with multiple options of varying complexity and benefits? The traditional and most common way is to evaluate IT capital expenditure as an upfront investment that will yield a stream of quantifiable benefits in the future. This is very much an NPV/IRR approach and by far the most common in industry. The basic method involves estimating future benefit streams and discounting them at a risk-adjusted rate to take into account both the uncertainty associated with the benefits and the time value of money, with earlier benefits weighted more heavily.
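
For concreteness, a minimal sketch of that standard NPV calculation (all figures illustrative):

```python
# Minimal NPV sketch for an IT investment - all figures are illustrative.
def npv(upfront_cost, benefits, discount_rate):
    """Discount a stream of future benefits and net off the upfront cost.
    benefits[0] is the benefit at the end of year 1, and so on."""
    discounted = sum(b / (1 + discount_rate) ** (t + 1)
                     for t, b in enumerate(benefits))
    return discounted - upfront_cost

# £1m upfront, £300k of quantifiable benefit per year for five years,
# discounted at 10% to reflect risk and the time value of money.
print(round(npv(1_000_000, [300_000] * 5, 0.10)))   # ~137,236: a positive NPV
```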

IT investment, however, is not so easily captured by this simple model, because it has value in two crucial areas that these measures do not cover:
- Risk reduction
- Flexibility in dealing with future circumstances

In both these cases, the value of the investment largely derives from investing now either to avoid situations that MAY arise in the future or to be able to react to them. The situations themselves, of course, may not materialise at all. That does not mean the investment has no value.

I started encountering this issue about 5 years ago, when a significant number of enterprises were considering, or at least being seduced by, SOA-based investment opportunities. A typical SOA initiative proposed by an IT department involved a significant investment in platform infrastructure: Enterprise Service Bus software, canonical data model and schema development, the modelling of business process orchestrations and so on. This was expensive and proved hard to defend on anything other than a "visionary" basis, since VFM assessments forced people down the hard, quantifiable benefits road. The value that derives from SOA making the organisation more flexible was typically not assessed, since that flexibility is only measurable in relation to a range of future possibilities that may or may not arise. For this reason, a number of enterprises opted for small SOA-lite initiatives, not because they lacked "vision" but mainly because they lacked the tools to assess the longer term value of flexibility. I see similar issues these days with clients dealing with "Green"-themed investments or significant investments in private-cloud models of infrastructure provision, where a one-off upfront investment buys future flexibility that is hard to value with existing tools.

How can a decision maker take this into account whilst evaluating competing proposals? Tools such as Real Options analysis exist, but they get into advanced mathematics too quickly, lose their intuitive appeal and become too hard for the average organisation to use effectively. The other technique I've seen organisations use is to make the risk reduction and flexibility features a standard part of the requirement, invite compliant competing proposals against it, and then choose the cheapest. That approach, though, leaves no mechanism for value-add to be accurately assessed.
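
One middle ground, short of full option-pricing mathematics, is simply to enumerate the future scenarios in which the flexibility would actually be exercised and weight them. A toy sketch, with every figure and probability invented for illustration:

```python
# A deliberately simple, scenario-weighted way to put a number on flexibility,
# somewhere between a bare NPV and full real-options mathematics.
# Every figure and probability below is invented purely for illustration.

p_need = 0.6                          # chance a major new requirement arrives within 3 years
respond_with_platform = 200_000       # cost of meeting it if the flexible platform exists
respond_without_platform = 1_600_000  # cost of meeting it with a one-off point solution
platform_investment = 600_000         # the upfront SOA / private-cloud style spend

# The platform behaves like an option: you only 'exercise' it if the need arises,
# and its payoff is the saving over the inflexible route.
expected_saving = p_need * (respond_without_platform - respond_with_platform)
option_value = expected_saving - platform_investment

print(f"Expected saving from flexibility: £{expected_saving:,.0f}")   # £840,000
print(f"Net value of buying flexibility:  £{option_value:,.0f}")      # £240,000

# A conventional VFM assessment that only counts benefits certain to occur
# would score the £600k platform spend as pure cost - which is exactly the
# problem described above.
```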

What else are people using to assess value from IT ?


Wednesday, October 18, 2006

Data Centre in a box

The New York Times has this story about Sun's new offering - a datacentre in a box. Google has been doing this internally for a while anyway and it seems like a reasonable idea to try out.

The key proposition is of course how to scale quickly and I guess the appeal is obvious for large companies experiencing rapid growth. Are there enough of these to justify this as a product line ? Time will tell.

What has surprised me is the lack of emergence of remotely hosted, massively distributed infrastructure built on cheap hardware at a large enough scale that Enterprises can simply provision capacity at will. My view is that something will emerge in this space in the next 5 years - essentially the Google infrastructure for Enterprises. I am convinced there is value in it.


The problem is that no one apart from Google has mastered this so far, and their business model does not yet extend to offering their low cost, large scale computing power to Enterprises. They are already moving in this direction with consumers, and it shouldn't be long before they or their competitors realise that the storage and information processing needs of most enterprises are an extension of the end user needs they currently serve.

Wednesday, September 20, 2006

What Web 2.0 means for Enterprise Architecture in the next 3-5 years

Since Web 2.0 has sneakily crept up on us all and replaced SOA as the instant ticket to techie street-cred, there has been a reasonable amount of thrashing around in the industry trying to come to grips with the concept. Inevitably, like most buzzwords marching firmly towards term-du-jour status, there is a bit of substance and an enormous amount of hype associated with it. The germ of substance, however, is what interests me at the moment, and I will try to lay out the key implications of the developments in the Web 2.0 area for Enterprise Architecture in particular. I'll save the highly cynical post about what exactly it is for some other time :)

I see the following Web 2.0 related opportunities that large organisations and Enterprises can act on at the moment:

1. The browser as the only UI Channel.
Most large companies can drive substantial cost out of the desktop environment by adopting rich, browser based Ajax solutions. Productivity gains should be the icing on the cake.
Based on some work with clients and a general sense of the environment, I firmly believe a large part of the cost of constructing, deploying and maintaining hundreds of desktop applications across a typical large Enterprise can be eliminated over 3-5 years by creating an Ajax/rich browser based user interface platform. Compared to the UI richness demanded by sites like MySpace and Google Spreadsheets, the needs of most business applications are rather simple, and most Enterprises have a large amount of cost and complexity locked up in deploying and maintaining these applications. The coming desktop refresh cycle from Microsoft is likely to further exacerbate dependency hell on the user desktop, and an investment in a UI platform delivering a location independent, rich interface channel is likely to be far more sensible than investing in mass OS updates. Needless to say, I am pessimistic about whether Vista can survive as a pure desktop environment in the 5-10 year time frame because of the emergence of Ajax type technologies.

2. Mashups as a composition mechanism for UI oriented services.
Last year, I heard Jaron Lanier speak at JAOO Aarhus. During his talk he brought out the concept of UI based integration, which resonated with me at the time, but I couldn't visualise it working in real life. One year later, it is arguably already becoming a widely practised paradigm (urgh) for application construction.
Using mashups is likely to take application integration and the concept of services much closer to the UI channel. Traditional portals were just a start, but a large number of business processes can effectively be seen in terms of simple mashup applications that exist to bring together not just services but living, breathing and changing applications (which may also have a UI element). Cool stuff like this has endless possibilities in the Enterprise and belongs in the mainstream of application construction patterns, not just on the nerdy fringes. The next generation of CIO dashboard applications belongs here.
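
To make the "CIO dashboard as mashup" idea a little more concrete, here is a toy sketch. In a real mashup the feeds would be fetched over HTTP or composed directly in the browser; here they are stubbed as dicts so the sketch runs as-is, and all endpoint names and fields are invented.

```python
# Toy 'CIO dashboard' mashup: compose two service feeds into a single view,
# close to the UI rather than in a heavyweight middleware layer.
import json

def fetch_order_summary():
    # stands in for e.g. GET http://intranet.example/orders/summary (hypothetical)
    return {"count_today": 42, "value_today": 18500}

def fetch_open_incidents():
    # stands in for e.g. GET http://intranet.example/itsm/incidents (hypothetical)
    return {"items": [{"id": "INC-1", "priority": "P1"},
                      {"id": "INC-2", "priority": "P3"}]}

orders = fetch_order_summary()
incidents = fetch_open_incidents()

# The 'mashup' itself is just a composition of the two feeds.
dashboard = {
    "orders_today": orders["count_today"],
    "order_value_today": orders["value_today"],
    "open_p1_incidents": [i["id"] for i in incidents["items"] if i["priority"] == "P1"],
}
print(json.dumps(dashboard, indent=2))
```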


3. Services acquire a Face (from our augury section)

Our current concept of services, essentially as self contained programs that exchange (frequently) message based input and output data, is likely to change. In my view the next generation of services will have a default user interface, and aggregation and composition of services is likely to occur on a UI channel in addition to a middleware/ESB backbone. Be prepared for U(ser Interface)S(ervice)B(us) type products that bring together mashups of applications and not just services.


To be continued....

Wednesday, August 23, 2006

More Black Swans..

Strangely, the day after I wrote this post about black swan events and mentioned the current terrorist threat in it, Daniel Finkelstein has this opinion piece in The Times today making some interesting points - again motivated by Taleb's quirky book. He points out the difference between probability and expectation and that our brains don't comprehend the difference very well. I have some issues with the argument.

The argument Taleb makes goes something like this. A low probability event may be disproportionately high impact so we should treat it differently. It may be entirely rational to short the market even if you expect it to go up because you think that if it goes down, it will go down a lot.
Taleb's investment philosophy is based on that. The problem is that it is overly simplistic. Most people have a specific time frame for realising their investment returns. Whilst in theory it may make sense to wait for a crash to make money, our patterns of expenditure are steady over time, and therefore a steady stream of moderate returns has more appeal than Taleb (and Finkelstein) expect. Ultimately, the crash may not happen during an investment lifetime, and by creating a strategy that is exclusively focussed on low probability events we run the risk of running out of money and ending up significantly worse off if the event does not occur.
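
To spell out the probability/expectation distinction with invented numbers:

```python
# Probability vs expectation, with made-up numbers.
# Suppose you think the market has a 70% chance of drifting up 5%
# and a 30% chance of crashing 40%.
p_up, r_up = 0.70, 0.05
p_down, r_down = 0.30, -0.40

expected_market_return = p_up * r_up + p_down * r_down
print(f"{expected_market_return:.1%}")   # -8.5%: negative, even though 'up' is more likely

# So it can be rational to be short even when you 'expect' (in the everyday
# sense of 'think more likely') the market to rise. The catch, as argued
# above, is whether you can stay solvent until the 30% event actually arrives.
```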

Basically I think the argument needs to consider finite time and funds before creating a strategy linked to the mispricing around black swan events.

In any case, an article quoting a similar point (from the same book) to one made on an obscure blog, appearing on the front page of The Times one day after it was posted, is probably a significant Black Swan event in its own right!

Tuesday, August 22, 2006

5 Things that could be evil about Google

Richard Brandt wonders if he is too soft on Google and unable to see a darker side to the story. As a Google admirer and investor on the one hand and a general sceptic on the other, I decided to list the 5 issues about Google that would worry me. Note that these are not necessarily things that make it fundamentally evil - they are just issues that might in the long run take some gloss off the current poster child for uber-successful tech companies.

1. Weird management structure - How long Google can keep the triumvirate as a structure able to lead a high growth company and industry is debatable. So far Google has had an easy run in that there really hasn't been a challenger, and an inward focussed form of leadership primarily driven by engineering excellence has worked. How long two strong willed engineers working with a veteran CEO can keep a decent working relationship is a huge unknown and, because of Google's weird shareholding structure, a breakdown could easily scupper the company's fortunes.

2. Lack of a clear non-search strategy - From all I can perceive about Google, if you take core search out of the equation, it seems to be a place full of clever whizz kids who are trying to out-do each other by coming up with 'cool stuff' in the hope that a percentage of the ideas generated in this fashion stick. If there is any major strategic thinking going on, it is simply not visible. Now, arguably it is better to bet on a collection of clever folk and their ability to cope with any changes and opportunities in the industry, and a lot of investors in Google have done exactly that. However, it is just as easy to fail and get egg on the face. If search were to see a Black Swan scenario (which I have written about in a previous post), Google could implode dramatically.

3. An inward looking corporate culture - This is a real danger, where Google employees and management get carried away by their own myth and start believing they are infallible because they have never tasted failure. Microsoft is a great blueprint for this. A symptom of this affliction is the creation of products aimed primarily at demonstrating engineering coolness and nerdery rather than clear user value. I can see signs of this already happening within Google, and this could be the one issue that takes it off track - the loss of focus on the user and the creation of products primarily to please itself. Arguably, this is how most tech companies start (and the successful ones manage to link it to a user proposition, as Google did), but somewhere along the line they need to reconcile their need to be a business with being a playshop for clever kids.

4. Secrecy - Richard Brandt has already alluded to this, but I want to extend his point to include investors. At this point, due to the lack of guidance provided by Google, investors are pretty much invited to buy shares not because they have visibility of a clear strategy but because they are asked to trust the management team and products. So far, this has been a good decision, but will it be true going forward? Google gives very little indication of what it is thinking and working on strategically, and the lack of external input on strategy could well lead to an inward looking 'we are always right' mentality.

5. Inability to change - Google is about search and sees this very much as its territory. Will it have the ability to quickly go beyond its original principles if broadband connectivity and the richness of the online experience improve? By this I am alluding to its original themes of no pop-up ads and simple text based pages. How willing will Google be to ditch these principles if and when the time comes to update them to take account of better connectivity and broadband availability? Will it remain wedded to a late 90s set of principles in 2009, thus allowing a newcomer to carve out a niche in richer media (like YouTube has done), or will it remain flexible and aggressive enough to change rapidly?

All in all, this reads more like a list of risks than anything tangible. If I were to pick a potential source of evil in the long term, I would probably choose the dangers of an elitist corporate culture that believes its own legend. Humility and the ability to contemplate failure are, in my opinion, a vital part of the corporate DNA for long term success, and they keep companies honest.

Black swan stops play and Gambler's Ruin

On Sunday I saw my first real black swan - and it wasn't a Cygnus atratus on a Sunday excursion. What I did see first hand was "a large-impact hard-to-predict rare event beyond the realm of normal expectations". Obviously I am referring to the 4th day of the England v Pakistan incident-fest which I happened to be at the Oval to see. I have also been recently reading Nassim Taleb's book on randomness and unpredictable events and this Sunday brought the message home fairly spectacularly.

Black Swan was a term famously used by the philosopher David Hume in a slightly different context. Hume pointed out that the conclusion that all swans are white cannot be logically drawn from any number of observations of white swans, yet just one observation of a black swan is enough to prove the converse - that not all swans are white. The point was to indicate the asymmetrical nature of rational deduction and how easy it can be to fall into logical traps. Taleb broadens the term to refer to rare, high-impact events in general. September 11 was a classic black swan event - random, high impact and one most people were totally ill-equipped to comprehend.

The basic idea that Taleb puts forward is that human brains are not wired and calibrated to deal with such random, high impact events, which occur more frequently than we think. For this reason, there may be opportunities to profit from the mispricing that occurs around them. Enron was arguably a classic black swan. There was a time when Enron was a Wall Street darling and featured in Fortune as America's most innovative company for 6 years in succession - and then it suddenly and spectacularly imploded. A speculator who shorted Enron and held on through its collapse would have made an absolute killing.

Similarly, on Sunday, before Mr. Hair decided he could use a little attention, England were 250-1 to win the Test match; within 30 minutes, they had been declared winners through a bizarre combination of events that had never happened before in the history of Test cricket. Again, a massive speculation opportunity for anyone who had taken a punt.

There is no doubt that black swan events tend to lead to irrational and emotive thinking. An example is the current security checking regime at UK airports. If you step back and think, there has never been an actual case of someone smuggling liquid explosives onto an aircraft at a UK airport - the alleged conspiracy was just that, and may or may not have worked. However, there HAS been an actual incident in which people successfully walked onto a tube train and blew it apart, so there is clear evidence that it can be done. Yet no one seems to be rooting for increased security on the tube, even when it has been shown with 100% certainty that it can easily be breached, while at the same time there is all manner of 'stringent' additional security checking at airports, where there is no actual evidence that the previous regime wouldn't have worked anyway! Another example: when offered insurance policies protecting them against either terrorist strikes or general loss, most people in a climate of fear choose the former even though it is covered by the latter. Basically, our mental processes break down when confronted by randomness.

In this New Yorker article, Malcolm Gladwell refers to an investment strategy that Taleb follows to exploit the mispricing around black swans. Very simply, his method seems to be to bet on low probability, high impact events and be willing to lose money on most trades, in the belief that a large win on a black swan will more than compensate. The theory is seductive in its contrarianism and simplicity, but it has the obvious problem of limited funds. It isn't totally clear how Taleb would avoid Gambler's Ruin. I guess one way is to keep attracting fresh funds at a rate higher than his general burn rate. In any case, there isn't a convincing argument that you can do very well exploiting real black swans by betting repeatedly on improbable events in the hope that one spectacular payday will make up for all the bleeding. So while Taleb is right about the concept, it is hard to see it as a basis for an investment philosophy.
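
A crude simulation makes the Gambler's Ruin point. All the numbers below are invented: each bet has positive expected value, yet with a finite bankroll a surprisingly large fraction of runs are wiped out before the big payoff arrives.

```python
# Crude Monte Carlo of the 'bleed slowly, win big rarely' strategy with a
# finite bankroll. Each bet has positive expectation (-1 + 0.01 * 150 = +0.5),
# but that is no protection against running out of money first.
import random

def run_once(bankroll=100.0, bet=1.0, p_win=0.01, win_multiple=150, periods=520):
    """One ten-year run of weekly bets on a rare, high-payoff event."""
    for _ in range(periods):
        if bankroll < bet:
            return "ruined"
        bankroll -= bet
        if random.random() < p_win:
            bankroll += bet * win_multiple
    return "ahead" if bankroll > 100.0 else "behind"

random.seed(1)
results = [run_once() for _ in range(10_000)]
for outcome in ("ruined", "behind", "ahead"):
    print(outcome, results.count(outcome))
```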

Monday, August 14, 2006

State in services architecture


I recently came across a post by Harry Pierson of Microsoft questioning the received wisdom that services should be stateless. Harry makes a valid point that any meaningful business process has state, and goes on to deduce that this implies services should have state too.

I think Harry is missing the crucial point here by implicitly conflating business process state and service state. In a good SOA (and by good I mean one which delivers on the key promise of loose coupling), business processes indeed have state - they would be meaningless otherwise. However, there is a difference between process state and service state.


In my view process state should not creep into services, as this makes it difficult to replace service implementations over time and can lead to tight coupling. Ideally, process state should be held in an intermediate metadata layer that forms part of the SOA infrastructure (insert your favourite three letter flavour-of-the-month acronym at this point - ESB being a prime candidate).


Service state is a different issue, though. I think good SOA architectures should ensure that service state is kept inside the service boundary and only exposed through contracts. This way, the service can truly deliver on the promise of being a self-contained blob of functionality that is independent of the context in which it is used. If this isn't the case in a specific instance, I would question the value of creating an SOA architecture at all.


To illustrate the point, imagine a service that manages customers in an organisation. A CustomerManagement service, in what I would see as good design, would support an operation to return the status of a customer and would allow process-agnostic states of the entity it manages to be tracked. For example, it might provide methods to change the state of a customer entity from enabled to disabled and methods to return the current status. Note that such interactions cause the underlying entity to change state across all the processes that may be using the service. However, this still leads to a loosely coupled architecture, because the complete maintenance of this state lies inside the service, so any change of state is only visible to other services through defined contracts.
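
A minimal Python sketch of that idea - the names are mine, not any particular platform's, and the point is only that the entity state lives inside the boundary and is visible solely through the contract:

```python
# Toy sketch of the CustomerManagement service described above: entity state
# lives entirely inside the service boundary and is only visible through
# the contract methods.
class CustomerManagementService:
    def __init__(self):
        self._customers = {}          # internal state - never exposed directly

    def add_customer(self, customer_id, name):
        self._customers[customer_id] = {"name": name, "status": "enabled"}

    def disable_customer(self, customer_id):
        """Change the entity's state; every process using the service sees
        the new status, but only via get_status below."""
        self._customers[customer_id]["status"] = "disabled"

    def get_status(self, customer_id):
        return self._customers[customer_id]["status"]
```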

Now consider an AddCustomer business process. In an organisation this may be a 3 day process involving 3 manual steps, including calls to the CustomerManagement service above. Obviously, keeping the state of this process inside a service makes it difficult to modify the process in the future - which is why the whole suite of BPM type tools (including Microsoft's own BizTalk) exists. Typically one would model this process not in a service but as some sort of orchestration across multiple services. The SOA infrastructure would manage the state of the process, while the services themselves would manage the business state of the associated entities, as indicated above.
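
Continuing the toy sketch (the step names are invented), the orchestration holds the process state in the infrastructure layer and touches the service only through its contract, so the process can change without the service changing:

```python
# Toy AddCustomer orchestration: the three-step process state is tracked here
# (standing in for the BPM / ESB layer), not inside CustomerManagementService,
# which only manages the business state of the customer entity.
class AddCustomerProcess:
    STEPS = ["credit_check", "compliance_review", "activate"]   # invented steps

    def __init__(self, service, customer_id, name):
        self.service = service
        self.customer_id, self.name = customer_id, name
        self.completed = []           # process state lives in the orchestration layer

    def complete_step(self, step):
        self.completed.append(step)
        if step == "activate":
            # Only the final step calls the service, through its contract.
            self.service.add_customer(self.customer_id, self.name)

    @property
    def status(self):
        return "done" if len(self.completed) == len(self.STEPS) else "in progress"
```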


This has gone on a bit but I think it is a crucial point which I summarise here in the form of some architecture principles :


- Services should only have state that is exposed through well defined contracts and makes sense in an Enterprise wide, cross-process sense ( like customer status).
- Service state should never creep outside the service boundary.
- Process state should ideally be maintained in infrastructure software - indeed it may form the justification for the purchase of such SOA infrastructure in an enterprise.
- Keeping process state in infrastructure and service state inside the boundary is key to creating loosely coupled, flexible architectures whilst supporting dynamic business processes.

Monday, August 07, 2006

Further on the SML debate

Harry Pierson and Pratul Dublish from Microsoft have written about my earlier post on SML. Before I respond I should clarify one thing: my post was not intended as an attack on SML. I am sure it is a solid piece of work. I just asked - why should we care?

The summary of their message, for me, is that the specification solves real deployment related issues today and is something that Microsoft will likely use to build tools for its technology stack and roll into future products. I did mention that I can see this being useful for some tool vendors, so I won't contest that. What I don't see is its value as a wider, industry-wide specification. Do I think it is a good idea for Microsoft to invest in developments that ease some of the deployment pain, a large part of which is due to the overly complex nature of its underlying platform? Absolutely. Do I think this is the start of an industry drive towards "Service Modeling Languages"? Absolutely not. Does this language have a future (or past) outside Microsoft? I don't think so. The main reason for my scepticism is that deployment related problems need to be tackled by platform simplification rather than by inventing tools to manage dependencies. Developments in virtualisation already give us enough tools to run 'what-if' deployment scenarios with a far greater degree of richness and control, so the problem has other solutions as well.

On a related theme, I find it odd that Microsoft has chosen to invest in modeling tools and specification languages as a means of tackling the fundamental issue of deployment complexity, which is largely due to the difficulty of dependency management. I suspect the investment would be better directed towards removing the underlying fragmentation and complexity from their OS products - this is what causes dependency hell in the first place. Modeling can help people get to grips with managing the complexity, which is laudable, but it does nothing to remove the underlying problem: Microsoft's infrastructure software (OS, databases, code frameworks such as .NET) should probably have a simpler way of dealing with deployment related dependencies.
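
For what it's worth, the kind of declarative dependency modelling such tooling enables is not complicated in principle. This is not SML, just a toy illustration with invented component names:

```python
# Toy declarative dependency model and check - NOT SML, purely illustrative.
required = {
    "PayrollApp": ["DotNetRuntime>=2.0", "SqlClient"],
    "SqlClient": ["DotNetRuntime>=2.0"],
}

installed = {"DotNetRuntime>=2.0"}   # what the target desktop actually has

def missing_dependencies(component, installed, required):
    """Walk the declared dependency graph and report anything not installed."""
    missing = []
    for dep in required.get(component, []):
        if dep not in installed:
            missing.append(dep)
            missing.extend(missing_dependencies(dep, installed, required))
    return missing

print(missing_dependencies("PayrollApp", installed, required))   # ['SqlClient']
```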

I do care that Microsoft is creating tooling to help in an area that causes a lot of pain on its platform. At the same time, I have to confess to not caring very much about the specification itself, and I just don't see the drivers for industry wide adoption - and by industry wide I mean solutions built on non-Microsoft stacks.

In my view this drive to create industry standards has a tendency to get out of hand very rapidly. Just witness the utter carnage the WS-* specifications have wrought on simplicity and common sense. The combined WS-* specification suite has exploded as people have added richness to a basic framework without actual needs bubbling up from the wider user community (thus provoking a reaction in the rediscovery of REST). These cycles are not uncommon in software: simple frameworks start small, and then as soon as they start being associated with the dreaded phrase 'industry standard' they rapidly proceed up their own collective posteriors and become bloated and unusable - causing a simpler, stripped down version to emerge. The cycle then repeats.

Arguably a similar thing happened on the Java stack with EJBs, Spring and Hibernate.

What's the summary ? I'm sure SML is an intellectually robust specification that will really help Microsoft tools behave consistently and offer some value in addressing migration and deployment related concerns on the MS platform. At this point, I just don't see this getting widespread adoption outside Microsoft.