Wednesday, August 23, 2006

More Black Swans...

Strangely, the day after I wrote this post about black swan events and mentioned the current terrorist threat in it, Daniel Finkelstein has this opinion piece in today's Times making some interesting points - again motivated by Taleb's quirky book. He points out the difference between probability and expectation, and that our brains don't grasp the distinction very well. I have some issues with the argument.

The argument Taleb makes goes something like this: a low-probability event may be disproportionately high impact, so we should treat it differently. It may be entirely rational to short the market even if you expect it to go up, because you think that if it does go down, it will go down a lot.
Taleb's investment philosophy is based on that. The problem is that it is overly simplistic. Most people have a specific time frame for realising their investment returns. Whilst in theory it may make sense to wait for a crash to make money, our patterns of expenditure are steady over time, so a steady stream of moderate returns has more appeal than Taleb (and Finkelstein) allow. Ultimately, the crash may not happen during an investment lifetime, and by creating a strategy exclusively focussed on low-probability events we run the risk of running out of money and ending up significantly worse off if the event never occurs.

Basically I think the argument needs to consider finite time and funds before creating a strategy linked to the mispricing around black swan events.
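The probability/expectation distinction can be made concrete with a toy calculation (all numbers here are invented for illustration): a market that usually drifts up a little but occasionally crashes hard can make the short side the better bet on expected value, even though it loses most of the time.

```python
# Toy illustration with invented numbers: a market that rises 90% of the
# time by a little, and falls 10% of the time by a lot.
p_up = 0.90          # probability the market rises
gain_up = 0.01       # modest rise: +1%
loss_down = -0.20    # rare crash: -20%

# Expected value of a long position, and of the mirror-image short.
ev_long = p_up * gain_up + (1 - p_up) * loss_down
ev_short = -ev_long

print(f"EV long:  {ev_long:+.4f}")   # negative: the likely bet loses on average
print(f"EV short: {ev_short:+.4f}")  # positive: the unlikely bet wins on average
```

The short loses in nine years out of ten, which is exactly why an investor with steady expenditure and a finite horizon may still rationally prefer the steadier side.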

In any case, an article quoting a similar point (from the same book) to one made on an obscure blog, appearing on the front page of The Times one day after it was posted, is probably a significant Black Swan event in its own right!

Tuesday, August 22, 2006

5 Things that could be evil about Google

Richard Brandt wonders if he is too soft on Google and unable to see a darker side to the story. As a Google admirer and investor on the one hand and a general sceptic on the other, I decided to list the 5 issues about Google that would worry me. Note that these are not necessarily things that make it fundamentally evil - they are just issues that might in the long run take some gloss off the current poster child for uber-successful tech companies.

1. Weird management structure - How long Google can keep the triumvirate as a structure able to lead a high-growth company and industry is debatable. So far Google has had an easy run: there hasn't really been a challenger, and an inward-focussed form of leadership driven primarily by engineering excellence has worked. How long two strong-willed engineers working with a veteran CEO can keep a decent working relationship is a huge unknown, and because of Google's weird shareholding structure, a falling-out could easily scupper the company's fortunes.

2. Lack of clear non-search strategy - From all I can perceive about Google, if you take core search out of the equation, it seems to be a place full of clever whizz kids trying to outdo each other by coming up with 'cool stuff', hoping that a percentage of the ideas generated this way stick. If there is any major strategic thinking going on, it is simply not visible. Now, arguably it is better to bet on a collection of clever folk and their ability to cope with whatever changes and opportunities the industry throws up, and a lot of investors in Google have done exactly that. However, it is just as easy to fail and get egg on the face. If search were to see a Black Swan scenario (which I have written about in a previous post), Google could implode dramatically.

3. An inward-looking corporate culture - There is a real danger that Google employees and management get carried away by their own myth and start believing they are infallible because they have never tasted failure. Microsoft is a great blueprint for this. A symptom of this affliction is the creation of products aimed primarily at demonstrating engineering coolness and nerdery rather than clear user value. I can see signs of this already happening within Google, and this could be the one issue that takes it off track - the loss of focus on the user and the creation of products primarily to please itself. Arguably, this is how most tech companies start (and the successful ones manage to link it to a user proposition, as Google did), but somewhere along the line they need to reconcile being a business with being a playshop for clever kids.

4. Secrecy - Richard Brandt has already alluded to this, but I want to extend his point to investors. At this point, given the lack of guidance provided by Google, investors are pretty much invited to buy shares not because they have visibility of a clear strategy but because they are asked to trust the management team and products. So far this has been a good decision, but will it remain true going forward? Google gives very little indication of what it is thinking and working on strategically, and the lack of external input on strategy could well lead to an inward-looking 'we are always right' mentality.

5. Inability to change - Google is about search and sees this very much as its territory. Will it have the ability to move quickly beyond its original principles as broadband connectivity and the richness of the online experience improve? By this I am alluding to its original themes of no pop-up ads and simple text-based pages. How willing will Google be to ditch these principles if and when the time comes to update them for better connectivity and broadband availability? Will it remain wedded to a late-90s set of principles in 2009, allowing a challenger to carve out a niche in richer media (as YouTube has done), or will it remain flexible and aggressive enough to change rapidly?

All in all, this reads more like a list of risks than anything tangible. If I had to pick a potential source of evil long term, I would probably choose the dangers of an elitist corporate culture that believes its own legend. Humility and the ability to contemplate failure are, in my opinion, vital parts of the corporate DNA for long-term success, and they keep companies honest.

Black swan stops play and Gambler's Ruin

On Sunday I saw my first real black swan - and it wasn't a Cygnus atratus on a Sunday excursion. What I did see first hand was "a large-impact, hard-to-predict, rare event beyond the realm of normal expectations". Obviously I am referring to the fourth day of the England v Pakistan incident-fest, which I happened to be at the Oval to see. I have also recently been reading Nassim Taleb's book on randomness and unpredictable events, and this Sunday brought the message home fairly spectacularly.

Black Swan was a term originally used by the philosopher David Hume in a slightly different context. Hume pointed out that the conclusion that all swans are white cannot be logically drawn from any number of observations of white swans, yet just one observation of a black swan is enough to prove the converse - that not all swans are white. The point is the asymmetric nature of rational deduction, and how easy it is to fall into logical traps. Taleb broadens the usage, applying the term to rare, high-impact events in general. September 11 was a classic black swan event - random, high impact and one most people were totally ill-equipped to comprehend.

The basic idea Taleb puts forward is that human brains are not wired and calibrated to deal with random, high-impact events, which occur more frequently than we think. For this reason, there may be opportunities to profit from the mispricing that occurs around them. Arguably, Enron was a classic black swan opportunity. There was a time when Enron was a Wall Street darling, featured in Fortune as America's most innovative company six years in succession - and then it suddenly and spectacularly imploded. A speculator who shorted Enron before the collapse would have made an absolute killing.

Similarly, on Sunday, before Mr. Hair decided he could use a little attention, England were 250-1 to win the Test match; within 30 minutes, they had been declared winners through a bizarre combination of events that had never happened before in the history of Test cricket. Again, a massive speculative opportunity for anyone who had taken a punt.

There is no doubt that black swan events tend to lead to irrational and emotive thinking. An example is the current security-checking regime at UK airports. If you step back and think, there has never been an actual case of someone smuggling liquid explosives onto an aircraft at a UK airport - the alleged conspiracy was just that, and may or may not have worked. There HAS, however, been an actual incident in which people successfully walked onto a tube train and blew it apart, so there is clear evidence that this can be done. Yet no one seems to be calling for increased security on the tube, even though it has been shown with 100% certainty that it can easily be breached, while all manner of 'stringent' additional checking has appeared at airports despite there being no actual evidence that the previous security regime wouldn't have worked anyway! Another example: when offered insurance policies protecting against either terrorist strikes specifically or loss in general, most people in a climate of fear choose the former, even though it is covered by the latter. Basically, our mental processes break down when confronted by randomness.

In this New Yorker article, Malcolm Gladwell describes the investment strategy Taleb follows to exploit mispricing around black swans. Very simply, his method seems to be to bet on low-probability, high-impact events and be willing to lose money on most trades, in the belief that a large win on a black swan will more than compensate. The theory is seductive in its contrarianism and simplicity, but it has the obvious problem of limited funds: it isn't totally clear how Taleb would avoid Gambler's Ruin. I guess one way is to keep attracting fresh funds at a rate higher than his general burn rate. In any case, there isn't a convincing argument that you can do very well exploiting real black swans by repeatedly betting on improbable events in the hope that one spectacular payday will make up for all the bleeding. So while Taleb is right about the concept, it is hard to see it as the basis for an investment philosophy.
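To see why limited funds matter, here is a minimal Monte Carlo sketch (the odds and payouts are made up for illustration, not Taleb's actual positions): each bet has positive expected value, yet a finite bankroll is frequently exhausted before the rare win arrives.

```python
import random

def black_swan_bettor(bankroll=100.0, bet=1.0, p_win=0.01, payout=150.0,
                      max_bets=2000, rng=None):
    """Stake a fixed amount on a 1%-probability event paying 150x.
    EV per bet is +0.5 (0.01 * 150 - 1), but ruin is still common."""
    rng = rng or random.Random()
    for _ in range(max_bets):
        if bankroll < bet:
            return 0.0          # ruined before the swan showed up
        bankroll -= bet
        if rng.random() < p_win:
            bankroll += bet * payout
    return bankroll

rng = random.Random(42)
trials = 500
ruined = sum(black_swan_bettor(rng=rng) == 0.0 for _ in range(trials))
print(f"Ruined in {ruined}/{trials} runs despite positive EV per bet")
```

Attracting fresh funds faster than the burn rate is effectively a way of enlarging the bankroll, which is the classic escape from Gambler's Ruin.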

Monday, August 14, 2006

State in services architecture

I recently came across a post from Harry Pierson of Microsoft questioning the received wisdom that services should be stateless. Harry makes the valid point that any meaningful business process has state, and goes on to deduce that services should therefore have state too.

I think Harry misses a crucial point here by implicitly conflating business-process state and service state. In a good SOA (and by good I mean one which delivers on the key promise of loose coupling), business processes do indeed have state - they would be meaningless otherwise. However, there is a difference between process state and service state.

In my view, process state should not creep into services, as this makes it difficult to replace service implementations over time and can lead to tightly coupled architectures. Ideally, process state should be held in an intermediate metadata layer that forms part of the SOA infrastructure (insert your favourite three-letter flavour-of-the-month acronym here - ESB being a prime candidate).

Service state is a different issue. Good SOA architectures should ensure that service state is kept inside the service boundary and exposed only through contracts. That way, the service can truly deliver on the promise of being an independent blob of functionality, independent of the context in which it is used. If this isn't the case in a specific instance, I would question the value of creating an SOA at all.

To illustrate the point, imagine a service that manages customers in an organisation. A well-designed CustomerManagement service would support an operation to return the status of a customer and would allow process-agnostic states of the entity it manages to be tracked. For example, it might provide methods to change the state of a customer entity from enabled to disabled, and methods to return the current status. Note that such interactions cause the underlying entity to change state across all the processes that use the service. This still leads to loosely coupled architectures, because the complete maintenance of that state lies inside the service, so any change to it is only visible to other services through defined contracts.

Now consider an AddCustomer business process. In an organisation, this might be a 3-day process involving three manual steps, including calls to the CustomerManagement service above. Obviously, keeping the state of this process inside a service makes it difficult to modify the process in the future - that is why the whole suite of BPM-type tools (including Microsoft's own BizTalk) exists. Typically one would model such a process not in a service but as some sort of orchestration across multiple services. The SOA infrastructure would manage the state of the process, while the services themselves would manage the business state of the associated entities, as indicated above.

This has gone on a bit, but I think it is a crucial point, which I summarise here in the form of some architecture principles:

- Services should only hold state that is exposed through well-defined contracts and makes sense in an enterprise-wide, cross-process sense (like customer status).
- Service state should never creep outside the service boundary.
- Process state should ideally be maintained in infrastructure software - indeed, it may form the justification for purchasing such SOA infrastructure in an enterprise.
- Keeping process state in infrastructure and service state inside the boundary is key to creating loosely coupled, flexible architectures that still support dynamic business processes.
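The split between business state (inside the service) and process state (in the orchestration layer) can be sketched as follows; the class and method names are hypothetical, invented purely to illustrate the principles above:

```python
class CustomerManagementService:
    """Service: owns customer (business) state, visible only via its contract."""
    def __init__(self):
        self._status = {}  # internal state, never exposed directly

    def enable_customer(self, customer_id):
        self._status[customer_id] = "enabled"

    def disable_customer(self, customer_id):
        self._status[customer_id] = "disabled"

    def get_status(self, customer_id):
        return self._status.get(customer_id, "unknown")


class AddCustomerOrchestration:
    """Infrastructure: tracks where each long-running process has got to.
    The process step lives here, not inside the service."""
    def __init__(self, service):
        self._service = service
        self._step = {}  # process state: current step per request

    def start(self, customer_id):
        self._step[customer_id] = "credit-check"

    def complete_credit_check(self, customer_id):
        self._step[customer_id] = "activation"
        self._service.enable_customer(customer_id)  # call through the contract


svc = CustomerManagementService()
proc = AddCustomerOrchestration(svc)
proc.start("c-42")
proc.complete_credit_check("c-42")
print(svc.get_status("c-42"))  # -> enabled
```

Replacing the service implementation or rewiring the process now only means touching one side of the boundary, which is the loose coupling the principles are after.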

Monday, August 07, 2006

Further on the SML debate

Harry Pierson and Pratul Dublish from Microsoft have written about my earlier post about SML. Before I respond, I should clarify one thing: my post was not intended as an attack on SML. I am sure it is a solid piece of work. I just asked - why should we care?

The summary of their message, for me, is that the specification solves real deployment-related issues today and is something Microsoft will likely use to build tools for its technology stack and roll into future products. I did mention that I could see this being useful for some tool vendors, so I won't contest that. What I don't see is its value as a wider, industry-wide specification. Do I think it is a good idea for Microsoft to invest in developments that ease some of the deployment pain, a large part of which is due to the overly complex nature of its underlying platform? Absolutely. Do I think this is the start of an industry drive towards "Service Modeling Languages"? Absolutely not. Does this language have a future (or past) outside Microsoft? I don't think so. The main reason for my scepticism is that deployment-related problems need to be tackled by platform simplification rather than by inventing tools to manage dependencies. Developments in virtualisation already let us run 'what-if' deployment scenarios with a far greater degree of richness and control, so the problem has other solutions as well.

On a related theme, I find it odd that Microsoft has chosen to invest in 'modeling tools and specification languages' as a means of tackling the fundamental issue of deployment complexity, which is largely due to the difficulty of dependency management. I suspect its investment would be better directed towards removing the underlying fragmentation and complexity from its OS products - this is what causes dependency hell in the first place. Modeling can help people get to grips with managing the complexity, which is laudable, but it does nothing to remove the underlying problem: Microsoft's infrastructure software (OS, databases, code frameworks such as .NET) should probably have a simpler way of dealing with deployment-related dependencies.

I do care that Microsoft is creating tooling to help in an area that causes a lot of pain on its platform. At the same time, I have to confess to not caring very much about the specification itself, and I just don't see the drivers for industry-wide adoption - and by industry-wide I mean solutions built on non-Microsoft stacks.

In my view, this drive to create industry standards has a tendency to get out of hand very rapidly. Just witness the utter carnage the WS-* specifications have wrought on simplicity and common sense. The combined WS-* suite exploded as people added richness to a basic framework without actual needs bubbling up from the wider user community (thus provoking a reaction in the rediscovery of REST). These cycles are not uncommon in software: simple frameworks start small, and as soon as they become associated with the dreaded phrase 'industry standard' they rapidly proceed up their own collective posteriors, becoming bloated and unusable - causing a simpler, stripped-down alternative to emerge. The cycle then repeats.

Arguably a similar thing happened on the Java stack with EJBs, Spring and Hibernate.

What's the summary? I'm sure SML is an intellectually robust specification that will help Microsoft tools behave consistently and offer some value in addressing migration- and deployment-related concerns on the MS platform. At this point, I just don't see it getting widespread adoption outside Microsoft.

Wednesday, August 02, 2006

Efficient markets, data dredging and the wisdom of crowds

I've been commenting on some blogs on topics touching tangentially on the psychology of investing, data-dredging and the Efficient Market Theory.

First, the oft-repeated argument that long-term market outperformance by an investor somehow violates the Efficient Market Theory. I have seen this claim often and have never quite understood the underlying logic. It came up again recently in the light of Anthony Bolton's retirement from the Fidelity Special Situations fund. Does his long-term outperformance contradict the EMT?

If you take an arbitrarily long time period and measure average market performance over it, clearly some investors' records will be superior to that average - they can't all be the same. Looking back, it is easy to identify many such patterns and instances, and the fact that this happens has nothing to do with the EMT. It will happen simply because of the nature of averages and the fact that the sum total of all investors is the market.

This outperformance could well be explained simply by chance rather than any particular skill. The more relevant question is whether you can predict outperformance in advance, rather than by creative mining of past data. Could Anthony Bolton's or Buffett's performance have been predicted when they started out? Even here it gets tricky, as the number of holdings is also significant. Investors taking increased risk by holding a smaller number of stocks have a greater probability of long-term outperformance (effectively a reward for the increased risk) without contradicting the EMT. People like Buffett and Munger fit into this bracket.
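The role of chance can be illustrated by simulating fund managers whose yearly results are pure coin flips (all the numbers here are invented): even with zero skill anywhere, a respectable number of 'star' long-term records emerge.

```python
import random

def coin_flip_managers(n_managers=10_000, n_years=15, seed=7):
    """Each 'manager' beats the market in a given year with probability 0.5.
    Count how many beat it in at least 12 of 15 years by pure luck."""
    rng = random.Random(seed)
    lucky = 0
    for _ in range(n_managers):
        wins = sum(rng.random() < 0.5 for _ in range(n_years))
        if wins >= 12:
            lucky += 1
    return lucky

lucky = coin_flip_managers()
print(f"{lucky} of 10,000 skill-free managers 'outperform' in 12+ of 15 years")
```

With ten thousand managers and no skill at all, well over a hundred such records appear, which is why a star record alone says little about the EMT.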

So I believe that long-term outperformance is entirely consistent with the EMT. That doesn't mean I am convinced by the EMT, though.

Which brings us to the issue of data dredging, something that pollutes even a large amount of scientific research but is especially noticeable in economics. A great (if not entirely serious) example is here . In fact, Chris seems to be a data dredger extraordinaire, and a lot of his analysis fits the definition of the term to a T. That still makes him one of the most original and intellectually stimulating bloggers out there. The key point he, perhaps deliberately, never mentions is that just because you have uncovered correlations and patterns in past data does not mean the relationships weren't due to chance, with zero predictive value going forward.

Finally, some discussion of cyclical markets and rationality is here . I think people underestimate the effectiveness of decision-making by the 'average' person in aggregate, as part of a large, diverse group. Admittedly, my views may have something to do with one of the books I am currently reading.

Modelling Overload

Another day, another modelling-language specification that will unify the world.

We are saved. All software development shops everywhere will use just one way of depicting systems; everyone will know exactly what the complicated constructs mean; systems will become incredibly easy to develop and maintain; unnecessary documentation will vanish; and the prosperity and world peace that ensue will constitute a giant leap for mankind, etc. etc.

For a classic example of how the IT industry wastes precious dollars on misguided quasi-unification initiatives (I hinted at this in a previous post), check out the recent release of the Service Modeling Language from Messrs. Microsoft, BEA, Cisco et al. They were pre-empted by their well-intentioned but increasingly irrelevant brethren at the OMG, who spewed forth SysML about a month ago. What impact these will have on the real world I am not sure, as they do not address a problem anyone is actually trying to solve. There will obviously be enough to keep the tool vendors excited for six months or so, though.

It's tempting and easy to indulge in nitpicking the specifications, but that's not my point. Why, when the history of software is littered with unsuccessful attempts to impose monolithic modelling constructs that virtually no one ever ends up adopting, do organisations persist in wasting time and money dreaming up the next mega-specification? Arguably the only modelling notation that has gained widespread currency is UML, and even that mainly in what Martin Fowler calls UMLAsSketch mode. The MDA-related fairy dust sprinkled on UML (giving us the hallucinogenic UML 2.0) never really caught on, and exists these days mainly to provide material for architecture conferences, which seem to have a social obligation to schedule a "How MDA is changing the world" talk.

We don't need a meta-modelling specification. The whizziest modelling tools are often used as a substitute for clear thinking, and there is no magic tool that can solve that particular problem.

Update - Harry Pierson at Microsoft responds to my post and makes the point that SML is a bottom-up approach and therefore stands a better chance of working. The point is a valid one, but as I point out in the comments:
- Solutions increasingly involve technology from many vendors, and a bottom-up approach has limitations there.
- Is the solution to a communication problem really to have everyone speak the same language?