Unless you have been living under a rock for the last couple of weeks, you will be aware of the significant outage that the RBS/NatWest/Ulster Bank group has experienced.
Just in case you do claim a rock as your humble abode, take a look at the links below for the details:
http://www.theregister.co.uk/2012/06/25/rbs_natwest_what_went_wrong/
http://www.telegraph.co.uk/finance/newsbysector/banksandfinance/9355028/City-puts-cost-of-RBS-glitch-at-between-50m-and-100m.html
This outage has sent a ripple through the mainframe world, especially among those clients who run CA's venerable scheduling tool CA-7 and those looking to use offshore operations resources. Without wishing to point the finger at my former employer or at the quality of CA-7, a product that I have sold in the past, it would be remiss of me not to mention that other mainframe scheduling products are available:
http://www-01.ibm.com/software/tivoli/products/scheduler/
But seriously, setting aside a bit of professional gloating and opportunism, a fundamental question arises: should large organizations run core business-critical applications on the mainframe when a single outage at a bank can cost upwards of £100m?
Well, let's look at some history:
- The mainframe has been around for some 40 years and throughout this time has been supporting key clients' IT needs.
- Back in 1974, IBM committed to maintaining investment in this platform.
- IBM provides 'integrity' commitments for the O/S.
- System z mean-time-between-failure statistics run into decades.
But beyond these IBM commitments, large organizations have been running their IT largely glitch-free on this platform for years. In fact, the very rarity of this type of outage is why it has been such big news. By way of proof: 96 of the top 100 banks globally run System z, and this is the first time I have heard of an outage of this magnitude.
Other platforms are not without their high-profile glitches, either:
http://gigaom.com/cloud/will-amazon-outage-ding-cloud-confidence/
http://www.computerworlduk.com/news/networking/3246942/london-stock-exchange-tight-lipped-on-network-outage/
http://www.itnews.com.au/News/306421,vodafone-suffers-near-nationwide-3g-outage.aspx
In summary, one of my clients runs 93% of their business logic on 4 mainframe servers that account for 7% of their total IT budget, whilst their 4,000 distributed servers deliver the remaining 7% of the business logic and consume 93% of the total IT spend. Which raises the question: which looks the better bet?
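To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 93%/7% splits are the client figures above; the £100m total IT budget is a hypothetical assumption chosen purely for illustration.

```python
# Back-of-the-envelope cost per unit of business logic delivered.
# The 93%/7% splits are the client figures quoted above; the total
# annual IT budget below is an assumed figure, not from the client.

TOTAL_BUDGET_GBP = 100_000_000  # hypothetical, for illustration only

platforms = {
    # name: (share of business logic, share of IT budget, server count)
    "mainframe": (0.93, 0.07, 4),
    "distributed": (0.07, 0.93, 4000),
}

for name, (logic_share, cost_share, servers) in platforms.items():
    spend = TOTAL_BUDGET_GBP * cost_share
    # Cost per percentage point of business logic delivered
    cost_per_point = spend / (logic_share * 100)
    print(f"{name:>11}: {servers:5d} servers, £{spend:>11,.0f} spend, "
          f"£{cost_per_point:>11,.0f} per % of business logic")
```

On those assumptions, the distributed estate costs roughly 175 times more per percentage point of business logic delivered than the mainframe does.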
Answers on a postcard to @stevendickens3