Tuesday, January 29, 2013

2013 The Year of the Mainframe

As we get up to speed in 2013, and following a week in IBM's Montpellier lab, I thought I would kick off the year with some musings on two trends we should be looking for in 2013.

As Big Data looks to replace cloud as the IT industry's next buzzword, equal focus is being placed on how you analyse that data.  From a mainframe perspective, the box has always been the biggest data server in the datacentre and is typically the source of the organisation's most trustworthy data.  As the system of record or master data server, the mainframe typically holds the one version of the truth when it comes to customer accounts or operational data.  The prevailing trend over the last 20 years has been to ETL data off the mainframe to distributed systems so that it can be analysed.  This logic was sound when the mainframe was unable to process long, complex queries cost-effectively without burning excessive MIPS.  However, as offload engines have matured and z core chip speeds have increased, this logic has eroded.  With the launch of IBM's DB2 Analytics Accelerator, the logic has been turned on its head.

Imagine, if you will, the development of graphics accelerator cards in PCs and the rise of more complex games to take advantage of the underlying hardware capability.  Now imagine the IDAA (and I don't mean these guys) as a graphics accelerator card for the mainframe, where graphics = analytics, and you get the picture.  Why take anywhere between 9 and 11 copies of your mainframe data on average, spread them around your distributed estate, and then pay for the storage to hold them, back them up and generally manage data that is at least a day old?  Especially when you can query 'fresh' operational data in situ and trust its origin and security.

Without wishing to labour the point, the mainframe was the first 'Cloud' server, and all my more 'senior' z colleagues will go misty-eyed at the word 'bureau' and the parallels with the latest hip and trendy craze of Cloud.  So before I begin to sound like the metaphorical industry Dad, let me elaborate on why zCloud should be a trend in 2013.  Look at any industry definition of a cloud and terms like available, virtualised, scalable, secure and multi-tenant are central to a platform's claim to be one.  Well, the mainframe has had these attributes for the last 40 years.  I fully accept that this heritage has been based on z/OS and its predecessors, and whilst there is still life in the old girl yet, other operating systems are widely available.  So let's focus on Linux, and particularly the likes of SUSE and Red Hat.  If you want a platform to handle extreme-scale Linux consolidation, and in particular workloads such as high-I/O databases, look no further than the mainframe.

In one recent RFI that my team responded to, we positioned that an EC12 was able to easily accommodate 80 virtual machines per core, and in so doing be the cheapest platform for providing virtualised Red Hat guests.  I know that last statement is up there with "the world isn't flat, it's round", but look at the numbers: £131 per VM per month vs £290 per VM per month for a comparable x86 and VMware service...
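To show how that per-VM price gap compounds at scale, here is a rough back-of-the-envelope sketch. The two monthly prices come from the RFI figures above; the guest count of 80 (one core's worth on the EC12 sizing quoted) and the three-year term are purely illustrative assumptions, not figures from the RFI.

```python
# Back-of-the-envelope comparison of virtualised guest hosting costs,
# using the per-VM monthly prices quoted above. Guest count and contract
# term below are illustrative assumptions only.

Z_COST_PER_VM = 131    # GBP per VM per month (zLinux guest, from the RFI)
X86_COST_PER_VM = 290  # GBP per VM per month (comparable x86 + VMware service)

def monthly_saving(vm_count: int) -> int:
    """Monthly saving in GBP from hosting vm_count guests on z rather than x86."""
    return (X86_COST_PER_VM - Z_COST_PER_VM) * vm_count

def term_saving(vm_count: int, months: int) -> int:
    """Saving in GBP over a contract term of the given length in months."""
    return monthly_saving(vm_count) * months

# Example: 80 guests (one EC12 core's worth) over an assumed three-year term
print(monthly_saving(80))   # 12720 GBP per month
print(term_saving(80, 36))  # 457920 GBP over three years
```

Even on these rough numbers, the £159 monthly difference per guest turns into six-figure savings over a typical contract term once you consolidate at any reasonable scale.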

If the above has whetted your appetite for exploring the world of z and how the mainframe continues to evolve in 2013, join me and my colleagues at the next #mainframedebate on Twitter on the 7th February, between 4-5pm GMT (11am-12 noon EST), to find out more.  Even if you think the above is heresy and you want to go 10 rounds on the numbers for zCloud, join us and let's make it a lively #mainframedebate, or contact me directly at @StevenDickens3...