Tuesday, February 5, 2013

February 5th Announcements for System z

If you haven't already seen the IBM announcements relating to System z, let me give you my thoughts (for what they are worth) and explain why you should care:

zCloud
It really amazes me that Cloud on z, or the much more snappily titled zCloud, isn't getting more press. Take, for example:

US organisation Nationwide's private cloud on IBM zEnterprise has replaced thousands of standalone servers, eliminating both capital and operational expenditure. The initial consolidation exercise is estimated to have saved the company some $15 million over three years.

Check out the detail at http://ibm.co/13KWqeW

If you are doing Linux at scale (and by scale I mean more than 400 Virtual Machines), then the cost case for z is massively compelling. If you want to throw enterprise software into the mix from the likes of Oracle, or even IBM's DB2 and WebSphere, then the license savings alone can pay for the underlying tin twice over within a year.
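To put some illustrative numbers behind that claim, here is a minimal Python sketch of the consolidation arithmetic; every figure in it (core counts, VM density, per-core licence cost, hardware cost) is an assumption for the sake of argument, not a quoted price:

```python
# Illustrative sketch only: all figures are assumptions, not list prices.

CORES_X86 = 200            # assumed x86 estate hosting ~400 VMs
CORES_IFL = 10             # assumed IFLs after consolidation on z
LICENCE_PER_CORE = 25_000  # assumed per-core enterprise licence (GBP/year)
HARDWARE_COST = 2_000_000  # assumed cost of the underlying 'tin' (GBP)

x86_licences = CORES_X86 * LICENCE_PER_CORE   # £5,000,000/year
z_licences = CORES_IFL * LICENCE_PER_CORE     # £250,000/year
annual_saving = x86_licences - z_licences     # £4,750,000/year

print(f"Annual licence saving: £{annual_saving:,}")
print(f"Multiple of hardware cost recovered in year one: "
      f"{annual_saving / HARDWARE_COST:.1f}x")
```

With these assumed inputs the licence saving alone recovers the hardware roughly 2.4 times over in the first year; your own core counts and licence rates will move that multiple, which is exactly why the sizing exercise is worth doing.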

The IBM Enterprise Linux Server as the underlying platform for a private/public Linux cloud is a great message, and it doesn't come with the 'legacy' baggage so often (incorrectly, IMHO, but don't get me started...) associated with System z. For heaven's sake, it is a large virtualised server that runs Linux; how complex can that be?

Operational Analytics
As I have previously blogged, having a 'system of record' and then proliferating copies of this data to a myriad of distributed servers, just so you can run complex analytics against it, strikes me as folly. When organisations take between 8 and 12 copies of their core data, squirt it out to other platforms via a multitude of ETL methods, and then have to manage these disparate systems, the costs, not surprisingly, spiral out of control. Not to mention the data management, consistency and operational decision-making challenges this poses. Why oh why not just keep your data where it lives and breathes, i.e. in your core system, and analyse it from there? Surely this approach must be cheaper and avoid a proliferation of data models...
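As a rough illustration of why the costs spiral, here is a back-of-envelope Python sketch; the data volume, storage cost and per-feed ETL cost are all assumptions, and only the 8-12 copies figure comes from the paragraph above:

```python
# Illustrative sketch: rough annual cost of proliferating copies of a
# system of record. All inputs are assumptions, not measured figures.

CORE_DATA_TB = 50            # assumed size of the system of record
COPIES = 10                  # mid-point of the 8-12 copies cited above
COST_PER_TB_YEAR = 2_000     # assumed all-in storage cost (GBP/TB/year)
ETL_COST_PER_FEED = 50_000   # assumed build/run cost per ETL feed (GBP/year)

storage = CORE_DATA_TB * COPIES * COST_PER_TB_YEAR
etl = COPIES * ETL_COST_PER_FEED

print(f"Extra storage per year: £{storage:,}")        # £1,000,000
print(f"ETL feeds per year:     £{etl:,}")            # £500,000
print(f"Total per year:         £{storage + etl:,}")  # £1,500,000
```

And that is before you count the cost of reconciling ten divergent data models when the numbers don't agree.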

Check out this Forrester whitepaper for a banking discussion on this topic:

In today's dynamic economy, analytics delivers the new competitive advantage. Financial Services firms count on a consolidated, secure platform to perform real-time analytics. This Forrester Thought Leadership Paper, commissioned by IBM, examines global financial firms' perceptions on rapidly changing data - as well as the analytics landscape - and their plans to meet the challenges head-on.

Link to the paper: http://ibm.co/Xm5P7y

Tuesday, January 29, 2013

2013: The Year of the Mainframe

As we start to get up to speed in 2013, and following a week in IBM's Montpellier lab, I thought I would kick off the year with some musings on two trends we should be looking for in 2013.

Analytics
As Big Data looks to replace cloud as the next buzzword in the IT industry, equal focus is being placed on how you analyse this data. From a mainframe perspective, the box has always been the biggest data server in the datacentre and is typically the source of the organisation's most trustworthy data. As the system of record or master data server, the mainframe typically holds the one version of the truth when it comes to customer accounts or operational data. The prevailing trend over the last 20 years has been to ETL data off the mainframe to distributed systems so that it can be analysed. This logic was sound when the mainframe was unable to process long, complex queries cost-effectively without burning excessive MIPS. However, as offload engines have matured and overall z core chip speed has increased, this logic has eroded. With the launch of IBM's DB2 Analytics Accelerator, the logic has been turned on its head.

Imagine, if you will, the development of graphics accelerator cards in PCs and the rise of more complex games to take advantage of the underlying hardware capability. Well, imagine the IDAA (and I don't mean these guys http://www.idaa.org/) as a graphics accelerator card for the mainframe, where graphics = analytics, and you get the picture. Why take between 9 and 11 copies of your mainframe data on average, spread it around your distributed estate, and then have to pay to store it, back it up and generally manage data that is at least a day old? Especially when you can query 'fresh' operational data in situ and trust its origin and security.
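For the avoidance of doubt about what 'query it in situ' looks like in practice, here is a minimal sketch using the ibm_db Python driver; the connection details and the prod.transactions table are hypothetical, and the SET CURRENT QUERY ACCELERATION special register is the DB2 for z/OS mechanism that lets eligible queries route to the accelerator:

```python
# Minimal sketch of the idea: run the analytic query against the live DB2
# for z/OS data and let the accelerator pick it up, instead of ETL-ing a
# copy elsewhere. Connection details and table names are hypothetical.
import ibm_db

conn = ibm_db.connect("DATABASE=DSNB;HOSTNAME=zhost;PORT=446;"
                      "PROTOCOL=TCPIP;UID=analyst;PWD=secret;", "", "")

# Ask DB2 to route eligible queries to the accelerator (IDAA).
ibm_db.exec_immediate(conn, "SET CURRENT QUERY ACCELERATION = ENABLE")

# A long-running aggregate over 'fresh' operational data, in situ.
stmt = ibm_db.exec_immediate(conn, """
    SELECT region, SUM(amount) AS total
    FROM   prod.transactions
    WHERE  txn_date >= CURRENT DATE - 30 DAYS
    GROUP  BY region
""")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)
```

The application sees one database and one query; the routing to the accelerator happens underneath it, which is the whole point: no copies, no feeds, no day-old data.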

Cloud
Without wishing to labour a point, the mainframe was the first 'Cloud' server, and all my more 'senior' z colleagues will go misty-eyed at the word 'bureau' and the parallels with the latest hip and trendy craze of Cloud. So before I begin to sound like the metaphorical industry Dad, let me elaborate on why zCloud should be a trend in 2013. Looking at any industry definition of a cloud, terms like available, virtualised, scalable, secure and multi-tenant are central to a platform's claim to be a cloud. Well, the mainframe has had these attributes for the last 40 years. However, I fully accept that this heritage has been based on z/OS and its predecessors, and whilst there is still life in the old girl yet, other operating systems are widely available. So let's focus on Linux, particularly SUSE and Red Hat. If you want a platform to handle extreme-scale Linux consolidation, and in particular workloads such as high-I/O databases, look no further than the mainframe.

In one recent RFI that my team responded to, we positioned that an EC12 was able to easily accommodate 80 Virtual Machines per core, and in so doing be the cheapest platform for providing virtualised Red Hat guests. I know the last statement is up there with "the world isn't flat, it's round", but look at the numbers: £131 per VM per month vs £290 per VM per month for a comparable x86 and VMware service...
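Here is the same comparison as a back-of-envelope Python sketch; the two per-VM prices are the figures quoted above, while the 400-VM estate size is an assumed example:

```python
# Back-of-envelope sketch of the RFI numbers above. The per-VM prices come
# from the post; the VM count is an assumed example estate.

VMS = 400                 # assumed estate size
Z_PER_VM_MONTH = 131      # GBP, from the RFI response
X86_PER_VM_MONTH = 290    # GBP, comparable x86/VMware service

z_annual = VMS * Z_PER_VM_MONTH * 12
x86_annual = VMS * X86_PER_VM_MONTH * 12

print(f"z:   £{z_annual:,}/yr")     # £628,800
print(f"x86: £{x86_annual:,}/yr")   # £1,392,000
print(f"Saving: £{x86_annual - z_annual:,}/yr "
      f"({(1 - Z_PER_VM_MONTH / X86_PER_VM_MONTH):.0%})")
```

At those rates the z service comes in roughly 55% cheaper per VM, before you factor in security, availability or licensing.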

If the above has whetted your appetite for exploring the world of z and how the mainframe continues to evolve in 2013, join me and my colleagues at the next #mainframedebate on Twitter on the 7th of February, between 4 and 5pm GMT (11am to 12pm EST), to find out more. Even if you think the above is heresy and you want to go 10 rounds on the numbers for zCloud, join us and let's make it a lively #mainframedebate, or contact me directly at @StevenDickens3...

Monday, December 10, 2012

2012 - A Year End Review

As we reach the end of 2012, I just wanted to reflect a little on the momentum I have seen in the mainframe marketplace over the last 12 months and give a personal view on some of my highlights for 2012. Not all of these will be mainframe related, so bear with me...

2012 has been a big year for the UK:

We have won the Ryder Cup in spectacular fashion as part of the European team's great comeback; in my case watched on an iPad while in bed next to my wife, trying not to wake her while wanting to cheer so badly.

The 2012 London Olympics and Paralympics: what a great few weeks to showcase all that is good about the UK and UK sport, and so many great performances.

The Tour de France and Bradley Wiggins' epic ride as part of Team Sky's dominant display.

Enough about sport (unless you are Australian, in which case please contact me for more).

In the IT industry it has been another dramatic year:

At my former employer HP, they seem to have lurched from one disaster to another, not all of it, it seems, of their own making. This amazing company will rise again, of that I am sure. You only have to get past the press and analyst melee and look at some of the fundamentals: the storage business is strong, they have a top-5 software business (despite Autonomy), printer ink is more profitable by weight than cocaine, they are number 2 in enterprise networking kit, etc... Meg Whitman will turn it around; she just needs time, and to get the analyst community to focus on the 'real' value of the company, not what they may or may not do in tablets...

Perhaps a quote from the iconic film Wall Street needs to be applied:

"Stick to the fundamentals. That's how IBM and Hilton were built. Good things, sometimes, take time."
Lou Mannheim

What a great segue into discussing IBM:

In my 2nd full year I have seen the good ship IBM sail on without any choppy water; we transitioned CEOs without fanfare or fuss, in an almost boringly efficient manner...

We have seen the EC12 launched; shame about the name, but z2101 would have been no better. I am sure they could have been more creative, and perhaps we should look to our PureSystems colleagues when it comes to marketing and the fanfare they have managed to create...
 
In my own personal z world we have had a very interesting year:
 
  • We have recruited 3 new z Business Partners
  • Developed a zCloud proposition and demo facility
  • Won some interesting new name, new business with non-traditional clients
  • Built a strong foundation in a new vertical
  • Worked with some interesting non-z IBMers to drive us into new client engagements
  • Lost some valued colleagues (We miss you Mr. Gadbsy)
  • Recruited 3 new grads into the z business in the UK (making 8 in total)

So as I start to look towards 2013 (if my manager is reading: I will close all my deals first, I promise), I see a strong future. Watch out for these key themes in 2013:

Analytics on z - why on earth move your freshest data to another platform to analyse it? Keep your master data on z and analyse it there.

zCloud - for a Linux-based cloud, look no further: we are the cheapest, most secure, highest-scale and most license-efficient option - dare to be different.

So don't eat too many mince pies or drink too much eggnog; have a very Merry Christmas, Happy Holidays and a very prosperous New Year, and please join us early in the New Year for the next #Mainframedebate.

Tuesday, October 16, 2012

Cloud Computing and the Cult of Intel and x86

So firstly, apologies: I have been a bit lazy of late with my own personal ramblings, cutting and pasting rather than being original. Well, that is going to change...

Over the last couple of weeks I have observed a worrying trend in my interactions with clients and on the Twittersphere, i.e. the cult of x86 and its hold over the world of Cloud Computing. To quote a recent Twitter interaction, after asking a recent follower (you know who you are) for their views on cloud on z, I got the following response:

"Technically magnificent - totally impractical - won't run Intel Binary products. Chocolate teapot. (PS - I love Z and wish it were not so)"

Well, you can imagine my response!!! For those with no imagination, it went something like this:

If cloud = Intel and x86, then agreed. What about Linux (RHEL & SLES)? We are seeing adoption for a Linux cloud on z.

I met with a large global outsourcer recently who has an x86 based cloud offering (which is losing money by the way) and we positioned DBaaS to them based on Linux on z.  Well after much mudslinging and a verbal battle they begrudgingly agreed that databases don't usually play nice with x86 virtualization and that there may be an alternative, especially since the approach we proposed was approx 40% cheaper!!!

It amazes me, in these cash-strapped times, that when you present a solution that is patently cheaper and has numerous other benefits, such as industry-leading security, availability and scalability, you still get push back. So let's explore why we get pushback for Cloud on the Mainframe:
  • Mainframe as a word has a dark meaning: in some languages it is a substitute for 'old', in others it translates to 'proprietary'. Well, my response is that $5bn of R&D investment since 2010, and $1bn in the latest EC12 alone, does not equal an 'old' platform; and as for proprietary, how does a server that runs 7 different operating systems (2 of which are Linux) and 13 databases sound?
  • Skills are hard to find – well, there is some truth in this, but only because the box runs so damn efficiently that you don't need an army of IT staff!!! The other week I met with an energy supply company with over 2 million active customers whose entire mainframe team is one sysprog, and they only run their core billing system on z, so nothing important…
  • x86 is what everybody else is doing – well, where do I start? Have you heard the one about the lemmings? Or the one about the PC manufacturers who thought that Apple would never get a foothold in the home computing space… I could go on, but you get the picture.
So let's be clear, for all those x86 cult members who have found their way to the darker recesses of the blogosphere, namely here: read this whitepaper:

http://www-03.ibm.com/systems/z/advantages/virtualization/platformmatters.html

And then contact me on Twitter at @StevenDickens3

Friday, October 5, 2012

Details of zTechnically Speaking in the UK

2012 zTechnically Speaking is a stream in the November IBM Smarter Computing conferences, which you and your colleagues in your organisation are invited to register for and attend.

Roger Fowler and Iain Neville from the IBM UK Hardware Technical Team will be covering the recently announced zEnterprise zEC12 hardware, which is a key part of IBM's Smarter Computing revolution. It is a great opportunity for anyone who wants to understand these recent announcements.

At the Manchester and Edinburgh events, Jeff Biamonte will give an insight into zEnterprise chip design, manufacture, direction and futures. Jeff is the Manager of the System z Processor Design & Verification Group, based at the IBM Poughkeepsie, USA manufacturing plant.

At the London event, Howard Rogers from Tesco will provide an insight into their 2012 zEnterprise hardware refresh, replacing all their ESCON connections with FICON while keeping their legacy ESCON equipment.

Speakers from IBM's z Software technical team will explain how recent IBM software announcements enable IBM middleware to exploit the new zEC12 technology to deliver data analytics and cloud services, with even greater security. 

During the breaks and lunch there will be a chance to see live zEnterprise systems and talk to experts from the IBM Software and Hardware Groups. These events are planned to start around 9am and finish around 4:30pm. Agendas will be sent to registered attendees nearer the time.

I look forward to meeting you at a zTechnically Speaking stream. These events are very popular and places are limited, so please register today to secure a place at your preferred location. There is no charge, so please feel free to invite a colleague.

These are the scheduled events with the webpages for you to register:

Thursday, 1 November – DoubleTree by Hilton Hotel London 

Tuesday, 6 November – Manchester - The Lowry Arts and Entertainment Centre

Thursday, 8 November – Edinburgh - Sheraton Grand Hotel and Spa hotel


These are the webpages to register for each event; if a link does not open directly, please cut and paste the full URL into your browser.

LONDON REGISTRATION
https://www-950.ibm.com/events/wwe/grp/grp019.nsf/v16_enrollall?openform&seminar=BCBCJDES&locale=en_GB&lang=&S_TACT=102JH7WE&MATTACT=102JH7WE&CONT=43242.0

MANCHESTER REGISTRATION
https://www-950.ibm.com/events/wwe/grp/grp019.nsf/v16_enrollall?openform&seminar=92FD54ES&locale=en_GB&lang=&S_TACT=102JH7WE&MATTACT=102JH7WE&CONT=43242.0

EDINBURGH REGISTRATION
https://www-950.ibm.com/events/wwe/grp/grp019.nsf/v16_enrollall?openform&seminar=9CFAS4ES&locale=en_GB

Thursday, October 4, 2012

Workload Placement and the economics of Cloud

I had a very interesting meeting with a global outsourcer yesterday where we got under the covers of their rate card for private cloud, so I thought I would share some observations.

My presentation on workload characteristics was entirely focused on how all workloads are different and therefore require different platforms. Whilst I was presenting, this point was widely acknowledged and accepted. Yet when we got into the general discussion about the suitability of z as a cloud platform, the conversation focused on how IBM could make z just like other platforms so that it is easy to size.

Now, I fully understand that z is a different religion, and understanding how our I/O-based approach relates to cloud is not easy to articulate or fully grasp from an x86 standpoint. But if you acknowledge that all workloads are different, can you not acknowledge that platforms differ too?

An analogy I used after the meeting was:

Imagine you commute to work by train and a colleague commutes to work by bike. Which form of transport is better? Obviously a lot of factors come into play, some of which could be:

  1. What is the weather like generally for each commute?
  2. How long are the commutes?
  3. How close do you both live to a train station?
  4. Which country do you live in?

If you live in Oslo next to a train station, your office is also next to a train station, and the distance between the two is 50 miles, then the answer is clear: take the train.

If your colleague lives in Holland, is 10 miles from the nearest train station, and the commute is 10 miles on a bike, then the answer is equally clear: ride.

Why can't the same principle apply to workloads?

If you take a look at Oracle as a Service on a private cloud as an example, then the answer is the train, and let me explain why...

  • Oracle does not play nice with VMware... the default position of Oracle is: don't virtualise our database unless you do it on our virtualisation software
  • Oracle typically charges one license for every two cores of x86
  • A typical Oracle license will cost £25K over 3 years

So if you have 288 cores of x86, for instance, then you need 144 licenses, or, put another way, you need to give the Larry Ellison Yacht Fund £3.6m over 3 years.

Now imagine a weird and wonderful black magic platform existed where you could handle the same workload on a 12-core machine, which needs just 12 Oracle licenses, and which has no problems with its virtualisation layer supporting Oracle. The math goes something like this:

12 cores = 12 licenses = 12 x £25,000 = £300,000

Or, put another way, Larry has to wait another year to buy that racing yacht.

Or, to go back to my original analogy: still want to ride to work?
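For completeness, here is the same maths as a small Python sketch; the licence cost and the one-licence-per-two-x86-cores ratio are as stated above, and should be treated as rough figures rather than a formal Oracle price list:

```python
# The Oracle-as-a-Service maths from above as a sketch. Licence cost and
# the core-to-licence ratio are as stated in the post; treat them as rough.

LICENCE_3YR = 25_000    # GBP per licence over 3 years
X86_CORES = 288         # one licence per two x86 cores
Z_CORES = 12            # same workload consolidated onto 12 cores

x86_cost = (X86_CORES // 2) * LICENCE_3YR   # 144 licences -> £3,600,000
z_cost = Z_CORES * LICENCE_3YR              # 12 licences  -> £300,000

print(f"x86: £{x86_cost:,} over 3 years")
print(f"z:   £{z_cost:,} over 3 years")
print(f"Kept out of the yacht fund: £{x86_cost - z_cost:,}")
```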

Thursday, September 20, 2012

Sometimes you just have to cut and paste...

There are times when you can't improve on perfection, so when that occurs the best thing to do is cut and paste!

The Technology Economics of the Mainframe: Mainframe Computing Still Growing in Banking

Despite perceptions, mainframe computing continues to grow in banking. But from an economic standpoint, that may not be a bad thing.
August 14, 2012

Contrary to conventional wisdom, mainframe computing is growing. In fact, financial services has been passed by only a few sectors in terms of growth in mainframe MIPS, or million instructions per second. But maybe this is good news: Multiyear results show that "mainframe-heavy" organizations are more economically efficient in supporting business computational demands and have more upward scalability than distributed-server-heavy organizations.

Decisions about computer platform choices and options typically are made without consideration of true business impact from a cost-of-goods or other perspective. As a consequence of what we know about technology economics today, however, platform choices can be based on factual criteria and should not be decided as a "fashion statement."

During the 50-year history of what we now call "technology economics," it has always been clear that demand for computing is increasing and that upward expense pressure is a fact of life in what many have called the Information Age. Between 2006 and 2010, demand for processing cycles (MIPS, servers and the like) slowly approached an 18% annual growth rate in the big banks, while storage demand has been growing at 45% or more per year.


With infrastructure spending -- on computing power, networks, storage, help desks and so on -- historically accounting for 57% of overall IT expense, it is likely that it is the largest component of an organization's "IT Cost of Goods." As such, it is worthy of investigation and analysis.

Previous research that explored the dynamics of platform economics indicated that firms with a mainframe computing platform bias ("mainframe heavy") exhibited a lower IT Cost of Goods and overall IT costs in situations in which the mainframe was a suitable platform. Conversely, "server heavy" firms were at an economic disadvantage -- higher IT Cost of Goods and overall infrastructure costs. Additional research updates these earlier findings, continuing to chart the interaction (and value) of computing choices and real bottom-line business impacts.

Key cost of goods metrics were identified for the sectors under study. Within each sector, analyses were performed to determine average levels of both mainframe and distributed server usage relative to business volumes/revenue. Within each sector, two groups were identified -- "mainframe heavy" and "distributed-server heavy" -- relative to average levels of usage. Within these two groups (by industry), IT Cost of Goods was computed and compared.
The research database for this study contained data from 498 companies across 20 sectors, spanning the years 2008 to 2011. Data elements include the amount of computational resources along with key business performance parameters.

Across the 498 companies studied, on average, computational needs grew far faster than revenue. MIPS capacity grew at 2.33 times the rate of revenue growth, while distributed server deployments grew at 3.5 times the rate of revenue growth. Additionally, firms that had higher mainframe growth had 25% lower distributed server growth and exhibited approximately 67 percent more-effective cost containment than those with less mainframe intensity. The implication is that the required computational growth is roughly three times more economically efficient in a mainframe environment.

Further, organizations with high mainframe intensity had 39% more upward scalability in that they could support revenue growth with 61% less investment than those that were distributed-server-intense. And organizations with high mainframe intensity maintained their leverage in terms of lower IT Cost of Goods. Across sectors, the gap widened by 3%; in banking, the gap widened by 2%, with mainframe platforms maintaining an astounding 67% cost advantage at the unit cost-per-core transaction level.

This research reveals a pattern that indicates that mainframe-heavy organizations are more economically efficient in supporting the computational demands of increased revenue than distributed-server-heavy organizations. Such patterns are critical to observe and understand as computational demand increases in the global economy, in business and government, and in our daily lives.

It is likely that 2012 is the "technology economic tipping point" -- the point at which demand for computing growth outstrips the ability of Moore's Law to offset increased costs. This perhaps is also the point at which organizations that have not learned by codifying the patterns of technology costs and value will see their tech expenses spiral out of control, leading to uninformed demands to cut IT expenses. On the flip side, those organizations that have an understanding of technology economics will find themselves in a position of extreme competitive advantage.

Howard A. Rubin is founder of Rubin Worldwide, a research and advisory firm focused on the economics of business technology. Howard.Rubin@rubinworldwide.com