Monday, January 20, 2014

"Zed's Dead, Baby" - The relevance of the mainframe to modern computing and IT trends.



“Zed’s dead, baby.” These may be the immortal words of a motorbike-riding Bruce Willis in Pulp Fiction, but they were also some of the first words uttered to me when I made the decision to move into the System z software team. The perception of the mainframe is simple: it’s a dinosaur. In a world where companies like Apple render their previous generations of hardware ‘vintage’ or ‘obsolete’ after five years, and release software upgrades annually and free of charge, how can a product that is celebrating its 50th birthday this April still be relevant? Equally, as the current IT landscape evolves towards mega-trends such as Cloud, Mobile and Big Data, how can this ‘vintage’ platform keep up?

In 1969, IBM and the mainframe helped NASA put the first men on the moon, and now, 45 years later, the focus is still skywards as work moves into the Cloud (tenuous link, I know). Cloud has always been part of the mainframe: since its inception, virtual machines have been a basic component of the mainframe hardware. Despite this, for many x86 seems the natural choice for Cloud workloads: cheap and simple. However, while x86 serves commodity workloads, System z is undeniably the most suitable choice for high-complexity, high-criticality Cloud workloads. The mainframe is famed for unmatched reliability and security, something other platforms simply cannot contend with. What’s more, in the market where IBM System z Cloud mainly operates (private, in-house), Cloud computing costs less per virtual machine on the IBM mainframe than on x86*, and power usage per VM is significantly lower on z, in turn improving OpEx.
Cloud is a huge part of the System z strategy, with a focus on orchestration choices for our customers, automating deployment and lifecycle management to reduce time to market and improve productivity.

With the average System z box standing two metres tall, how can it possibly be described as mobile? With many believing the mainframe can only be a solution for banking giants and top-secret agencies, System z is often pushed aside in customers’ mobile strategies. But mobile is no simple task. With over 200 million employees bringing their own devices, checking their phones on average 30 times per hour and never straying more than an arm’s reach from their beloved smartphones, demand for a powerful, secure platform is high. Enterprise mobile strategy plays into this ideally: from seamless building and development of applications with Rational; to world-class security and management with Tivoli, preventing costly breaches; to extended capability and user flexibility. While mobile can generate billions of transactions, System z can handle more than 30 billion transactions a day. While billions of mobile users globally expect real-time data, System z averages less than 6 minutes of downtime a year. So despite not being pocket-sized, System z is the clear choice for business transformation into the mobile world.
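For context on that downtime figure, here is a quick back-of-the-envelope sketch (the 6-minute number is the claim above, not a measured value) converting annual downtime into an availability percentage:

```python
# Convert annual downtime (in minutes) to an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def availability_pct(downtime_minutes_per_year):
    """Availability as a percentage, given total annual downtime in minutes."""
    return 100 * (1 - downtime_minutes_per_year / MINUTES_PER_YEAR)

# The 6-minutes-a-year claim works out to just shy of 'five nines' (99.999%).
print(f"{availability_pct(6):.4f}%")
```

Six minutes a year is roughly 99.9989% availability; the classic “five nines” figure corresponds to about 5.26 minutes a year.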

We now generate more data in two days than we did in total up until 2003. This data can either be wasted, sitting on tapes, disks and hard drives until the day the auditors finally tell us we can destroy it, or it can become an incredibly powerful business tool. Businesses that implement analytics tools outperform their competitors by more than two to one. Which CIO wouldn’t want that? But there are hurdles. Many companies grow through acquisition, and even those that don’t can develop a large server sprawl, which in turn creates siloed workloads and leaves companies with no single repository of data. System z can provide that. By making sure your data is available, consolidated and secure, System z can form the basis of your analytics and data management strategy. Additionally, creating a new analytics environment can take up to six months at a cost of approximately $250,000. With the simplicity of the System z architecture, however, companies can have a deployed analytics environment in two days at $25,000. Not bad for an out-of-date, dinosaur platform.

So is z still relevant? Well, if you want an incredibly powerful infrastructure, which boasts maximum security and unmatched reliability: yes. If you want to be able to keep ahead of the latest IT trends, with value and strategy plans spanning mobile, cloud and data analytics: you bet it is.
Zed’s back, baby.

*based on 275 VMs

Friday, January 17, 2014

“Without IBM and the systems that they provided, we would not have landed on the Moon.”

‘That’s one small step for man, one giant leap for mankind.’ Of course these weren’t the words muttered by a techie in an IBM lab when years of work had finally completed the mainframe. We all know where these words were really heard in 1969, when the world first saw two men land on the moon. However, this famous quote is still quite relevant to the IBM mainframe, in a different sense. Few people know that the IBM® System/360 mainframe helped NASA launch the Apollo missions, making history and changing the way we see the world today.

Mainframe computing has done just that! It has shaped the world we live in, and these achievements should be remembered when celebrating the Big Iron’s Big Five-O! Considered one of the most successful computers in history, the mainframe led the way for innovative computing, changing in more ways than just its name. Our ever-evolving IT industry has tested and pushed this super-computer, and whilst it has taken a fair battering along the way, it’s safe to say that it epitomises Darwin’s ‘survival of the fittest’. Fit is a good way to describe it: it’s strong, doesn’t lack energy and it definitely looks the part!

Arguably its biggest contribution is pioneering real-time transaction processing, which led the way to credit card authorizations: one of many things we take for granted today. Online transactions are so simple that you’ll find many a student waking up hung-over with no recollection of the night before, but an interesting eBay parcel on its way... When sequential input/output processing was the norm, the ground-breaking ability to take this online helped businesses become more responsive. Personally I can’t imagine computing life before it; we were a patient nation! Now every transaction is fulfilled with ease; we trust computers to give us the correct information and pay people successfully because they rarely fail us. Today I have clicked and shopped, saving myself time, money and effort. Had I chosen to actually leave my house, I probably wouldn’t have written this entry. Revolutionary!

Aside from how efficient our lives are becoming because of ground-breaking technologies, we also see productivity gains for the biggest firms in every industry sector. Extreme scalability, high data-handling capability and robust security measures enable firms to rely heavily on this machine trudging away in the background. I’m sure you know it’s not the cheapest, but I’ve yet to find an inexpensive high-value asset, and when I do I probably won’t share it! Ironically, our mainframe grows in capability inversely to its size. Its resilience is second to none, allowing mission-critical data to be handled without fear of disruption. We are not aware of the impact the mainframe has on our lives, for we don’t see or hear of it; but housework is never noticed until it’s not done.

Friday, January 10, 2014

‘Someone’s hacked into the mainframe!’ The view of the mainframe from the 1990s.




‘The mainframe’, to those born in the 1990s, probably refers to a scene in a Hollywood action film where the villain has ‘hacked into the mainframe’, causing all sorts of problems for James Bond and John McClane. In fact, only last week on Celebrity Big Brother were the words ‘hack into the mainframe’ uttered, referring to a plastic screen on an extremely unrealistic alien spaceship, so you can excuse the confusion people have when referring to the platform. Although those within the industry understand how detrimental a malicious act like this would be for any business, I’m sure most are also aware that, of all the platforms running, the mainframe would more than likely be the last to suffer a security breach. This April we celebrate the 50th birthday of the mainframe, now often referred to as the System z platform, so I feel it’s a relevant time to address some of the myths and speculation surrounding it.

Like most people my age, I had little experience of the mainframe until I started working for IBM, where I have developed an understanding of this machine; a month ago I saw my first z196 in the flesh. The common misconception amongst twenty-somethings is that the mainframe is a ‘dead’ or ‘dying’ piece of hardware, and this is a myth I would like to dispel. A computer that used to be larger than my living room is now 201.3cm tall, 156.5cm wide and 186.7cm deep, making it a much more practical solution for customers. Although some clients previously attempted to move away from the mainframe, we are beginning to witness a trend of customers proactively moving workloads back onto the platform to benefit from its scalability and energy cost savings. These are typical mainframe customers, such as large banks and retailers, who have to process vast amounts of data daily.

A development that actually happened during the 1990s has not, in my opinion, been given enough attention: the integration of Linux onto the System z platform. This development allows the mainframe to act in a very similar way to x86-based distributed platforms, bridging the gap between the two. I think it is something that should be highlighted, especially within our software division: if software is built with this in mind, it will also run on Linux on distributed platforms, giving the customer more choice as to where they put specific workloads.

So when asking what the mainframe means to those from the 90s, the answer will depend massively on whom you ask. Those from a non-technical background might say “a company’s main computer”, whereas some within the industry could say “the platform that IBM used to sell software on”. For myself, I see it as a platform that over the next decade will become ever more prominent within the technology industry. The reasoning behind changing the name to System z was to refer to the almost ‘zero’ downtime the hardware is capable of, and to give an ‘old’ dog a new name. But why change the name of something that gains publicity within popular culture? If I told my friends I worked in mainframe software, they would think I had somehow gained some fantastic technical expertise; but when I say System z, they look at me blankly.

With the half century anniversary of the mainframe this year I am sure that there will be a lot of developments, innovation and press surrounding the platform and the vision for it in the future.
The most I can hope for from this blog is to engage with those who, like myself, were born in the 1990s, a time when IBM had failed to invest in its core hardware, and to challenge their perception of the platform; and, on a wider scale, anyone who has doubts about System z as a platform. I agree that in some cases it is not the right way to go; however, not considering utilising a mainframe that already exists within an organisation is a cardinal sin.


Thursday, January 2, 2014

Mainframe Platform Economics

Happy 2014 to all my readers. I hope you had a merry holiday with your families and friends, and that you have a happy and prosperous 2014.

So with that out of the way, I would like to cover the thorny subject of Mainframe Platform Economics. Despite your obvious audible groan as you read that last sentence, I hope to make it as light-hearted as possible and poke some fun along the way...

So the mainframe is obviously expensive, and everybody knows there are better platforms to run applications on in 2014; heck, the mainframe is 50 years old in 2014, there is no way it can compete with modern technology from a TCO perspective...

No, I have not been abducted by aliens, I just thought I would open up with the 'accepted' wisdom that us Mainframers hear pretty much every day from our IT brethren who look after distributed platforms. We have all heard these words or something similar when the subject of platform selection for any new application is being discussed.

Let me put into context some of the things such simplistic statements overlook or, more sinisterly, ignore on purpose:

Total Cost of Acquisition vs Total Cost of Ownership
Since the financial meltdown of 2008, the cost of running businesses has come more sharply into focus, and the Accountant, and ultimately the CFO, has gained a level of involvement in investment decisions previously unseen. Given this rise of cost to the forefront of IT decision-making, it is not surprising that investment decisions need to be made soundly and with foresight into how they affect the cost base of the business over the medium to long term. I think most readers would subscribe to the view that financial prudence is good practice, unless of course you are Larry Ellison (bugger, that New Year's resolution didn't last long). So why, given this background, do I see an increasing trend to focus on upfront costs, or Total Cost of Acquisition? This is short-sightedness or delusion, depending on how much credit you want to give the person proposing the approach. Looking at how much a server costs is obviously important and should not be ignored, but neither should the fact that the purchase price has little bearing on the running costs over five years. Indulge me if you please: would you buy a 15-year-old V12 Jaguar XJS for £3,000, or a £4,000 Ford Fiesta, as your 18-year-old's first car after they pass their driving test? Now if you are part of the Jaguar Owners Club and have landed here by the vagaries of Google, I apologise profusely; but if you are not, then you will get the point. We all know the Jag would be thirsty, expensive to insure and cost a lot more to service... You get my point...

Platform Selection - Security, Availability and Scalability
Again, we in IT all know that a platform that is insecure, always down and has limited scalability should never be purchased... So why, in so many business cases this author has seen over the last 19 years, are these critical factors at worst not covered and at best glossed over? When a business case looks solely at year-one costs, be wary. We all know that applications grow, and increasingly issues such as security need to be taken into account up front. Why then are these two areas not given the focus they so richly deserve? Could it be that if we included security and scalability provision the TCA would suffer, the CFO would never sign off our business case, and "they don't know about IT, so what the heck"... Availability is another ball game altogether; take a look at this very interesting Wikipedia link:

http://en.wikipedia.org/wiki/High_availability

Once you have read this, is Amazon's new Cloud service offering 99.5% availability still such a good decision? What are you going to do, for instance, during the 50.4 minutes every week when your mission-critical platform is down: put the kettle on? Play Sudoku? Apply online for a new job? It really is not that difficult to put a cost on availability. One simple way would be to note the revenue/profit generated by an application, work out how many hours there are in a year (365 × 24), and then calculate what an hour of downtime costs. I know, this is bewitching maths beyond most 5-year-olds, but seriously, people, how can it be ignored in a platform decision?
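That simple costing method can be sketched in a few lines of Python. The £50m revenue figure below is a made-up illustration, not a number from this post:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_cost(annual_revenue, availability):
    """Estimated yearly revenue lost to downtime, using the simple
    revenue-per-hour method described above."""
    revenue_per_hour = annual_revenue / HOURS_PER_YEAR
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    return revenue_per_hour * downtime_hours

# A hypothetical application earning £50m a year on a 99.5%-available
# platform loses roughly £250,000 of revenue-hours annually.
print(f"£{downtime_cost(50_000_000, 0.995):,.0f} lost per year")

# And the weekly figure quoted above: 0.5% of a 10,080-minute week.
print(f"{7 * 24 * 60 * (1 - 0.995):.1f} minutes of downtime per week")
```

Crude, certainly, since it assumes revenue accrues evenly around the clock, but it is more than enough to put a defensible number in front of the CFO.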

How to choose the Platform for your new application
IBM has an approach called Fit for Purpose that looks at ALL of the factors above, plus a number of others, and proposes a decision tree for platform selection. However, let me propose an alternative method. You made a decision about how you got to work today: classic Jaguar (I hope that makes up for my earlier error), helicopter (Larry Ellison again), hovercraft, train, car, push bike, or some combination. Your decision probably included variables such as length of journey, time available, weather, cost and so on. Apply the same principle when you next choose an underlying platform for your application, and remember that platforms other than x86 are available...