Thursday, December 5, 2013

5th December #Mainframedebate transcript

#MainframeDebate Twitter Chat
December 5, 2013

Q1: How is #zEnterprise helping orgs cope w/ IT compliance + regulation work? @DerekBrittonUK  #MainframeDebate (from @DerekBrittonUK)

A1: #zEnterprise is staying up to date w/ all the #compliance and #regulatory trends cross-industry.  #MainframeDebate 

Q2: How does #zEnterprise support clients looking to improve their end-user #efficiency? #MainframeDebate  (from @DerekBrittonUK)

A2: Many organisations using #Mobile & #Systemz to drive end-user productivity plus drive new revenue  #MainframeDebate 

Q3: How is IBM celebrating the upcoming 50th anniversary of the #mainframe? how can we get involved? #MainframeDebate (from @jcanglin)

A3: Tons in plan to celebrate the 50th anniversary of the #mainframe next year! Stay tuned for more... We will be partying! #MainframeDebate

Q4: Whats the max amount storage an IBM #Mainframe can use? #MainframeDebate (from @DavidLoams20)

A4: Max memory (RAM) is 3TB per machine, but w/ clustering, this is 64 x 3TB = 192TB #MainframeDebate 

Q5: I read many retailers powered their #CyberMonday transactions via the mighty #mainframe, is that true? Any details? #MainframeDebate (from @SystemsandTech)

A5: Yes, that's true. 23 of the top 25 retailers run on IBM System z  #MainframeDebate 
A5 cont: And don't forget all of those credit card companies & banks processing your #CyberMonday purchases! #MainframeDebate

Q6: How does #zEnterprise help organizations manage accelerating changes and demands within their market? #MainframeDebate (from @alexabrutledge) 

A6: As organizations rely more on data-driven decisions, many are turning to #zEnterprise. IDC paper  #MainframeDebate
A6 cont: IBM is seeing a shift in the market & continues to advance #zEnterprise ahead of the curve   #MainframeDebate

Q7: What exciting developments can we expect from Mainframes and the systems around it? #MainframeDebate (from @hattie_lister)

A7: Several billion $ investment in #zEnterprise hybrid, there's plenty more coming in 2014 and beyond; Watch this space…#MainframeDebate

Q8: Curious who is using Flash Express did it help you deal with spikes like Cyber Monday or market open? #MainframeDebate (from @samknutson)

A8: Over 200 systems have installed Flash Express -- mostly banking and financial institutions #MainframeDebate 

Next #MainframeDebate is scheduled for January 30, 2014 at 11am ET. 

Tuesday, November 12, 2013

IBM System z - zBC12 blog directory - everything you need to know about the new baby mainframe from IBM

See below for a list of blog posts from press, analysts, IBMers, Business Partners (BPs), etc. on the recently announced zBC12:

In addition, we have published several blogs about new features (RoCE, zEDC, etc.) that were introduced with the zBC12 back in July on the IBM Mainframe Insights blog.

Press Blogs

New York Times Bits blog, “Mainframe Computers That Change With the Times,” Quentin Hardy, July 23, 2013

Silicon Angle blog, “IBM Announces New zEnterprise Mainframe”, Bert Latamore, July 23, 2013 

WebCommunicati, “IBM: arricchita l’offerta zEnterprise con un nuovo sistema e ulteriori innovazioni” (“IBM: zEnterprise offering enriched with a new system and further innovations”), July 23, 2013 

Ars Technica, “IBM unveils new ‘mainframe for the rest of us’”, Sean Gallagher, July 23, 2013 

Mainframe Update, “New Business Class baby”, Trevor Eddolls, July 28, 2013 

Analyst Blogs

IT Candor, “IBM Announces zEnterprise BC12 Business Class Booster”, Martin Hingley, July 23, 2013 

Dancing Dinosaur, “New zEnterprise Business Class Entry Model—zBC12”, Alan Radding, July 23, 2013 

Computer Weekly: Open Source Insider, “Old school IBM meets new school OpenStack”, Adrian Bridgewater, July 23, 2013

Tech-Tonics/ Gabriel Lowy, “Can the Mainframe be a new Cloud Platform?”, Gabriel Lowy, August 1, 2013 

Bloor Research blog: The Norfolk Punt, “Enterprise server 3.0 - zEnterprise BC12”, David Norfolk, August 2, 2013 

IT Director: The Norfolk Punt, “Enterprise server 3.0 - zEnterprise BC12”, David Norfolk, August 2, 2013 

Business Financing Magazine: wiredFINANCE blog, “Newest Low Cost IT and Cloud Computing Option,”Alan Radding, August 5, 2013 

IT-TNA Trends (IT Trends and Analysis), “IBM’s zEnterprise BC12: Powerful, Comprehensive, Impactful”, Joe Clabby, August 6, 2013 

Business Partners and ISV blogs

Brocade Community blog, “IBM System z Announcements Encourage Channel Consolidation”, Steve Guendert, July 30, 2013 

Mainframe Watch Belgium blog, “zEnterprise BC12 - A technical Introduction”, Marc Wambeke, July 23, 2013 

Mainframe Watch Belgium blog, “zEC12 and zBC12: new 2:1 ratio for zIIPs”, Marc Wambeke, July 24, 2013 

Mainframe Watch Belgium blog, “Official Announcement: z/OS 2.1”, Marc Wambeke, July 24, 2013 

Mainframe Watch Belgium blog, “z/VM 6.3 –Virtualization for efficiency at scale”, Marc Wambeke, July 24, 2013 

Mainframe Watch Belgium blog, “PL/I for z/OS V4.4 - new function and improved performance”, Marc Wambeke, July 24, 2013 

Mainframe Watch Belgium blog, “End of marketing for z196 and z114 announced”, Marc Wambeke, July 24, 2013 

Mainframe Watch Belgium blog, “I’m on holiday…”, Marc Wambeke, July 24, 2013  

Infinity Systems Software blog, “Announcement: zEnterprise Customer Council in Atlanta August 23rd”, Renee Solheim, July 23, 2013 

Maintec blog, “IBM's Business Class Mainframes zBC12, z114 and z10BC: a quick comparison”, Maintec Technologies (author unknown), July 23, 2013 

Mainframe Voice (CA Technologies blog), “A MANAGEMENT PLATFORM? Who needs another management platform?”, Marcel den Hartog, July 26, 2013 

IBM Subject Matter Expert blogs

IBM Mainframe Insights blog, “Extending enterprise-class technologies to clients of all sizes,” Pat Toole, July 23, 2013 

The Mainframe Blog, “Happy New Mainframe Day! Introducing the zBC12 (Post #1)”, Timothy Sipples, July 23, 2013 

The Mainframe Blog, “Happy New Mainframe Day! Introducing the zBC12 (Post #2)”, Timothy Sipples, July 23, 2013 

The Mainframe Blog, “Canonical (Ubuntu) Needs a Mainframe (and zBC12 Post #3)”, Timothy Sipples, July 24, 2013 

Turbo Todd, “IBM Boosts zEnterprise Mainframe Portfolio”, Todd Watson, July 23, 2013 

DB2 for z/OS blog, “The Best just keeps getting Better: the zBC12 is here... among other things”, Willie Favero, July 23, 2013 

DB2 for z/OS blog, “The July 23, 2013 Announcement Letters”, Willie Favero, July 23, 2013 

DB2 for z/OS blog, “Introducing the zEnterprise BC12”, Willie Favero, July 27, 2013 

System z Management, “IBM Automation Control for zOS Toleration Support for new IBM zEnterprise BC12 and new IBM zEnterprise EC12 features available,” Wiltrud Rebmann, July 24, 2013 

Mainframe Performance Topics, “The Missing Link”, Martin Packer, July 30, 2013 

Millennial Mainframer, “Reasons Millennials should care about the Mainframe Announcements”, Sean McBride, July 24, 2013 

CTO, M.Teyssedre, Le Blog “New IBM zEnterprise BC12 Entry-Level Mainframe Launches”, Michel Teyssedre, July 25, 2013 

Business Partner Voices, “Extending enterprise-class technologies to clients of all sizes”, Pat Toole, July 25, 2013 

The Greater IBM Connection, “IBM Boosts zEnterprise Mainframe Portfolio”, Khalid Raza, July 26, 2013 

Storage CH Blog: an (IBM) Storage-related blog, “IBM zEnterprise BC12 Technical Guide”, Roger Luethy, August 5, 2013 

Misc. Blogs
Cloud Guys, “IBM Boosts zEnterprise Mainframe Portfolio”, July 23, 2013 

Global WebSphere Community blog, “IBM zBC12 announcement - smaller footprint ideal to consolidate distributed workloads on zLinux”, Elena Nanos (HSBC / Blue Cross Blue Shield), July 23, 2013 

Search Data Center, TechTarget, “IBM mainframes reinvigorated at lower price for cloud apps, data analytics”, Ed Scannell, July 29, 2013 

Search Data Center, TechTarget, “Linux’s flexibility, native hardware integration as a mainframe OS”, Sander van Vugt, July 30, 2013 

Never Talk When You Can Nod, “Should IBM drop the ‘Mainframe’ Moniker?”, Andrew Chapman (CA Technologies), July 24, 2013 

Tuesday, September 10, 2013

The 7 Pillars of Connecting With Absolutely Anyone

1. Be genuine. The only connections that work will be the ones that you truly care about; the world will see through anything short of that. If you don’t have a genuine interest in the person with whom you’re trying to connect, then stop trying.

2. Provide massive help. Even the biggest and most powerful people in the world have something they’d like help with. Too many people never reach out to those above them due to the fear that they wouldn’t be able to offer anything in return. But you have more to offer than you realize: write an article or blog post about them, share their project with your community, offer to spread their message through a video interview with them. Give real thought to who you could connect them with to benefit their goals. If it turns out you can’t be that helpful, the gesture alone will stand out.

3. Pay ridiculous attention. It’s nearly impossible to genuinely offer help if you don’t pay attention — I mean real attention, not just to what business they started or what sport they like! Do your research by reading blog posts, books and articles about the connection beforehand. Learn about their backgrounds and passions. Invest genuine time in learning what really matters to them and how you can help.

4. Connect with people close to them. Most job openings are filled through networking and referrals, and making connections is no different. You automatically arrive with credibility when referred to someone you want to meet by a mutual friend. For example, I recently wanted to meet a best-selling author, and it turned out we had the same personal trainer. In reality, that fact means nothing, but in the world of social dynamics, it’s gold! Spend more time connecting with your current network of friends and colleagues and see where it leads.

5. Persistence wins most battles. If you can’t get a direct referral, simply click send on that email or leave a message after the beep. But do not stop there, as most of the world tends to. The first attempt is just the very beginning. Realize that the first try may get you nowhere, but the fifth or tenth try is the one that starts to yield results. An unreturned email or voicemail doesn’t mean they don’t want to connect with you. It’s your job to be persistent! I sometimes get hundreds of requests in a day from readers who want to connect, but only about 2 percent ever follow up. Don’t be in a hurry, but don’t be invisible either.

6. Make real friends. Think about how you’ve made the friends you have. That’s all this is. You only make friends with people you genuinely want in your life. The same rule should go for bigger-name connections. Don’t over-think it. Be human, be helpful and most humans will happily be human in return, regardless of who they are.

7. Remain unforgettable. All of the above are simple — yet sadly underused — ways of standing out. Send birthday cards. Mail your favorite book with a signed personal note from you on the inside flap. Send them your family Christmas card. Be genuinely helpful. You’d be surprised how the simplest things actually never get done. Being memorable isn’t as hard as some think!

It all comes back to helping others. If you spent 100 percent of your waking hours thinking about how you can help absolutely everyone you come in contact with — from the woman who makes your latte, to the top authority in your industry — you will find everything else tends to take care of itself. The world will suddenly be in your corner.

Scott Dinsmore - 'Live your legend' coaching company wrote this and I think it has nailed the concept of connecting with people really well ! Thanks to Toby Woolfe for putting me onto this...

Friday, September 6, 2013

Pitfalls of being a 'Dedicated Follower of Fashion' the Trendy Mainframe

As the Kinks sang in 1966, a 'Dedicated Follower of Fashion' can be found "eagerly pursuing all the latest fads and trends". Whilst Vogue and GQ will tell us in their advert-laden publications that this is the way to tailor (excuse the pun) our attire as the seasons progress, taking the same approach to IT decisions is fraught with danger. Let me elaborate on my point.

On average a CIO has a tenure just longer than a presidential term, namely five years, according to research, details of which can be found at:

The first year as CIO is the honeymoon. The second year is about strategy and planning, and the third year is about implementing. In the fourth year the higher-ups figure out that the execution isn't going that well, and in the fifth year you start looking for your next job. So whilst it is tempting for a CIO to back the latest fad and trend early in their tenure and make it their signature project, according to the research this approach does not appear to serve CIOs well, assuming they want a job for longer than five years, that is...

The old IT maxim was that "you won't get fired for buying IBM". This largely holds true, as Big Blue's list of 'bad' products over the years has been pretty small compared to its peers, but it appears to this blogger at least that it is still just about perceived as fashionable to move away from the mainframe.  

The rationale, from most of my conversations with mainframe shops, is that the mainframe is massively visible on IT cost reports and therefore attracts undue attention, and typically not positively. Whilst on one level this makes 'sense', if you drill down and scratch the surface this simple rationale is fatally flawed. Just because the mainframe is a single line item on a spreadsheet does not make it the least cost-effective platform. How hard would it be to group all the x86 costs and UNIX costs into single line items and compare them platform by platform? When we did this analysis within a large UK retail bank, it showed that 97% of the bank's IT logic ran on the mainframe and that the mainframe represented only 7% of the total IT budget. To take a personnel view at another of my clients: their entire IT staff numbers 700 people, while only 14 of them work on the mainframe.
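
The cost-concentration argument above can be sketched with a quick calculation using the UK retail bank figures quoted (97% of IT logic on 7% of budget). "Logic share" here is simply the percentages from the post, not a measured metric, and the comparison is illustrative only:

```python
# Budget spent per unit of business logic delivered, using the
# percentages quoted in the post (illustrative, not measured data).
mainframe = {"logic_share": 0.97, "budget_share": 0.07}
distributed = {"logic_share": 0.03, "budget_share": 0.93}

for name, p in (("mainframe", mainframe), ("distributed", distributed)):
    cost_per_logic = p["budget_share"] / p["logic_share"]
    print(f"{name}: budget per unit of logic = {cost_per_logic:.2f}")
```

On these numbers the distributed estate spends roughly 30 times more budget per unit of logic than the mainframe, which is exactly why a single large line item can mislead.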

Let's take an alternative look. A large retail bank in the UK has for the last 5-6 years been looking to migrate off its mainframe platform to a more fashionable COTS (Commercial Off The Shelf) application. The previous CIO (are you seeing the trend yet?) embarked on this high-profile project convinced he could remove the single largest item on his cost spreadsheet. Well, with the help of our DeLorean we can move forward to the latest press release on this project:

The bank has decided to stay on the mainframe as its strategic platform and mothball the reputedly £600m+ 'investment' it made in trying to get off it.

Now, I am not saying that everybody should remain on their current hardware platform; every IT shop needs progress and cost-reduction initiatives. But IT execs need to be sure, and damned sure, that they are making financially sound, well-thought-through, business-driven decisions rather than following the latest trend. Only then, in this blogger's view, will they get the career stability at least some of them crave...

Keen to get your insight, so post your comments below, ping me @StevenDickens3, or just post using the hashtag #mainframedebate

Tuesday, July 30, 2013

Using Facts To Go On Offense Against Distributed FUD (copy of CA Blog)

"The highest form of flattery is to plagiarise" - Steven Dickens

Original blog can be found at:

In my last post, “Myths and Misconceptions about the Costs of Mainframe Computing”, I wrote about the challenges facing IT executives who see their department’s task list grow year after year while their budgets shrink, and how the financial analysts and business managers are likely to take aim at the biggest line items. The result is that the mainframe takes a hit, because its costs are neatly conveyed as large line items while distributed costs are scattered across the financial report. Fighting budget battles is no fun and can easily take more time—and have a greater impact on overall IT department performance—than the real work of IT. But since many of you are likely to have to fight those battles, here are some points to help you.
To go on offense (or at least provide a robust defense), you need to shift the conversation from obtrusive or highly variable numbers on the spreadsheet to a reasoned discussion around the total cost of ownership (TCO) and total cost of acquisition (TCA) of various parts of the IT infrastructure. No reasonable business professional will ignore the facts of the situation. However, given insufficient and/or incorrect information, even reasonable business pros can—and will—come to faulty conclusions and make poor decisions.
When making IT purchasing decisions, in addition to the obvious incremental costs of hardware and software, ask whether your business management takes any or all of the following into account:
  1. The cost to power IT equipment (servers, switches, etc.)
  2. The cost of the space occupied by IT equipment. Do you even have enough space for additional servers and switches?
  3. The cost to cool IT equipment. In many cases, an increased workload can be absorbed by System z equipment already on the floor (by simply enabling the additional needed capacity), meaning there is no incremental cost for your System z equipment.
  4. The additional cost of all of the above for development/test/QA environments
  5. Enabling backup/recovery, disaster recovery and/or business continuity capability in the environment
    In many cases, the distributed system "bid" does not include any of these costs. When you parachute a new workload into an existing System z environment, it tends to already have these capabilities "in the base." The incremental cost can be far less than having to newly establish these capabilities for a distributed implementation.
  6. The cost of new personnel to manage the systems. Typically on System z, as long as the workload is not a complete departure from what is currently in the environment, the demands of new workloads are easily absorbed by existing staff. My observations show that the number of administrators for an IT group is a function of the number of servers. The cost of the additional admins required by the new implementation should also be included in the distributed bid.
  7. The cost to upgrade or refresh the systems:  make sure that the bids cover enough time to include at least one hardware technology refresh at some predicted application growth rate
When you replace a server or blade you have to purchase a new one; the cost of a technology refresh is likely to be rather close to the original technology outlay plus the new capacity to accommodate application growth. On the mainframe side, when you upgrade your System z server, IBM gives you a significant credit for the capacity in your current servers; you essentially pay only for the additional capacity needed for application growth, even if you are replacing the physical System z server.
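
The refresh-cost comparison above can be made concrete with a back-of-the-envelope sketch. All dollar figures and the size of the upgrade credit below are invented for illustration; real numbers depend on the deal:

```python
# Distributed refresh: re-buy the whole footprint plus growth capacity.
# Mainframe refresh: upgrade credit offsets most of the existing capacity,
# so you pay mainly for the incremental growth. Figures are illustrative.
ORIGINAL_OUTLAY = 1_000_000   # initial hardware spend, either platform ($)
GROWTH_RATE = 0.20            # extra capacity needed by refresh time
UPGRADE_CREDIT = 0.80         # assumed credit for existing z capacity

distributed_refresh = ORIGINAL_OUTLAY * (1 + GROWTH_RATE)
mainframe_refresh = ORIGINAL_OUTLAY * (1 + GROWTH_RATE) - ORIGINAL_OUTLAY * UPGRADE_CREDIT

print(f"distributed refresh: ${distributed_refresh:,.0f}")
print(f"mainframe refresh:   ${mainframe_refresh:,.0f}")
```

With these assumed numbers the distributed refresh costs three times the mainframe one; the point is the shape of the comparison, not the exact figures, so plug in your own bid numbers.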

Also, there's this funny thing about the cost of System z software: for most products, incremental increases in software capacity cost less per MSU than existing capacity--each incremental MSU costs less than the previous one. This means two things: first, the added capacity will actually cost less per MSU than the capacity you are already using. Second, adding new capacity reduces the average cost of all of your current capacity. That's right: adding new workload actually reduces the unit cost of the workload that is currently running.
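
A minimal sketch of this declining-price-curve effect is below. The tier boundaries and per-MSU prices are made up for illustration; real IBM MLC price curves vary by product and contract:

```python
# Illustrative tiered software pricing: each additional MSU band costs
# less per MSU than the one before it. Tier sizes and prices are invented.
PRICE_TIERS = [            # (MSUs in tier, $ per MSU per month)
    (45, 100.0),           # first 45 MSUs
    (130, 60.0),           # next 130 MSUs
    (315, 35.0),           # next 315 MSUs
    (float("inf"), 20.0),  # everything above that
]

def monthly_cost(msus: float) -> float:
    """Total monthly software charge for a given MSU consumption."""
    cost, remaining = 0.0, msus
    for tier_size, price in PRICE_TIERS:
        used = min(remaining, tier_size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

before = monthly_cost(400)
after = monthly_cost(500)  # add 100 MSUs of new workload
print(f"avg $/MSU at 400 MSUs: {before / 400:.2f}")
print(f"avg $/MSU at 500 MSUs: {after / 500:.2f}")
print(f"marginal $/MSU for the added 100: {(after - before) / 100:.2f}")
```

On these invented tiers, the marginal cost of the added 100 MSUs is well below the average cost of the first 400, so the blended $/MSU drops when the new workload lands: exactly the effect described above.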

With regard to the fact that variable costs draw attention when high, as discussed in my previous post, a little conversational jiu-jitsu might be helpful. Rather than responding to the complaint defensively ("Well, it was a busy month. Last month, the cost was only 70 percent"), try turning it around. Remember that the overall cost of the distributed infrastructure (when fully discovered and added together) is going to be a rather large fixed cost, regardless of whether the infrastructure performs a single transaction or one billion in a particular month. You can say something like, "Yes, absolutely it cost more this month. We did twice as many trades as the previous month, which cost about 70 percent of this month. What was the cost of the distributed side last month? Was it any different from this month? Do you even know how much it cost? Does that cost include... (and rattle off the list of items above)?"

Finally, this may be a way to get this conversation kick-started. An IT analyst named Dr. Howard Rubin has done extensive research into the comparative economics of System z and distributed deployments. His research has found that, when you look at the cost of IT per product or service delivered, companies that use relatively more System z capacity deliver those products and services at an overall lower cost of IT than those that use more distributed capacity. It may be helpful to have some of your company's financial people look at this research and consider the impact on your business.

Here are some links:

It may just be true that distributed IT at your company is being delivered in a more cost-effective manner. However, if you and your company are serious about good IT cost management, this kind of exercise and visibility can reveal other areas where you can become more cost-efficient.

Tuesday, June 18, 2013

Systems of Record and Systems of Engagement

As the mainframe moves into its 50th year and predictions of its 'death' fade into folklore, its role evolves. A term I have heard that perfectly describes the mainframe in 2013 is the System of Record, as opposed to Systems of Engagement, whose role is to provide access.

This Forbes article expands on the terms:

Increasingly, CIOs have to focus development effort on getting apps out to the marketplace, be that for the internal Bring Your Own Device (BYOD)-driven customer of the IT department or for external customers. Whilst this form of development is becoming 'table stakes' rather than optional, the challenges this type of 24x7x365 access creates for back-end systems cannot be overstated. Let me expand. I met with a client last week who has their back-end logistics and stock-management systems running on the mainframe. Whilst this is the crown jewels of their IT operation and is continually being developed, they face key architectural challenges moving forward into a world of increased access. Their requirement is for customers and internal users to access the back-end systems via mobile apps. On the surface this sounds simple: just develop an app and hook it up to the mainframe, right?! The reality is somewhat different: the back-end data lives in a system which is batch-orientated, is brought down 4-5 days a year for application maintenance, and is not tuned for random SQL queries at any time of the day...

So, as you can see, how you architect mainframe systems to handle these new systems of engagement is key. And don't let this challenge be met with the standard answer - just ETL it off the mainframe and do it there - as this approach has its own challenges:

  • Data currency
  • Managing and Backing up yet another system
  • The ETL load itself is often as heavy as keeping the workload on the mainframe
  • Multiple copies of data to manage

So let's look at technologies that may be able to help:

  • Running the app-serving and web layer as close to the data as possible, i.e. on a Linux partition on the mainframe
  • Accelerating and optimising DB2 queries using the DB2 Analytics Accelerator
  • Re-architecting your IMS and DB2 databases to facilitate better 24x7 access
  • Leveraging the web services layer in CICS

If you need more help, get in touch via Twitter - @StevenDickens3 - but don't let mobile apps that access mainframe data become just another distributed platform; engage the brave new world...

Saturday, June 15, 2013

Systems of Record and Systems of Engagement

As the mainframe moves into its 50th year and predictions of its 'death' fade into folklore, its role evolves. A term I have heard that perfectly describes the mainframe in 2013 is the System of Record, as opposed to Systems of Engagement, whose role is to provide access. 

This is just a prelude to a more detailed blog post, hopefully later this week. But I thought I would solicit feedback before I expand further...

Tuesday, June 4, 2013

Check this out...

Got to give credit where it's due; check out this very interesting article:

Definitely worth a read; love the comment about the car being 100-year-old technology... Perhaps we should see the mainframe as a 911: almost 50 years old and still at the top of its game...

Wednesday, May 29, 2013

Packaging Vs Bespoke - The latest debate in solution design

As the IT industry goes through another cycle of vendor convergence, back largely to where it was 30+ years ago, namely four or five key vendors, these vendors are increasingly offering integrated vertical 'stacks' to their clients to address largely similar cloud-like deployment requirements.

IBM has made great waves recently with PureSystems, a solution with 'expertise' built in, in the form of solution patterns for various common COTS (Commercial Off The Shelf) software from IBM and others such as SAP. This approach is starting to resonate with clients as it addresses some fundamental concerns around time to value and the cost of deployment for new IT solutions. IBM is not alone in this space, with Oracle/Sun also making great claims for their range of pre-integrated and tuned offerings.

So, given the rest of the industry is looking at pre-packaging and pre-integrating, what then of the mainframe? Well, the mainframe is largely still a bespoke suit amongst racks of off-the-shelf offerings. Now, I am not going to argue that we should be pre-packaging a core banking solution based on in-house-written COBOL/PL1 applications, as the requirements of each client in the banking sector are so different it would make the whole exercise a waste of time. However, if we are looking at Linux consolidation onto an ELS, the story might be somewhat different.

So perhaps we should be looking at S/M/L configuration options, with storage sized accordingly and pre-integrated cloud tooling, plus patterns for common zLinux deployments such as WAS and DB2. This would massively decrease the 'solutioning' required whilst making the proposition easier for business partners to consume...

Without going further, I am keen to get your feedback and comments...

Thursday, April 11, 2013

IBM System z: The Lowest-Cost Database Server Solution

Enterprises that trusted in the mainframe myths and moved their corporate databases to distributed platforms are spending 100 percent more than necessary on database servers, creating data integrity issues and increasing their data risk exposure by constraining the ever-shrinking backup window.

By moving databases from shared-nothing distributed data servers to the shared storage environment of IBM zEnterprise systems and putting the applications on Integrated Facility for Linux (IFL) processors and an IBM zEnterprise BladeCenter Extension (zBX), IT executives can reduce their ecosystem costs by more than 50 percent per year. At Robert Frances Group (RFG), we completed a Total Cost of Ownership (TCO) analysis of the traditional distributed Linux and Microsoft Windows environment vs. a zEnterprise with zBX environment that consolidates the databases on the mainframe, and found the distributed environment to be twice as expensive. Our study used the standard three-year zEnterprise leasing and refresh strategy and the traditional five-year purchase plan for the distributed x86 scale-out scenario.

IT executives should evaluate the shared zEnterprise database server alternative to lower costs, improve productivity and reduce data risk. Additionally, IT executives should work with IBM or a third-party lessor to structure a package that best meets current and future business, financial and IT objectives.

We had several ideas and hypotheses in mind before conducting our analysis. Specifically:
• The scale-out distributed server model using shared-nothing databases is costly and inefficient, creates data integrity and operational exposures, and fails as a best practice. A switch to using the mainframe as a database server eliminates the need for database duplication and synchronization since the mainframe uses a shared-everything architecture. While the acquisition cost of the zEnterprise and zBX servers collectively runs more than distributed x86-based servers, this is more than compensated for by the drastic reduction in database arrays and their associated costs. IT executives should assess the platform options holistically rather than piecemeal to identify the optimal solution.
• A zEnterprise environment can place Linux applications on IFLs and Windows applications on a zBX. Using this tightly knit, workload-optimized solution reduces the number of processors required, improves application and system management, and uses a high-speed interconnect so performance isn’t diminished when shifting to a shared-everything database engine. A zEnterprise solution enables enterprises to improve automation, control, security and visibility to their applications and databases without degrading performance. IT executives should determine which applications and databases should move to a zEnterprise environment and perform a TCO analysis to gain executive buy-in for the shift.
• Several non-financial gains accrue when moving to a shared-everything storage environment and these should also be factored into the decision-making process. Having a single copy of data means there’s only one version of the truth, all outputs and reports will be consistent and keeping things in synch won’t require manual manipulation, which is error-prone. Most enterprises today spend between 25 and 45 percent of their time synchronizing the many database copies. The associated time consumption used for duplication also creates a backup exposure; some backups don’t occur when administrators are pressed for time. Business and IT executives should consider these data integrity and risk exposures.
• Most IT executives have blindly accepted as fact the notion that distributed processing is the least expensive solution. This idea has gained ground because of a focus on a Total Cost of Acquisition (TCA) perspective. If the only valid cost analysis were the TCA of servers, then it might hold water. However, when the entire ecosystem is analyzed—including administrator costs, application and middleware software license and maintenance fees, cabling, networking, servers, storage, floor space and power and cooling—this theory falls apart.
• When the zEnterprise is used as a database server and IFLs and zBX are fully leveraged—and the analysis occurs holistically—a different picture emerges. The zEnterprise environment costs more than 50 percent less than that of a distributed x86 ecosystem, mostly due to the savings on storage, administrator, warranty and software costs.
• The mainframe architecture supports shared-everything storage while all distributed operating system platforms use a shared-nothing architecture. The mainframe architecture is unique in that multiple workloads share processors, cache, memory, I/O and storage. Moreover, zEnterprise systems provide data, IT and storage management practices and processes that facilitate and simplify the centralized, shared environment and enable application and database multitenancy. This means mainframe applications can share a single instance of a database, such as customer data, while distributed systems force the creation of a copy for each application’s use.
• Often, companies have between seven and 50 copies of the same database in use, so every terabyte of data stored is expanded by requirements for archiving, backup, mirroring, snapshots, test systems and more (see Figure 1). This data store expansion is then duplicated by the number of copies the distributed systems require. Thus, 1 TB of data in a distributed environment could grow to in excess of 100 TBs—more than 10 times the amount needed when databases are shared using a zEnterprise. There are software clustering solutions to get around this distributed duplication phenomenon and some of the storage sprawl, but they’re partial fixes and only address certain data sets.
• Mainframe storage capacity requirements are a fraction of what’s required for distributed systems. Annual acquisition costs for additional storage on a mainframe will be far less than that for distributed storage solutions. The capital expenditure (CAPEX) savings from the differential in storage costs when mainframes are used as a database engine far exceed the added expense of the mainframe hardware. The mainframe’s smaller storage footprint will reduce the operational expenditures (OPEX) and lower the TCO.
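The copy-multiplication effect described in the list above is easy to sketch numerically. This is an illustrative calculation only; the expansion factor and copy count below are hypothetical values chosen from within the ranges the article quotes (7 to 50 copies, 10x expansion), not measured data.

```python
# Illustrative sketch of how database copies multiply storage requirements.
# Figures are hypothetical examples within the ranges quoted above.

def total_storage_tb(base_tb, expansion_factor, db_copies):
    """Base data, expanded for archive/backup/mirror/snapshot/test copies,
    then duplicated once per shared-nothing database copy."""
    return base_tb * expansion_factor * db_copies

# 1 TB of source data, ~10x expansion for secondary copies,
# 10 application-level database copies (mid-range of the 7-50 quoted):
distributed = total_storage_tb(1, 10, 10)  # shared-nothing: one copy per app
shared = total_storage_tb(1, 10, 1)        # shared-everything: one instance

print(distributed, shared)  # 100 10
```

The 10x gap between the two results mirrors the article's "more than 10 times" claim for 1 TB growing past 100 TBs in a distributed environment.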

The Methodology

We hypothesized that a large Small to Midsized Business (SMB) with revenues between $750 million and $1 billion might operate a more economical data center environment if it used the new zEnterprise architecture and the mainframe as a database server. Most SMBs run their applications on Windows and/or Linux on x86-architected servers that don’t offer the advantages of a scale-up architecture. Let’s assume AB Co. (ABCo) runs 500 applications with 75 percent of them (375) executing on top of the Windows operating system. The remaining applications (125) run on Red Hat Enterprise Linux. Additionally, 10 percent are CPU-intensive and require their own blade servers. All other applications operate under either VMware or KVM, depending on whether they’re Windows or Linux applications, respectively. The application workload growth rate is 20 percent per year.

We also assumed a Storwize V7000 Unified Storage System houses the databases for the mainframe and distributed environments. To keep the analysis from becoming too complex, only two sizes of databases are used (1 and 2 TBs) and each application accesses 10 databases, half of each size. The storage growth rate is 25 percent. There are a total of 70 unique databases, half of each size. For the purposes of the study, only the production servers and storage are included; excluded are the archive, backup and snapshot copies of data. Because a Storwize storage solution is used, we assume a 60 percent utilization is achieved in all environments.
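The 60 percent utilization assumption ties the raw and usable capacities quoted later in the article together (usable = raw × 0.60). A minimal sketch of that relationship, using the raw figures that appear in the analysis sections below:

```python
# Sketch of the raw-to-usable storage relationship implied by the
# 60 percent utilization assumption (usable = raw * 0.60).

UTILIZATION = 0.60

def usable_tb(raw_tb):
    """Usable capacity achieved on a Storwize array at the assumed utilization."""
    return raw_tb * UTILIZATION

print(round(usable_tb(214)))   # ~128 TB usable from the mainframe's initial 214 raw TBs
print(round(usable_tb(1285)))  # ~771 TB usable from the distributed 1,285 raw TBs
```

Both results match the usable/raw pairs quoted in the "Analysis Considerations" section (128 TBs from 214 raw TBs; 771 TBs from 1,285 raw TBs).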

We further assumed that 126 TB of storage is required to handle storage needs for the first 12 months of operation. This includes an additional 20 percent for duplicate databases for the mission-critical applications. On the x86 side, since this is a shared-nothing framework, a minimum set of seven copies of databases would be needed. This results in the total initial storage capacity of 770 TB being required to support the storage needs of the first year’s operation. Finally, note that DB2 10 for z/OS is the database software used to access all databases.

The x86 server scenario uses all IBM 16-core HX5 blade servers for application and database processing. The zEnterprise uses the Central Processing (CP) environment to handle all the database interactions, exploits IFLs for all of the Linux workloads and the zBX for the Windows workloads. In this way, each workload is allocated to the server platform best-suited to perform the task. We further assumed the x86 servers were purchased and kept in operation throughout the five-year analysis period while the zEnterprise boxes were leased and refreshed at the end of three years.

The Distributed Approach

We assumed the distributed environment used 24 16-core HX5 blade servers to handle the 500 Linux and Windows applications. Since these environments require shared-nothing storage, the Storwize solution ends up requiring 126 enclosures and 1,285 raw TB of storage. All the hardware was purchased with the financing of the purchase price spread out over the five-year period. To meet the additional capacity demands year-over-year, new servers or storage arrays were purchased using the same methodology.

The Mainframe Solution

We configured a zEnterprise z196 model 501 to handle the database management, along with 13 IFLs and a zBX containing 14 16-core HX5 blade servers. The only application in the CP is the DB2 database management package. None of the distributed applications are rewritten to run on the CP. The Linux applications are relocated to IFLs, where there’s better memory management, allowing for greater utilization (up to 60 percent) and performance. We assume that 10 Linux applications can run on each IFL. Due to the improved management capabilities of a zBX, we assume a 10 to 15 percent performance improvement per HX5 on the zBX compared to a standard distributed environment.

In the zEnterprise environment, the data I/O requests start from the applications in the IFLs and zBX blades and are relayed to the DB2 application in the CP for handling. Only the DB2 application in the CP interfaces with the Storwize storage arrays. This environment initially requires 21 shared-storage Storwize enclosures and 214 raw TBs of storage.

At the end of the three-year lease, the zEnterprise model 501 is upgraded to a model 601 so it could handle the database workload through the next three-year period. As is common when upgrading a mainframe, IFLs are also upgraded. The cost to upgrade each IFL is $6,000 and is factored into the new lease. When the HX5 blade servers are upgraded at the end of the third year, the number of servers shrinks by two. We assumed that even though there are two fewer servers in use in year four, the licenses and associated software maintenance should be continued. This way, when it’s necessary to add more servers in the last year of the analysis, only the software for two servers needs to be factored in instead of four.

Using the previous scenario, we find that, as expected, the cost of the mainframe environment exceeds that of the distributed x86 servers by $9.4 million to $5.3 million on a Net Present Value (NPV) basis. However, the $4.1 million differential is more than recouped on the storage side. The zEnterprise storage costs come in at $3.8 million on an NPV basis while the distributed storage costs exceed $21.7 million. This is a net savings in excess of $13.8 million. Moreover, this savings is more than the cost of the entire zEnterprise ecosystem.
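The net-savings figure in the paragraph above can be checked directly from the NPV numbers it quotes: the zEnterprise server premium is more than offset by the storage-side savings.

```python
# Checking the net-savings arithmetic from the NPV figures quoted above ($M).
server_premium = 9.4 - 5.3    # zEnterprise servers cost more than x86 servers
storage_savings = 21.7 - 3.8  # but zEnterprise storage costs far less

net_savings = storage_savings - server_premium
print(round(net_savings, 1))  # 13.8 -> the "in excess of $13.8 million" quoted
```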

Analysis Considerations

The TCO analysis was done over a five-year period. On the leasing side, the original zEnterprise processors (CP, IFLs and HX5 blades) are returned after 36 months and replaced by the latest-generation servers. By swapping out the old hardware and moving to more powerful processors, CP growth is contained and excess capacity is minimized. IFL growth is slowed, topping out at 25, while the HX5 blades shrink initially upon replacement and then expand to a total of 22 blade servers. The Storwize arrays grow from the initial 128 TBs (214 raw TBs) to 314 TBs (523 raw TBs). However, the number of enclosures only grows from 21 to 28. This small expansion is the result of leasing the storage and replacing the units with denser storage at the end of the three-year lease period.

The purchase model assumes that all servers are kept in service for a full five-year cycle and that, whenever added capacity is required, additional servers are bought. Thus, with the purchase model, the 24 servers slowly expand at a 20 percent annual rate until they reach 48 servers by the end of the five-year cycle. The Storwize arrays expand from an initial 771 TBs (1,285 raw TBs) to 1.9 PBs (3.14 raw PBs) over the five-year period. The number of enclosures jumps from 126 to 216 in the same period, as none of the arrays or enclosures are swapped out.

On the software side in the purchase model, we assumed that payments for all software licenses were financed over the five-year period. However, in the leasing model, the costs of software licenses were spread out over the term of the lease. The leasing model selected was a Fair Market Value (FMV) lease obtained from IGF at a reasonable, but not most favorable, lease rate. The cost of capital and the purchase financing rate were estimated to be 6 percent.
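Since the whole comparison hinges on discounting cash flows at the assumed 6 percent cost of capital, a minimal NPV sketch may help. The cash flows below are hypothetical placeholders (five equal annual payments summing to the $16.5M current-dollar lease total quoted later), not the study's actual year-by-year figures.

```python
# A minimal NPV sketch at the 6 percent cost of capital assumed above.
# Cash flows are hypothetical placeholders, not the study's actual figures.

def npv(rate, cash_flows):
    """Discount a list of end-of-year cash flows back to present value."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# e.g. five equal annual payments of $3.3M (current-dollar total $16.5M):
print(round(npv(0.06, [3.3] * 5), 1))  # 13.9
```

That comes out in the same ballpark as the $13.2M NPV the article quotes for the leased zEnterprise solution; the difference reflects the study's uneven year-by-year spend rather than the flat payments assumed here.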


We found it’s more than 100 percent more costly to distribute database serving among the distributed x86 servers than to consolidate the databases onto a common shared database platform using the zEnterprise as a database server. This cost savings is true on a current dollar basis and NPV basis.

The primary inhibitor to selection of the zEnterprise database engine approach is the fact that the zEnterprise server alternative is more than twice as expensive as the x86 servers. Business and IT executives see the price tag differential—$1.14 million for the x86-based servers vs. $3.88 million for the zEnterprise servers over the five-year period—and conclude mainframes aren’t the way to go. However, the server costs pale when the database environment is factored into the equation. The cost for the distributed shared-nothing x86 storage systems comes in at $10.7 million while the mainframe storage system only costs $2 million over the five-year period. The $8.7 million savings in storage acquisition costs more than compensates for the $2.74 million in added zEnterprise acquisition expenses.

When all the TCO factors are examined, the purchased x86 solution runs almost $34.8 million, or on an NPV basis, just over $27 million. The leased zEnterprise solution comes in at more than 50 percent less—$16.5 million on a current dollar basis, or $13.2 million on an NPV basis.

The zEnterprise solution costs remained fairly flat over the five-year period, with most of the yearly expenses in the low $3 million range. There were two years when that didn’t occur—years three and four, where the expenses jumped to $4 million and then dropped to $2.7 million. The purchased x86 solution saw its total annual costs climb from $4.8 million in year one to $9.4 million at the end of the five years (see Figure 2).


The out-of-pocket charges to install the zEnterprise alternative are a wash compared to the installation costs of the x86 solution. However, there’s an $18.2 million savings that’s achieved by using the mainframe as a database server. Approximately one-third of that is hardware costs while another 27 percent savings comes from administrator costs.

In the purchased option, there was a requirement for additional software licenses and maintenance fees and growth in energy consumption. The total additional software expenditures in the purchase model exceeded $3.4 million, with most of that being software license and warranty fees. Similarly, power and cooling charges increased by more than $811,000 over the five years in the purchased model, or about 4 percent of the added expenditures (see Figures 3 and 4).

Other Considerations

There are several other advantages the zEnterprise platform offers that weren’t included in the cost analysis. Some of these are server-related while others are tied to the compressed storage footprint. Having just one copy of data reduces the risk of data integrity exposures caused by application or timing errors. This eliminates the need for syncing copies, which can consume between 25 and 45 percent of administrator time. Most companies today are concerned about the shrinking backup window; eliminating synchronization frees up time for backups. Companies often are hard pressed to get all their backups done as scheduled and are exposed, should a backup run fail to complete. There’s little time for a rerun. If a recovery is necessary, the most recent recovery point may not have been captured, potentially causing data integrity problems, lost revenues and customer dissatisfaction.

zEnterprise processors are architected for maximizing throughput and system utilization when consolidating multiple workloads on a server complex. Mainframes can consistently handle utilization levels of 80 to 100 percent without freezing or failing. Moreover, mainframes are recognized as the best platform for continuous and high availability, investment protection, performance, reliability, scalability and security. Because of its unique scale-up architecture, the cost per unit of work on a mainframe goes down as the workload increases; that isn’t the case with the scale-out architecture (see Figure 5). The cost/performance gains are due to the need for fewer administrators per unit of workload and higher levels of utilization. Mainframes can achieve higher utilization levels because of memory and processor sharing. Under the covers, there are hundreds of I/O processors to handle the data movements, freeing the central and specialty processors to focus on the application and task workloads.

This analysis didn’t examine the added costs of development systems. Here, too, the zEnterprise environment can share databases while each of the x86 test systems would have its own copies of the data. Moreover, users archive and back up the various databases and create snapshots. As shown in Figure 1, these database duplicates increase the rate of storage growth in the distributed environment over that of the mainframe solution. If these additional costs were added to the TCO, the zEnterprise advantage would improve even more.


We found that the zEnterprise reduced costs in all the TCO factors considered. zEnterprise hardware costs were 33 percent less than the x86 ecosystem costs while administrator costs were 28 percent lower. Warranty costs were 16 percent less and the cost of software dropped by 12 percent when the mainframe alternative was used. For much smaller SMBs or departmental systems, mainframes aren’t the answer, but for midsize to large enterprises, the economies of scale provided by mainframe solutions make a compelling case for organizations to re-examine their assumptions and consider the zEnterprise as a target environment.
Mainframe myths have led to higher data center costs and suboptimization. Organizations running hundreds of applications and multiterabytes of data should re-evaluate their architectural platform choices and evaluate whether or not a zEnterprise solution might provide them with a lower TCO. IT executives should insist on an evaluation that addresses the financial facts and ignores the religious platform wars. In today’s environment, IT must select and implement the best target platforms. The zEnterprise as a database server is a great choice.

Monday, April 8, 2013

Suggestions on how to save HP

As an ex-HP guy of 10 years' standing who is now an IBMer, I just want to put forward a simple plan for how to 'save' HP.

Sell the PC business to Samsung, Lenovo or Google
As IBM did with its PC business a few years back, HP needs to get out of PCs.  HP has retained the Compaq brand for shelf space in large retailers, so my advice is to sell the PC business, which makes a single-digit profit margin, under this name to the likes of Samsung, Lenovo or Google.  The funds would right the balance sheet and allow the business to exit the consumer market.  HP should retain the workstation and thin client business and scale down to focus on these high-margin elements of the PC segment.

Get out of Printers!!!
Again, being in the slowly declining but highly profitable printer business leaves HP exposed to the consumer market.  HP should retain printer maintenance and services, but exit the hardware and cartridge business.  The funds from this sale would again aid the balance sheet.

What is left?
The business that is left is squarely focused on the corporate sector and has a healthy balance sheet.  With the war chest from the divestments HP can then go spending on acquiring software businesses in the space of Big Data, Analytics, Security and Cloud.  This would leave HP focused on datacenter servers, storage, networking, consulting and outsourcing as well as software.

What to buy with the money?
Get serious about another segment in the software business, my suggestions would be:
  • Symantec
  • SAP
  • Salesforce

These would all be expensive, but with the war chest from getting out of printers and PCs, these acquisitions would be possible.

A hardware/software alternative would be to buy EMC/VMware. This would consolidate HP's position as number one in Enterprise Storage, and they would get the highly profitable VMware...

In Summary
HP needs to be an enterprise-focused IT player that innovates, rather than a dual-focused enterprise/consumer company, because at the moment they are failing to execute on both fronts...

Tuesday, March 19, 2013

Language Barrier

Given all the recent press coverage of IBM's latest set of annual financial results and the prominence of the mainframe within these figures, the analyst coverage of System z over the last couple of months seems to have been, in the main, positive.  I have even had good feedback from an Enterprise Data Architect at a first meeting, following his recent attendance at a Gartner event. So against this largely positive recent backdrop, I would like to focus on what I see as one of the barriers to the 'mainframe going mainstream', namely the language barrier.

What do I mean by the 'language barrier'?  Well, as a former colleague put it very succinctly when describing this issue, "we worship different gods and speak in different tongues" when we describe the characteristics of the mainframe to our distributed brethren.  Let me give you a few examples to illustrate my meaning:

OSA Cards Vs NIC
Now I am not going to claim to be a techie or a former hardware engineer, but why oh why do we z guys have to come up with a completely different TLA (Three Letter Acronym) for a network connector, namely the Open Systems Adapter, when the whole industry calls them Network Interface Controllers?  Check out IBM's very own website:

The first sentence for me completely encapsulates the issue I am discussing in this post:

Although the OSA card is the only NIC for z/OS, this is a bit of an understatement. The OSA card variants support Ethernet in all of its current implementations.

If the OSA card is the only NIC for z/OS, then call it a NIC and address the following issues, which get replayed back to every mainframe advocate when they engage with senior IT management:

  • Mainframe skills are in short supply
  • Training people on the mainframe is time consuming and expensive
  • Re-skilling existing staff will take too long

Now I fully appreciate that taking a Windows admin and retraining him/her to be a Sys Prog will never be a 2-day training course, and rightfully so, but do we really need to make it so hard?  At the end of the day, the mainframe is only another server that connects to storage and a network.  If we want to make the mainframe mainstream again and attract the Gen Y'ers to the platform, then we need to focus not only on GUI interfaces, as is so often the discussion point, but also bring the underlying hardware concepts in line, at least in language terms, with the rest of the industry.

Take for example the area of storage: we mainframers talk ECKD and DASD when the whole rest of the industry talks SAN.  Now I am not going to open the debate here about what is better, FICON, ESCON etc., but I raise it just to highlight that in every large shop I go to there is a mainframe storage team and a distributed storage team. Why? If, as I say, the mainframe is only a server (bear with me), then having a separate storage team to administer it can only be seen by the CIO as reinforcing any perceptions they may have (however correct or not) that the mainframe is expensive.

What am I proposing as the answer?
Well, IBM can look through the lexicon of the mainframe when it launches a new model (the z114 replacement, for example) and take the bold step of proposing a Language Harmonization Program, or LHP, to bring mainframe terms in line with distributed terms.  If IBM changes what it calls the components of the box and the various interfaces, the whole industry will come on board, and within 3 years gone will be the esoteric 'z-only' phrases, and the mainframe will again have moved closer to becoming mainstream...

Feel free to comment here for other LHP suggestions or raise them at the next #mainframedebate on the 8th April at 4-5pm GMT | 11am-12pm EST. Or as always, please don't hesitate to get in touch directly via my Twitter account @StevenDickens3

Tuesday, February 5, 2013

February 5th Announcements for System z

If you haven't already seen the IBM announcements relating to System z let me give you my thoughts (for what they are worth) and why you should care:

It really amazes me that Cloud on z, or the much more snappily titled zCloud, isn't getting more press.  Take for example:

US organisation Nationwide’s private cloud on IBM zEnterprise has replaced thousands of standalone servers, eliminating both capital and operational expenditure. The initial #consolidation exercise is estimated to have saved the company some $15 million over three years.

Check out the detail at

If you are doing Linux at scale (and by scale I mean more than 400 Virtual Machines) then the cost case for z is massively compelling.  If you want to throw enterprise software into the mix from the likes of Oracle or even IBM's DB2 and WebSphere, then the license savings alone can pay for the underlying tin twofold within a year.

The IBM Enterprise Linux Server platform as an underlying platform for a private/public Linux cloud is a great message and doesn't come with the 'legacy' baggage oft associated (incorrectly IMHO, but don't get me started...) with System z.  For heaven's sake, it is a large virtualised server that runs Linux; how complex can that be?

Operational Analytics
As I have previously blogged, having a 'system of record' and then proliferating copies of this data to a myriad of distributed servers, just so you can do complex analytics against it, appears to me to be folly.  When organisations are taking between 8 and 12 copies of their core data, squirting this via a multitude of ETL methods out to other platforms and then having to manage these disparate systems, the costs, not surprisingly, spiral out of control.  Not to mention the data management, consistency and operational decision-making challenges this poses.  Why oh why not just keep your data where it lives and breathes, i.e. in your core system, and analyse it from there?  Surely this approach must be cheaper and not lead to a proliferation of data models...

Check out this Forrester whitepaper for a banking discussion on this topic:

In today's dynamic economy, analytics delivers the new competitive advantage. Financial Services firms count on a consolidated, secure platform to perform real-time analytics. This Forrester Thought Leadership Paper, commissioned by IBM, examines global financial firms' perceptions on rapidly changing data - as well as the analytics landscape - and their plans to meet the challenges head-on.

Link to the paper:

Tuesday, January 29, 2013

2013 The Year of the Mainframe

As we start to get up to speed in 2013, and following a week in IBM's Montpellier lab, I thought I would kick off the year with some musings on two trends we should be looking for this year.

As Big Data looks to replace cloud as the next buzzword in the IT industry, equal focus is also being placed on how you analyse this data.  From a mainframe perspective, the box has always been the biggest data server in the datacentre and typically is the source of the organisation's most trustworthy data.  As the system of record or master data server, the mainframe typically holds the one version of the truth when it comes to customer accounts or operational data.  The prevailing trend over the last 20 years has been to ETL data off the mainframe to distributed systems so that it can be analysed.  This logic was sound when the mainframe was unable to process long, complex queries in a cost-effective manner without burning excessive MIPS.  However, as offload engines have matured and overall z core chip speed has increased, this logic has eroded.  With the launch of IBM's DB2 Analytics Accelerator, the logic has been turned on its head.

Imagine, if you will, the development of graphics accelerator cards in PCs and the rise of more complex games to take advantage of the underlying hardware capability.  Well, imagine the IDAA (and I don't mean these guys) as a graphics accelerator card for the mainframe, where graphics = analytics, and you get the picture. Why take anywhere between 9-11 copies of your mainframe data on average, spread it around your distributed estate and then have to pay for the storage to store it, back it up and generally manage data that is at least a day old?  Especially when you can query 'fresh' operational data in situ and trust its origin and security.

Without wishing to labour a point, the mainframe was the first 'Cloud' server, and all my more 'senior' z colleagues will go misty-eyed at the word 'bureau' and the parallels with the latest hip and trendy craze of Cloud.  So before I begin to sound like the metaphorical industry Dad, let me elaborate on why zCloud should be a trend in 2013.  Looking at any industry definition of a cloud, terms like available, virtualised, scalable, secure and multi-tenant are central to a platform's claim to be a cloud.  Well, the mainframe has had these attributes for the last 40 years.  However, I fully get that this heritage has been based on z/OS and its predecessors, and whilst there is still life in the old girl yet, other OSes are widely available.  So let's focus on Linux, and particularly the likes of SUSE and Red Hat.  If you want a platform to handle extreme-scale Linux consolidation, and in particular workloads such as high-I/O databases, look no further than the mainframe.

In one recent RFI that my team responded to, we positioned that an EC12 was able to easily accommodate 80 Virtual Machines per core and in so doing be the cheapest platform for providing virtualised Red Hat guests.  I know the last statement is up there with "the world isn't flat, it's round", but look at the numbers: £131 per VM per month vs £290 per VM per month for a comparable x86 and VMware service...
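Those per-VM figures compound quickly at scale. A back-of-envelope sketch, assuming a hypothetical 400-VM estate (the figures per VM are the ones quoted above; the estate size is an illustrative assumption):

```python
# Back-of-envelope check on the per-VM figures quoted above:
# GBP 131/VM/month on z vs GBP 290/VM/month on x86 + VMware.

VMS = 400  # hypothetical estate size, for illustration

z_cost = 131 * VMS * 12    # annual cost on the mainframe
x86_cost = 290 * VMS * 12  # annual cost on distributed x86

print(x86_cost - z_cost)   # 763200 -> roughly GBP 763k saved per year
```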

If the above has whetted your appetite for exploring the world of z and how the mainframe continues to evolve in 2013, join me and my colleagues at the next #mainframedebate on Twitter on the 7th February between 4-5pm GMT and 11am-12pm EST to find out more.  Even if you think the above is heresy and you want to go 10 rounds on the numbers for zCloud, join us and let's make it a lively #mainframedebate, or contact me directly at @StevenDickens3...