Keeping the Mainframe Alive and Well
Editor's Comments: With the advent of big data, large and medium-sized businesses should be looking at the mainframe as a serious player. In my opinion, while the mainframe is far from dead, IBM needs to invest in some really creative marketing to the C-suite. As long as businesses keep going after the next shiny object and do not take the time to understand Total Cost of Ownership and apply it to their decision making, the mainframe is in much more peril than it needs to be.
“The mainframe gets a bullseye on its back because it’s one line item, not ten thousand smaller line items,” says Jay Lipovich, director at BMC Software.

When Y2K surfaced in data center management's awareness, many people declared the mainframe dead. There appeared to be some reasons behind this perspective; for one thing, newer, cheaper platforms like UNIX were looking more viable for production systems. UNIX hardware and software appeared to be cheaper, and managers could hire people with UNIX skills right out of college. By contrast, no one was training mainframers, and the experienced ones wouldn't work for low wages.

We’re seeing the same problem now. As companies seek to modernize applications, move to the cloud and implement web- and mobile-enabled applications, the “obvious” solution is to use newer platforms that host these new technologies. But there’s an issue here: the same one industry leaders faced in the Y2K era. Moving complex, mission-critical systems to another platform poses quite a few challenges. COBOL and PL/I systems can’t be directly ported to UNIX; they have to be rewritten, and these languages don't make that an easy task. You can't move anything without ensuring that the business rules are understood and translated into the new coding languages. Few developers are stars at documentation, and in many cases the original developers have moved on, leaving people who don’t understand the details of what the applications need to do. In other cases, the only code available is object code; the source code was lost long ago.

So there’s a risk: if you rewrite an application and omit key elements, your business will be impacted. You’ll need people who know the hardware and software on both ends well enough to manage the translation, but the skill set required to rewrite code is in short supply. As a result, the translation will likely cost a great deal of time and money.
Still, some determined managers insist on trying it, with decidedly mixed results. The few successes tend to be small and isolated, even if they’re critically important to the business. The ultimate goal is maximizing cost savings while minimizing business risk, and more companies are beginning to realize that they’ve made a bad bargain.

A few vendors have offered emulation as a way to rehost legacy code on Linux, including the ability to continue using VSAM and CICS. It sounds good: with no code rewrites, business risk, cost and time are all reduced. But since the code doesn’t change, you can’t do all the things you might want to with it. You’re also left with a lot of acquired technical debt (inefficient, poorly written code), which costs you money, slows your applications down and keeps you from modernizing. And yet, many consider this an option because they think it makes their IT budget look better. This is a fallacy, but one that persists because the cost of the mainframe initially looks big as a single line item. The true total cost of ownership, however, requires more careful analysis.

Another option, popular during the Y2K years, is purchasing an application package (e.g., SAP or PeopleSoft) with the idea that while your existing applications were heavily customized, over time you could apply that customization to the packaged app and restore all the functionality. It solved the Y2K problem; you never needed to find and fix the problematic code. But many companies found that implementation took much longer than expected and cost more. On top of that, customization took much longer still, adding real risk to a company’s competitive edge. Nowadays, you’ll see this option less frequently.

An additional cost for those doing any form of rehosting is the steep learning curve. A mainframe support team generally has years of institutional knowledge, not just of how the systems work, but also of how each one relates to business functions.
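The line-item fallacy is easy to see with simple arithmetic. As a minimal sketch (all dollar figures below are purely hypothetical, invented for illustration), the consolidated mainframe bill can be the largest single line in a budget while the rehosted alternative, scattered across many smaller lines, totals more:

```python
# Illustrative only: hypothetical annual costs, not real vendor pricing.

def total_cost(line_items):
    """Sum a platform's annual cost across its budget line items."""
    return sum(line_items.values())

# One consolidated mainframe line item (hypothetical figure).
mainframe = {"mainframe (hw + sw + support)": 3_000_000}

# The same workload rehosted: many smaller line items (also hypothetical).
distributed = {
    "server hardware": 600_000,
    "OS and middleware licenses": 500_000,
    "virtualization and cloud fees": 700_000,
    "additional staff and retraining": 900_000,
    "migration/rewrite amortization": 800_000,
}

print(total_cost(mainframe))    # 3000000
print(total_cost(distributed))  # 3500000

# The biggest single line item is not necessarily the biggest total cost.
```

The point is not the numbers themselves but the comparison: a TCO analysis has to sum every line item a platform replaces, not just eliminate the one that stands out.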
If you bring in new people to support UNIX and Wintel platforms, they’ll have to start over with a clean slate, and new platforms mean new software to learn and manage. Rather than a simple change in IT direction, you’ll essentially be rebooting your company.

So what’s a mainframer to do when faced with the cyclic push to get off the mainframe? Know the challenges involved, and be prepared with a solid business case that states why the mainframe is still relevant. Understand the high risk and high cost of moving off the mainframe, and learn how to sustain the mainframe for less while modernizing the legacy code to meet current and future demand. Get your talking points and proof points together so you’re ready to support your position.

Next, we’ll talk about how technical debt is a real problem for mainframers, why it matters and how you can do something about it now.

Denise P. Kalm is chief innovator of Kalm Kreative Inc.