This article is part of SHARE’s intro to the mainframe series. If you would like to contribute to this series, please reach out to editor@share.org.
Some terms have been bandied about since the earliest days of computing, and their meanings have grown and changed along with the growth and changes in IT. One such term is “artificial intelligence” (AI), a chronic aspiration of ours that traces its origins to our earliest history – see my SHARE’d Intelligence article about AI. More about AI later.
Another such term is “automation.” The idea is deceptively simple: getting computers to do something in the place of people doing it. After all, that’s what computers were designed to do, right? Rote, repetitive work that is hard for people to do consistently.
The origins of automation cannot be teased apart from those of AI. The two are not identical, but they are inextricably bound together, and interwoven with such initiatives as autonomic self-management and rules- and logic-based automation. It also turns out that the very concept of computer-based automation is recursive — i.e., defined in terms of itself: everything you automate on a computer becomes the next layer down of automation. At the top layer of automation, there are always people handling the controls: for example, people responding to console messages from applications that are automating some previously manual process.
As each new layer of automation is developed – such as automatically responding to console messages – a new layer of human interaction emerges on top: for example, designing and managing the automation that responds to those messages. That work can then be automated in turn, moving the humans to a still higher level of abstraction, and so on.
The idea, then, is to get the computer to handle as many layers of automation as possible, so that humans can simply monitor to ensure everything is working as it should, occasionally giving the computer input to keep the automation on the right track.
Where Does Automation Fit Into the IT and Mainframe Landscape?
But here’s the thing: there is no outer limit to automation. You don’t just automate traditional operations. You can automate security, workloads, network management, database management, and applications, and then automate the running of that automation. So you might begin with monitoring console messages and issuing automated commands to proactively watch the health and state of a range of systems, starting and stopping them at appropriate times. From there, you might progress to a graphical display of system status with the ability to point and click to drill down to specifics, to line-of-business health visibility, and outwards from there.
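To make that first step concrete, here is a minimal sketch, in plain REXX, of the kind of routine that console-message automation runs. It reacts to a purely hypothetical “task ended” message by restarting the task, but only during business hours. The message ID, the task name, and the issueCommand helper are all illustrative assumptions; a real automation product would drive such a routine from its own message-event interface and issue commands through its own facilities.

/* REXX: minimal sketch of a console-message automation routine.    */
/* The message ID, task name, and issueCommand helper are purely    */
/* illustrative; an automation product would supply the real hooks. */
parse arg msgText                     /* full console message text   */
parse var msgText msgId taskName .    /* e.g. "ABC123I PAYROLL ..."  */

select
  when msgId = 'ABC123I' then do      /* hypothetical "task ended"   */
    hour = time('H')                  /* current hour, 0-23          */
    if hour >= 6 & hour < 18 then     /* restart only in prime shift */
      call issueCommand 'S' taskName  /* restart the started task    */
    else
      say 'Leaving' taskName 'down outside business hours'
  end
  otherwise nop                       /* ignore all other messages   */
end
exit

issueCommand: procedure
  parse arg cmd
  /* Stand-in for the product's operator-command interface; here we  */
  /* just log what would have been issued.                           */
  say 'Would issue operator command:' cmd
return

Even a toy like this shows the pattern at the heart of message automation: recognize an event, apply a policy, and act – or deliberately decline to act.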
Of course, people need to stay involved with monitoring and configuring that automation, as it is the business users whose needs must be constantly front of mind with any automation, especially as business needs and initiatives change. After all, people are much better at recognizing when something doesn’t seem right, whether based on unfamiliar patterns or on business mandates, though AI is beginning to inherit that level of awareness. As science fiction author Isaac Asimov reminded us, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka’ but ‘That's funny.’” So, AI can begin to help us see such exceptions as well. More about that soon.
What Are the Major Benefits and Drawbacks of Automation?
The benefits are savings of time and effort and the advancement of functionality, along with the cost savings and business value generation that go with them. Not only do computer operators spend less time looking up console messages and requests for reply, or following out-of-date printed procedures for doing an IPL (“Initial Program Load” – a mainframe reboot), but systems, database, and security people can also wait until the computer sends them an alert before digging into problems that aren’t obvious to resolve. That is a vast savings of employee time and attention, not to mention avoidance of the costs of the inevitable errors that arise from the all-too-human tendency to follow procedures inaccurately. In addition, automation can alert personnel to a problem before users notice it, which saves management significant time, embarrassment, and consternation.
Such automation, first created in the earliest days of computing and greatly advanced from the 1970s onward, is a mainstay for the largest organizations that use enterprise computing platforms such as IBM Z – in other words, the organizations that hold the data and processing of record for the world economy.
Still, eternal vigilance remains a mandate to ensure the automation doesn’t accidentally become a fossilized rut when the business needs to shift gears or change direction. Another “gotcha” arises when the automation has come to embody all the business processes and rules, but the people who wrote it have moved on, and suddenly no human has the expertise to understand, modify, or advance it. When only the automation “knows” why a particular system has to be started, stopped, or changed in a particular way, making changes for business value can become prohibitively risky, because they might break other interdependent things in undocumented ways; a sort of sclerosis immobilizes things in the face of new demands or opportunities.
That’s where AI is looking helpful — both to analyze and understand what is in place, and to help generate and manage automation in a more dynamic and business-sensitive way. Of course, there are many kinds of AI. There are neural networks that do advanced pattern recognition and can say the equivalent of “that’s funny” when unexpected things happen. There are large language models that can generate automation based on a deep analysis of the behavior of an environment. There is predictive intelligence that can offer insight into which things are likely to occur, in a manner that can be pre-emptively automated. And that’s just the beginning. As I like to say, meta-intelligence is intelligence.
How Do You Use Automation?
To illustrate, I have worked with various automation systems on the mainframe, many of which can employ the REXX programming language for writing automation routines, but which also have more features for discovering what needs automating and for generating that automation. Organizations such as IBM, BMC, Rocket, Broadcom, and others have excellent mainframe automation solutions. While I admire and respect all of them, the one I have worked on most, and a favorite of mine, is OPS/MVS. Currently a Broadcom product, it was written in Pittsburgh, became generally available in the 1980s, and passed through several acquisitions on its way to its current purveyor. It started out by offering the ability to respond to, suppress, and automate console message handling. But it also offers a vast range of more advanced functionality, which can be invoked by REXX programs, through automated rules, or through simpler high-level interfaces, for managing the state of the system and participating in an organization-wide, real-time, interactive depiction of the state of IT. Competing products could be characterized similarly, with each differentiating itself through its own very advanced functionality.
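To give a taste of what such a routine might look like, here is a minimal sketch in plain REXX of a proactive health check of the sort an automation product could run on a timer: it compares a list of required started tasks against what is actually active and flags anything missing for restart. The task names and the activeTasks stub are assumptions for illustration only – this is not OPS/MVS rule syntax or any product’s actual API.

/* REXX: sketch of a proactive health-check routine of the kind an  */
/* automation product could run on a timer. Task names and the      */
/* activeTasks stub are illustrative only.                           */
required.0 = 3                    /* started tasks we expect to run  */
required.1 = 'CICSPROD'
required.2 = 'DB2PROD'
required.3 = 'MQPROD'

active = activeTasks()            /* blank-delimited list of tasks   */

do i = 1 to required.0
  if wordpos(required.i, active) = 0 then do
    say required.i 'is not active'
    say 'Would issue operator command: S' required.i
  end
end
exit

activeTasks: procedure
  /* Stand-in for querying the system (or the product's interfaces)  */
  /* for the active started tasks; hard-coded here for illustration. */
return 'CICSPROD MQPROD'

In a real product, both the query and the command would go through the product’s own interfaces, and the restart decision would more likely be expressed as a rule or policy than hard-coded in a program.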
But wait, there’s more! Much more than just data center and operating system automation is available. One major example is workload automation, in which applications’ batch jobs are coordinated and scheduled. And there are many more types beyond that.
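To give a feel for what workload automation does, here is a toy sketch in REXX that runs a hypothetical three-job sequence in dependency order: each job is submitted only once its predecessor has completed. The job names, and the assumption that every job completes successfully, are purely illustrative; real schedulers add calendars, resources, restart handling, and much more.

/* REXX: toy sketch of workload-automation-style job sequencing.    */
/* Each plan entry holds a job name and its predecessor (or '-').   */
plan.0 = 3
plan.1 = 'EXTRACT  -'         /* no predecessor                      */
plan.2 = 'TRANSFRM EXTRACT'   /* runs only after EXTRACT completes   */
plan.3 = 'REPORT   TRANSFRM'  /* runs only after TRANSFRM completes  */

done = ''                               /* names of completed jobs   */
do until words(done) = plan.0
  do i = 1 to plan.0
    parse var plan.i jobName pred .
    ready = (pred = '-' | wordpos(pred, done) > 0)
    if wordpos(jobName, done) = 0 & ready then do
      say 'Submitting' jobName          /* real code would submit JCL */
      done = done jobName               /* assume it completes OK     */
    end
  end
end
say 'All jobs complete:' strip(done)

A real workload automation product tracks far more than simple predecessor relationships, but the core idea is the same: express the dependencies once, and let the scheduler decide what can run and when.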
How Would You Recommend Learning to Use Automation?
One great way to learn more about automation is by reading the documentation and viewing videos about the automation solutions offered by mainframe vendors, and by attending sessions at conferences such as SHARE, where you can meet and be mentored by the experts. As an illustration, here are some sample workload automation vendors and products with informative links:
If Anyone Would Like a Job Where They Regularly Use Automation, What Would You Recommend?
The very best way to get to know automation is to become involved in a project to automate something in your environment, ideally by borrowing and modifying automation code that does something similar but different. But why wait? If you have access to a sandbox system (e.g., a test LPAR), look at what automation is there, and (with permission, of course) copy, modify, and test your own ideas.
One excellent way to get started in all such matters is via the Open Mainframe Project, including excellent samples and examples from the CBT Tape.
Before you know it, you too can be an automation maven!
Check out these SHARE'd Knowledge members-only offerings on mainframe automation, such as "Zowe: The First Five Years."
Reg Harbeck is the Chief Strategist at Mainframe Analytics, with a B.Sc. in Computer Science and an M.A. in Interdisciplinary Humanities (focused on the humanity of the IBM mainframe). He has worked with operating systems, networks, security and applications on mainframes, UNIX, Linux, Windows and other platforms. He has also traveled to every continent where there are mainframes and met with and presented to IT management and technical audiences, including at SHARE, Gartner, IBM zSeries, CMG, GSE, CA World and ManageTech user conferences. He has had many roles at SHARE, from speaker and volunteer to the SHARE Board of Directors. He has published many articles and blog entries and podcasts (available online) and taught many mainframe courses. Since 2020, Reg has also been recognized as an IBM Champion for Z.
Read SHARE's Mainframe Intro Series articles: