Artificial intelligence is having a dramatic effect on the amount of data the world needs to store.
In fact, consumers and businesses are expected to generate twice as much data in the next five years as they did over the last 10 years, according to a JLL Technologies report. AI alone is expected to increase data center storage from 10 zettabytes (ZB) in 2023 to over 21 ZB by 2027, a compound annual growth rate of 18.5%.
That’s a lot of storage. Put simply, big data shows no signs of slowing, and AI is only making it bigger. What’s an enterprise with a hybrid IT infrastructure to do?
Obviously, organizations must increase their mainframe data storage capacity. But that simple answer raises a whole slew of challenging questions. For example, how much mainframe data storage will your company require? Where should you store it? How much AI development is your organization undertaking? How long will you need to store the data it’s generating? And how might third-party lock-in affect your data access and financial liability?
These are all serious questions worth considering. But before you begin to answer them, it’s important to review and update your data management and lifecycle policies, both internal and external. I know that sounds like a lot of fun. But in our experience, companies that fail to complete this important first step create a lot more headaches for themselves later on and complicate their ability to answer the above questions with confidence.
Furthermore, a well-defined data governance and compliance policy plays a huge role in your overall mainframe data management approach. Which users should access which data? What level of read and write rights should each user have? As you can see, AI is not only creating the need for additional data storage, it is also adding complexity and new questions for database administrators to tackle. And the models themselves will require significantly more data for training.
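To make those governance questions concrete, here is a minimal sketch of what it looks like to write the access rules down as data and check them. The role names, dataset names, and permission levels are hypothetical, purely for illustration; a real mainframe shop would enforce this in its existing security manager (RACF, ACF2, or Top Secret) rather than in application code.

```python
# Hypothetical role-based data access policy (illustrative only).
# Roles, dataset names, and permissions below are made-up examples.
POLICY = {
    "data_scientist": {"AI.TRAINING.DATA": {"read"}},
    "dba":            {"AI.TRAINING.DATA": {"read", "write"},
                       "PROD.CUSTOMER.DB": {"read", "write"}},
    "auditor":        {"PROD.CUSTOMER.DB": {"read"}},
}

def is_allowed(role: str, dataset: str, action: str) -> bool:
    """Return True if the role holds the requested permission on the dataset."""
    return action in POLICY.get(role, {}).get(dataset, set())

# Example checks
print(is_allowed("data_scientist", "AI.TRAINING.DATA", "write"))  # False: read-only role
print(is_allowed("dba", "PROD.CUSTOMER.DB", "write"))             # True
```

The value isn’t the code itself; it’s the exercise. Writing the rules down forces the who, what, and how questions to be answered before AI workloads start generating data.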
Let’s assume, then, that your organization has an updated, AI-enabled, and future-ready policy. Then you can start examining how much AI your organization is planning to train and deploy, how your customers will use it, and how it will affect the quantity, location, access, and cost of the storage you’ll need. That could be an internal volume of new data storage, or perhaps an outside vendor will store it for you. If so, will you be locked into paying for that storage service (e.g., AWS, GCP, or Azure)?
If the data is stored internally (on-prem), what retention period is cost effective while still meeting performance requirements for access time? Should it live in a private cloud? On virtual tape? Online, all on the mainframe? How will your hybrid IT environment handle this new storage? Finally, how long does this AI-generated data need to be kept for compliance purposes? Your company might benefit from keeping it indefinitely for trending and historical analysis, but doing so introduces additional legal and compliance risks, and those risks vary widely by industry and country.
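One way to reason about those trade-offs is to capture the tiering and retention rules as data, so they can be reviewed alongside the governance policy. The sketch below is illustrative only; the data classes, tier names, retention periods, and age thresholds are placeholder assumptions, not recommendations for any particular industry or regulation.

```python
from datetime import date

# Hypothetical lifecycle rules (placeholder values, not recommendations).
# Each data class maps to: days on primary disk, days on virtual tape,
# and total retention before the data is eligible for deletion.
LIFECYCLE = {
    "ai_training_output": {"primary_days": 90, "vtape_days": 365, "retain_days": 730},
    "fraud_model_scores": {"primary_days": 30, "vtape_days": 180, "retain_days": 2555},
}

def placement(data_class: str, created: date, today: date) -> str:
    """Decide which tier a dataset of the given class and age should occupy."""
    rules = LIFECYCLE[data_class]
    age = (today - created).days
    if age <= rules["primary_days"]:
        return "primary_disk"          # fast access for active workloads
    if age <= rules["primary_days"] + rules["vtape_days"]:
        return "virtual_tape"          # cheaper, slower tier
    if age <= rules["retain_days"]:
        return "archive"               # retained for compliance only
    return "eligible_for_deletion"

print(placement("ai_training_output", created=date(2024, 1, 15), today=date(2025, 1, 15)))
# -> "virtual_tape" under these placeholder thresholds
```

Whatever the actual numbers turn out to be, making them explicit lets storage, legal, and compliance teams argue about the same concrete rules instead of vague intentions.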
For example, credit card companies are already using AI to identify and confirm fraud patterns. But a false positive is an expensive mistake for them to make. So, credit card issuers are increasingly using AI to leverage the growing amount of data on their mainframes to create a competitive advantage while protecting their cardmembers.
Regardless of how you address the questions above, they should be discussed and planned with your teams in the coming months. At Broadcom, we partner with our customers on these important topics and help implement strategies for modern data management challenges. Our customers are preparing for AI and new volumes of data by implementing new flexible storage technologies as well as modern output management approaches. Together, we help them ease some of the burden that AI is creating with mainframe data storage and management.
Broadcom is committed to evolving the core technology that powers the world, ensuring that the mainframe remains an integral part of every enterprise architecture. Our mission is to seamlessly integrate the mainframe into modern IT environments, extending its value and enabling customers to support next-generation AI workloads.