Machine learning is a hot topic for businesses that want to offer their customers a unique, personalized experience. To create these interactions in real time, businesses need to access large and diverse data sets, which is why many of them are now looking for ways to bring machine learning capabilities to their biggest source of transactional and customer data: the mainframe.
Dr. Avijit Chatterjee, Chief Analytics Officer at IBM, spoke about this trend in a recent SHARE presentation. He described how expectations for customer experience have changed over the decades. Businesses have shifted away from the one-size-fits-all mass marketing approach, as well as the era of demographic customer “segments,” toward an entirely new level of personalization.
Businesses achieve this with machine learning solutions built on the mainframe, Chatterjee said, because those solutions let them take customer-oriented actions based on the most relevant and up-to-date information they have – as fast as possible. Customers don’t want to wait days or even hours to be presented with a new offer or solution that meets their needs. Businesses get better results when these interactions happen instantly, while consumers are at the peak of their interest.
“Business data resides on the mainframe,” he said. “Instead of copying it to a data lake or external source, you need to do analysis closer to where the data is so that you can make real-time decisions based on real-time data, not stale information.”
IBM has developed a number of solutions to enable hyper-personalized marketing and customer service. Machine learning on the z/OS platform takes advantage of what Chatterjee called “data gravity” – the notion that analysis should reside close to the data source. Keeping analytics that close to the data allows businesses to personalize every interaction with their customers, and to do it quickly.
IBM’s Db2 Analytics Accelerator is also important to this process because it provides an architecture for keeping the transactional and analytical copies of the data in sync. In other words, it keeps the data used for analysis and for customer-facing decisions fresh, reflecting the most recent and relevant information stored in the database of record on the mainframe.
Chatterjee described a multi-stage machine learning lifecycle. Everything starts with transactional data that’s ingested and combined with more data from external sources, such as customer support interactions or even a customer’s social media comments. Businesses then create machine learning models for the specific business problem they’re trying to solve. (Examples include lowering customer churn, eliminating credit card fraud, or defining the “next best action” for a customer to take.) Finally, those models can be deployed as RESTful APIs, allowing business applications, like mobile banking or a website chatbot, to tap into the model during a customer interaction.
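To make that lifecycle concrete, here is a minimal sketch in Python of the last two stages – training a churn model and letting an application score a live customer through a REST call. The feature names, payload shape, and endpoint URL are illustrative assumptions, not the interface of any specific IBM product.

```python
# Minimal sketch of the lifecycle's final stages: train a churn model on
# historical data, then let a business application score a live customer
# through a REST call. All field names and the endpoint URL are hypothetical.
import requests
from sklearn.ensemble import GradientBoostingClassifier

# 1. Train on historical transactional + external data (toy rows here).
#    Columns: months_active, monthly_spend, complained_recently
X_train = [[12, 4500.0, 1], [48, 300.0, 0], [6, 5200.0, 1], [36, 800.0, 0]]
y_train = [1, 0, 1, 0]  # 1 = churned, 0 = retained
model = GradientBoostingClassifier().fit(X_train, y_train)

# 2. Score locally (this is what the deployed model does server-side).
print(model.predict_proba([[12, 4500.0, 1]])[0][1])  # churn probability

# 3. What a mobile app or chatbot would do at interaction time: call the
#    deployed model's REST scoring endpoint rather than embedding the model.
def score_customer(features, url="https://ml-host.example/v1/churn/score"):
    """POST one customer's features to a (hypothetical) scoring endpoint."""
    payload = {"fields": ["months_active", "monthly_spend", "complained_recently"],
               "values": [features]}
    return requests.post(url, json=payload, timeout=2).json()
```

In practice the application would call whatever scoring endpoint the deployment platform publishes; the point is simply that the model is invoked at the moment of the customer interaction.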
There are several popular platforms for building machine learning models, like the highly visual SPSS Modeler or the popular RStudio. Machine learning on z/OS includes support for models created with either platform through an interchange standard called Predictive Model Markup Language (PMML). The idea is to give companies the option to continue using their existing modeling tools rather than starting from scratch with open source.
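The tools named in the presentation (SPSS Modeler, RStudio) aren’t shown here, but the interchange idea can be sketched with open-source Python libraries: export a trained model to PMML with sklearn2pmml, then reload and score it with a PMML-aware engine such as pypmml. Both libraries are stand-ins chosen for illustration – not the products in the article – and both rely on a local Java runtime.

```python
# Illustrative PMML round trip: serialize a trained model to the PMML
# interchange format, then reload and score it with a separate engine.
# sklearn2pmml and pypmml stand in for the export and scoring tools named
# in the article; both require a Java runtime to be installed.
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline
from pypmml import Model

# Export: fit a simple model and write it out as PMML.
X = [[12, 4500.0], [48, 300.0], [6, 5200.0], [36, 800.0]]
y = [1, 0, 1, 0]
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=2))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "churn_model.pmml")

# Import: a different PMML-aware runtime loads the same model and scores it.
# (Unnamed inputs are exported with default field names x1, x2, ...)
reloaded = Model.fromFile("churn_model.pmml")
print(reloaded.predict({"x1": 12, "x2": 4500.0}))
```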
The models may be working behind the scenes, but to the customer, the result is an improved service experience. Chatterjee gave the example of a credit card company that used machine learning on z/OS to identify churn risk. Churn is a major problem for financial services companies today because customers have become savvier about exploiting generous introductory deals (“Get 50,000 airline miles just for signing up!”) to buy what they want at a steep discount, then canceling the card if they find better perks elsewhere.
The credit card company in Chatterjee’s example identified a high-earning, high-spending customer with multiple accounts at the bank. The model determined that the customer was still at high risk of churn because he had recently tweeted publicly about how much he disliked high foreign exchange fees.
All customer interactions were handled through a simple chatbot on his credit card account page, but behind the scenes, a machine learning model had been built to leverage transactional data from the mainframe, behavioral data, and other collected information that painted a picture of the customer’s personality, personal values, and priorities. The chatbot offered a great discount on a unique travel package that fit the customer’s desire to be adventurous and spiritual – plus the company waived all foreign transaction fees on the trip, which resolved the exact problem he disliked the most.
Importantly, machine learning models aren’t set-it-and-forget-it, Chatterjee said. Businesses need to constantly test and retrain models based on new data. But with machine learning tools built for the mainframe, companies in industries like financial services, government, retail, and healthcare are finding ways to better understand, and then cater to, their customers.
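As a rough sketch of that test-and-retrain loop – with a made-up accuracy threshold and in-memory lists standing in for real data feeds – a monitoring job might look something like this:

```python
# Sketch of a "test and retrain" check: evaluate the deployed model on the
# newest labeled outcomes and rebuild it when performance drifts below a
# threshold. The 0.75 AUC floor and list-based data are illustrative only.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def retrain_if_stale(model, X_new, y_new, X_hist, y_hist, min_auc=0.75):
    """Return a (possibly retrained) model and its AUC on the fresh data."""
    auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    if auc < min_auc:  # model has drifted: refit on history plus new data
        model = GradientBoostingClassifier().fit(X_hist + X_new, y_hist + y_new)
    return model, auc
```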
Check out the SHARE Communities for more resources on important issues in mainframe, including technology, training, and industry trends.