Network segmentation is a crucial concept in computer networking that involves dividing a larger network into smaller, isolated segments or subnetworks.
This process, also known as network isolation or segregation, aims to enhance both security and performance within a network infrastructure. Segmentation can occur at the physical, virtual, or logical level. Most organizations today prefer segmentation at the virtual and logical level because it accomplishes the same goal as physical isolation without the significant overhead of purchasing separate physical lines and switches. It is akin to the difference between running an IT department entirely on dedicated physical hardware and running operating systems and applications on virtual machines, whether loaded onto physically co-located hardware or onto rented hardware in the cloud (IaaS).
There really is no need to purchase new hardware and physically isolate a network when virtual isolation using existing hardware provides the same capabilities with less overhead. Logical controls can bring additional complementary or more granular isolation.
What is the difference, then, between macro-segmentation and micro-segmentation? It’s a matter of granularity. Macro-segmentation provides high-level control over traffic moving between areas of an organization’s network; micro-segmentation offers more granular network visibility and the ability to effectively enforce zero-trust access controls.
Macro-segmentation breaks the network into zones that typically consist of multiple devices and the applications they host. Micro-segmentation takes this a step further, placing each device or even each application within its own segment.
The Need for Macro-segmentation
Macro-segmentation benefits every organization: it allows better control over traffic flows and isolates network issues, reducing their impact on the network as a whole. It can also help prevent unauthorized access to sensitive information by keeping that information on a separate segment, isolated from general network traffic.
Macro-segmentation has been around for years and is often used for separation of business functions and in multitenancy situations. This type of segmentation can also help to limit the spread of potential cyber threats. If a threat infiltrates one segment, it’s contained within that segment and cannot spread to other segments.
Macro-segmentation can also boost network performance. By breaking a larger network into smaller segments, organizations can reduce congestion and improve the speed of network connections within each segment.
The Need for Micro-segmentation
With micro-segmentation, access to individual workloads or applications is controlled at each endpoint in the network using host-based filtering technologies. Access to a particular network zone is therefore not sufficient to reach a specific workload or application on a specific host; the user must be explicitly permitted to access the application. This granular approach is not only desirable but is now required by regulations such as PCI-DSS 4.0, and it is advantageous for zero-trust frameworks and models. Without micro-segmentation, the entire network is in scope for purposes of evaluation, as are all applications within the network. In other words, the more isolated your applications that generate PCI-DSS 4.0 network traffic are, the fewer portions of the network and endpoints will need evaluating.
This additional level of granularity requires more sophisticated configuration infrastructure and administrative processes to coordinate the micro-segmentation rules across the various hosts; however, the benefits far outweigh the added complexity, offering unparalleled control, enhanced security, and reduced attack surface for critical systems.
How Hard Is Micro-segmentation, and Who Is Responsible?
With the right training and tooling, such as the z/OS Communications Server Policy Agent (pagent), packet filtering techniques and rules, and the z/OSMF Network Configuration Assistant (NCA), implementing micro-segmentation is achievable by your network staff (both z/OS and distributed teams) working together.
Responsibility for this segmentation belongs to everyone: systems programmers, application owners, and security architects and engineers, with management leading the way and making it a priority.
If your organization is subject to PCI-DSS 4.0 requirements or follows a zero-trust model, this article should give you everything you need to move forward. And don’t forget, in addition to the IBM technologies described in some detail below, various external tools and add-ons are available that can make your life easier. For instance, Vertali’s zTrust software enables detailed network discovery, so you can better understand all traffic in and out of the network, together with micro-segmentation in support of industry standards, compliance, and security policies.
z/OS and IBM Z Technologies for Network Segmentation
If we look to the distributed environment and the world of containers, it’s clear that micro-segmentation solutions are typically based on technologies that provide control over traffic at a very granular level (down to the specific application) and are enforced at the endpoints. Think host-based firewall rules and network access controls. How can z/OS and IBM Z play in this space?
There are plenty of technologies in both IBM Z hardware (VLANs, multiple NICs, memory-based network fabrics, etc.) and the z/OS operating system (virtual and logical controls) that can play a role in your enterprise network segmentation strategy:
IP packet filtering by z/OS Communications Server
The most directly applicable technology for micro-segmentation is the IP packet filtering provided by z/OS Communications Server. Think of z/OS packet filters as a personal firewall of sorts for z/OS. With z/OS IP packet filters, you can define rules to control which IP packets are allowed to proceed to or from the local z/OS TCP/IP stack and which are to be denied (dropped). These filters can be defined to cover a broad range of traffic (at the network or subnet level), a very narrow range (down to a single application), or anywhere in between.
IP packet filtering is a policy-based technology, which means the filter rules are processed by the z/OS Communications Server Policy Agent (pagent). Packet filter rules are defined in pagent’s IPSecurity policy file. Once pagent reads and successfully parses the rules in that file, it installs the rules into the z/OS TCP/IP stack, which then enforces those rules as IP packets attempt to enter or exit the stack. Once the rules are installed into the TCP/IP stack, pagent plays no further role until the policy file needs to be updated or refreshed.
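When the policy file changes, the rules can be reloaded without recycling the stack. The following operator and z/OS UNIX commands illustrate this, assuming the Policy Agent runs as a started task named PAGENT (your procedure name may differ):

```
F PAGENT,REFRESH     Reread all policy files and reinstall the policies
F PAGENT,UPDATE      Install only the policies that have changed
ipsec -f display     (z/OS UNIX shell) Display the filter rules currently
                     active in the TCP/IP stack
```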
There are two approaches to creating the IPSecurity policy file.
- z/OS network security administrators create and manage IP packet filter rules using the z/OSMF Network Configuration Assistant (NCA). Once a satisfactory set of filter rules is defined, the NCA generates an IPSecurity policy file in the format that the z/OS Policy Agent (pagent) understands.
- Packet filter rules can also be created by coding Policy Agent IpFilterRule statements (and all the related supporting statements) directly in the IPSecurity policy file. The policy statement syntax is fully documented in the z/OS Communications Server IP Configuration Reference.
If you want to use an approach that includes both NCA-generated policy rules and rules created outside of the NCA, you need to merge the two sets of rules together into a single policy file with one IpFilterPolicy statement that references all the filter rules.
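As a rough illustration of the hand-coded approach, the sketch below shows the shape of an IPSecurity policy file with a single filter rule. All names, addresses, and port numbers here are hypothetical; consult the IP Configuration Reference for the full statement syntax:

```
# Illustrative pagent IPSecurity policy fragment (names and
# addresses are examples only)

IpFilterPolicy
{
  FilterLogging              on

  IpFilterRule               PermitPayServer
  {
    # Allow only the client subnet to reach the payment
    # application's dedicated IP address
    IpSourceAddr             10.1.0.0/16
    IpDestAddr               10.1.1.10
    IpService
    {
      Protocol               tcp
      SourcePortRange        1024 65535
      DestinationPortRange   4443
      Direction              bidirectional InboundConnect
      Routing                local
    }
    IpGenericFilterActionRef permit-log
  }
}

# Referenced action: permit the packet and log the decision
IpGenericFilterAction        permit-log
{
  IpFilterAction             permit
  IpFilterLogging            yes
}
```

With IP security enabled on the stack, traffic that matches no permit rule is denied by default, which is exactly the default-deny posture micro-segmentation calls for.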
Given the ability to define extremely granular filters, z/OS IP packet filtering can be useful in implementing micro-segmentation on z/OS systems, so long as the number of filter rules remains manageable. Design your z/OS IP packet filter rules to balance granularity against the sheer number of rules. Keep in mind, too, that a single z/OS TCP/IP stack might support dozens or even hundreds of unique IP addresses (addresses assigned to real IP interfaces as well as virtual IP addresses), so a filter policy for a z/OS TCP/IP stack will often require more rules than one for a typical distributed system such as Linux or UNIX.
While z/OS IP packet filtering is the most obvious fit, other hardware and software technologies can contribute to your enterprise network segmentation strategy – whether you’ve implemented macro-segmentation, micro-segmentation, or a combination.
NETACCESS controls
z/OS TCP/IP network access (NETACCESS) controls, provided through the SERVAUTH SAF class and the TCP/IP profile, let you assign resource names to different zones of your network and then control which local z/OS user IDs are permitted to access those zones. These zones can be as wide as a network or subnet and as granular as an individual IP address, which is helpful from a macro-segmentation perspective.
While IP host and subnet-level controls are typically not appropriate for micro-segmentation, the ability to control a specific user ID’s access to different network zones can be helpful in this realm.
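A minimal sketch of how the pieces fit together follows; the zone names, addresses, system name (SYS1), stack name (TCPIP), and user ID are all hypothetical. Zones are defined in the TCP/IP profile, and access is then granted per user ID through SAF resources in the SERVAUTH class:

```
; PROFILE.TCPIP: map parts of the network to named security zones
NETACCESS INBOUND OUTBOUND
   10.1.0.0/16       PAYZONE      ; cardholder-data network
   192.168.5.0/24    ADMZONE      ; administrative subnet
   DEFAULT           WORLD        ; everything else
ENDNETACCESS
```

```
/* RACF: permit a specific user ID to the PAYZONE zone.            */
/* Resource format: EZB.NETACCESS.sysname.tcpname.zonename         */
RDEFINE SERVAUTH EZB.NETACCESS.SYS1.TCPIP.PAYZONE UACC(NONE)
PERMIT EZB.NETACCESS.SYS1.TCPIP.PAYZONE CLASS(SERVAUTH) +
       ID(PAYUSER) ACCESS(READ)
SETROPTS RACLIST(SERVAUTH) REFRESH
```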
z/OS TCP/IP subplexing
Subplexing subdivides a Parallel Sysplex into multiple "subplexes" that share sysplex resources only with other members of the same subplex. For example, application and DVIPA movement is restricted to within the subplex. Subplexing contributes to very strong macro-segmentation, especially when combined with other segmentation technologies such as VLANs or physical separation.
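Subplex membership is established through XCF group IDs in the VTAM start options and the TCP/IP profile. A hedged sketch, with group and VLAN IDs chosen arbitrarily for illustration:

```
VTAM start options (ATCSTRxx): place this VTAM node in subplex 11
   XCFGRPID=11

PROFILE.TCPIP: place this stack in TCP/IP subplex 11 and isolate its
HiperSockets (iQDIO) traffic on its own VLAN
   GLOBALCONFIG XCFGRPID 11 IQDVLANID 11
```

Stacks and VTAMs with matching group IDs form a subplex; members of different subplexes do not exchange sysplex-wide TCP/IP resources even though they share the same sysplex.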
Virtualization of IP addresses
z/OS provides rich virtualization of IP addresses that makes it easy to isolate specific workloads, or even individual applications, to their own dedicated IP addresses. While IP address-level controls typically serve macro-segmentation, the ability to define such limited-scope IP addresses with strong resiliency and scalability characteristics also makes them an effective tool for micro-segmentation.
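Dynamic VIPAs (DVIPAs) are defined in the VIPADYNAMIC block of the TCP/IP profile. The fragment below is illustrative only; the addresses are examples:

```
; PROFILE.TCPIP: dynamic virtual IP addresses
VIPADYNAMIC
   ; Stack-managed DVIPA that can move to a backup stack in the
   ; sysplex if this stack fails
   VIPADEFINE 255.255.255.255 10.1.1.20

   ; Application-specific DVIPA, activated when an application
   ; binds to the address; it follows the application wherever
   ; the application runs
   VIPARANGE DEFINE 255.255.255.255 10.1.1.10
ENDVIPADYNAMIC
```

Giving a single application its own DVIPA (as in the VIPARANGE example) is what makes per-application filter rules practical: the address itself identifies the workload.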
Conclusion
Network segmentation is a necessity in today’s cyber security threat landscape. Micro-segmentation is the next necessary step: it ensures that data within the network is isolated as tightly as possible to minimize the impact of a breach, that data related by workload has the necessary security policies applied, and that the scope of evaluation for standards like PCI-DSS is reduced. Remember: if in doubt, encrypt and segment.
Chris Meyer, an IBM senior technical staff member, is a network security architect for IBM’s z/OS operating system. His career has spanned over 40 years in IBM software development, design, architecture, test and management, primarily on mainframe operating systems. Meyer is a regular speaker at various mainframe conferences.
Mark Wilson is technical director of Vertali. A thought leader in mainframe technology and security, known across the industry, he has more than 40 years’ experience in z/OS and IBM systems operations. Wilson regularly speaks at events including SHARE and is the GSE UK region manager.
Brian Marshall is a technical strategist at Vanguard Integrity Professionals with 30 years’ experience in the z/OS space. Today he is responsible for strategy and other technical work at Vanguard, but he prefers to be locked in an office with no windows, writing code. He is a known leader in the z/OS security space and speaks at conferences annually.