Another SHARE conference has come and gone, and we have much to report on where mainframe security is headed. Each year, SHARE demonstrates that the mainframe is not only here to stay; it's regaining its reputation as the king of big data in an IT landscape of massive complexity and high data risk.
Every day, after you get your first cup of coffee, do you scan the mainframe security system violation and logging reports looking for abnormal behavior, strange activity, and the like? Given the size of these reports, can you really do a thorough job of it? And how much time elapses between the moment any activity occurs and the moment you finally review it?
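To make that daily review concrete, here is a minimal sketch of automating the first pass over such a report. The record layout (timestamp, user ID, resource, event) is an assumption for illustration only, not an actual ACF2 or RACF report format; your site would adapt the parsing to whatever export it produces.

```python
# Illustrative sketch only: scan a (hypothetical) flat-file export of
# security violation/logging records for two simple anomalies --
# after-hours activity and users with an unusually high event count.
# The comma-separated layout "timestamp,userid,resource,event" is an
# assumption, not a real ACF2/RACF report format.
from collections import Counter
from datetime import datetime

def scan_violations(lines, start_hour=7, end_hour=19, threshold=5):
    after_hours = []          # events outside normal working hours
    per_user = Counter()      # total events per user ID
    for line in lines:
        ts, userid, resource, event = line.strip().split(",")
        when = datetime.fromisoformat(ts)
        per_user[userid] += 1
        if not (start_hour <= when.hour < end_hour):
            after_hours.append((userid, resource, ts))
    # Users generating an unusual volume of loggings/violations
    noisy = sorted(u for u, n in per_user.items() if n >= threshold)
    return after_hours, noisy
```

The point is not the ten lines of Python; it is that a scripted first pass runs the morning the records are cut, instead of whenever a human gets through the stack.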
When I first developed these reports for ACF2 (dataset and resource), we had systems that ran at a rate of a few MIPS, maybe 10-20. The current IBM z13™ will process 110,000 MIPS. The volume of processing has grown exponentially, and so has the volume of security incidents, either loggings or violations, produced each day.

Remember that the violations and loggings are there to highlight activity that may affect sensitive data, whether the organization's sensitive data or the z/OS system itself. If the z/OS system is modified illicitly, that modification can become the vehicle for accessing or altering sensitive data by bypassing the z/OS security system controls. In case you didn't realize this: if someone can modify the z/OS system and its libraries by doing something as simple as link-editing a program with the Authorized Program attribute and storing it in an authorized library, they can then execute that program and, with relatively simple code, use that authorization to bypass whatever controls you have in place in ACF2, RACF or Top Secret.
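One practical defense against exactly that scenario is to audit the APF-authorized library list against an approved baseline. Below is a minimal sketch of the comparison step; how you capture the current list (for example, from a console display of the APF list) and what format it arrives in are site-specific assumptions, not part of the example.

```python
# Illustrative sketch only: compare the system's current list of
# APF-authorized libraries (captured however your site exports it;
# the input format is an assumption) against an approved baseline.
# Any library added without review is a candidate vehicle for the
# authorized-program bypass described above.
def audit_apf_list(current, baseline):
    current_set, baseline_set = set(current), set(baseline)
    unexpected = sorted(current_set - baseline_set)  # authorized, but never approved
    missing = sorted(baseline_set - current_set)     # approved, but no longer present
    return unexpected, missing
```

A simple set difference like this, run on a schedule, turns "someone slipped a load library into the APF list" from an invisible event into a flagged exception.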
This segment of my series was authored by Peter Hager and Earl Rasmussen of Net’Q (www.net-q.com). I thank them for their input since the network connected to our mainframes must also be secured.
In today’s world we are all connected. There was a time when mainframe access was confined to the datacenter. Those days are long gone.
Now that you have eliminated all the z/OS system integrity vulnerabilities you could find, re-evaluated your user validation to minimize the possibility of credentials being stolen, found all your sensitive data, eliminated unneeded copies, implemented a test data management solution, and validated the users who have access to the remaining data and transactions, it is time to evaluate how accesses by authorized users are being monitored.
Remember, there are two different scenarios that can harm your organization. One is the obvious one: a trusted employee goes rogue, obtains sensitive data and uses it in a manner that profits him, harms the organization, or both. Edward Snowden of the NSA is the poster child for this type of calamity. The other is that a loyal employee has their identity stolen and the hacker misuses it. Note that even though you have gone through the steps of securing your z/OS system, nothing is perfect. There are still vulnerabilities in network configuration and usage that allow user IDs and passwords to be passed in the clear, people do silly things like writing their passwords on a Post-it note, and someone can simply look over a valid user's shoulder.
Information and innovation are the most valuable commodities in our increasingly digital world. Thanks to the IT revolution, we now enjoy virtually instant categorization and access to key enterprise data assets. The downside? Many organizations have consolidated their most sensitive Intellectual Property (IP) and consumer identity data in one very predictable spot – mainframes. There can be no doubt where internal and nation-state cyber-thieves have focused their attention.
The innovative technology that brought us here is the same technology burdening the dynamic world of IT with too much complexity. IT security visibility is blinded and made lethargic by the mutually repellent worlds of distributed and mainframe networks. And because we've naturally assumed our mainframes are secure, we've taken for granted how their purpose and relevance have changed over time.
That’s the thing about myths: they’re only partly true.
Yes, File Integrity Monitoring (FIM) has been part of the distributed computing landscape for a few years now. And yes, real-time enterprise security monitoring is harder to accomplish in a mainframe environment. But as attacks become more sophisticated, FIM needs to be a key component of the entire network, including your mainframe.
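The core of any FIM product, on distributed systems or on the mainframe, is a cryptographic baseline: record a hash of every monitored file, then flag anything whose content later differs. Here is a minimal sketch of that idea; treating z/OS library members as files in a staging directory is an assumption made for illustration, not a statement about how any particular product works.

```python
# Illustrative sketch only: the essence of file integrity monitoring.
# Record SHA-256 digests of a set of files, then report any file whose
# content has changed (or disappeared) since the baseline was taken.
# Applying this to z/OS load-library members extracted to a staging
# area is an assumption for this example.
import hashlib
from pathlib import Path

def build_baseline(paths):
    """Map each path to the SHA-256 digest of its current contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_changes(baseline):
    """Return the paths whose contents no longer match the baseline."""
    changed = []
    for path, digest in baseline.items():
        p = Path(path)
        now = hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None
        if now != digest:
            changed.append(path)
    return changed
```

Real products add tamper-resistant storage for the baseline, real-time hooks instead of polling, and alert routing, but the detection principle is this simple comparison.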
There’s a well-known software vendor whose antivirus “sandbox” is used to detonate viruses, much as a police bomb squad would do with a suspicious package at a crime scene.
Now that we’ve verified that your system has no known integrity vulnerabilities, validated your users in a manner that minimizes the chance of someone stealing their identity, and located all the sensitive data on your systems, remediating the copies that should not have been there in the first place, it is time to focus on who has access to your organization’s sensitive data.
Not so fast… what about mainframe file integrity monitoring (MFIM)?
File Integrity Monitoring (FIM) has been part of the distributed landscape for years, generally as a component of an enterprise anti-malware strategy. But as attacks become more sophisticated and nearly undetectable, FIM needs to be a key component across the entire network, mainframe included.
Mainframe Security Part 1: System Integrity
I’m often asked what installations can do to maximize their data security in an IBM mainframe environment. For those who do not know me, I was one of the people who started the data security initiative in the mainframe environment when I was asked to form the SHARE Security Project in 1972. We worked together to create a series of requirements to be presented to IBM, and I did that in 1974. For more details on this, see www.share-sec.com/history.html. When IBM delivered RACF in 1976, it did not meet two of the crucial requirements: protection by default and what we called algorithmic grouping of resources.