Another SHARE conference has come and gone, and we have much to report on where mainframe security is headed. Each year, SHARE demonstrates that the mainframe is not only here to stay, it’s regaining its reputation as the king of big data in an IT landscape of massive complexity and high data risk.
Information and innovation are the most valuable commodities in our increasingly digital world. Thanks to the IT revolution, we now enjoy virtually instant categorization and access to key enterprise data assets. The downside? Many organizations have consolidated their most sensitive Intellectual Property (IP) and consumer identity data in one very predictable spot – mainframes. There can be no doubt where internal and nation-state cyber-thieves have focused their attention.
The innovative technology that brought us here is the same technology burdening the dynamic world of IT with too much complexity. IT security visibility is blinded and slowed by the mutually repellent worlds of distributed and mainframe networks. And because we’ve naturally assumed our mainframes are secure, we’ve taken for granted how their purpose and relevance have changed over time.
That’s the thing about myths: they’re only partly true.
Yes, File Integrity Monitoring (FIM) has been part of the distributed computing landscape for a few years now. And yes, real-time enterprise security monitoring is harder to accomplish in a mainframe environment. But as attacks become more sophisticated, FIM needs to be a key component of the entire network, including your mainframe.
There’s a well-known software vendor with an antivirus “sandbox” used to detonate suspected malware, much as a police bomb squad would handle a suspicious package at a crime scene.
Not so fast… what about MFIM, File Integrity Monitoring for the mainframe?
File Integrity Monitoring (FIM) has been part of the distributed landscape for years, generally as a component of an enterprise anti-malware strategy. But as attacks become more sophisticated and nearly undetectable, FIM needs to be a key component across the entire network, mainframe included.
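The core mechanism behind FIM is simple enough to sketch: record a cryptographic fingerprint of each critical file, then periodically re-hash and compare. Here is a minimal illustration in Python (the function names and alert format are my own; a real mainframe FIM product would hook dataset-level events rather than walking POSIX files):

```python
import hashlib
import os

def fingerprint(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted fingerprint for each monitored file."""
    return {p: fingerprint(p) for p in paths}

def detect_changes(baseline):
    """Re-hash each file and report any drift from the baseline."""
    alerts = []
    for path, expected in baseline.items():
        if not os.path.exists(path):
            alerts.append((path, "missing"))
        elif fingerprint(path) != expected:
            alerts.append((path, "modified"))
    return alerts
```

The value is in the comparison loop running continuously and feeding alerts into the same console that watches the rest of the network, which is exactly what most mainframe shops are missing.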
Mainframe Security: Part 3 - Where is all your sensitive data?
One vulnerability I see a lot is copies of sensitive data outside the production environment. If disclosed, this data can harm the organization just as much as the production versions. Examples include Social Security numbers, medical diagnoses and treatments, credit information and, of course, credit card numbers, which should never be stored unencrypted in the first place. One example that comes to mind: an insurance company discovered a series of database query results, stored under an individual user’s high-level index, that correlated medical treatments with diagnoses and also contained the patients’ identification. When investigated, it turned out that an executive had asked the employee to do the analysis, but the employee never checked with the security staff on where and how to temporarily store the information, and never cleaned it up afterwards.
Mainframe Security Part 2: User Authentication
How can a system accurately determine whether access to data should be allowed when it is not certain who the user is? We saw this in the NSA–Edward Snowden case: he borrowed other administrators’ user IDs and passwords to gain access to data he was not authorized to see. People working together also sometimes share credentials for convenience. But what does that do for security and accountability? It destroys both. This is a critical issue for any user with access to some segment of an organization’s sensitive data, which these days is almost everyone.
I raised the idea of two-factor authentication in my 1974 papers, but the world was different then.
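Today, two-factor authentication is straightforward to implement. The time-based one-time passwords used by most authenticator apps follow RFC 6238 and can be sketched with nothing beyond the standard library; the Base32 secret in the test below is the RFC’s published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of complete time steps since the Unix epoch.
    now = at_time if at_time is not None else time.time()
    counter = int(now // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a per-user secret, a borrowed password alone, as in the case above, no longer opens the door.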
Mainframe Security Part 1: System Integrity
I’m often asked what installations can do to maximize their data security in an IBM mainframe environment. For those who do not know me, I was one of the people who started the data security initiative in the mainframe environment when I was asked to form the SHARE Security Project in 1972. We worked together to create a series of requirements to be presented to IBM, and I did that in 1974. For more details, see www.share-sec.com/history.html. When IBM delivered RACF in 1976, it did not meet two of the crucial requirements: protection by default and what we called algorithmic grouping of resources.
Your network is vulnerable because your log management practice fails to include real-time mainframe data.
The InfoSec World show is upon us. For those of you unfamiliar with InfoSec World, it is an educational conference organized by the MIS Training Institute, an international organization that specializes in audit and information security training. According to its website, www.misti.com, the institute has trained more than 200,000 IT professionals over the course of its existence.
Database Activity Monitoring (DAM) is defined by Gartner as “… tools that can be used to support the ability to identify and report on fraudulent, illegal or other undesirable behavior, with minimal impact on user operations and productivity(1)…” If you know what to look for in z/OS DB2 audit trails, you have an excellent window into your mainframe database security health.
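To make that concrete, here is a minimal sketch of the kind of rule a DAM tool applies, assuming a hypothetical, already-parsed export of audit records; the user IDs, table names and record layout below are invented for illustration, not an actual DB2 audit trace format:

```python
# Hypothetical record layout: (user_id, table, action, hour_of_day).
AUTHORIZED = {"PAYUSR1", "PAYUSR2"}          # invented IDs
SENSITIVE_TABLES = {"PAYROLL.EMP_PAY",        # invented tables
                    "CLAIMS.DIAGNOSIS"}

def flag_suspicious(records, business_hours=range(8, 18)):
    """Flag access to sensitive tables by the wrong people
    or at the wrong time of day."""
    alerts = []
    for user, table, action, hour in records:
        if table not in SENSITIVE_TABLES:
            continue
        if user not in AUTHORIZED:
            alerts.append((user, table, "unauthorized id"))
        elif hour not in business_hours:
            alerts.append((user, table, "after hours"))
    return alerts
```

Real DAM products apply hundreds of such rules against the live audit stream, but the principle is the same: know which objects matter, who should touch them, and when.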
It should come as no surprise that security information and event management, or SIEM, has been fueled by industry standards groups and government agencies. Leading the charge in how data security policies are drawn up are bodies and regulations with acronym-laden names like PCI SSC, FISMA, FERC, NERC, SOX, HIPAA and many others. The Payment Card Industry Security Standards Council, issuer of the PCI Data Security Standard (PCI DSS), was founded by payment card giants MasterCard, American Express, Discover and several others. In 2006 it issued requirements and offered certifications for merchants, vendors and security consulting companies with the intent to “mitigate data breaches and prevent payment cardholder data fraud.”