Another SHARE conference has come and gone, and we have much to report on where mainframe security is headed. Each year, SHARE demonstrates that the mainframe is not only here to stay, it’s regaining its reputation as the king of big data in an IT landscape of massive complexity and high data risk.
Information and innovation are the most valuable commodities in our increasingly digital world. Thanks to the IT revolution, we now enjoy virtually instant categorization and access to key enterprise data assets. The downside? Many organizations have consolidated their most sensitive Intellectual Property (IP) and consumer identity data in one very predictable spot – mainframes. There can be no doubt where internal and nation-state cyber-thieves have focused their attention.
The innovative technology that brought us here is the same technology burdening the dynamic world of IT with too much complexity. IT security visibility is blinded and slowed by the mutually repellent worlds of distributed and mainframe networks. And because we've naturally assumed our mainframes are secure, we've taken for granted how their purpose and relevance have changed over time.
Mainframe Security: Part 3 - Where is all your sensitive data?
One vulnerability I see a lot is copies of sensitive data outside of the production environment. This sensitive data, if disclosed, can harm the organization just as much as the production versions. Examples are Social Security Numbers, medical diagnoses or treatments, credit information, and, of course, credit card numbers, which should never be stored unencrypted in the first place. One example that comes to mind is an insurance company discovering a series of database query results, stored under an individual user’s high-level index, that correlated medical treatments with diagnoses but also contained the patients’ identification. When investigated, it turned out that the employee had been asked by an executive to do this analysis but never bothered checking with the security people on where and how to temporarily store this information, and never cleaned it up afterwards.
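To make the problem concrete, here is a minimal sketch (my own illustration, not any product's feature) of scanning text for common sensitive-data patterns such as the SSNs and card numbers mentioned above. The patterns are deliberately simplified; real discovery tools validate matches (for example, a Luhn check on card numbers) to cut false positives.

```python
import re

# Illustrative patterns only -- real scanners use validation (e.g. Luhn
# checks for card numbers) and many more formats than shown here.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_text(text):
    """Return the names of the sensitive-data patterns found in the text."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}
```

Running a scanner like this against user-level datasets outside production is one way to catch the kind of forgotten query output described in the anecdote.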
Mainframe Security Part 2: User Authentication
How can a system accurately determine whether access to data should be allowed when it is not certain who the user is? We saw this in the NSA - Edward Snowden case – he borrowed other administrators’ User IDs and passwords in order to gain access to data he was not authorized for. Also, people working together sometimes share this information for convenience. But what does that do for security and accountability? It destroys it. This is a critical situation for any user with access to some segment of an organization’s sensitive data – and these days, that means almost everyone.
I raised the idea of two-factor identification in my 1974 papers, but the world was different then.
Your network is vulnerable because your log management practice fails to include real-time mainframe data.
The InfoSec World show is upon us. For those of you unfamiliar with InfoSec World, it is an educational conference organized by the MIS Training Institute, an international organization that specializes in audit and information security training. According to its website, www.misti.com, the institute has trained more than 200,000 IT professionals over the course of its existence.
- FIM ensures file compliance by scanning files in configuration-specified directories and checking for unauthorized changes.
- FIM creates a baseline file configuration to be compared against any future configuration state. If there are any deviations from the baseline, an alert of a potential threat can be issued.
- Good FIM practice allows for archiving to compliance standards - PCI DSS, FISMA, SOX, HIPAA, NERC, GLBA, etc. - in the event you need the data for forensics.
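The baseline-and-compare cycle described above can be sketched in a few lines. This is a minimal illustration of the FIM technique, not any vendor's implementation: hash every file under a monitored directory, then diff the current state against the stored baseline to surface additions, deletions, and modifications.

```python
import hashlib
import os

def baseline(directory):
    """Record a SHA-256 hash for every file under the monitored directory."""
    hashes = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def detect_changes(directory, base):
    """Compare the current state against the baseline; report deviations."""
    current = baseline(directory)
    added = set(current) - set(base)
    removed = set(base) - set(current)
    modified = {p for p in set(base) & set(current) if base[p] != current[p]}
    return added, removed, modified
```

In practice the baseline would be stored securely (and archived per the compliance standards above) and each deviation would feed an alert into the SIEM rather than just being returned to the caller.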
Seems like every day we see news headlines about yet another cyber-breach. Government agencies, local municipalities, online gaming and social platforms, financial institutions, even high-school records have been exposed in recent attacks. Scour the web and you will be hard-pressed to find the percentage of breaches occurring on mainframe versus distributed systems. The data just doesn’t seem to be there. Mainframe gurus will say that it is rare for a mainframe to be compromised, but the reality is that the data to confirm or dismiss this is just too hard to come by. Unless you are an insider who knows the details, all that is known publicly is that there was a breach, the number of records compromised, and maybe the dollar impact.
Two weeks ago, I wrote that one obstacle to getting your mainframe to “speak” to your security information and event management (SIEM) console was that mainframe people and enterprise security people speak different languages. They both use the same word, “Syslog,” to mean two different things. SIEM people of course use the word Syslog – as they write it – to mean the RFC 3164 Syslog messages that are at the heart of SIEM processing. Mainframe people use the word SYSLOG – as they usually write it – to refer to a voluminous stream of messages which, for the most part, have little to do with enterprise IT security, log management or network availability. Why?
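To make the SIEM side of that vocabulary concrete, here is a minimal sketch of an RFC 3164 Syslog message – the `<PRI>TIMESTAMP HOSTNAME TAG: MSG` line a SIEM collector expects – built and shipped over the classic UDP/514 transport. The hostname, tag, and collector address are illustrative assumptions, not values from any real configuration.

```python
import socket
import time

def rfc3164_message(facility, severity, hostname, tag, msg):
    """Build an RFC 3164 Syslog line: <PRI>TIMESTAMP HOSTNAME TAG: MSG."""
    pri = facility * 8 + severity  # PRI encodes facility and severity
    now = time.localtime()
    # RFC 3164 timestamp is "Mmm dd hh:mm:ss" with a space-padded day
    ts = "{} {:>2} {}".format(time.strftime("%b", now), now.tm_mday,
                              time.strftime("%H:%M:%S", now))
    return f"<{pri}>{ts} {hostname} {tag}: {msg}"

def send_syslog(line, collector="siem.example.com", port=514):
    """Ship one message to a SIEM collector over UDP, the classic transport."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(line.encode("ascii", "replace"), (collector, port))
```

A security-relevant mainframe event would be translated into one of these compact, structured lines – quite unlike the raw, voluminous z/OS SYSLOG stream described above.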
Recently, one of our customers reported that they are running upwards of 200 million messages per day through the CorreLog Enterprise Server – and this is just from the IBM z/OS mainframe! The closer we get to December 25, the more that number will balloon upwards. Collecting all of this data is certainly a necessity for compliance standards, forensic analysis, and managing end-user performance and availability. But how can they possibly make sense of all the data flowing through every minute?