IT Management: Audit Those Windows Servers

You may not think you need to audit your Windows Server infrastructure, but indeed you should.

Tom Kemp

There are many business and technology trends at work that increase the complexity of managing and securing your IT infrastructure. Virtualization, the “consumerization of IT” evident in the dramatic proliferation of mobile devices, and cloud computing are gradually shifting IT assets from within the firewall to outside it. The end result is more of a hybrid datacenter, with less direct control by the IT department.

At the same time, a “new office” model is evolving. There are more mobile users, contractors and offshore personnel—all of whom require access to the network. The IT department has to provide IT services to a broader cross section of users. Many users that have not historically had computing devices (such as nurses, retail salespeople and so on) now require these devices and applications to drive new levels of productivity.

As you deal with an increasingly hybrid IT infrastructure, you’re also facing significant waves of regulatory compliance demands and security concerns. To ensure the defense of your company’s proprietary information against inside or outside threats, you and your IT department require end-to-end visibility and control over users, applications, servers and devices.

You must ensure the business is protected, while remaining agile enough to respond to quickly changing business conditions. You also need this level of visibility to address any auditors’ needs. Historically, this level of security has involved locking down on-premises devices, servers and applications. Now you must apply the same level of security capability to IT resources that are outside the firewall and not directly under the control of your IT department.

Microsoft Windows Server is the leading server OS and is heavily deployed as part of Infrastructure as a Service (IaaS) offerings from vendors such as Microsoft, Rackspace US Inc. and Amazon.com Inc. Therefore, auditing a business-critical system such as Windows Server is a must, whether deployed on-premises or in the cloud.

It’s often difficult to articulate the business justification for auditing efforts and expenses to senior management. There are also certain IT organizations that might think detailed auditing efforts aren’t appropriate or necessary for their systems. Here are three major business reasons why you should audit your Windows Servers.

Compliance Requirements

There are some IT personnel who believe industry and government compliance requirements might not apply to their organizations. This is most likely untrue. If you work for a public company, take credit-card orders or store patient health information, your organization is on the hook for compliance.

There are myriad compliance regulations that create ongoing challenges for enterprises in every industry. Many companies must meet multiple requirements for internal controls (Sarbanes-Oxley Act, or SOX), data security for credit-card payments (Payment Card Industry Data Security Standard, or PCI DSS), patient health information (Health Insurance Portability and Accountability Act, HIPAA) and other industry-specific requirements (Gramm-Leach-Bliley Act, or GLBA; North American Electric Reliability Corporation/Federal Energy Regulatory Commission, or NERC/FERC; and Federal Information Security Management Act/National Institute of Standards and Technology, or FISMA/NIST SP 800-53).

Every major compliance regulation and industry mandate also requires that users authenticate with a unique identity, that privileges be limited to those needed to perform job functions, and that user activity be audited in sufficient detail to determine what events occurred, who performed them and what their outcomes were.

Here’s a look at some of the compliance rules and the corresponding auditing requirements:

SOX Section 404 (2): Must contain an assessment … of the effectiveness of the internal control structure and procedures of the issuer for financial reporting.

PCI DSS Section 10.2.1-2: Implement automated audit trails for all system components to reconstruct user activity, including all individual access to cardholder data and all actions taken by any individual with root or administrative privileges.

HIPAA 164.312(b) Audit Controls: Implement hardware, software and procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information, or ePHI.

NIST SP 800-53 (AU-14): The information system provides the capability to capture/record and log all content related to a user session, and remotely view all content related to an established user session in real time.

NERC CIP-005-1 R3 (Monitoring Electronic Access): Implement and document an electronic or manual process for monitoring and logging access.

Mitigating Insider Attacks

Many of the security breaches that have made headlines over the past year have been insider attacks, not outside hacks. Mitigating the risk of insider attacks that can lead to a data breach or system outage is a key concern.

Several factors have led to an increase in insider incidents, including shared account credentials, privileged users juggling numerous credentials across systems, and privileges that are too broad for a user’s job responsibilities. Many organizations have privileged users who are geographically dispersed, so those organizations must have visibility into the activities of both local and remote administrators and users.

Auditing user activity creates the accountability that security and compliance require, for example by:

  • Capturing and searching user activity so you can examine suspicious actions to determine if an attack is occurring—before the damage is done.
  • Changing privileged user behavior through deterrents, ensuring that trustworthy employees aren’t taking shortcuts, and ensuring disgruntled employees know malicious actions will be recorded.
  • Establishing a clear, unambiguous record for evidence in legal proceedings and dispute resolution.

Insider threats aren’t going away. One report from the United States Computer Emergency Readiness Team, or US-CERT (produced in cooperation with the U.S. Secret Service), estimated that 86 percent of internal computer sabotage incidents are perpetrated by a company’s own technology workers. It further states that 33 percent of the 2011 CyberSecurity Watch Survey participants responded that insider attacks are more costly than external ones.

Third-Party Access, Troubleshooting and Training

Today’s business environment is driving enterprises to find cost efficiency at every operational level. Outsourcing, offshoring and cloud computing are giving organizations agility, flexibility and the cost control they require to remain competitive.

Nevertheless, you and your organization are still responsible for the security and compliance of your IT systems. This is made clear in newly revised compliance requirements that specifically call out your responsibility when contracting with independent software vendors, service providers and outsourcing firms. In fact, the Health Information Technology for Economic and Clinical Health Act, or HITECH, enhancements to HIPAA closed one of the last loopholes related to third-party liability.

Third-party user access creates even more impetus to deploy auditing. In addition to insider attacks and compliance demands, third-party access increases the pressure to quickly troubleshoot ailing systems, auto-document critical processes and create training procedures for personnel hand-offs. These events occur more frequently with contractors and service providers.

Auditing Tactics

Now that it’s clear you can justify auditing your Windows Server infrastructure, what are some of the tactics you can take, and what are the pros and cons of each? Most auditing regimens use systems and security log file collection and aggregation.

There are dozens of vendors offering log file management and security event management. One drawback to relying solely on log management for your auditing approach is that log files often provide an incomplete picture of what’s really happened. They’re cluttered with large amounts of inconsequential event and management data, yet often lack the detail needed to determine which user performed the specific actions that resulted in a system failure or attack.

Interpreting log files is time-consuming and requires specialized skills. Log data is useful for top-level alerting and notification, but logged events aren’t tied to the actions of a specific user, so troubleshooting and root-cause analysis based on logs alone won’t provide the accountability that security best practices and compliance regulations demand.
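To make the log-collection tactic concrete, here’s a minimal sketch in Python, assuming a Windows host with wevtutil on the PATH and rights to read the Security log. It pulls recent failed-logon events and summarizes them per account. The event ID and field names used (4625, TargetUserName, IpAddress) are typical for the Security log but should be verified in your environment; this is illustrative, not a substitute for a log management product.

```python
# Minimal sketch: collect recent failed-logon events (event ID 4625) from the
# Windows Security log and summarize them per account and source address.
# Assumes a Windows host with wevtutil available and privileges to read the log.
import subprocess
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def failed_logons(max_events: int = 200) -> Counter:
    # wevtutil emits one <Event> XML fragment per event, with no root element
    raw = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4625)]]", "/f:xml", f"/c:{max_events}"],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(f"<Events>{raw}</Events>")  # wrap fragments in a root
    counts = Counter()
    for event in root.findall("e:Event", NS):
        user = event.findtext(
            ".//e:EventData/e:Data[@Name='TargetUserName']", "unknown", NS)
        source_ip = event.findtext(
            ".//e:EventData/e:Data[@Name='IpAddress']", "unknown", NS)
        counts[(user, source_ip)] += 1
    return counts

if __name__ == "__main__":
    for (user, ip), count in failed_logons().most_common(10):
        print(f"{count:4d} failed logons for {user!r} from {ip}")
```

Even a small script like this shows both the appeal and the limits of the approach: the events are easy to gather and count, but they tell you little about what the account actually did once inside.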

Another critical factor is the lack of visibility, because some applications have little or no internal auditing. This is often the case with custom-built applications. Auditing capabilities might not be the highest priority and developers might not understand the organization’s audit needs, including the level of detail required and importance of securing access to log data itself. Many enterprise applications are also highly customized, so they could end up not logging critical events.

Another approach is file change auditing. A change to a critical file can reflect a significant event, such as improper access to a payroll spreadsheet. Specific regulatory requirements call for file change tracking, including HIPAA sections 164.312 and 164.316, as well as PCI DSS section 11.5. These state you must “deploy file-integrity monitoring software to alert personnel to unauthorized modification of critical system files, configuration files or content files.”

The drawback to this approach is that much of your critical data is stored within databases, where generic OS-level file change tracking can’t detect modifications. The overhead associated with auditing file changes can also be prohibitive.
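As an illustration of the file-integrity idea only (not a replacement for a dedicated file-integrity monitoring product), the following sketch baselines SHA-256 hashes of a handful of critical files and reports anything that has changed since the last run. The watched paths are hypothetical placeholders.

```python
# Minimal file-integrity sketch: hash critical files, compare to a stored
# baseline and report changes. The watched paths are hypothetical examples.
import hashlib
import json
from pathlib import Path

WATCHED = [
    Path(r"C:\Windows\System32\drivers\etc\hosts"),
    Path(r"C:\inetpub\wwwroot\web.config"),   # hypothetical content file
]
BASELINE = Path("file_baseline.json")

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check() -> None:
    old = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    new = {str(p): sha256(p) for p in WATCHED if p.exists()}
    for path, digest in new.items():
        if path not in old:
            print(f"BASELINED {path}")
        elif old[path] != digest:
            print(f"CHANGED   {path}")   # candidate for an alert
    BASELINE.write_text(json.dumps(new, indent=2))

if __name__ == "__main__":
    check()
```

Note that a sketch like this sees only files on disk; a change made inside a database table would never trip it, which is exactly the gap described above.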

A third and newer approach is user-level activity monitoring. Solutions for this type of monitoring increase visibility and give you a clear understanding of the intentions, actions and results of user activity. This approach can also generate higher-level alerts that will point to more detailed data on the actions, events and commands that led up to the alert.

You can only collect this important metadata by capturing critical user-centric data; you can’t reconstruct it from system and application log data. The downside to this approach is the need to capture vast amounts of data in a centralized database, which, like some other approaches, likely requires additional infrastructure.
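To illustrate the kind of user-centric metadata such a solution collects, here’s a hypothetical sketch of a centralized activity store: every record ties an action to a user and session, so a high-level alert can be traced back to the exact sequence of actions behind it. The schema and helper functions are assumptions for illustration; commercial products capture far richer detail, including full session replay.

```python
# Hypothetical sketch of a central user-activity store: each action is tied
# to a user and session so an alert can be traced back to what was done.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("user_activity.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS activity (
        ts       TEXT,   -- UTC timestamp of the action
        user     TEXT,   -- authenticated account, not a shared login
        session  TEXT,   -- session identifier on the audited server
        server   TEXT,   -- which Windows Server the action ran on
        action   TEXT    -- command or operation performed
    )
""")

def record(user: str, session: str, server: str, action: str) -> None:
    # A monitoring agent would call this for each observed action.
    conn.execute(
        "INSERT INTO activity VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), user, session, server, action),
    )
    conn.commit()

def trace_session(session: str):
    """Given a session flagged by an alert, return every action within it."""
    return conn.execute(
        "SELECT ts, user, server, action FROM activity "
        "WHERE session = ? ORDER BY ts", (session,),
    ).fetchall()

# An analyst responding to an alert on session 'S-42' would call
# trace_session('S-42') to see the commands that led up to it.
```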

In the end, when auditing your most mission-critical Windows Servers, you should use all three approaches: log-, file- and user-level auditing. This will give you a 360-degree view of what’s happening on your servers. The risks of not auditing include data security breaches; loss of reputation and business; significant fines for lack of compliance; and loss of visibility regarding what third parties are doing on your systems. The expression “better safe than sorry” definitely applies when it comes to auditing your Windows Servers.

Tom Kemp

Tom Kemp is cofounder and chief executive officer of Centrify Corp., a software and cloud security provider. Prior to Centrify, Kemp held various executive, technical and marketing roles at NetIQ Corp., Compuware Corp., EcoSystems Software and Oracle Corp. He holds a Bachelor of Science degree in computer science and in history from the University of Michigan.