This article is the last part of a three-part series.
Inside the Windows Vista Kernel: Part 3
At a Glance:
This series has so far covered Windows Vista kernel enhancements related to processes, I/O, memory management, system startup, shutdown, and power management. In this third and final installment, I take a look at features and improvements in the areas of reliability, recovery, and security.
One feature I'm not covering in this series is User Account Control (UAC), which comprises several different technologies, including file system and registry virtualization for legacy applications, elevation consent for accessing administrative rights, and the Windows® Integrity Level mechanism for isolating processes running with administrative rights from less-privileged processes running in the same account. Look for my in-depth coverage of UAC internals in a future issue of TechNet Magazine.
Windows Vista™ improves the reliability of your system and your ability to diagnose system and application problems through a number of new features and enhancements. For example, the kernel Event Tracing for Windows (ETW) logger is always active, generating trace events for file, registry, interrupt, and other types of activity into a circular buffer. When a problem occurs, the new Windows Diagnostic Infrastructure (WDI) can capture a snapshot of the buffer and analyze it locally or upload it to Microsoft support for troubleshooting.
The new Windows Performance and Reliability Monitor helps users correlate errors, such as crashes and hangs, with changes that have been made to system configuration. The powerful System Repair Tool (SRT) replaces the Recovery Console for off-line recovery of unbootable systems.
There are three areas that rely on kernel-level changes to the system and so merit a closer look in this article: Kernel Transaction Manager (KTM), improved crash handling, and Previous Versions.
Kernel Transaction Manager
One of the more tedious aspects of software development is handling error conditions. This is especially true if, in the course of performing a high-level operation, an application has completed one or more subtasks that result in changes to the file system or registry. For example, an application's software updating service might make several registry updates, replace one of the application's executables, and then be denied access when it attempts to update a second executable. If the service doesn't want to leave the application in the resulting inconsistent state, it must track all the changes it makes and be prepared to undo them. Testing the error recovery code is difficult and consequently often skipped, so errors in the recovery code can negate the effort.
Applications written for Windows Vista can, with very little effort, gain automatic error recovery capabilities by using the new transactional support in NTFS and the registry with the Kernel Transaction Manager. When an application wants to make a number of related changes, it can either create a Distributed Transaction Coordinator (DTC) transaction and a KTM transaction handle, or create a KTM handle directly and associate the modifications of the files and registry keys with the transaction. If all the changes succeed, the application commits the transaction and the changes are applied, but at any time up to that point the application can roll back the transaction and the changes are then discarded.
As a further benefit, other applications don't see changes made in a transaction until the transaction commits, and applications that use the DTC in Windows Vista and the forthcoming Windows Server®, code-named "Longhorn," can coordinate their transactions with SQL Server™, Microsoft® Message Queue Server (MSMQ), and other databases. An application updating service that uses KTM transactions will therefore never leave the application in an inconsistent state. This is why both Windows Update and System Restore use transactions.
As the heart of transaction support, KTM allows transactional resource managers such as NTFS and the registry to coordinate their updates for a specific set of changes made by an application. In Windows Vista, NTFS uses an extension to support transactions called TxF. The registry uses a similar extension called TxR. These kernel-mode resource managers work with the KTM to coordinate the transaction state, just as user-mode resource managers use DTC to coordinate transaction state across multiple user-mode resource managers. Third parties can also use KTM to implement their own resource managers.
TxF and TxR both define a new set of file system and registry APIs that are similar to existing ones, except that they include a transaction parameter. If an application wants to create a file within a transaction, it first uses KTM to create the transaction, then passes the resulting transaction handle to the new-file creation API.
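The semantics that matter here are isolation until commit and discard on rollback. The following Python sketch models those semantics with a toy in-memory "volume"; all class and method names are hypothetical illustrations, not the actual APIs (real applications call Win32 functions such as CreateTransaction and CreateFileTransacted).

```python
# Toy model of transacted file creation: changes made inside a transaction
# stay invisible to other readers until commit, and rollback discards them.
# All names here are illustrative, not real Win32 APIs.

class Transaction:
    def __init__(self, fs):
        self.fs = fs          # the "live" file system: name -> contents
        self.pending = {}     # changes staged inside this transaction
        self.active = True

    def create_file(self, name, contents):
        assert self.active
        self.pending[name] = contents   # visible only through this handle

    def read(self, name):
        # A transacted reader sees its own pending changes overlaid on the volume.
        return self.pending.get(name, self.fs.get(name))

    def commit(self):
        self.fs.update(self.pending)    # all changes appear atomically
        self.active = False

    def rollback(self):
        self.pending.clear()            # nothing ever reaches the volume
        self.active = False

volume = {}
tx = Transaction(volume)
tx.create_file("app.exe", b"v2")
assert "app.exe" not in volume     # other applications don't see it yet
assert tx.read("app.exe") == b"v2" # but the transaction sees its own change
tx.commit()
assert volume["app.exe"] == b"v2"  # now everyone sees it
```

The updating-service scenario from earlier maps directly onto this: every registry and file change is staged under one transaction, and a failed step simply rolls the whole set back.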
TxF and TxR both rely on the high-speed file system logging functionality of the Common Log File System or CLFS (%SystemRoot%\System32\Clfs.sys) that was introduced in Windows Server 2003 R2. TxR and TxF use CLFS to durably store transactional state changes before they commit a transaction. This allows them to provide transactional recovery and assurances even if power is lost. In addition to the CLFS log, TxR creates a set of related log files to track transaction changes to the system's registry file in %Systemroot%\System32\Config\Txr, as seen in Figure 1, as well as separate sets of log files for each user registry hive. TxF stores transactional data for each volume in a hidden directory on the volume that's named \$Extend\$RmMetadata.
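The durability guarantee comes from write-ahead logging: every intended change is recorded in the log before the transaction commits, so recovery after a power failure can redo committed work and ignore the rest. Here's a minimal sketch of that recovery logic; the record format is invented for illustration and bears no resemblance to CLFS's actual on-disk format.

```python
# Write-ahead logging in miniature, as TxF/TxR use CLFS: log the change,
# then log the commit; recovery replays only writes whose transaction
# committed, so a crash mid-transaction leaves no partial state behind.

log = []   # stands in for the durable CLFS log

def log_write(txid, key, value):
    log.append(("write", txid, key, value))

def log_commit(txid):
    log.append(("commit", txid))

def recover(log):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    state = {}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            state[rec[2]] = rec[3]     # redo committed work
    return state                        # uncommitted writes are never applied

log_write(1, "HKLM\\Software\\App\\Version", "2.0")
log_commit(1)
log_write(2, "HKLM\\Software\\App\\Path", "C:\\App")   # crash before commit
assert recover(log) == {"HKLM\\Software\\App\\Version": "2.0"}
```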
Figure 1 System registry hive TxR logging files
Enhanced Crash Support
When Windows encounters an unrecoverable kernel-mode error—whether due to a buggy device driver, faulty hardware, or the operating system—it tries to prevent corruption of on-disk data by halting the system after displaying the notorious "blue screen of death" and, if configured to do so, writing the contents of some or all of physical memory to a crash dump file. Dump files are useful because when you reboot from a crash, the Microsoft Online Crash Analysis (OCA) service offers to analyze them to look for the root cause. If you like, you can also analyze them yourself using the Microsoft Debugging Tools for Windows.
In previous versions of Windows, however, support for crash dump files wasn't enabled until the Session Manager (%Systemroot%\System32\Smss.exe) process initialized paging files. This meant that any critical error before that point resulted in a blue screen, but no dump file. Since the bulk of device driver initialization occurs before Smss.exe starts, early crashes never resulted in crash dumps, making diagnosis of the cause extremely difficult.
Windows Vista reduces the window of time where no dump file is generated by initializing dump file support after all the boot-start device drivers are initialized but before loading system-start drivers. Because of this change, if you do experience a crash during the early part of the boot process, the system can capture a crash dump, allowing OCA to help you resolve the problem. Further, Windows Vista saves data to a dump file in 64KB blocks, whereas previous versions of Windows wrote them using 4KB blocks. This change results in large dump files being written up to 10 times faster.
Application crash handling is also improved in Windows Vista. On previous versions of Windows, when an application crashed it executed an unhandled exception handler. The handler launched the Microsoft Application Error Reporting (AER) process (%Systemroot%\System32\Dwwin.exe) to display a dialog indicating that the program had crashed and asking whether you wanted to send an error report to Microsoft. However, if the stack of the process's main thread was corrupted during the crash, the unhandled exception handler crashed when it executed, resulting in termination of the process by the kernel, the instant disappearance of the program's windows, and no error reporting dialog.
Windows Vista moves error handling out of the context of the crashing process into a new service, Windows Error Reporting (WER). This service is implemented by a DLL (%Systemroot%\System32\Wersvc.dll) inside a Service Hosting process. When an application crashes, it still executes an unhandled exception handler, but that handler sends a message to the WER service and the service launches the WER Fault Reporting process (%Systemroot%\System32\Werfault.exe) to display the error reporting dialog. If the stack is corrupted and the unhandled exception handler crashes, the handler executes again and crashes again, eventually consuming all the thread's stack (scratch memory area), at which point the kernel steps in and sends the crash notification message to the service.
You can see the contrast in these two approaches in Figures 2 and 3, which show the process relationship of Accvio.exe, a test program that crashes, and the error reporting processes highlighted in green, on Windows XP and Windows Vista. The new Windows Vista error handling architecture means that programs will no longer silently terminate without offering the chance for Microsoft to obtain an error report and help software developers improve their applications.
Figure 2a Application error handling in Windows XP
Figure 2b
Figure 3a Application error handling in Windows Vista
Volume Shadow Copy
Windows XP introduced a technology called Volume Shadow Copy to make point-in-time snapshots of disk volumes. Backup applications can use these snapshots to make consistent backup images, but the snapshots are otherwise hidden from view and kept only for the duration of the backup process.
The snapshots are not actually full copies of volumes. Rather, they are views of a volume from an earlier point that comprise the live volume data overlaid with copies of volume sectors that have changed since the snapshot was taken. The Volume Snapshot Provider driver (%Systemroot%\System32\Drivers\Volsnap.sys) monitors write operations aimed at volumes and makes backup copies of sectors before allowing them to change, storing the original data in a file associated with the snapshot in the System Volume Information directory of the volume.
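The copy-on-write scheme just described can be sketched in a few lines of Python. This is a deliberately tiny model (whole "sectors" as byte strings, a dict as the diff area), but it shows the two essential behaviors: only sectors that change get copied, and reading a snapshot overlays those saved sectors on the live volume.

```python
# Copy-on-write snapshots in miniature: before a sector of the live volume
# is overwritten, its original contents are saved to the snapshot's diff
# area; reading the snapshot overlays saved sectors on the live volume,
# yielding a point-in-time view without copying the whole disk.

class Volume:
    def __init__(self, sectors):
        self.sectors = list(sectors)
        self.snapshots = []            # each is a dict: sector index -> old data

    def snapshot(self):
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        for snap in self.snapshots:
            if index not in snap:      # first change since that snapshot
                snap[index] = self.sectors[index]
        self.sectors[index] = data

    def read_snapshot(self, snap, index):
        return snap.get(index, self.sectors[index])

vol = Volume([b"A", b"B", b"C"])
snap = vol.snapshot()
vol.write(1, b"B2")
assert vol.sectors[1] == b"B2"             # live volume shows the new data
assert vol.read_snapshot(snap, 1) == b"B"  # snapshot still shows the old data
assert vol.read_snapshot(snap, 0) == b"A"  # unchanged sectors come from the live volume
```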
Windows Server 2003 exposed snapshot management to administrators on the server and to users on client systems with its Shadow Copies for Shared Folders. This feature enabled persistent snapshots that users could access via a Previous Versions tab on the Explorer properties dialog boxes for their folders and files located on the server's file shares.
The Windows Vista Previous Versions feature brings this support to all client systems, automatically creating volume snapshots, typically once per day, that you can access through Explorer properties dialogs using the same interface used by Shadow Copies for Shared Folders. This enables you to view, restore, or copy old versions of files and directories that you might have accidentally modified or deleted. While technically not new technology, the Windows Vista Previous Versions implementation of Volume Shadow Copy optimizes that of Windows Server 2003 for use in client desktop environments.
Windows Vista also takes advantage of volume snapshots to unify user and system data protection mechanisms and avoid saving redundant backup data. When an application installation or configuration change causes incorrect or undesirable behaviors, you can use System Restore, a feature introduced into the Windows NT® line of operating systems in Windows XP, to restore system files and data to their state as it existed when a restore point was created.
In Windows XP, System Restore uses a file system filter driver—a type of a driver that can see changes at the file level—to make backup copies of system files at the time they change. On Windows Vista, System Restore uses volume snapshots. When you use the System Restore user interface in Windows Vista to go back to a restore point, you're actually copying earlier versions of modified system files from the snapshot associated with the restore point to the live volume.
Windows Vista is the most secure version of Windows yet. In addition to the inclusion of the Windows Defender antispyware engine, Windows Vista introduces numerous security and defense-in-depth features, including BitLocker™ full-volume encryption, code signing for kernel-mode code, protected processes, Address Space Load Randomization, and improvements to Windows service security and User Account Control.
BitLocker Drive Encryption
An operating system can only enforce its security policies while it's active, so you have to take additional measures to protect data when the physical security of a system can be compromised and the data accessed from outside the operating system. BIOS passwords and encryption are two hardware-based mechanisms commonly used to prevent unauthorized access, especially on laptops, which are the systems most likely to be lost or stolen.
Windows 2000 introduced the Encrypting File System (EFS) and, in its Windows Vista incarnation, EFS includes a number of improvements over previous implementations, including performance enhancements, support for encrypting the paging file, and storage of user EFS keys on smart cards. However, you can't use EFS to protect access to sensitive areas of the system, such as the registry hive files. For example, if Group Policy allows you to log onto your laptop even when you're not connected to a domain, then your domain credential verifiers are cached in the registry, so an attacker could use tools to obtain your domain account password hash and use that to try to obtain your password with a password cracker. The password would gain them access to your account and EFS files (assuming you didn't store the EFS key on a smart card).
To make it easy to encrypt the entire boot volume (the volume with the Windows directory), including all its system files and data, Windows Vista introduces a full-volume encryption feature called Windows BitLocker Drive Encryption. Unlike EFS, which is implemented by the NTFS file system driver and operates at the file level, BitLocker encrypts at the volume level using the Full Volume Encryption (FVE) driver (%Systemroot%\System32\Drivers\Fvevol.sys) as diagrammed in Figure 4.
Figure 4 BitLocker FVE filter driver
FVE is a filter driver so it automatically sees all the I/O requests that NTFS sends to the volume, encrypting blocks as they're written and decrypting them as they're read using the Full Volume Encryption Key (FVEK) assigned to the volume when it's initially configured to use BitLocker. By default, volumes are encrypted using a 128-bit AES key and a 128-bit diffuser key. Because the encryption and decryption happen beneath NTFS in the I/O system, the volume appears to NTFS as if it's unencrypted and NTFS does not even need to be aware that BitLocker is enabled. If you attempt to read data from the volume from outside of Windows, however, it appears to be random data.
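The filter-driver view of full-volume encryption can be sketched as follows: the file system issues plain reads and writes, and the filter transforms each block on its way to and from disk, so NTFS never sees ciphertext. Note the hedging: a keyed XOR stream derived from SHA-256 stands in for AES purely to keep the sketch dependency-free; BitLocker's real cipher is AES with a diffuser, and every name here is illustrative.

```python
import hashlib

# A toy full-volume-encryption filter: encrypt on the write path, decrypt on
# the read path, transparently to the file system above. The keyed XOR
# stream below is a stand-in for AES, used only so the sketch runs with the
# standard library alone.

def keystream(fvek, block_no, length):
    # Per-block keystream derived from the volume key and block number.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(fvek + block_no.to_bytes(8, "little")
                              + counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:length]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

class FveFilter:
    def __init__(self, fvek, disk):
        self.fvek, self.disk = fvek, disk   # disk: block number -> ciphertext

    def write(self, block_no, plaintext):   # sits on NTFS's write path
        self.disk[block_no] = xor(plaintext,
                                  keystream(self.fvek, block_no, len(plaintext)))

    def read(self, block_no):               # sits on NTFS's read path
        ct = self.disk[block_no]
        return xor(ct, keystream(self.fvek, block_no, len(ct)))

disk = {}
fve = FveFilter(b"volume-key", disk)
fve.write(7, b"NTFS metadata")
assert fve.read(7) == b"NTFS metadata"   # transparent to the file system
assert disk[7] != b"NTFS metadata"       # on-disk data is not plaintext
```

Reading the raw disk dict is the moral equivalent of reading the volume from outside Windows: without the key, the blocks look like random data.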
The FVEK is encrypted with a Volume Master Key (VMK) and stored in a special metadata region of the volume. When you configure BitLocker, you have a number of options for how the VMK will be protected, depending on the system's hardware capabilities. If the system has a Trusted Platform Module (TPM) that conforms to v1.2 of the TPM specification and has associated BIOS support, then you can either encrypt the VMK with the TPM, have the system encrypt the VMK using a key stored in the TPM and one stored on a USB flash device, or encrypt the key using a TPM-stored key and a PIN you enter when the system boots. For systems that don't have a TPM, BitLocker offers the option of encrypting the VMK using a key stored on an external USB flash device. In any case you'll need an unencrypted 1.5GB NTFS system volume, the volume where the Boot Manager and Boot Configuration Database (BCD) are stored.
The advantage of using a TPM is that BitLocker uses TPM features to ensure that it will not decrypt the VMK and unlock the boot volume if the BIOS or the system boot files have changed since BitLocker was enabled. When you encrypt the system volume for the first time, and each time you perform updates to any of the components mentioned, BitLocker calculates SHA-1 hashes of these components and stores each hash, called a measurement, in different Platform Configuration Registers (PCR) of the TPM with the help of the TPM device driver (%Systemroot%\System32\Drivers\Tpm.sys). It then uses the TPM to seal the VMK, an operation that uses a private key stored in the TPM to encrypt the VMK and the values stored in the PCRs along with other data BitLocker passes to the TPM. BitLocker then stores the sealed VMK and encrypted FVEK in the volume's metadata region.
When the system boots, it measures its own hashing and PCR loading code and writes the hash to the first PCR of the TPM. It then hashes the BIOS and stores that measurement in the appropriate PCR. The BIOS in turn hashes the next component in the boot sequence, the Master Boot Record (MBR) of the boot volume, and this process continues until the operating system loader is measured. Each subsequent piece of code that runs is responsible for measuring the code that it loads and for storing the measurement into the appropriate register in the TPM. Finally, when the user selects which operating system to boot, the Boot Manager (Bootmgr) reads the encrypted VMK from the volume and asks the TPM to unseal it. Only if all the measurements are the same as when the VMK was sealed, including the optional PIN, will the TPM successfully decrypt the VMK.
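The measured-boot chain above can be modeled compactly: extending a PCR means replacing its value with the SHA-1 hash of the old value concatenated with the new measurement, so the final PCR value depends on every component measured along the way, in order. The sketch below captures only that chaining property; real TPM sealing, register assignments, and boot components are considerably more involved.

```python
import hashlib

# Measured boot in miniature: each component's hash is folded into a PCR
# (new PCR = SHA-1 of old PCR || measurement). The TPM releases the sealed
# VMK only if the final PCR value matches the one recorded at seal time, so
# tampering with any measured component breaks the chain.

def extend(pcr, component):
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 20                 # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

boot_chain = [b"BIOS", b"MBR", b"Boot Manager"]
sealed_pcr = measure_boot(boot_chain)  # recorded when BitLocker is enabled

# A normal boot reproduces the same measurements, so the VMK is released.
assert measure_boot(boot_chain) == sealed_pcr

# A tampered component yields a different PCR and the TPM refuses to unseal.
assert measure_boot([b"BIOS", b"evil MBR", b"Boot Manager"]) != sealed_pcr
```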
You can think of this scheme as a verification chain, where each component in the boot sequence describes the next component to the TPM. Only if all the descriptions match the original ones given to it will the TPM divulge its secret. BitLocker therefore protects the encrypted data even when the disk is removed and placed in another system, the system is booted using a different operating system, or the unencrypted files on the boot volume are compromised.
Code Integrity Verification
Malware that is implemented as a kernel-mode device driver, including rootkits, runs at the same privilege level as the kernel and so is the most difficult to identify and remove. Such malware can modify the behavior of the kernel and other drivers so as to become virtually invisible. The Windows Vista code integrity for kernel-mode code feature, also known as kernel-mode code signing (KMCS), only allows device drivers to load if they are published and digitally signed by developers who have been vetted by one of a handful of certificate authorities (CAs). KMCS is enforced by default on Windows Vista for 64-bit systems.
Because certificate authorities charge a fee for their services and perform basic background checks, such as verifying a business identity, it's harder to produce anonymous kernel-mode malware that runs on 64-bit Windows Vista. Further, malware that does manage to slip through the verification process can potentially leave clues that lead back to the author when the malware is discovered on a compromised system. KMCS also has secondary uses, like providing contact information for the Windows Online Crash Analysis team when a driver is suspected of having a bug that's crashing customer systems, and unlocking high-definition multimedia content, which I'll describe shortly.
KMCS uses public-key cryptography technologies that have been employed for over a decade by Windows and requires that kernel-mode code include a digital signature generated by one of the trusted certificate authorities. If a publisher submits a driver to the Microsoft Windows Hardware Quality Laboratory (WHQL) and the driver passes reliability testing, then Microsoft serves as the certificate authority that signs the code. Most publishers will obtain signatures via WHQL, but when a driver has no WHQL test program, the publisher doesn't want to submit to WHQL testing, or the driver is a boot-start driver that loads early in system startup, the publishers must sign the code themselves. To do so, they must first obtain a code-signing certificate from one of the certificate authorities that Microsoft has identified as trusted for kernel-mode code signing. The author then digitally hashes the code, signs the hash by encrypting it with a private key, and includes the certificate and encrypted hash with the code.
When a driver tries to load, Windows decrypts the hash included with the code using the public key stored in the certificate, then verifies that the hash matches the one included with the code. The authenticity of the certificate is checked in the same way, but using the certificate authority's public key, which is included with Windows.
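That verification round trip (sign the hash with the private key, recover and compare it with the public key) can be demonstrated with textbook RSA. The tiny primes below exist only to make the arithmetic visible; real driver signatures use full-size keys, padding schemes, and certificate chains, and the digest is reduced here only so it fits the toy modulus.

```python
import hashlib

# Toy version of the driver-signature check: the signer transforms the
# code's hash with the private key; the verifier recomputes the hash and
# compares it against the value recovered with the public key.

p, q, e = 61, 53, 17
n = p * q                           # public modulus
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def code_hash(code):
    # Reduced mod n so it fits the toy key; real code signs the full digest.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code):
    return pow(code_hash(code), d, n)     # done by the publisher (or WHQL)

def verify(code, signature):
    return pow(signature, e, n) == code_hash(code)   # done at driver load

driver = b"driver image bytes"
sig = sign(driver)
assert verify(driver, sig)                    # genuine signature checks out
assert not verify(driver, (sig + 1) % n)      # tampered signature is rejected
```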
Windows also checks the associated certificate chains up to one of the root authorities embedded in the Windows boot loader and operating system kernel. Attempts to load an unsigned 64-bit driver should never occur on a production system, so unlike the Plug and Play Manager, which displays a warning dialog when it's directed to load a driver that doesn't have a signature confirming that it's been through WHQL testing, 64-bit Windows Vista silently writes an event to the Code Integrity application event log, like the one shown in Figure 5, anytime it blocks the loading of an unsigned driver. 32-bit Windows Vista also checks driver signatures, but allows unsigned drivers to load; blocking them would break upgraded Windows XP systems that require drivers that were loaded on Windows XP, and would drop support for hardware for which only Windows XP drivers exist. However, 32-bit Windows Vista also writes events to the Code Integrity event log when it loads an unsigned driver.
Figure 5 Unsigned driver load attempt events
Because code signing is commonly used to label code as a rigorously tested official release, publishers typically don't want to sign test code. Windows Vista therefore includes a test-signing mode you can enable and disable with the Bcdedit tool (described in my March 2007 TechNet Magazine article), where it will load kernel-mode drivers digitally signed with a test certificate generated by an in-house certificate authority. This mode is designed for use by programmers while they develop their code. When Windows is in this mode, it displays markers on the desktop like the one in Figure 6.
Figure 6 Windows Vista test-signing mode
Protected Processes
Next-generation multimedia content, like HD-DVD, Blu-ray, and other formats licensed under the Advanced Access Content System (AACS), will become more common over the next few years. Windows Vista includes a number of technologies, collectively called Protected Media Path (PMP), that are required by the AACS standard for such content to be played. PMP includes Protected User-Mode Audio (PUMA) and Protected Video Path (PVP), which together provide mechanisms for audio and video drivers, as well as media player applications, to prevent unauthorized software or hardware from capturing content in high-definition form.
PUMA and PVP define interfaces and support specific to audio and video players, device drivers, and hardware, but PMP also relies on a general kernel mechanism introduced in Windows Vista called a protected process. Protected processes are based on the standard Windows process construct that encapsulates a running executable image, its DLLs, security context (the account under which the process is running and its security privileges), and the threads that execute code within the process, but prevent certain types of access.
Standard processes implement an access control model that allows full access to the owner of the process and administrative accounts with the Debug Programs privilege. Full access allows a user to view and modify the address space of the process, including the code and data mapped into the process. Users can also inject threads into the process. These types of access are not consistent with the requirements of PMP because they would allow unauthorized code to gain access to high-definition content and Digital Rights Management (DRM) keys stored in a process that is playing the content.
Protected processes restrict access to a limited set of informational and process management interfaces that include querying the process's image name and terminating or suspending the process. But the kernel makes diagnostic information for protected processes available through general process query functions that return data regarding all the processes on a system and so don't require direct access to the process. Accesses that could compromise media are allowed only by other protected processes.
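Conceptually, the kernel filters the access a caller can obtain on a protected process down to that short list of informational and management rights. The sketch below illustrates that masking idea; the access-right constants carry real Win32 values, but the policy function and its behavior (masking rather than the actual open-time failure) are simplified for illustration.

```python
# Sketch of the protected-process access check: an ordinary caller can
# obtain at most a short list of rights on a protected process, no matter
# what it asks for; rights like reading or writing the address space are
# simply unavailable. Policy here is simplified for illustration.

PROCESS_TERMINATE                 = 0x0001
PROCESS_VM_READ                   = 0x0010
PROCESS_VM_WRITE                  = 0x0020
PROCESS_SUSPEND_RESUME            = 0x0800
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

# Rights an unprotected caller can obtain on a protected process.
PROTECTED_ALLOWED = (PROCESS_TERMINATE | PROCESS_SUSPEND_RESUME
                     | PROCESS_QUERY_LIMITED_INFORMATION)

def granted_access(requested, target_protected, caller_protected):
    if target_protected and not caller_protected:
        return requested & PROTECTED_ALLOWED   # everything else is masked off
    return requested                           # protected peers get full access

# A debugger asking to read a protected process's memory gets only terminate:
req = PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_TERMINATE
assert granted_access(req, True, False) == PROCESS_TERMINATE
# The same request against an ordinary process succeeds in full.
assert granted_access(req, False, False) == req
```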
Further, to prevent compromise from within, all executable code loaded into a protected process, including its executable image and DLLs, must be either signed by Microsoft (WHQL) with a Protected Environment (PE) flag, or if it's an audio codec, signed by the developer with a DRM-signing certificate obtained from Microsoft. Because kernel-mode code can gain full access to any process, including protected processes, and 32-bit Windows allows unsigned kernel-mode code to load, the kernel provides an API for protected processes to query the "cleanliness" of the kernel-mode environment and use the result to unlock premium content only if no unsigned code is loaded.
Address Space Load Randomization
Despite measures like Data Execution Prevention and enhanced compiler error checking, malware authors continue to find buffer overflow vulnerabilities that allow them to infect network-facing processes like Internet Explorer®, Windows services, and third-party applications to gain a foothold on a system. Once they have managed to infect a process, however, they must use Windows APIs to accomplish their ultimate goal of reading user data or establishing a permanent presence by modifying user or system configuration settings.
Connecting an application with API entry points exported by DLLs is something usually handled by the operating system loader, but these types of malware infection don't get the benefit of the loader's services. This hasn't posed a problem for malware on previous versions of Windows because for any given Windows release, system executable images and DLLs always load at the same location, allowing malware to assume that APIs reside at fixed addresses.
The Windows Vista Address Space Load Randomization (ASLR) feature makes it impossible for malware to know where APIs are located by loading system DLLs and executables at a different location every time the system boots. Early in the boot process, the Memory Manager picks a random DLL image-load bias from one of 256 64KB-aligned addresses in the 16MB region at the top of the user-mode address space. As DLLs that have the new dynamic-relocation flag in their image header load into a process, the Memory Manager packs them into memory starting at the image-load bias address and working its way down.
Executables that have the flag set get a similar treatment, loading at a random 64KB-aligned point within 16MB of the base load address stored in their image header. Further, if a given DLL or executable loads again after being unloaded by all the processes using it, the Memory Manager reselects a random location at which to load it. Figure 7 shows an example address-space layout for a 32-bit Windows Vista system, including the areas from which ASLR picks the image-load bias and executable load address.
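The arithmetic behind the bias selection is simple: 256 slots times 64KB is exactly the 16MB window. Here's a sketch of the selection and the downward packing; the window base address is an assumption for illustration (the precise region Windows Vista uses isn't derivable from the text), and the point is the mechanism, not the constants.

```python
import random

# ASLR load-bias selection in miniature: pick one of 256 possible
# 64KB-aligned start addresses inside a 16MB window at the top of the 2GB
# user address space, then pack ASLR-aware DLLs downward from that bias.
# The window base is an illustrative assumption.

TOP_OF_USER_SPACE = 0x80000000
WINDOW = 16 * 1024 * 1024          # 16MB region holding the 256 slots
SLOT   = 64 * 1024                 # 64KB alignment; 256 * SLOT == WINDOW

def pick_image_load_bias(rng):
    slot = rng.randrange(256)      # one of 256 equally likely choices
    return TOP_OF_USER_SPACE - WINDOW + slot * SLOT

def pack_dlls(bias, dll_sizes):
    # DLLs are packed downward from the bias, each rounded up to 64KB.
    addrs, addr = [], bias
    for size in dll_sizes:
        addr -= -(-size // SLOT) * SLOT   # ceil size to a 64KB multiple
        addrs.append(addr)
    return addrs

rng = random.Random(1)
bias = pick_image_load_bias(rng)
assert bias % SLOT == 0                                   # 64KB aligned
assert TOP_OF_USER_SPACE - WINDOW <= bias < TOP_OF_USER_SPACE
a, b = pack_dlls(bias, [0x23000, 0x5000])   # two DLLs, sizes rounded up
assert a == bias - 0x30000 and b == a - 0x10000
```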
Figure 7 ASLR's effect on executable and DLL load addresses
Only images that have the dynamic-relocation flag, which includes all Windows Vista DLLs and executables, get relocated because moving legacy images could break internal assumptions that developers have made about where their images load. Visual Studio® 2005 SP1 adds support for setting the flag so that third-party developers can take full advantage of ASLR.
Randomizing DLL load addresses to one of 256 locations doesn't make it impossible for malware to guess the correct location of an API, but it severely hampers the speed at which a network worm can propagate and it prevents malware that only gets one chance at infecting a system from working reliably. In addition, ASLR's relocation strategy has the secondary benefit that address spaces are more tightly packed than on previous versions of Windows, creating larger regions of free memory for contiguous memory allocations, reducing the number of page tables the Memory Manager allocates to keep track of address-space layout, and minimizing Translation Lookaside Buffer (TLB) misses.
Service Security Improvements
Windows services make ideal malware targets. Many offer network access to their functionality, possibly exposing remotely exploitable access to a system, and most run with more privilege than standard user accounts, offering the chance to elevate privileges on a local system if they can be compromised by malware. For this reason, Windows began evolving, with changes in Windows XP SP2, to reduce the privileges and access assigned to services to just those needed for their roles. For example, Windows XP SP2 introduced the Local Service and Network Service accounts, which include only a subset of the privileges available to Local System, the account in which all services previously ran. This minimizes the access an attacker gains by exploiting a service.
In my previous article, I described how services run isolated from user accounts in their own session, but Windows Vista also expands its use of the principle of least privilege by further reducing the privileges and access to the files, registry keys, and firewall ports it assigns to most services. Windows Vista defines a new group account, called a service Security Identifier (SID), unique to each service. The service can set permissions on its resources so that only its service SID has access, preventing other services running in the same user account from having access if a service becomes compromised. You can see a service's SID by using the sc showsid command followed by the service name, as seen in Figure 8.
Figure 8 Viewing a service SID
Service SIDs protect access to resources owned by a particular service, but by default services still have access to all the objects that the user account in which they run can access. For example, a service running in the Local Service account might not be able to access resources created by another service running as Local Service in a different process that has protected its objects with permissions referencing a service SID; however, it can still read and write any objects to which Local Service (and any groups to which Local Service belongs, like the Service group) has permissions.
Windows Vista therefore introduces a new type of service called a write-restricted service, which is permitted write access only to objects accessible to its service SID, the Everyone group, and the SID assigned to the logon session. To accomplish this, it uses restricted SIDs, a SID type introduced back in Windows 2000. When the process opening an object is a write-restricted service, the access-check algorithm changes so that a SID that has not been assigned to the process in both restricted and unrestricted forms cannot be used to grant the process write access to an object. You can see whether a service is write-restricted by using the sc qsidtype command followed by the service name.
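The two-pass nature of the restricted-token check is the key idea: a write is granted only if both the normal SID list and the restricted SID list pass the object's ACL. The sketch below models an ACL as a dict from SID to a set of rights; all SID strings and the ACL representation are illustrative, not the real security descriptor format.

```python
# Write-restricted access check in miniature: a service with a restricted
# token passes a write check only if BOTH its normal SID list and its
# restricted SID list (service SID, Everyone, logon-session SID) are
# granted the access by the object's ACL.

def acl_grants(acl, sids, right):
    return any(right in acl.get(sid, set()) for sid in sids)

def access_check(acl, normal_sids, restricted_sids, right):
    if not acl_grants(acl, normal_sids, right):
        return False
    if restricted_sids is not None:     # write-restricted token: second pass
        return acl_grants(acl, restricted_sids, right)
    return True

normal     = ["LOCAL SERVICE", "Everyone", "SERVICE\\EventLog"]
restricted = ["SERVICE\\EventLog", "Everyone", "LOGON-SESSION"]

# Object writable by Local Service but by no restricted SID: write is blocked
# for the write-restricted service, allowed for an unrestricted one.
acl1 = {"LOCAL SERVICE": {"write"}}
assert not access_check(acl1, normal, restricted, "write")
assert access_check(acl1, normal, None, "write")

# Object that explicitly grants the service SID write access: both passes succeed.
acl2 = {"LOCAL SERVICE": {"write"}, "SERVICE\\EventLog": {"write"}}
assert access_check(acl2, normal, restricted, "write")
```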
Another change makes it easy for a service to prevent other services running in the same account from having access to the objects it creates. In previous versions of Windows, the creator of an object is also the object's owner, and owners have the ability to read and change the permissions of their objects, allowing them full access to their own objects. Windows Vista introduces the new Owner Rights SID, which, if present in an object's permissions, can limit the accesses an owner has to its own object, even removing the right to set and query the permissions.
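The effect of the Owner Rights SID is easiest to see side by side with the default owner behavior. In the sketch below, the implicit owner rights are modeled as READ_CONTROL and WRITE_DAC (reading and changing permissions); the ACL representation and SID names are illustrative simplifications of the real security descriptor.

```python
# Owner Rights in miniature: normally an object's owner implicitly gets
# READ_CONTROL and WRITE_DAC (the rights to read and change permissions);
# if an OWNER RIGHTS entry is present, the owner gets exactly the rights it
# lists instead, which may be none at all.

READ_CONTROL, WRITE_DAC = "READ_CONTROL", "WRITE_DAC"

def owner_rights(acl, owner_sid):
    if "OWNER RIGHTS" in acl:
        return set(acl["OWNER RIGHTS"])       # explicit entry wins
    return {READ_CONTROL, WRITE_DAC} | acl.get(owner_sid, set())

# Without an Owner Rights entry, the creating service can re-grant itself access:
assert WRITE_DAC in owner_rights({}, "SERVICE\\Spooler")

# With an empty Owner Rights entry, the owner can't even read the permissions:
assert owner_rights({"OWNER RIGHTS": set()}, "SERVICE\\Spooler") == set()
```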
A further enhancement to the service security model in Windows Vista enables a service developer to specify exactly what security privileges the service needs to operate when the service registers on a system. For example, if the service needs to generate audit events it could list the Audit privilege.
When the Service Control Manager starts a process that hosts one or more Windows services, it creates a security token (the kernel object that lists a process's user account, group memberships, and security privileges) for the process that includes only the privileges required by the services in the process. If a service specifies a privilege that is not available to the account in which it runs, the service fails to start. When none of the services running in a Local Service account process needs the Debug Programs privilege, for example, the Service Control Manager strips that privilege from the process's security token. Thus, if the service process is compromised, malicious code cannot take advantage of privileges that were not explicitly requested by the services running in the process. The sc qprivs command reports the privileges that a service has requested.
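The token computation just described reduces to simple set arithmetic: union the privileges the hosted services registered as required, check them against what the account can offer, and strip everything else. The account's privilege set and all names below are illustrative assumptions, not the real Local Service privilege list.

```python
# Service Control Manager privilege stripping in miniature: the hosting
# process's token keeps only the union of its services' declared privileges,
# and a service that requires a privilege the account lacks fails to start.
# The account privilege sets here are illustrative.

ACCOUNT_PRIVS = {"LOCAL SERVICE": {"SeChangeNotifyPrivilege",
                                   "SeAuditPrivilege",
                                   "SeImpersonatePrivilege"}}

def build_process_token(account, services):
    # services: name -> set of privileges the service declared it needs
    required = set().union(*services.values()) if services else set()
    missing = required - ACCOUNT_PRIVS[account]
    if missing:
        raise RuntimeError(f"service requires unavailable privileges: {missing}")
    return required          # everything else is stripped from the token

token = build_process_token("LOCAL SERVICE",
                            {"EventLog": {"SeAuditPrivilege"},
                             "Foo":      {"SeChangeNotifyPrivilege"}})
assert token == {"SeAuditPrivilege", "SeChangeNotifyPrivilege"}
assert "SeImpersonatePrivilege" not in token    # unrequested privilege removed

try:
    build_process_token("LOCAL SERVICE", {"Bar": {"SeTcbPrivilege"}})
    assert False, "should have failed to start"
except RuntimeError:
    pass    # the service fails to start
```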
This concludes my three-part look at Windows Vista kernel changes. There are features and improvements I didn't cover or mention, like a new worker thread pool for application developers, new synchronization mechanisms such as shared reader/writer locks, service thread tagging, support for online NTFS disk checking and volume resizing, and a new kernel IPC mechanism called Advanced Local Procedure Call (ALPC). Look for more information on these and other features in the next edition of Windows Internals, scheduled for publication by the end of 2007.
Mark Russinovich is a Technical Fellow at Microsoft in the Platform and Services Division. He's coauthor of Microsoft Windows Internals (Microsoft Press, 2004) and a frequent speaker at IT and developer conferences, including Microsoft Tech•Ed and the PDC. He joined Microsoft with the acquisition of the company he co-founded, Winternals Software. He also created Sysinternals, where he published many popular utilities, including Process Explorer, Filemon, and Regmon.
© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited