x86 Kernel Memory Space Visualization (KernelMAP v0.0.1)

What I would like to write about today is a subject I have been playing with for quite some time – Windows kernel vulnerability exploitation techniques. While digging through various articles and other materials, I came across plenty of interesting facts that are worth describing here. Today’s post aims to describe the various ways of obtaining kernel-mode addresses from the user-mode (application) level.

One could ask what we would want to retrieve any internal system addresses for. Well, it is indeed a very good question – as for me, kernel addresses become most useful in the vulnerability exploitation process. Since a majority of the bugs found in device drivers belong, directly (pointer validation) or indirectly (pool buffer overflow), to the write-what-where condition family, one must know the exact address to be overwritten before performing the operation. This basically means that the more information about the kernel memory layout we can gather, the more stable and effective attacks we can conduct.

The idea I am writing about is certainly not new. Many kernel exploit developers have already used such techniques in their source code. However, I have never found a publication that thoroughly describes every possible vector of obtaining somewhat “sensitive” kernel data (addresses) from within user-mode. Hence, I would like to present a short introduction to each method I could think of – a longer article will presumably be released within a few days. Huh, let’s get to the point, already!

NtQuerySystemInformation function

Before trying to retrieve any information from the kernel, one should first realize what the possible “communication channels” are that could be used to get the desired data. The most basic division of methods could look like this:

  • Kernel communication – calling some of the exported system routines and receiving the data we are interested in through the output buffer
  • Processor communication – directly using some of the processor characteristics (i.e. the instruction set) in order to query for processor-specific values that the system is obliged to fill.

All in all, the kernel isn’t meant to release too much information about itself to the user (since every leaked piece of data could potentially help an attacker hack the machine); due to this fact, there are special routines designed to handle queries about the system state, configuration etc. As it turns out, these system calls (named NtQuery*Information) can provide very miscellaneous kinds of information that not every low-level coder is aware of. I strongly advise you to take a look at and test these functions using many different arguments.

Even though these syscalls are either documented very poorly or not documented at all, some independent researchers have already managed to describe a great part of them – their work is publicly available, for example here. For our purposes (global system info), NtQuerySystemInformation seems to be the most useful one – and it is, indeed. As one can see, the first routine parameter is _SYSTEM_INFORMATION_CLASS – a single enum containing all the possible request types, shown below:

typedef enum _SYSTEM_INFORMATION_CLASS {
  SystemInformationClassMin = 0,
  SystemBasicInformation = 0,
  SystemProcessorInformation = 1,
  SystemPerformanceInformation = 2,
  SystemTimeOfDayInformation = 3,
  SystemPathInformation = 4,
  SystemNotImplemented1 = 4,
  SystemProcessInformation = 5,
  SystemProcessesAndThreadsInformation = 5,
  SystemCallCountInfoInformation = 6,
  SystemCallCounts = 6,
  SystemDeviceInformation = 7,
  SystemConfigurationInformation = 7,
  SystemProcessorPerformanceInformation = 8,
  SystemProcessorTimes = 8,
  SystemFlagsInformation = 9,
  SystemGlobalFlag = 9,
  SystemCallTimeInformation = 10,
  SystemNotImplemented2 = 10,
  SystemModuleInformation = 11,
  SystemLocksInformation = 12,
  SystemLockInformation = 12,
  SystemStackTraceInformation = 13,
  SystemNotImplemented3 = 13,
  SystemPagedPoolInformation = 14,
  SystemNotImplemented4 = 14,
  SystemNonPagedPoolInformation = 15,
  SystemNotImplemented5 = 15,
  SystemHandleInformation = 16,
  SystemObjectInformation = 17,
  SystemPageFileInformation = 18,
  SystemPagefileInformation = 18,
  SystemVdmInstemulInformation = 19,
  SystemInstructionEmulationCounts = 19,
  SystemVdmBopInformation = 20,
  (...)
} SYSTEM_INFORMATION_CLASS;

(you can find the complete definition in the standard ddk\ntapi.h header file). As the names imply, there is really plenty of information to get – the only thing required is knowledge of what the input/output structures for each request look like. At this point, we are particularly interested in three query types – SystemModuleInformation, SystemHandleInformation and SystemLocksInformation. Moreover, SystemObjectInformation could also be useful under specific circumstances. Let’s go through these requests and find out what information we can get.
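First, one practical note: all of these request types share the same calling pattern. Since the required output size is rarely known in advance, the routine is typically called in a loop, growing the buffer until it stops failing with STATUS_INFO_LENGTH_MISMATCH. Below is a minimal sketch of that idiom; the query itself is abstracted behind a function pointer (with a stand-in implementation for illustration), so on Windows you would simply wrap NtQuerySystemInformation in a matching callback:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned long ULONG;
#define STATUS_SUCCESS              0x00000000UL
#define STATUS_INFO_LENGTH_MISMATCH 0xC0000004UL

/* Buffer contract compatible with NtQuerySystemInformation: fill Buffer
   if Length suffices, otherwise report the needed size via ReturnLength
   and fail with STATUS_INFO_LENGTH_MISMATCH. */
typedef ULONG (*QUERY_ROUTINE)(void *Buffer, ULONG Length, ULONG *ReturnLength);

/* Keep growing the buffer until the query succeeds; the caller frees it. */
static void *QueryWithRetry(QUERY_ROUTINE Query, ULONG *FinalLength) {
  ULONG length = 0x100, needed = 0, status;
  void *buffer = malloc(length);
  while ((status = Query(buffer, length, &needed)) ==
         STATUS_INFO_LENGTH_MISMATCH) {
    length = needed ? needed : length * 2;
    buffer = realloc(buffer, length);
  }
  if (status != STATUS_SUCCESS) {
    free(buffer);
    return NULL;
  }
  if (FinalLength)
    *FinalLength = length;
  return buffer;
}

/* Stand-in query used only to exercise the loop: it demands 0x400 bytes,
   then fills them with a recognizable pattern. */
static ULONG FakeQuery(void *Buffer, ULONG Length, ULONG *ReturnLength) {
  *ReturnLength = 0x400;
  if (Length < 0x400)
    return STATUS_INFO_LENGTH_MISMATCH;
  memset(Buffer, 0xAA, 0x400);
  return STATUS_SUCCESS;
}
```

The same loop services every request type discussed below; only the information class and the output structure change.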

SystemModuleInformation

As far as my observations go, this request is the type most commonly used across kernel-mode exploits. To understand why, one should first take a look at what this operation returns:

typedef struct _SYSTEM_MODULE_INFORMATION_ENTRY {
  ULONG   Unknown1;
  ULONG   Unknown2;
  PVOID   Base;
  ULONG   Size;
  ULONG   Flags;
  USHORT  Index;
  /* Length of the module name, not including the path; this
     field contains a valid value only for the NTOSKRNL module */
  USHORT  NameLength;
  USHORT  LoadCount;
  USHORT  PathLength;
  CHAR    ImageName[256];
} SYSTEM_MODULE_INFORMATION_ENTRY, *PSYSTEM_MODULE_INFORMATION_ENTRY;

typedef struct _SYSTEM_MODULE_INFORMATION {
  ULONG  Count;
  SYSTEM_MODULE_INFORMATION_ENTRY Module[1];
} SYSTEM_MODULE_INFORMATION, *PSYSTEM_MODULE_INFORMATION;

What the listing presents is the main structure, containing the number of module information entries returned. Right after this value, Count SYSTEM_MODULE_INFORMATION_ENTRY structures follow, each containing information about one specific executable image loaded inside the kernel-mode address space.

As the names themselves suggest, after calling NtQuerySystemInformation(SystemModuleInformation,…) and passing a properly-sized buffer, the application obtains the Name, ImageBase and ImageSize of every single device driver (excluding those that are hidden by rootkits, of course ;)). This includes the core Windows kernel images like ntoskrnl.exe (or other flavors of the system core), HAL.dll (hardware support), win32k.sys (std graphic device driver) and so on. Because most write-what-where attacks are based on modifying ntoskrnl.exe memory regions (such as the [HalDispatchTable+4] technique), obtaining the kernel ImageBase value is an essential part of the entire exploitation process. Some very educational articles covering kernel-mode exploitation techniques can be found here (Analyzing local privilege escalations in win32k), here (Exploiting Common Flaws in Drivers) and here (Exploiting Windows Device Drivers).
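A minimal sketch of the lookup, assuming the structure layout listed above: on Windows, the buffer would first be filled by NtQuerySystemInformation(SystemModuleInformation, …), grown while the call returns STATUS_INFO_LENGTH_MISMATCH; the helper below then extracts a module’s base by the tail of its image name. FindModuleBase is my own illustrative helper, not part of any API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef unsigned long  ULONG;
typedef unsigned short USHORT;
typedef void          *PVOID;
typedef char           CHAR;

typedef struct _SYSTEM_MODULE_INFORMATION_ENTRY {
  ULONG  Unknown1;
  ULONG  Unknown2;
  PVOID  Base;
  ULONG  Size;
  ULONG  Flags;
  USHORT Index;
  USHORT NameLength;
  USHORT LoadCount;
  USHORT PathLength;
  CHAR   ImageName[256];
} SYSTEM_MODULE_INFORMATION_ENTRY;

typedef struct _SYSTEM_MODULE_INFORMATION {
  ULONG Count;
  SYSTEM_MODULE_INFORMATION_ENTRY Module[1];
} SYSTEM_MODULE_INFORMATION;

/* Return the kernel-mode base of the first module whose image name ends
   with `suffix` (e.g. "ntoskrnl.exe"), or NULL if there is none. Real
   paths may differ in letter case, so production code would compare
   case-insensitively. */
static PVOID FindModuleBase(const SYSTEM_MODULE_INFORMATION *info,
                            const char *suffix) {
  for (ULONG i = 0; i < info->Count; i++) {
    const char *name = info->Module[i].ImageName;
    size_t nlen = strlen(name), slen = strlen(suffix);
    if (nlen >= slen && strcmp(name + nlen - slen, suffix) == 0)
      return info->Module[i].Base;
  }
  return NULL;
}
```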

SystemHandleInformation

Another interesting request type, already used by some rootkit detection mechanisms (RootkitAnalytics.com). Its general purpose is to provide information about all the active HANDLE objects present in system memory. Performing an NtQuerySystemInformation(SystemHandleInformation,…) call will result in the output buffer being filled with structures of the following definition:

typedef struct _SYSTEM_HANDLE_INFORMATION {
  ULONG        ProcessId;
  UCHAR        ObjectTypeNumber;
  UCHAR        Flags;
  USHORT       Handle;
  PVOID        Object;
  ACCESS_MASK  GrantedAccess;
} SYSTEM_HANDLE_INFORMATION, *PSYSTEM_HANDLE_INFORMATION;

Not too many fields this time; however, the most important part of the struct is present – PVOID Object. This is where we can find another kernel-mode pointer (inaccessible from user-mode, of course). Apart from the address itself, the HANDLE is also described by the creator process ID, the type of the object and – most importantly – the HANDLE value itself. Therefore, object identification is very easy to perform and should not cause the coder too much of a problem. More information will follow in the upcoming paper :)
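In practice, an exploit typically creates an object it fully controls (say, an event via CreateEvent), queries SystemHandleInformation (the real output buffer begins with a ULONG entry count, followed by the structures shown above) and then scans the entries for its own process ID and handle value. The scan itself could look as follows; FindObjectAddress is an illustrative helper of mine:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long  ULONG;
typedef unsigned short USHORT;
typedef unsigned char  UCHAR;
typedef void          *PVOID;
typedef ULONG          ACCESS_MASK;

typedef struct _SYSTEM_HANDLE_INFORMATION {
  ULONG        ProcessId;
  UCHAR        ObjectTypeNumber;
  UCHAR        Flags;
  USHORT       Handle;
  PVOID        Object;
  ACCESS_MASK  GrantedAccess;
} SYSTEM_HANDLE_INFORMATION;

/* Find the kernel-mode address of the object behind a (pid, handle)
   pair by scanning the SystemHandleInformation entries. */
static PVOID FindObjectAddress(const SYSTEM_HANDLE_INFORMATION *entries,
                               ULONG count, ULONG pid, USHORT handle) {
  for (ULONG i = 0; i < count; i++)
    if (entries[i].ProcessId == pid && entries[i].Handle == handle)
      return entries[i].Object;
  return NULL;
}
```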

SystemLocksInformation

This time, what we are getting is information regarding the locks used by the kernel. Locks in Windows are special “multiple reader, single writer” synchronization mechanisms, otherwise known as “resources”. The output structure definition follows:

typedef struct _SYSTEM_LOCK_INFORMATION {
  PVOID   Address;
  USHORT  Type;
  USHORT  Reserved1;
  ULONG   ExclusiveOwnerThreadId;
  ULONG   ActiveCount;
  ULONG   ContentionCount;
  ULONG   Reserved2[2];
  ULONG   NumberOfSharedWaiters;
  ULONG   NumberOfExclusiveWaiters;
} SYSTEM_LOCK_INFORMATION, *PSYSTEM_LOCK_INFORMATION;

where the PVOID Address value points to an ERESOURCE structure in kernel memory. These structures can be initialized using the ExInitializeResourceLite routine and are said to be documented in the DDK (see also the Windows NT/2000 Native API Reference).
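Each Address field is therefore yet another kernel-mode pointer to harvest. As a toy illustration (my own helper, not any API), the snippet below picks the lowest of the returned lock addresses – a crude lower bound on where these pool-resident ERESOURCE structures live:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef unsigned long  ULONG;
typedef unsigned short USHORT;
typedef void          *PVOID;

typedef struct _SYSTEM_LOCK_INFORMATION {
  PVOID   Address;
  USHORT  Type;
  USHORT  Reserved1;
  ULONG   ExclusiveOwnerThreadId;
  ULONG   ActiveCount;
  ULONG   ContentionCount;
  ULONG   Reserved2[2];
  ULONG   NumberOfSharedWaiters;
  ULONG   NumberOfExclusiveWaiters;
} SYSTEM_LOCK_INFORMATION;

/* Return the numerically lowest ERESOURCE address among the locks. */
static PVOID LowestLockAddress(const SYSTEM_LOCK_INFORMATION *locks,
                               ULONG count) {
  PVOID lowest = NULL;
  for (ULONG i = 0; i < count; i++)
    if (lowest == NULL || (uintptr_t)locks[i].Address < (uintptr_t)lowest)
      lowest = locks[i].Address;
  return lowest;
}
```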

These are, more or less, all the places (known to me) one can request kernel-mode addresses from. Even though only three sources might seem like little, it is enough to create a really impressive (imho) kernel memory map, as you will see in a few minutes. If you – the blog reader – are aware of any other kind of system information request leading to a kernel address “leak”, please let me know through e-mail / post comments – I will be more than happy to add it to this list.

Processor specific structures

Apart from asking the system kernel to provide information about its memory layout, one can also use direct application -> processor communication in order to read addresses related to some of the architectural structures that the system has to implement to work correctly. To be more precise, these structures are the Global Descriptor Table (per processor/core) and the Interrupt Descriptor Table (per processor/core), plus the structures implemented inside the GDT (Task State Segment, Local Descriptor Table etc).

To start playing with these structures, one should begin by reading Intel Software Developer’s Manuals: Volume 1 (Basic Architecture) and Volume 3A, 3B (System Programming Guide) – all of these can be found here. The most interesting instructions here appear to be SGDT and SIDT, storing the GDTR and IDTR registers in user-specified memory.
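On 32-bit x86, both instructions store a 6-byte pseudo-descriptor into their memory operand: a 16-bit table limit followed by a 32-bit linear base, little-endian. Inline assembly along the lines of `unsigned char raw[6]; __asm__("sidt %0" : "=m"(raw));` fills it even from ring 3. A small decoder for the stored bytes:

```c
#include <assert.h>
#include <stdint.h>

/* Layout of the operand stored by 32-bit SGDT/SIDT: 16-bit limit
   followed by the 32-bit linear base of the descriptor table. */
typedef struct {
  uint16_t Limit;
  uint32_t Base;
} DTR32;

/* Decode the raw 6 bytes written by SGDT/SIDT (little-endian). */
static DTR32 DecodeDtr32(const unsigned char raw[6]) {
  DTR32 r;
  r.Limit = (uint16_t)(raw[0] | (raw[1] << 8));
  r.Base  = (uint32_t)raw[2]         | ((uint32_t)raw[3] << 8) |
            ((uint32_t)raw[4] << 16) | ((uint32_t)raw[5] << 24);
  return r;
}
```

For instance, the raw bytes FF 07 00 F0 03 80 decode to a limit of 0x07FF and a base of 0x8003F000 – a kernel-mode address, stored by the system yet readable from user-mode.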

What is more, the system itself also makes some segment-related API functions available (GetThreadSelectorEntry, for example).

It should be noted (once more), that each processor has its own GDT/IDT structure. Hence, in order to retrieve all the addresses possible, it is necessary to make sure that a specified thread/routine is executed in the context of a chosen processor. This can be achieved by using SetThreadAffinityMask or SetProcessAffinityMask API functions. Please refer to the KernelMAP source code to get more information about how to implement it in practice.
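The iteration itself can be sketched as follows; the Windows-specific calls are left as comments so that only the mask arithmetic is live code here, and the callback stands for whatever per-processor work (SGDT/SIDT, for instance) needs doing:

```c
#include <assert.h>
#include <stdint.h>

/* Affinity mask selecting just processor `cpu` (bit n <=> processor n). */
static uintptr_t AffinityMaskFor(unsigned cpu) {
  return (uintptr_t)1 << cpu;
}

/* Pin the current thread to each processor in turn and run `callback`
   there. On Windows, the commented-out lines do the actual pinning. */
static void ForEachProcessor(unsigned processor_count,
                             void (*callback)(unsigned cpu)) {
  for (unsigned cpu = 0; cpu < processor_count; cpu++) {
    /* SetThreadAffinityMask(GetCurrentThread(), AffinityMaskFor(cpu)); */
    /* Sleep(0);  -- yield, so the scheduler migrates the thread */
    callback(cpu);
  }
}
```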

KernelMAP v0.0.1

Apart from these strictly theoretical deliberations, I would also like to present a simple program of mine. Its main purpose is to gather all (or at least most) information about the kernel-mode memory layout and show it to the user in the most attractive way possible. The application consists of two windows: the first, a text window, prints some basic statistical information based on the data provided by the kernel. The second, a graphical window, is responsible for the actual visualization. Its size is 1024×512 pixels (0x400×0x200 in hex), and every virtual page is represented by a single pixel on the board. These pixels have various colors associated with them, depending on what type of data the page in question contains.
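For the record, the page-to-pixel mapping is simple arithmetic: assuming the standard 2 GB kernel half of the 32-bit address space (0x80000000–0xFFFFFFFF) and 4 kB pages, the 0x400×0x200 board holds exactly 0x80000 pixels – one per page. A reconstruction of the mapping (my sketch, not necessarily KernelMAP’s exact code):

```c
#include <assert.h>
#include <stdint.h>

#define BOARD_WIDTH 0x400u        /* 1024 pixels per board row     */
#define KERNEL_BASE 0x80000000u   /* start of the 2 GB kernel half */
#define PAGE_SHIFT  12            /* 4 kB virtual pages            */

/* Map a 32-bit kernel-mode address to its pixel on the 1024x512 board. */
static void AddressToPixel(uint32_t address, unsigned *x, unsigned *y) {
  uint32_t page = (address - KERNEL_BASE) >> PAGE_SHIFT;
  *x = page % BOARD_WIDTH;
  *y = page / BOARD_WIDTH;
}
```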

As the above description might not give you a good idea of how it looks on a real system, some screenshots from various systems follow:

Windows XP SP3

Windows Vista SP2

Windows 7 SP0

Some strictly technical info: this program is designed to be compiled with the MinGW GCC compiler and is probably not MS VC++ compatible. In order to work correctly, the program needs to find the SDL.dll, libpng3.dll and zlib1.dll libraries – you can find them inside the package.

A complete ZIP file, including the source code, executable and external DLL files can be downloaded from here (387kB)

Since I am myself very curious about how the kernel memory layout looks on different systems that I don’t have access to, you should feel encouraged to take your own shot and share it (I hope this is not too much of an information disclosure ;D). Furthermore, if you find any bugs in the existing code, or would like it to be extended with some additional functionality (like new kernel addresses I don’t know about yet), please let me know.

Every single comment is very welcome!

Have fun!

13 thoughts on “x86 Kernel Memory Space Visualization (KernelMAP v0.0.1)”

  1. Hi Dmitry! ;>

    Wow, so you’re apparently a memory visualisation specialist ;)

    Impressive, colorful graphics – KernelMAP was however designed to be as informative as possible in terms of kernel memory layout – might not be a perfect material for a book cover ;D

    Cheers,
    j00ru//vx

  2. Pingback: IDELIT
  3. One other useful address info leak is in Win32k/User32. There, user32!gSharedInfo contains the address of a shared memory region (mapped in user mode) that contains the entire _HANDLEENTRY table for the session; plus there’s the ulSharedDelta to help map user → kernel addresses. On Win7, gSharedInfo is exported by User32, so coding is even easier.

    Alex Ionescu’s Recon 2011 presentation and Tarjei Mandt’s BH 2011 paper describe this in detail.

Leave a Comment