Windows CSRSS Write Up: Inter-process Communication (part 1/3)

In the second post of the Windows CSRSS Write Up series, I would like to explain how the practical communication between the Windows Subsystem and a user’s process takes place under the hood. Since some major improvements were introduced in Windows Vista and later, the entire article is split into two parts – the first one gives an insight into what the communication channel really is, as well as how it is taken advantage of by both CSRSS and user processes. The second one, on the other hand, is going to talk through the modifications and new features shipped with Windows starting from Vista, as most of the basic ideas have remained the same for decades. As you already know what to expect, proceed to the next section :-)

Local Procedure Calls

Before starting to analyze the mysterious API interface implemented by CSRSS (otherwise known as CsrApi), one must first get some basic knowledge of the internal mechanism used to establish a stable inter-process connection and actually exchange information.

The basics

LPC is a packet-based, inter-process communication mechanism implemented in the NT kernel (supported since the very first Windows NT versions – most likely 3.51). The mechanism was originally designed so that it is possible to communicate between modules running at different processor privilege levels – i.e. process – process, process – driver and driver – driver connections are equally well supported. This is possible thanks to the fact that the required API functions are exposed to both user mode (via ntdll.dll) and kernel mode (via ntoskrnl.exe). Even though we are mostly concerned with the first scenario (where numerous ring-3 processes communicate with csrss.exe), practical examples of the remaining two also exist – take the Kernel Mode Security Support Provider Interface (KSecDD.sys) communicating with LSASS.exe, for instance. Apart from being used by certain system processes talking to each other (e.g. Lsass verifying user credentials on behalf of Winlogon), LPC is also a part of the RPC (Remote Procedure Call) implementation.

What should also be noted is that the LPC mechanism is directed towards synchronous communication, and therefore enforces a blocking scheme, where the client must wait until its request is dispatched and handled, instead of continuing its execution. As mentioned in the Introduction section, Windows Vista brought some major changes in this matter – one of them was the implementation of a brand new mechanism called ALPC (standing for Advanced or Asynchronous LPC – which one?), deprecating the old LPC mechanism. Since then, client – server requests can be performed in an asynchronous manner, so that the client is not forced to wait for the response for ages.

Underlying port objects

As it turns out, a great part of the Windows system functionality internally relies on special, dedicated objects (implemented by the Object Manager) – be it File System operations, Windows Registry management, thread suspension or whatever else you can think of – and the LPC mechanism isn’t any different. In this particular case, we have to deal with a port object, otherwise known as LpcPortObjectType. The OBJECT_TYPE structure describing the object in question is defined as follows:

kd> dt _OBJECT_TYPE 81feca90 /r
ntdll!_OBJECT_TYPE
   +0x000 Mutex            : _ERESOURCE
   +0x038 TypeList         : _LIST_ENTRY [ 0x81fecac8 - 0x81fecac8 ]
   +0x040 Name             : _UNICODE_STRING "Port"
     +0x000 Length           : 8
     +0x002 MaximumLength    : 0xa
     +0x004 Buffer           : 0xe1007110  "Port"
   +0x048 DefaultObject    : 0x80560960 Void
   +0x04c Index            : 0x15
   +0x050 TotalNumberOfObjects : 0xdb
   +0x054 TotalNumberOfHandles : 0xd9
   +0x058 HighWaterNumberOfObjects : 0xdb
   +0x05c HighWaterNumberOfHandles : 0xd9
   +0x060 TypeInfo         : _OBJECT_TYPE_INITIALIZER
      +0x000 Length           : 0x4c
      +0x002 UseDefaultObject : 0x1 ''
      +0x003 CaseInsensitive  : 0 ''
      +0x004 InvalidAttributes : 0x7b2
      +0x008 GenericMapping   : _GENERIC_MAPPING
      +0x018 ValidAccessMask  : 0x1f0001
      +0x01c SecurityRequired : 0 ''
      +0x01d MaintainHandleCount : 0 ''
      +0x01e MaintainTypeList : 0 ''
      +0x020 PoolType         : 1 ( PagedPool )
      +0x024 DefaultPagedPoolCharge : 0xc4
      +0x028 DefaultNonPagedPoolCharge : 0x18
      +0x02c DumpProcedure    : (null)
      +0x030 OpenProcedure    : (null)
      +0x034 CloseProcedure   : 0x805904f3        void  nt!ObReferenceObjectByName+0
      +0x038 DeleteProcedure  : 0x805902e1        void  nt!ObReferenceObjectByName+0
      +0x03c ParseProcedure   : (null)
      +0x040 SecurityProcedure : 0x8056b84f        long  nt!CcUnpinDataForThread+0
      +0x044 QueryNameProcedure : (null)
      +0x048 OkayToCloseProcedure : (null)
   +0x0ac Key              : 0x74726f50
   +0x0b0 ObjectLocks      : [4] _ERESOURCE

This object can be considered a specific gateway between two modules – it is used by both sides of the communication channel, which never see each other directly. More precisely, we are concerned with named ports only; this is because the port object must be easily accessible to every possible client process.

After the server correctly initializes a named port object – later utilized by the clients – it waits for an incoming connection. When a client eventually decides to connect, the server can verify whether further communication should or shouldn’t be allowed (usually based on the client’s CLIENT_ID structure). If the request is accepted, the connection is considered established – the client is able to send input messages and optionally wait for a response (depending on the packet type).

Every single packet exchanged between the client and server (including the initial connection requests) begins with a PORT_MESSAGE structure, of the following definition:

 //
 // LPC Port Message
 //
 typedef struct _PORT_MESSAGE
 {
   union
   {
     struct
     {
       CSHORT DataLength;
       CSHORT TotalLength;
     } s1;
     ULONG Length;
   } u1;

   union
   {
     struct
     {
       CSHORT Type;
       CSHORT DataInfoOffset;
     } s2;
     ULONG ZeroInit;
   } u2;

   union
   {
     LPC_CLIENT_ID ClientId;
     double DoNotUseThisField;
   };

   ULONG MessageId;

   union
   {
     LPC_SIZE_T ClientViewSize;
     ULONG CallbackId;
   };
 } PORT_MESSAGE, *PPORT_MESSAGE;

The above header contains the most essential information concerning the message, such as:

  • DataLength
    Determines the size of the buffer, following the header structure (in bytes)
  • TotalLength
    Determines the entire size of the packet; must be equal to sizeof(PORT_MESSAGE) + DataLength
  • Type
    Specifies the packet type, can be one of the following:
 //
 // LPC Message Types
 //
 typedef enum _LPC_TYPE
 {
   LPC_NEW_MESSAGE,
   LPC_REQUEST,
   LPC_REPLY,
   LPC_DATAGRAM,
   LPC_LOST_REPLY,
   LPC_PORT_CLOSED,
   LPC_CLIENT_DIED,
   LPC_EXCEPTION,
   LPC_DEBUG_EVENT,
   LPC_ERROR_EVENT,
   LPC_CONNECTION_REQUEST,
   LPC_CONNECTION_REFUSED,
   LPC_MAXIMUM
 } LPC_TYPE;
  • ClientId
    Identifies the packet sender by Process ID and Thread ID
  • MessageId
    A unique value, identifying a specific LPC message

Because LPCs can be used to send both small and large amounts of data, two distinct mechanisms of passing memory between the client and server were developed. In case 304 bytes or less are to be sent, a special LPC buffer is used and sent together with the header (described by Length and DataLength), while larger messages are passed using shared memory sections, mapped into both parties taking part in the data exchange.
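To see how the header fields described above fit together in practice, here is a minimal sketch of a custom message built on top of PORT_MESSAGE; the MY_MESSAGE structure, its Command/Argument fields and the InitializeMyMessage helper are purely illustrative names, not part of any real protocol.

 //
 // Example: a custom protocol message carrying the mandatory LPC header.
 // MY_MESSAGE, Command and Argument are illustrative names only.
 //
 typedef struct _MY_MESSAGE
 {
   PORT_MESSAGE Header;    // mandatory LPC header, always first
   ULONG        Command;   // protocol-specific payload starts here
   ULONG        Argument;
 } MY_MESSAGE, *PMY_MESSAGE;

 VOID InitializeMyMessage(PMY_MESSAGE Message, ULONG Command, ULONG Argument)
 {
   RtlZeroMemory(Message, sizeof(*Message));

   // DataLength covers the payload only, TotalLength the whole packet
   Message->Header.u1.s1.DataLength  = (CSHORT)(sizeof(MY_MESSAGE) - sizeof(PORT_MESSAGE));
   Message->Header.u1.s1.TotalLength = (CSHORT)sizeof(MY_MESSAGE);

   Message->Command  = Command;
   Message->Argument = Argument;
 }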

LPC API

Due to the fact that LPC is an internal, undocumented mechanism (mostly employed by system executables), one cannot make use of it based on the Win32 API alone. However, a set of LPC-management native routines is exported by the ntdll module; using these functions, one is able to build their own LPC-based protocol and use it to their own advantage (e.g. as a fast and convenient IPC technique). A complete list of the Native Calls follows:

  1. NtCreatePort
  2. NtConnectPort
  3. NtListenPort
  4. NtAcceptConnectPort
  5. NtCompleteConnectPort
  6. NtRequestPort
  7. NtRequestWaitReplyPort
  8. NtReplyPort
  9. NtReplyWaitReplyPort
  10. NtReplyWaitReceivePort
  11. NtImpersonateClientOfPort
  12. NtSecureConnectPort

The above list roughly corresponds to the cross-ref table for _LpcPortObjectType (excluding NtQueryInformationPort, NtRegisterThreadTerminatePort and a couple of other routines). All of the functions are more or less documented by independent researchers, Tomasz Nowak and Bo Branten – a brief description of each export is available on the net, though most of the symbols speak for themselves anyway. Having the function names, let’s take a look at how the functions can actually be taken advantage of!

Server – Setting up a port

In order to make the server reachable for client modules, it must create a named port by calling NtCreatePort (specifying the object’s name and an optional security descriptor):

NTSTATUS
NTAPI
NtCreatePort
 (OUT PHANDLE PortHandle,
  IN POBJECT_ATTRIBUTES ObjectAttributes,
  IN ULONG MaxConnectInfoLength,
  IN ULONG MaxDataLength,
  IN OUT PULONG Reserved OPTIONAL );

When the LPC port is successfully created, it becomes visible to other, external modules – potential clients.
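For illustration, a minimal sketch of the server-side call might look as follows; the port name and its location in the object namespace, the reuse of the MY_MESSAGE type introduced earlier, and the chosen limits are all assumptions of this example, not fixed requirements.

 HANDLE            ServerPort;
 UNICODE_STRING    PortName;
 OBJECT_ATTRIBUTES ObjectAttributes;
 NTSTATUS          Status;

 // the name and its place in the object namespace are purely illustrative
 RtlInitUnicodeString(&PortName, L"\\RPC Control\\ExampleLpcPort");
 InitializeObjectAttributes(&ObjectAttributes, &PortName, 0, NULL, NULL);

 Status = NtCreatePort(&ServerPort,
                       &ObjectAttributes,
                       0,                    // no connection data expected
                       sizeof(MY_MESSAGE),   // largest message we plan to exchange
                       NULL);                // Reserved
 if (!NT_SUCCESS(Status)) {
   // e.g. a name collision or insufficient rights on the target directory
 }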

Server – Port Listening

In order to accept an inbound connection, the server starts listening on the newly created port, waiting for clients. This is achieved using the NtListenPort routine of the following definition:

NTSTATUS
NTAPI
NtListenPort
(IN HANDLE PortHandle,
 OUT PLPC_MESSAGE ConnectionRequest);

Being dedicated to the synchronous approach, the function blocks the thread and waits until someone tries to make use of the port. And so, while the server is waiting, some client eventually tries to connect…
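Continuing the sketch from the previous section (same ServerPort handle and MY_MESSAGE type), the blocking wait could look like this; the cast assumes that LPC_MESSAGE is simply another name for the PORT_MESSAGE structure shown earlier, which is how the prototypes above use it.

 MY_MESSAGE ConnectionRequest;   // large enough for the PORT_MESSAGE header

 // blocks here until some client calls NtConnectPort on our named port
 Status = NtListenPort(ServerPort, (PLPC_MESSAGE)&ConnectionRequest);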

Client – Connecting to a Port

Knowing that the port has already been created and is currently waiting (residing inside NtListenPort), our client process is able to connect, specifying the port name used during the creation process. The following function will take care of the rest:

NTSTATUS
NTAPI
NtConnectPort
(OUT PHANDLE ClientPortHandle,
 IN PUNICODE_STRING ServerPortName,
 IN PSECURITY_QUALITY_OF_SERVICE SecurityQos,
 IN OUT PLPC_SECTION_OWNER_MEMORY ClientSharedMemory OPTIONAL,
 OUT PLPC_SECTION_MEMORY ServerSharedMemory OPTIONAL,
 OUT PULONG MaximumMessageLength OPTIONAL,
 IN PVOID ConnectionInfo OPTIONAL,
 IN PULONG ConnectionInfoLength OPTIONAL );
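A hedged sketch of the client side might look as follows; the port name mirrors the (illustrative) one from the server example, and the SECURITY_QUALITY_OF_SERVICE settings are merely one sensible choice.

 HANDLE                      ClientPort;
 UNICODE_STRING              PortName;
 SECURITY_QUALITY_OF_SERVICE SecurityQos;
 NTSTATUS                    Status;

 RtlInitUnicodeString(&PortName, L"\\RPC Control\\ExampleLpcPort");

 SecurityQos.Length              = sizeof(SecurityQos);
 SecurityQos.ImpersonationLevel  = SecurityImpersonation;
 SecurityQos.ContextTrackingMode = SECURITY_DYNAMIC_TRACKING;
 SecurityQos.EffectiveOnly       = FALSE;

 Status = NtConnectPort(&ClientPort,
                        &PortName,
                        &SecurityQos,
                        NULL,    // no client-side shared section
                        NULL,    // no server-side shared section
                        NULL,    // maximum message length not queried
                        NULL,    // no connection data...
                        NULL);   // ...and no length, either
 if (!NT_SUCCESS(Status)) {
   // e.g. STATUS_OBJECT_NAME_NOT_FOUND or STATUS_PORT_CONNECTION_REFUSED
 }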

Server – Accepting (or not) the connection

When a client tries to connect on the other side of the port, the server’s execution returns from NtListenPort, with the PORT_MESSAGE header filled with information. In particular, the server can access a CLIENT_ID structure, identifying the source process/thread. Based on that data, the server can make the final decision whether to allow or refuse the connection. Whichever option is chosen, the server calls the NtAcceptConnectPort function:

NTSTATUS
NTAPI
NtAcceptConnectPort
 (OUT PHANDLE ServerPortHandle,
  IN HANDLE AlternativeReceivePortHandle OPTIONAL,
  IN PLPC_MESSAGE ConnectionReply,
  IN BOOLEAN AcceptConnection,
  IN OUT PLPC_SECTION_OWNER_MEMORY ServerSharedMemory OPTIONAL,
  OUT PLPC_SECTION_MEMORY ClientSharedMemory OPTIONAL );

In case of a rejection, the execution ends here. The client returns from the NtConnectPort call with an appropriate error code (most likely STATUS_PORT_CONNECTION_REFUSED), and the server ends up calling NtListenPort again. If, however, the server decides to proceed with the connection, another routine must be called:

 NTSTATUS
 NTAPI
 NtCompleteConnectPort
 (IN HANDLE PortHandle);

After the above function is triggered, our connection is confirmed and ready to go!
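Putting both calls together, a sketch of the accepting path (reusing ConnectionRequest and Status from the listening sketch) might look like this; the decision itself would normally be based on the ClientId carried in the received header.

 HANDLE  CommunicationPort;
 BOOLEAN AcceptConnection = TRUE;   // e.g. after inspecting ConnectionRequest.Header.ClientId

 Status = NtAcceptConnectPort(&CommunicationPort,
                              NULL,                              // no alternative receive port
                              (PLPC_MESSAGE)&ConnectionRequest,  // the request returned by NtListenPort
                              AcceptConnection,
                              NULL,                              // no server-side section
                              NULL);                             // no client-side section

 if (NT_SUCCESS(Status) && AcceptConnection) {
   Status = NtCompleteConnectPort(CommunicationPort);
 }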

Server – Waiting for a message

After opening up a communication channel, the server must begin listening for incoming packets (or client-related events). Because of the specific nature of LPC, the server is unable to send messages on its own initiative – instead, it must wait for the client to send a request, and then possibly respond with a piece of data. And so, in order to (as always – synchronously) await a message, the server should call the following function:

 NTSTATUS
 NTAPI
 NtReplyWaitReceivePort
 (IN HANDLE PortHandle,
  OUT PHANDLE ReceivePortHandle OPTIONAL,
  IN PLPC_MESSAGE Reply OPTIONAL,
  OUT PLPC_MESSAGE IncomingRequest);

Client – Sending a message

Having the connection established, our client is now able to send regular messages at the time of its choosing. Moreover, the application can choose between one-way packets and interactive requests. By sending the first type of message, the client does not expect the server to reply – most likely, it is a short, informational packet. On the other hand, interactive messages require the server to fill in a return buffer of a given size. These two packet types can be sent using different native calls:

NTSTATUS
NTAPI
NtRequestPort
(IN HANDLE PortHandle,
 IN PLPC_MESSAGE Request);

or

NTSTATUS
NTAPI
NtRequestWaitReplyPort
(IN HANDLE PortHandle,
 IN PLPC_MESSAGE Request,
 OUT PLPC_MESSAGE IncomingReply);

Apparently, the difference between these two definitions is pretty much obvious :-)
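Reusing the ClientPort handle and the MY_MESSAGE layout from the earlier sketches, an interactive request could be issued roughly as follows; the command code is, again, purely hypothetical.

 MY_MESSAGE Request, Reply;

 InitializeMyMessage(&Request, 1 /* hypothetical command code */, 42);

 Status = NtRequestWaitReplyPort(ClientPort,
                                 (PLPC_MESSAGE)&Request,
                                 (PLPC_MESSAGE)&Reply);
 if (NT_SUCCESS(Status)) {
   // Reply.Command / Reply.Argument now contain whatever the server sent back
 }

 // a one-way packet would simply go through NtRequestPort(ClientPort, (PLPC_MESSAGE)&Request);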

Server – Replying to incoming packets

In case the client requests data from the server, the latter is obliged to respond with some output data. In order to do so, the following function should be used:

NTSTATUS
NTAPI
NtReplyPort
(IN HANDLE PortHandle,
 IN PLPC_MESSAGE Reply);
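Combining the last two calls, a minimal and heavily simplified server loop could be sketched as follows; real code would also have to handle LPC_CONNECTION_REQUEST, LPC_CLIENT_DIED and the other packet types listed earlier, and the "processing" step is purely illustrative.

 MY_MESSAGE Request, Reply;

 for (;;) {
   // wait (synchronously) for the next packet on the communication port
   Status = NtReplyWaitReceivePort(CommunicationPort,
                                   NULL,
                                   NULL,
                                   (PLPC_MESSAGE)&Request);
   if (!NT_SUCCESS(Status)) {
     break;
   }

   if (Request.Header.u2.s2.Type == LPC_REQUEST) {
     Reply = Request;        // reuse MessageId and ClientId from the request
     Reply.Argument += 1;    // "processing" – purely illustrative
     NtReplyPort(CommunicationPort, (PLPC_MESSAGE)&Reply);
   }
 }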

Client – Closing the connection

When, eventually, the client either terminates or decides to close the LPC connection, it can clean up by simply closing the port handle (thus dereferencing the port object) – the NtClose native call (or better, the documented CloseHandle function) can be used:

NTSTATUS
NTAPI
NtClose
(IN HANDLE ObjectHandle);

The entire IPC process has already been presented in a visual form – some very illustrative flow charts can be found here (LPC Communication) and here (LPC Part 1: Architecture).

All of the described functions are actually used while maintaining the CSRSS connection – you can check it yourself! What should be noted, though, is that the above summary covers the LPC communication itself (which can already be used to create an IPC framework), but tells nothing about what data, in particular, is sent over the named port. Obviously, the Windows Subsystem manages its own, internal communication protocol implemented by both client-side (ntdll.dll) and server-side (csrsrv.dll, winsrv.dll, basesrv.dll) system libraries.

In order to make it more convenient for kernel32.dll to make use of the CSR packets, a special subset of routines dedicated to CSRSS communication exists in ntdll.dll. The list of these functions includes, but is not limited to:

  1. CsrClientCallServer
  2. CsrClientConnectToServer
  3. CsrGetProcessId
  4. CsrpConnectToServer

Thanks to the above symbols, it is possible for kernel32.dll (and most importantly – us) to send custom messages on behalf of the current process, without a thorough knowledge of the protocol structure. Furthermore, ntdll.dll contains all the necessary, technical information required while talking to CSRSS, such as the port name to connect to. The next post is going to talk over both the client and server sides of the LPC initialization and usage, as it is practically performed – watch out :)

Conclusion

All in all, a great number of internal Windows mechanisms make use of LPC – both low-level ones, such as the Windows debugging facility or parts of the exception handling implementation, as well as high-level capabilities, including user credentials verification performed by LSASS. One can list all of the named (A)LPC port objects present in the system using the WinObj tool from Windows Sysinternals. It is also highly recommended to create one’s own implementation of an LPC-based inter-process communication protocol – a very instructive experience. An example source code can be found in the following package: link.

Have fun, leave comments and stay tuned for respective entries ;D

References

  1. LPC Communication
  2. Local Procedure Calls (LPCs)
  3. LPC Part 1: Architecture
  4. Sysinternals WinObj
  5. Windows Privilege Escalation through LPC
  6. Ntdebugging on LPC interface
