Design Patterns
This page documents recurring patterns for using kernite’s primitives. Each pattern shows how capabilities, IPC, memory, and scheduling compose to solve a common problem.
Pattern 1: Client-Server RPC
The fundamental microkernel communication pattern. A client invokes a service; the server processes the request and replies.
Kernel Primitives Used
- Endpoint — rendezvous point between client and server.
- Call (syscall 2) — client sends a request and blocks for the reply.
- ReplyRecv (syscall 3) — server replies to the previous client and waits for the next.
- Capability mint — server distributes badged endpoint capabilities to identify clients.
Server Loop
loop {
(badge, msg) = ReplyRecv(endpoint, reply_msg)
match msg.label {
OP_READ => reply_msg = handle_read(badge, msg)
OP_WRITE => reply_msg = handle_write(badge, msg)
_ => reply_msg = error(INVALID_OPERATION)
}
}
The server calls ReplyRecv which atomically replies to the previous client (using the one-shot reply capability) and waits for the next request.
The badge identifies which client sent the message (set during mint).
Client Call
result = Call(server_endpoint, request_msg)
The client blocks until the server replies. Priority inheritance ensures the server runs at least at the client’s priority while processing the request.
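No kernel is needed to see the shape of the pattern. Below is a Python toy in which a Queue stands in for the endpoint and a fresh one-shot queue per call stands in for the reply capability; all names and message shapes are illustrative, not kernite APIs:

```python
import queue
import threading

OP_READ, OP_WRITE = 1, 2
endpoint = queue.Queue()            # stands in for the kernel Endpoint

def server():
    # ReplyRecv analogue: take the next request, reply on its one-shot queue
    while True:
        badge, label, payload, reply = endpoint.get()
        if label == OP_READ:
            reply.put(("ok", f"data-for-client-{badge}"))
        elif label == OP_WRITE:
            reply.put(("ok", None))
        else:
            reply.put(("error", "INVALID_OPERATION"))

def call(badge, label, payload=None):
    # Call analogue: send, then block on a fresh one-shot reply queue
    reply = queue.Queue(maxsize=1)
    endpoint.put((badge, label, payload, reply))
    return reply.get()              # blocks until the server replies

threading.Thread(target=server, daemon=True).start()
status, data = call(badge=7, label=OP_READ)
```

The badge travels with each request here, just as a mint-time badge would identify the client to the server.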
Pattern 2: IRQ-Driven Device Driver
A userspace driver handles hardware interrupts through the notification mechanism.
Kernel Primitives Used
- IrqHandler — binds an IRQ to a notification.
- Notification — receives asynchronous IRQ signals.
- Wait (syscall 6) — driver blocks until an IRQ fires.
- IRQ_ACK (invoke label 0x61) — re-enables the interrupt.
Driver Loop
// Setup: IRQ_SET_HANDLER(irq_handler_cap, notification_cap)
loop {
bits = Wait(notification)
// IRQ fired — handle the device
read_device_status()
process_data()
// Re-enable the interrupt
Invoke(irq_handler_cap, IRQ_ACK)
}
The Wait blocks until the hardware interrupt fires.
The kernel’s interrupt handler calls dispatch_irq(), which signals the notification with 1 << (irq_num % 64).
The driver is woken, processes the interrupt, and acknowledges it to re-enable delivery.
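The signaling arithmetic from dispatch_irq() can be checked in isolation. A sketch (the 64-bit word width comes from the text above; function names are illustrative):

```python
def irq_signal_bit(irq_num: int) -> int:
    # dispatch_irq() signals the bound notification with this bit
    return 1 << (irq_num % 64)

def pending_irqs(bits: int) -> list:
    # Decode a 64-bit notification word into the lines (mod 64) that fired
    return [i for i in range(64) if bits & (1 << i)]

bits = irq_signal_bit(5) | irq_signal_bit(33)
```

Because bits are OR-ed into the word, several IRQ lines can be pending in one wakeup, and the driver should decode all of them.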
Pattern 3: Combined Server (Endpoint + Notification)
A server that handles both client RPCs and asynchronous events (IRQs, timers) in a single loop.
Kernel Primitives Used
- Endpoint — for client requests.
- Bound notification — for async events.
- TCB_BIND_NOTIFICATION (invoke label 0x49) — binds the notification to the server's TCB.
Server Loop
// Setup: TCB_BIND_NOTIFICATION(self_tcb, notification_cap)
loop {
(badge, msg) = ReplyRecv(endpoint, reply_msg)
if woken_by_notification {
// Notification fired (IRQ, timer, etc.)
bits = Poll(notification)
handle_async_event(bits)
continue // no client to reply to
}
// Normal client RPC
reply_msg = handle_request(badge, msg)
}
When the server is blocked in Recv on the endpoint, a signal on the bound notification wakes it immediately.
The server checks woken_by_notification to distinguish between client messages and async events.
The notification pre-check runs before entering the endpoint recv queue: if bits are pending, they are consumed without blocking.
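The dispatch logic can be mimicked in plain Python by merging both wake-up sources into one queue, with a sentinel badge standing in for the woken_by_notification flag (a toy model; all names are illustrative):

```python
import queue

NOTIF_BADGE = -1            # sentinel badge for the bound notification
endpoint = queue.Queue()    # endpoint and bound notification share one wait queue here

def signal(bits):
    # Bound-notification signal: wakes the server even while it blocks in Recv
    endpoint.put((NOTIF_BADGE, bits))

def client_send(badge, msg):
    endpoint.put((badge, msg))

def serve_one(handle_request, handle_async_event):
    badge, payload = endpoint.get()
    if badge == NOTIF_BADGE:        # the woken_by_notification case
        handle_async_event(payload)
        return None                 # no client to reply to
    return handle_request(badge, payload)

events = []
signal(0b10)
client_send(7, "read")
r1 = serve_one(lambda b, m: f"{m}-reply-for-{b}", events.append)
r2 = serve_one(lambda b, m: f"{m}-reply-for-{b}", events.append)
```

The sentinel badge plays the role the kernel's pre-check plays: the server can always tell whether it owes a reply.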
Pattern 4: Multi-Endpoint Server
A server that listens on multiple endpoints simultaneously.
Kernel Primitives Used
- RecvAny (syscall 23) / ReplyRecvAny (syscall 24) — wait on up to 32 endpoints.
Server Loop
// Register endpoints
register_recv_wait(endpoints[0..N])
loop {
(badge, msg, source_index) = ReplyRecvAny(endpoints, reply_msg)
match source_index {
0 => reply_msg = handle_service_a(badge, msg)
1 => reply_msg = handle_service_b(badge, msg)
N => reply_msg = handle_notification() // bound notification
_ => reply_msg = error()
}
}
The kernel enqueues the thread in all registered endpoints' recv queues.
Whichever endpoint receives a message first wakes the thread and dequeues it from all others.
The source_index identifies which endpoint delivered the message.
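A non-blocking sketch of that delivery in Python (it models only already-queued messages; the real kernel instead enqueues the blocked thread in every registered recv queue, as described above):

```python
from collections import deque

def recv_any(endpoints):
    # ReplyRecvAny analogue for already-pending messages: return the first
    # queued message plus the index of the endpoint that held it
    for source_index, ep in enumerate(endpoints):
        if ep:
            badge, msg = ep.popleft()
            return badge, msg, source_index
    return None                     # the real syscall would block here

endpoints = [deque() for _ in range(3)]
endpoints[1].append((42, "req-b"))
badge, msg, src = recv_any(endpoints)
```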
Pattern 5: Memory Allocation for a Child Process
The memory manager server (mmsrv) allocates memory for child processes using the four-tier memory model.
Kernel Primitives Used
- Untyped memory — raw physical memory source.
- MemoryObject — page-granular abstraction.
- VSpace — hardware page table mapping.
Allocation Flow
1. Create MO: UNTYPED_RETYPE(untyped_cap, MemoryObject, size_bits, dest_cnode, slot).
2. Commit pages: MO_COMMIT(mo_cap, offset, count) — allocates physical frames from the untyped source.
3. Map into child VSpace: VSPACE_MAP_MO(child_vspace_cap, va, mo_cap, mo_offset, page_count, perms).
4. The child process accesses the memory at the mapped virtual address.
For demand-paged allocation, step 2 is skipped. The kernel installs demand markers in the PTEs. When the child first accesses a page, the page fault fast-path commits the page without IPC to mmsrv.
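The demand-paged variant can be modeled in a few lines of Python. VSpaceModel, map_demand, and the frame allocator below are illustrative stand-ins for kernel structures, not kernite's implementation:

```python
DEMAND = "demand"                     # stands in for the demand marker in a PTE

class VSpaceModel:
    def __init__(self, frame_alloc):
        self.ptes = {}                # va -> frame number, or DEMAND
        self.frame_alloc = frame_alloc  # stands in for the untyped source

    def map_demand(self, va, page_count, page_size=4096):
        # VSPACE_MAP_MO without a prior MO_COMMIT: install demand markers only
        for i in range(page_count):
            self.ptes[va + i * page_size] = DEMAND

    def access(self, page_va):
        # Page-fault fast path: commit a frame on first touch, no IPC to mmsrv
        if self.ptes.get(page_va) is DEMAND:
            self.ptes[page_va] = self.frame_alloc()
        return self.ptes[page_va]

frames = iter(range(100))
vs = VSpaceModel(lambda: next(frames))
vs.map_demand(va=0x4000_0000, page_count=4)
first = vs.access(0x4000_0000)        # fault commits a frame
again = vs.access(0x4000_0000)        # already committed, no new frame
```

Note that untouched pages keep their demand markers and consume no physical frames until first access.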
Pattern 6: Fork (COW Clone)
The process manager implements fork() using COW-cloned MemoryObjects.
Kernel Primitives Used
- MO_CLONE (invoke label 0x93) — creates a COW child MO.
- VSPACE_FORK_RANGE (invoke label 0x9A) — bulk fork operation.
Fork Flow
1. Create child VSpace: UNTYPED_RETYPE(untyped, VSpace, …).
2. Create child TCB: UNTYPED_RETYPE(untyped, Tcb, …).
3. Bulk COW fork: VSPACE_FORK_RANGE(parent_vspace, child_vspace). For each mapped region in the parent:
   - Clone the backing MO → creates a CowChildMO.
   - Map the clone into the child VSpace.
   - Mark both parent and child PTEs as read-only + COW.
4. Configure child TCB: set registers (IP, SP), VSpace, CSpace.
5. Resume child: TCB_RESUME(child_tcb).
After fork, both parent and child share all pages. The first write by either process triggers a COW fault, which the kernel resolves in the fast-path by copying the page.
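A refcount-based toy model of that COW behavior in Python (Page, CowSpace, and the fault logic are illustrative, not kernel code):

```python
class Page:
    def __init__(self, data):
        self.data = data
        self.refs = 1

class CowSpace:
    def __init__(self, pages):
        self.pages = pages            # va -> Page, all mapped read-only + COW

    def fork(self):
        # VSPACE_FORK_RANGE analogue: share every page, bump refcounts
        for p in self.pages.values():
            p.refs += 1
        return CowSpace(dict(self.pages))

    def write(self, va, data):
        # COW fault fast path: copy only if the page is still shared
        p = self.pages[va]
        if p.refs > 1:
            p.refs -= 1
            p = Page(p.data)
            self.pages[va] = p
        p.data = data

parent = CowSpace({0: Page("hello")})
child = parent.fork()
child.write(0, "world")               # child gets a private copy
```

Only the written page is copied; every untouched page stays shared, which is what makes fork cheap.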
Pattern 7: Capability Delegation and Confinement
Controlling what a child process can access.
Delegation (Granting Access)
// Server holds endpoint_cap with GRANT
CNODE_COPY(child_cnode, slot, server_cnode, endpoint_slot, SEND|RECV)
The child receives a capability with SEND|RECV but no GRANT.
It can use the endpoint but cannot delegate access to others.
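The rights narrowing behind CNODE_COPY amounts to a bitwise AND of the source rights with the requested mask. A sketch with made-up bit positions (kernite's actual encoding may differ):

```python
# Illustrative rights bits, not kernite's real encoding
SEND, RECV, GRANT = 0b001, 0b010, 0b100

def cnode_copy_rights(src_rights: int, mask: int) -> int:
    # A copy can only narrow rights, never widen them
    return src_rights & mask

server_rights = SEND | RECV | GRANT
child_rights = cnode_copy_rights(server_rights, SEND | RECV)
```

Because the operation is an AND, requesting a right the source capability lacks silently yields nothing, so confinement holds even against a buggy or malicious requester.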
Pattern 8: Timed Operations
Implementing timeouts for IPC operations.
Kernel Primitives Used
- SendTimed (syscall 21) / RecvTimed (syscall 22) — IPC with timeout.
- RecvAnyTimed (syscall 25) — multi-endpoint receive with timeout.
- NanoSleep (syscall 13) — pure sleep.
Timeout RPC
result = SendTimed(endpoint, msg, timeout_ns)
if result == TIMEOUT {
// Server did not respond within timeout
handle_timeout()
}
The thread is placed in both the endpoint queue and the sleep queue. Whichever fires first (server response or timeout) wakes the thread.
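The two outcomes can be mimicked in user space with a bounded wait. A Python toy in which queue.Queue.get with a timeout stands in for the sleep-queue wakeup (names are illustrative):

```python
import queue
import threading

TIMEOUT = "timeout"

def recv_timed(endpoint, timeout_ns):
    # RecvTimed analogue: block for at most timeout_ns, then give up
    try:
        return endpoint.get(timeout=timeout_ns / 1e9)
    except queue.Empty:
        return TIMEOUT

slow = queue.Queue()                  # a server that never replies
timed_out = recv_timed(slow, timeout_ns=50_000_000)      # 50 ms

fast = queue.Queue()                  # reply arrives before the deadline
threading.Thread(target=lambda: fast.put("pong"), daemon=True).start()
got = recv_timed(fast, timeout_ns=1_000_000_000)
```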
Related Pages
- Endpoints — synchronous IPC operations
- Notifications — async signaling and IRQ delivery
- Memory Objects — MO commit, clone, mapping
- Capabilities — copy, mint, revoke operations
- Threads — priority inheritance during Call