Week 11

Systemd and Services (IPC)

This week explores how Linux manages background services and process communication through Inter-Process Communication (IPC) using named pipes (FIFOs). You’ll create a simulated service that uses a FIFO to pass messages between processes, records its activity to a log file, and stops gracefully in response to system signals.

F-I-F-O-L-C

F — Folder

In Linux, every service—whether managed directly by systemd or launched manually—operates within a defined environment. This environment provides controlled access to executable files, runtime communication channels, and persistent logs. Such organization is essential for maintaining isolation between processes and for ensuring that inter-process communication (IPC) occurs safely and predictably.

When a service is started under systemd, the system automatically prepares directories for its operation, such as those used for runtime sockets, temporary files, and log outputs. Reproducing this structure manually illustrates how systemd maintains order in a multi-process environment. Before a service begins execution, it should have distinct spaces for its program code, communication endpoints, and logging output—each supporting a specific stage of the service lifecycle.

Folder | Purpose in Service Lifecycle | Systemd Equivalent
bin/ | Contains the executable or script that defines the service’s primary function. | /usr/lib/systemd/system/ or /usr/local/bin/
logs/ | Captures all messages written to standard output or error, providing an auditable execution history. | /var/log/ or journald
ipc/ | Hosts the mechanisms for inter-process communication, such as FIFOs or sockets, allowing data exchange between service components. | /run/ or /tmp/

This structure reflects the principle of process containment, a fundamental design goal of systemd. Each service executes within a bounded workspace that defines where it reads input, writes output, and communicates with other processes. By isolating these components, the operating system ensures that concurrent services remain independent, secure, and traceable throughout their lifecycle.
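
As a small illustration, this workspace can be created by hand before any process is started. The fifo-demo directory name below, like every path in the examples that follow, is only an illustrative choice for this exercise, not a location the system requires.

# Create a self-contained workspace for the simulated service.
# "fifo-demo" and its subdirectories are example names only.
mkdir -p ~/fifo-demo/bin ~/fifo-demo/logs ~/fifo-demo/ipc
cd ~/fifo-demo

# Confirm the layout the later steps will rely on.
ls -R .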


I — Initialize FIFO

After defining the service environment, the next step is to establish a communication channel through which processes can exchange data. In Linux, one of the simplest and most instructive mechanisms for Inter-Process Communication (IPC) is the FIFO, or named pipe.

A FIFO behaves like a transient data stream that connects a producer and a consumer. Unlike regular files, it does not store data permanently; instead, it transmits information sequentially between processes. When one process writes to the pipe, another can immediately read from it—providing a lightweight, ordered, and blocking method of synchronization.

Within systemd-managed systems, similar mechanisms underlie service communication. Many background daemons expose sockets or message queues to receive requests, report status, or coordinate with other units. Initializing a FIFO manually demonstrates this same principle: before a service can begin operation, it must have a well-defined channel for message exchange.

Concept | Description
FIFO (First In, First Out) | A special file type used for one-way data transfer between processes. Data is read in the same order it was written.
Named Pipe | A FIFO that persists in the filesystem with a name, allowing unrelated processes to connect to it.
Producer / Consumer Model | The process writing to the FIFO is the producer; the one reading from it is the consumer. Synchronization occurs naturally because the writer blocks until the reader is ready.
Blocking Behavior | Ensures ordered communication—writes pause until reads occur, preventing data loss or race conditions.

When preparing a service, initializing the FIFO (for instance, creating one in the ipc directory) formalizes the communication boundary of that process. The service that consumes data from the FIFO functions as the receiver, while user commands or other applications that write into it act as senders.

Establishing this channel before the service starts ensures that once the process begins execution, input, output, and synchronization are already defined—precisely the way systemd prepares sockets for socket-activated services prior to launching their executables.
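
A minimal sketch of this initialization step, assuming the fifo-demo workspace from the previous section, could look like the following; the pipe name service.fifo is arbitrary.

# Create the named pipe inside the ipc directory (only if it does not exist yet).
cd ~/fifo-demo
[ -p ipc/service.fifo ] || mkfifo ipc/service.fifo

# The leading "p" in the mode string identifies the file as a FIFO.
ls -l ipc/service.fifo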

F — Fork Service

Once the communication channel has been established, the service itself must be created and executed as an independent process. In Linux, this independence is achieved through process forking—the mechanism that allows a new process to be spawned from an existing one.

When a program is executed in the background, either manually or under systemd, the operating system duplicates the parent process and allows the child process to continue running after the parent terminates. This is the foundation of how daemons and background services operate. Forking transforms an interactive command into a self-contained service capable of running continuously without user supervision.

A minimal example of such a service might involve a script located in the bin directory that reads from the FIFO and records messages to a log file. When executed with a background operator (for example, using nohup and &), the shell returns control to the user while the child process remains active in the background. The process ID (PID) can be stored in a file so that it can later be monitored or terminated gracefully.
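
One possible version of such a script, saved as bin/logger_service.sh in the hypothetical fifo-demo workspace, is sketched below. It also traps termination signals so that the Control step later in this lecture can stop it cleanly; all names and paths are illustrative.

#!/bin/bash
# bin/logger_service.sh -- illustrative consumer process for the FIFO.
# Reads messages from the named pipe and appends timestamped lines to a log.

FIFO="$HOME/fifo-demo/ipc/service.fifo"
LOG="$HOME/fifo-demo/logs/service.log"

# Write a final entry and exit when a termination signal arrives.
cleanup() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') Service stopping..." >> "$LOG"
    exit 0
}
trap cleanup SIGTERM SIGINT

echo "$(date '+%Y-%m-%d %H:%M:%S') Service started (PID $$)" >> "$LOG"

# Main loop: block on the FIFO, then log each line that arrives.
while true; do
    if read -r line < "$FIFO"; then
        echo "$(date '+%Y-%m-%d %H:%M:%S') Received: $line" >> "$LOG"
    fi
done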

Concept | Description
Fork | The system call that duplicates a process to create a new one. The child inherits the environment but runs independently.
Daemonization | The act of detaching a process from the terminal to let it operate autonomously as a background service.
nohup | Prevents a process from terminating when the controlling terminal closes, ensuring persistence.
PID Tracking | Saving the process ID allows later control of the service (for status checks or termination).

Forking a service manually in this way reflects the same logic that systemd applies automatically. When systemd launches a service, it forks a process, manages its standard input and output streams, and supervises its lifecycle. By performing these steps explicitly, you observe how systemd abstracts these details to maintain process supervision, graceful termination, and reliable restart behavior across the entire system.
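
Launching that script as a background process, under the same assumed layout, might look like this; the service.pid filename is again just a convention for the exercise.

cd ~/fifo-demo
chmod +x bin/logger_service.sh

# Start the script detached from the terminal; the shell returns immediately.
nohup bin/logger_service.sh > /dev/null 2>&1 &

# Record the child's PID so the service can be checked or stopped later.
echo $! > service.pid
cat service.pid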


O — Output

Once the service is running as an independent process, it must handle input and output streams—the essential communication pathways of any Linux service. In the context of this exercise, the service reads messages from the FIFO (its input channel) and writes timestamped records to a log file (its output channel). This relationship illustrates how systemd and other service managers maintain controlled data flow between processes.

In Linux, every process has three fundamental I/O streams: standard input (stdin), standard output (stdout), and standard error (stderr). A service that reads from a FIFO effectively redirects its standard input from the named pipe, while writing logs redirects its standard output to a file. This setup enables continuous, real-time communication between independent processes.

When a user or another process writes to the FIFO—using, for instance, a shell command that echoes or prints a message into the pipe—the service immediately receives that data, processes it, and appends the output to its log. This cycle simulates the event-driven communication model common in system daemons, where input requests trigger corresponding output actions.
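
Assuming the FIFO and log paths used in the earlier sketches, that producer/consumer exchange can be exercised from any other shell:

# Act as the producer: write one message into the pipe.
echo "hello from the shell" > ~/fifo-demo/ipc/service.fifo

# The running service (the consumer) receives it and appends it to the log.
tail -n 3 ~/fifo-demo/logs/service.log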

Concept | Description
Standard Streams | Fundamental channels (stdin, stdout, stderr) through which processes read and write data.
FIFO Input | The service receives data through the named pipe, acting as its dynamic input source.
Log Output | Processed messages and timestamps are written to an output file, preserving execution history.
Real-Time Exchange | The FIFO enables immediate data transfer between producer and consumer without intermediate storage.

In systemd-managed services, these same data flows are handled automatically. Systemd captures standard output and error streams from the service process and routes them to the system journal. By manually implementing input through a FIFO and output through a log file, you reproduce the underlying mechanics of service I/O redirection—the process through which Linux systems record, trace, and supervise the live behavior of running daemons.
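
For comparison, when a unit really is managed by systemd, the same output is read back from the journal rather than from a hand-maintained file; the unit name below is only a placeholder.

# Follow the journal entries of a hypothetical systemd-managed unit,
# analogous to tailing the manual log file.
journalctl -u some-service.service -f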

L — Logs

Logging is a fundamental aspect of any persistent service. It provides a continuous record of events, messages, and system responses, allowing administrators to verify correct operation, diagnose failures, and audit historical behavior. In Linux, systemd centralizes this function through the journald service, which collects output from all managed units and stores it in a unified, queryable log.

In a manually configured service, this same principle is demonstrated by writing all output to a dedicated file in the logging directory. Each message appended to this file represents the service’s observable state at a specific time. When the service reads input from the FIFO, it formats and records that message—typically including a timestamp and contextual details—to ensure traceability.

This practice exemplifies temporal accountability, a core concept in service supervision: every action taken by a process must be reproducible and reviewable. Logs allow administrators to reconstruct system behavior after execution has completed, providing both operational assurance and forensic visibility.

Concept | Description
Persistent Logging | Retains a chronological record of all service activity, even after termination.
Timestamps | Annotate each entry to capture precise event timing and ordering.
Append Mode | Ensures that new records are added without overwriting previous data.
Monitoring and Analysis | Tools such as tail or grep can inspect live or historical service behavior.
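
As a brief illustration of that last row, assuming the log path used in the earlier sketches:

# Watch the log in real time while messages flow through the FIFO...
tail -f ~/fifo-demo/logs/service.log

# ...or filter the history afterwards, for example for shutdown events.
grep "stopping" ~/fifo-demo/logs/service.log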

By maintaining an explicit log file for the service, you mirror systemd’s journal functionality on a smaller scale. While journald automatically captures output streams and tags them by service name, this manual approach reveals how logging integrates with the service lifecycle itself—starting when the process launches, updating dynamically as data flows through the FIFO, and finalizing upon termination.
Effective log management thus transforms raw process output into structured operational insight—the same foundation upon which Linux’s broader observability tools are built.

C — Control


A service is not complete until it can be supervised and controlled. In Linux, this control is achieved through the system’s process management and signaling mechanisms, which allow administrators to start, stop, and monitor running processes predictably.

When a service is executed manually, control is often maintained through its Process ID (PID)—a unique identifier assigned by the kernel. By storing this PID when the service starts, the administrator or control script can later reference it to check the process status or terminate it in an orderly fashion. This approach parallels how systemd tracks every service it manages, maintaining precise knowledge of which processes belong to which units.

To stop a service cleanly, the SIGTERM (Terminate Signal) is sent to its process. Unlike forced termination signals, SIGTERM requests that the process perform any necessary cleanup—closing open files, writing final log entries, or releasing communication channels—before exiting. This controlled termination ensures that the service concludes its lifecycle without leaving residual data or orphaned processes.

Concept | Description
Process ID (PID) | A unique identifier used by the kernel to manage active processes. Enables precise tracking and targeted signaling.
SIGTERM | A software signal that instructs a process to terminate gracefully. Allows cleanup before exit.
Signal Trapping | The service script can intercept termination signals to perform final operations, such as writing “Service stopping…” to its log.
Lifecycle Completion | The process transitions from the active to the stopped state, releasing system resources and closing communication channels.
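
Under the same assumed layout and PID file, those control steps reduce to a few commands:

cd ~/fifo-demo

# Check whether the service is still alive (kill -0 sends no signal;
# it only tests that the PID can be signalled).
kill -0 "$(cat service.pid)" && echo "service is running"

# Ask the service to stop gracefully; the trap in the script writes
# its final log entry before exiting.
kill -TERM "$(cat service.pid)"

# Confirm that the shutdown was recorded.
tail -n 1 logs/service.log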

This sequence—start, run, stop—represents the complete service lifecycle, both for manually launched scripts and for daemons managed under systemd. While systemd automates these transitions through commands like systemctl start, systemctl stop, and systemctl status, replicating them manually demonstrates the underlying operating system behavior.
By understanding how a simple background process responds to signals, maintains state through its PID, and logs its termination, one gains direct insight into the mechanisms that underpin systemd’s reliability as a service manager: deterministic control, consistent supervision, and graceful recovery.
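
Purely to illustrate that parallel, the manual service could in principle be handed over to systemd with a minimal unit file; the unit name and installation path below are hypothetical and not part of this exercise.

# Write a minimal, hypothetical unit file for the demo service.
# All names and paths here are placeholders, and a real unit would need
# the script to use absolute paths instead of $HOME.
sudo tee /etc/systemd/system/fifo-logger.service > /dev/null <<'EOF'
[Unit]
Description=FIFO logger demo service

[Service]
ExecStart=/opt/fifo-demo/bin/logger_service.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# systemd then takes over the start/run/stop lifecycle described above.
sudo systemctl daemon-reload
sudo systemctl start fifo-logger.service
systemctl status fifo-logger.service
sudo systemctl stop fifo-logger.service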

This concludes Lecture 11. Please return to Blackboard to access the Week 11 materials.
