14.2 UTS Namespace Implementation

In the previous section, we mentioned that the kernel doesn't care about a namespace's "name" — it distinguishes different instances solely by inode number. This sounds minimalist, but minimalism is the highest form of design.

Before diving into the network namespace, the most complex of them all, we need to start with something easier. The UTS namespace is that low-hanging fruit.

Why start here? Because it involves all the core elements of namespace implementation: struct definition, reference counting, how it associates with a process, and how system calls adapt. Once we understand these few dozen lines of code, the network namespace won't look like gibberish later on.

Starting with a Struct

To implement the UTS namespace, the kernel introduces a struct called uts_namespace.

You can think of it as an "ID card" — this card records only the most basic information: who I am (nodename) and which domain I belong to (domainname).

struct uts_namespace {
    struct kref kref;
    struct new_utsname name;
    struct user_namespace *user_ns;
    unsigned int proc_inum;
};

The design of this "ID card" is quite deliberate. Let's break down each field:

  • kref: This is a reference count. There are many types of counters in the kernel. The UTS namespace uses the more common kref, managing its lifecycle through kref_get() and kref_put(). Here's a fun fact: the UTS and PID namespaces use kref, while the other four namespaces use the lower-level atomic_t. This is a historical artifact — just understand it and don't dwell on it.

  • name: This is the real meat. It's a new_utsname struct containing nodename (the hostname) and domainname (the NIS domain name). This is the core data we want to isolate.

  • user_ns: Points to the user namespace. Namespaces aren't isolated islands. UTS needs to know which user context it belongs to, as this is the foundation for permission control.

  • proc_inum: The proc inode number we emphasized in the previous section. The kernel doesn't distinguish namespaces by string names; it relies entirely on this unique numeric ID.

How Does a Process Find It?

Having a struct isn't enough — the process needs to be able to access it. Remember the nsproxy we mentioned in the previous section? That "middleman" now comes into play.

struct nsproxy {
    ...
    struct uts_namespace *uts_ns;
    ...
};

When a process wants to query its hostname, it follows the current->nsproxy->uts_ns path to reach this "ID card." Once this pointer points to a different uts_namespace, the process is living in a new isolated environment.

What Data Is Actually Isolated?

Let's peel back the core of uts_namespace and see what new_utsname actually looks like. This is the essence of the UTS namespace:

struct new_utsname {
    char sysname[__NEW_UTS_LEN + 1];
    char nodename[__NEW_UTS_LEN + 1];
    char release[__NEW_UTS_LEN + 1];
    char version[__NEW_UTS_LEN + 1];
    char machine[__NEW_UTS_LEN + 1];
    char domainname[__NEW_UTS_LEN + 1];
};

The nodename here is the hostname we're familiar with, and domainname is the NIS domain name.

Note that although this struct contains fields like sysname (OS name) and release (kernel version), the UTS namespace only allows you to modify nodename and domainname. The other fields are globally read-only — you can't use the UTS namespace to pretend you're running on a different kernel version, as that logically makes no sense.

The Evolution of System Calls

We have the struct and the data. Now the key question is: how do we change the system calls?

Without namespaces, the gethostname() system call only needs to read a string from a global variable. But with namespaces, it must know to "read the name from the current process's namespace."

The kernel provides a helper function utsname() specifically for this purpose:

static inline struct new_utsname *utsname(void)
{
    return &current->nsproxy->uts_ns->name;
}

It's that simple: whoever calls it gets back their own new_utsname pointer.

Now we can look at the actual gethostname() system call implementation. This is a textbook example of "how to adapt existing code to namespaces":

SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
{
    int i, errno;
    struct new_utsname *u;

    if (len < 0)
        return -EINVAL;
    down_read(&uts_sem);

Step one: take the lock. uts_sem is a read-write semaphore that prevents others from changing the name while we're reading it.

    u = utsname();
    i = 1 + strlen(u->nodename);
    if (i > len)
        i = len;
    errno = 0;

Step two: get the current process's new_utsname object and calculate the name length. If the user-provided buffer is too small, truncate; otherwise, copy the whole thing.

    if (copy_to_user(name, u->nodename, i))
        errno = -EFAULT;
    up_read(&uts_sem);
    return errno;
}

Step three: copy the data back to user space, release the lock, and return.

There's no black magic here. The only change is that instead of reading a "global variable init_uts_ns.name", we now read from current->nsproxy->uts_ns->name.

The same logic applies to system calls like sethostname() and uname(). As long as we replace "global references" with "indirect references through the current pointer," namespace isolation automatically takes effect.

Final Details: Procfs Adaptation

Adapting the system calls isn't the end of it. Users typically view and modify the hostname through the /proc filesystem as well.

The UTS namespace must ensure that the /proc/sys/kernel/hostname file displays different content in different namespaces.

There's a table in the kernel called uts_kern_table (defined in kernel/utsname_sysctl.c) that handles exactly this. You'll notice that some entries have permissions set to 0444 (read-only, such as ostype), while hostname and domainname have permissions of 0644 (read-write).

When you read or write these proc files, the kernel calls the proc_do_uts_string() method. Inside, this function cleverly reuses the utsname() logic we just saw, ensuring it operates on the current namespace's data rather than global data.

Summary

The reason the UTS namespace is a "low-hanging fruit" is that the data it isolates is extremely simple: just two strings.

But this small change brings out an entire mechanism:

  1. We need a dedicated struct (uts_namespace) to hold the data;
  2. We need a reference count (kref) to manage its lifecycle;
  3. We need to mount a pointer in nsproxy so processes can find it;
  4. We need to modify all system calls and proc interfaces that read this data, changing them from "global reads" to "process-context reads."

Once you understand this flow, you'll find that the network namespace up next simply replaces these two strings with hundreds or thousands of network devices and routing tables — the skeleton is exactly the same.