Next: 2 Design Up: Enhancing NFS Cross-Administrative Domain Previous: Enhancing NFS Cross-Administrative Domain

1 Introduction

NFS was originally designed for use on LANs [22,17], where a single administrative entity was assumed to control all of the hosts at a site and to create unique user accounts and groups. The access model chosen for exporting NFS volumes was simple but weak. In a different administrative domain, the password database may define different users with the same UIDs; a UID clash can occur if files in one domain are accessed from another. Worse, users with local root access on their desktops or laptops can easily access files owned by any other user via NFS, simply by changing their effective UID (e.g., using /bin/su).

Therefore, NFS servers rarely export volumes outside their administrative domain. Moreover, administrators resist opening up access even to hosts within the domain, if those hosts cannot be controlled fully. Today, users and administrators must compromise in one of two ways. Either volumes are exported across administrative domains and security is compromised, or the volumes are not exported across administrative domains, preventing users from accessing their data. Neither solution is acceptable.

Although NFSv4 [19] promises strong authentication and provides a convenient framework for fixing these problems, it will not be available on many platforms and in wide use for several years. The transition from NFSv2 [22] to NFSv3 [2] took around 10 years, and it involved relatively small changes compared to those between NFSv3 and NFSv4. Even today, NFSv3 is not fully implemented on all platforms. Moreover, the NFSv4 specification does not address all of the problems that we wish to fix. Nevertheless, the techniques described in this paper can enhance NFSv4 functionality: for example, although NFSv4 optionally supports ACLs (Access Control Lists), it neither specifies how to use them to hide files nor considers the idea of hiding files at all.

Current NFS servers implement a simple form of security check for the super user, intended to stop a root user on a client host from easily accessing any file on the exported NFS volume. However, current NFS servers do not allow the restriction and mapping of any number of client credentials to the corresponding server credentials.

We present a combination of two techniques that together increase both security and convenience: range-mapping and file-cloaking. Range-mapping allows an NFS server to map any incoming UIDs or GIDs from any client to the server's own known UIDs and GIDs. This lets each site continue to control its own user and group name-spaces separately, while allowing users in one administrative domain to access their files more conveniently from another. Range-mapping is a superset of the usual UID-0 mapping and of Linux's all_squash option, which maps all UIDs and GIDs to -2.

Our second technique, file-cloaking, lets the server determine which ranges of UIDs or GIDs a client should be allowed to view or access. We define visibility as the ability of an NFS server to make some files visible under certain conditions, and accessibility as the server's ability to permit some files to be read, written, or executed. Cloaking extends normal Unix file permission checks by restricting the visibility and accessibility of users' files when those files are exported via NFS. Cloaking can also be used to enforce the NFS client options nosuid and nosgid, which prevent the execution of set-UID and set-GID files.

Range-mapping and cloaking complement each other. Together, they allow NFS servers to extend access to more clients without compromising the existing security of those files. Whereas ACLs can allow a greater degree of flexibility than cloaking, ACLs are not available on all hosts and all file systems, are not supported in NFSv2, and are partially implemented in NFSv3. Furthermore, ACLs are often implemented in incompatible ways; this is one reason why the new NFSv4 protocol specification lists ACL attributes as optional [19].

Our system is implemented in the Linux kernel-mode NFS server. No changes were made to the NFS client side and our system is compatible with existing NFS clients. This has the benefit that we can deploy our system fairly easily by changing only NFS servers.

We performed a series of general-purpose benchmarks and micro-benchmarks. Range-mapping has an overhead of at most 0.6%. File-cloaking overheads range from 72% for a large test involving 1000 cloaked users down to a 26% performance improvement under certain conditions, the latter reflecting a factor of 4.7 reduction in network I/O.

The rest of this paper is organized as follows. Section 2 describes the design of our system and includes several examples. We discuss interesting implementation aspects in Section 3. Section 4 describes the evaluation of our system. We review related work in Section 5 and conclude in Section 6.

Erez Zadok 2002-04-19