We implemented this project in two places: the NFS Utilities (nfs-utils) package, version 0.3.1, and the NFS server code (both NFSv2 and NFSv3) in the Linux 2.4.4 kernel. We added 1678 lines of code to nfs-utils, an increase of 5.8% to its size. This code primarily handles parsing /etc/exports files for range-mapping and cloaking entries, packing them into export structures, and passing these structures to the kernel using the special-purpose ioctl interface of the in-kernel NFS server.
Most of the code we added was in the kernel: 1330 additional lines, an increase of 15.1% over the total size of the NFS server sources. Although this increase is substantial, the bulk of our kernel changes reside in new C source and header files and in stand-alone functions added to existing source files. This placement made the code easier to develop and test: two first-year graduate students spent a combined total of 12 man-weeks developing and testing it.
For range-mapping we faced three questions: where to do forward mapping, where to do reverse mapping, and how to get the mapping context from the export structures for each client request.
Forward mapping is done in the nfsd_setuser function, which is passed a pointer to the relevant export structure; the latter contains the information we need to perform the mapping. Implementing reverse mapping was more difficult. The best place for it was in the server's outgoing path, where it encodes file attributes into XDR structures before shipping them back to the NFS client. This is done in the encode_fattr3 routine (or encode_fattr2 for NFSv2). The request structure passed to this function contains the response packet, from which we extract the NFS file handle; the file handle in turn contains the export information we need to compute the range-mapping.
Cloaking was more challenging to implement than range-mapping because of the restriction that we modify only the server. Cloaking needs to display different directory listings to each user on the same client. Since clients cache directory contents and file attributes, we have to force the NFS clients to ignore cached information (if any) and reissue an NFS_READDIR procedure every time users list a directory. We investigated two options: (1) lock the directory, and (2) fool the client into thinking that the directory's contents changed and thus must be re-read. We chose the second option because locking the directory permanently would have serialized all access to that directory and prevented more than one NFS client from making changes to that directory (such as adding a new file).
To force the client to re-read directories, we increment the in-memory modification time (mtime) of the directory each time it is listed; we do not change the directory's actual mtime on disk. This technique has been used before to prevent client-side caching in NFS-based user-level servers [25,29,14]. NFS clients check the mtime of remote directories before using locally cached data. Since the mtime always changes, the clients re-read the directory each time and effectively discard their local cache. The mtime field has a resolution of one second, but several readdir requests sometimes arrive within the same second. We therefore had to ensure that the mtime is incremented on every listing. This has the side effect that the modification time of frequently listed directories could move into the future. In practice this was not a problem because directory-reading requests are often bursty, and between bursts the real clock has a chance to catch up to a directory's mtime that may be in the near future. Furthermore, future Linux kernels will increase the mtime resolution to microseconds, practically eliminating this problem.
We expected that forcing the clients to ignore their directory caches would reduce performance. However, if a client machine has only one user (as is the case with most personal workstations and laptops), we can allow the client to cache directory entries normally, since there is little risk that another user on that client will be able to view cached entries. We made client caching optional by adding a server-side export option called no_client_cache that, if enabled, forces the directory mtime to increase and thus prevents clients from caching directory entries. If no_client_cache is not used (the default), we do not increase the mtime and NFS clients cache directory entries normally.
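An administrator might then export the same directory differently to multi-user and single-user clients. The entries below are only a sketch: the hostnames are made up, and the exact option grammar of our nfs-utils extensions is not reproduced here, only the no_client_cache option named above.

```
# Hypothetical /etc/exports entries (illustrative syntax):
/home   shared-host(rw,no_client_cache)   # multi-user client: defeat caching
/home   laptop1(rw)                       # single-user client: cache normally
```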
Cloaking requires that some files be hidden from users, and therefore those files' names should not be sent back to the NFS client. We implemented this in the encode_entry function. Given a file's owner, group, mode bits, and the export data structures, we compute whether the file should be visible. If the file is invisible, we simply skip the XDR encoding of that file. If the file is visible, we allow access to it based on normal Unix file permissions. A user could try to look up (perhaps by guessing the name of) a file that is hidden from that user. To catch this, we also perform the cloaking check in nfsd_lookup; if the file should be invisible to the calling user, we return an error code for the lookup request.