5. Evaluation

When evaluating the file systems we have built, we concentrated mostly on the more complex ones: Wrapfs and its derivative Cryptfs. To test the stability of these two, we ran two concurrent loops of the tests described below for over two weeks. We verified that no errors occurred, that the system remained operational, and that none of the file systems involved became corrupted.

In the following sections we evaluate the performance of our file systems, and then their portability. Since each of these file systems is layered on top of others, our performance measurements were aimed at identifying the overhead that each layer adds. The main goal was to show that the overhead imposed by stacking is small and comparable to that of other stacking work [8,23].

   
5.1 Wrapfs

For most of our tests, we included figures for a native disk-based file system because disk hardware performance can be a significant factor; these figures serve as the baseline against which the other file systems are compared. We also included figures for Wrapfs (our full-fledged stackable file system) and for lofs (the simpler, low-overhead loopback file system) as a basis for evaluating the cost of stacking. When using lofs or Wrapfs, we mounted them over a local disk-based file system.

For testing Wrapfs, we chose as our performance measure a full build of Am-utils [29], a new version of the Berkeley Amd automounter. The test auto-configures the package and then builds it. Only the Am-utils sources and the binaries built from them resided on the file system under test; the compiler and other tools were left outside it. The configuration stage runs several hundred (600-700) small tests, many of which are small compilations and executions. The build stage compiles about 50,000 lines of C code spread among several dozen files and links about a dozen binaries. The whole procedure contains a fair mix of CPU-bound, I/O-bound, and file system operations: many writes, binaries executed, small files created and unlinked, many reads and lookups, and a few directory and symbolic link creations. We felt this test was a more realistic measure of overall file system performance, and would give users a better feel for the impact Wrapfs might have on their workstations. For each file system measured, we ran 12 successive builds on a quiet system, measured the elapsed time of each run, removed the first measure (cold cache), and averaged the remaining 11 measures. The results are summarized in Table 2. The standard deviation for the results reported in this section did not exceed 0.8% of the mean. Finally, there is no native lofs for FreeBSD (and the nullfs available there is not fully functional, as discussed in Section B.2).
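To make the averaging procedure concrete, the following user-level C sketch computes the mean and the relative standard deviation of a set of build times after discarding the cold-cache run. The twelve elapsed times are illustrative placeholders, not measurements from Table 2.

#include <math.h>
#include <stdio.h>

#define NRUNS 12

int main(void)
{
    /* elapsed build times in seconds; values are illustrative only */
    double elapsed[NRUNS] = {
        1267.4, 1242.9, 1241.8, 1243.0, 1242.1, 1241.5,
        1242.6, 1243.3, 1241.9, 1242.4, 1242.8, 1242.2
    };
    double sum = 0.0, sumsq = 0.0, mean, sd;
    int i, n = NRUNS - 1;           /* skip elapsed[0]: cold cache */

    for (i = 1; i < NRUNS; i++)
        sum += elapsed[i];
    mean = sum / n;

    for (i = 1; i < NRUNS; i++)
        sumsq += (elapsed[i] - mean) * (elapsed[i] - mean);
    sd = sqrt(sumsq / (n - 1));

    printf("mean %.1f sec, std dev %.2f%% of mean\n",
           mean, 100.0 * sd / mean);
    return 0;
}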


 
Table 2: Time (in seconds) to build a large package on a given platform, inside a specific file system. The percentage lines show the overhead difference between some file systems.

                       SPARC 5               Intel P5/90
File System      Solaris   Linux      Solaris   Linux     FreeBSD
                 2.5.1     2.0.34     2.5.1     2.0.34    3.0
ext2/ufs/ffs     1242.3    1097.0     1070.3     524.2     551.2
lofs             1251.2    1110.1     1081.8     530.6       n/a
wrapfs           1310.6    1148.4     1138.8     559.8     667.6
cryptfs          1608.0    1258.0     1362.2     628.1     729.2
crypt-wrap        22.7%      9.5%      19.6%     12.2%      9.2%
nfs              1490.8    1440.1     1374.4     772.3     689.0
cfs              2168.6    1486.1     1946.8     839.8     827.3
cfs-nfs           45.5%      3.2%      41.6%      8.7%     20.1%
crypt-cfs         34.9%     18.1%      42.9%     33.7%     13.5%

First, we evaluate the performance impact of stacking a file system. Lofs is only 0.7-1.2% slower than the native disk-based file system. A single Wrapfs layer adds an overhead of 4.7-6.8% over the native file system on Solaris and Linux, which is comparable to the 3-10% degradation previously reported for null-layer stackable file systems [8,23]. On FreeBSD, however, Wrapfs adds an overhead of 21.1% compared to FFS, because limitations in nullfs forced us to use synchronous writes exclusively. Wrapfs is more costly than lofs because it stacks over every vnode and keeps its own copies of data, while lofs stacks only on directory vnodes and passes all other vnode operations to the lower level verbatim.
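The difference between these two stacking styles can be sketched at user level. In the simplified model below (the names and structure are illustrative, not the actual lofs or Wrapfs sources), the lofs-style layer forwards a read verbatim, while the Wrapfs-style layer first copies the lower level's data into a private page and then decodes it for the caller; the extra copy and per-page work account for the added overhead.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* the "lower" file system: fills the caller's buffer with data */
static void lower_read(char *buf, size_t len)
{
    memset(buf, 'a', len);
}

/* lofs style: pass the operation through to the lower level verbatim */
static void lofs_read(char *buf, size_t len)
{
    lower_read(buf, len);
}

/* Wrapfs style: keep a private copy of the lower level's data,
 * then decode it into the caller's buffer page by page */
static void wrapfs_read(char *buf, size_t len)
{
    char page[PAGE_SIZE];
    size_t i;

    lower_read(page, len);      /* lower data, possibly encoded */
    for (i = 0; i < len; i++)
        buf[i] = page[i];       /* a decode_page() would go here */
}

int main(void)
{
    char buf[PAGE_SIZE];

    lofs_read(buf, sizeof buf);
    wrapfs_read(buf, sizeof buf);
    printf("both reads returned %zu bytes\n", sizeof buf);
    return 0;
}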

   
5.2 Cryptfs

To measure the performance of Cryptfs and CFS [2], we performed the same tests we ran for Wrapfs, described in Section 5.1. These results are also summarized in Table 2, and again the standard deviation did not exceed 0.8% of the mean.

We used Wrapfs as the baseline for evaluating the performance impact of the encryption algorithm: the only difference between Wrapfs and Cryptfs is that the latter encrypts and decrypts data and file names. The line marked "crypt-wrap" in Table 2 shows the percentage difference between Cryptfs and Wrapfs for each operating system. Cryptfs adds an overhead of 9.2-22.7% over Wrapfs. This overhead is significant but unavoidable: it is the cost of the Blowfish encryption code, which, while designed as a fast software cipher, is still CPU intensive.
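The percentage lines in Table 2 are simple relative slowdowns, e.g., crypt-wrap = (cryptfs - wrapfs) / wrapfs. The short program below reproduces two of the percentages from the SPARC 5/Solaris column of Table 2.

#include <stdio.h>

/* relative slowdown of one build time over another, in percent */
static double overhead(double slower, double faster)
{
    return 100.0 * (slower - faster) / faster;
}

int main(void)
{
    /* build times in seconds, SPARC 5/Solaris column of Table 2 */
    double wrapfs = 1310.6, cryptfs = 1608.0;
    double nfs = 1490.8, cfs = 2168.6;

    printf("crypt-wrap: %.1f%%\n", overhead(cryptfs, wrapfs)); /* 22.7 */
    printf("cfs-nfs:    %.1f%%\n", overhead(cfs, nfs));        /* 45.5 */
    return 0;
}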

Measuring the encryption overhead of CFS was more difficult. CFS is implemented as a user-level NFS file server, and we ran it using Blowfish as well. We expected CFS to run slower because of the additional context switches that take place when a user-level file server is called by the kernel to satisfy a user process request, and because of NFS V.2 protocol overheads such as synchronous writes. Although CFS is based on NFS, it does not use the operating system's NFS server code; CFS serves user requests directly to the kernel. Since NFS server code generally runs inside the kernel, the difference between CFS and NFS reflects not just encryption but also the extra context switches. However, the NFS server in Linux 2.0 runs at user level, and is thus also subject to context-switching overheads. If we ignore the fact that CFS and Linux's NFS are two different implementations and simply compare their performance, we see that CFS is 3.2-8.7% slower than NFS on Linux. This is likely the overhead of encryption in CFS. That overhead is somewhat smaller than the encryption overhead of Cryptfs because CFS is more optimized than our Cryptfs prototype: CFS precomputes large stream ciphers for its encrypted directories.

We also performed finer-grained tests on the file systems listed in Table 2, specifically reading and writing of small and large files. These tests were designed to isolate the performance differences of specific file system operations. They show that Cryptfs is anywhere from 43% to an order of magnitude faster than CFS. Since the encryption overhead is roughly 3.2-22.7%, we can assume that the rest of the difference comes from the reduction in the number of context switches. Details of these additional measurements are available in a separate report [31].

   
5.3 Other Wrapfs-Based File Systems

The other file systems we developed using Wrapfs are simple, so we did not measure them as rigorously as we did Wrapfs and Cryptfs. Only Rot13fs and Cryptfs have any significant performance impact beyond that reported for Wrapfs. We ran the same Am-utils package build on these file systems and noted that in all cases performance degraded by no more than an additional 8% over Wrapfs. We are convinced that with further optimization, these prototypes could be made to run even faster.

   
5.4 Portability

We first developed Wrapfs and Cryptfs on Solaris 2.5.1. As seen in Table 3, it took us almost a year to initially develop Wrapfs and Cryptfs together for Solaris. As we gained experience, the time to port the same file systems to a new operating system grew significantly shorter. Developing these file systems for Linux 2.0 was a matter of days to a couple of weeks, not months. The Linux 2.0 port would have been even faster had it not been for Linux's rather different vnode interface.


 
Table 3: Time to Develop and Port File Systems

File System    Solaris 2.x   Linux 2.0   FreeBSD 3.0   Linux 2.2
wrapfs         9 months      2 weeks     5 days        1 week
cryptfs        3 months      1 week      2 days        1 day
all others     ≤ 1 day       ≤ 1 day     ≤ 1 day       ≤ 1 day

 

The next port, to FreeBSD 3.0, was even faster, mostly because of the great similarity between the vnode interfaces of Solaris and FreeBSD. We recently also completed these ports for the Linux 2.2 kernel. The Linux 2.2 vnode interface changed significantly from that of the 2.0 kernel, which is why we list it as a separate porting effort. We held off on this port until the kernel became more stable (it only recently entered its final development phase).

Another metric of the effort involved in porting Wrapfs is the size of its code. Table 4 shows the total number of source lines in Wrapfs, broken down into three categories: common code that needs no porting, code that is easy to port by simple inspection of system headers, and code that is hard to port. The hard-to-port code accounts for more than two-thirds of the total and consists of the implementations of the individual vnode and VFS operations, which are operating-system specific.

 
Table 4: Wrapfs Code Size and Porting Difficulty

Porting        Solaris   Linux   FreeBSD   Linux
Difficulty     2.x       2.0     3.0       2.2
Hard           80%       88%     69%       79%
Easy           15%       7%      26%       10%
None           5%        3%      5%        11%
Total Lines    3431      2157    2882      3279

 

The difficulty of porting file systems written using Wrapfs depends on several factors. If only plain C code is used in the encoding and decoding routines, the porting effort is minimal or nonexistent. Wrapfs, however, does not prevent the user from calling any in-kernel, operating-system-specific function, and calling such functions can make porting more difficult.
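As an example of encoding code that ports unchanged, consider a rot13-style page transform of the kind Rot13fs might use (this sketch is illustrative; the actual routine in our sources may differ). It is plain C with no kernel-specific calls, so it compiles as-is on every platform listed above.

#include <stdio.h>
#include <string.h>

/* Encode one page of data in place; rot13 is its own inverse,
 * so the same routine also serves as the decoder. */
static void rot13_encode_page(char *page, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++) {
        char c = page[i];
        if (c >= 'a' && c <= 'z')
            page[i] = 'a' + (c - 'a' + 13) % 26;
        else if (c >= 'A' && c <= 'Z')
            page[i] = 'A' + (c - 'A' + 13) % 26;
    }
}

int main(void)
{
    char buf[] = "Hello, Wrapfs";

    rot13_encode_page(buf, strlen(buf));    /* encode */
    rot13_encode_page(buf, strlen(buf));    /* decode */
    printf("%s\n", buf);                    /* prints the original */
    return 0;
}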

Wrapfs can be ported to other operating systems that fulfill these prerequisites, such as AIX, HP-UX, Irix, Digital Unix, NetBSD, OpenBSD, BSDI, and SunOS 4.x. In practice, however, we found that source access to an operating system's VFS is necessary to port Wrapfs, because many kernel APIs are not known outside the kernel sources.

