SSD Linux benchmarking: Comparing filesystems and encryption methods

Introduction

After (again) suffering under KMail’s recent sluggishness when dealing with my email spool and general Eclipse slowness when run with many plugins (such as the excellent Android ADT or the still-to-mature Scala plugin), I decided that the best upgrade for my Lenovo Thinkpad X201s laptop would be a solid state disk (SSD). Some preliminary research of web articles yielded the Crucial C300 256GB as one candidate with near top-level performance and reasonable pricing. However, simply dumping my previous Ubuntu 10.10 / Windows 7 dual-boot installation onto the new disk would not have been optimal in any case:

  • For SSDs it is crucially (pun intended) important to have correct partition and filesystem alignment to reach maximum speed.
  • As mentioned in some tuning guides, extended MSDOS partitions are likely to have alignment problems, and therefore the 5 partitions I used previously on the standard 500GB 2.5" HDD may have been problematic.
  • Going from 500GB to 256GB meant reducing the partition sizes anyway.
  • And finally, my Windows 7 installation, although rarely used in production, already showed signs of decay and a fresh installation was in order.
  • Fortunately, my Debian/Ubuntu installations rarely deteriorate significantly over time, so I opted not to re-install Ubuntu. A simple tar.bz2 archive with the correct preserve options was enough as a complete partition backup.

For reference, the test system is a Lenovo Thinkpad X201s with a dual-core Intel Core i7 CPU at 2GHz, which amounts to 4 virtual CPU cores when taking hyperthreading into account. The laptop has 8GB RAM and, after repartitioning the Crucial C300 SSD, now uses 4 partitions (as printed by parted):


Model: ATA C300-CTFDDAC256M (scsi)
Disk /dev/sda: 256060514kB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start        End          Size         Type     File system  Flags
 1      1049kB       6292505kB    6291456kB    primary  ext4
 2      6292505kB    42992665kB   36700160kB   primary  ntfs         boot
 3      42992665kB   200279065kB  157286400kB  primary
 4      200279065kB  256060162kB  55781097kB   primary  ntfs


The first partition /dev/sda1 is a small ext4 partition mounted as /boot and containing the kernel(s), initramfs image(s), Grub, and GRML as a rescue system in the form of an ISO image. Keeping this separate allows full-partition encryption and any filesystem for the root partition, even if not supported directly by Grub. This partition is not performance critical at all and does not need to be encrypted. The second partition /dev/sda2 holds Windows 7 (chain-loaded from Grub) and does not use any encryption at the moment, as I don’t keep sensitive data on the Windows system partition (although I will play with Truecrypt system encryption at some point in the near future). The third partition /dev/sda3 is the one under test and is used for the benchmark. It is roughly 155GB in size and will contain the Ubuntu root filesystem (including home directories). Finally, /dev/sda4 is a data partition shared between Windows 7 and Linux and not used for sensitive data. It currently uses NTFS, although for performance reasons I am considering formatting it as ext3/4.
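
To illustrate the chain-loading mentioned above, a GRUB 2 entry for the Windows partition looks roughly like this (a minimal sketch, not my literal configuration; the device (hd0,msdos2) corresponds to /dev/sda2 in the layout above):

    # excerpt from e.g. /etc/grub.d/40_custom (illustrative only)
    menuentry "Windows 7 (chainloaded)" {
        insmod part_msdos
        insmod ntfs
        set root=(hd0,msdos2)   # /dev/sda2, the Windows 7 system partition
        chainloader +1          # hand over to the Windows boot sector
    }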

Options for the benchmark

On every laptop, encryption of my user home directory is a must-have for the data I carry around (including e.g. the additionally-encrypted SSH and GnuPG private keys that give me access to the Debian distribution infrastructure). There are two options in current Linux distributions that offer a decent compromise between usability and performance in this regard:

  • ecryptfs with the advantages of only taking as much space for the encryption as the sum of all the files to be encrypted (i.e. no container file with a fixed size) and excellent support in Debian/Ubuntu for setting up ecryptfs-encrypted user home directories; and
  • dm-crypt with LUKS for key management, which can be used either for container files mounted via loopback (typically for user home directories, as described in one of my earlier articles; see the sketch after this list) or for the whole root partition. In this case, I chose the latter approach for two reasons: my laptop is mostly a single-user system and my main user account will be logged in whenever it is turned on, so the granularity of different encryption keys for different users is not required in this scenario; and whole-partition encryption additionally secures all files outside the home directory as well.
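
For reference, the container-file variant mentioned in the second point works roughly as follows (a minimal sketch with illustrative names such as crypt.img and cryptohome, not the exact commands from my earlier article):

    # create a 10GB container file and put a LUKS volume inside it
    dd if=/dev/urandom of=/home/user/crypt.img bs=1M count=10240
    sudo losetup /dev/loop0 /home/user/crypt.img
    sudo cryptsetup -c aes-xts-plain -s 256 luksFormat /dev/loop0
    sudo cryptsetup luksOpen /dev/loop0 cryptohome
    sudo mkfs.ext4 /dev/mapper/cryptohome
    mkdir -p /home/user/private
    sudo mount /dev/mapper/cryptohome /home/user/private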

Both options are sufficient for my use case, so the decision should be made based on raw performance: I bought an SSD to improve the laptop’s performance, and I therefore want to maximise it while still fulfilling my requirements for encryption. The first set of benchmark options therefore distinguishes between encryption methods (three options, with the unencrypted case included only for a better comparison of the overhead of encryption):

  1. plain, no encryption (this is not going to be used, but I include it for reference)
  2. dm-crypt with LUKS for key management and aes-xts-plain mode with 128 Bit keys (hardware accelerated by the kernel crypto implementation) for the whole root partition
  3. ecryptfs with 128 Bit AES and enabled filename encryption (FNEK)

For usage on an SSD, ecryptfs as a stacked filesystem has the additional advantage of supporting the TRIM command when the underlying file system does (e.g. ext4 and btrfs support TRIM), while dm-crypt does not yet pass through TRIM from the inner filesystem to the outer block device. My current workaround is to use a patched wiper.sh script to manually send TRIM commands to the SSD as hints for internal optimizations, although the Crucial C300 series of SSDs is reported not to suffer from a lack of TRIM support. If it doesn’t help, it definitely shouldn’t hurt… (And I have not yet used my SSD for long enough to be able to repeat this benchmark after some time to see if performance deteriorates without issuing TRIM commands.)
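
Until dm-crypt passes TRIM through, the manual wiper.sh run can simply be scheduled, for example weekly (a rough sketch; the script path and target device are assumptions and need to match the actual installation):

    #!/bin/sh
    # e.g. /etc/cron.weekly/ssd-trim (illustrative only)
    # send TRIM hints for unused blocks using the patched wiper.sh mentioned above
    /usr/local/sbin/wiper.sh --commit /dev/sda3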

The second set of options concerns the filesystems used below ecryptfs or within dm-crypt. Best candidates seem to be:

  1. ext4 with discard option (to issue TRIM commands)
  2. [only for comparison] ext4 with custom stride size
  3. btrfs with SSD alignment
  4. btrfs with zlib compression and SSD alignment
  5. btrfs with lzo compression and SSD alignment

Other filesystems were not considered because only ext4 and btrfs support TRIM at the time of this writing (with kernel 2.6.38), and there don’t seem to be any significant advantages to others for laptop use cases. Note that ext4 will issue TRIM only when mounted with the discard option, while btrfs will do so by default when it detects the capability; btrfs is, however, mounted with the ssd option in all cases. All filesystems are mounted with the noatime,nodiratime options to avoid unnecessary writes to the filesystem (which would decrease SSD lifetime).

File system setup

All benchmarks on this test system were done from an Ubuntu 11.04 live USB stick with the standard Ubuntu kernel 2.6.38-8-generic in the x86_64 variant (amd64). The NOOP scheduler was selected by executing

        echo noop > /sys/block/sda/queue/scheduler
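
This setting does not survive a reboot; on the installed system it can be made persistent, for example, via the kernel command line (a sketch assuming GRUB 2 as used by Ubuntu):

        # /etc/default/grub (excerpt); elevator=noop applies to all block devices
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
        # afterwards regenerate the GRUB configuration
        sudo update-grub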

For each combination of filesystem and encryption method, bonnie++ and a Linux kernel compilation were used as simple benchmarks, with the kernel compilation done both on an empty filesystem and on one filled to ca. 93% with a large file of zeros, to find any negative effects on nearly-full partitions. The “benchmarks” were executed as:


    sudo chmod 777 /mnt
    time bonnie++ -d /mnt/ -c 5 -s 16G

    do_kernel() {
        cd /mnt
        time tar xvfj /home/ubuntu/Downloads/linux-2.6.38.5.tar
        cd linux-2.6.38.5
        cp /boot/config-2.6.38-8-generic .config
        yes n | make oldconfig
        time make -j 4
        cd ..
        time rm -r linux-2.6.38.5/
    }

    do_kernel
    time dd if=/dev/zero of=/mnt/filler bs=1024k count=130000    # -> partition ca. 93% full
    do_kernel                                                     # -> partition ca. 99% full


Filesystem and encryption method combinations were created with an explicit wiper.sh step to make sure that TRIM commands were issued for the free parts of the partition immediately at creation time, giving the SSD firmware the option to optimize internally (a quick check of the drive’s TRIM support and partition alignment is sketched after this list):

  1. ext4-plain with standard options:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mount -o noatime,nodiratime,discard /dev/sda3 /mnt
  2. ext4-plain with a custom stride as suggested by a few SSD tuning guides (the custom stride option produced worse results than the defaults and was therefore not used for the encryption combinations):
    sudo mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mount -o noatime,nodiratime,discard /dev/sda3 /mnt
  3. ext4-dmcrypt with standard options:
    sudo cryptsetup -c aes-xts-plain -s 256 luksFormat /dev/sda3
    sudo cryptsetup luksOpen /dev/sda3 luksroot
    sudo mkfs.ext4 /dev/mapper/luksroot
    #sudo /home/ubuntu/wiper.sh --commit /dev/mapper/luksroot
    sudo mount -o noatime,nodiratime,discard /dev/mapper/luksroot /mnt
  4. ext4-ecryptfs with default options:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mount -o noatime,nodiratime,discard /dev/sda3 /mnt
    sudo mkdir /mnt/encrypted; sudo mkdir /mnt/plain
    sudo mount -t ecryptfs /mnt/encrypted /mnt/plain
  5. btrfs-plain without compression:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mkfs.btrfs /dev/sda3
    sudo mount -o noatime,nodiratime,ssd /dev/sda3 /mnt
  6. btrfs-dmcrypt without compression:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo cryptsetup -c aes-xts-plain -s 256 luksFormat /dev/sda3
    sudo cryptsetup luksOpen /dev/sda3 luksroot
    sudo mkfs.btrfs /dev/mapper/luksroot
    sudo mount -o noatime,nodiratime,ssd /dev/mapper/luksroot /mnt
  7. btrfs-ecryptfs without compression:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mkfs.btrfs /dev/sda3
    sudo mount -o noatime,nodiratime,ssd /dev/sda3 /mnt
    sudo mkdir /mnt/encrypted; sudo mkdir /mnt/plain
    sudo mount -t ecryptfs /mnt/encrypted /mnt/plain
  8. btrfs-zlib-plain with zlib compression for the whole partition:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mkfs.btrfs /dev/sda3
    sudo mount -o noatime,nodiratime,ssd,compress=zlib /dev/sda3 /mnt
  9. btrfs-zlib-dmcrypt with zlib compression for the whole partition:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo cryptsetup -c aes-xts-plain -s 256 luksFormat /dev/sda3
    sudo cryptsetup luksOpen /dev/sda3 luksroot
    sudo mkfs.btrfs /dev/mapper/luksroot
    sudo mount -o noatime,nodiratime,ssd,compress=zlib /dev/mapper/luksroot /mnt
  10. btrfs-zlib-ecryptfs with zlib compression for the whole partition:
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo mkfs.btrfs /dev/sda3
    sudo mount -o noatime,nodiratime,ssd,compress=zlib /dev/sda3 /mnt
    sudo mkdir /mnt/encrypted; sudo mkdir /mnt/plain
    sudo mount -t ecryptfs /mnt/encrypted /mnt/plain
  11. btrfs-lzo-dmcrypt with lzo compression for the whole partition (plain and ecryptfs were skipped because the influence of encryption is already apparent in the uncompressed vs. zlib cases):
    sudo mkfs.ext4 /dev/sda3
    sudo /home/ubuntu/wiper.sh --commit /dev/sda3
    sudo cryptsetup -c aes-xts-plain -s 256 luksFormat /dev/sda3
    sudo cryptsetup luksOpen /dev/sda3 luksroot
    sudo mkfs.btrfs /dev/mapper/luksroot
    sudo mount -o noatime,nodiratime,ssd,compress=lzo /dev/mapper/luksroot /mnt
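
Before running these combinations, it is worth double-checking that the drive actually advertises TRIM support and that the partitions are properly aligned (a minimal sketch; start sectors divisible by 2048 correspond to 1MiB alignment):

    # look for "Data Set Management TRIM supported" in the identify data
    sudo hdparm -I /dev/sda | grep -i trim
    # partition start sectors (in 512-byte units); divisible by 2048 means 1MiB-aligned
    cat /sys/block/sda/sda[1-4]/start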

Results

 

Summary of benchmark results (the three values for each test are the real, user, and sys times as reported by time):

ext4 plain
    bonnie++:               5m31.439s / 0m3.890s / 1m10.010s
    untarbz2 (empty):       0m22.293s / 0m21.610s / 0m2.710s
    compile (empty):        41m49.204s / 131m24.910s / 12m55.250s
    remove (empty):         0m3.698s / 0m0.080s / 0m3.300s
    filler file creation:   11m6.336s / 0m0.080s / 2m37.820s
    untarbz2 (filled):      0m22.924s / 0m22.590s / 0m2.690s
    compile (filled):       41m54.401s / 130m32.100s / 13m11.580s
    remove (filled):        0m6.672s / 0m0.120s / 0m4.070s

ext4 custstride plain
    bonnie++:               7m21.210s / 0m3.250s / 1m9.250s
    untarbz2 (empty):       0m22.888s / 0m22.410s / 0m2.610s
    compile (empty):        41m47.584s / 131m12.230s / 13m2.450s
    remove (empty):         0m10.987s / 0m0.080s / 0m4.320s
    filler file creation:   10m57.498s / 0m0.220s / 2m17.090s
    untarbz2 (filled):      not tested
    compile (filled):       not tested
    remove (filled):        not tested

ext4 dmcrypt
    bonnie++:               6m31.526s / 0m2.010s / 0m56.550s
    untarbz2 (empty):       0m24.559s / 0m22.530s / 0m2.880s
    compile (empty):        41m13.000s / 134m55.380s / 13m22.790s
    remove (empty):         0m5.267s / 0m0.110s / 0m4.090s
    filler file creation:   11m40.738s / 0m0.230s / 2m41.920s
    untarbz2 (filled):      0m25.176s / 0m22.600s / 0m2.730s
    compile (filled):       41m28.321s / 135m21.460s / 13m48.630s
    remove (filled):        0m5.325s / 0m0.080s / 0m4.280s

ext4 ecryptfs
    bonnie++:               14m13.130s / 0m3.340s / 12m10.470s
    untarbz2 (empty):       0m26.457s / 0m24.460s / 0m13.690s
    compile (empty):        43m22.883s / 134m19.410s / 19m37.150s
    remove (empty):         1m25.489s / 0m0.290s / 0m12.490s
    filler file creation:   23m38.278s / 0m0.270s / 22m29.310s
    untarbz2 (filled):      0m28.022s / 0m24.760s / 0m14.170s
    compile (filled):       43m40.194s / 134m40.670s / 20m4.680s
    remove (filled):        1m27.743s / 0m0.340s / 0m12.250s

btrfs plain
    bonnie++:               5m15.177s / 0m2.070s / 1m46.370s
    untarbz2 (empty):       0m27.078s / 0m24.820s / 0m3.860s
    compile (empty):        41m11.315s / 135m15.670s / 13m30.190s
    remove (empty):         0m6.560s / 0m0.070s / 0m6.360s
    filler file creation:   10m7.359s / 0m0.000s / 1m49.980s
    untarbz2 (filled):      0m23.820s / 0m22.750s / 0m4.120s
    compile (filled):       41m25.578s / 135m23.840s / 14m4.780s
    remove (filled):        0m6.594s / 0m0.080s / 0m6.380s

btrfs dmcrypt
    bonnie++:               5m17.016s / 0m1.910s / 1m38.330s
    untarbz2 (empty):       0m26.963s / 0m24.360s / 0m3.950s
    compile (empty):        41m19.414s / 135m6.570s / 13m41.730s
    remove (empty):         0m6.695s / 0m0.060s / 0m6.480s
    filler file creation:   10m28.800s / 0m0.110s / 1m51.810s
    untarbz2 (filled):      0m25.950s / 0m23.080s / 0m4.190s
    compile (filled):       41m31.437s / 135m24.660s / 13m48.310s
    remove (filled):        0m6.934s / 0m0.080s / 0m6.710s

btrfs ecryptfs
    bonnie++:               15m27.150s / 0m3.830s / 13m24.810s
    untarbz2 (empty):       0m30.500s / 0m25.770s / 0m16.490s
    compile (empty):        43m48.347s / 134m25.360s / 20m41.500s
    remove (empty):         0m32.573s / 0m0.300s / 0m20.850s
    filler file creation:   27m33.130s / 0m0.290s / 24m43.330s
    untarbz2 (filled):      0m29.714s / 0m24.960s / 0m16.870s
    compile (filled):       44m10.248s / 134m21.970s / 21m31.050s
    remove (filled):        0m33.140s / 0m0.320s / 0m21.270s

btrfs zlib plain
    bonnie++:               4m48.960s / 0m2.810s / 1m34.400s
    untarbz2 (empty):       0m31.058s / 0m25.780s / 0m4.260s
    compile (empty):        43m37.871s / 137m28.460s / 13m27.920s
    remove (empty):         0m6.450s / 0m0.100s / 0m6.230s
    filler file creation:   15m30.253s / 0m0.000s / 1m32.060s
    untarbz2 (filled):      0m32.258s / 0m27.110s / 0m4.530s
    compile (filled):       43m44.289s / 135m29.930s / 14m8.860s
    remove (filled):        0m8.678s / 0m0.100s / 0m8.410s

btrfs zlib dmcrypt
    bonnie++:               5m9.068s / 0m2.810s / 1m32.840s
    untarbz2 (empty):       0m31.279s / 0m25.680s / 0m4.200s
    compile (empty):        43m8.367s / 135m1.880s / 13m34.570s
    remove (empty):         0m6.411s / 0m0.070s / 0m6.220s
    filler file creation:   16m3.178s / 0m0.940s / 1m29.700s
    untarbz2 (filled):      0m31.341s / 0m25.660s / 0m4.710s
    compile (filled):       43m51.727s / 135m5.970s / 14m7.070s
    remove (filled):        0m6.860s / 0m0.050s / 0m6.690s

btrfs zlib ecryptfs
    bonnie++:               15m24.590s / 0m3.580s / 13m33.940s
    untarbz2 (empty):       0m44.005s / 0m28.690s / 0m17.900s
    compile (empty):        45m26.734s / 134m9.170s / 20m50.550s
    remove (empty):         0m38.583s / 0m0.240s / 0m21.690s
    filler file creation:   25m8.729s / 0m0.390s / 25m2.820s
    untarbz2 (filled):      0m42.679s / 0m27.330s / 0m17.310s
    compile (filled):       45m10.735s / 134m1.000s / 20m41.810s
    remove (filled):        0m40.182s / 0m0.240s / 0m23.130s

btrfs lzo dmcrypt
    bonnie++:               4m9.260s / 0m2.760s / 1m29.140s
    untarbz2 (empty):       0m27.360s / 0m23.310s / 0m3.930s
    compile (empty):        41m29.109s / 135m1.420s / 13m35.870s
    remove (empty):         0m7.868s / 0m0.100s / 0m7.320s
    filler file creation:   4m0.141s / 0m0.250s / 1m34.880s
    untarbz2 (filled):      0m27.928s / 0m24.270s / 0m4.250s
    compile (filled):       41m41.290s / 135m15.670s / 14m4.930s
    remove (filled):        0m7.212s / 0m0.080s / 0m7.010s

“Good” values are marked green, “bad” ones red, and full logs including the detailed bonnie++ output for reference (the details are used for the recommendations/analysis below, but not reported in the summary table) are available here.

Summary

Based on these measurements, I now use dm-crypt in aes-xts-plain mode with a 128 Bit key length and btrfs with ssd alignment and compress=lzo compression for the root partition of the Ubuntu/Debian installation on my laptop SSD. It was surprising how poorly ecryptfs performed in comparison to dm-crypt; in fact, this was the reason for starting these systematic measurements in the first place. My initial approach (without systematic evaluation) was to use ecryptfs on the SSD because of its inherent TRIM support when the underlying filesystem has that capability, hoping to give the SSD firmware all the information it needs to optimize for maximum performance. However, the system felt sluggish, and especially KMail was unusable for the first 10 minutes after login, which is not at all what I expected when upgrading to an SSD. (On the previously installed HDD, I also used dm-crypt for the whole root partition, but with ext4 as the inner filesystem, as the installation was done early in 2010 when btrfs was still considered really unstable.) The measurements show that ecryptfs was indeed the culprit, and some mailing list posts indicate that, at the time of this writing, ecryptfs hits performance limits when used on fast backend storage (such as SSDs). dm-crypt does not have any such limitation and performs well on my system without any significant slowdown caused by encryption.
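
For reference, on the installed system this setup boils down to entries along the following lines (a sketch with a placeholder UUID; luksroot matches the mapping name used in the commands above):

    # /etc/crypttab -- unlock the root partition at boot
    luksroot  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks

    # /etc/fstab -- mount the decrypted volume with the options chosen above
    /dev/mapper/luksroot  /      btrfs  noatime,nodiratime,ssd,compress=lzo  0  0
    /dev/sda1             /boot  ext4   noatime,nodiratime,discard           0  2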

More specific results (some based on bonnie++ data not shown in the summary table above) are:

  • Custom stride options for filesystem creation are not helpful; it is best to use the mkfs.ext4 defaults.
  • ext4 on top of dm-crypt causes ca. 10-50% loss in bonnie benchmark, <10% loss for file unpack and remove, and no difference for compile when compared to ext4 without encryption.
  • ecryptfs on top of ext4 causes a >100% overall loss in the bonnie benchmark, little difference for compile, and a 100% loss for filling with a huge zero-filled file.
  • btrfs is roughly equivalent to ext4, with slightly higher CPU load during benchmarks (time->sys value). File meta operations seem slightly slower, while file access is slightly faster, and filling is 10% faster (huge single file) with less CPU load.
  • btrfs on top of dm-crypt incurs <5% loss in all benchmarks, which is not significant. In the bonnie benchmark, btrfs-dmcrypt performs differently from ext4-dmcrypt, but overall slightly faster than with ext4, filling is ca. 10% faster (huge single file). Therefore, btrfs on top of dm-crypt is a good option for encrypted filesystems on a (laptop) SSD.
  • ecryptfs on top of btrfs produces a bonnie benchmark that is significantly slower than plain or with dmcrypt and slightly (10-20%) slower than with ext4-ecryptfs, and filling is nearly 200% slower.
  • btrfs with zlib compression and without encryption causes the bonnie benchmark to be marginally faster than on plain btrfs without compression, while file creation is slower (compilation 5% slower, filling 50% slower) than on plain btrfs. This slowdown seems tolerable, given the potential space savings on (limited) SSDs.
  • btrfs with zlib compression on top of dm-crypt is insignificantly slower than btrfs-zlib-plain, and we again see that dmcrypt causes virtually no overhead. The bonnie benchmark is even slightly faster than on btrfs-dmcrypt (without compression), but compile is slower and filling is ca. 50% slower (though naturally with significant space savings for the zero-filled file!).
  • ecryptfs on top of btrfs with zlib compression is (currently) the slowest option, with compile being 10% slower than btrfs-plain and bonnie 200% slower. This slowdown is expected, because the files created by ecryptfs for the “lower” filesystem are encrypted and therefore cannot be compressed, incurring the performance overhead of compression/decompression without any gain in I/O transfer size.
  • btrfs with lzo compression on top of dm-crypt causes the bonnie benchmark to be faster than with btrfs-zlib-dmcrypt, and comparable to ext4-plain: btrfs-lzo is faster on block operations, but slower on character operations. Compilation with btrfs-lzo is marginally slower, with metadata operations (create/remove) being slightly slower, but fill with zero-byte file is significantly faster (50%). Therefore, btrfs on top of dm-crypt with lzo compression is currently the best option for my personal use case (with Ubuntu kernel 2.6.38-8-generic on x86_64 and using a Crucial C300 256GB SSD).
René Mayrhofer