Notes on Linux File-System Performance

For the past five years I have simply used ReiserFS (v3) for all my file-systems, as it was the first, and for a long time the fastest, journaling filesystem to make it into the Linux kernel. Since then, however, Ext3 has made progress, and with XFS and JFS new players hit the Linux kernel “game”, so I recently gave the alternatives a little test-run and just wanted to drop my tiny, unnumbered notes on the topic.

First of all I should mention that I have quite an excessive workload - even on my laptop’s /home partitions. This is due to my full-system development, with multiple complete root filesystems and deep ccache directories being created as part of my work with and on the T2 SDE.
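To give an idea of what that access pattern looks like, here is a crude little metadata micro-benchmark sketch I could use to compare filesystems - the path and file count are made up for illustration, the real trees are of course far larger and deeper:

#!/usr/bin/env python3
# Rough metadata micro-benchmark: create, stat and unlink many small files,
# timing each phase. Path and count are illustrative assumptions only -
# the real workload (T2 sandboxes, ccache trees) is far larger and deeper.
import os, shutil, time

BASE = "/tmp/fs-meta-test"   # hypothetical scratch directory on the FS under test
COUNT = 100_000              # number of small files to create

def timed(label, fn):
    start = time.time()
    fn()
    print(f"{label}: {time.time() - start:.1f}s")

def create():
    os.makedirs(BASE, exist_ok=True)
    for i in range(COUNT):
        with open(os.path.join(BASE, f"f{i}"), "w") as f:
            f.write("x" * 64)        # tiny payload, metadata cost dominates

def stat_all():
    for i in range(COUNT):
        os.stat(os.path.join(BASE, f"f{i}"))

def remove():
    shutil.rmtree(BASE)

timed("create", create)
timed("stat  ", stat_all)
timed("unlink", remove)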

Testing XFS (somewhere around 2.6.19) oopsed the kernel a couple of times within the first hours, so I went on to test JFS. While JFS never oopsed in my testing, it shared with XFS the same initially slightly slower performance on operations like “svn st” over the multi-thousand-file T2 working copy. JFS does not come with in-kernel log replay after crashes or power outages - the user-space fsck.jfs is needed to perform this job. This decreases in-kernel code complexity, but your distribution must support fsck’ing JFS, especially for the root (/) partition if you want to use JFS on it. As the Ext2/3 family was way too slow on the multi-million-file partition, I kept using JFS for half a year. However, its performance degraded significantly over time, to the point where it was no fun to work with anymore. Right now I am even cleaning up temporary files as well as the root filesystem sandboxes created over the past six months, in order to back up the whole partition and re-format it with ReiserFS (v3) - but unfortunately just wiping the ccache directories has already taken over an hour so far:


removing build/ccache-arm-1 ...
removing build/ccache-avr32-1 ...
removing build/ccache-x86-64 ...

and is still going on :-( with an average disk-load of just 300 kB/s and a CPU load (on the Core 2 Duo @ 2GHz) of just 1%/0% … :-(

At least the disk I/O while removing has increased to 700 kB/s now - so I have some hope that removing the various complete systems (== the next million files to delete) becomes a little more performant.
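For the curious: throughput figures like these can be sampled straight from /proc/diskstats; a tiny sketch along the following lines gives the same kind of numbers. The device name is an assumption - adjust “sda” to whatever the disk in question is:

#!/usr/bin/env python3
# Quick-and-dirty disk throughput sampler over /proc/diskstats.
# The device name and interval are assumptions for illustration.
import time

DEV = "sda"          # assumed device name
INTERVAL = 5         # seconds between samples

def sectors(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # field 5 = sectors read, field 9 = sectors written (512 bytes each)
                return int(fields[5]), int(fields[9])
    raise ValueError(f"device {dev} not found")

r1, w1 = sectors(DEV)
while True:
    time.sleep(INTERVAL)
    r2, w2 = sectors(DEV)
    read_kb = (r2 - r1) * 512 / 1024 / INTERVAL
    write_kb = (w2 - w1) * 512 / 1024 / INTERVAL
    print(f"read {read_kb:7.1f} kB/s   write {write_kb:7.1f} kB/s")
    r1, w1 = r2, w2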

I also tested reiser4, which in contrast to XFS did not oops and showed quite a performance increase over reiser (v3). However, with reiser4 not being in the mainline kernel, the still ongoing cleanup and rewriting, and its uncertain future, I would rather not put my data at that risk. Its on-the-fly compression, however, would probably be a nice benefit for the laptop HD and the many plain-text files that result from compiling whole systems.
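Just to get a feeling for what such transparent compression might save, one could compress the plain-text build leftovers off-line, e.g. with a small zlib sketch like this - the directory is purely a hypothetical example:

#!/usr/bin/env python3
# Rough estimate of what on-the-fly compression could save on plain-text
# build output: walk a tree, compress each file with zlib, report the ratio.
# The directory path is a hypothetical example.
import os, zlib

ROOT = "/home/user/build-logs"   # hypothetical directory of build output

raw = packed = 0
for dirpath, _, names in os.walk(ROOT):
    for name in names:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            continue
        raw += len(data)
        packed += len(zlib.compress(data, 6))

if raw:
    print(f"{raw/1e6:.1f} MB raw -> {packed/1e6:.1f} MB compressed "
          f"({100 * packed / raw:.0f}%)")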
