Friday, April 29, 2011

[kdzepfiz] File system limits are silly

Create a filesystem that has no arbitrary limits, or whose limits are so large as to be impossible to exceed in practice (for example, 2^128 bytes or entries).
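As a rough sanity check on "impossible to exceed," consider how long it would take just to write 2^128 bytes. The 100 GB/s sustained write rate below is an assumption chosen for illustration; it is far faster than any single disk today.

    SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds

    rate = 100 * 10**9                   # assumed sustained write rate: 100 GB/s

    for bits in (64, 128):
        seconds = 2**bits / rate
        print(f"2^{bits} bytes at 100 GB/s: {seconds / SECONDS_PER_YEAR:.2e} years")

A 2^64-byte limit could be reached in under six years of continuous writing; a 2^128-byte limit would take on the order of 10^20 years, vastly longer than the age of the universe.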

Nowadays, filesystem limits are large enough that you don't hit them during a practice run of a large computation, but then you do hit them in production, or when running the full computation.  This is annoying.

At various points in my life, I've hit (on various different filesystems): the root directory file limit, the file size limit, the directory entry limit, the file name length limit (which especially hurts under multiple layers of encrypted file systems, since each layer's name encryption lengthens the name), the volume size limit, and the inode count limit.  Get rid of all of these; a filesystem should be limited only by the capacity of the physical disk.

I've also hit (on ext2) the restriction on characters in file names: "Meeting minutes 6/28" is rejected because '/' is reserved as the path separator.  Also, get rid of the Y2038 problem while we're at it.
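The Y2038 problem, for concreteness: classic Unix timestamps are signed 32-bit counts of seconds since 1970-01-01 UTC, so they run out early in 2038. A quick check:

    from datetime import datetime, timezone

    # The largest value a signed 32-bit time_t can hold is 2**31 - 1.
    last = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
    print(last)  # 2038-01-19 03:14:07+00:00; one second later, the counter overflows

Widening the on-disk timestamp field to 64 bits pushes the limit out by hundreds of billions of years.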

While it's fine to optimize for the common case, avoid pathologically poor worst-case behavior, e.g., the 'ls -l' command taking several minutes to complete on a huge directory.  Always use balanced trees; never linked lists.
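To make the 'ls -l' example concrete: a directory stored as an unsorted list forces a full scan per lookup, while any balanced structure needs only O(log n) probes. A minimal sketch, using a sorted Python list with binary search as a stand-in for an on-disk balanced tree; the million-entry directory and file names are made up for illustration:

    import bisect
    import timeit

    # A hypothetical directory of one million entries.
    names = sorted(f"file{i:07d}.dat" for i in range(1_000_000))
    target = "file0999999.dat"

    def linear_lookup(entries, name):
        # Unsorted directories force a scan of every entry: O(n).
        return any(entry == name for entry in entries)

    def tree_lookup(entries, name):
        # Binary search over sorted entries: O(log n), like a balanced tree.
        i = bisect.bisect_left(entries, name)
        return i < len(entries) and entries[i] == name

    print(timeit.timeit(lambda: linear_lookup(names, target), number=10))
    print(timeit.timeit(lambda: tree_lookup(names, target), number=10))

On a typical machine the linear scan loses by several orders of magnitude, and if each per-entry operation in a directory listing costs O(n), the whole listing degrades to O(n^2), which is how a minutes-long 'ls -l' happens.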
