Subject: Re: Veritas File System
To: None <> (Victor Escobar)
From: Jim Reid <>
List: tech-userlevel
Date: 07/07/1997 10:34:46
>>>>> "Victor" == Victor Escobar <> writes:

    Victor> I don't know exactly where file system issues fall, or
    Victor> whether there is a separate mailing list, but here goes.
    Victor> I was perusing an ancient (Feb 95) issue of BYTE magazine
    Victor> and noticed a very interesting article on a robust file
    Victor> system called Veritas (vxfs).  It has the following
    Victor> advantages over ffs and its BSD derivatives:

    Victor> * a Volume Manager accessible via either graphical,
    Victor> text-menu or command line modes.  Each physical disk is
    Victor> divided into _subdisks_ (unmanaged blocks of disk
    Victor> sectors).  You can combine one or more of these into a
    Victor> _plex_ to store live data.  A sysadmin is able to span
    Victor> these, shrink them, expand them, all while the system is
    Victor> up and running.  This is transparent to users.

It's also not a Veritas feature: it's one provided by the Logical
Volume Manager. It just happens that most sites that are stupid or
unlucky enough to use LVM use it with VxFS filesystems...
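The subdisk/plex arrangement described above can be sketched roughly
as follows. This is a toy illustration of the concept only -- the
class and method names are hypothetical, not the real VxVM interface:

```python
# Minimal sketch of the subdisk/plex idea: unmanaged runs of sectors
# combined into a plex that can be grown online. Illustrative only;
# not the actual Veritas Volume Manager API.

class Subdisk:
    """An unmanaged block of contiguous disk sectors."""
    def __init__(self, disk, start, length):
        self.disk, self.start, self.length = disk, start, length

class Plex:
    """One or more subdisks combined to store live data."""
    def __init__(self, subdisks):
        self.subdisks = list(subdisks)

    def capacity(self):
        return sum(sd.length for sd in self.subdisks)

    def grow(self, subdisk):
        # Online expansion: append another subdisk; no downtime,
        # transparent to users of the plex.
        self.subdisks.append(subdisk)

plex = Plex([Subdisk("disk0", 0, 1024), Subdisk("disk1", 0, 2048)])
plex.grow(Subdisk("disk2", 0, 512))
print(plex.capacity())  # 3584
```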

    Victor> * the _intent log_ (vxfs is a journalling file system) is
    Victor> a circular buffer that logs all pending changes to the fs.
    Victor> In the event of a power failure, a modified fsck merely
    Victor> looks in the log and rolls back and performs the pending
    Victor> operations, as opposed to examining disparate structures
    Victor> all over the drive.

True, but not the whole story. The intent log only holds details of
operations on filesystem metadata: i.e. what changes are being made
to which inodes as a result of some I/O request. It does not store
the actual I/O data: the block (sorry, extent in VxFS-speak) being
added to the file. So VxFS isn't as "crash-resistant" as the glossies
would have you believe. The filesystem metadata is OK: the intent log
can be used to roll forward or back I/Os (transactions in VxFS-speak)
that were in progress when the crash occurred. However, the user data
that was being written isn't stored. In short, the filesystem is
consistent but file contents might not be.
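The distinction above -- metadata journalled, user data not -- can be
shown with a toy model. Everything here is illustrative (the names
and structure are invented, and real VxFS transactions are far more
involved), but it captures why replay leaves metadata consistent
while file contents may be lost:

```python
# Toy model of a metadata-only intent log. The log records pending
# metadata changes before they are applied; user data is NOT logged.

class FS:
    def __init__(self):
        self.inodes = {}      # metadata: name -> size
        self.data = {}        # file contents (not journalled)
        self.intent_log = []  # pending metadata operations only

    def write(self, name, contents, crash_before_data=False):
        # 1. Log the intended metadata change before doing anything.
        self.intent_log.append(("set_size", name, len(contents)))
        if crash_before_data:
            return            # power failure: data never hit the disk
        self.data[name] = contents
        # 2. Apply the metadata change and retire the log entry.
        self.inodes[name] = len(contents)
        self.intent_log.pop()

    def replay(self):
        # fsck-style recovery: roll forward the logged metadata
        # operations instead of scanning the whole disk.
        for op, name, size in self.intent_log:
            if op == "set_size":
                self.inodes[name] = size
        self.intent_log.clear()

fs = FS()
fs.write("a", b"hello", crash_before_data=True)
fs.replay()
# Metadata is consistent, but the user data was lost in the crash.
print(fs.inodes)          # {'a': 5}
print(fs.data.get("a"))   # None
```

After replay the inode claims a 5-byte file, but the bytes themselves
never made it to disk -- exactly the gap described above.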

    Victor> * the _snapshot_ provides a valuable backup/restore
    Victor> resource.  A read-only file system is created, which
    Victor> duplicates the live file system on the main volumes.  A
    Victor> deleted file is first copied to the snapshot system before
    Victor> being deleted from the real fs and the space reallocated.

I think this is a side-effect of the VxFS implementation. The snapshot
filesystem is probably little more than the old location(s) of the
old block(s) of the file(s) which have been deleted or updated. IIRC,
the filesystem on the NetApp file servers does the same trick.
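That "old locations of the old blocks" idea is essentially
copy-on-write, and can be sketched in a few lines. Again, this is a
hypothetical illustration of the technique, not the actual VxFS (or
NetApp) snapshot code:

```python
# Copy-on-write snapshot sketch: the snapshot starts empty, and old
# copies of blocks are saved into it only when the live filesystem
# overwrites or deletes them. Illustrative names only.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block number -> contents
        self.snapshot = None        # old copies land here after snap

    def snap(self):
        self.snapshot = {}          # creating a snapshot is cheap

    def write(self, blkno, contents):
        # Preserve the old block in the snapshot area before the
        # live copy is overwritten.
        if self.snapshot is not None and blkno not in self.snapshot:
            self.snapshot[blkno] = self.blocks.get(blkno)
        self.blocks[blkno] = contents

    def read_snapshot(self, blkno):
        # Snapshot view: saved old copy if modified since the snap,
        # otherwise the (unchanged) live block.
        if self.snapshot is not None and blkno in self.snapshot:
            return self.snapshot[blkno]
        return self.blocks.get(blkno)

vol = Volume({0: b"old"})
vol.snap()
vol.write(0, b"new")
print(vol.read_snapshot(0))  # b'old'
print(vol.blocks[0])         # b'new'
```

The read-only snapshot view stays frozen at snap time while the live
filesystem moves on, which is what makes it useful for backups.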

You should also appreciate that VxFS is *huge*. When I briefly looked
at it ~5 years ago, it was ~100,000 lines of code. The Berkeley FFS is
less than a tenth that size. Personally, I don't think VxFS is worth
the cost - either in dollars or kernel overheads - for the alleged
benefits it brings. After all, how often does your enterprise-wide
server crash these days? And of those failures, how many are caused by
disk problems? What good is a "crash resistant" filesystem when the
disk drives it lives on have died?