Subject: Re: newfs can't make filesystems over 1TB in size
To: None <firstname.lastname@example.org>
From: Chuck Yerkes <email@example.com>
Date: 12/10/2002 22:37:32
Quoting Steven M. Bellovin (firstname.lastname@example.org):
> In message <20021210184206.GA9121@rek.tjls.com>, Thor Lancelot Simon writes:
> >On Tue, Dec 10, 2002 at 10:35:31AM -0800, Bill Studenmund wrote:
> >I've _already_ seen this cause people to use Linux, Solaris, or FreeBSD
> >instead of NetBSD. That's not cool, particularly since people with big
> >disk arrays are typically the kind of high-profile or at least well-funded
> >sites that can bring lots of other people along with them when they choose
> >an operating system.
> Right. And given the rate at which disk densities are increasing, it
> will be common within a couple of years. Let's get the fix going now,
> so that it's ready when such disk sizes are here for everyone.
The "folks at FreeBSD" who developed UFS2 are Kirk McKusick
(and others, I'm sure). The challenge, IIRC, was that some
FreeBSD folks were hitting terabyte arrays a couple of years ago.
Obviously, machines should be handling petabytes or more sooner
rather than later.
So yeah, NetBSD could suck in UFS 2 (or hell, ReiserFS).
Is there a gentle way to fork the hooks in the kernel that
handle large file systems, and keep the old code for those of
us who don't want to live on the cutting edge?
I'm really happy keeping my file systems really, really stable;
shouldn't changes like this be for those running scrap systems?