NetBSD-Users archive


Re: Beating a dead horse



On Thu, Nov 26, 2015 at 06:45:04AM +0700, Robert Elz wrote:
>     Date:        Wed, 25 Nov 2015 15:59:29 -0600
>     From:        Greg Oster <oster%netbsd.org@localhost>
>     Message-ID:  <20151125155929.2a5f2531%mickey.usask.ca@localhost>
> 
>   |  time dd if=/dev/zero of=/home/testfile bs=64k count=32768
>   |  time dd if=/dev/zero of=/home/testfile bs=10240k count=32768
>   | 
>   | so that at least you're sending 64K chunks to the disk...
> 
> Will that really work?   Wouldn't the filesystem divide the 64k writes
> from dd into 32K file blocks, and write those to raidframe?   I doubt
> those tests would be productive.

No, the filesystem can cluster writes up to MAXPHYS (64k) for sequential
writes (and likewise read ahead up to 64k for sequential reads), even if
the filesystem block size is smaller.
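A quick way to see this in practice is to repeat the kind of `dd` timing test quoted above at different block sizes. This is only an illustrative sketch, not the exact commands from the thread: the output path, counts, and sizes here are placeholders chosen to be small and quick, whereas the original test wrote 2 GB to a RAIDframe-backed /home.

```shell
# Sketch of a sequential-write comparison (hypothetical paths/sizes).
OUT=/tmp/ddtest.out

# bs=64k matches MAXPHYS, so each write from dd can already reach the
# driver as a single 64k transfer.
time dd if=/dev/zero of="$OUT" bs=64k count=16

# bs=4k is well below a typical 32k file block, but for sequential
# writes the filesystem can still cluster dirty buffers up to MAXPHYS
# before issuing them, so throughput should be comparable.
time dd if=/dev/zero of="$OUT" bs=4k count=256

# Both runs write the same 1 MiB of data.
ls -l "$OUT"
```

If clustering is working, the two timings should be close despite the 16x difference in the size of each individual write() from dd.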

-- 
Manuel Bouyer <bouyer%antioche.eu.org@localhost>
     NetBSD: 26 years of experience will always make the difference

