NetBSD-Users archive


Re: NetBSD disk performance on VirtualBox



On Tue, 20 Mar 2018, 12:30 Sad Clouds, <cryintothebluesky%gmail.com@localhost> wrote:

> Hello, a few comments on your tests:
>
> - Reading from /dev/urandom could be a bottleneck, depending on how that
> random data is generated. Best to avoid this; if you need random data, use
> a benchmarking tool that can quickly generate random data on the fly.
>

Obviously. I pre-created the file and measured the transfer between two
filesystems on different disks.
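Concretely, the approach looks like this (a scaled-down sketch using /tmp paths; the actual runs used bs=1000000 count=1000 for 1 GB, with the destination on a filesystem on a second disk):

```shell
# Step 1: pre-generate random data once, up front, so the random
# number generator is out of the timed path.
dd if=/dev/urandom of=/tmp/rand.out bs=1000000 count=16

# Step 2: the timed copy reads the existing file, so only the
# filesystems and disks are being measured.
dd if=/tmp/rand.out of=/tmp/rand.copy bs=1000000
```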

>
> - Writing to ZFS can give all sorts of results, e.g. it may be doing
> compression, encryption, deduplication, etc. You'd need to disable all
> those features in order to have results comparable to a NetBSD local file
> system.
>

Ditto. Included for comparison only - see, for example, the figure for
reading from /dev/zero, which is almost instantaneous.
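For anyone repeating the ZFS test, the features in question can be inspected and turned off per dataset; a sketch, assuming the dataset is called data/testme (encryption, where supported at all, can only be set at dataset creation time, so it cannot simply be switched off here):

```shell
# Show the current settings for the dataset.
zfs get compression,dedup data/testme

# Disable compression and deduplication so the write path is
# comparable to a plain local filesystem.
zfs set compression=off data/testme
zfs set dedup=off data/testme
```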

Subsequently I did some FreeBSD tests as well, those were in line with
NetBSD.

Anyway, nothing so far explains why Martin's results are just a tad below
those of Linux, while everyone else is getting speeds 5-6 times slower.

>
> - I think by default dd does not call fsync() when it closes its output
> file; with GNU dd you need to use the conv=fsync argument, otherwise you
> could be benchmarking writes to the OS page cache instead of the virtual
> disk.
>

Right.
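For completeness, the flush can be forced like this (scaled-down sizes and /tmp paths for illustration; conv=fsync is a GNU dd extension):

```shell
# GNU dd: conv=fsync makes dd fsync() the output file before it
# exits, so the timing includes write-back to the (virtual) disk
# rather than just the OS page cache.
dd if=/dev/zero of=/tmp/fsync.out bs=1000000 count=16 conv=fsync

# Where conv=fsync is unavailable (e.g. NetBSD's native dd), a rough
# equivalent is to time the copy and an explicit sync together:
dd if=/dev/zero of=/tmp/sync.out bs=1000000 count=16 && sync
```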

>
>
>
>
> On Tue, Mar 20, 2018 at 9:20 AM, Chavdar Ivanov <ci4ic4%gmail.com@localhost> wrote:
>
>> Well, testing with a file of zeroes is not a very good benchmark - see
>> the result for OmniOS/CE below:
>> ----
>> ➜  xci dd if=/dev/zero of=out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 0.685792 secs (1458168149 bytes/sec)
>> ----
>>
>> So I decided to switch to previously created random contents and move it
>> with dd between two different disks. Here is what I get:
>> ---
>> --------Centos 7.4 -------- XFS
>> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes (1.0 GB) copied, 9.6948 s, 103 MB/s
>> ➜  xci dd if=rand.out of=/data/rand.out bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes (1.0 GB) copied, 2.49195 s, 401 MB/s
>> --------OmniOS CE --------- ZFS
>> ➜  xci dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 16.982885 secs (58882812 bytes/sec)
>> ➜  xci dd if=rand.out of=/data/testme/rand.out  bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 21.341659 secs (46856713 bytes/sec)
>> --------NetBSD-current amd64 8.99.12 ------- FFS
>> ➜  sysbuild   dd if=/dev/urandom of=rand.out bs=1000000 count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 32.992 secs (30310378 bytes/sec)
>> ➜  sysbuild dd if=rand.out of=/usr/pkgsrc/rand.out bs=1000000
>> 1000+0 records in
>> 1000+0 records out
>> 1000000000 bytes transferred in 23.535 secs (42489908 bytes/sec)
>> ----
>>
>> OmniOS/ZFS and NetBSD/FFS results are comparable; the Centos/XFS one is a
>> bit hard to explain.
>>
>> This is on the same Windows 10 host as before.
>>
>> Chavdar
>>
>> On Mon, 19 Mar 2018 at 23:16 Chavdar Ivanov <ci4ic4%gmail.com@localhost> wrote:
>>
>>> I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
>>> the same as or similar enough to the one in Centos; there was no
>>> significant difference between the two. The fastest figure came on the
>>> system disk when it was attached to an IDE controller with an ICH6
>>> chipset: about 180 MB/sec.

